
Tom Wallis

@tsawallis

Vision scientist. Professor at the Centre for Cognitive Science, Technical University of Darmstadt, Germany. πŸ‡¦πŸ‡ΊπŸ‡©πŸ‡ͺ. pronoun.is/he https://www.psychologie.tu-darmstadt.de/perception

4,699
Followers
1,427
Following
319
Posts
21.09.2023
Joined

Latest posts by Tom Wallis @tsawallis

@plosbiology.org is formalizing its long-standing practice of asking authors to share research code, introducing a mandatory #code-sharing policy and clarifying what is meant by code sharing.

Learn more and find guidance on best practice: plos.io/4reyX3v

09.03.2026 17:55 πŸ‘ 35 πŸ” 18 πŸ’¬ 1 πŸ“Œ 2

Last week to apply for a 3yr postdoc with @tsawallis.bsky.social, Frank JΓ€kel and myself. Deadline is March 15th hmc-lab.com/TAMPostdoc.h...

09.03.2026 11:02 πŸ‘ 17 πŸ” 13 πŸ’¬ 0 πŸ“Œ 0

🧡 Some personal news:

My new book, ON COURAGE – with Pulitzer Prize-winning journalist @juliaangwin.com – is available for pre-order NOW (out June 30).

It’s a deeply reported manual with sixteen lessons for how each of us can defy authoritarianism.

Pre-order: www.harpercollins.com/products/on-...

09.03.2026 15:03 πŸ‘ 220 πŸ” 67 πŸ’¬ 2 πŸ“Œ 6
06.03.2026 17:45 πŸ‘ 34 πŸ” 10 πŸ’¬ 1 πŸ“Œ 2

ok, internet friends. let's say you have a big grant proposal or other writing project requiring you to put your head down and crank out a shit ton of text. what music are you blasting

this is a judgment-free zone, I want all your best recs no matter how embarrassing

26.02.2026 00:31 πŸ‘ 22 πŸ” 1 πŸ’¬ 54 πŸ“Œ 2
Album cover for The Bird of a Thousand Voices by Tigran Hamasyan.


03.03.2026 08:19 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Album cover of Ultrahang, album by Chris Potter’s Underground group


I generally can’t listen to music with lyrics while writing. Modern jazz or death metal are my go-tos.

03.03.2026 08:14 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Postdoc position -- Social Learning and Cultural Evolution, posted on March 2, 2026. We are currently seeking a highly motivated individual...

πŸš€ Postdoc Alert! Are you passionate about social learning & cultural evolution? @dominikdeffner.bsky.social & I have a 3-year position with freedom to develop your research and work on cutting-edge multiplayer and immersive experiments. Apply by March 30! hmc-lab.com/SocialLearni... Pls share πŸ™

02.03.2026 10:45 πŸ‘ 59 πŸ” 62 πŸ’¬ 2 πŸ“Œ 3
Cookin' On 3 Burners Official | Celebrate the future sound of yesterday! Cookin’ On 3 Burners are Australia’s hardest hitting Hammond Organ Trio joining the dots between Deep Funk, Raw Soul, Organ Jazz & Boogaloo.

At least you can still be β€œcookin on N burners”, although burner…

www.cookinon3burners.com

28.02.2026 09:30 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

New paper on a long-shot I've been obsessed with for a year:

How much are AI reasoning gains confounded by expanding the training corpus 10000x? How much LLM performance is down to "shallow" generalisation (approximate pattern-matching to highly-related training data)?

t.co/CH2vP0Y7OF

27.02.2026 17:25 πŸ‘ 63 πŸ” 16 πŸ’¬ 1 πŸ“Œ 2

LLMs should be a private cognitive tool, like a calculator, which is not currently possible under our existing model of corporate AI. Crucially, they do not think and have no agency, again in the same way as a calculator

26.02.2026 09:45 πŸ‘ 33 πŸ” 5 πŸ’¬ 3 πŸ“Œ 3
Four panels showing how an image of a kiwi fruit can be filtered to understand contrast energy.


So @reubenrideaux.bsky.social and I decided to run an image-processing workshop at this year's EPC/APCV. We will be teaching people how to compute the contrast energy of kiwi fruit, I guess. Sign up now: visualneuroscience.auckland.ac.nz/epc-apcv-2026/

@expsyanz.bsky.social

25.02.2026 01:35 πŸ‘ 15 πŸ” 6 πŸ’¬ 1 πŸ“Œ 1
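[Editor's aside: the workshop post above mentions computing the "contrast energy" of an image. As a hedged sketch only — not the workshop's actual materials — one common operationalisation is the sum of squared responses after band-pass filtering, e.g. with a difference-of-Gaussians filter. The function names `bandpass_dog` and `contrast_energy` and the filter parameters below are illustrative assumptions.]

```python
import numpy as np

def bandpass_dog(image, sigma_c=1.0, sigma_s=2.0):
    """Difference-of-Gaussians band-pass filter, applied in the Fourier domain.

    A Gaussian with spatial s.d. sigma has transfer function
    exp(-2 * pi^2 * sigma^2 * f^2); subtracting a broader surround
    Gaussian from a narrow centre Gaussian passes a band of
    intermediate spatial frequencies.
    """
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]  # vertical frequencies (cycles/pixel)
    fx = np.fft.fftfreq(w)[None, :]  # horizontal frequencies
    f2 = fx**2 + fy**2
    centre = np.exp(-2 * (np.pi**2) * (sigma_c**2) * f2)
    surround = np.exp(-2 * (np.pi**2) * (sigma_s**2) * f2)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * (centre - surround)))

def contrast_energy(image):
    """Sum of squared band-pass responses of the mean-subtracted image."""
    filtered = bandpass_dog(image - image.mean())
    return float(np.sum(filtered**2))
```

A uniform image has zero contrast energy under this definition (subtracting the mean leaves nothing to filter), while any luminance structure in the passed frequency band contributes positively.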

bsky.app/profile/tsaw...

I was already an LLM skeptic, and then this happened to my student and me.

25.02.2026 05:48 πŸ‘ 1 πŸ” 0 πŸ’¬ 2 πŸ“Œ 0
Pace of ecology drives the tempo of visual perception across the animal kingdom - Nature Ecology & Evolution Using phylogenetic comparative methods across 237 species from disparate phyla, the authors show that species with fast-paced ecologies have higher temporal resolution of perception.

Pace of ecology drives the tempo of visual perception across the animal kingdom www.nature.com/articles/s41... - new paper with Clinton Haarlem, Cliodhna Hynes and colleagues

Different species see the world as fast as they need to...

24.02.2026 10:40 πŸ‘ 87 πŸ” 37 πŸ’¬ 4 πŸ“Œ 1

Anthropic was supposed to be the "good" one, right?

23.02.2026 13:36 πŸ‘ 72 πŸ” 23 πŸ’¬ 4 πŸ“Œ 0

Why do children struggle more than adults to recognise objects in cluttered scenes? Our new paper looks at the development of visual acuity and crowding across childhood, and the way the visual system fine-tunes our ability to see detail: www.nature.com/articles/s41...

23.02.2026 17:11 πŸ‘ 36 πŸ” 17 πŸ’¬ 2 πŸ“Œ 1

Come work with @tsawallis.bsky.social, Frank JΓ€kel and myself! 3 yr Postdoc on category learning w/ structured, program-like representations. Funded by the www.theadaptivemind.de Excellence Cluster! Deadline is Mar 15th, details πŸ‘‡ Please share widely πŸ™

23.02.2026 12:30 πŸ‘ 15 πŸ” 9 πŸ’¬ 0 πŸ“Œ 1
Image shows a code example for the Python package introduced in this post.

The code is as follows (for screen readers):
import mitsuba_scene_description as msd
import mitsuba as mi

mi.set_variant("llvm_ad_rgb")

# Define components
diffuse = msd.SmoothDiffuseMaterial(reflectance=msd.RGB([0.8, 0.2, 0.2]))
ball = msd.Sphere(
    radius=1.0,
    bsdf=diffuse,
    to_world=msd.Transform().translate(0, 0, 3).scale(0.4),
)
cam = msd.PerspectivePinholeCamera(
    fov=45,
    to_world=msd.Transform().look_at(
        origin=[0, 1, -6], target=[0, 0, 0], up=[0, 1, 0]
    ),
)
integrator = msd.PathTracer()
emitter = msd.ConstantEnvironmentEmitter()

# builder pattern
scene = (
    msd.SceneBuilder()
    .integrator(integrator)
    .sensor(cam)
    .shape("ball", ball)
    .emitter("sun", emitter)
    .build()
)

# or 
scene = msd.Scene(
    integrator=integrator,
    sensors=cam,  # also accepts a list for multi-sensor setups
    shapes={"ball": ball},
    emitters={"sun": emitter},
)

mi.load_dict(scene.to_dict())
# scene.to_dict() produces:
{'ball': {'bsdf': {'reflectance': {'type': 'rgb', 'value': [0.8, 0.2, 0.2]},
                   'type': 'diffuse'},
          'radius': 1.0,
          'to_world': Transform[
  matrix=[[0.4, 0, 0, 0],
          [0, 0.4, 0, 0],
          [0, 0, 0.4, 1.2],
          [0, 0, 0, 1]],
  ...
],
          'type': 'sphere'},
 'integrator': {'type': 'path'},
 'sensor': {'fov': 45,
            'to_world': Transform[...],
            'type': 'perspective'},
 'sun': {'type': 'constant'},
 'type': 'scene'}


G'day!
I've just published a new version of mitsuba-scene-description to GitHub and PyPI: github.com/pixelsandpoi...

I've changed the generation process, so you no longer need to manually clone and build the API yourself. The Mitsuba plugin API will now be generated during package build.

1/x

23.02.2026 10:09 πŸ‘ 4 πŸ” 2 πŸ’¬ 1 πŸ“Œ 0
The Adaptive Mind – Uncovering how the human mind adapts to ever-changing conditions

The position is for three years and is funded by the excellence cluster The Adaptive Mind (theadaptivemind-excellencecluster.de).

Please repost!

23.02.2026 08:53 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

**Postdoc position in human category learning**

@thecharleywu.bsky.social, Frank JΓ€kel and I are seeking a postdoctoral fellow to lead a joint project on human category learning at the Centre for Cognitive Science @tuda.bsky.social.

www.career.tu-darmstadt.de/tu-darmstadt...

23.02.2026 08:53 πŸ‘ 39 πŸ” 28 πŸ’¬ 1 πŸ“Œ 1

β€œTraining a human takes 20 years of food”

22.02.2026 12:54 πŸ‘ 2500 πŸ” 625 πŸ’¬ 41 πŸ“Œ 14

Anyway. Kudos to @niallw.bsky.social for noticing the error, and my doctoral student @ben.graphics for quickly and transparently fixing it.

20.02.2026 10:24 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Indeed. I’m already an LLM skeptic for moral and pedagogical reasons, but I also haven’t adopted a hard ban or similar. If you get stuff-ups like this in such an innocuous case, it makes me extremely skeptical of larger and more important uses.

20.02.2026 10:24 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

@niallw.bsky.social pointed out an error in a preprint of ours that contained a fabricated citation. The cause: we used Claude to fix an arxiv upload / compile error. It decided the best way to do that was to remove an actual citation and replace it with a fabricated one.

bsky.app/profile/ben....

20.02.2026 07:39 πŸ‘ 12 πŸ” 5 πŸ’¬ 2 πŸ“Œ 1

Are you certain it’s not also altering the underlying data for β€œnicer” display?

20.02.2026 07:33 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
PNAS Proceedings of the National Academy of Sciences (PNAS), a peer reviewed journal of the National Academy of Sciences (NAS) - an authoritative source of high-impact, original research that broadly spans...

Bots have made their way to Prolific experiments. Our lab has stopped online testing of adults entirely now for this reason - we want to know if what we study is real. Probably data collected 2-3 years ago are ok, but moving forward we just can't know. www.pnas.org/doi/10.1073/...

19.02.2026 15:14 πŸ‘ 170 πŸ” 98 πŸ’¬ 6 πŸ“Œ 11

Unfortunately, I left Claude fixing arXiv compilation errors. Apparently, it changed a citation in the process (one that was right in v1 of the paper). I am now reviewing all the citations again by hand and preparing a new version that fixes this. I will also post it here:
1/x

19.02.2026 09:48 πŸ‘ 7 πŸ” 1 πŸ’¬ 3 πŸ“Œ 3

Great! Yes, I see loads of potential.

17.02.2026 06:28 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Sorry for the wait (I was in China for 4 weeks over the new year), but I finally managed to update the preprint with more results, and the code is now available at: github.com/ag-perceptio...

If you run into issues, let me know.

16.02.2026 17:45 πŸ‘ 4 πŸ” 2 πŸ’¬ 1 πŸ“Œ 1

Wow, hello Bluesky! We're the Brisbane Experimental Psychology Student Initiative (BEPSI)

We host monthly meetings with fresh research (cog, dev, comp, clin, neuro, forensic, psychophys) followed by wholesome networking (summer lawn bowls, so good)

1st meeting Weds 27th Feb 4pm - more soon! πŸ§ πŸ€—

16.02.2026 06:24 πŸ‘ 12 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0