I'm so so sorry to hear, Guido. Sending all my love to you and Sara's family ❤️
Had the privilege of being Will's first PhD student and can confirm the 3.5 years of supervisor training I put him through were highly successful. Couldn't recommend this opportunity enough
10 PhD positions at JLU Giessen in the new Research Training Group "PIMON"! We will explore how humans perceive and interact with materials and objects in natural environments.
More information on the project, the PIs, and how to apply here:
www.uni-giessen.de/de/ueber-uns...
Please share!
Why do children struggle to recognise objects in cluttered scenes more than adults? Our new paper looks at the development of visual acuity and crowding across childhood, and the way the visual system fine-tunes our ability to see detail: www.nature.com/articles/s41...
**Postdoc position in human category learning**
@thecharleywu.bsky.social, Frank Jäkel and I are seeking a postdoctoral fellow to lead a joint project on human category learning at the Centre for Cognitive Science @tuda.bsky.social.
www.career.tu-darmstadt.de/tu-darmstadt...
Wow, hello Bluesky! We're the Brisbane Experimental Psychology Student Initiative (BEPSI)
We host monthly meetings with fresh research (cog, dev, comp, clin, neuro, forensic, psychophys) followed by wholesome networking (summer lawn bowls, so good)
1st meeting Weds 27th Feb 4pm - more soon! 🧠🤖
📢 Workshop announcement.
We are super excited to announce the workshop Perceptual Inferences, from philosophy to neuroscience, organized by Alexander Schütz and Daniel Kaiser.
📍 Rauischholzhausen Castle, near Marburg, Germany
🗓️ June 8 to 10, 2026.
1/4
Check out our latest preprint: In naturalistic (virtual) scenes, we tested how spatial (proximity to anchor) and cognitive (scene semantics) factors influence allocentric representations of local target objects in a memory-guided placement task:
osf.io/preprints/ps...
Super happy to announce that our Research Training Group "PIMON" is funded by the @dfg.de! Starting in October, we will have exciting opportunities for PhD students who want to explore object and material perception & interaction in Gießen @jlugiessen.bsky.social! Just look at this amazing team!
View from your office onto Giessen and surrounding villages.
Please repost! I am looking for a PhD candidate in the area of Computational Cognitive Neuroscience to start in early 2026.
The position is funded as part of the Excellence Cluster "The Adaptive Mind" at @jlugiessen.bsky.social.
Please apply here by Nov 25:
www.uni-giessen.de/de/ueber-uns...
From line drawings to scene perception: our new review argues for moving beyond experimenter-driven manipulations toward participant-driven approaches to reveal what's in our internal models of the visual world.
royalsocietypublishing.org/doi/10.1098/...
As this is my final PhD paper to be published, I want to give a special shout-out to my supervisors, @willjharrison.bsky.social and Jason. There's no way to do them justice in 300 characters, so I've attached an excerpt from my thesis acknowledgements that still rings true today.
11/11
As always, I couldn't have done it without the team: @tsawallis.bsky.social, Jason Mattingley, and @willjharrison.bsky.social. This project (almost) never felt like work and was (mostly) pure joy, in no small part because of them.
10/11
Overall, our results suggest that similarity judgements for naturalistic stimuli are well predicted by very simple image features, without needing to rely on more complex visual properties.
9/11
To be clear, we're not suggesting that these are the two magic features that explain all judgements for naturalistic images. We by no means conducted an exhaustive investigation of all possible predictors, and welcome the possibility that others could be just as, or even more, useful.
8/11
If we distort the patches (e.g., by thresholding pixel values or reducing the patch to edges), we can still predict participants' responses.
In this case, after heavily altering the pixel values, only the structural similarity predictor remained significant.
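For a rough sense of what such a distortion does, here's a minimal sketch of a thresholding manipulation (the `binarize` name and the median cut-off are my assumptions, not the exact procedure from the paper):

```python
import numpy as np

def binarize(patch):
    """Threshold a patch at its median luminance, discarding graded pixel values."""
    return (patch > np.median(patch)).astype(float)

patch = np.array([[0.1, 0.9],
                  [0.4, 0.6]])
binarize(patch)  # pixels above the median become 1.0, the rest 0.0
```

After this kind of manipulation the original luminance values are gone, which is why a pixel-wise luminance predictor would no longer be expected to carry much information.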
7/11
We computed similarity using two metrics, comparing the standard to the target and foil separately:
1. Pixel-wise luminance RMS error - how well matched the luminance values are
2. Phase-invariant structural similarity between the patches - how well matched the amplitude spectra are
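A minimal sketch of what these two metrics could look like (function names and exact normalisation are my assumptions, not the paper's implementation):

```python
import numpy as np

def rms_error(a, b):
    """Pixel-wise luminance RMS error: lower means better-matched luminance."""
    return np.sqrt(np.mean((a - b) ** 2))

def amplitude_spectrum_error(a, b):
    """Phase-invariant comparison: RMS error between amplitude spectra only.
    Shifting content changes the phase spectrum but not the amplitude spectrum,
    so this compares which spatial frequencies are present, not where."""
    return rms_error(np.abs(np.fft.fft2(a)), np.abs(np.fft.fft2(b)))
```

Because a circular shift changes only the phase spectrum, `amplitude_spectrum_error` is zero for a shifted copy of a patch even when `rms_error` is large, which is what makes it phase-invariant.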
6/11
We fit a GLMM to participants' responses and found they could be explained by assuming participants selected the patch most similar to the standard.
Here, average participant responses are the datapoints and solid lines show the GLMM predictions.
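The choice rule being formalised can be sketched as a logistic function of the similarity difference (a toy sketch only: the actual GLMM includes more structure, such as per-participant effects, and `beta` is a hypothetical sensitivity parameter):

```python
import numpy as np

def p_choose_target(sim_target, sim_foil, beta=1.0):
    """Logistic choice rule: the probability of picking the target grows with
    how much more similar it is to the standard than the foil is."""
    return 1.0 / (1.0 + np.exp(-beta * (sim_target - sim_foil)))
```

When the two patches are equally similar to the standard the model predicts a coin flip, and as the similarity difference grows the prediction approaches a certain choice.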
5/11
Oh, and the foil was always taken from the same spatial location as the target, but from its own photo.
4/11
The standard patch was always taken from the centre of a broader photo, with the target taken from one of 33 possible locations that varied in distance and azimuth offset from the standard.
Note: participants were never told about these conditions and never saw the full photos
3/11
We had participants tell us which of two image patches they thought was most likely to belong to the same scene as a preceding standard.
One patch (the target) always came from the same broader photograph as the standard, and the other (the foil) came from an entirely different photograph.
2/11
The final bit of work from my PhD just got published at JOV! We looked at similarity judgements made for naturalistic image patches, and whether these are predicted by simple image statistics… (spoiler: yep!)
Link to paper: doi.org/10.1167/jov....
1/11
Thanks for reading if you've made it this far! I've had to skip a lot of the details, so if you're interested in learning more, feel free to have a read of the paper - here's the link again:
www.nature.com/articles/s41...
10/10
Massive shout out to the team: @reubenrideaux.bsky.social, Jason Mattingley & @willjharrison.bsky.social.
This project was the problem child of my PhD and has frequently come face to face with abandonment. Its publication truly wouldn't have been possible without their unwavering support.
9/10
Overall, we were pretty surprised and discuss some of the potential reasons for our basically null result in the paper. Ultimately, however, we feel our results strongly demonstrate the need for more thorough exploration of adaptation under more naturalistic viewing conditions.
8/10
Overall biases didn't shift significantly in response to the adaptors. Remember the standard white bar shown with comment #3? Its orientation relative to the adaptor is plotted on the x-axis. We also looked at whether the biases changed over the course of the session - short answer: nope.
7/10
After collecting a bunch of data (and then re-collecting it after finding an error), counterbalancing the combinations of adaptor orientations, subtracting out individuals' baseline biases, and fitting the data with a GLMM, we found… not a whole lot…
6/10
For Session 3, participants saw the second half of Casablanca with a different adaptor orientation to what they experienced in the first session. Across sessions, each participant saw one cardinal and one oblique adaptor. Otherwise, the clip/trial structure was the same as session 2.
5/10
In session 2, participants saw the first half of Casablanca - we assigned them to experience one of the four potential adaptor orientations above for that session. The movie was shown in 30 second clips, separated by 5 trials of the same perceptual task as in session 1.
4/10
The study itself had participants come in for three sessions - in the first, we just got their baseline performance at our perceptual task: is the central grating tilted to the right or left of the peripheral standard white bar?
3/10