
Emily A-Izzeddin

@emilya-izzeddin

Postdoccing with the Flemingos at JLU Giessen. Doing vision things 👀🦩🪼🦘 (she/her)

116 Followers · 163 Following · 33 Posts · Joined 21.10.2024

Latest posts by Emily A-Izzeddin @emilya-izzeddin

I'm so so sorry to hear, Guido. Sending all my love to you and Sara's family ❤️

11.03.2026 12:53 👍 1 🔁 0 💬 0 📌 0

Had the privilege of being Will's first PhD student and can confirm the 3.5 years of supervisor training I put him through were highly successful. Couldn’t recommend this opportunity enough

11.03.2026 10:23 👍 1 🔁 0 💬 1 📌 0

10 PhD positions at JLU Giessen in the new Research Training Group "PIMON"! We will explore how humans perceive and interact with materials and objects in natural environments.
More information on the project, the PIs, and how to apply here:
www.uni-giessen.de/de/ueber-uns...
Please share!

26.02.2026 09:29 👍 33 🔁 19 💬 1 📌 1

Why do children struggle more than adults to recognise objects in cluttered scenes? Our new paper looks at the development of visual acuity and crowding across childhood, and how the visual system fine-tunes our ability to see detail: www.nature.com/articles/s41...

23.02.2026 17:11 👍 36 🔁 17 💬 2 📌 1

**Postdoc position in human category learning**

@thecharleywu.bsky.social, Frank Jäkel and I are seeking a postdoctoral fellow to lead a joint project on human category learning at the Centre for Cognitive Science @tuda.bsky.social.

www.career.tu-darmstadt.de/tu-darmstadt...

23.02.2026 08:53 👍 39 🔁 28 💬 1 📌 1

Wow, hello Bluesky! We're the Brisbane Experimental Psychology Student Initiative (BEPSI)

We host monthly meetings with fresh research (cog, dev, comp, clin, neuro, forensic, psychophys) followed by wholesome networking (summer lawn bowls, so good)

1st meeting Weds 27th Feb 4pm - more soon! 🧠🤗

16.02.2026 06:24 👍 13 🔁 3 💬 0 📌 0

📢 Workshop announcement.

We are super excited to announce the workshop "Perceptual Inferences: from philosophy to neuroscience", organized by Alexander Schütz and Daniel Kaiser.

📍 Rauischholzhausen Castle, near Marburg, Germany
🗓️ June 8 to 10, 2026.
1/4

10.02.2026 09:00 👍 42 🔁 17 💬 1 📌 1

Check out our latest preprint: In naturalistic (virtual) scenes, we tested how spatial (proximity to anchor) and cognitive (scene semantics) factors influence allocentric representations of local target objects in a memory-guided placement task: 👇
osf.io/preprints/ps...

09.02.2026 23:51 👍 8 🔁 2 💬 0 📌 0

Super happy to announce that our Research Training Group "PIMON" is funded by the @dfg.de! Starting in October, we will have exciting opportunities for PhD students who want to explore object and material perception & interaction in Gießen @jlugiessen.bsky.social! Just look at this amazing team!

03.12.2025 12:46 👍 31 🔁 5 💬 1 📌 1
View from your office onto Giessen and surrounding villages.


Please repost! I am looking for a PhD candidate in the area of Computational Cognitive Neuroscience to start in early 2026.

The position is funded as part of the Excellence Cluster "The Adaptive Mind" at @jlugiessen.bsky.social.

Please apply here until Nov 25:
www.uni-giessen.de/de/ueber-uns...

04.11.2025 13:57 👍 80 🔁 98 💬 1 📌 4

Characterizing internal models of the visual environment | Proceedings of the Royal Society B: Biological Sciences
Despite the complexity of real-world environments, natural vision is seamlessly efficient. To explain this efficiency, researchers often use predictive processing frameworks, in which perceptual effic...

From line drawings to scene perception — our new review argues for moving beyond experimenter-driven manipulations toward participant-driven approaches to reveal what’s in our internal models of the visual world. 👁️✍️🛋
royalsocietypublishing.org/doi/10.1098/...

08.10.2025 09:07 👍 39 🔁 6 💬 1 📌 3

As this is my final PhD paper to be published, I want to give a special shout-out to my supervisors, @willjharrison.bsky.social and Jason. There’s no way to do them justice in 300 characters, so I’ve attached an excerpt from my thesis acknowledgements that still rings true today.

11/11

08.10.2025 07:12 👍 5 🔁 1 💬 0 📌 0

As always, I couldn't have done it without the team: @tsawallis.bsky.social, Jason Mattingley, and @willjharrison.bsky.social. This project (almost) never felt like work and was (mostly) pure joy β€” in no small part because of them.

10/11

08.10.2025 07:12 👍 1 🔁 0 💬 1 📌 0

Overall, our results suggest that judgements made for naturalistic stimuli are strongly associated with those predicted by very simple features, without needing to rely on more complex visual properties.

9/11

08.10.2025 07:12 👍 0 🔁 0 💬 1 📌 0

To be clear, we're not suggesting that these are the two magic features that explain all judgements for naturalistic images. We by no means conducted an exhaustive investigation of all possible predictors, and welcome the possibility that others could be just as useful, or even more so.

8/11

08.10.2025 07:12 👍 1 🔁 0 💬 1 📌 0

If we distort the patches (e.g., by thresholding pixel values or reducing the patch to edges), we can still predict participants' responses.

In this case, after heavily altering the pixel values, only the structural similarity predictor remained significant.

7/11

08.10.2025 07:12 👍 0 🔁 0 💬 1 📌 0

We computed similarity using two metrics, comparing the standard to the target and foil separately:

1. Pixel-wise luminance RMS error - how well matched the luminance values are

2. Phase-invariant structural similarity between the patches - how well matched the amplitude spectra are

6/11

08.10.2025 07:12 👍 0 🔁 0 💬 1 📌 0
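For readers who want to play with the idea, here is a minimal sketch of what the two metrics described above might look like. The function names are mine, not code from the paper, and "structural similarity" is read here simply as correlation of Fourier amplitude spectra, which is phase-invariant by construction.

```python
import numpy as np

def rms_luminance_error(a, b):
    """Pixel-wise RMS difference between two same-sized grayscale patches."""
    return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))

def amplitude_spectrum_similarity(a, b):
    """Phase-invariant comparison: correlate the Fourier amplitude spectra.

    Discarding phase keeps only the distribution of spatial-frequency
    energy, so patches with matching texture statistics score high even
    when their features sit in different positions.
    """
    amp_a = np.abs(np.fft.fft2(a)).ravel()
    amp_b = np.abs(np.fft.fft2(b)).ravel()
    return float(np.corrcoef(amp_a, amp_b)[0, 1])
```

Under this reading, circularly shifting a patch leaves its amplitude-spectrum similarity to the original at 1.0 even though the pixel-wise error becomes large.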

We fit a GLMM to participants' responses and found they could be explained by assuming participants selected the patch most similar to the standard.

Here, average participant responses are the datapoints and solid lines show the GLMM predictions.

5/11

08.10.2025 07:12 👍 0 🔁 0 💬 1 📌 0
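The "pick the patch most similar to the standard" account can be illustrated with a toy simulation. This is my own sketch, not the paper's analysis: choices follow a logistic rule on the similarity difference between target and foil, and the slope is recovered by plain maximum likelihood (the real GLMM would additionally include per-participant random effects).

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an observer who picks the more similar patch probabilistically:
# P(choose target) is a logistic function of sim(target) - sim(foil).
n = 2000
sim_diff = rng.normal(0.0, 1.0, n)  # similarity advantage of the target
true_beta = 2.0
p_target = 1.0 / (1.0 + np.exp(-true_beta * sim_diff))
chose_target = rng.random(n) < p_target

# Recover the slope by gradient ascent on the Bernoulli log-likelihood
# (bare-bones logistic regression with no intercept or random effects).
beta = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-beta * sim_diff))
    beta += 1.0 * np.mean((chose_target - p) * sim_diff)

# beta should land near true_beta, up to sampling noise
```

A steeper recovered slope means similarity differences translate more deterministically into choices, which is the pattern the thread describes.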

Oh, and the foil was always taken from the same spatial location as the target, but from its own photo.

4/11

08.10.2025 07:12 👍 0 🔁 0 💬 1 📌 0

The standard patch was always taken from the centre of a broader photo, with the target taken from one of 33 possible locations that varied in distance and azimuth offset from the standard.

Note: participants were never told about these conditions and never saw the full photos

3/11

08.10.2025 07:12 👍 0 🔁 0 💬 1 📌 0

We had participants tell us which of two image patches they thought was more likely to belong to the same scene as a preceding standard.

One patch (the target) always came from the same broader photograph as the standard, and the other (the foil) came from an entirely different photograph.

2/11

08.10.2025 07:12 👍 0 🔁 0 💬 1 📌 0
Low-level features predict perceived similarity for naturalistic images | JOV | ARVO Journals

The final bit of work from my PhD just got published at JOV! We looked at similarity judgements made for naturalistic image patches, and whether these are predicted by simple image statistics… (spoiler: yep!)

Link to paper: doi.org/10.1167/jov....

1/11

08.10.2025 07:12 👍 13 🔁 6 💬 1 📌 1

Investigating orientation adaptation following naturalistic film viewing | Scientific Reports

Thanks for reading if you've made it this far! I've had to skip a lot of the details, so if you're interested in learning more, feel free to have a read of the paper - here's the link again:

www.nature.com/articles/s41...

10/10

29.09.2025 08:27 👍 1 🔁 0 💬 0 📌 1

Massive shout out to the team: @reubenrideaux.bsky.social, Jason Mattingley & @willjharrison.bsky.social.

This project was the problem child of my PhD and has frequently come face to face with abandonment. Its publication truly wouldn't have been possible without their unwavering support.

9/10

29.09.2025 08:27 👍 4 🔁 0 💬 1 📌 0

Overall, we were pretty surprised and discuss some of the potential reasons for our basically null result in the paper. Ultimately, however, we feel our results strongly demonstrate the need for more thorough exploration of adaptation in response to more naturalistic viewing conditions.

8/10

29.09.2025 08:27 👍 0 🔁 0 💬 1 📌 0

Overall biases didn’t shift significantly in response to the adaptors. Remember the standard white bar shown with comment #3? Its orientation relative to the adaptor is plotted on the x-axis. We also looked at whether the biases changed over the course of the session - short answer: nope.

7/10

29.09.2025 08:27 👍 0 🔁 0 💬 1 📌 0

After collecting a bunch of data (and then re-collecting it after finding an error), counterbalancing the combinations of adaptor orientations, subtracting out individuals’ baseline biases, and fitting the data with a GLMM, we found… not a whole lot…

6/10

29.09.2025 08:27 👍 1 🔁 0 💬 1 📌 0

For Session 3, participants saw the second half of Casablanca with a different adaptor orientation to what they experienced in the first session. Across sessions, each participant saw one cardinal and one oblique adaptor. Otherwise, the clip/trial structure was the same as session 2.

5/10

29.09.2025 08:27 👍 0 🔁 0 💬 1 📌 0

In session 2, participants saw the first half of Casablanca - we assigned them to experience one of the four potential adaptor orientations above for that session. The movie was shown in 30 second clips, separated by 5 trials of the same perceptual task as in session 1.

4/10

29.09.2025 08:27 👍 0 🔁 0 💬 1 📌 0

The study itself had participants come in for three sessions - in the first, we just got their baseline performance at our perceptual task: is the central grating tilted to the right or left of the peripheral standard white bar?

3/10

29.09.2025 08:27 👍 0 🔁 0 💬 1 📌 0