Join us! We are opening many postdoc positions both in London and in Gothenburg!
London
https://www.lesswrong.com/posts/GTt33CasvWjxxazJw/hiring-principia-research-fellows
deadline: March 26th
Gothenburg
www.chalmers.se/en/about-cha...
deadline: April 1st
25.02.2026 12:01
Excited to launch Principia, a nonprofit research organisation at the intersection of deep learning theory and AI safety.
Our goal is to develop theory for modern machine learning systems that can help us understand complex network behaviors, including those critical for AI safety and alignment.
16.02.2026 09:27
#CCN2026 Proceedings submissions are open and due in *two* weeks! Info about how to submit in the thread below.
Come share your science and hang out in NYC in August. :)
28.01.2026 18:28
@dataonbrainmind.bsky.social starting now in Room 10 with opening remarks from @crji.bsky.social and the first invited talk from @dyamins.bsky.social!
07.12.2025 16:15
Thrilled to start 2026 as faculty in Psych & CS
@ualberta.bsky.social + Amii.ca Fellow! Recruiting students to develop theories of cognition in natural & artificial systems. Find me at #NeurIPS2025 workshops (speaking coginterp.github.io/neurips2025 & organising @dataonbrainmind.bsky.social)
06.12.2025 19:26
Two posts from Bluesky. The first shows a figure from a paper published in Nature Scientific Reports that is full of incoherent, AI-fabricated gibberish words. The second shows a comment on a recently published eLife paper, discussing the paper and the peer reviews that were published alongside it.
Nature Sci Rep publishes incoherent AI slop. eLife publishes a paper which the reviewers didn't agree with, making all the comments and responses public with thoughtful commentary. One of these journals got delisted by Web of Science for quality concerns from not doing peer review. Guess which one?
27.11.2025 13:35
Probabilistic ML in scientific pipelines
I'm on the academic job market!
I design and analyze probabilistic machine-learning methods, motivated by real-world scientific constraints and developed in collaboration with scientists in biology, chemistry, and physics.
A few highlights of my research areas are:
07.11.2025 14:47
Applying to do a postdoc or PhD in theoretical ML or neuroscience this year? Consider joining my group (starting next Fall) at UT Austin!
POD Postdoc: oden.utexas.edu/programs-and... CSEM PhD: oden.utexas.edu/academics/pr...
23.10.2025 21:36
Hoping you find out and share!
03.10.2025 03:36
Congrats Richard!!
23.09.2025 14:03
Postdoctoral Fellow - Language Models and Neuroscience - Careers@UAlberta.ca
University of Alberta: Careers@UAlberta.ca
I am hiring a postdoc at UAlberta, affiliated with Amii! We study language processing in the brain using LLMs and neuroimaging. Looking for someone with experience in ideally both neuroimaging and LLMs, or a willingness to learn. Email me with Qs.
apps.ualberta.ca/careers/post...
15.09.2025 21:43
Frontiers | Summary statistics of learning link changing neural representations to behavior
How can we make sense of large-scale recordings of neural activity across learning? Theories of neural network learning with their origins in statistical phy...
Since I'm back on BlueSky - with @frostedblakess.bsky.social and @cpehlevan.bsky.social we wrote a brief perspective on how ideas about summary statistics from the statistical physics of learning could potentially help inform neural data analysis... (1/2)
04.09.2025 18:30
Data on the Brain & Mind
10 days left to submit to the Data on the Brain & Mind Workshop at #NeurIPS2025!
Call for:
• Findings (4 or 8 pages)
• Tutorials
If you're submitting to ICLR or NeurIPS, consider submitting here too, and highlight how to use a cog neuro dataset in our tutorial track!
data-brain-mind.github.io
25.08.2025 15:43
I'm recruiting committee members for the Technical Program Committee at #CCN2026.
Please apply if you want to help make submission, review & selection of contributed work (Extended Abstracts & Proceedings) more useful for everyone!
Helps to have: programming/communications/editorial experience.
19.08.2025 14:12
arguably the most important component of AI for neuroscience:
data, and its usability
11.08.2025 11:28
The rumors are true! #CCN2026 will be held at NYU. @toddgureckis.bsky.social and I will be executive-chairing. Get in touch if you want to be involved!
15.08.2025 16:43
many thanks to my collaborators, @saxelab.bsky.social and especially Lukas :)
13.08.2025 15:45
I like how Rosa Cao (sites.google.com/site/luosha) & @dyamins.bsky.social speculated about task constraints here (doi.org/10.1016/j.co...). I think the Platonic Representation hypothesis is a version of their argument, for multi-modal learning.
13.08.2025 14:20
Definitely! Task constraints certainly play a role in determining representational structure, which might interact with what we consider here (efficiency of implementation). We don't explicitly study it. Someone should!
13.08.2025 14:19
ICML Poster: Not all solutions are created equal: An analytical dissociation of functional and representational similarity in deep linear neural networks (ICML 2025)
Main takeaway: Valid representational comparison relies on implicit assumptions (task-optimization *plus* efficient implementation). More work to do on making these assumptions explicit!
CCN poster (today): 2025.ccneuro.org/poster/?id=w...
ICML paper (July): icml.cc/virtual/2025/poster/44890
13.08.2025 11:30
Our theory predicts that representational alignment is consistent with *efficient* implementation of similar function. Comparing representations is ill-posed in general, but becomes well-posed under minimum-norm constraints, which we link to computational advantages (noise robustness).
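As a toy illustration of the minimum-norm idea (not the paper's construction; the setup and names here are illustrative), among all two-layer linear factorizations implementing the same map, a balanced SVD factorization minimizes the total weight norm, while rescaled variants implement the identical function at strictly higher norm:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(2, 3))  # target end-to-end linear map

# Balanced factorization from the SVD: W2 @ W1 == M with
# ||W1||_F == ||W2||_F, which minimizes ||W1||^2 + ||W2||^2.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
W2 = U * np.sqrt(s)                # hidden -> output weights
W1 = np.sqrt(s)[:, None] * Vt     # input -> hidden weights
assert np.allclose(W2 @ W1, M)

def total_norm(A, B):
    return np.sum(A**2) + np.sum(B**2)

# Rescaling the layers preserves the function...
a = 5.0
assert np.allclose((W2 / a) @ (a * W1), M)
# ...but strictly increases the total weight norm.
assert total_norm(W2 / a, a * W1) > total_norm(W2, W1)
```

The balanced solution is one specific point on the orbit of function-preserving rescalings, which is what makes the representation identifiable under the norm constraint.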
13.08.2025 11:30
Function-representation dissociation in ReLU networks. (A-B) MNIST representations before/after prediction-preserving reparametrisation. (C) RSM after function-preserving reparametrisation. (D-E) Performance under input/parameter noise for different solution types.
Function-representation dissociations and the representation-computation link persist in deep nonlinear networks! Using function-invariant reparametrisations (@bsimsek.bsky.social), we break representational identifiability but degrade generalization (a computational consequence).
13.08.2025 11:30
Hidden-layer representations for a semantic hierarchy task. (A) Task structure. (B) Input/target encoding. (C-E) Hidden representations and representational similarity matrices for task-agnostic (C: LSS) vs. task-specific (D: MRNS, E: MWNS) solutions.
We demonstrate that representation analysis and comparison is ill-posed, giving both false negatives and false positives, unless we work with *task-specific representations*. These are interpretable *and* robust to noise (i.e., representational identifiability comes with computational advantages).
13.08.2025 11:30
The solution manifold. (A) Solution manifold for a 3-parameter linear network, showing GLS and constrained LSS, MRNS, and MWNS solutions. (B-E) Input/output weight relationships and parametrisation structure for each solution type.
We parametrised this solution hierarchy to find differences in handling of task-irrelevant dimensions: Some solutions compress away (creating task-specific, interpretable representations), while others preserve arbitrary structure in null spaces (creating arbitrary, uninterpretable representations).
13.08.2025 11:30
Task solution hierarchy defined by implicit regularisation objectives.
To analyse this dissociation in a tractable model of representation learning, we characterize *all* task solutions for two-layer linear networks. Within this solution manifold, we identify a solution hierarchy in terms of what implicit objectives are minimized (in addition to the task objective).
13.08.2025 11:30
Example of a failure case. (A) A random walk on the solution manifold of a two-layer linear network reveals that weights can change continuously, inducing changes in the (B) network parametrisation and thus the (C) hidden-layer representations, while preserving the (D) network output.
Deep networks have parameter symmetries, so we can walk through solution space, changing all weights and representations, while keeping output fixed. In the worst case, function and representation are *dissociated*.
(Networks can have the same function with the same or different representation.)
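A minimal NumPy sketch of such a parameter symmetry (a toy two-layer linear network, not the paper's code): rescaling each hidden unit changes every weight and the hidden representation while leaving the output untouched.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))    # 5 inputs, 3 features
W1 = rng.normal(size=(4, 3))   # input -> hidden weights
W2 = rng.normal(size=(2, 4))   # hidden -> output weights

# Rescaling symmetry: scale each hidden unit by a_i in W1
# and divide the matching column of W2 by a_i.
a = np.array([2.0, 0.5, 3.0, 1.0])
W1_new = a[:, None] * W1
W2_new = W2 / a[None, :]

H, H_new = X @ W1.T, X @ W1_new.T      # hidden representations differ...
Y, Y_new = H @ W2.T, H_new @ W2_new.T  # ...but the function is preserved

assert not np.allclose(H, H_new)
assert np.allclose(Y, Y_new)
```

Walking along a continuous family of such rescalings traces out the solution manifold while the input-output map stays fixed.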
13.08.2025 11:30
Are similar representations in neural nets evidence of shared computation? In new theory work w/ Lukas Braun (lukasbraun.com) & @saxelab.bsky.social, we prove that representational comparisons are ill-posed in general, unless networks are efficient.
@icmlconf.bsky.social @cogcompneuro.bsky.social
13.08.2025 11:30
Co-organized with @susanneharidi.bsky.social, @marcelbinz.bsky.social, Rodrigo Carrasco-Davis, @clementinedomine.bsky.social, @eringrant.me, @modirshanechi.bsky.social
13.08.2025 07:00
Want to contribute to this debate at #CCN2025? Please come to our session today, fill out the anonymous survey (forms.gle/yDBBcBZybGjogksC8), and comment on the GAC page (sites.google.com/ccneuro.org/gac2020/gacs-by-year/2025-gacs/2025-1)! Your perspectives will shape our eventual GAC paper.
13.08.2025 07:00