Thank you so much, Gorka. That truly means a lot.
Working with your reciprocity code early on actually sparked many of the ideas that eventually led to this work. I'm genuinely grateful for that.
Please let me know if you would like to explore how our tools and networks can help you better understand and design #complex #dynamical #systems.
Grateful to collaborate on this work with @kayson.bsky.social and Claus C Hilgetag. Thanks a lot for the ideas, discussions, and teamwork that made this possible. Special thanks to Arnaud Messé for insightful feedback and to the anonymous #reviewers for their constructive and very helpful comments.
Synthetic data and code used to implement the algorithm and perform network analysis are openly available on GitHub:
github.com/m00rcheh/NRC...
Across synthetic benchmarks and #connectomes, we show how graded reciprocity influences #spectral properties, #structure, #communities, and even computational capacity. I'm excited about the broader implications for #network_science, #neuroscience, and #NeuroAI, where directional connectivity matters.
Excited to share that our paper has just been published in Chaos.
Here, we introduce Network Reciprocity Control (NRC) #algorithms that make it possible to systematically tune #reciprocity and directional #asymmetry in #networks while preserving density or total weight.
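For readers less familiar with the measure being tuned: link reciprocity is simply the fraction of directed links whose reverse link also exists. A minimal NumPy sketch (the function name and toy matrix are my own illustration, not code from the paper):

```python
import numpy as np

def link_reciprocity(A):
    """Fraction of directed links that are reciprocated.

    A is a binary adjacency matrix (A[i, j] = 1 for a link i -> j).
    """
    A = (np.asarray(A) > 0).astype(int)
    np.fill_diagonal(A, 0)            # ignore self-loops
    links = A.sum()                   # total directed links
    mutual = (A * A.T).sum()          # links whose reverse also exists
    return mutual / links if links else 0.0

# 3-node example: 1->2 and 2->1 are mutual, 1->3 is one-way.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [0, 0, 0]])
print(link_reciprocity(A))  # 2 of the 3 links are reciprocated
```

Strength (weighted) reciprocity generalizes this by comparing the weights of each link pair rather than their mere presence.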
A hitchhiker's guide to information theoretical measures in psychology
Niels van Santen, Yves Rosseel, Daniele Marinazzo
https://doi.org/10.1016/j.jmp.2025.102969

Highlights
- Information theory and psychology have a rich history
- Information theoretical measures can be disconnected from information theory
- These measures complement variance-based measures of variability and association
- They are more general with respect to interpretation and possible data types
- There are many extensions towards the investigation of higher-order interactions

Abstract
In psychology, as in other sciences, information theory can be used as a tool to complement more standard regression-based methods of data analysis. It is important to see the potential of information theoretical measures as statistical tools without implying a connection to their origins in communication theory and engineering. The use of these measures may provide us with additional insights due to their sensitivity to non-linear relationships, their flexibility to the mixing of data types, and their more straightforward generalization towards investigating higher-order interactions. We briefly reintroduce information theory and compare several measures such as mutual information and co-information with correlation and regression-based methods for the investigation of variable dependence.
A hitchhiker's guide to information theoretical measures in psychology
by @nielsvs.bsky.social with me and Yves Rosseel
authors.elsevier.com/c/1mOwr53na-...
osf.io/preprints/ps...
Close-up of pale pink daisy flowers with yellow centers and green foliage in the foreground, set against a softly blurred background of frost-covered trees under a gray winter sky.
Happy New Year everyone!
I hope the new year has begun peacefully and promises good health and prosperity for you.
New work with the dream team @danakarca.bsky.social @loopyluppi.bsky.social @fatemehhadaeghi.bsky.social @stuartoldham.bsky.social @duncanastle.bsky.social
Using game theory, we show that the brain is not optimally wired for communication, and there's more to its story:
www.biorxiv.org/content/10.6...
Joint modelling of brain and behaviour dynamics with artificial intelligence
www.nature.com/articles/s41...
A remarkable journey of resilience and transformation, from the chaotic corridors of group homes to the halls of Columbia and Stanford, EMERGENCE is a coming-of-age tale where heartbreak and humor meet the scientific wonder of modern artificial intelligence.
Preorder: tinyurl.com/fzcxb5ea
Our new manuscript, led by Emily Corrigan, examines inhibitory neuron diversity across approximately 160 million years of evolutionary divergence, as part of the BRAIN Initiative Cell Atlas Network (BICAN) brain atlas package: www.nature.com/articles/s41...
Talks from #SNUFA 2025 are now available on YouTube:
youtube.com/playlist?lis...
BIG ANNOUNCEMENT: I haven't been this excited to be part of something new in 15 years… Thrilled to reveal the passion project I've been working on for the past year and a half! (thread below)
Spiking NN fans - the #SNUFA workshop (Nov 5-6) agenda is finalised and online now. Make sure to register (free) soon. (Note you can register for either day and come to both.)
Agenda: snufa.net/2025/
Registration: www.eventbrite.co.uk/e/snufa-2025...
Thanks to all who voted on abstracts!
New preprint!
"A Computational Perspective on the No-Strong-Loops Principle in Brain Networks"
www.biorxiv.org/content/10.1...
Over the past 3 years, we've been investigating why cortical networks avoid strong reciprocal loops, and what this means for computation.
A thought-provoking perspective from the visionary @giacomoi.bsky.social, calling for neuromorphic computing to return to its roots in fundamental neuroscience; an inspiring vision for the future of NeuroAI.
I would also like to thank prominent figures in the field, including Sara Solla, Petra Vertes, @kenmiller.bsky.social, @bendfulcher.bsky.social, @jlizier.bsky.social, @danakarca.bsky.social, Marcus Kaiser, Gorka Zamora-López, and Patrick Desrosiers, who provided feedback during lab visits and conferences.
This work has been in progress for 3+ years.
Grateful to my co-authors, Claus Hilgetag, @kayson.bsky.social, and Moein Khajehnejad, for their invaluable contributions.
Implications:
Neuroscience: a functional rationale for the evolutionary suppression of strong reciprocal motifs.
NeuroAI: reciprocity as a tunable design parameter in recurrent and neuromorphic networks.
So why does the brain avoid strong loops?
Because reciprocity systematically hurts computation.
Suppressing strong loops preserves:
- working memory
- representational diversity
- stable but flexible dynamics
We validated this on empirical connectomes (macaque long-distance, macaque visual cortex, marmoset).
Result: the same pattern.
Strong reciprocity consistently undermines memory and representational richness.
Spectral analysis explains why:
- Higher reciprocity → larger spectral radius (instability risk).
- Narrower spectral gap → less dynamical diversity.
- Lower non-normality → weaker transient amplification.
Together: a compressed dynamical range.
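As a rough illustration of these three spectral quantities, here is a hedged sketch of my own; Henrici's departure from normality stands in for whatever non-normality measure the paper actually uses, and the random weight matrices are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_profile(W):
    """Spectral radius, spectral gap, and Henrici's departure from normality."""
    mags = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
    radius = mags[0]                     # largest |eigenvalue|
    gap = mags[0] - mags[1]              # distance to the next-leading mode
    # Henrici's index: sqrt(||W||_F^2 - sum |lambda_i|^2); zero iff W is normal
    dep = np.sqrt(max(np.linalg.norm(W, 'fro') ** 2 - np.sum(mags ** 2), 0.0))
    return radius, gap, dep

W = rng.normal(scale=0.1, size=(64, 64))   # asymmetric (low-reciprocity) weights
W_sym = 0.5 * (W + W.T)                    # fully reciprocal counterpart
print(spectral_profile(W))
print(spectral_profile(W_sym))             # departure from normality ~ 0
```

Symmetrizing the weights (maximal strength reciprocity) drives the non-normality index to zero, which is exactly the "weaker transient amplification" direction described above.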
Interestingly, hierarchical modular networks consistently outperformed random counterparts, but only when reciprocity was low. However, the comparative advantage of a given topology shifts with reciprocity, sparsity, and weight distribution.
Findings (robust across sizes, densities, architectures):
- Increasing reciprocity (both link and strength reciprocity) reduces memory capacity.
- Representation becomes less diverse (lower kernel rank).
- Effects are strongest in ultra-sparse networks.
Methods:
- Reservoir computing to isolate structure from learning.
- Networks of 64–256 nodes, in both ultra-sparse and sparse regimes.
- Topologies: small-world, hierarchical modular, core–periphery, hybrid, and nulls.
- Metrics: memory capacity, kernel rank, spectral analysis.
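A bare-bones version of the linear memory-capacity measure mentioned above (my own sketch with arbitrary parameter choices; the paper's actual reservoir setup, input scaling, and delay range will differ):

```python
import numpy as np

rng = np.random.default_rng(1)

def memory_capacity(W, max_delay=20, T=2000, washout=200):
    """Linear memory capacity of a tanh reservoir with recurrent weights W.

    Drives the network with i.i.d. input, then sums the squared correlation
    between each delayed input u(t - k) and its optimal linear readout.
    """
    n = W.shape[0]
    w_in = rng.normal(scale=0.5, size=n)      # random input weights (assumed)
    u = rng.uniform(-1, 1, size=T)
    x = np.zeros(n)
    states = np.empty((T, n))
    for t in range(T):
        x = np.tanh(W @ x + w_in * u[t])
        states[t] = x
    X = states[washout:]                      # discard transient
    mc = 0.0
    for k in range(1, max_delay + 1):
        target = u[washout - k:T - k]         # input delayed by k steps
        coef, *_ = np.linalg.lstsq(X, target, rcond=None)
        mc += np.corrcoef(X @ coef, target)[0, 1] ** 2
    return mc

W = rng.normal(size=(64, 64)) / np.sqrt(64)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius to 0.9
mc = memory_capacity(W)
print(mc)
```

Kernel rank can be probed in the same framework by feeding many distinct input streams and taking the rank of the resulting state matrix.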
In earlier work (www.biorxiv.org/content/10.1...), we developed Network Reciprocity Control (NRC): algorithms that adjust reciprocity (link + strength) while preserving network structure.
In this study, we apply NRC to systematically test how reciprocity shapes computation in recurrent networks.
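Schematically, one can push link reciprocity up while keeping density fixed by converting unreciprocated links into reciprocal ones. The toy rewiring below is my own illustration of that idea, not the NRC algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(2)

def raise_reciprocity(A, target, max_iter=10_000):
    """Toy rewiring toward a target link reciprocity at fixed density.

    Each step removes one unreciprocated link and uses it to reciprocate
    another, so the number of directed links never changes. Assumes a
    binary adjacency matrix with no self-loops. Illustration only; not
    the NRC algorithm from the paper.
    """
    A = A.copy()
    for _ in range(max_iter):
        if (A * A.T).sum() / A.sum() >= target:
            break                              # target reciprocity reached
        one_way = np.argwhere((A == 1) & (A.T == 0))
        if len(one_way) < 2:
            break                              # nothing left to rewire
        i, j = rng.choice(len(one_way), size=2, replace=False)
        (a, b), (c, d) = one_way[i], one_way[j]
        A[c, d] = 0                            # drop one one-way link...
        A[b, a] = 1                            # ...and reciprocate another
    return A

A0 = (rng.random((30, 30)) < 0.1).astype(int)  # sparse random digraph
np.fill_diagonal(A0, 0)
A1 = raise_reciprocity(A0, target=0.6)
```

Lowering reciprocity works symmetrically (break mutual pairs and redirect one member elsewhere); preserving total weight rather than link count requires the strength-reciprocity variant described in the paper.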
The no-strong-loops principle:
Across species (macaque, marmoset, mouse), strong reciprocal (symmetric) connections are rare.
This asymmetry is well known anatomically.
But what are its computational consequences?
Interested in Network hubs, cortical hierarchies, and gradients? Ever wonder where they come from? Check our latest review, where we cover different approaches to mapping hubs, models for their evolution, and mechanisms for how they develop:
osf.io/preprints/os...