Every year, our apricot is the first to blossom, which is easy to remember because “apricot” shares a linguistic root with “precocious”.
I hope that when I'm 88 I'm also still able to embrace new technologies with such joy and curiosity:
www-cs-faculty.stanford.edu/~knuth/paper...
I think people just disagree on what representing something with a graph means. You seem to mean something very specific, along the lines of eq. 4. But if I want to represent the interactions of an Ising model with couplings up to n-th order, a hypergraph can do that, and a pairwise graph can't--you'd need 2^n degrees of freedom.
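To make that concrete, here's a toy sketch (my own, not from the discussion; the couplings are invented): a hyperedge over k spins encodes a k-th order coupling that no collection of pairwise edges can express.

```python
def ising_energy(spins, hyperedges):
    """Energy of a higher-order Ising model.

    Each hyperedge (nodes, J) contributes -J * prod of the spins in it.
    A pairwise graph only allows len(nodes) == 2; a hypergraph has no
    such restriction, so it can encode e.g. a 3rd-order interaction.
    """
    E = 0.0
    for nodes, J in hyperedges:
        term = J
        for i in nodes:
            term *= spins[i]
        E -= term
    return E

# Hypothetical couplings: one 3rd-order term and one pairwise term.
hyperedges = [((0, 1, 2), 1.0), ((0, 1), 0.5)]
print(ising_energy({0: 1, 1: 1, 2: -1}, hyperedges))  # -(1*1*-1) - (0.5*1*1) = 0.5
```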
ok let's make a list of all ℵ₀ genders then. Now construct a new flag where the color of the first band differs from that of the 1st gender's flag, the color of the second band differs from that of the 2nd gender's flag, etc. This defines a new gender that was not in your list of ℵ₀ genders. □
I found this surprising and somewhat worrying: openAI is using an AI model to align their model spec. #NeurIPS2025
🔗 Erik’s post: theintrinsicperspective.com/p/i-figured-...
🔗 The paper: arxiv.org/abs/2510.02649
A few months ago I started discussing causal emergence with Erik Hoel.
This led to a really fun collaboration, and a new approach to “engineer emergence”.
Erik just published an overview of the ideas, goals, and dreams:
I know you like showing pictures of lenses but this seems a little excessive
Many sea stars begin life as young fairy-like creatures (called brachiolaria) that float through the open ocean. Eventually, a small star forms within them (here in yellow). The fairy-like brachiolaria sinks under the star’s weight, and the star pops out!
🎥@the_story_of_a_biologist (on Insta)
There's so much more in the paper, largely thanks to Patrick who really pushed this to the next level. We're already working on applications: SVs are often used for XAI, but now we can do this for vector-valued functions--the kind implemented by transformers... Stay tuned!
To summarise:
⬆️Möbius inversions construct higher-order structure.
⬇️Shapley values project this down again, in the 'right' way.
We derive generalisations of both, to directed acyclic multigraphs, and group-valued functions.
This shows how intimately related Shapley values and Möbius inversions are: we derive an expression that expresses Shapley values *purely in terms of the incidence algebra*!
Doing so required also generalising the Möbius inversion theorem to this setting (prev. only defined for ring-valued functions). We show that it's a natural theorem in the *path algebra* of the graph:
But we go further.
Classical Shapley values only work for real-valued functions on power sets of players (or lattices).
We generalise them even beyond posets to
✅vector/group-valued fns
✅weighted directed acyclic multigraphs
and prove uniqueness!
That’s exactly what we do.
We reinterpret Shapley values as projection operators: a recursive re-attribution of higher-order synergy to lower-order parts.
This turns Shapley values into a general projection framework for hierarchical structure, valid far beyond game theory.
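One way to see the projection on the classical power-set case: each coalition's Möbius synergy m(S) is split equally among its |S| members, φ_i = Σ_{S∋i} m(S)/|S|. A toy sketch (my own illustration, not code from the paper; the synergies are hypothetical):

```python
def shapley_from_synergies(players, m):
    """Shapley values as a projection of higher-order structure:
    each coalition's synergy m(S) is re-attributed equally to its members."""
    phi = {i: 0.0 for i in players}
    for S, mS in m.items():
        for i in S:
            phi[i] += mS / len(S)
    return phi

# Toy 2-player game: each alone contributes 1, plus a pairwise synergy of 1,
# so the grand coalition is worth 3 in total.
m = {frozenset({0}): 1.0, frozenset({1}): 1.0, frozenset({0, 1}): 1.0}
print(shapley_from_synergies([0, 1], m))  # {0: 1.5, 1: 1.5}
```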
Möbius inversions are a way to derive higher-order interactions in a system's mereology. I wrote a blog post about this here 👉https://abeljansma.nl/2025/01/28/mereoPhysics.html
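For the classical power-set case, the inversion is just inclusion-exclusion: m(S) = Σ_{T⊆S} (−1)^{|S|−|T|} f(T). A minimal sketch (my own toy code; the function f is hypothetical):

```python
from itertools import combinations

def mobius_synergies(players, f):
    """Möbius inversion on the power-set lattice:
    m(S) = sum over T ⊆ S of (-1)^(|S|-|T|) * f(T).
    Each m(S) is the irreducible 'synergy' of coalition S."""
    def subsets(S):
        S = sorted(S)
        for k in range(len(S) + 1):
            for T in combinations(S, k):
                yield frozenset(T)
    return {S: sum((-1) ** (len(S) - len(T)) * f(T) for T in subsets(S))
            for S in subsets(players)}

# Toy set function on {0, 1}: f(S) = |S|^2.
m = mobius_synergies([0, 1], lambda S: len(S) ** 2)
print(m[frozenset({0, 1})])  # pairwise synergy: 4 - 1 - 1 + 0 = 2
```

Summing m over all subsets recovers f of the full set, which is exactly the Möbius inversion theorem on this lattice.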
If Shapley values are truly general, we should be able to express them for any Möbius inversion/higher-order structure.
But Shapley values (SVs) aren’t just about fairness.
They're really a projection operator: the right way to push higher-order structure back down to lower levels.
So… can we do this more generally? 🤔
Enter Möbius inversions...
If a group of people earn a payoff together, how should it be fairly distributed?
Shapley values are weighted sums of sub-coalition "synergies", and provably the fairest possible distribution.
Work like this earned Shapley the Nobel Prize. 🧮
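For anyone who wants the classical formula in runnable form, here's a minimal sketch (my own toy example, not code from the paper; the two-player game is hypothetical):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Classical Shapley values for a characteristic function v,
    which maps frozensets of players to real payoffs."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                # weight = |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(S | {i}) - v(S))  # marginal contribution
        phi[i] = total
    return phi

# Toy 2-player game: alone each earns 1, together they earn 3.
v = lambda S: {0: 0.0, 1: 1.0, 2: 3.0}[len(S)]
print(shapley_values([0, 1], v))  # {0: 1.5, 1: 1.5} by symmetry
```

Note the efficiency axiom in action: the values sum to v(grand coalition).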
🚨New paper: arxiv.org/abs/2510.05786
*Shapley values beyond game theory*
We show that Shapley values aren’t just about dividing payoffs--they are the right way to project down any higher-order structure.
We generalise them, and Möbius inversions, in important ways: 🧵
This is how Rota originally introduced the incidence algebra. Everyone since has (correctly) required the ring to be commutative. Did people in the '60s just refer to commutative rings as associative rings?
Just to balance out the discourse: the #NeurIPS2025 review process for this paper went great. Fair reviews, mostly responsive reviewers, and a thoughtful AC who caught a possible conflict of interest. Definitely improved the paper.
The #NeurIPS2025 version is now online: arxiv.org/pdf/2501.11447
It includes a new analysis to show that LLM semantics can be decomposed: the negativity of "horribly bad" is redundantly encoded in the two words, whereas "not bad" has synergistic semantics (i.e. negation):
The "partial causality decomposition" was just accepted for a spotlight at #NeurIPS2025!
The final version includes a decomposition of LLM semantics---the arXiv version should be updated soon. Stay tuned!
Hi! Sorry, I'm not often on this site. They're now online: www.youtube.com/@dutchinstit...
While I'm flattered, it's a bit weird that google's AI defers to me when you search for this:
Yes!
Next week we're organising a workshop on the role of analogies in (artificial) intelligence, with:
Melanie Mitchell (@melaniemitchell.bsky.social), Martha Lewis, Jules Hedges (@julesh.mathstodon.xyz.ap.brid.gy), and Han van der Maas.
Register here: www.d-iep.org/workshopanal...
Don't take my word for it--take Reviewer 2's: "I found the paper extremely interesting and deep"
A gentle introduction is available at abeljansma.nl/2025/01/28/m...
My new approach to higher-order interactions in complex systems is now published in Physical Review Research:
journals.aps.org/prresearch/a...
The link doesn’t seem to work for me