Disanalogies between causal learning in animals vs. machines: Comment on “disentangled representations for causal cognition” by F. Torresan & M. Baltieri
My comment on Filippo Torresan & @manuelbaltieri.bsky.social's "Disentangled representations for causal cognition" in Physics of Life Reviews:
www.sciencedirect.com/science/arti...
I argue that there is little meaningful analogy between learning from "pixels" vs "experience," but I praise
11.07.2025 06:15
👍 6
🔁 3
💬 1
📌 0
Shocking
02.05.2025 11:43
👍 0
🔁 0
💬 0
📌 0
Two hypercube-shaped category-theoretic diagrams, each covered with an unreadable mess of labels.
Igor, you legend. Don't stop being you.
There are ten more of these unreadable hypercube diagrams on the following pages....
Source: https://arxiv.org/abs/2505.00682
02.05.2025 07:32
👍 1
🔁 4
💬 1
📌 0
My experience applying for retractions at Elsevier.
I've looked at paper mills since 2019 and drafted a preprint on a paper mill from an international publisher in 2021. I started contacting journals or research integrity teams to raise concerns about papers. Publishers react differently. 1/n
25.04.2025 13:47
👍 31
🔁 16
💬 2
📌 1
Great talk by @manuelbaltieri.bsky.social!
23.04.2025 11:04
👍 2
🔁 1
💬 0
📌 0
"We discuss the problem of running today’s software decades, centuries, or even millennia into the future" tinlizzie.org/VPRIPapers/t...
10.04.2025 21:19
👍 5
🔁 2
💬 0
📌 0
Great to see a colleague speaking up, sad to think about the state of affairs.
26.03.2025 08:14
👍 2
🔁 0
💬 0
📌 0
I don't want to delete anything. I simply agree with Barbieri's distinction and claim that for a successful syntactic relationship, there is no need for anticipation or computation.
On that level, the cell is a simple reliable #state machine (transducer) with no place for interpretation of meaning.
23.03.2025 01:53
👍 2
🔁 1
💬 1
📌 0
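For readers outside automata theory, a minimal sketch of what "a simple reliable state machine (transducer)" means formally — a Mealy machine whose output is a pure function of state and input. The states and alphabet below are invented for illustration, not a model of any actual cell:

```python
# Minimal Mealy machine (finite-state transducer): the output is a pure
# function of current state and input symbol. No anticipation, no
# interpretation of meaning, just table lookups.

class Mealy:
    def __init__(self, delta, lam, start):
        self.delta = delta   # transition table: (state, symbol) -> next state
        self.lam = lam       # output table: (state, symbol) -> output
        self.state = start

    def step(self, symbol):
        out = self.lam[(self.state, symbol)]
        self.state = self.delta[(self.state, symbol)]
        return out

    def run(self, symbols):
        return [self.step(s) for s in symbols]

# Toy instance: outputs 1 exactly when the machine is in the "odd" state.
delta = {("even", 0): "even", ("even", 1): "odd",
         ("odd", 0): "odd",   ("odd", 1): "even"}
lam = {(s, b): (1 if s == "odd" else 0)
       for s in ("even", "odd") for b in (0, 1)}

m = Mealy(delta, lam, "even")
print(m.run([1, 1, 0, 1]))  # -> [0, 1, 0, 0]
```

The point of the cartoon: every response is fixed by the tables, so the machine is reliable without there being anything to "interpret".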
Re the Tononi paper: Both Tononi’s IIT (phi) and Friston’s FEP start from fundamental, axiomatic, and debatable assumptions. These assumptions are generally made without any humility, and this logic allows them to make exceptionally broad claims, which contributes to my unease about them.
12.03.2025 16:01
👍 92
🔁 17
💬 9
📌 3
Directed wiring diagrams for Mealy machines!
10.03.2025 05:31
👍 1
🔁 0
💬 0
📌 0
They wanted to save us from a dark AI future. Then six people were killed
How a group of Silicon Valley math prodigies, AI researchers and internet burnouts descended into an alleged violent cult
Only just learning about this now -- I guess for a while people have predicted that the AI doomer rationalist crowd would turn violent, so it's not surprising in some sense. Still, though: odd times!
www.theguardian.com/global/ng-in...
08.03.2025 07:50
👍 91
🔁 15
💬 10
📌 6
Secondly, we discuss how this form of Bayesian filtering is quite simplistic, 1) not making full use of Bayesian updates by ignoring observations from the environment/plant, and 2) assuming that beliefs of equicredible states of the environment are disjoint (they form a partition).
16/16
05.03.2025 02:30
👍 0
🔁 0
💬 0
📌 0
Importantly, this makes use of the fact that we have a Markov category, Rel^+, of possibilistic Markov kernels that can be used to specify beliefs as (sub)sets without assigning them probabilities, but that works very much like other “nice” Markov categories.
15/
05.03.2025 02:30
👍 0
🔁 0
💬 1
📌 0
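A minimal sketch of what filtering with set-valued ("possibilistic") beliefs could look like in code: beliefs are plain sets of states with no probabilities attached, dynamics map each state to its set of possible successors, and observations prune the belief. The 3-state system and its dynamics are invented for illustration and are not the paper's Rel^+ construction:

```python
# Possibilistic prediction/update: a belief is a set of possible states,
# a "kernel" maps each state to its set of successors, and an observation
# model maps each state to the observations it can emit.

def predict(belief, dynamics):
    """Image of a belief set under a possibilistic kernel."""
    return set().union(*(dynamics[s] for s in belief)) if belief else set()

def update(belief, obs, obs_model):
    """Discard states inconsistent with the observation."""
    return {s for s in belief if obs in obs_model[s]}

# Toy 3-state system (all numbers/labels illustrative).
dynamics = {"a": {"a", "b"}, "b": {"c"}, "c": {"a"}}
obs_model = {"a": {0}, "b": {0, 1}, "c": {1}}

belief = {"a", "b", "c"}                        # total uncertainty
belief = update(predict(belief, dynamics), 1, obs_model)
print(belief)  # states still possible after seeing observation 1
```

Note that every state in a belief set is "equicredible": the filter only says which states remain possible, never how likely they are.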
We then show how this corresponds to a Bayesian filtering interpretation for a reasoner: how a controller modelling its environment can be understood as performing Bayesian filtering on its environment.
14/
05.03.2025 02:30
👍 0
🔁 0
💬 1
📌 0
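For contrast with the possibilistic story, a minimal probabilistic Bayes filter over a hypothetical 2-state environment: predict with the transition model, then reweight by the observation likelihood and renormalise. All numbers are made up for illustration:

```python
# Minimal discrete Bayes filter (predict-then-update) on a toy 2-state
# environment; transition and observation probabilities are invented.

states = ["x", "y"]
T = {"x": {"x": 0.9, "y": 0.1}, "y": {"x": 0.2, "y": 0.8}}  # P(s' | s)
O = {"x": {0: 0.8, 1: 0.2}, "y": {0: 0.3, 1: 0.7}}          # P(obs | s)

def filter_step(prior, obs):
    # Predict: push the prior through the transition model.
    pred = {s2: sum(prior[s1] * T[s1][s2] for s1 in states) for s2 in states}
    # Update: reweight by the observation likelihood, then normalise.
    unnorm = {s: O[s][obs] * pred[s] for s in states}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

belief = {"x": 0.5, "y": 0.5}
for obs in [1, 1, 0]:
    belief = filter_step(belief, obs)
print(belief)
```

Unlike the set-valued version, every retained state carries a credence, and the normalisation step is exactly where the full Bayesian update uses the observation.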
Firstly, we show that the definition of a model between two autonomous systems can be “reversed” to build a “possibilistic” version of the internal model principle.
13/
05.03.2025 02:30
👍 0
🔁 0
💬 1
📌 0
After a reasonably self-contained overview of string diagrams for Markov categories, and some definitions including Bayesian inference/filtering and their parametrised and conjugate-prior versions, we dive into the main result, showing mainly two things.
12/
05.03.2025 02:30
👍 0
🔁 0
💬 1
📌 0
Our focus here is mostly technical and has to do almost entirely with control theory, but considering where the conversation started on the other platform, I hope that this will also have an impact in the cognitive and life sciences.
10/
05.03.2025 02:30
👍 1
🔁 0
💬 1
📌 0
This is often taken to be 1) a better formalisation of Conant & Ashby’s good regulator “theorem”, and 2) the reason why talking about “internal models” is necessary in cognitive science, AI/ML/RL, biology and neuroscience.
9/
05.03.2025 02:30
👍 2
🔁 0
💬 1
📌 0
The internal model principle is arguably one of the most influential outputs of control theory, claiming, at its core, that if a controller regulates a plant against disturbances from the environment, it does so by implementing a model of the environment.
8/
05.03.2025 02:30
👍 1
🔁 0
💬 1
📌 0
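A cartoon of the principle in code, under the strong simplifying assumption of a known, deterministic, periodic disturbance: the regulator keeps the output at zero precisely because it carries its own copy of the disturbance generator. Names and numbers are illustrative, not Wonham's formal statement:

```python
# Internal-model cartoon: a regulator cancels a deterministic periodic
# disturbance by running an internal copy of the generator producing it.

PATTERN = [1, -2, 3]   # the disturbance dynamics, known to the controller

def disturbance(t):
    return PATTERN[t % 3]

class Regulator:
    """Carries an internal copy of the disturbance dynamics."""
    def __init__(self):
        self.t = 0

    def action(self):
        u = -PATTERN[self.t % 3]   # predict via the internal model, cancel
        self.t += 1
        return u

reg = Regulator()
outputs = [disturbance(t) + reg.action() for t in range(6)]
print(outputs)  # regulated output stays at zero every step
```

Delete the internal copy (e.g. replace `action` with a constant) and regulation fails: that asymmetry is the intuition the principle makes precise.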
We define “models” for non-autonomous (fully observable) systems, generalising the original definition for autonomous systems (but focus on the latter). We think of this as generalising aspects of lumpability, state aggregation, coarse grainings, dynamical consistency, etc.
7/
05.03.2025 02:30
👍 1
🔁 0
💬 1
📌 0
We review the original work by Wonham and collaborators, and unpack some of its implicit assumptions, finding that at least one of them requires more attention (we also have a result that doesn’t require it, and may end up in a revised version or a future work).
6/
05.03.2025 02:30
👍 1
🔁 0
💬 1
📌 0
In the first part, we review and reformulate the “internal model principle” from control theory (at least, one of its versions) in a more modern language heavily inspired by categorical systems theory (www.davidjaz.com/Papers/Dynam..., github.com/mattecapu/ca...).
5/
05.03.2025 02:30
👍 1
🔁 0
💬 1
📌 0
In this work, we focus on two specific definitions of models, and show their connections. One is inspired by work in control theory, and one comes from Bayesian inference/filtering for cognitive science, AI and ALife, and is formalised with Markov categories.
4/
05.03.2025 02:30
👍 2
🔁 0
💬 1
📌 0