
Brendan Fleig-Goldstein

@brendanfg

Philosopher of cognitive science and AI @ Brown University https://www.brendanfleiggoldstein.com/

181
Followers
345
Following
53
Posts
29.11.2024
Joined

Latest posts by Brendan Fleig-Goldstein @brendanfg

Came here to also say Pittsburgh.

07.03.2026 01:20 👍 1 🔁 0 💬 0 📌 0

I mean hey, I like some 4E; it’s just not clear why those properties are necessary for cognition

15.02.2026 04:18 👍 0 🔁 0 💬 1 📌 0

There’s also a very nice discussion of this in chapter 2 of Buckner’s excellent From Deep Learning to Rational Machines, I’d recommend that over the paper I just linked!

15.02.2026 01:48 👍 2 🔁 0 💬 0 📌 0
Preview: Learning in High Dimension Always Amounts to Extrapolation — “The notion of interpolation and extrapolation is fundamental in various fields from deep learning to function approximation. Interpolation occurs for a sample $x$ whenever this sample falls inside or ...”

arxiv.org/abs/2110.09485

15.02.2026 01:39 👍 2 🔁 0 💬 1 📌 0
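The linked paper defines interpolation as a test sample falling inside the convex hull of the training set and argues this essentially never happens in high dimension. A minimal pure-Python sketch of that intuition (names and parameters are illustrative; it uses the weaker per-coordinate bounding-box test, which is only a necessary condition for hull membership):

```python
import random

def interpolation_rate(dim, n_train=100, n_test=1000, seed=0):
    """Fraction of fresh test points that fall inside the training set's
    per-coordinate bounding box (a necessary condition for lying in its
    convex hull)."""
    rng = random.Random(seed)
    rand_pt = lambda: [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    train = [rand_pt() for _ in range(n_train)]
    # Per-coordinate min/max of the training set.
    lo = [min(p[j] for p in train) for j in range(dim)]
    hi = [max(p[j] for p in train) for j in range(dim)]
    inside = lambda x: all(lo[j] <= x[j] <= hi[j] for j in range(dim))
    return sum(inside(rand_pt()) for _ in range(n_test)) / n_test

# The rate decays roughly like ((n_train - 1)/(n_train + 1))**dim,
# so it collapses toward zero as dimension grows.
for d in (2, 10, 100, 1000):
    print(d, interpolation_rate(d))
```

An exact hull-membership test (e.g. via linear programming) would only make the decay steeper, since the convex hull sits strictly inside the bounding box.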

We’re not mechanical? I think AI skepticism of this sort just reifies AGI and encourages the sloppy benchmark setting that the AI hypers engage in

15.02.2026 01:38 👍 0 🔁 0 💬 1 📌 0

Right, if “novel knowledge production” is positioned as a benchmark, then yes, it would make sense to hold LLMs to a different standard. It’s a silly benchmark of course, because LLMs are capable of novelty, contra the false idea that LLMs can only interpolate

15.02.2026 01:29 👍 1 🔁 0 💬 1 📌 0

There have been machine-generated proofs pre-LLMs, so I don’t think it’s accurate to say that machines can’t prove something new; they have already done so. I think AGI is a trash term though, so yes, anything being said about that is probably very stupid :)

15.02.2026 01:08 👍 1 🔁 0 💬 0 📌 0

I think there’s a big difference between requiring people to cite sources appropriately versus proving that they’ve done so. The suggestion seems to have been that you need to exhaustively comb through the training set to prove something isn’t in there, but we don’t require humans to do that

15.02.2026 01:04 👍 0 🔁 0 💬 4 📌 0

Why don’t we hold humans to this same standard?

14.02.2026 22:43 👍 0 🔁 0 💬 1 📌 0

Wow, that sounds super interesting and fun (if not exhausting)!

08.02.2026 20:06 👍 1 🔁 0 💬 1 📌 0

Can you expand on this? This sounds awesome and I’m curious lol

08.02.2026 19:52 👍 1 🔁 0 💬 1 📌 0

Do you have any examples of philosophers making claims about AI consciousness on the basis of their behavior? These claims imo instead derive mainly from the application of pre-existing theories of consciousness (in line with what you, Guest, etc. advocate for)

02.02.2026 15:02 👍 1 🔁 0 💬 0 📌 0

All my work is on behavioral evidence for cognitive models; check it out

27.01.2026 01:06 👍 1 🔁 0 💬 0 📌 0
The best game in town: The reemergence of the language-of-thought hypothesis across the cognitive sciences | Behavioral and Brain Sciences, Volume 46 | Cambridge Core

Correct, we don’t know for sure how computation is biologically realized. The primary evidence for computational cognitive models is behavioral data :) it’s the best game in town, and the alternative is giving up on intentional psychology www.cambridge.org/core/journal...

27.01.2026 00:59 👍 1 🔁 0 💬 1 📌 0

Software is physically realized in hardware. So is software immaterial? In a sense! But not problematically so. Nothing mysterious about the existence of Microsoft word :)

27.01.2026 00:01 👍 0 🔁 0 💬 0 📌 0

I don’t think we know yet! I’m actually a bit of a neuroscience skeptic. Neuroscience as biology is great, but neuroscience currently tells us very little about cognitive processing. Computation could occur through changes in synaptic strength, or it could be all RNA computation à la Gallistel

26.01.2026 23:58 👍 2 🔁 0 💬 2 📌 0

I’m pretty familiar with anti-computationalist arguments. I also don’t think it’s necessary for me to give a clean necessary-and-sufficient definition of computation, and I already pointed you toward different accounts. But sure: computation in this context is the physical realization of an information-processing algorithm

26.01.2026 23:19 👍 1 🔁 0 💬 2 📌 0

I’m a little confused what your argument is. Is the point just that you think, as an empirical fact, biological systems are not doing computation? That’s fine, although not obviously true. But then are you simply an eliminativist about mental states?

26.01.2026 22:38 👍 1 🔁 0 💬 1 📌 0

In my course devoted to this exact question we cover a lot of different possible accounts of computational implementation. There’s a whole literature in philosophy on this: Shagrir, Piccinini, Chalmers’s causal topology, etc.

26.01.2026 22:29 👍 1 🔁 0 💬 1 📌 0

Distinguishing between original and derived intentionality like that seems more mysterious to me than the idea that the mind is a computational system

26.01.2026 21:44 👍 1 🔁 0 💬 1 📌 0

Many don’t, but I was definitely not talking about you! That comment was for other people coming at me

26.01.2026 21:07 👍 1 🔁 0 💬 1 📌 0

Look up the paper “Still No Lie Detector for Language Models”; there is a whole literature on probing mental content in LLMs by extremely technically informed philosophers

26.01.2026 20:51 👍 0 🔁 0 💬 0 📌 0

Yes exactly, and they’re confronted with the same exact problems that neuroscientists are… identifying intentional content (even subintentional, non-conceptual content) is not something we can do with any known system currently

26.01.2026 20:48 👍 0 🔁 0 💬 0 📌 0

Then you really should know your PDP!

26.01.2026 20:47 👍 0 🔁 0 💬 1 📌 0

Also I think you misunderstood the distributed-representations idea by suggesting there should be a meta layer. It’d be good for AI skeptics to engage with some basic history of AI and learn about connectionism and PDP!

26.01.2026 20:37 👍 0 🔁 0 💬 2 📌 0

AI sentience plausibly follows from not particularly wild theories of consciousness. IIT is probably more out there than AI sentience, and I dislike attempts to draw hard lines and label certain academic theories of consciousness such as IIT as pseudoscience

26.01.2026 20:32 👍 3 🔁 0 💬 0 📌 0

(Since I imagine many will take what I say in bad faith I should clarify that I mean that the functions being computed during inference are largely opaque to us)

26.01.2026 20:26 👍 0 🔁 0 💬 1 📌 0

We don’t hand code the weights of LLMs, and so what happens during inference in LLMs is not something we know. Despite the fact that the brain evolved and we engineered LLMs, both are pretty well known at the lowest level and not well understood at the level of cognitive mechanisms

26.01.2026 20:24 👍 0 🔁 0 💬 1 📌 0

This argument cuts equally well against actual digital computers (β€œhow do transistors compute things, what symbols are they interpreting and processing?”) so it seems quite straightforwardly a bad argument

26.01.2026 20:19 👍 1 🔁 0 💬 1 📌 0

The connectionist/PDP school from which modern deep learning arose has long held that representations are emergent, distributed, etc. The brain’s representations may very well be like this as well. There’s not, like, a little belief box in our brains with a bunch of beliefs crammed into it

26.01.2026 20:08 👍 0 🔁 0 💬 1 📌 0