Came here to also say Pittsburgh
I mean, hey, I like some 4E; it's just not clear why those properties are necessary for cognition
There's also a very nice discussion of this in chapter 2 of Buckner's excellent From Deep Learning to Rational Machines, I'd recommend that over the paper I just linked!
We're not mechanical? I think AI skepticism of this sort just reifies AGI and encourages the sloppy benchmark setting that the AI hypers engage in
Right, if "novel knowledge production" is positioned as a benchmark then yes, it would make sense to hold LLMs to a different standard. It's a silly benchmark of course, because LLMs are capable of novelty, contra the false idea that LLMs can only interpolate
There have been machine-generated proofs pre-LLMs, so I don't think it's accurate to say we don't know whether machines can prove something new: they already have. I think AGI is a trash term though, so yes, anything being said about that is probably very stupid :)
I think there's a big difference between requiring people to cite sources appropriately versus proving that they've done so. The suggestion seems to have been that you need to exhaustively comb through the training set to prove something isn't in there, but we don't require humans to do that
Why don't we hold humans to this same standard?
Wow, that sounds super interesting and fun (if not exhausting)!
Can you expand on this? This sounds awesome and I'm curious lol
Do you have any examples of philosophers making claims about AI consciousness on the basis of their behavior? These claims imo instead derive mainly from the application of pre-existing theories of consciousness (in line with what you, Guest, etc. advocate for)
All my work is on behavioral evidence for cognitive models; check it out!
Correct, we don't know for sure how computation is biologically realized. The primary evidence for computational cognitive models is behavioral data :) it's the best game in town and the alternative is giving up on intentional psychology www.cambridge.org/core/journal...
Software is physically realized in hardware. So is software immaterial? In a sense! But not problematically so. Nothing mysterious about the existence of Microsoft Word :)
I don't think we know yet! I'm actually a bit of a neuroscience skeptic. Neuroscience as biology is great, but neuroscience tells us very little currently about cognitive processing. Computation could occur through changes in synaptic strength, or it could be all RNA computation à la Gallistel
I'm pretty familiar with anti-computationalist arguments. I also don't think it's necessary for me to give a clean necessary-and-sufficient definition of computation, and I already pointed you toward different accounts. But sure: computation in this context is the physical realization of an information-processing algorithm
I'm a little confused about what your argument is. Is the point just that you think, as an empirical fact, biological systems are not doing computation? That's fine, although not obviously true. But then are you simply an eliminativist about mental states?
In my course devoted to this exact question we cover a lot of different possible accounts of computational implementation. There's a whole literature in philosophy on this: Shagrir, Piccinini, Chalmers's causal topology, etc.
Distinguishing between original and derived intentionality like that seems more mysterious to me than the idea that the mind is a computational system
Many don't, but I was definitely not talking about you! That comment was for other people coming at me
Look up the paper "Still No Lie Detector for LLMs"; there is a whole literature on probing mental content in LLMs by extremely technically informed philosophers
Yes exactly, and they're confronted with the same exact problems that neuroscientists are… identifying intentional content (even subintentional, non-conceptual content) is not something we can do with any known system currently
Then you really should know your PDP!
Also I think you misunderstood the distributed representations idea by suggesting there should be a meta layer. It'd be good for AI skeptics to engage in some basic history of AI and learn about connectionism and PDP!
AI sentience plausibly follows from not particularly wild theories of consciousness. IIT is probably more out there than AI sentience, and I dislike attempts to draw hard lines and label certain academic theories of consciousness such as IIT as pseudoscience
(Since I imagine many will take what I say in bad faith I should clarify that I mean that the functions being computed during inference are largely opaque to us)
We don't hand-code the weights of LLMs, and so what happens during inference in LLMs is not something we know. Despite the fact that the brain evolved and we engineered LLMs, both are pretty well known at the lowest level and not well understood at the level of cognitive mechanisms
This argument cuts equally well against actual digital computers ("how do transistors compute things, what symbols are they interpreting and processing?") so it seems quite straightforwardly a bad argument
The connectionist/PDP school from which modern deep learning arose has long held that representations are emergent, distributed, etc. The brain's representations may very well be like this too. There's, like, not a little belief box in our brains that has a bunch of beliefs crammed into it