Many apparent disagreements over the utility of neural manifolds come from a lack of clarity on what the term really encompasses, argues @mattperich.bsky.social
#neuroskyence
www.thetransmitter.org/neural-dynam...
New paper out in #Neuron: A general theory of sequential working memory in prefrontal cortex and RNN/SSMs with their exact neural mechanism. Plus unifying this new mechanism with the alternate mechanism of hippocampal cognitive maps! (1/9)
www.cell.com/neuron/fullt...
I am excited to share the last work of my postdoc as a Swartz Fellow at NYU on the dynamic routing of large-scale cognitive networks in the neocortex! Here's a quick breakdown: 🧵
preprint: www.biorxiv.org/content/10.1...
1760: Bayes: You should interpret what you see in the light of what you know.
1780: Galvani: Nerves have something to do with Electricity.
1850: Phineas Gage et al: Different parts of the brain do different things.
It feels like we're back in the 40s
I'm familiar with aspects of this literature, but it's quite possible that I'm misinterpreting your post. Is there something specific about the scenario I postulate that is inconsistent with any of the 4Es?
What cases do you have in mind? I can imagine some functions where ML could generalize, so long as the out-of-training data follows the pattern of the training set. But with more complex functions, generalization should fall off as you deviate further from the training set.
An example of out of training would be speaking a language that you've never before encountered. Absurd example, but definitely out of training :)
I wouldn't count that. That would be akin to asking an LLM a question it has never encountered before and claiming that a proper response implies out-of-training solution. I'd say the answer is fully within the training set (word association).
OP is an example of LLMs failing, which is not hard to imagine. What I'm having a hard time envisioning is a human (or any animal) solving a problem outside of their training set
I see the intuition behind your comment. But I feel that this intuition breaks down when you go down to specific examples of problems with presumed out-of-training set solutions. I just can't think of an example.
Yeah, I get that. I still prefer to attempt defining. But I guess it's just a personal preference :)
Oh oops. I always thought definitions helped think about issues properly.
So do you then subscribe to something along the lines of definition 2 to assign intelligence? Or something else?
The problem I see is that under this definition you would give intelligence to an LLM used by a robot that is able to sense and interact with the environment. But that doesn't seem all that different from our current LLMs
LLMs would have some of the first, but none of the second. Could the second definition be closer to what you were thinking?
My sense is that the term intelligence is used in 2 ways: (1) abilities to do goal-directed actions (where the proficiency, breadth, speed of performance and speed of learning measure independent aspects of intelligence), and (2) meaning/grounding of symbols and actions, on the other...
It's tricky though. It's hard to argue that humans or other animals solve new problems that are not in our training sets (can you think of an example?). And re the second point, it feels arbitrary to allow human-like errors only in the definition of intelligence.
Depends on how you define intelligence. How would you define it such that LLMs have zero?
Important paper!
www.biorxiv.org/content/bior...
I'm not sure the Discussion fully delineates its radical implications.
No more...
* Place cells
* Grid cells, splitter cells, border cells
* Mirror neurons
* Reward neurons
* Conflict cells
(continued)
And what does it mean to "truly understand"? Is there something more to understanding than what LLMs do? If you are curious about these questions this blog might help: blog.dileeplearning.com/p/ingredient...
…It boils down to whether we want to recruit people for incremental progress on a path we're comfortable with, or if we need/want breakthroughs & paradigm shifts. I think we need the latter, & that requires learning from biology first, more than we have, rather than starting & staying in model land.
Noncortical cognition, subcortex matters!
Happy to share a very short piece on thinking of cognition more broadly, as solving a wide gamut of behavioral problems, including problems in the "here and now".
Open access for now:
www.sciencedirect.com/science/arti...
#neuroscience