This weird sensation when you debug something by talking to someone, iterating over possible reasonable causes until you find it, and that someone is a machine.
@ramon-astudillo
Principal Research Scientist at IBM Research AI in New York. Speech, Formal/Natural Language Processing. Currently LLM post-training, structured SDG and RL. Opinions my own and non-stationary. ramon.astudillo.com
amazing times
Uh, they're totally still around in the Iberian Peninsula
They measured Claude?
Yikes, micro-blogs should be character-limited
Yes, please. Also open source platforms to serve from home and on mobile
I think what's most important for most people isn't whether some analogy is in some sense "correct", it's whether it gives people useful intuitions. And autocomplete and stochastic parrot don't tend to lead to strong intuitions about the capabilities or impacts of LLMs.
Ok, very likely all the language in the world won't be enough for training this thing. It can't even exploit cross-lingual regularities
Ok, let me argue why it is relevant. Analogies are great for conveying meaning but also silently inject biases. "Guy that mechanically puts his finger on a line of a book and copies the answer" definitely does not feel "intelligent". That intuition is not wrong, because lookup tables cannot actually solve this.
and "what if Elon Musk transforms the entire moon into a 10^18 parameter n-gram table tuned with all the language in the world?". Well, if that were to be possible, that table may "be conscious" in the same way Claude may be. Also people would have an easier time believing this π€£.
This one here
bsky.app/profile/ramo...
Google Translate uses a neural network, not a lookup table. Also, this relates to the other thread in that translation is likely not a task from which you can make general AI arguments
For example: I could say "AI solves the trolley problem". Here is a thought experiment: I have an AGI trolley, it knows "the truth" and thus what's the right decision, so no more trolley problem.
That would be invalid in the same way.
Fair point, my initial description was poor. "Imagined" in a way that includes things we don't know to be true or not. Particularly imagining premises that are key to the conclusion.
Yeah. The uncomputability argument has also been debunked. It reduces to Penrose's argument on the limitations of AI (Gödel's theorem, etc.). See e.g. Scott Aaronson here
www.scottaaronson.com/democritus/l...
Anything can be used as a thinking device. The problem is when it is misleading or has loaded assumptions. "Chinese room" (and that analogy you made, BTW) fall IMO into that territory. Nobody doubts the existence of trolleys or that they will kill you if they run you over. That's the real issue.
(not sure I got it) "Chinese room" has the added limitation of using translation, which is not the right task for making general AI arguments. You could construct a similar one for LLMs, basically "gimme infinite n-grams, they are dumb but will look smart". The problem ofc is the "infinite" part.
Same thing. You cannot invent a device that has not been proven to exist and use that as an argument
This is basically my beef with any argument following from Chinese room or p-zombies
Regarding consciousness: I think we need a definition first. What I think is plausible is that, for any definition that assigns a number representing "consciousness", most likely both dogs and Claude won't score a zero and that that fact will also not be trivial (i.e. a toaster would be a hard zero)
You could also put it as: the bits of prediction error of a system are related to the bits it takes to describe that system. You cannot just invent a device that breaks that relation and then continue with an argument.
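A minimal sketch of one side of that relation (the toy unigram model and string are my own illustration; the point is just that log-loss in bits is literally a code length, via Shannon's source coding theorem / arithmetic coding):

```python
import math
from collections import Counter

# Prediction error <-> description length: a model's log-loss on a
# sequence, in bits, is the length an arithmetic coder would need
# to encode that sequence under the same model.
text = "abracadabra"

# Fit a toy unigram model on the text itself.
counts = Counter(text)
probs = {c: n / len(text) for c, n in counts.items()}

# Bits of prediction error = sum of -log2 p(symbol).
loss_bits = sum(-math.log2(probs[c]) for c in text)
print(f"log-loss / code length: {loss_bits:.2f} bits "
      f"({loss_bits / len(text):.2f} bits per symbol)")
```

A device that "acts smart" while being cheap to describe would be beating this bound for free, which is exactly the part you don't get to assume.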
Take the Chinese room. An imagined device that from the outside acts as if it were intelligent, but you can see the internals and it's a dumb lookup table. It's easy to "feel" this is a valid argument ... but it's made up! Maybe every device that acts intelligent has far more complex internals
Well, you are a large bunch; statistically speaking, someone should jump at my AI posts and say something like "Markov models can't have a soul!" or something, but it does not happen.
So what should kids learn now? Planning, they should learn planning. Also horizontal learning, a bit of everything.
If it is not clear: under those definitions dogs would also be conscious, so the whole enslaving argument gets tricky. Not with cats, that one is clear.
(classic) Philosophers invented logic, right? Anyway, fair, but then this cannot be used as an excuse to pass pseudo-arguments off as real ones. They are just pulling out of thin air something that looks like a duck and quacks like a duck but is not a duck, and then drawing conclusions from there.
Ok, I count this as mildly hurtful, so I am getting some push back at least. Thanks for cheering me up!
Are dogs conscious? IDK, I belong to the third set, implicitly neglected here, that thinks we need to define consciousness first. Surely there are some definitions under which Claude can be considered "a little bit conscious", but would people agree on them? Likely not
On the one hand, I see people posting about AI and getting a tsunami of idiotic hateful comments and I feel lucky I never have that type of interaction. On the other hand ... it's weird? I have a lot of followers, are they all awesome or are they all dead? maybe a mix