
Ramon Astudillo

@ramon-astudillo

Principal Research Scientist at IBM Research AI in New York. Speech, Formal/Natural Language Processing. Currently LLM post-training, structured SDG and RL. Opinions my own and non-stationary. ramon.astudillo.com

6,411
Followers
347
Following
2,174
Posts
27.07.2023
Joined

Latest posts by Ramon Astudillo @ramon-astudillo

This weird sensation when you debug something by talking to someone, iterating over possible reasonable causes until you find it, and that someone is a machine.

09.03.2026 23:58 👍 0 🔁 0 💬 0 📌 0

amazing times

09.03.2026 12:13 👍 0 🔁 0 💬 0 📌 0
Post image
09.03.2026 07:44 👍 275 🔁 55 💬 4 📌 6

Uh, they're totally still around in the Iberian Peninsula

09.03.2026 03:07 👍 0 🔁 0 💬 0 📌 0

They measured Claude?

09.03.2026 03:00 👍 1 🔁 0 💬 0 📌 0

Yikes, micro-blogs should be character limited

08.03.2026 23:15 👍 1 🔁 0 💬 1 📌 0

Yes, please. Also open-source platforms to serve from home and on mobile

08.03.2026 19:32 👍 2 🔁 0 💬 0 📌 0

I think what's most important for most people isn't whether some analogy is in some sense "correct", it's whether it gives people useful intuitions. And "autocomplete" and "stochastic parrot" don't tend to lead to strong intuitions about the capabilities or impacts of LLMs.

08.03.2026 11:17 👍 54 🔁 3 💬 1 📌 2

Ok, very likely all the language in the world won't be enough for training this thing. It can't even exploit cross-lingual regularities

08.03.2026 16:21 👍 1 🔁 0 💬 0 📌 0

Ok, let me argue why it is relevant. Analogies are great for conveying meaning, but they also silently inject biases. "A guy who mechanically puts his finger on a line of a book and copies the answer" definitely does not feel "intelligent". That intuition is not wrong: lookup tables cannot solve this.

08.03.2026 16:15 👍 1 🔁 0 💬 1 📌 0

and "what if Elon Musk transforms the entire Moon into a 10^18-parameter n-gram table tuned on all the language in the world?". Well, if that were possible, that table may "be conscious" in the same way Claude may be. Also, people would have an easier time believing this 🤣.

08.03.2026 16:11 👍 0 🔁 0 💬 0 📌 1

This one here

bsky.app/profile/ramo...

08.03.2026 16:05 👍 0 🔁 0 💬 0 📌 0

Google Translate uses a neural network, not a lookup table. Also, this relates to the other thread in that translation is likely not a task from which you can make general AI arguments

08.03.2026 16:04 👍 0 🔁 0 💬 2 📌 0

👆 For example: I could say "AI solves the trolley problem". Here is a thought experiment: I have an AGI trolley; it knows "the truth" and thus the right decision, so no more trolley problem.

That would be invalid in the same way.

08.03.2026 15:58 👍 1 🔁 0 💬 0 📌 0

Fair point, my initial description was poor. "Imagined" in a way that includes things that we don't know to be true or not. Particularly, imagining premises that are key to the conclusion. 👇

08.03.2026 15:58 👍 0 🔁 0 💬 1 📌 0
PHYS771 Lecture 10.5: Penrose

Yeah. The uncomputability argument has also been debunked. It reduces to Penrose's argument on the limitations of AI (Gödel's theorem, etc.). See e.g. Scott Aaronson here

www.scottaaronson.com/democritus/l...

08.03.2026 15:52 👍 1 🔁 0 💬 0 📌 0

Anything can be used as a thinking device. The problem is when it is misleading or has loaded assumptions. The "Chinese room" (and that analogy you made, BTW) falls, IMO, into that territory. Nobody doubts the existence of trolleys or that they will kill you if they run you over. That's the real issue.

08.03.2026 15:47 👍 0 🔁 0 💬 2 📌 0

(not sure I got it) The "Chinese room" has the added limitation of using translation, which is not the right task for making general AI arguments. You could construct a similar one for LLMs, basically "gimme infinite n-grams; they are dumb but will look smart". The problem, of course, is the "infinite" part.

08.03.2026 15:39 👍 0 🔁 0 💬 1 📌 2
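(To make the "infinite" objection above concrete, here is a quick back-of-the-envelope sketch. The vocabulary size and the atom-count scale are my own illustrative assumptions, not figures from the post.)

```python
# Illustrative arithmetic: how fast an exhaustive n-gram lookup table
# grows with context length n. VOCAB is an assumed subword vocabulary
# size in the ballpark of modern LLM tokenizers.
VOCAB = 50_000

def ngram_table_entries(n: int, vocab: int = VOCAB) -> int:
    """Entries needed to store the answer for every possible
    n-token context: vocab ** n."""
    return vocab ** n

# Very rough order-of-magnitude scale reference for "the entire Moon".
ATOMS_IN_MOON = 10 ** 49

for n in range(1, 6):
    entries = ngram_table_entries(n)
    print(f"{n}-gram table: ~10^{len(str(entries)) - 1} entries")

# Already at n = 11 the table has more entries than that scale
# reference, which is why "infinite n-grams" does the real work
# in the argument.
print(ngram_table_entries(11) > ATOMS_IN_MOON)  # True
```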

Same thing. You cannot invent a device that has not been proven to exist and use it as an argument

08.03.2026 15:23 👍 0 🔁 0 💬 1 📌 0

This is basically my beef with any argument following from the Chinese room or p-zombies

08.03.2026 14:42 👍 0 🔁 0 💬 0 📌 0

Regarding consciousness: I think we need a definition first. What I find plausible is that, for any definition that assigns a number representing "consciousness", most likely neither dogs nor Claude would score a zero, and that fact would also not be trivial (i.e. a toaster would be a hard zero)

08.03.2026 14:39 👍 0 🔁 0 💬 1 📌 0

👆 You could also put it as: the bits of prediction error of a system are related to the bits it takes to describe that system. You cannot just invent a device that breaks that relation and then continue with an argument.

08.03.2026 14:36 👍 0 🔁 0 💬 1 📌 1
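(The prediction/description link in the post above can be shown with a toy model; the string and the two character models below are my own illustrative choices, not from the post. Under arithmetic coding, a string costs roughly the sum of -log2 p(symbol | context) bits, so fewer bits of prediction error means a shorter description of the same data.)

```python
# Toy demonstration: a better next-symbol predictor yields a shorter
# code length for the same string.
import math
from collections import Counter, defaultdict

def bits_order0(text: str) -> float:
    """Bits to encode text with a context-free symbol model
    (empirical unigram frequencies)."""
    counts = Counter(text)
    total = len(text)
    return sum(-math.log2(counts[c] / total) for c in text)

def bits_order1(text: str) -> float:
    """Bits to encode text with a previous-character (bigram) model,
    add-one smoothed over the observed alphabet."""
    alphabet = set(text)
    follow = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        follow[a][b] += 1
    bits = -math.log2(1 / len(alphabet))  # first symbol: uniform
    for a, b in zip(text, text[1:]):
        ctx = follow[a]
        p = (ctx[b] + 1) / (sum(ctx.values()) + len(alphabet))
        bits += -math.log2(p)
    return bits

text = "ab" * 16
# The bigram model predicts each next character almost perfectly, so
# its description of the same 32 characters is far shorter (~4 bits
# vs. 32 bits for the context-free model).
print(bits_order0(text), bits_order1(text))
```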

Take the Chinese room: an imagined device that from the outside acts as if it were intelligent, but you can see the internals and it's a dumb lookup table. It's easy to "feel" this is a valid argument ... but it's made up! Maybe every device that acts intelligent has far more complex internals 👇

08.03.2026 14:34 👍 0 🔁 0 💬 2 📌 0

Well, you are a large bunch; statistically speaking, someone should jump at my AI posts and say something like "Markov models can't have a soul!", but it does not happen.

08.03.2026 14:06 👍 0 🔁 0 💬 0 📌 0

So what should kids learn now? Planning, they should learn planning. Also horizontal learning, a bit of everything.

08.03.2026 14:02 👍 2 🔁 0 💬 0 📌 0

If it is not clear: under those definitions dogs would also be conscious, so the whole enslaving argument gets tricky. Not with cats; that one is clear.

08.03.2026 05:09 👍 0 🔁 0 💬 0 📌 0

(classic) Philosophers invented logic, right? Anyway, fair, but then this cannot be used as an excuse to pass off pseudo-arguments as real ones. They are just pulling out of thin air something that looks like a duck and quacks like a duck but is not a duck, and then drawing conclusions from there.

08.03.2026 05:05 👍 1 🔁 0 💬 1 📌 0

Ok, I count this as mildly hurtful, so I am getting some pushback at least. Thanks for cheering me up!

08.03.2026 04:27 👍 1 🔁 0 💬 1 📌 0

Are dogs conscious? IDK, I belong to the third set implicitly neglected here, the one that thinks we need to define consciousness first. Surely there are some definitions under which Claude can be considered "a little bit conscious", but would people agree on them? Likely not

08.03.2026 03:53 👍 2 🔁 0 💬 0 📌 1

On the one hand, I see people posting about AI and getting a tsunami of idiotic hateful comments, and I feel lucky I never have that type of interaction. On the other hand ... it's weird? I have a lot of followers; are they all awesome, or are they all dead? Maybe a mix

08.03.2026 03:44 👍 2 🔁 0 💬 3 📌 0