
David Manheim

@davidmanheim.alter.org.il

Humanity's future can be amazing - let's make sure it is. Visiting lecturer at the Technion, founder https://alter.org.il, Superforecaster, Pardee RAND graduate.

3,903 Followers · 347 Following · 1,201 Posts · Joined 26.04.2023

Latest posts by David Manheim @davidmanheim.alter.org.il

Yep, Campbell should get priority for the idea: rss.onlinelibrary.wiley.com/doi/full/10....

10.03.2026 15:49 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

What? I'm not your brother-in-law!

10.03.2026 15:48 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image

Who knew that would happen?

Oh, right...

10.03.2026 15:47 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Yes! And when you're designing or choosing the metrics, you might also want to read this: www.cell.com/patterns/ful...

10.03.2026 15:46 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image

Who would have guessed?

10.03.2026 15:45 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

OU-Delicious!

10.03.2026 15:41 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

What time frame?

I certainly think it happens when AI agents get to control an increasing fraction of all purchases. When is that? I don't know, but I worry it's a year or two, and I'm fairly confident it's not decades, unless AI progress halts.

10.03.2026 13:15 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image

Markets are a spending-weighted aggregation mechanism. Buyers create demand, and production is dictated by the resulting market prices.

That is to say, in a capitalist system, AI agents graduating to economic actors is gradual human disempowerment in different words.
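The point above can be sketched in a few lines. This is my toy illustration, not anything from the post: production follows spending-weighted demand, so as the spend share controlled by AI agents grows, market output tracks agent preferences regardless of what humans individually want. All names and numbers here are made up for the example.

```python
# Toy model: market output as a spending-weighted aggregation of buyer preferences.
def market_output(spend_shares, preferences):
    """Aggregate demand for each good, weighted by each buyer's share of total spending.

    spend_shares: {buyer: fraction of total spending}
    preferences:  {buyer: {good: fraction of that buyer's budget spent on it}}
    """
    demand = {}
    for buyer, share in spend_shares.items():
        for good, frac in preferences[buyer].items():
            demand[good] = demand.get(good, 0.0) + share * frac
    return demand

# Hypothetical preferences: humans mostly want food, agents mostly want compute.
prefs = {
    "humans": {"food": 0.7, "compute": 0.3},
    "agents": {"food": 0.1, "compute": 0.9},
}

# Humans control 90% of spending: output tracks human preferences.
print(market_output({"humans": 0.9, "agents": 0.1}, prefs))  # food 0.64, compute 0.36
# Agents control 90% of spending: output tracks agent preferences instead.
print(market_output({"humans": 0.1, "agents": 0.9}, prefs))  # food 0.16, compute 0.84
```

Nothing about the human buyers' preferences changed between the two runs; only who holds the spending power did.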

10.03.2026 09:19 πŸ‘ 4 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Well, since it's actually now being done by AI, there goes that theory of how AI can't do what humans are doing... zitongyang.github.io/slides/Ziton...

08.03.2026 20:17 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

So as soon as you put it on a machine and it sets up a cron job to tell it to continue, which will keep it going, it counts as really thinking? (Because yes, that has happened.)
And, relatedly, when you go to sleep, you stop being human in the relevant sense?

08.03.2026 20:15 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Preview
Blogpost - Hunting Undead Stochastic Parrots: We argue the "stochastic parrot" critique of LLMs is philosophically undeadβ€”refuted under some interpretations, still valid under others, and persistently confused be...

Possibly of interest / related is this draft blogpost about my (in-progress) new paper on the various Stochastic Parrots arguments: docs.google.com/document/d/1...

08.03.2026 20:12 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I’m at a philosophy conference and it is WILD. You philosophers are fascinating people.

07.03.2026 15:08 πŸ‘ 76 πŸ” 11 πŸ’¬ 5 πŸ“Œ 7
Preview
Hunting (Undead) Stochastic Parrots: In short, the argument is that people keep on repeating a claim that’s either false, or deeply confused. And I promise I didn’t write the paper only so I could use ...

"This type of software"?

I'm concerned that people keep mistakenly repeating the same poorly conceptualized stochastic parrot argument, which is mostly simply invalid as a claim.
docs.google.com/presentation...

07.03.2026 17:53 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Good thing it's only killing them off figuratively... for now.
bsky.app/profile/robb...

07.03.2026 17:42 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Preview
A call to join a collective effort on AI evaluation AI evaluations increasingly shape deployment, governance, and trust, but expectations for how they are conducted and reported remain fragmented. We introduce a cross-sector Delphi process to develop community-endorsed guidance for AI evaluation practice and invite researchers, practitioners, ethics organizations, and institutions to participate.

Online Now: A call to join a collective effort on AI evaluation #datascience

06.03.2026 16:35 πŸ‘ 0 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

I hate being late for meetings, but "the air raid siren went off so I had to go down to the bomb shelter" is probably the best reason I'll ever have for doing so.

05.03.2026 14:42 πŸ‘ 3 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Any time you see someone argue that their opponents are wrong, instead of arguing that they are right, the conclusion should be a negative update about *both sides*.

(And before you try to interpret this as a subtweet, it is about at least three different things I've been annoyed about recently.)

05.03.2026 09:46 πŸ‘ 6 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

I hereby posit Manheim's Law of Positive-Sum Badness:

In polarized disputes, evidence that one side is stupid, malicious, or evil increases the probability that the opposing side is too.
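The law can be made precise with a toy Bayesian model (my illustration, with assumed numbers, not anything from the post): suppose both sides' badness shares a latent cause, how toxic the dispute itself is. Conditional on toxicity the sides behave independently, but marginally they are correlated, so learning side A is bad raises the probability that side B is bad too.

```python
# Assumed parameters for the toy model (not from the post).
p_toxic = 0.3            # prior that the dispute itself is toxic
p_bad_given_toxic = 0.8  # P(a given side behaves badly | toxic dispute)
p_bad_given_calm = 0.2   # P(a given side behaves badly | calm dispute)

# Prior probability that side B is bad, before any evidence about side A.
p_b_bad = p_toxic * p_bad_given_toxic + (1 - p_toxic) * p_bad_given_calm

# Observe that side A is bad; update the posterior on the dispute being toxic.
# (Side A has the same marginal badness probability as side B.)
p_toxic_given_a = (p_toxic * p_bad_given_toxic) / p_b_bad

# Posterior probability that side B is bad, given side A is bad.
p_b_bad_given_a = (p_toxic_given_a * p_bad_given_toxic
                   + (1 - p_toxic_given_a) * p_bad_given_calm)

print(round(p_b_bad, 3), round(p_b_bad_given_a, 3))  # 0.38 vs 0.579
```

Under these (made-up) numbers, evidence of one side's badness moves the other side's badness from 38% to about 58%: positive-sum badness.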

05.03.2026 09:44 πŸ‘ 12 πŸ” 2 πŸ’¬ 3 πŸ“Œ 0

"What can Goodhart's Law teach us about AI alignment?"

Good question! Our 2018 paper was explicitly about the problem: "in Artificial Intelligence alignment... the increased optimization power offered by artificial intelligence makes it especially critical for that field."
arxiv.org/abs/1803.04585

04.03.2026 20:00 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

That's just Goodhart's law: because most metrics start out as shit, they're just whatever is easiest to measure that seems related.
I even have a paper about this!
www.cell.com/patterns/ful...

04.03.2026 19:57 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

"As Iran’s supply of long range weapons dwindles"

I've heard from several sources that's a long way off, and they have lots of stockpiles, albeit fewer and fewer launchers - but given my location I certainly hope you're right.

04.03.2026 19:53 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Sam Altman faced painful internal backlash from the new Pentagon deal, which is why he told staff that he's... also planning new deals with other NATO countries to use in classified settings.
www.wsj.com/tech/ai/open...

04.03.2026 19:31 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image Post image

OpenAI's plans after signing the contract which they say won't allow domestic surveillance... are definitely gonna eventually involve domestic surveillance.

04.03.2026 19:12 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I think it's strategic brilliance from the perspective of the current leader; he doesn't care about the country, he wants to be able to stay in power, and this will help him.
(I agree that it's horrific in the long term for the country, but we all know voters are myopic.)

02.03.2026 12:37 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image

It's like the Fifa Peace Prize doesn't mean anything at all.

02.03.2026 12:33 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Anyways, me and my family are basically safe, just running up and down the stairs to the bomb shelter a couple times a day.

And at least I know that the US is using Claude to plan their strategy for another 30 days or something, which is nice and dystopian.

02.03.2026 06:45 πŸ‘ 8 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

This would be the worst business move, in isolation.

Federal contracts are net time sinks for tech companies; this contract won't pay enough to be worth the pain. The only way they could benefit is if acceding to pressure were somehow used to punish competitors, which... right, that's the point.

02.03.2026 06:33 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Lastly, there's an argument that biotech in general can be dangerous because of pathogen applications, and advances in AIxBio can be worrying because of this.

True - but not doing safe things because very different ideas in another area of bio are scary is a bad objection!
(fin)

14.12.2025 08:23 πŸ‘ 3 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

(continued) to think a given new tech is on-net dangerous. Self-replication or an exponential trajectory could qualify, but this isn't that.

It seems far more likely that this enables cool and largely safe nanotech applications.

14.12.2025 08:23 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

And third, general advances in biodesign are terrifying!

And yes, all new technology can be misused, and that's certainly true for bio. But our (strong) prior should be that very little of the usage of new technology is for dangerous misuse, which means that we need some pretty strong reason...

14.12.2025 08:23 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0