
Malo Bourgon

@malo.online

CEO at Machine Intelligence Research Institute (MIRI, @intelligence.org)

411
Followers
69
Following
57
Posts
20.08.2023
Joined

Latest posts by Malo Bourgon @malo.online

Framing this stuff as sci-fi is an easy rhetorical move but silly IMO. Turing and I.J. Good at the dawn of the field could see the common-sense concern: if you succeed at building something smarter than humans, it might be hard to control. It's not just MIRI folks; see the CAIS extinction statement.

05.03.2026 04:53 👍 3 🔁 0 💬 1 📌 0

I mean, hundreds of folks who don't fall into either of those camps signed the CAIS extinction statement. It's not just Daniel and folks at MIRI etc. And from experience, a lot of policymakers on both sides take this seriously; they're just starting to say so publicly.

05.03.2026 04:40 👍 3 🔁 0 💬 1 📌 0

This tracks. In my experience, for a lot of folks the reaction is usually pretty different once they actually sit down and engage with the arguments/evidence directly. My sense is that's basically what happened when Bernie started to look into this stuff.

05.03.2026 04:22 👍 1 🔁 0 💬 0 📌 0

Speaking from personal experience, the list of policymakers who take this seriously, on both sides of the aisle, is longer than most people realize and growing.

Grateful to @sanders.senate.gov for being one of them.

05.03.2026 00:24 👍 6 🔁 1 💬 1 📌 0

Weโ€™re honored to have had the opportunity to host you at MIRI (@intelligence.org).

Thanks for taking the time to chat with us (and many others during your trip to the Bay Area) and raising awareness about the threat we all face from the reckless race to superintelligence.

04.03.2026 23:46 ๐Ÿ‘ 8 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0
Top AI researcher warns 'world is in peril' (YouTube video by ABC News)

I was on ABC News Live last week!

My first live interview. I think it went pretty well, especially considering I only had ~20 mins notice :p

16.02.2026 18:39 👍 5 🔁 2 💬 0 📌 0

Weโ€™re so close!

Very grateful for all our donors. Your support enables everything we do. Also grateful for the awesome gang at MIRI who worked their asses off this year. You guys crushed it!

Thanks everyone ๐Ÿ™ Happy New Year ๐ŸŽ‰

01.01.2026 04:48 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

Final Update: From ~$450k earlier today, we're now down to just over $250k left in unclaimed matching funds!

4 hours left to go, and by golly it looks like we've got a real shot at securing all the matching.

Thanks everyone! Happy New Year 🎉

01.01.2026 04:06 👍 1 🔁 2 💬 0 📌 2

Update 2: We're down to ~$450k left of unclaimed matching funds, with just over 12 hours to go!

Thanks to all those who stepped up in the last couple of days to close the gap by ~$500k. ❤️

31.12.2025 19:48 👍 1 🔁 1 💬 0 📌 1

Update: Weโ€™ve received over $250k since this was posted.

~$700k in matching funds remaining.

30.12.2025 18:41 ๐Ÿ‘ 3 ๐Ÿ” 2 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 1
MIRI's 2025 Fundraiser - Machine Intelligence Research Institute MIRI is running its first fundraiser in six years, targeting $6M. The first $1.6M raised will be matched 1:1 via an SFF grant. Fundraiser ends at midnight on Dec 31, 2025. Support our efforts to impro...

Donations to MIRI before Jan 1 are high-leverage. We've got ~$1.6M in 1:1 matching from SFF, over half of which has yet to be claimed!

This is real counterfactual matching: whatever doesn't get matched by the end of Dec 31, we don't get. 🧵

29.12.2025 22:55 👍 3 🔁 2 💬 1 📌 5

You don't have to take my word for it. LLMs are dumb in a bunch of ways, but I think this is a powerful and convincing consensus on this question.

05.12.2025 01:54 👍 0 🔁 0 💬 0 📌 0

Seems like youโ€™re confusing a punchy post title about a standard norm in Bayesian epistemology (i.e., donโ€™t give empirical claims credence 0 or 1, or you canโ€™t update) with a claim about the formal definition of probability, where 0 and 1 are of course valid probabilities.

05.12.2025 01:19 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

Of course, by definition probabilities are real numbers on [0,1], which includes the endpoints.

05.12.2025 01:00 👍 0 🔁 0 💬 1 📌 0

I understand that you believe I'm a huckster. I was hoping you might elaborate on why you think that.

05.12.2025 00:44 👍 0 🔁 0 💬 1 📌 0

Huckster? Say more. Conversations I was having in the comments seemed pretty reasonable and polite, with me just sharing context/info/perspective, and folks following up on that.

04.12.2025 23:37 👍 0 🔁 0 💬 1 📌 0

... and banned almost immediately. ¯\_(ツ)_/¯

04.12.2025 21:05 👍 0 🔁 0 💬 1 📌 0

โ€If Anyone Builds It, Everyone Diesโ€ was recently added to the New Yorker's โ€œThe Best Books of the Year So Farโ€ list!

newyorker.com/best-books-2...

31.10.2025 02:30 ๐Ÿ‘ 5 ๐Ÿ” 2 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

@hankgreen.bsky.social rarely does interviews or 30+ min long videos.

His latest video, an hour+ long interview with Nate Soares about "If Anyone Builds It, Everyone Dies," is a banger. My new favorite!

www.youtube.com/watch?v=5CKu...

30.10.2025 20:52 👍 6 🔁 1 💬 0 📌 0
Nate Soares - If Anyone Builds It, Everyone Dies Nate Soares discusses the scramble to create superhuman AI that has us on a path to extinction. But it's not too late to change course.

In the Bay Area? Come join Nate Soares, in conversation with Semafor Tech Editor Reed Albergotti, about Nate's NYT bestselling book "If Anyone Builds It, Everyone Dies."

🗓️ Tuesday Oct 28 @ 7:30pm at Manny's in SF.

Get your tickets:

24.10.2025 22:09 👍 5 🔁 2 💬 1 📌 0

IDK man, feels like you are fixated on one particular thing he said (and your interpretation of what he meant by it), as part of a longer conversation on the pod about the analogy. I'm not trying to pick a fight here, just wanted to clarify that he's not making the mistake you think he's making.

17.10.2025 02:02 👍 1 🔁 0 💬 0 📌 0

I listened to it. Also, I run the org (@intelligence.org) that he founded, so I'm quite familiar with the argument he's making. This interview didn't result in the best exposition of the analogy in question, but I can assure you he isn't making the mistake you think he is.

17.10.2025 00:56 👍 1 🔁 0 💬 1 📌 0

How is he anthropomorphizing natural selection? One can think of evolution as an optimization process, and the analogy is between that optimization process and the one used to train AI systems.

17.10.2025 00:43 👍 0 🔁 0 💬 1 📌 0

It is by no means a "nearly unanimous view" among AI experts that LLMs are a dead end. Also, the argument that future very powerful AI systems pose an extinction threat does not depend on those systems being LLM-based.

16.10.2025 20:14 👍 3 🔁 0 💬 0 📌 0

😮 Whoopi Goldberg recommends "If Anyone Builds It, Everyone Dies" on The View!

15.10.2025 22:46 👍 3 🔁 2 💬 1 📌 0

Should be a fun conversation.

I'll be there, if you're in town come say hi!

18.09.2025 15:11 👍 1 🔁 1 💬 0 📌 0

LFG!

25.09.2025 03:50 👍 2 🔁 0 💬 0 📌 0

This was a great event. Really enjoyed chatting with Joel and Ollie on the first panel.

Thanks @scientistsorg.bsky.social and @futureoflife.org for putting this event together.

23.09.2025 18:06 👍 4 🔁 1 💬 0 📌 0

(Totally above board. Sharing full-length episodes is one of the benefits of being a subscriber.)

22.09.2025 21:13 👍 1 🔁 0 💬 0 📌 0
Sam Harris | #434 - Can We Survive AI? Sam Harris speaks with Eliezer Yudkowsky and Nate Soares about their new book, If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI.

I think my favorite interview Eliezer and Nate have done so far for the book has been for the Making Sense podcast with Sam Harris.

Unfortunately the full episode is for subscribers only.

Fortunately, as a subscriber, I can share the full thing 🙂

22.09.2025 20:48 👍 9 🔁 3 💬 1 📌 0