
Forethought

@forethought-org

Research nonprofit exploring how to navigate explosive AI progress. forethought.org

58
Followers
2
Following
42
Posts
07.03.2025
Joined

Latest posts by Forethought @forethought-org

Moral Public Goods and the Future of Humanity Moral public goods are widely valued but underfunded. Learn how coordination, governance, and power distribution could shape humanity's long-term future.

We argue that making sure future people can coordinate to fund moral public goods could be a big deal for how well the long-term future goes: www.forethought.org/research/mo...

24.02.2026 14:19 👍 1 🔁 0 💬 0 📌 0

"Moral public goods" are things many people value for ethical reasons, but where no individual's contribution is worth it unless others contribute too, creating large potential gains from coordination.

24.02.2026 14:19 👍 2 🔁 0 💬 1 📌 0
Can Liberal Democracy Survive AGI? — Sam Hammond | ForeCast Sam Hammond and Fin Moorhouse discuss how AGI could reshape the nation-state, drawing on Sam's "AI and Leviathan" essay series. Read a transcript of t...

New podcast episode: chatting with economist Sam Hammond about what happens to public institutions when AI collapses transaction costs.

www.youtube.com/watch?v=grG...

11.02.2026 16:12 👍 1 🔁 0 💬 0 📌 0
AI Tools for Strategic Awareness: Forecasting & OSINT How near-term AI could power forecasting, scenario planning, and OSINT tools to improve strategic awareness and decision-making.

Today we're publishing another set of design sketches, illustrating what some of these tools might look like more concretely.

You can read the full article here: www.forethought.org/research/de...

11.02.2026 15:08 👍 1 🔁 0 💬 0 📌 0

Tools for strategic awareness could deepen people's understanding of what's actually going on around them, making it easier for them to make good decisions in their own interests. This would have big implications both for individuals and for collective decision-making.

11.02.2026 15:08 👍 1 🔁 0 💬 1 📌 0
UN Charter Lessons for International AGI Governance How the UN Charter was created, and what its successes and limits suggest for future international governance of advanced AI and AGI.

And the UN charter piece is here: www.forethought.org/research/th...

10.02.2026 08:48 👍 1 🔁 0 💬 0 📌 0
International Organization Voting Rules for AGI Governance Rough research note on how international organizations vote (unanimity, majority, weighted voting, and vetoes) and what this means for AGI governance.

The overview of international organisations is here: www.forethought.org/research/an...

10.02.2026 08:48 👍 1 🔁 0 💬 1 📌 0

We've recently published two pieces of background research that informed our thinking on an international AGI project:
• An overview of some international organisations, with their voting structures
• The UN Charter: a case study in international governance

10.02.2026 08:47 👍 1 🔁 0 💬 1 📌 0
Design Sketches for a More Sensible AI Future Explore practical AI tools that improve reasoning, forecasting, coordination, and strategic awareness to help navigate the transition to advanced AI.

There's an overview of the whole series here: www.forethought.org/research/de...

09.02.2026 18:27 👍 1 🔁 0 💬 0 📌 0
AI Tools for Trust: Community Notes, Rhetoric Detection & More Five AI technologies to combat misinformation: community notes, rhetoric detection, reliability tracking, epistemic evals, and provenance tracing.

You can read the full post here: www.forethought.org/research/de...

09.02.2026 18:27 👍 1 🔁 0 💬 1 📌 0

The first set of design sketches focuses on collective epistemics: tools that make it easy to know what's trustworthy and reward honesty.

09.02.2026 18:27 👍 0 🔁 0 💬 1 📌 0

Technologies powered by near-term AI systems could transform our ability to reason and coordinate, significantly improving our chances of safely navigating the transition to advanced AI.

Last week, we launched a series of design sketches for specific technologies that we think could help.

09.02.2026 18:27 👍 1 🔁 0 💬 1 📌 0
Angels-on-the-Shoulder: 5 AI Tools for Better Decisions Five "angels-on-the-shoulder" AI designs that help people make better decisions.

You can read the full article here: www.forethought.org/research/de...

09.02.2026 10:19 👍 2 🔁 0 💬 0 📌 0

We think tools like this will be possible soon, and could meaningfully help humanity to navigate the transition to advanced AI. Today we're publishing a set of design sketches describing some of these tools in more detail.

09.02.2026 10:19 👍 2 🔁 0 💬 1 📌 0

Imagine having a technological analogue to an 'angel on the shoulder': a customised tool or tools that help you make better decisions in real time, decisions that you more deeply endorse after the fact.

09.02.2026 10:19 👍 2 🔁 1 💬 1 📌 0
Short AI Timelines Aren't Always Higher-Leverage Are 2–10 year AGI timelines really highest leverage? We compare 2027/2035/2045 scenarios and explain when medium timelines can offer higher impact.

Should we focus on worlds where AGI comes in the next few years? People often argue yes, because short timelines have higher leverage. We're not so sure. New post arguing that for many people, 2035+ timelines might be highest leverage: www.forethought.org/research/sh...

26.01.2026 12:00 👍 1 🔁 0 💬 0 📌 0
Forethought is Hiring Researchers (with Mia Taylor) This is a bonus episode to say that Forethought is hiring researchers. After an overview of the roles, we hear from Research Fellow Mia Taylor about working at Forethought. The application deadline has been extended to November 1st 2025. Apply here: fore

Wondering whether to apply to our open roles? Research Fellow Mia Taylor joined 5 weeks ago. We just released a new episode of ForeCast, hearing from her about why she joined, what it's like to work here, and who the work is likely (and unlikely) to suit.

pnc.st/s/forecast/...

14.10.2025 07:42 👍 1 🔁 0 💬 0 📌 0
Forethought Researcher Referrals We may reach out to the person based on the information you provide, but it might be good for you also to encourage the person to apply. You can see more about the role here. You might want to think about things like: Who is a great researcher who cares about these topics and might like more freedom than industry / academia can give them? Who are some of your favourite bloggers / LessWrong commenters? Who are the smartest early-career researchers you know? We will pay £10,000 if we end up hiring them as a Senior Research Fellow, or £5,000 if we end up hiring them as a Research Fellow. They must pass a 3-month probation, you must be the only person to refer them, and we must not have previously planned to reach out to them. We'll try to use reasonable judgement in cases of ambiguity, aiming to err on the side of being generous.

We're also offering a referral bounty of up to £10,000 (submit here: forms.gle/xbsC6K9QBAw...).

13.10.2025 08:14 👍 0 🔁 0 💬 0 📌 0
Careers We are a research nonprofit focused on how to navigate the transition to a world with superintelligent AI systems.

See the full ad and apply here: forethought.org/careers/res..., and see our careers page for more about what it's like to work at Forethought: forethought.org/careers/.

13.10.2025 08:14 👍 0 🔁 0 💬 1 📌 0

We're looking for strong thinkers:
- Senior fellows to lead their own agendas
- Fellows who can work with others and develop their worldviews

We offer freedom to focus on what you think is most important, a great research community, & support turning your ideas into action.

13.10.2025 08:14 👍 0 🔁 0 💬 1 📌 0

We're hiring!

Society isn't prepared for a world with superhuman AI. If you want to help, consider applying to one of our research roles:
forethought.org/careers/res...

Not sure if you're a good fit? See more in the reply (or just apply; it doesn't take long)

13.10.2025 08:14 👍 7 🔁 3 💬 1 📌 0
Politics and Power Post-Automation (with David Duvenaud) David Duvenaud is an associate professor at the University of Toronto. He recently organised the workshop on 'Post-AGI Civilizational Equilibria', and he is a co-author of 'Gradual Disempowerment'. He recently finished an extended sabbatical on the Alignm

What might happen to society and politics after widespread automation? What are the best ideas for good post-AGI futures, if any?

David Duvenaud joins the podcast:

pnc.st/s/forecast/...

25.09.2025 11:51 👍 1 🔁 0 💬 0 📌 0
Is Gradual Disempowerment Inevitable? (with Raymond Douglas) Raymond Douglas is a researcher focused on the societal effects of AI. In this episode, we discuss Gradual Disempowerment. To see all our published research, visit forethought.org/research. To subscribe to our newsletter, visit forethought.org/subscribe.

How could humans lose control over the future, even if AIs don't coordinate to seek power? What can we do about that?

Raymond Douglas joins the podcast to discuss "Gradual Disempowerment"

Listen: pnc.st/s/forecast/...

09.09.2025 11:25 👍 2 🔁 0 💬 0 📌 0
Should AI Agents Obey Human Laws? (with Cullen O'Keefe) Cullen O'Keefe is Director of Research at the Institute for Law & AI. In this episode, we discuss 'Law-Following AI: designing AI agents to obey human laws'. To see all our published research, visit forethought.org/research. To subscribe to our ne

Should AI agents obey human laws?

Cullen O'Keefe (Institute for Law & AI) joins the podcast to discuss "law-following AI".

Listen: pnc.st/s/forecast/...

28.08.2025 10:30 👍 0 🔁 0 💬 0 📌 0
The Basic Case for Better Futures: SF Model Analysis Forethought's SF model suggests that work on flourishing has greater scale than work on survival: a 100x difference in impact.

Read it here:

www.forethought.org/research/su...

28.08.2025 08:31 👍 0 🔁 0 💬 0 📌 0

The 'Better Futures' series compares the value of working on 'survival' and 'flourishing'.

In 'The Basic Case for Better Futures', Will MacAskill and Philip Trammell describe a more formal way to model the future in those terms.

28.08.2025 08:31 👍 1 🔁 0 💬 1 📌 0
Subscribe We are a research nonprofit focused on how to navigate the transition to a world with superintelligent AI systems.

You can find this and future article narrations wherever you listen to podcasts: www.forethought.org/subscribe#p...

26.08.2025 10:58 👍 0 🔁 0 💬 0 📌 0

We're starting to post narrations of Forethought articles on our podcast feed, for people who'd prefer to listen to them.

First up is 'AI-Enabled Coups: How a Small Group Could Use AI to Seize Power'.

26.08.2025 10:58 👍 0 🔁 0 💬 1 📌 0
How to Make the Future Better: Concrete Actions for Flourishing Forethought outlines concrete actions for better futures: prevent post-AGI autocracy, improve AI governance.

In the fifth essay in the 'Better Futures' series, Will MacAskill asks what, concretely, we could do to improve the value of the future (conditional on survival).

Read it here: www.forethought.org/research/ho...

26.08.2025 09:04 👍 1 🔁 0 💬 0 📌 0
Survival is Insufficient: on Better Futures, with Will MacAskill | ForeCast Will MacAskill discusses his new research series 'Better Futures'. ➡️ Read the papers at forethought.org/research/better-futures. 00:00:00 Intro 00:02:38 Surviv...

Full episode:

www.youtube.com/watch?v=UMF...

24.08.2025 13:11 👍 0 🔁 0 💬 0 📌 0