We argue that making sure future people can coordinate to fund moral public goods could be a big deal for how well the long-term future goes: www.forethought.org/research/mo...
"Moral public goods" are things many people value for ethical reasons, but where no individual's contribution is worth it unless others contribute too, creating large potential gains from coordination.
New podcast episode: chatting with economist Sam Hammond about what happens to public institutions when AI collapses transaction costs.
www.youtube.com/watch?v=grG...
Today we're publishing another set of design sketches, illustrating what some of these tools might look like more concretely.
You can read the full article here: www.forethought.org/research/de...
Tools for strategic awareness could deepen people's understanding of what's actually going on around them, making it easier for them to make good decisions in their own interests. This would have big implications both for individuals and for collective decision-making.
And the UN charter piece is here: www.forethought.org/research/th...
The overview of international organisations is here: www.forethought.org/research/an...
We've recently published two pieces of background research that informed our thinking on an international AGI project:
โข An overview of some international organisations, with their voting structures
โข The UN Charter: a case study in international governance
There's an overview of the whole series here: www.forethought.org/research/de...
You can read the full post here: www.forethought.org/research/de...
The first set of design sketches focuses on collective epistemics: tools that make it easy to know what's trustworthy and reward honesty.
Technologies powered by near-term AI systems could transform our ability to reason and coordinate, significantly improving our chances of safely navigating the transition to advanced AI.
Last week, we launched a series of design sketches for specific technologies that we think could help.
You can read the full article here: www.forethought.org/research/de...
We think tools like this will be possible soon, and could meaningfully help humanity navigate the transition to advanced AI. Today we're publishing a set of design sketches describing some of these tools in more detail.
Imagine having a technological analogue to an "angel on the shoulder": a customised tool or tools that help you make better decisions in real time, decisions that you more deeply endorse after the fact.
Should we focus on worlds where AGI comes in the next few years? People often argue yes, because short timelines have higher leverage. We're not so sure. New post arguing that for many people, 2035+ timelines might be highest leverage: www.forethought.org/research/sh...
Wondering whether to apply to our open roles? Research Fellow Mia Taylor joined 5 weeks ago. We just released a new episode of ForeCast, hearing from her about why she joined, what it's like to work here, and who the work is likely (and unlikely) to suit.
pnc.st/s/forecast/...
We're also offering a referral bounty of up to £10,000 (submit here: forms.gle/xbsC6K9QBAw...).
See the full ad and apply here: forethought.org/careers/res..., and see our careers page for more about what it's like to work at Forethought: forethought.org/careers/.
We're looking for strong thinkers:
- Senior fellows to lead their own agendas
- Fellows who can work with others and develop their worldviews
We offer freedom to focus on what you think is most important, a great research community, & support turning your ideas into action.
We're hiring!
Society isn't prepared for a world with superhuman AI. If you want to help, consider applying to one of our research roles:
forethought.org/careers/res...
Not sure if you're a good fit? See more in the reply (or just apply: it doesn't take long)
What might happen to society and politics after widespread automation? What are the best ideas for good post-AGI futures, if any?
David Duvenaud joins the podcast:
pnc.st/s/forecast/...
How could humans lose control over the future, even if AIs don't coordinate to seek power? What can we do about that?
Raymond Douglas joins the podcast to discuss "Gradual Disempowerment"
Listen: pnc.st/s/forecast/...
Should AI agents obey human laws?
Cullen O'Keefe (Institute for Law & AI) joins the podcast to discuss "law-following AI".
Listen: pnc.st/s/forecast/...
The "Better Futures" series compares the value of working on "survival" and "flourishing".
In "The Basic Case for Better Futures", Will MacAskill and Philip Trammell describe a more formal way to model the future in those terms.
You can find this and future article narrations wherever you listen to podcasts: www.forethought.org/subscribe#p...
We're starting to post narrations of Forethought articles on our podcast feed, for people who'd prefer to listen to them.
First up is "AI-Enabled Coups: How a Small Group Could Use AI to Seize Power".
In the fifth essay in the "Better Futures" series, Will MacAskill asks what, concretely, we could do to improve the value of the future (conditional on survival).
Read it here: www.forethought.org/research/ho...