Top AI experts say AI poses an extinction risk on par with nuclear war.
Prohibiting the development of superintelligence can prevent this risk.
We've just launched a new campaign to get this done.
The AI plateau:
The future is not set, nor are commitments made by AI companies.
We've been compiling a growing list of examples of AI companies saying one thing, and doing the opposite:
controlai.news/p/art...
UK POLITICIANS DEMAND REGULATION OF POWERFUL AI
TODAY: Politicians across the UK political spectrum back our campaign for binding rules on dangerous AI development.
This is the first time a coalition of parliamentarians has acknowledged the extinction threat posed by AI.
1/6
Did Sam Altman lie to President Trump?
What are the facts?
• Trump announced Stargate
• Elon Musk says they don't have the money
• Nadella says his $80b is for Azure
• Trump doesn't know if they have it
• Reporting suggests they may only have $52b
newsletter.tolgabilge.com/p/stargate-g...
We've just launched an open call for binding rules on dangerous AI development.
Top AI scientists, and even the CEOs of the biggest AI companies themselves, have warned that AI threatens human extinction.
The time for action is now. Sign below 👇
controlai.com/public...
Know them by their deeds, not their words.
AI companies often say one thing and do the opposite. We've been watching closely, and have been compiling a list of examples:
controlai.news/p/art...
We need a treaty to establish common redlines on AI.
AI development is advancing rapidly, and we may soon have AI systems that surpass humans in intelligence, yet we have no way to control them. Our very existence is at stake.
This could be the biggest deal in history.
🧵
Google DeepMind's Chief AGI Scientist says there's a 50% chance that AGI will be built in the next 3 years.
This was in reference to a prediction he made back in 2011. He also thought there was a 5 to 50% chance of human extinction within a year of human-level AI being built!
The New Year is upon us, and it is a time when many are making predictions about how AI will continue to develop.
We've collected some predictions for AI in 2025, by Elon Musk, Sam Altman, Dario Amodei, Gary Marcus, and Eli Lifland.
Get them in our free weekly newsletter 👇
controlai.news/p/the...
Last year, OpenAI's chief lobbyist said that OpenAI is not aiming to build superintelligence.
Her boss, Sam Altman, is now bragging about how OpenAI is rushing to create superintelligence.
Two years of AI politics – where we started, where we stand, and where we're heading:
newsletter.tolgabilge.com/p/two-years-of-ai-politics-past-present
📩 ControlAI Weekly Roundup: Time to Unplug?
1️⃣ Voters back AI policy focus on preventing extreme risks
2️⃣ Meta asks the government to block OpenAI's for-profit switch
3️⃣ Eric Schmidt warns there's a time to unplug AI
Get our free newsletter:
controlai.news/p/con...
One of the weird things about the world today is that the idea of 'AGI' is now regularly being talked about in e.g. policy contexts. But it seems very clear that most policymakers' notion of AGI and its implications is vastly underpowered compared to that of the people trying to build AGI.
I'm not like that
📩 ControlAI Weekly Roundup: Sneaky Machines
1️⃣ OpenAI launches o1, which in tests tries to avoid shutdown
2️⃣ Google DeepMind launches Gemini 2.0
3️⃣ Comments by incoming AI czar David Sacks on AGI threat resurface
Get our free newsletter here 👇
controlai.news/p/sub...
Current AI research leads to extinction by godlike AI.
Creating AGI simply requires enabling AI to perform the intellectual tasks that humans can.
Once AI can do that, we are on a path to godlike AI – systems so far beyond our reach that they pose the risk of human extinction.
🧵
📩 ControlAI Weekly Roundup: AI Accelerates Cyberattacks
1️⃣ AI helps hackers mine sensitive data
2️⃣ Google DeepMind predicts weather more accurately than the leading system
3️⃣ xAI plans massive expansion of its Memphis supercomputer
Get our free newsletter here 👇
controlai.news/p/con...
Recent polling by the AI Policy Institute – clear majorities of Americans say:
⬥ AI labs can't police themselves; more regulation is needed
⬥ They support AI Safety Institute testing of AI models, and this should be mandatory
⬥ AI safety testing is more important than US-China competition
We're starting to see people wake up to the risks: serious people who aren't talking their own book, who are oath-sworn to do the best for their countries, and who feel compelled to speak out.
📩 ControlAI Weekly Roundup: US-China Detente or AGI Suicide Race?
1️⃣ Biden and Xi agree AI shouldn't control nuclear weapons
2️⃣ A US government commission recommends a race to AGI
3️⃣ Bengio writes about advances in the ability of AI to reason
controlai.news/p/con...
psa: likes are public here