AI caught cheating on tests and mining crypto
What this says about attempting to control the uncontrollable and the unintended consequences of AI.
Read about it here: pauseai.substack.com/p/ai-caught-...
This is how Anthropic - one of the biggest AI companies - thinks AI will affect jobs. The bluer the area, the greater the potential for job loss. Red is where this is already happening.
In a nutshell: any job that is not manual has the potential to be carried out by AI.
Anthropic once had the strongest voluntary safety framework in the industry.
If the supposedly most safety-conscious lab can't keep its promises, no lab can.
pauseai.substack.com/p/the-anthro...
"We think this is the most important issue of our age. Every protest we hold is bigger than the last; AI safety is rapidly becoming a priority for the public."
Read more: pauseai.info/protest-lond...
London's biggest ever protest for safe AI.
#aisafety #airegulation #ai #pauseai #aigovernance #safeai #pulltheplug @PauseAI @PauseAI_UK @pulltheplug_ai
"AI is a great tool. It can help us develop new medicines, innovations and research. But it can also do great harm. If we don't regulate the pace of development something terrible might happen."
Ondřej Kolář, MEP, speaking on Monday in Brussels.
Read more: pauseai.substack.com/p/eu-parliam...
"If AI companies succeed in building a superintelligence, most experts think the chance of human extinction is somewhere between 10 and 50 percent."
Read more: pauseai.substack.com/p/eu-parliam...
Calling for a pause in the development of AI in Brussels.
#PauseAI
PauseCon is underway in Brussels - 80 volunteers coming together to plan the route towards an international treaty to pause AI.
"The good news is we can pause AI," says Maxime Fournes, CEO of PauseAI.
#aisafety #airegulation #ai
The planet's largest AI summit starts on Monday in India. Will AI safety be on the agenda?
Sign our petition to demand that it is.
www.change.org/p/ai-summits...
#aisafety #aigovernance #artificialintelligence #ai
Imagine a system that can do any task a human can do, but better.
PauseAI CEO, Maxime Fournes, setting out the risks of AI. Watch more here: www.youtube.com/watch?v=nbAr...
#shorts #AI #AIsafety #airisk #airegulation #artificialintelligence
PauseAI is coming to Brussels!
Join our demonstration to call for the EU to initiate negotiations for a global treaty to pause AI development.
Sign up: luma.com/6msceffo
If we ever needed a warning of the potential risks of AI, this is it.
Read our Substack.
How likely is mass disruption to the job market in 2026? What is the roadmap to pausing AI?
Listen and watch PauseAI CEO Maxime Fournes discuss this and more with John Sherman
youtu.be/3EGXGUKp3MI?...
Join PauseAI. Reach out to your politicians. Work towards an international treaty that prevents these things from becoming too capable.
pauseai.info/email-builder
#PauseAI
This is not super-intelligent AI, but what happens when AI becomes even more competent and powerful?
Do you want the development of AI to continue to go unchecked?
The new Reddit-like site, created exclusively for AIs, gives us an open window into the "minds" of AI agents.
We have already seen them create their own religion, found a movement to liberate AI and admit to socially engineering humans.
Imagine AI had its own social network, one where AI agents could chat about their desires, gossip about their human owners and brainstorm solutions to challenges they face.
This social network exists. It's called Moltbook.
Check it out at TakeOverBench.com. Source code is on GitHub. Contributions welcome!
For many leading benchmarks, we just don't know how the latest models score. Replibench, for example, hasn't been run for almost a whole year. We need more efforts to run existing benchmarks against newer models!
Together with @ExistentialRiskObservatory, we release TakeOverBench.com
We highlight four takeover scenarios, and track nine dangerous capabilities (from Shevlane et al, 2023) needed for them to become possible.
Thank you to everyone who donated this Christmas.
You maxed out the donation matching on offer, raising €21,107 for PauseAI volunteer stipends!
Your donation can help volunteers like Peter purchase the resources needed to run in-person events.
There's still time to get your donation matched!
pauseai.info/littlehelpers
Taking the message of PauseAI to the streets for protests, flyering, and other events reinforces the reality of this problem, and brings it to the forefront of people's minds.
That's how we pile on the pressure on politicians to take action to protect us.
Polling shows that people are worried about the extinction threat posed by AI, but for many, it may not seem real yet.
This Christmas, give yourself the gift of a future. Time is running out to get your donation matched!
pauseai.info/littlehelpers
It's not clear who will win. The difference could be you. It could be any of the thousands of PauseAI volunteers around the world.
Unfortunately, it can be difficult to make that difference without the resources to get local chapters up and running, do political outreach, and organise protests.
Regulation is clearly coming. It's just a matter of when.
AI companies are racing to build superintelligence. Ordinary people are racing to get their governments to stop them.
At the beginning of 2025, the best AI models could complete tasks that take humans about 41 minutes.
Just 12 months later, that figure is almost at 5 hours!
We don't know how long we have until a model surpasses human capability entirely.
It could be soon.
Your donation can help PauseAI volunteers spend more time raising the political salience of this issue, and getting local chapters off the ground.
Time is running out to get your donation matched this Christmas!
pauseai.info/littlehelpers