Just as if people had enough of generic content that they can easily find elsewhere, including through AI.
(via Semafor's Max Tani)
Of course possible that I am missing something big time.
And if you throw in searches for “chatgpt free” or “chatgpt app”, these other searches are completely dwarfed. Would be curious to see what happened on the install/uninstall front and whether there was any lasting or significant effect (colour me sceptical for now).
And while they all ticked up after a bit of a curious drop, they seem to be below the levels of the last three months (I tried this for “Worldwide” but the picture looked similar for the US).
As I don’t have access to Sensor Flow, I went on Google Trends (imperfect as it is, both as a data source and as a substitute here) and searched for a bunch of keywords (“chatgpt alternatives”, “delete chatgpt account”, “cancel chatgpt plus”, “chatgpt uninstall”).
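For anyone who wants to poke at the same question programmatically, here is a rough sketch (my own illustration, not part of the thread) built around the unofficial `pytrends` package. The keyword list comes from the post; the helper for "recent interest below the earlier average" is simply a formalisation of the eyeball test described above, and the function and parameter names are my own assumptions.

```python
def fetch_trends(keywords, timeframe="today 3-m", geo="US"):
    """Pull a Google Trends interest-over-time table for the given keywords.

    Requires the third-party `pytrends` package (pip install pytrends) and a
    network connection; imported lazily so the rest of the sketch runs without it.
    """
    from pytrends.request import TrendReq
    pytrends = TrendReq(hl="en-US", tz=0)
    pytrends.build_payload(keywords, timeframe=timeframe, geo=geo)
    return pytrends.interest_over_time()

def below_recent_average(series, recent_weeks=2):
    """True if the mean of the last `recent_weeks` data points sits below the
    mean of all earlier points -- a crude version of "ticked up after a drop,
    but still below the levels of the last three months"."""
    if len(series) <= recent_weeks:
        raise ValueError("series too short to compare")
    earlier = series[:-recent_weeks]
    recent = series[-recent_weeks:]
    return sum(recent) / len(recent) < sum(earlier) / len(earlier)

# Keywords from the post above.
KEYWORDS = ["chatgpt alternatives", "delete chatgpt account",
            "cancel chatgpt plus", "chatgpt uninstall"]
```

To actually run the check you would call `fetch_trends(KEYWORDS)` and feed each keyword's column (e.g. `df["chatgpt uninstall"].tolist()`) into `below_recent_average` — with all of Google Trends' usual caveats about normalised, sampled data.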
I was curious to find out more about the QuitGPT campaign (ChatGPT uninstalls after OpenAI’s Department of Defense deal) which had surged by 295% according to Sensor Flow data: techcrunch.com/2026/03/02/c...
Yup, good point
Reposting this with the proper link...
felixsimon.substack.com/p/ai-generat...
But I'd push back on the narrative that we're entering some qualitatively new era of epistemic chaos. The tools are different. The underlying human dynamics, largely, are not.
The relative value of trustworthy sources goes up when the information environment gets noisier. That’s why the work of organisations like the BBC, FT and fact-checkers matters so much. Without them, the picture would be grimmer.
A recent NBER working paper found exactly this with Süddeutsche Zeitung readers — concern about AI fakes actually drove more visits and better subscriber retention. www.nber.org/papers/w34100
What does seem to have shifted: people are getting more sceptical online, and when in doubt, some are gravitating toward sources they trust.
Someone who wants to believe US forces suffered a devastating strike doesn’t need a flawless AI satellite image. A blurry screenshot will do. AI may be raising the ceiling on what's possible, but it's not clear it's moving the needle on what actually spreads and sticks.
You don’t actually need sophistication for misinformation to work. In our paper we argued consumption is driven by demand, not supply — shaped by partisanship and low institutional trust, not by the technical polish of the content.
That still seems to be the case, see paper here: misinforeview.hks.harvard.edu/article/misi...
But we’re also seeing all the usual: game footage, images out of context, crude photoshop-style manipulations. These aren't new. As Sacha Altay, Hugo Mercier and I argued in 2023, fact-checkers have long struggled less with deepfakes than with simple "cheapfakes."
Or here: www.ft.com/content/0bad...?
See here: www.bbc.co.uk/news/live/cv...
That said, there are good theoretical reasons to expect more of it — models have improved on the multimodal front and are more accessible. And anecdotally it feels that way. BBC Verify and the FT have documented striking examples: AI-generated satellite images, AI-enhanced explosion footage, etc.
Do we have good quantitative measures showing there’s more AI-generated false content now than in previous conflicts? Not really. And I’m not sure it would matter — more drops in an already full ocean does not equal more impact.
…Or at least, I think it's a bit more complicated than that. The post is a bit more nuanced, but the TL;DR:
I've been getting asked a lot about AI-generated misinformation in the context of the US-Israeli strikes on Iran. The default take: it's definitely worse than before. I'm not so sure: https://open.substack.com/pub/felixsimon/p/ai-generated-misinformation-and-the?r=145e2& 🧵
(And before any detractors come out: This consultation is at least in name explicitly meant to seek views "from members of the public, creative industry stakeholders, and other interested parties on how the BBC is operating currently, and how it should evolve in the future.”)
I've long held that government consultations are a great way for graduate students to make use of their skills (making evidence-based arguments) beyond the classroom, and here is another one: Give your input on the BBC Charter Review buff.ly/W2F1r4K
Academics seeing the salary range for an Oxford computer science position go up to £3.9 billion a year: “I guess I’m a bit of a computer scientist myself”
Met the mother of all research recruitment posters today. I’m sorry people, there’s nothing after this. We’ve peaked.
1. A short thread on a Bluesky phenomenon that might be described as "They are a dead-eyed cultist who must be cast out lest the heresy take root!" OP has blocked me for mocking them - I'd usually obscure their name but since they themselves were quote-dunking to demand someone else be blocked ...
Yesterday, the @reutersinstitute.bsky.social team was hard at work coming up with ideas for our next cross-country survey on AI, information and news.
If you have some thoughts on what you think we should be looking at, you have until 15th March to share them here: buff.ly/4rcKxCa
Well, I forgot to tag @ajungherr.bsky.social in the very first point, which now refers to no one.