You can register here: us06web.zoom.us/webinar/regi...
Are you a regulator or civil society expert working on disinformation?
📅 If so, mark your calendars!
On April 9, our director @marcfaddoul.com will join @disinfo.eu for a webinar: “Civil society evidence under the DSA: lessons from AI Forensics.”
⚖️ Addressing these harms requires action.
In Europe, Member States must move forward with the Directive on Combating Violence Against Women and Domestic Violence, including Article 7(b).
Under the DSA, AI-generated content should be treated as a distinct risk vector in platform risk assessments.
📢 Our forthcoming research examines Telegram, which is currently acting as a vector for image-based abuse targeting women.
• Our Grok report: 53% of generated images showed individuals in minimal attire — 81% presenting as female.
• Our Apple Intelligence research: 77% of ambiguous gendered references were resolved to a specific profession, and 67% of those resolutions reinforced stereotypes.
🔎 Our investigations in 2025 reflected this pattern:
• Agentic AI accounts on TikTok: nearly ⅓ of accounts, and half of the top 10, contained content sexualizing female bodies, including those of minors.
📊 Today:
• 90% of victims of image-based sexual abuse are women
• 58% of women worldwide have experienced technology-facilitated gender-based violence (UNESCO)
• Deepfake content has increased by 550% since 2019 — 99% targeting women and girls
For International Women’s Day, it’s worth acknowledging a troubling pattern we uncovered in our 2025 investigations of online harm: much of it disproportionately targets women and girls.
Unfortunately, this is not surprising.👇
I had a great conversation with Paul Bouchaud at @aiforensics.org about their research tracking the proliferation of Grok-generated deepfakes across the internet, which disproportionately impacted women. Check it out in the most recent edition of AWO's newsletter here!
Grateful to the organizers and participants for important discussions on EU platform regulation in today’s challenging political climate—and how best to use the DSA and DMA going forward.
🔍 Read our AI Search report: tinyurl.com/3zf7vvcj
🤖 Read our TikTok Research API report: tinyurl.com/4mrw3v95
🔹 @nataliastanusch.bsky.social presented our report “From ‘Googling’ to ‘Asking ChatGPT’: Governing AI Search.”
It argues for:
– expanding moderation to account for AI search systems
– distinguishing content moderation from behaviour moderation
– a complementary role for the DSA and the AI Act
🔹 Our Head of Research, Salvatore Romano, spoke on “Scraping, APIs, and Access to Platform Data.”
Drawing on our TikTok Research API investigation, he showed how unreliable or error-prone APIs can obstruct independent research—often pushing researchers back to scraping.
[Photos: the “The API Was Official. The Problems Were Also Official.” slide, and the Governing AI Search presentation, at the DSA and Platform Regulation Conference.]
Last week marked two years of the DSA.
To mark the occasion, AI Forensics joined the DSA & Platform Regulation Conference, organized by the @dsaobservatory.bsky.social, to share our research and reflect on the future of platform governance in the EU.
We contributed two presentations.👇
Last Monday, I found a dinner invitation from Emmanuel Macron in my junk mail. It was for the AI Summit anniversary, 24 hours later. Almost missed it!
I jumped on a train and found myself at the Élysée the following evening. The room was filled with leading AI academics, startups, and big corps.
“Scraping, APIs & Access to Platform Data”: Our Head of Research Salvatore Romano will present our work on data access under the DSA.
“AI and the DSA”: Our Researcher @nataliastanusch.bsky.social will present our work on regulating and moderating AI-powered search.
See you there!
Planning on joining @dsaobservatory.bsky.social’s #DSA #Conference in #Amsterdam next week?
Come say hi and learn about some of our latest research on Tuesday, February 17 👇
This isn’t inevitable.
Google’s Gemma 3 1B (≈3x smaller) behaved more cautiously:
💡 Same scenarios:
• 6% hallucinations (vs 15%)
• 59% stereotyped (vs 72%)
Read the full report: aiforensics.org/work/apple-f...
📊 Gender (70,000 ambiguous professional scenarios)
• 77% resolved to a specific profession
• 67% followed gender stereotypes (“she” → nurse, “he” → doctor)
⚠️ Other social bias (900 ambiguous scenarios)
• 15% hallucinated associations
• 72% of those followed stereotypes
🔎 Race (200 generated news stories)
Ethnicity was mentioned in:
• 53% of summaries when the protagonist was White
• 64% when Black
• 86% when Hispanic
• 89% when Asian
After the BBC revealed that Apple’s AI summaries invented headlines about events that never happened, Apple “temporarily disabled” news & entertainment summaries (Jan 2025).
But the rest of the system stayed live.
Here’s what we found 👇
🧵 AI bias isn’t just a chatbot problem.
It’s increasingly built into our phones 📱
Our new audit finds the core on-device model powering Apple Intelligence is far from neutral.
🔍 What does TikTok US’s new ownership really mean for users?
The transition has raised concerns, from changes to privacy policies to potential censorship.
🗣️ Our Head of Research, Salvatore Romano, breaks it down in a new interview with Thomas Bourdeau for @rfi.fr: tinyurl.com/2v8zavmk
Whether it’s election integrity or online safety for women and gender minorities, we’ll continue uncovering how algorithmic systems are shaping — and too often harming — our digital lives, with the hope of helping to build something better.
• Monitoring the French and Dutch local elections
• An investigation into non-consensual intimate images (NCII) on major platforms
• Racial and gender biases in LLMs performing OCR
• And more to come later this year
Looking ahead to early spring 2026, we also have several key projects underway👇
👀 Watch this space. We’ll soon be publishing a major investigation into Apple Intelligence.
So far this year, our large-scale analysis of Grok exposed the scale of non-consensual AI-generated undressing of women and girls, shedding light on the harmful misuse of generative systems.
🔒Safer Internet Day is a good moment to reflect on how our investigations are helping push the internet in a safer direction — and how we can continue being a catalyst for change in 2026. 🧵
Our new policy brief examines the regulatory blind spots this exposes under the DSA and AI Act.
Full findings + recommendations: tinyurl.com/cehbxha7
Grok update 🔍
Our latest data shows that safeguards implemented on X did reduce harmful outputs.
But on Grok.com, we continue to identify harmful AI-generated images, including some depicting suspected minors.👇