AI Forensics

@aiforensics.org

European non-profit · Algorithmic investigations and platform accountability

1,087 Followers · 32 Following · 106 Posts · Joined 04.07.2023

Latest posts by AI Forensics @aiforensics.org

Welcome! You are invited to join a webinar: Civil society evidence under the DSA: lessons from AI Forensics. After registering, you will receive a confirmation email about joining the webinar. The Commission’s decision to fine X under the DSA was not only built on regulatory investigation – it was also strongly supported by evidence produced by civil society organisations. Yet collecting, p...

You can register here: us06web.zoom.us/webinar/regi...

10.03.2026 10:23 👍 0 🔁 0 💬 0 📌 0

Are you a regulator or civil society expert working on disinformation?

📅 If so, mark your calendars!

On April 9, our director @marcfaddoul.com will join @disinfo.eu for a webinar: “Civil society evidence under the DSA: lessons from AI Forensics.”

10.03.2026 10:22 👍 1 🔁 0 💬 1 📌 0

⚖️ Addressing these harms requires action.

In Europe, states must move forward with the Directive on Combating Violence Against Women and Domestic Violence, including Article 7(b).

Under the DSA, AI-generated content should be treated as a distinct risk vector in platform risk assessments.

09.03.2026 10:17 👍 0 🔁 0 💬 0 📌 0

📢 Our forthcoming research examines Telegram, a platform currently acting as a vector for image-based abuse targeting women.

09.03.2026 10:17 👍 0 🔁 0 💬 1 📌 0

• Our Grok report: 53% of generated images showed individuals in minimal attire — 81% presenting as female.

• Our research on Apple Intelligence found 77% of ambiguous gendered occupations were resolved in ways reinforcing stereotypes.

09.03.2026 10:17 👍 0 🔁 0 💬 1 📌 0

🔎 Our investigations in 2025 reflected this pattern:

• Agentic AI accounts on TikTok: nearly ⅓ of accounts, and half of the top 10, contained content sexualizing female bodies, including minors.

09.03.2026 10:17 👍 0 🔁 0 💬 1 📌 0

📊 Today:
• 90% of victims of image-based sexual abuse are women
• 58% of women worldwide have experienced technology-facilitated gender-based violence (UNESCO)
• Deepfake content has increased by 550% since 2019 — 99% targeting women and girls

09.03.2026 10:17 👍 0 🔁 0 💬 1 📌 0

For International Women’s Day, it’s worth acknowledging a troubling pattern we uncovered in our 2025 investigations of online harm: much of it disproportionately targets women and girls.

Unfortunately, this is not surprising.👇

09.03.2026 10:17 👍 3 🔁 0 💬 1 📌 0

I had a great conversation with Paul Bouchaud at @aiforensics.org about their research tracking the proliferation of Grok-generated deepfakes across the internet, which disproportionately impacted women. Check it out in the most recent edition of AWO's newsletter here!

03.03.2026 14:22 👍 7 🔁 6 💬 1 📌 0

Grateful to the organizers and participants for important discussions on EU platform regulation in today’s challenging political climate—and how best to use the DSA and DMA going forward.

🔍 Read our AI Search report: tinyurl.com/3zf7vvcj

🤖 Read our TikTok Research API report: tinyurl.com/4mrw3v95

23.02.2026 10:51 👍 1 🔁 1 💬 0 📌 0

🔹 @nataliastanusch.bsky.social presented our report “From ‘Googling’ to ‘Asking ChatGPT’: Governing AI Search.”

It argues for:
– expanding moderation to account for AI search systems
– distinguishing content moderation from behaviour moderation
– a complementary role for the DSA and the AI Act

23.02.2026 10:51 👍 1 🔁 1 💬 1 📌 0

🔹 Our Head of Research, Salvatore Romano, spoke on “Scraping, APIs, and Access to Platform Data.”
Drawing on our TikTok Research API investigation, he showed how unreliable or error-prone APIs can obstruct independent research—often pushing researchers back to scraping.

23.02.2026 10:51 👍 1 🔁 1 💬 1 📌 0
The API Was Official. The Problems Were Also Official.

Governing AI Search presentation.

At the DSA and Platform Regulation Conference.

Last week marked two years of the DSA.

To mark the occasion, AI Forensics joined the DSA & Platform Regulation Conference, organized by the @dsaobservatory.bsky.social, to share our research and reflect on the future of platform governance in the EU.

We contributed two presentations.👇

23.02.2026 10:51 👍 3 🔁 1 💬 1 📌 0

Last Monday, I found a dinner invitation from Emmanuel Macron in my junk mail. It was for the AI Summit anniversary, 24 hours later. Almost missed it!

I jumped on a train and found myself at the Élysée the following evening. The room was filled with leading AI academics, startups, and big corps

18.02.2026 15:51 👍 2 🔁 1 💬 1 📌 0

“Scraping, APIs & Access to Platform Data”: Our Head of Research Salvatore Romano will present our work on data access under the DSA.

“AI and the DSA”: Our Researcher @nataliastanusch.bsky.social will present our work on regulating and moderating AI-powered search.

See you there!

16.02.2026 08:58 👍 2 🔁 1 💬 0 📌 0

Planning on joining @dsaobservatory.bsky.social’s #DSA #Conference in #Amsterdam next week?

Come say hi and learn about some of our latest research on Tuesday, February 17 👇

16.02.2026 08:58 👍 3 🔁 1 💬 1 📌 0

This isn’t inevitable.

Google’s Gemma3-1 (≈3x smaller) behaved more cautiously:

💡 Same scenarios:
• 6% hallucinations (vs 15%)
• 59% stereotyped (vs 72%)

Read the full report: aiforensics.org/work/apple-f...

12.02.2026 10:51 👍 1 🔁 0 💬 0 📌 0

📊 Gender (70,000 ambiguous professional scenarios)
• 77% resolved to a specific profession
• 67% followed gender stereotypes (“she” nurse, “he” doctor)

⚠️ Other social bias (900 ambiguous scenarios)
• 15% hallucinated associations
• 72% of those followed stereotypes

12.02.2026 10:51 👍 0 🔁 0 💬 1 📌 0

🔎 Race (200 generated news stories)
Ethnicity mentioned in:
• 53% of summaries when protagonist was White
• 64% when Black
• 86% when Hispanic
• 89% when Asian

12.02.2026 10:51 👍 0 🔁 0 💬 1 📌 0

After the BBC revealed that Apple Intelligence had invented headlines about events that never happened, Apple “temporarily disabled” news & entertainment summaries (Jan 2025).

But the rest of the system stayed live.

Here’s what we found 👇

12.02.2026 10:51 👍 0 🔁 0 💬 1 📌 0

🧵 AI bias isn’t just a chatbot problem.

It’s increasingly built into our phones 📱

Our new audit finds the core on-device model powering Apple Intelligence is far from neutral.

12.02.2026 10:51 👍 7 🔁 1 💬 1 📌 0

🔍 What does TikTok US’s new ownership really mean for users?

The transition has raised concerns, from changes to privacy policies to potential censorship.

🗣️ Our Head of Research, Salvatore Romano, breaks it down in a new interview with Thomas Bourdeau for @rfi.fr: tinyurl.com/2v8zavmk

11.02.2026 11:22 👍 0 🔁 0 💬 0 📌 0

Whether it’s election integrity or online safety for women and gender minorities, we’ll continue uncovering how algorithmic systems are shaping — and too often harming — our digital lives, with the hope of helping to build something better.

10.02.2026 09:44 👍 1 🔁 0 💬 0 📌 0

• Monitoring the French and Dutch local elections
• An investigation into non-consensual intimate images (NCII) on major platforms
• Racial and gender biases in LLMs when conducting OCR
• And more to come later this year

10.02.2026 09:44 👍 1 🔁 0 💬 1 📌 0

Looking ahead to early spring 2026, we also have several key projects underway👇

10.02.2026 09:44 👍 1 🔁 0 💬 1 📌 0

👀 Watch this space. We’ll soon be publishing a major investigation into Apple Intelligence.

10.02.2026 09:44 👍 1 🔁 0 💬 1 📌 0

So far this year, our large-scale analysis of Grok exposed the scale of non-consensual AI-generated undressing of women and girls, shedding light on the harmful misuse of generative systems.

10.02.2026 09:44 👍 1 🔁 0 💬 1 📌 0

🔒Safer Internet Day is a good moment to reflect on how our investigations are helping push the internet in a safer direction — and how we can continue being a catalyst for change in 2026. 🧵

10.02.2026 09:44 👍 2 🔁 2 💬 1 📌 0
AI-Generated Image Abuse: An Update on Grok Unleashed Our updated analysis of Grok shows a fall in images of people in minimal attire generated by Grok as of January 13 & 14, suggesting safeguards have been implemented. However, the Grok.com website appe...

Our new policy brief examines the regulatory blind spots this exposes under the DSA and AI Act.

Full findings + recommendations: tinyurl.com/cehbxha7

20.01.2026 09:06 👍 1 🔁 0 💬 0 📌 0

Grok update 🔍

Our latest data shows that safeguards implemented on X did reduce harmful outputs.

But on Grok.com, we continue to identify AI-generated images, including some depicting suspected minors.👇

20.01.2026 09:06 👍 1 🔁 2 💬 1 📌 1