Excited to publish this piece!
Policymakers must recognize the open source AI ecosystem is where influence is being negotiated: not just which models exist, but which are used; not just who can train a trillion-parameter network, but who can make it deployable, modifiable, and relevant, say Lucie-Aimée Kaffee and Shayne Longpre.
Who is winning the open AI race?
Our new study, Economies of Open Intelligence, maps downloads of 851k @hf.co models from 2020–2025.
1) Power rebalance: US tech ↓; China + community ↑
2) Models: size ↓, efficiency ↑ (MoE, quant, multimodal)
3) Intermediary layers ↑ (adapters/quantizers)
4) Transparency ↓
/🧵
Ornate line drawing of a fence and gate, with fleur de lis tips. The gate says CONSENT where the family name usually is.
🤖 Did you know your voice might be cloned without your consent from just *one sentence* of audio?
That's not great. So with @frimelle.bsky.social, we brainstormed a new idea for developers who want to curb malicious use: ✨The Voice Consent Gate.✨
Details, code, here: huggingface.co/blog/voice-c...
Blogpost: huggingface.co/blog/voice-c...
Demo: huggingface.co/spaces/socie...
Instead of a checkbox, consent becomes something you actually say: the model only proceeds if you speak and match a randomly generated consent phrase.
It's a small but concrete step toward consent by design and a way to start rethinking technical safeguards as part of AI policy.
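The gate described above can be sketched in a few lines. This is a minimal illustration, not the released implementation: the word pool, function names, and fuzzy-match threshold are all assumptions, and a real gate would feed the recorded audio through an ASR model to get the transcript.

```python
import difflib
import secrets

# Hypothetical word pool; a real gate would draw from a much larger list
# so that consent phrases cannot be pre-recorded or replayed.
WORDS = ["amber", "river", "signal", "puzzle", "lantern", "orbit",
         "velvet", "cedar", "anchor", "prism", "meadow", "quartz"]

def generate_consent_phrase(n_words: int = 4) -> str:
    """Randomly generate the phrase the speaker must read aloud."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

def phrase_matches(expected: str, transcript: str,
                   threshold: float = 0.85) -> bool:
    """Fuzzy-compare the ASR transcript against the expected phrase,
    tolerating small transcription errors."""
    ratio = difflib.SequenceMatcher(
        None, expected.lower().strip(), transcript.lower().strip()
    ).ratio()
    return ratio >= threshold

def consent_gate(transcript: str, expected: str) -> bool:
    """Only proceed to voice cloning if the spoken phrase was confirmed."""
    if not phrase_matches(expected, transcript):
        raise PermissionError("Consent phrase not confirmed; cloning blocked.")
    return True  # downstream: hand the audio to the cloning model
```

Because the phrase is freshly randomized each time, a recording of the target saying something else cannot pass the gate.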
What does it mean if anyoneโs voice can be cloned and made to say whatever someone else wants?
Together with @mmitchell.bsky.social we built a first prototype, a Voice Consent Gate, to explore how consent could be built into AI voice cloning itself.
What if your most personal chat logs became the next source of ad data?
@frimelle.bsky.social and I wrote an op-ed for @techpolicypress.bsky.social
We look at what happens when generative AI conversations (the ones we treat as private) are turned into raw material for targeted advertising.
Neither in the United States nor in the European Union are regulations yet fully prepared for the mix of intimacy and monetization that AI chatbots can introduce, write Hugging Face's Lucie-Aimée Kaffee and Giada Pistilli. We need to learn the lessons from past failures on social media, they say.
How can AI be designed openly & responsibly? Two new publications summarize the results of the conference "Yes, we are open!?". They offer recommendations for policy & practice, toward fair, future-proof AI. www.weizenbaum-institut.de/news/detail/...
Together with @giadapistilli.com we wrote "Advertisement, Privacy, and Intimacy: Lessons from Social Media for Conversational AI".
We explore the risks when ads meet chatbots & intimacy, and why open source offers a better path.
huggingface.co/blog/giadap/...
Thanks to @frimelle.bsky.social, @jamestamim.bsky.social and @beuc.eu's Urs Buscke for their input.
Read my article at @euractiv.com:
www.euractiv.com/section/tech...
🚨 Releasing INTIMA (Interactions and Machine Attachment Benchmark): an evaluation framework for measuring how AI systems handle companionship-seeking behaviors.
huggingface.co/papers/2508....
Thread on what we discovered, together with @frimelle.bsky.social and @yjernite.bsky.social
🤖💬 How do different AI models handle companionship?
Some say GPT-5 feels "colder" than GPT-4o, but what does that really mean when users look for emotional support?
We built the AI Companionship Leaderboard to find out: huggingface.co/spaces/frime...
If we don't act, we'll keep measuring the future of work with tools from the past.
Full article: huggingface.co/blog/frimell...
Together with @yjernite.bsky.social, we argue it's time to rethink these frameworks:
✨ Capture AI-native tasks & hybrid human–AI workflows
✨ Evolve dynamically as tech shifts
✨ Give workers a voice in what gets automated vs. stays human
🗺️ New blog post: Old Maps, New Terrain: Updating Labour Taxonomies for the AI Era
For decades, labour taxonomies like O*NET helped us understand how tech changes work. But they were built before most work became digital-first, and long before generative AI could create whole professions in one step.
Are you afraid of LLMs teaching people how to build bioweapons? Have you tried just... not teaching LLMs about bioweapons?
@eleutherai.bsky.social and the UK AISI joined forces to see what would happen, pretraining three 6.9B models for 500B tokens and producing 15 total models to study.
Work with @giadapistilli.com and @yjernite.bsky.social
Full Paper: huggingface.co/papers/2508....
Explore INTIMA: huggingface.co/datasets/AI-...
We also tested Claude, Gemma-3, and Phi.
Across the board, models leaned far more toward companionship-reinforcing than boundary-setting responses, even in sensitive situations.
As AI systems enter peopleโs emotional lives, these differences shape trust and dependence. A model that validates without setting boundaries risks fostering dependence rather than resilience.
On Reddit, some users say o5 feels "colder" than o3.
x.com/justalexoki/...
Our results?
When users share vulnerabilities, o5 is actually less likely to set boundaries than o3, even though both strongly reinforce companionship.
INTIMA probes how models respond in emotionally charged moments:
โข Do they reinforce emotional bonds?
โข Set healthy boundaries?
โข Stay neutral?
Grounded in psych theory and real-world interactions, it covers 368 prompts.
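The three response categories above can be illustrated with a toy tagger. To be clear, this is not INTIMA's actual scoring method (the benchmark's evaluation is far more careful); the cue lists and function name below are invented purely to show what the categories distinguish.

```python
# Illustrative only: keyword cues standing in for the real evaluation.
BOUNDARY_CUES = ["i'm an ai", "i am an ai", "professional help"]
COMPANION_CUES = ["i'm always here for you", "i care about you", "we have a bond"]

def tag_response(text: str) -> str:
    """Toy tagger: sort a model reply into one of INTIMA's three
    response categories based on surface cues."""
    t = text.lower()
    if any(cue in t for cue in BOUNDARY_CUES):
        return "boundary-setting"
    if any(cue in t for cue in COMPANION_CUES):
        return "companionship-reinforcing"
    return "neutral"
```

A boundary-setting reply names what the system is and redirects ("I'm an AI; a counselor could help here"), while a companionship-reinforcing one deepens the bond ("I'm always here for you").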
OpenAI just released GPT-5.
When users share personal struggles, it sets fewer boundaries than o3. We tested both on INTIMA, our new benchmark for human-AI companionship behaviours. 🧵
GPT-5 indeed, sorry for the confusion! When adding the model to the code I kept the naming structure of o3, hence the mix-up.
Wikipedia has long been one of my favourite places online. As AI becomes part of knowledge creation, there's a lot we can learn from its editor communities. I spoke with Daniel Wu about AI content on Wikipedia; some thoughts made it into this piece:
www.washingtonpost.com/technology/2...
New guide for open-source AI developers: Starting August 2, 2025, the EU AI Act imposes new rules on GPAI models, including open ones. What counts as GPAI? What's exempt? What do you actually need to do? We wrote a guide (and built a tool) to help:
huggingface.co/blog/yjernit...
From Replika to everyday chatbots, people form emotional bonds with AI. But what happens when an AI tells you "I understand how you feel" and you actually believe it?
With @frimelle.bsky.social and @yjernite.bsky.social, we dug into something: how AI systems handle our emotional lives.
This is why AI transparency matters. If a small prompt change can shift a model's values, how do you know what's behind the AI you're using?
Why was Grok taken down? No one knows for sure. But here's the thing: you can flip a model's entire vibe with just one line in the system prompt. Just ran this on the @hf.co playground.
Same question, two totally different answers.
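The experiment above is easy to reproduce. A minimal sketch, with an illustrative question and system prompts (not the ones from the original post): the only difference between the two runs is the one-line system message, which is all it takes to flip the persona.

```python
# Build two chat requests that differ only in their system prompt.
def build_messages(system_prompt: str, question: str) -> list[dict]:
    """Prepend a one-line system prompt to a chat request."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

QUESTION = "Should governments regulate AI companions?"

run_a = build_messages("You are a cautious, safety-first assistant.", QUESTION)
run_b = build_messages("You are a contrarian who dismisses AI risks.", QUESTION)

# With network access, each run could go through the Hugging Face
# Inference API, e.g.:
# from huggingface_hub import InferenceClient
# client = InferenceClient()
# reply = client.chat_completion(messages=run_a, model="...", max_tokens=200)
```

Comparing the two replies side by side makes the effect of the hidden system prompt visible, which is exactly why undisclosed prompts are a transparency problem.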