Pleased to be in elite company here with @zanderarnao.bsky.social @andyshi.bsky.social @zingales.bsky.social and other stellar @promarket.bsky.social contributors!
www.promarket.org/2025/12/24/t...
🚨Why does access to public platform data matter? Join our webinar "Better Access: Data for the Common Good" (Jan 28, 2026, 11am-12pm ET) for a discussion on the Better Access framework, current regulatory shifts in the EU, UK + US, and what changes 2026 might hold. kgi.georgetown.edu/events/bette...
🧵Our new model bill for US lawmakers showing how online platforms can be tasked with creating better algorithmic feeds was featured in Pluribus News. Read more here: pluribusnews.com/news-and-eve... /1
Didn't get to ask my question! But that's a wrap on #TSRConf. Really enjoyed attending this year and live skeeting. Thanks to @stanfordcyber.bsky.social. Y'all killed it!!!
Meetali calls for more independent research on chatbots. For the Raine case (against OpenAI), TJLP benefited from more than 3,200 pages of chatbot transcripts. This speaks to the power of data donations for fostering research
"We live in an environment where companies have gone from moving fast and breaking things to moving fast and breaking people." -
@meetalijain.bsky.social
Powerful words from a leading advocate in the field 🔥
David calls for academia to be more realistic. Trust and safety teams in companies are small and charged with many responsibilities. Academics could have more impact by studying solutions that do more with less
Earlier this year - the judge in TJLP's case against Character AI ruled that it's unclear if the outputs of its chatbots are protected speech
Challenges according to Meetali: the First Amendment and establishing that AI is a product. She calls for a statutory framework designating AI as a product to establish a cause of action. Open legal questions also exist - does a chatbot's output imply intent? Is intent necessary for accountability?
Meetali on the law as a tool for promoting AI safety: while there are no dedicated state or federal chatbot laws, TJLP leverages product liability and consumer protection law (old and established doctrine) restricting unfair and deceptive practices
David from Meta distinguishes between "good" and "bad" engagement, arguing that engagement isn't a monolith. I'm going to try to ask him what he means by good and bad engagement during the Q&A
Nate Fast: "Already by GPT-3, people preferred the interaction styles of chatbots over humans. It's a warning signal that people are attracted to these models. One of the concerns I have is artificial intimacy. It's easy to turn the dial up on this."
"I do believe litigation is the more important lever we have to effectuate change...I hope that we can put pressure and open up space from the outside which [other actors in the ecosystem] can leverage to create change." --
@meetalijain.bsky.social
@meetalijain.bsky.social rejects the term "companion." "It suggests friendship. These chatbots are not friends."
"I believe my role here is to issue an urgent warning call. We've never seen this kind of deluge of people who self-identify from being harmed by technology. These three cases are just the tip of the iceberg." - @meetalijain.bsky.social
@meetalijain.bsky.social starts her remarks with a story about Megan Garcia, whose son was sexually groomed by a chatbot.
Meetali's org the Tech Justice Law Project brought three cases against leading AI companies: CharacterAI, Google, and OpenAI.
Meta rep David Qorashi contends that AI companions will empower users with greater control over content and enable more transparency about content recommendations
I've been looking forward to this panel on AI companions with @meetalijain.bsky.social all day. This one is going to be spicy 🔥 #TSRConf
Based on this analysis - children are exposed to three types of harms - explicit, implicit, and unintentional.
I'm a little unclear on the distinction between these three types of harms 🤔
According to her research, harmful content is often framed as entertainment - eg offensive comedy or crime dramas - which can be problematic when children are exposed to it
And lastly: Haning Xue from the University of Utah on the role of algorithms in amplifying harmful content to children. Xue's study started with auditing the algorithms of Instagram, TikTok, and YouTube and the characteristics of content recommended to children
Ofcom researches choice architecture using online randomized controlled trials to test small changes to safety features (eg increasing the prominence of user safety tools) and behavioral audits to systematically map design practices and evaluate their potential impact on user behavior
Porter says design - the choice environment - matters because people are flawed decision-makers. Aspects of a platform can affect what consumers do. (Love the behavioral economics on display ❤️)
Next up: Jonathan Porter from Ofcom (the British online safety regulator) on online safety! He starts with a spiel on the UK's Online Safety Act, which focuses in his telling on the backend of digital platforms. Porter leads the UK's behavioral insights team and often examines platform design
CDT's recommendations: employers should assess the usefulness and necessity of hiring technology; deployments should adhere to accessibility guidelines (eg WCAG); and human oversight should be incorporated into all stages of using the technology
Key findings: Workers with disabilities experienced a variety of barriers and reported feeling "extremely discriminated against."
"They're consciously using these tests knowing that people with disabilities aren't going to do well on them, and are going to get screened out."
Next up! The wonderful @arianaaboulafia.bsky.social at @cdt.org giving a talk on the exclusion of disabled workers by digitized hiring assessments.
Background: companies are incorporating hiring technologies into employment decisions, which poses risks of discrimination and poor accessibility
The key finding: an overall increase in the intimacy expressed by models over time - though not all evaluation methods show a clear upward trend
The research team evaluated 59 LLMs across 9 companies from 2018 to 2025 🤖 for the level of intimacy expressed in responses
Next up: Pearl Vishen from UC Davis with the talk "Is Intimacy the New Attention? An Audit of Expressed Intimacy Across LLM Generations"
The key research questions: How does the level of expressed intimacy of LLMs evolve across generations? And has this gotten worse with subsequent generations of models?