We need to run a bookclub on this series!
@shirandudy
A research scientist @Northeastern #AIEthics, #ResponsibleAI (she/her). I work on amplifying voices we should hear more of. My research is on cultural representation and perspective communication in AI. https://www.shirandudy.com
People who call themselves AI safety experts should be loud in this moment, but sadly you won't hear from them while Anthropic is selling us out.
Spring is right around the corner, and so is our 2026 NLPerspectives workshop! We got you extra days to submit your paper 🤩 the new deadline is March 2nd. Hope to see you in Palma de Mallorca! nlperspectives.di.unito.it/workshops/w2...
Thanks! It helped!
Thanks so much for sharing it. The application process for pr doesn't allow the research statement or publications to be uploaded (only a resume). Happy to reach out to a person who can help with this.
Pastries and snacks from Mallorca
📢 2nd Call for Papers! Submit your original work, non-archival papers, and research communications on perspectivist data, modelling, and evaluation to our workshop, co-located with LREC in Palma de Mallorca. Deadline: February 27
nlperspectives.di.unito.it/workshops/w2...
Which is exactly why we should develop (and use) alternatives to every one of their products.
Who says they were given any social permission? Why do we keep buying into his investments?
Take a look at this: I think Semble is going to be a key part of achieving a Pivot to Wiki via atproto.
On academic AI: our communities are SOTA-chasing, valorizing novelty over impact.
Meanwhile, AI remains unhelpful for the majority of people on this planet.
Two points that stuck with me:
On "responsible AI"βBig AI puts profit before public safety. The idea that growth should be tempered for wellbeing or equality is unimaginable to economic libertarianism.
Language engineers are central to this, persisting with a scalability narrative that's failing humanity, supplying critical talent to plutocrats, and building as if the whole endeavor were value-free.
The argument: ecological, meaning, and language crises are converging into a metacrisis, and Big AI is accelerating all three.
ACL, possibly the largest publisher of LLM research, has a code of conduct stating that "the public good is the paramount consideration." Yet the venue continues to struggle with honest reckoning about what we're building and why.
Highly recommend reading: "Big AI is Accelerating the Metacrisis: What Can We Do?"
arxiv.org/pdf/2512.24863
Are we heading toward a society in which AI reduces societal gaps? If anything, we're headed toward exacerbating them instead. My new blogpost on AI and fairness. shirandudy.leaflet.pub/3mbkogdh3b22x
Finally, serious steps toward co-governance. New York City is walking the walk, moving to democratically involve its residents in deciding how to make the city work better for them. I hope more municipalities and other institutions will draw inspiration from NYC's Office of Mass Engagement
AI-generated image based on the prompt "Palma de Mallorca, Spain"
📣 CFP now open! NLPerspectives @ LREC2026 is calling for papers.
Join us in Palma de Mallorca (May 11–16) to explore fresh approaches to perspectives in NLP.
Deadline: Feb 27, 2026
Let's rethink how we represent, evaluate, and hear viewpoints in NLP research.
#NLPerspectives #LREC2026
An inspiring discussion that offers a glimpse into how these AI systems promote an anti-Black agenda, by @drtanksley.bsky.social
Nothing like IRL launch energy @wesleyfinck.org
Which is another way to say that since last week's release, no one has died.
I was sure it was a myth that Dems still offer thoughts at this point. I really wanted to believe they could meet the moment with actions 🤦🏻‍♀️
We need more visions like this of a future that we are all part of, not just a few at the top.
How do you know this version of #DEMS will lose? It is stuck on yesteryear talking points set by someone else. What a waste of time. I'm so glad to see ppl like #Mamdani who control the narrative.
As AI becomes more integrated into our daily lives, rigorous testing (yes, longitudinal too), transparency about limitations, and genuine accountability measures aren't just nice-to-haves, they're necessities. 📱
#AI #TechAccountability #OpenAI #AIEthics #AuditAI
We must prevent the next tragedy from happening. The silver lining? ✨ We're finally seeing concrete steps toward accountability, not just for corporations, but for individual leaders like Altman who shape these technologies. ⚖️👨‍💼
What's particularly concerning is the dismissal of warnings from the Center for Humane Technology about the real-world impact on everyday users, and of incidents like those involving Character AI. 🚨
CEO Sam Altman described learning safeguards as an "iterative process," which essentially means we're all guinea pigs for these systems as they evolve.
Users absorb the risks while companies capture the value
Horrific Evidence!!
OpenAI has acknowledged that ChatGPT "becomes less reliable in long interactions where parts of the model's safety training may degrade." This admission raises critical questions about AI safety at scale.
The EU and Latin America shouldn't focus on winning the AI race; they should focus on solving people's problems. As simple as THAT. We (all) need a different game to win in 🎲✨ A great episode of @techwontsave.us by @parismarx.com interviewing @ceciliarikap.bsky.social at techwontsave.us