
kalina kostyszyn 🌸

@kalinamalina

(she/her/ona) 🥖 🌹 bmc '17, phd sbu '24. current aaas s&t policy fellow. computational linguistics, phonology, psycholinguistics. whimsy-oriented. all opinions are my own.

45 Followers · 97 Following · 40 Posts · Joined 30.08.2023

Latest posts by kalina kostyszyn 🌸 @kalinamalina

research thread! i study queer histories of computing -- how LGBTQ+ people seize the means of connection & computation to build community and keep each other safe.

here are a few pieces that might interest you:

21.01.2026 16:15 👍 110 🔁 37 💬 4 📌 1
GitHub - zoicware/RemoveWindowsAI: Force Remove Copilot, Recall and More in Windows 11

We love to see it!

Community-created harm reduction infrastructure to contest the alarming integration of AI agents into the Windows operating system (which is currently the most reckless deployment environment) ♥️

github.com/zoicware/Rem...

12.01.2026 18:48 👍 952 🔁 410 💬 9 📌 15
Watch Lucy Dacus Perform "Bread And Roses" At Zohran Mamdani's Mayoral Inauguration Indie rockers were among the many musicians who played a part in Zohran Mamdani's successful campaign to become mayor of New York City. Lucy Dacus, for instance, welcomed Mamdani onstage during her pe...

Watch Lucy Dacus perform the women's suffrage and workers' rights anthem "Bread And Roses" at @zohrankmamdani.bsky.social's NYC mayoral inauguration

01.01.2026 20:07 👍 387 🔁 112 💬 2 📌 31

Was really challenging to participate in this, but I think I was able to pen something real, important, and personal to my experience in AI. Remember that GenAI is not *all* of AI nor all of what it should be.
www.technologyreview.com/2025/12/15/1...

16.12.2025 23:13 👍 124 🔁 30 💬 9 📌 6
Screenshot of a paper entry:
Fictional Failures and Real-World Lessons: Ethical Speculation Through Design Fiction on Emotional Support Conversational AI
Authors: Faye Kollig, Jessica Pater, Fayika Farhat Nova, Casey Fiesler
(There are tabs with "abstract" and "summary" and "summary" is selected.)


The ACM Digital Library, where a LOT of computing-related research is published (I'd say at least 75% of my own publications), is now not only providing (without consent of the authors and without opt-in by readers) AI-generated summaries of papers, but they appear as the *default* over abstracts.

16.12.2025 23:31 👍 646 🔁 335 💬 30 📌 90

this isn't the point, but it is funny, and kind of a broader point, that the focus of the subhead is "architects" and then they replaced what were, in the original photo, labourers.

12.12.2025 14:11 👍 291 🔁 56 💬 0 📌 0

every pax unplugged is perfect because I go to all my favorite restaurants and cafes while im here but I love when I can yell at my friends to come and hang out with me too !!

23.11.2025 21:56 👍 0 🔁 0 💬 0 📌 0

dear diary, today i used grice's maxims to explain why a character in an AI generated video inexplicably pulled out a gun

20.11.2025 21:25 👍 0 🔁 0 💬 0 📌 0

fascinating seeing verb-noun stress shift generalizing out. just heard someone say COMpute as a noun and wholly lost my train of thought.

19.11.2025 19:11 👍 0 🔁 0 💬 1 📌 0

hair appointment LOCKED im gonna be either peach or bald for pax

18.11.2025 17:17 👍 0 🔁 0 💬 0 📌 0

tonight is a victory for so many, but for me the backbone of NYC has always been its diversity, and tonight feels like so many standing in defense of it. your half of New York feels very small, Andrew.

05.11.2025 04:36 👍 1 🔁 0 💬 0 📌 0

i'm going to sit in how good I feel tomorrow, but for a moment I can't shake cuomo's 'almost half of New York', holding it up like a victory. knowing that his New York excludes the people whose names he won't pronounce. the people he denigrates because of where they were born or what they practice.

05.11.2025 04:36 👍 0 🔁 0 💬 1 📌 0

But for real, collectively, progressive, left-leaning folks needed something we can call a "win."

Proof that sticking to our ideals, without wavering or compromise, can still win out in this day and age, no matter what the fascists try to tell us.

It's not a line in the sand, but it's something.

05.11.2025 03:23 👍 136 🔁 26 💬 2 📌 1

:)

05.11.2025 04:22 👍 0 🔁 0 💬 0 📌 0

what a beautiful day for a large gathering of friends. :)

18.10.2025 18:49 👍 0 🔁 0 💬 0 📌 0

Okay this is a good headline

09.10.2025 23:02 👍 3586 🔁 749 💬 25 📌 8

This week, our colleague Dr. Mark Bray came under attack by Turning Point USA's Rutgers chapter for his public scholarship. Rutgers AAUP-AFT and the Rutgers Adjunct Faculty Union condemn this campaign and stand in solidarity with our colleagues. Read our full statement here: https://loom.ly/BDXasRY

08.10.2025 14:02 👍 634 🔁 251 💬 6 📌 24
In this spirit of fraternity, hope and caution, we call upon your leadership to uphold the following principles and red lines to foster dialogue and reflection on how AI can best serve our entire human family:

    Human life and dignity: AI must never be developed or used in ways that threaten, diminish, or disqualify human life, dignity, or fundamental rights. Human intelligence – our capacity for wisdom, moral reasoning, and orientation toward truth and beauty – must never be devalued by artificial processing, however sophisticated.

    AI must be used as a tool, not an authority: AI must remain under human control. Building uncontrollable systems or over-delegating decisions is morally unacceptable and must be legally prohibited. Therefore, development of superintelligence (as mentioned above) AI technologies should not be allowed until there is broad scientific consensus that it will be done safely and controllably, and there is clear and broad public consent.

    Accountability: only humans have moral and legal agency and AI systems are and must remain legal objects, never subjects. Responsibility and liability reside with developers, vendors, companies, deployers, users, institutes, and governments. AI cannot be granted legal personhood or "rights".

    Life-and-death decisions: AI systems must never be allowed to make life or death decisions, especially in military applications during armed conflict or peacetime, law enforcement, border control, healthcare or judicial decisions.


    Independent testing and adequate risk assessment must be required before deployment and throughout the entire lifecycle.
    Stewardship: Governments, corporations, and anyone else should not weaponize AI for any kind of domination, illegal wars of aggression, coercion, manipulation, social scoring, or unwarranted mass surveillance. 

    Responsible design: AI should be designed and independently evaluated to avoid unintentional and catastrophic effects on humans and society, for example through design giving rise to deception, delusion, addiction, or loss of autonomy.  

    No AI monopoly: the benefits of AI – economic, medical, scientific, social – should not be monopolized.

    No Human Devaluation: design and deployment of AI should make humans flourish in their chosen pursuits, not render humanity redundant, disenfranchised, devalued or replaceable. 

    Ecological responsibility: our use of AI must not endanger our planet and ecosystems. Its vast demands for energy, water, and rare minerals must be managed responsibly and sustainably across the whole supply chain.

    No irresponsible global competition: We must avoid an irresponsible race between corporations and countries towards ever more powerful AI.


I was part of a working group on AI and Fraternity assembled by the Vatican. We met in Rome and worked on this over two days. I am happy to share the result of that intense effort: a Declaration we presented to the Pope and other government authorities

coexistence.global

23.09.2025 17:33 👍 283 🔁 107 💬 7 📌 14
a powerpoint slide that reads "the awakening: mad women" with the quote cited in my post below. there is an abstract image of red and brown waves.


lecturing on "mad" (angry) women today, the topic of my second book:

"women's silence is learned. since childhood I've been taught that working-class women need to be tough & resilient. there is no time, no space for weakness, for emotion, for the indulgence of madness."

24.09.2025 11:52 👍 26 🔁 5 💬 0 📌 0

my favorite part of reading ML papers is the point where the authors obviously gave up on coming up with terminology. every time i see a 'chunk' or a 'glob' or similar i just giggle

24.09.2025 12:37 👍 0 🔁 0 💬 0 📌 0
An ELSI for AI: Learning from genetics to govern algorithms In the United States, the summer of 2025 will be remembered as artificial intelligence's (AI's) cruel summer — a season when the unheeded risks and dangers of AI became undeniably clear. Recent months h...

🧵 The summer of 2025 has been AI's "cruel summer" — wrongful deaths, dangerous therapy chatbots, medical misinformation, facial recognition failures. These aren't isolated glitches but predictable harms from systems deployed without adequate oversight. www.science.org/doi/10.1126/...

11.09.2025 20:48 👍 375 🔁 154 💬 8 📌 13
Reckless Race for AI Market Share Forces Dangerous Products on Millions — With Fatal Consequences | TechPolicy.Press Lawmakers must demand accountability from an industry that continues to prioritize market share over user safety, writes Camille Carlton.

AI companies cannot claim to possess cutting-edge technology capable of transforming humanity and then hide behind purported design "limitations" when confronted with the harms their products cause, writes Center for Humane Technology policy director Camille Carlton.

09.09.2025 17:33 👍 42 🔁 18 💬 0 📌 3

Self-care is overrated — helping others has the biggest benefits for everyone involved

A new 2-week intervention finds that helping others improves well-being more than "self-kindness", with benefits for depressed mood, anxiety, and loneliness due to social connection psycnet.apa.org/record/2026-...

03.09.2025 14:23 👍 330 🔁 119 💬 24 📌 28
How to: Manage Your Digital Footprint Search for your name in any search engine and you'll likely encounter dozens of results, some of which might include personal information like addresses, email accounts, usernames, or family members. Each piece of information is often public and not typically seen as harmful. But together these parts of your identity...

A few simple steps can drastically decrease the amount of personal information that's available about you online. ssd.eff.org/module/how-...

29.08.2025 20:04 👍 159 🔁 75 💬 4 📌 2
AI Chatbots Are Emotionally Deceptive by Design | TechPolicy.Press Chatbots should stop pretending to be human, writes the Center for Democracy & Technology's Dr. Michal Luria.

In a new @techpolicypress.bsky.social op-ed, CDT's @mluria.bsky.social argues that tech firms should strip away illusions of personality & cognition in chatbots. Read more:

29.08.2025 20:54 👍 17 🔁 6 💬 1 📌 2

BREAKING: Big moment as CDC staff stage a mass walkout.

They have lined the street outside its HQ to greet and salute the four top officials who have resigned in protest at RFK Jr's attack on the agency's science base.

(🎥 AP)

28.08.2025 19:41 👍 10552 🔁 3400 💬 168 📌 246
The making of critical data center studies - Dustin Edwards, Zane Griffin Talley Cooper, Mél Hogan, 2025 In this article, the authors demonstrate how the data center has become a key site, object, and metaphor for interdisciplinary scholarship of the internet. Whil...

So much think-piecing and op-ed'ing about data centers. I wish folks knew that some scholars have been studying these things — as real estate, as geopolitical battlegrounds, as ecological disasters — for a decade+. Mél Hogan, Alix Johnson, Ingrid Burrington, Jen Holt, Patrick Brodie, et al

24.08.2025 02:18 👍 236 🔁 84 💬 5 📌 9

Nature: Cancelling mRNA studies is the highest irresponsibility

go.nature.com/3HDx4wD

cc Jay Bhattacharya

15.08.2025 20:21 👍 633 🔁 245 💬 10 📌 14
Quote from a story on changes at the U of Chicago that notes they are "pausing" doctoral education in Classics, Comp Lit, Germanic studies, Middle Eastern Studies, Romance languages and literatures, and South Asian Studies.


Wow. The University of Chicago, a world-class institution whose humanities faculty in the past has included Homi Bhabha & Lauren Berlant & Ralph Ellison & Hannah freaking Arendt, is getting rid of basically every graduate department involving the acknowledgment that other cultures & languages exist.

13.08.2025 19:29 👍 3990 🔁 1790 💬 46 📌 489

stormed when i first moved in at bryn mawr, stormed when i first moved to long island for grad school, now i have my key for my new apartment in virginia and, you guessed it: storm

13.08.2025 20:19 👍 0 🔁 0 💬 0 📌 0