The first thing I read and talk to my graduate students about is generous reading as a foundational practice for our class. We're going to assume that I chose each thing on our reading list because we can get something out of it, and we're going to read for that first. Then, we can critique.
10.03.2026 15:44
1. Are you currently using an AI tool for work-related tasks or projects?
* Yes
* No, but I would like to (PLEASE SKIP TO QUESTION 7)
My employer asks me to complete a survey on AI usage for which this is the first question (required):
10.03.2026 02:00
I love bcc it's so great that email has a secret "can you believe this bullshit" feature
28.02.2026 18:42
The U.S. spent $30 billion to ditch textbooks for laptops and tablets: The result is the first generation less cognitively capable than their parents | Fortune
Neuroscientist Jared Cooney Horvath said older generations "screwed up" giving students access to so much technology: "I genuinely hope Gen Z quickly figures that out and gets mad."
"Horvath noted not only dipping test scores, but also a stark correlation in scores and time spent on computers in school, such that more screen time was related to worse scores. He blamed students having unfettered access to technology that atrophied rather than bolstered learning capabilities."
23.02.2026 14:10
we should have a third industry besides AI and gambling
19.02.2026 13:20
i don't know how to put this politely but stating that your work is approved/supported by companies like open ai, google and anthropic doesn't give you the credibility you think it does.
for me, this is a clear sign that i am not interested in engaging or working with you
18.02.2026 18:44
BREAKING: The Department of Education has ended its directive that attempted to restrict diversity, equity, and inclusion efforts in schools nationwide.
This is a victory for academic freedom and education equity.
18.02.2026 17:39
Please share with folks you know who read or write fanfiction!!
16.02.2026 15:30
OpenAI Just Scrapped the Team Responsible for Keeping Its AI Safe and Aligned With Human Values
OpenAI has just gotten rid of the team that ensures its AI systems are safe for all users.
The company disbanded an internal group focused on making sure its AI systems were "safe, trustworthy, and consistently aligned with human values." At the same time, the team's former leader has been reassigned as the company's chief futurist. OpenAI confirmed to TechCrunch that the team's members have been placed in other roles across the organization. The development was first reported by Platformer.
The alignment team was formed in September 2024 as an internal unit dedicated to AI alignment, a field focused on ensuring advanced systems operate in line with human intent. That work includes preparing AI to handle adversarial conditions and high-stakes scenarios without harmful or catastrophic outcomes.
"We want these systems to consistently follow human intent in complex, real-world scenarios and adversarial conditions, avoid catastrophic behavior, and remain controllable, auditable, and aligned with human values," a post from OpenAI's Alignment Research blog states.
A related job posting described the team's mission as "developing methodologies that enable AI to robustly follow human intent across a wide range of scenarios, including those that are adversarial or high-stakes."
In a blog post published on Wednesday, former alignment lead Josh Achiam outlined his new responsibilities. "My goal is to support OpenAI's mission – to ensure that artificial general intelligence benefits all of humanity – by studying how the world will change in response to AI, AGI, and beyond," Achiam wrote.
According to the company, the restructuring reflects routine changes within a rapidly evolving organization.
12.02.2026 05:13
"Why can't we have a calm discussion about AI?"
Because folks don't actually want to discuss AI, they want blind submission to the ideological structures that motivate the contemporary use cases and deployments of AI. They don't want to hear the concerns except to dismiss them out of hand.
12.02.2026 03:39
FAMU can't use the word Black on anything posted around campus related to Black History Month, to stay in compliance with Florida state laws against DEI. Black students can't use the word Black at their Historically Black College during Black History Month.
08.02.2026 14:28
In "Translingual Literacy, Language Difference, and Matters of Agency," Lu and Horner argue that language difference is the norm in language use--not just deviations from sameness--as all languaging requires the labor of negotiating and constructing meaning, thus always evincing agency
07.02.2026 16:44
College approached and paid student to write op-ed in The Dartmouth
The Dartmouth ran the article on Nov. 17 without knowledge that the College had been involved.
The college at which I'm employed, which has signed a contract with the AI firm that stole books from 131 colleagues & me, paid a student to write an op-ed for the student paper promoting AI, guided the writing of it, and did not disclose this to the paper. www.thedartmouth.com/article/2026...
29.01.2026 22:40
I am. But there are three things at play here regarding ICE collecting private data. Quick rundown on the issue, why you should be paying attention, and what I'm doing about it:
27.01.2026 21:28
The people of Minnesota have executed one of the most impressive civil resistance campaigns I can remember:
- Organized a city wide general strike
- Maintained nonviolent discipline amidst violence
- Mobilized 10,000s in subzero temps to protest and watch ICE
- Flipped public opinion against ICE
26.01.2026 16:17
The bar for quality journalism in higher ed is literally in the deepest reaches of hell.
27.01.2026 20:45
This absolutely screams βwritten by a loser crypto broβ
27.01.2026 20:37
1/2 "The most striking finding we had is that the students that practiced math problems with ChatGPT without any guardrails did 17% worse on immediate subsequent exam where they did not have AI assistance." Hamsa Bastani of @upenn.edu at the Simons Institute. simons.berkeley.edu/talks/hamsa-...
26.01.2026 16:20
I am begging these people to get a grip. This is an especially wild thing to say right now.
26.01.2026 14:37
The silicon gaze: A typology of biases and inequality in LLMs through the lens of place - Francisco W. Kerche, Matthew Zook, Mark Graham, 2026
This paper introduces the concept of the silicon gaze to explain how large language models (LLMs) reproduce and amplify long-standing spatial inequalities. Draw...
"Drawing on a 20.3-million-query audit of ChatGPT, we map systematic biases in the model's representations of countries, states, cities, and neighbourhoods. From these empirics, we argue that bias is not a correctable anomaly but an intrinsic feature of generative AI."
ht: Dagmar Monett
21.01.2026 01:15
I am liking Pope Leo more and more daily.
13.06.2025 15:28
Ashley St. Clair
@stclairashley
Just saw a photo that Grok produced of a child no older than four years old in which it took off her dress, put her in a bikini + added what is intended to be semen. ChatGPT does not do this. Gemini does not do this.
Another girl who appears to be just 11 or 12 with a brain tumor, Grok removed her shirt completely.
Stating that the people producing the prompts are the only ones responsible puts an undue burden on victims, most of whom have no idea this is happening. This problem + exploitation of children could be fixed in a matter of minutes by the MechaHitler team.
Elon Musk chatbot repeatedly making CSAM on demand should be one of the single biggest stories right now and it's effectively a collective shrug instead
05.01.2026 10:31
31.12.2025 14:37
A USPS mailbox in New York City
USPS quietly changed its postmark rules – mail is no longer dated when you drop it off. The "official" date is when it hits automated sorting – sometimes days later
Which could have major implications for mail-in voting – it's a clever way to disenfranchise voters that's going largely overlooked
29.12.2025 19:14
Screenshot of a paper entry:
Fictional Failures and Real-World Lessons: Ethical Speculation Through Design Fiction on Emotional Support Conversational AI
Authors: Faye Kollig, Jessica Pater, Fayika Farhat Nova, Casey Fiesler
(There are tabs with "abstract" and "summary" and "summary" is selected.)
The ACM Digital Library, where a LOT of computing-related research is published (I'd say at least 75% of my own publications), is now not only providing (without consent of the authors and without opt-in by readers) AI-generated summaries of papers, but they appear as the *default* over abstracts.
16.12.2025 23:31
Headline: "Microsoft Scales Back AI Goals Because Almost Nobody Is Using Copilot" by Jan Martindale Dec. 10 2025
we can kill them all if we just work together and ✨believe✨
15.12.2025 17:11
one small lesson to take away from the crumbling HE sector, btw, is that there is no such thing as meritocracy here and the only thing that matters is doing work that is meaningful to you and the people you care about and to make friends and collaborators in the process
06.12.2025 18:00