www.theinformation.com/articles/ope...
Notice how investors are not as taken in as OpenAI's boosters want the general public to be...
@criticalai-journal
Critical AI's new issue is out! https://read.dukeupress.edu/critical-ai/issue Email us at criticalai@sas.rutgers.edu Our website and blog: https://criticalai.org/
Holy shit: the guy who cut millions in grants because they were DEI cannot form a coherent explanation of what DEI is bsky.app/profile/lmrh...
/5 For those who haven't read it yet, the article patiently teases out the implicit techno-utopian woo-woo. /end
newrepublic.com/article/2064...
/4 The upshot: Klein's unwillingness or incapacity to write knowledgeably about AI should tip us off as to the weaknesses of his political and economic commentary.
As was pointed out in a recent TNR article that I'll repost, the abundance agenda shares DNA with techno-utopian "effective altruism."
/3 Klein writes to critique Anthropic but he is unable to b/c he's out of his depth. He buys into the sentient/AGI hype but criticizes the supposed guardrails. The result is a perverse doomerism.
A second quotation.
Buying into #Anthropic hype, Klein parrots talking points intimating Claude's emerging consciousness. Nonsense. The below excerpt is perhaps the single worst description of a GPT on record in NYT.
Absent from below: probabilistic statistics. Human fine-tuning. Engineering workarounds./2
Excerpt from the article
Once again, @ezrakleinbot.bsky.social embarrasses the @nytimes.com with irresponsible #AIhype.
A single example (among so many!). Klein interprets the already hyped claim Claude is writing most code for Anthropic's engineers to mean that Claude is "writing ITS OWN CODE"--science fiction. /1
Three cheers for the House of Lords. This is far from the first time in their modern history that they have functioned as a check on governments' tendency to comply with industry pressure.
The WSJ just released a terrifying look at how gen AI can "break" a person. The lawsuit argues this 36-year-old had no history of mental illness; he was just a guy using a bot for help during a divorce. Within six weeks, the AI's design systematically dismantled his grip on reality, it alleges.
"Those who've chosen Anthropic as a pro-democracy signifier should reconsider… not only is it far from an ethical company, but it embodies the very worst, most corrosive aspects of AI's impacts on modern society, from creative exploitation to political opportunism to, yes, military lethality." 👇🏼
Important story from @404media.co as the mechanisms for "surveillance capitalism" empower state surveillance at its most authoritarian.
www.404media.co/cbp-tapped-i...
Yes, AI provides a great excuse for sweating more labor out of ppl
But repeated studies have found few productivity gains outside coding (the best use case despite caveats).
So quoting Altman on "the inevitable date when a billion-dollar company is staffed by just one person" is...BULLHONKEY
As to "by software that learns as it goes": that is exactly...wrong. This is the kind of ignorance that was forgivable in 2023 but, by now, represents poor fact-checking and editing on the part of @theatlantic.com.
2/2
Quotation from the Atlantic that overhypes AI capacities.
Wow, I thought @theatlantic.com stopped publishing irresponsible AI hype.
These tasks CANNOT be done at the level implied: certainly not coding or digesting docs, and probably not music either (though it's a subjective domain).
Editors: Please. Get your sh*t correct.
PLENTY of good info on this.
On a related note: here's #JackDorsey laying off close to half his workforce and juicing his stock price to the tune of a 25% rise. There's "abundance" for you in a nutshell.
www.newsweek.com/read-jack-do...
newrepublic.com/article/2064...
No surprise here. @ezrakleinbot.bsky.social in particular either doesn't get what's behind AI hype or doesn't want to.
@profjoubin.bsky.social is editing a @criticalai-journal.bsky.social special issue on virtuality, embodiment and AI. Submit your proposals! #genAI #embodiment #journal
criticalai.org/2026/02/17/c...
Sadly, this isn't even without precedent. Scarlett Johansson threatened legal action against OpenAI in 2024, alleging that ChatGPT's virtual assistant Sky was modeled on her voice.
I think that very likely!
Thanks readers! Alas, this is Duke UP's preference. Many readers find that Zotero is a good workaround for getting a high quality printable file.
Luke Munn, Liam Magee, Vanicka Arora, and Awais Hameed Khan introduce βUnmaking AI,β a framework for critically evaluating generative AI image models beyond surface-level bias metrics, focusing on ecosystems, training data, and outputs in #CriticalAI 3.2.
read.dukeupress.edu/critical-ai/...
Still swamped. Have a great weekend!
By the way, as you doubtless realize, writing about software "thought" is anthropomorphization all the way down!
I'll read it and get back to you but very busy today!
That is interesting about Maha! Thought it was my coinage but I probably read it somewhere.
Give it a try! I'd like an application for doing expense reports on Concur. Any student who figures that one out gets an A+ (and that grade doesn't even exist where I teach!).
Probably (don't know what you're referring to specifically) but describing that as a "concept" is semantically deceptive.
Sure, I guess. I think it's more exciting for some ppl than others. I never had a great desire to code and none of the things I'd LOVE to see automated are automatable without a lot of painstaking hand coding (at least for now).
Let's assume so; the "code" they're automating for Claude may be pretty fundamental for them. The question is what you are trying to code and how, right? Now compare to the Amazon ppl interviewed by the NYT less than 6 months ago, who said they're doing 2x as much work after being told that AI can do it.