Jed Brown

@jedbrown.org

Prof developing fast algorithms, reliable software, and healthy communities for computational science. Opinions my own. https://hachyderm.io/@jedbrown https://PhyPID.org | aspiring killjoy | against epistemicide | he/they

968
Followers
887
Following
832
Posts
24.10.2023
Joined

Latest posts by Jed Brown @jedbrown.org

The inevitability frame is insidious, yet we had 150 participants at our teach-in last week and our faculty assembly has been unanimous. Many assume that criticism is shallow, but upon learning what these products are and the power structures they support, people turn strongly negative about their imposition.

10.03.2026 21:29 πŸ‘ 12 πŸ” 5 πŸ’¬ 0 πŸ“Œ 1

After revelations that companies like Ring have been selling home security footage to ICE, the most common defense of keeping a camera up seems to be, β€œBut what if I’m stalked?”

And as a violence researcher, I’m just here to state for the record that police do not care about your security footage.

10.03.2026 20:14 πŸ‘ 1159 πŸ” 369 πŸ’¬ 19 πŸ“Œ 18

The truth is that some engineers found what they perceive to be a shortcut to KPIs and executives have investors to impress, but the industry has no idea what is required to maintain QA standards and manage tacit knowledge, tech debt, and agility through the lifecycle of a software product.

10.03.2026 17:24 πŸ‘ 2 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

Here’s a better way of putting it: if you’d like to know why faculty are so upset about CU unilaterally signing a $2 million contract with OpenAI, here are a few reasons 👇

10.03.2026 03:01 πŸ‘ 52 πŸ” 17 πŸ’¬ 0 πŸ“Œ 1

The Inspector General for NSF must launch an investigation into the Trump administration’s alleged attempt to sell off parts of NCAR — immediately!

πŸ‘‡πŸ½πŸ‘‡πŸ½πŸ‘‡πŸ½

10.03.2026 00:11 πŸ‘ 191 πŸ” 87 πŸ’¬ 8 πŸ“Œ 4

This puts me at 10 people released. I filed my 1st habeas petition 29 days ago. I am an incredibly small-time, nobody lawyer who knew nothing about immigration law the day I submitted that case. I barely know any more today. I had never sued the federal government and now I've beaten them 10 times.

09.03.2026 19:24 πŸ‘ 11122 πŸ” 2480 πŸ’¬ 303 πŸ“Œ 120

Yes, that whole incident should be understood as clever PR by Amodei. @heidykhlaaf.bsky.social has deep expertise on this subject and a finger on the pulse.

09.03.2026 03:56 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Absolutely, and it is so telling that even the use cases touted here as good fall apart when examining construct validity.

> If you accept some A.I. tools, like to summarize a thousand pages of patient records

An LLM "summary" is a purely linguistic artifact, not an epistemic product of the source

08.03.2026 03:32 πŸ‘ 4 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Can Anthropic’s AI Claude be trusted in combat? | The Take — YouTube video by Al Jazeera English

It was great to join @aljazeera.com's podcast "The Take" to discuss the details of the DoW's use of Claude in Iran, as well as the stand-off between DoW and Anthropic that was largely safety theatre.
www.youtube.com/watch?v=skyI...

06.03.2026 21:20 πŸ‘ 7 πŸ” 3 πŸ’¬ 0 πŸ“Œ 1

Even the military is not entirely composed of pure loyalists. Having people do the target analysis creates ample chance for evidence, objection, and people who can testify. The fetish for chatbots generating targets is the impunity that comes from scuttling the paths for liability and inhibitions.

08.03.2026 00:21 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Technical debt as a lack of understanding Some time back, I was working on a project where it felt like the timebomb of technical debt was exploding in our faces. We couldn’t refactor the whoositz because of the whatsitz and when we asked abo...

Writing (prose and code) *is* thinking and learning. I think you'll enjoy this short video of Ward Cunningham on coining the term "technical debt" and how it wasn't meant to be about sloppy code, but misalignment of the code with the (evolving, through learning) mental model of the problem domain.

07.03.2026 18:50 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

It hardly matters if a person gives final approval. They have been provided with a convincing counterfeit of an intelligence report. Automation bias and institutional culture discourage rigorous independent checking. It's an inevitable result of systemic choices. Choices that are being celebrated.

06.03.2026 15:18 πŸ‘ 4 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
The Most Moral Army | Los Angeles Review of Books Mary Turfah examines Israeli officials’ weaponization of language, particularly that of medicine, in an attempt to reframe their genocide in Gaza.

Media will keep drooling over the "surgical precision" of "AI" targeting. Nobody involved is behaving as though they will face accountability. The chatbot creates permission; it grants plausible deniability. Discourse often gets hung up on human intent, but this is the impact.

06.03.2026 15:18 πŸ‘ 4 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

It's easily lost, but the US military does not need a chatbot to *find* or execute a double-tap bombing of a girls' elementary school. The chatbot provides something shaped like a target analysis without the diligence, cognitive processes, and accountability of a human conducting a target analysis.

06.03.2026 15:18 πŸ‘ 6 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Nine, mostly white dudes on stage in a manel.

Everyone wants to sign letters The Future of Life Institute puts out every few years, it seems.

Take a look at this manel which happened around their first letter πŸ™„

The billionaires and eugenicists on this manel are the actual existential risks to humanity we should worry about.

04.03.2026 23:26 πŸ‘ 84 πŸ” 25 πŸ’¬ 5 πŸ“Œ 2

Yes, banned from Bluesky. Blacksky recently moved to its own appview so Bluesky-banned users like Łink are available. This is obviously still limiting, but there could be a future in which Bluesky is not a singular power in the network. (I moved in the fall; can recommend.)

05.03.2026 05:03 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Another day, another chatbot convincing someone to kill themselves. But yeah let's keep talking about how to put these in our classrooms!

04.03.2026 16:11 πŸ‘ 110 πŸ” 49 πŸ’¬ 1 πŸ“Œ 0
She Came Out of the Bathroom Naked, Employee Says Bank details, sex and naked people who seem unaware they are being recorded. Behind Meta’s new smart glasses lies a hidden workforce, uneasy about peering into the most intimate parts of other people’...

The data from your Meta Ray-Bans is used to train Meta's AI, which, most people don't realize, means that humans are looking at the most intimate details of their lives. www.svd.se/a/K8nrV4/met...

04.03.2026 06:47 πŸ‘ 406 πŸ” 262 πŸ’¬ 11 πŸ“Œ 25

Only "shocking" in how these products are marketed and widespread failure to reflect on well-known cognitive biases going back to the ELIZA Effect (1966). These problems are inherent to the tech because LLMs are purely linguistic devices. Convincing (at times) counterfeiters, always epistemic void.

03.03.2026 17:16 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

The fraudulent citations to real sources are a Grammarly feature (not even ironically). See it (and more counterfeiting) in action in this video:
www.carleton.edu/ai/blog/gram...

03.03.2026 17:05 πŸ‘ 4 πŸ” 2 πŸ’¬ 1 πŸ“Œ 0
AI and Ethics AI and Ethics seeks to promote informed debate and discussion of the ethical, regulatory, and policy implications that arise from the development of AI. It ...

CFP, AI Resistance, Refusal, Reclamation & Reimagining: Ethical Imperatives and Emerging Practices! Note: "This collection is focused ...on the strategies & actions of individuals, communities, organisations & collectives to actively resist, refuse, reimagine & reclaim 'artificial intelligence.'"

03.03.2026 15:30 πŸ‘ 41 πŸ” 23 πŸ’¬ 0 πŸ“Œ 2
Over 50% of game developers now think generative AI is bad for the industry, a dramatic increase from just 2 years ago: 'I'd rather quit the industry than use generative AI' The latest GDC survey also found that managers are more likely to use generative AI than their employees.

Also note that many developers are subject to coercive measures to maximize "AI" use and self-report success (performance eval), quality be damned. And contrast the rosy claims of the Potemkin performance for investors to what developers say when able to speak freely.

03.03.2026 14:27 πŸ‘ 7 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

Excellent analysis of that study.

03.03.2026 14:18 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Evidence for the first case is surprisingly flimsy, even contradicted by Anthropic's own study (as well as the 2025 METR report showing strong cognitive bias in self-assessment). And people switch to maladaptive use modes thinking it's helping even when it causes a 30% drop in understanding.

03.03.2026 14:16 πŸ‘ 5 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Good thread on the psychological violence and emptiness of wrangling synthetic text.

Still, note that "hallucination" is a misnomer; the epistemic nihilism is constant. Meaning is strictly in the eye of the beholder, and correctness or falsehood can only ever be incidental, linguistic serendipity.

03.03.2026 07:05 πŸ‘ 20 πŸ” 8 πŸ’¬ 0 πŸ“Œ 0
They are making money inside companies by automating reconciliation workflows. By qualifying inbound leads. By generating compliance documentation. By reducing customer support overhead. By stitching together painful operational tasks that humans hate doing.

These people tell on themselves like clockwork. It's great at creating plausible deniability for financial fraud, regulatory compliance fraud, and deceptive responses to customers (unless a court says we have to honor its promises). Just cuz we believe these jobs don't deserve to be done correctly.

03.03.2026 05:03 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Try now? Łink has been working on staging.blacksky.community since mid-Jan, and only today on main (not the first time I tried on the new appview, but it started working sometime within the past few hours for me).

03.03.2026 03:18 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Fun that it's been up over 14 hours with an identical quote printed twice. The quote itself is so generic it could be slop. Totally normal and intentional editorial standards.

03.03.2026 01:41 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

AI is a lack-of-consent machine that is designed to move us all toward slavery

02.03.2026 19:42 πŸ‘ 79 πŸ” 37 πŸ’¬ 0 πŸ“Œ 0