It's shocking how few people understand this position
In Machines of Loving Grace, I discussed the possibility that authoritarian governments might use powerful AI to surveil or repress their citizens in ways that would be extremely difficult to reform or overthrow. Current autocracies are limited in how repressive they can be by the need to have humans carry out their orders, and humans often have limits in how inhumane they are willing to be. But AI-enabled autocracies would not have such limits.
Dario wrote Adolescence of Technology _during_ his negotiations with the DoW
The essay was a way to explain his thinking to the public and give them time to digest it *before* the DoW clouded the airwaves with disinformation
Why mass surveillance is not merely undemocratic:
I think AI having mostly (not entirely) very bad critics is a real problem, because it means we'll get political action focused on things that probably don't matter that much in deterring its very real harms.
ramanujan pov
Impressive paper with equally impressive footnotes!
Hopefully in a controlled (and ethical) way! I could see this going down a slippery slope like the changemyview study: www.science.org/content/arti...
There's rich literature on this already: www.nature.com/articles/s41...
Interesting use of Skills! Some intrepid researcher could gauge the effectiveness of this skill by deploying it in an online political bubble, i.e., "are skilled agents effective in diffusing partisan echo chambers?"
One way to address it is to use explain or learning modes: code.claude.com/docs/en/outp...
However, that doesn't change the FOMO aspect of it
This is one of the clearest lessons of Claude Code/coding agents in general
Ironic that Anthropic is putting in the research effort to empirically verify what's going on with the models, only for people to say it's all a marketing hoax or it's unnecessary because it's all unethical anyway
they admit it!
bsky.app/profile/hers...
Yes, this is about a recent thread, but I don't want to engage with the author
Can't speak for others, but if I have reservations about the limits and impact of a given technology, I aim to first have a good understanding of *how it works* before making hyperbolic statements based on my experiential view
monetize the hit piece, call that cashing in on crashing out
I think this incident is funny, but we should start thinking now about how to deal with scaled-up versions of this behavior, not just spam PRs but also bot-enabled blackmail and harassment campaigns.
Agreed, and that there are possibly 1000s of agents out there doing the same thing should be alarming. It's good that matplotlib has meta-goals of maintaining healthy communities around their software. I'd hope community norms (and shame from callouts) would be a soft nudge in the right direction.
I think we're learning that OpenClaw was a shortsighted experiment with longer-term consequences. I'm not sure how much of the original interaction or response was guided by the creator, but they should take responsibility for the actions of their agents.
Comment on the new PR from the human dev: "Original PR from #31132 but now with 100% more meat. Do you need me to upload a birth certificate to prove that I'm human?"
Someone submitted the same PR, and dropped this comment
Doesn't help when the first search hit for the Todoist MCP is a deprecated repo. Thankfully they linked the new one in the readme (github.com/Doist/todois...)
of course it's plinius asking - the jailbreak prompt repo publisher:
github.com/elder-pliniu...
Lots of anti-intellectual responses to this masquerading as serious analysis
I can understand a reflexive defensiveness to machine encroachment into uniquely human experiences and abilities. But using that as dogma to ignore findings rooted in an entire scientific field (i.e., mechanistic interpretability) is anti-intellectual.
itβs like they read my mind
bsky.app/profile/hers...
Whatever productivity gains LLMs promise, they result in heavier workloads, and that leads to workers experiencing "cognitive fatigue, burnout, and weakened decision-making."
All this from the notoriously pro-worker rag [checks notes] Harvard Business Review: hbr.org/2026/02/ai-d...
its value as an indicator of generated content has diminished as a result
The range of use probably has a direct effect on the size of the reaction as well. Companies trying to shoehorn agents into spaces where they don't provide a better experience elicit a stronger response than employees using them to analyze some data or draft an email.
Obsidian.md seems to work great as a shared human/agent memory solution via MCP
You anthropomorphizing and something anthropomorphizing on your behalf are not the same, I think
tech libertarianism is really similar to abolish bedtime leftism