He’s not playing chess literally. He’s not playing chess figuratively. He’s not playing chess on any level of abstraction whatsoever. I doubt he can play chess.
get in losers we’re eschatonmaxxing
the people yearn for woke 2
"Personally, I saw very limited gains in productivity from AI, nothing nearly profound enough to justify tossing out half of the company's workforce along with their institutional knowledge and expertise." www.linkedin.com/feed/update/...
Also, when you ask @attackerman.bsky.social why everything is awful, you have to let him cook
Went on Geek's Guide to talk about Kim Stanley Robinson's short fiction.
Honestly I would have been fine skipping most of the others and just talking about The Lucky Strike and A Sensitive Dependence on Initial Conditions for the whole two hours!
geeksguideshow.com/2026/02/18/g...
Maybe Starship Troopers was too subtle
"We are no longer operating in a world where we judge technology by what would happen if it landed in the 'wrong hands.' It is already in the wrong hands." www.theverge.com/policy/88634...
Danny DeVito in Always Sunny stating "you would have to be a real low life piece of shit to be proud to be an American."
I like to distract myself from the terrifying political news by reading the ridiculous political news.
Screenshot of a Hacker News discussion linking to a "how to delete your account" page for OpenAI. First comment:

Posting it here as a top-level comment as many people asked why boycott just openAI:

openAI is the least trustworthy of the Big LLM providers. See S(c)am Altman's track record, especially his early comments in senate hearings where:
* he warned of engagement-optimisation strategies, like social media, being used for chatbots / LLMs.
* also, he warned that "ads would be the last resort" for LLM companies.

Both of his own warnings he casually ignored as ChatGPT / openAI has now fully converted to Facebook's tactics of "move fast and break things" - even if it is society itself. A complete turn away from the original AI for science lab it was founded as, which explains why every real (founding) ML scientist left the company years ago.

While still being for-profit outfits, at least DeepMind and Anthropic are headed by actual scientists - not marketing guys. At least for me, that brings me some confidence in their intentions, as scientists often seek knowledge, not power for power's sake.
Currently the top item on Hacker News: How to delete your account at openAI
news.ycombinator.com/item?id=4719...
Mind-boggling that someone could write Hyperion *and* The Terror arstechnica.com/culture/2026...
No outcome would have been good but this is the worst. www.wsj.com/business/med...
The left is not "missing out" on AI. It's winning the argument. Americans worry about AI and dislike tech CEOs. They're mobilizing against data centers. They support regulation, even refusal.
Calls for the left to embrace AI are an effort to change the terms of the debate in Silicon Valley's favor.
Quite rare to see a chart that says quite so overtly that no one involved has the slightest clue what’s happening here
Meta's director of AI safety allowed an AI agent to... accidentally delete her inbox. This is supposedly the person at the company who is working to make sure that powerful AI tools don’t go rogue and act against human interests
www.404media.co/meta-directo...
I so love reading this. Not only has @filmforumnyc.bsky.social just experienced its two best box office years ever, but a quarter of its members are under 35.
www.nytimes.com/2026/02/23/a...
“Looking at what’s possible, it does feel sort of surprisingly slow,” Mr. Altman said at an A.I. conference this month. Jensen Huang, chief executive of the chip maker Nvidia, is worried. The tech industry hype may seem omnipresent, but Mr. Huang feels “the battle of narratives” is being won by the critics. “It’s extremely hurtful, frankly,” Mr. Huang said in a podcast interview last month. “A lot of damage” has been done by “very well-respected people who have painted a doomer narrative, end-of-the-world narrative, science fiction narrative.”
I just keep reading these quotes and cackling
www.nytimes.com/2026/02/21/t...
February 2026. Trump has shut down TSA PreCheck. Airports are now battlefields.
The concept here is to get well-heeled travelers to pressure the government to restore secret police funding
This "why does the left hate AI?" thing is like when the right believes protests are paid for by George Soros; they need a SECRET CONSPIRATORIAL AGENT because they refuse to believe that, actually, a lot of people are independently looking at what AI is being made to be and saying "ok that sucks?"
Instead of "the left isn't paying enough attention to these machine gods" they're claiming to build, perhaps we should write an article about how effective altruists brand themselves "left", ask who is writing these articles, and trace their cults and sources of funding.
By Chris Quinn, Editor, cleveland.com/The Plain Dealer A college student withdrew from consideration for a reporting role in our newsroom this week because of how we use artificial intelligence. It reminded me again how college journalism programs are failing to prepare students for the workforce. I mentioned this in a column before, and readers asked me to explain
A conscientious journalism grad withdrew from a job when she learned the Cleveland Plain Dealer uses AI to write its stories.
Now the editor is castigating her and journalism professors for not being “prepared for the workforce.”
You can’t make this shit up.
www.cleveland.com/news/2026/02...
modern white nationalism is fundamentally cuck shit. just a swamp of marble bust avatars brainwashed by ccp propaganda. it's why all these dipshits are ready to torch nato and abandon taiwan. because they are cucks lol
Time to dust off a classic
Unionization of the work force: Unless you believe that the US government is capable of independently solving these problems (while being in the industry’s pocket), it is clear that there must be some entities that exist for the sole purpose of protecting the workers who are exposed to having their livelihoods destroyed by AI. Those entities are unions. Union contracts, not laws, are the very front lines of AI regulation.

Even if the biggest thing that union contracts accomplish is managing the reduction in jobs in a way that doesn’t utterly screw workers, that is significant. Right now, less than 10% of American workers have unions, meaning that most workers are exposed to unilateral damage from AI with no safety net. Policies like reforming labor law to make organizing easier and funding more union organizing are not old-timey things—they are, in fact, necessary policies to address our AI future. Also, as a practical matter, we are going to need the political influence of strong unions to counteract the political influence of the rich, which will be fighting against the other items on this policy list. Including:

A stronger social safety net, including higher unemployment benefits, public health care, free higher education, and other common sense measures necessary to get a large number of newly unemployed people through the hard times and into whatever comes next. And:
Much higher taxes on the rich: We will need a significant federal wealth tax in some form in order to avoid the scenario that Dario Amodei himself describes of a small handful of individuals who control the AI industry becoming so insanely rich, as they centralize the income that once flowed to many other industries, that they are effectively running the country and the world. America’s wealth inequality is already a crisis. AI could make it significantly worse. At some point democracy will fully crumble. Not allowing people to get that rich is necessary if we want to avoid having unaccountable godlike dictators. And finally, as a related matter:

Strong government regulation of the AI industry: There are more safety regulations involved in making a car than there are in releasing an AI model to the public that will, maybe, help people produce biological weapons or mass-produce child porn or who knows what else. This is an insane situation. Truly insane. Our current nonexistent regulatory apparatus around AI is similar to having just invented nuclear weapons and not yet having written any rules about who can make them or what they can do with them. There should be, at minimum, a robustly funded independent government agency that evaluates and regulates AI models before they are released into the world. Just for starters.
If, like @hamiltonnolan.bsky.social, you think "taking AI seriously" means "stronger unions, strong regulations, and a stronger social safety net" then hell yes.
Buuuuuut I think that's rarely what people mean.
www.hamiltonnolan.com/p/minimum-st...
also, lol, of course he's a "former finance executive"