Anthony Ha

@anthonyha

Writer, @TechCrunch.com weekend editor, Original Content co-host, loud karaoke singer

4,326
Followers
908
Following
267
Posts
15.05.2023
Joined
Latest posts by Anthony Ha @anthonyha

He’s not playing chess literally. He’s not playing chess figuratively. He’s not playing chess on any level of abstraction whatsoever. I doubt he can play chess.

11.03.2026 11:52 👍 6053 🔁 817 💬 599 📌 68

get in losers we’re eschatonmaxxing

09.03.2026 09:53 👍 93 🔁 13 💬 2 📌 0

the people yearn for woke 2

08.03.2026 14:32 👍 1839 🔁 329 💬 8 📌 2
Leaving Block after 40% layoffs and AI-driven job cuts | Naoko Takeda posted on the topic | LinkedIn My employer Block laid off 40% of its workforce last Thursday (2/26). I wasn’t affected… but I was *affected*. I decided to quit immediately and left the company the following day on 2/27. I figured t...

"Personally, I saw very limited gains in productivity from AI, nothing nearly profound enough to justify tossing out half of the company's workforce along with their institutional knowledge and expertise." www.linkedin.com/feed/update/...

07.03.2026 14:24 👍 1 🔁 0 💬 0 📌 0
Video thumbnail

Also, when you ask @attackerman.bsky.social why everything is awful, you have to let him cook

06.03.2026 19:47 👍 66 🔁 25 💬 0 📌 2
GGG#613: The Best of Kim Stanley Robinson Part 1 [YouTube] [Apple] [Spotify] [Download] Anthony Ha joins us to discuss the first half of the book The Best of Kim Stanley Robinson. Stories discussed: “Venice Drowned” (10:53), “Ri…

Went on Geek's Guide to talk about Kim Stanley Robinson's short fiction.

Honestly I would have been fine skipping most of the others and just talking about The Lucky Strike and A Sensitive Dependence on Initial Conditions for the whole two hours!

geeksguideshow.com/2026/02/18/g...

05.03.2026 21:59 👍 1 🔁 0 💬 0 📌 0
A ★★★★★ review of The Secret Agent (2025) What an incredible film, so full of color and life, including the best voices, faces, and - to be honest - bellies that I can remember in any recent movie. Initially, I was flabbergasted by the ending...

I saw The Secret Agent

05.03.2026 19:46 👍 2 🔁 0 💬 0 📌 0

Maybe Starship Troopers was too subtle

02.03.2026 16:36 👍 290 🔁 67 💬 6 📌 1
01.03.2026 13:13 👍 1181 🔁 221 💬 7 📌 2
Meta won’t let morality get in the way of a product launch What a great time to add facial recognition to everything!

"We are no longer operating in a world where we judge technology by what would happen if it landed in the 'wrong hands.' It is already in the wrong hands." www.theverge.com/policy/88634...

01.03.2026 15:01 👍 11 🔁 3 💬 0 📌 0
Danny DeVito in Always Sunny stating "you would have to be a real low life piece of shit to be proud to be an American."

28.02.2026 16:21 👍 10480 🔁 3318 💬 18 📌 14

I like to distract myself from the terrifying political news by reading the ridiculous political news.

28.02.2026 13:49 👍 2759 🔁 567 💬 96 📌 28
Screenshot of a Hacker News discussion linking to a "how to delete your account" page for OpenAI.


First comment: Posting it here as a top-level comment as many people asked why boycott just openAi:
-----
openAI is the least trustworthy of the Big LLM providers. See S(c)am Altman's track record, especially his early comments in senate hearings where:
* he warned of engagement-optimisation strategies, like social media, being used for chatbots / LLMs.
* also, he warned that "ads would be the last resort" for LLM companies.
Both of his own warnings he casually ignored as ChatGPT / openAI has now fully converted to Facebook's tactics of "move fast and break things" - even if it is society itself. A complete turn away from the original AI for science lab it was founded as, which explains why every real (founding) ML scientist has left the company years ago.
While still being for-profit outfits, at least DeepMind and Anthropic are headed by actual scientists - not marketing guys. At least for me, that brings me some confidence in their intentions as, as scientists we often seek knowledge, not power for power's sake.


Currently the top item on Hacker News: How to delete your account at openAI

news.ycombinator.com/item?id=4719...

28.02.2026 14:54 👍 3 🔁 0 💬 0 📌 0
Hyperion author Dan Simmons dies from stroke at 77 I went into Hyperion blind, decades ago, knowing almost nothing about it. I was never the same.

Mind-boggling that someone could write Hyperion *and* The Terror arstechnica.com/culture/2026...

27.02.2026 20:47 👍 1 🔁 0 💬 0 📌 0
The People vs. AI Across red states and blue, a grassroots movement is pushing back on the unchecked growth of the artificial intelligence industry.

Don’t believe them when they say it’s inevitable.

27.02.2026 14:48 👍 186 🔁 62 💬 4 📌 3
Paramount Wins Bidding War for Warner Discovery After Netflix Drops Out Netflix pulled the plug on its deal soon after Warner’s board determined Paramount’s revised offer was superior.

No outcome would have been good but this is the worst. www.wsj.com/business/med...

27.02.2026 00:46 👍 21 🔁 9 💬 1 📌 7
Actually, the left is winning the AI debate But it does need to get organized.

The left is not "missing out" on AI. It's winning the argument. Americans worry about AI and dislike tech CEOs. They're mobilizing against data centers. They support regulation, even refusal.

Calls for the left to embrace AI are an effort to change the terms of the debate in Silicon Valley's favor.

26.02.2026 17:15 👍 961 🔁 317 💬 13 📌 27
Post image

Quite rare to see a chart that says quite so overtly that no one involved has the slightest clue what’s happening here

26.02.2026 09:15 👍 3403 🔁 603 💬 221 📌 388
Meta Director of AI Safety Allows AI Agent to Accidentally Delete Her Inbox Meta Superintelligence Labs’ director of alignment called it a “rookie mistake.”

Meta's director of AI safety allowed an AI agent to... accidentally delete her inbox. This is supposedly the person at the company who is working to make sure that powerful AI tools don’t go rogue and act against human interests

www.404media.co/meta-directo...

23.02.2026 15:22 👍 458 🔁 175 💬 21 📌 41
The Next Stop for a Sundance Director: Leading New York’s Film Forum

I so love reading this. Not only has @filmforumnyc.bsky.social just experienced its two best box office years ever, but a quarter of its members are under 35.
www.nytimes.com/2026/02/23/a...

23.02.2026 14:59 👍 21 🔁 5 💬 1 📌 2
“Looking at what’s possible, it does feel sort of surprisingly slow,” Mr. Altman said at an A.I. conference this month.

Jensen Huang, chief executive of the chip maker Nvidia, is worried. The tech industry hype may seem omnipresent, but Mr. Huang feels “the battle of narratives” is being won by the critics.

“It’s extremely hurtful, frankly,” Mr. Huang said in a podcast interview last month. “A lot of damage” has been done by “very well-respected people who have painted a doomer narrative, end-of-the-world narrative, science fiction narrative.”


I just keep reading these quotes and cackling

www.nytimes.com/2026/02/21/t...

22.02.2026 22:44 👍 3 🔁 0 💬 1 📌 1

February - 2026. Trump has shut down TSA Precheck. Airports are now battlefields.

22.02.2026 14:31 👍 314 🔁 63 💬 11 📌 5

The concept here is to get well-heeled travelers to pressure the government to restore secret police funding

22.02.2026 14:34 👍 963 🔁 263 💬 39 📌 9

This "why does the left hate AI?" thing is like when the right believes protests are paid for by George Soros; they need a SECRET CONSPIRATORIAL AGENT because they refuse to believe that, actually, a lot of people are independently looking at what AI is being made to be and saying "ok that sucks?"

18.02.2026 12:39 👍 1133 🔁 215 💬 12 📌 11

Instead of "the left isn't paying enough attention to these machine gods" they're claiming to build, perhaps we should write an article about how effective altruists brand themselves "left", ask who is writing these articles, and trace their cults and sources of funding.

18.02.2026 02:16 👍 1976 🔁 437 💬 13 📌 19
By Chris Quinn, Editor, cleveland.com/The Plain Dealer
A college student withdrew from consideration for a reporting role in our newsroom this week because of how we use artificial intelligence.

It reminded me again how college journalism programs are failing to prepare students for the workforce. I mentioned this in a column before, and readers asked me to explain


A conscientious journalism grad withdrew from a job when she learned the Cleveland Plain Dealer uses AI to write its stories.

Now the editor is castigating her and journalism professors for not being “prepared for the workforce.”

You can’t make this shit up.

www.cleveland.com/news/2026/02...

16.02.2026 12:16 👍 1801 🔁 442 💬 81 📌 137

modern white nationalism is fundamentally cuck shit. just a swamp of marble bust avatars brainwashed by ccp propaganda. it's why all these dipshits are ready to torch nato and abandon taiwan. because they are cucks lol

15.02.2026 21:18 👍 1050 🔁 129 💬 10 📌 3

Time to dust off a classic

15.02.2026 16:33 👍 3775 🔁 614 💬 25 📌 5
Unionization of the work force: Unless you believe that the US government is capable of independently solving these problems (while being in the industry’s pocket), it is clear that there must be some entities that exist for the sole purpose of protecting the workers who are exposed to having their livelihoods destroyed by AI. Those entities are unions. Union contracts, not laws, are the very front lines of AI regulation. Even if the biggest thing that union contracts accomplish is managing the reduction in jobs in a way that doesn’t utterly screw workers, that is significant. Right now, less than 10% of American workers have unions, meaning that most workers are exposed to unilateral damage from AI with no safety net. Policies like reforming labor law to make organizing easier and funding more union organizing are not old-timey things—they are, in fact, necessary policies to address our AI future. Also, as a practical matter, we are going to need the political influence of strong unions to counteract the political influence of the rich, which will be fighting against the other items on this policy list. Including,
A stronger social safety net, including higher unemployment benefits, public health care, free higher education, and other common sense measures necessary to get a large number of newly unemployed people through the hard times and into whatever comes next. And,


Much higher taxes on the rich: We will need a significant federal wealth tax in some form in order to avoid the scenario that Dario Amodei himself describes of a small handful of individuals who control the AI industry becoming so insanely rich, as they centralize the income that once flowed to many other industries, that they are effectively running the country and the world. America’s wealth inequality is already a crisis. AI could make it significantly worse. At some point democracy will fully crumble. Not allowing people to get that rich is necessary if we want to avoid having unaccountable godlike dictators. And finally, as a related matter,
Strong government regulation of the AI industry: There are more safety regulations involved in making a car than there are in releasing an AI model to the public that will, maybe, help people produce biological weapons or mass-produce child porn or who knows what else. This is an insane situation. Truly insane. Our current nonexistent regulatory apparatus around AI is similar to having just invented nuclear weapons and not yet having written any rules about who can make them or what they can do with them. There should be, at minimum, a robustly funded independent government agency that evaluates and regulates AI models before they are released into the world. Just for starters.


If, like @hamiltonnolan.bsky.social, you think "taking AI seriously" means "stronger unions, strong regulations, and a stronger social safety net" then hell yes.

Buuuuuut I think that's rarely what people mean.

www.hamiltonnolan.com/p/minimum-st...

15.02.2026 17:15 👍 3 🔁 1 💬 1 📌 0

also, lol, of course he's a "former finance executive"

15.02.2026 17:15 👍 2 🔁 0 💬 1 📌 0