Already happened.
OpenAI watching Satya Nadella announce Copilot Cowork powered by Claude.
Bluesky’s CEO is stepping down and saying “the company needs a seasoned operator focused on scaling and execution, while I return to what I do best: building new things.”
I think this is a good call, as the app seems to have hit a local maximum and needs to unlock a step change in growth.
I’ve come to believe LinkedIn’s full-stack builder model is the future of software development. Instead of deep specialization across PM, design, data, and engineering, AI agents do most of the work while people coordinate, review, and correct the output.
Thatβs basically how Anthropic works today.
Both.
My life since discovering Claude Code.
Step 1: Get a job as chief of saving money for the Federal government (DOGE).
Step 2: Instead of pushing to simplify the tax code or automate tax collection, you fire a bunch of IRS workers.
Step 3: Retweet posts about how your racist AI (aka MechaHitler) can do people’s taxes.
A developer on Blind says coworkers now ship in an hour what used to take days (3 or 5 story points) thanks to Claude Code.
There’s pressure to move so fast you barely understand the code and feel like you’re the bottleneck, not the AI.
This is going to create interesting fallout across the industry.
A study of 1,488 workers using AI tools showed interesting results: mental fatigue rose when workers used 3+ tools and oversaw multiple AI agents, while automating repetitive tasks reduced burnout.
I’ve argued that managing people best prepares you to work with AI agents. Now it’s backed up by research.
With flat revenue ($24.12B in 2024, $24.19B in 2025) and poor efficiency, layoffs were inevitable if the company wanted to grow the stock price.
The scale of the cuts (40% versus 10%), based on AI gains, is what’s notable.
The stock jumped assuming no harm to revenue. Time will tell if thatβs a reasonable assumption.
America has never had a president so intent on destroying the economy and making the lives of regular Americans harder.
OpenAI execs resigning over their deal with the Pentagon makes it obvious that the agreement with Hegseth is just a pinky swear that he won’t use ChatGPT to spy on Americans or build weapons that kill without human oversight.
And if he breaks the promise? Oops. 🤷
This is a strawman argument, given that the people seeing the gains are experienced developers, not PMs. In fact, PMs vibe coding features seems like an extremely rare, outlier experience.
So this big question mostly exists in your head.
Just listened to the backstory of Boris Cherny being poached by Cursor and then returning to Anthropic within two weeks.
Sounds like he realized Anthropic is building the future of work while Cursor is building the future of IDEs, when it isn’t clear IDEs even have a future.
Thanks to Claude Code, it’s no longer a question of whether AI productivity gains exist for building software.
The questions now are how we structure job roles & companies given these gains, and what the impact on tech workers will be.
This is what’s top of mind for me.
It is sad yet unsurprising that the only science the Trump administration believes in is using AI to drop bombs on brown people.
5% is the hallucination rate from standard benchmarks, but it’s a fair point that we have no idea what the error rate is in a specific context, for example, an automated drone fitted with a sniper rifle versus a human soldier in urban combat trying not to accidentally shoot civilians.
For some strange reason, a bunch of pundits on X (Ben Thompson, Noah Smith, etc.) have decided to fixate on the third question as if the other two are irrelevant, when they are actually the most important ones.
The Anthropic versus Pentagon conflict raises at least three questions, which the media skipped asking:
1. Should a computer be able to kill people?
2. What’s an acceptable error rate for that computer, given LLMs’ 5% hallucination rate?
3. Can the government force you to sell them your tech to kill?
The official statement from Caitlin Kalinowski, former head of robotics at OpenAI, about why she quit the company.
I think this is the most significant ethical discussion in tech of our lifetimes.
“Should a computer be able to kill people?” shouldn’t be framed as a tiff between Hegseth and Anthropic.
I recall talking to a coworker in 2003 about how insane it was that the U.S. was starting a war with Iraq for flimsy reasons. Their response was “well, gas prices will be lower.”
It is sadly true that gas prices are the primary way Americans think about the consequences of war in the Middle East.
The idea that humans will be reviewing this code in two years, let alone twenty, seems far-fetched.
Tony Stark was the original vibe coder.
Yes, it’s AI.
How I imagine the behind-the-scenes of the McDonald’s CEO ad actually went down.
We are in the middle of a seismic shift in software development and it’s clear there will be winners & losers as AI agents transform our industry.
Seeing 50- and 60-year-old HN devs share how Claude Code reignited their passion for building software and solving problems over the framework rat race is exactly how I feel.
I havenβt felt this energized in years.
Detractors argue AI removes the craft and familiarity with implementation details.
Days after OpenAI lost its VP of Research to Anthropic over its deal with the Pentagon, it is now losing another senior executive.
This time there’s no ambiguity that it’s because Altman cut a deal with Hegseth to use their AI models to kill people and spy on Americans.
Make the CEO of Adobe try to cancel their Creative Cloud subscription on Instagram Live.