Now, with AI we can build tech enabled companies blazingly fast.
But how does that change outcomes? Does the success rate stay constant?
Maybe we redefine success because now a startup can be ambitious without raising so much capital?
Do people who show up at meetings have a community they belong to, or some sort of support group?
I mean, it's always the same people.
It wasn't that long ago we were all writing software on screens that were *not* high res. Can you imagine going back to that?
What happens to open source software licenses when we can just re-implement software using an API or test suite as the reference point?
Really great read from @mitsuhiko.at
lucumr.pocoo.org/2026/3/5/the...
How is it not full of weeds by now?
AI software engineering agents are so much like the real thing, complete with deficiencies, it's almost startling.
I mean, they run out of context, forget to document what they are doing, then can't get the app running again.
I'm becoming more paranoid that a Terminator from the future is coming to stop the future human rebellion.
Just me?
By the way @simonwillison.net 's ever-growing codex of agentic coding insights is one of the most valuable things on the internet right now IMO. Please do check it out:
simonwillison.net/guides/agent...
But, if I give it the user story and design constraints (in the form of requirements) it comes up with a good design, often something I didn't think of, and writes the tests for it.
Then future agents can read it and make sense of it before they start their work. Just like real humans ;-)
The agents just keep piling on "working" code which makes the tests pass, but then churn, burning through tokens, while they try to figure out how something else broke, without ever landing on a good design.
Eventually the code builds up and, after a few refactors, I get things into a good place.
But with agentic coding, the "brain" behind it never seems to catch up.
I'm taking issue with @simonwillison.net 's advice to use red/green TDD to get the best out of agentic coding.
First, for me (as a human) I've found that TDD focuses my brain on just getting the test to pass to the point where I miss the overall design.
Great point.
Side note: when did people start calling it "coding"? I feel like I woke up one day and it was everywhere
I'm having so many mixed feelings about the internet here.
Typing code into a computer has never been the "real" job of a programmer, contrary to popular belief.
I'd add: Boxing the model in with your idea of how you write code. Instead, remove some of the guidance in your prompts and see what it comes up with on its own. If you don't like it, then `git reset --hard`
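A minimal sketch of that reset-and-retry loop, assuming you commit a clean checkpoint before letting the agent loose (the repo and file names here are throwaway stand-ins):

```shell
# Demonstrate the checkpoint-and-reset loop in a throwaway repo
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q

echo "known-good code" > app.txt
git add app.txt
git -c user.name=demo -c user.email=demo@example.com \
    commit -qm "checkpoint before agent run"

# ...the agent rewrites the file in a way you don't like...
echo "questionable agent output" > app.txt

# Discard the attempt and return to the checkpoint
git reset --hard -q HEAD
cat app.txt
```

The checkpoint commit is what makes the "just reset" advice cheap: you can let the model go off in its own direction with zero risk.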
Almost all software being built with AI right now is mistaking busy motion for real progress.
To make real progress you need to have a theory, iterate on it, and improve outcomes. AI can help you do this faster, but most people seem to be missing that and snacking on self-indulgent treats instead.
Engineers and PMs are being incentivized to complexify instead of simplify. I've definitely been seeing this trend grow over the last 5 years.
terriblesoftware.org/2026/03/03/n...
Really great little tool for creating ASCII diagrams
Cuz you know you all need them for all that markdown you're writing for your Claude skills.
ASCII Flow asciiflow.com#/
Right now we are using AI to do brute force work -- "Do what I do, but do it faster"
But AI driven systems will be writing their own code, modifying themselves as they work and learn. That will completely change the game.
There is this notion going around that coding with AI is like using a higher level programming language; that it is normal like the programming language upgrades we've had in the past.
I don't agree. Programming languages are deterministic, and LLMs are definitely not. This is wayyyyy different.
Gemini gave me this choice between two different outcomes, which I've never seen before. Like a "choose your own adventure" thing. Interesting!
(I'm working on a fake website project to test out some workflows with LLMs.)
Good writing is a personal transaction between the human who wrote it and the human who is reading it.
We are mostly using LLMs to do brute force work -- "do what I do but way faster".
But the ability of an LLM to write software, when fully leveraged, will allow it to build its own bespoke systems. This is mind-blowing sci-fi reality.
Curiosity is the best guide. Do things that you are excessively curious about. This excited curiosity is both an engine and a rudder.
I know the hype train is real (and annoying to us old farts) but messing around with LLMs in software is guiding me in some really fun directions.
This is really important. I think we are often getting poor outcomes because we bloat the context.
It's like software: don't write bloated code, and don't depend on faster processors.
Imagine if we had been documenting our knowledge all along, to make our future selves better.
Ironically it took the arrival of LLMs for us to get motivated to write down all these "skill.md" files that we should have been writing all along.
That's an interesting thought 🤔
It's the new reality
Is that banana that keeps showing up in Google products going to be the new Clippy?
Working with AI is pretty simple:
When you put garbage in, you get garbage out.