Full instructions and card text:
agilepainrelief.com/blog/human-p...
#AIinSoftwareDevelopment #BuildInPublic
Three takeaways:
• GenAI doesn't understand your prompt, it's pattern-matching
• No memory between requests
• More parameters = better mimicry, not intelligence
Run it before your next retro. 10 min, index cards, a die.
New glossary entry on AI and Skill Atrophy, the anti-pattern the software industry needs to face:
agilepainrelief.com/glossary/ai-...
#AIinSoftwareDevelopment #BuildInPublic
It's a reinforcing loop: more AI reliance → less skill practice → less ability to judge output → even more reliance.
Junior developers are especially at risk, building confidence in AI before building confidence in themselves.
Brian Graham calls it the Knowledge Cliff: automating away understanding you need.
GPS replaced taxi driversβ mental maps. NASA lost rocket expertise to retirement. In software, every LLM decision erodes skill while giving false confidence.
We will have to differ. You can't use an LLM to find a mistake its brethren made. Unfortunately, the flaws are endemic. See: agilepainrelief.com/blog/genai-c...
AI is a superpower. The question is whether you use it to produce more, or to learn faster.
agilepainrelief.com/blog/gen-ai-...
#AIinSoftwareDevelopment #BuildInPublic
The antidote? Use AI to validate faster, not ship faster.
ScrumMasters: frame Sprint Goals as "learn whether X solves problem Y." Product Owners: run three small experiments in the time it used to take to build one feature.
AI agents promised faster delivery. They also delivered 39% more Cognitive Complexity and 30% more static analysis warnings.
Research found a reinforcing cycle: AI generates more code → complexity rises → debt accumulates → velocity drops → teams write even more AI code to compensate.
If you read the details in the blog post, I think we're too trusting of these tools. Myself included.
GenAI is making it easier to switch tool providers. I'm frustrated with our email platform: most features are GUI-only, no API access.
I've written code to extract our entire newsletter archive + stats. When the time comes, exporting will be painless.
...last year I created these guidelines in the context of Scrum Teams: agilepainrelief.com/blog/why-ai-...
I plan to update them later this year.
So far the most important one: only use it in domains where I have enough knowledge to verify the output.
- Strategic analysis of my business - yes
- Competitive Market Intelligence - mostly
....
- Medical advice - no
More importantly, I think we're still learning. ...
They learned from the internet's code. No grasp of architecture or design patterns. Just next-token prediction dressed up as engineering.
These aren't bugs. This is how they're built.
buff.ly/d6JTy5j
AI models are trained to bluff.
Pass/fail training offers no reward for admitting uncertainty. Instead of "I don't know," you get a confident wrong answer. Researchers call it "test-taking mode."
#AIinSoftwareDevelopment #BuildInPublic
Start here: never assume the output is correct. Build verification habits that match the stakes.
#AIinSoftwareDevelopment #BuildInPublic
Multiply that across a team. Six people, ten AI-assisted tasks each per week. Even a small error rate means several unnoticed mistakes weekly.
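The arithmetic, with illustrative numbers (the 5% error rate is an assumption for the sketch, not a measured figure):

```typescript
// Hypothetical numbers to make the team-level error math concrete.
const teamSize = 6;      // developers on the team
const tasksPerDev = 10;  // AI-assisted tasks per person per week
const errorRatePct = 5;  // assumed: 5% of tasks carry an unnoticed mistake

const weeklyTasks = teamSize * tasksPerDev;                  // 60 tasks/week
const expectedMistakes = (weeklyTasks * errorRatePct) / 100; // 3 mistakes/week

console.log(`${weeklyTasks} AI-assisted tasks/week at ${errorRatePct}% -> ~${expectedMistakes} unnoticed mistakes`);
```

Even at half that rate, a mistake or two slips through every single week.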
The real cost? Catching errors requires exactly what we're short on: domain expertise, critical thinking, and time.
Sounds about right. There is a massive upside: I got a much better, and much needed, strategic review done in a day. More importantly, I automated much of the data collection with TypeScript+Playwright, so I won't need an LLM for that in the future.
GenAI didn't replace my judgment. It gave me clear choices to make. The hard work of building the products remains.
Side benefit: reusable scripts that pull data without AI next time.
The productivity gain is real. So is the cost.
Most of the analysis was right. Mistakes were easy to spot.
But compressing a week into a day doesn't reduce cognitive load; it might double it. I'm far more exhausted than after a normal workday.
Bonus: realized my $1.2k Ahrefs subscription had no value left.
GenAI has real downsides (trained on copyrighted material...), but Claude Code + Markdown files are saving me time and money on grunt work. Today I'll take the win.
Also discovered the SEO structure I paid $$$ for is now obsolete. Used Claude Code to restructure internal links across our Astro site. First batch done in an hour. Manually? Days. It even caught broken redirects.
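For flavor, a minimal sketch of that kind of bulk link rewrite. The path map and the `src` directory are assumptions; the actual pass was done by Claude Code, and a real migration also needs the redirect checking it handled:

```typescript
// Hypothetical sketch: rewrite internal links across an Astro content tree.
import { readFileSync, writeFileSync, readdirSync, statSync, existsSync } from "node:fs";
import { join } from "node:path";

// Assumed old-path -> new-path mapping (illustrative only).
const linkMap: Record<string, string> = {
  "/blog/old-slug/": "/blog/new-slug/",
};

// Replace every mapped old path with its new path.
function rewriteLinks(content: string): string {
  for (const [oldPath, newPath] of Object.entries(linkMap)) {
    content = content.split(oldPath).join(newPath);
  }
  return content;
}

// Recursively collect Astro/Markdown source files.
function walk(dir: string): string[] {
  const files: string[] = [];
  for (const entry of readdirSync(dir)) {
    const full = join(dir, entry);
    if (statSync(full).isDirectory()) files.push(...walk(full));
    else if (/\.(astro|md|mdx)$/.test(full)) files.push(full);
  }
  return files;
}

// Rewrite in place, only touching files that actually change.
if (existsSync("src")) {
  for (const file of walk("src")) {
    const before = readFileSync(file, "utf8");
    const after = rewriteLinks(before);
    if (after !== before) writeFileSync(file, after);
  }
}
```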
Apple is using the Wizard of Oz MVP to test how we expect to interact with AI Agents that work on our behalf. I never thought I'd see this from Apple.
machinelearning.apple.com/research/map...
Last-minute seat just became available in next week's Certified ScrumMaster (CSM) Workshop: buff.ly/pcAmkUf
#ExperientialLearning
Speed is not an advantage. With GenAI, Quality and Fit for Purpose will differentiate.
The emotional attachment some people have to GenAI amazes me. Rather than reading the research on its flaws, they say it's dated or "works on my machine."
I'm not saying stop using the tools; just understand their limitations.
Hope to see you tomorrow.
Maybe I should go rewrite the material one last time.
Irony: LLMs helped me research this across 15 sources. They're great at that. Just not at writing safe code.
Full analysis: agilepainrelief.com/blog/genai-c...
Jamie Twiss found newer models silently remove safety checks rather than fail visibly. The result: code that looks right but breaks in production at 2am.
The tool matters less than how you use it.
"Reasoning" models don't reason. They rely on statistical pattern matching, replicating patterns from training data. That's not logical deduction.
More effective than models without it? Sure. But it's still a stochastic parrot.