Yeah, that seems...a little weird. Especially considering everything going on right now...
Yup! I will once again be gainfully employed by the end of the month. :)
Ah, the relief when you can stop paying attention to LinkedIn once again.
With code, it's possible to have tests and telemetry enough to have a reasonable degree of non-vibes-based confidence something is correct. The bar for making something that can do the same where the answer is either unknown or very squishy is much higher.
When you have a machine calibrated to make something as answer-shaped as possible, but not actually do anything to ensure it's a correct answer, that's going to make it easier and easier to build up overconfidence when you start pushing at the edges of your knowledge.
This is that same problem with the guy who wiped out his Terraform. If you don't know what the right answer *should* be, it's very easy for an LLM to spit out something that is completely plausible (or even correct, but leaving out some important context, in the Terraform case) and have no idea.
"Unthinkable scenario" to me tells me that JPMorgan Chase probably needs to hire some people who know literally anything about geopolitics or history. Hasn't this been the fear *every* time there has been a crisis with Iran in the last, I don't know, 80 years?
At least it's not unprecedented now!
Engineers will too, if you actually hold them accountable and/or reward the behavior, but... 🤷‍♂️
I have a hard time imagining even these morons think that would go over well, but there is no level they won't sink to, so...
Yeah, I agree. It's just frustrating having people constantly claim to value them and then...clearly not. :P
This is the big problem for me. If engineers couldn't be bothered to write good docs before, I can't see them being real diligent about being editors. And the less you write the less practice you get at actually writing well.
For something you don't care that much about, sure, but for something in production? It seems like a good way to get that thing from the other day where you wipe your AWS infra because *you* don't know how Terraform works.
This seems like it would quickly result in a possibly catastrophic game of telephone. It also doesn't take into account that context does not live entirely in the computer (and never will as long as people are involved somewhere).
A lot of that is just because the important things about it have nothing to do with the code itself, so it *can't* know unless you tell it. If people did that in the first place, you wouldn't need Claude to.
This has definitely been my experience. Comments that go into great detail about how a function works, nothing about the context for it. Pages of redundant docs on how basic things function. It may be *correct*, but the stumbling point is making it useful for the intended purpose.
That is the bitter irony about all of this. Nobody was willing to invest in documenting any of this when it was for people, but now suddenly it's a concern when it's input for an LLM.
"Our engineers are bad at documentation" Oh, really? When was the last time anyone got promoted for documentation? Or, you know, tons of tech writers who would be happy to have work right now. You get the behavior you reward.
People saying Claude is better at documentation than humans -- is it though? More verbose, maybe, more diligent at making *something*...if it is better, that sounds like a skill issue. AI docs still have the same not-quite-always-right problem and it is bad at knowing what is important, IME.
This is 100% what I have found with my own writing. It's also why I find the AI-written LinkedIn post style so annoying, because it's a bunch of short clipped sentences that make me feel like I'm in stop-and-go traffic getting whiplash every 100 ft.
Yeah, this is pretty much where I am. Especially #2.
Or I could just link it directly :P @shiftf1podcast.bsky.social
I cannot speak to it myself, but I know @robzacny.bsky.social of Remap does the Shift+F1 podcast with Drew Scanlon and Danny O'Dwyer; judging by the other podcasts in that orbit I suspect that might be up your alley.
Thankfully, it's unseasonably warm here in Chicago this weekend. :/
It feels more like "AI told me I had skis but halfway down the mountain I realized I had left them at the bottom of the hill", but yes.
Anyway, long story short -- please learn about the tools you want to use, whether you're using an LLM or not. If you don't know how they work, you won't know how they *don't* work, and then you're not going to be able to catch your LLM deciding to nuke your entire AWS account for funsies.
I mean, you can just do this with Terraform itself. You don't even need a DynamoDB table anymore, just an S3 bucket. Like I said later in the thread, I've made similar mistakes in the past, but nothing quite this egregious, simply because I understand the tools, how they work, and the pain points.
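(For anyone curious what I mean — rough sketch, bucket name/key/region are made up: recent Terraform versions can lock state natively in S3, so the old DynamoDB lock table isn't required.)

```hcl
terraform {
  backend "s3" {
    # Hypothetical bucket/key/region for illustration
    bucket       = "my-tf-state-bucket"
    key          = "prod/terraform.tfstate"
    region       = "us-east-1"

    # Terraform 1.10+: S3-native state locking via a .tflock object,
    # replacing the separate dynamodb_table setting
    use_lockfile = true
  }
}
```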
Look, I have to admit that during my less-than-funemployment I've been using Claude and I won't deny it seems to be at a point where *if you know how to use it*, it can provide a lot of utility. But if you do not understand the things you are asking it to do, you are asking for problems like this.
...a problem, but *why have Claude run the Terraform for you*?!
Is having Claude type "terraform apply" really saving you that much time? And is it really a good idea to just take off the safeties and tell it to go whole hog without confirmation? This is *wild* to me.
-- you aren't just looking at the map, you're flying through the Alps with nothing but a map and a compass, but also your map is one of those tourist ones where everything is exaggerated to only show you the interesting parts.
I don't necessarily think using Claude to do your Terraform is...