Sam Barrett, PhD

@ai4geo

GeoAI, Climate, Remote Sensing, Generative AI and more!

1,142
Followers
1,148
Following
697
Posts
17.11.2024
Joined

Latest posts by Sam Barrett, PhD @ai4geo

I'd say it's often not an absence of real-world smarts but a lack of context to give good responses beyond the textbook.

09.03.2026 17:13 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
On the Reasoning Capabilities of Homo sapiens: Preliminary Notes (Essays and writing on AI)

Going to have to turn this into the actual paper:
sbgeoaiphd.github.io/rotating_the...

08.03.2026 20:22 πŸ‘ 4 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Yeah. So much "no Claude, this is not about that. Not everything is about that!"
("that" being embeddings)

08.03.2026 20:21 πŸ‘ 3 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I love this genre

08.03.2026 20:18 πŸ‘ 4 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Very true but my head hurts enough already...

This is all also replayed in my work with embeddings. It's really impressive what can get packed into just 64 int4 numbers...

08.03.2026 14:08 πŸ‘ 4 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
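(For context on the "64 int4 numbers" above: a minimal sketch, not from the post itself, of how a toy 64-dimensional embedding can be squeezed into 64 int4 values, i.e. 32 bytes, using simple symmetric round-to-nearest quantisation. The scheme and all names here are illustrative assumptions, not the actual pipeline referred to.)

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.standard_normal(64).astype(np.float32)  # a toy 64-dim embedding

# Symmetric int4 quantisation: scale so the largest magnitude maps to 7.
scale = np.abs(emb).max() / 7.0
q = np.clip(np.round(emb / scale), -8, 7).astype(np.int8)  # 64 int4 values

# Pack two 4-bit values per byte: 64 values -> 32 bytes total.
nibbles = (q & 0x0F).astype(np.uint8)
packed = (nibbles[0::2] << 4) | nibbles[1::2]
print(packed.nbytes)  # the whole embedding now fits in 32 bytes

# Dequantise to check the worst-case reconstruction error.
deq = q.astype(np.float32) * scale
err = np.abs(emb - deq).max()  # bounded by half a quantisation step
```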

Absolutely. I'm a geologist by training so I've had plenty of time to ingest billions of years, but I absolutely do not have a *real* intuition for billions of years... let alone trillions of parameters.

08.03.2026 14:04 πŸ‘ 4 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Exactly. "Just" only works if your reference point is smartphone word completion or MAYBE early Copilot code-line completion. If the "next tokens" contain novel solutions to e.g. Erdős problems or suchlike, "just" is the wrong word.

08.03.2026 13:35 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I think what's most important for most people isn't whether some analogy is in some sense "correct"; it's whether it gives people useful intuitions. And "autocomplete" and "stochastic parrot" don't tend to lead to strong intuitions about the capabilities or impacts of LLMs.

08.03.2026 11:17 πŸ‘ 54 πŸ” 3 πŸ’¬ 1 πŸ“Œ 2

β€œit’s just spicy autocomplete” is advocacy malpractice, is the thing.

the sneering dismissal completely miscalibrates the urgency: media siege devices

β€”printing press, telegraph, internetβ€”

rip holes in civilization itself

08.03.2026 07:03 πŸ‘ 35 πŸ” 4 πŸ’¬ 1 πŸ“Œ 1

The AI discourse sometimes seems to center on "Is AI good or is it bad?"

I find this framing unproductive. AI is not a fixed thing.

I would prefer to ask "How might we use this technology for good, and mitigate the bad?"

What a shame if the best use we can come up with is no use at all.

06.03.2026 05:29 πŸ‘ 40 πŸ” 5 πŸ’¬ 4 πŸ“Œ 2

Moriarty was a Starfleet red-teaming exercise that escaped.

05.03.2026 19:16 πŸ‘ 15 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0

Average ceases to be meaningful to me in that situation, and should absolutely not be compared to mean Stack Overflow. The entire argument falls apart with any sustained and intentional prompting.

05.03.2026 19:15 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

The average of the specific guidelines? OK, maybe that average actually exists, but let's say my guidelines are esoteric and I'm prompting 100 different requirements which have certainly never existed in this combination before, so the result is the average of what, exactly?

05.03.2026 19:15 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

The only way this (see picture) makes any sense to me is if prompting just doesn't work... it's not that it's wrong. It's not even wrong. So I prompt it with coding guidelines so it's stylistically nothing like average Stack Overflow. So then it's the average of what, exactly?

05.03.2026 19:15 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Which raises the question: if you have high-quality energy-matter converters, why do you have a holodeck and not an "actual real physical stuff created on demand" deck?

05.03.2026 18:36 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Getting GenAI to recreate basic software which massively exists in the training data is the "tea, earl grey, hot" use of GenAI.

05.03.2026 18:33 πŸ‘ 4 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

How did I miss the holodeck thing!
But I always imagined I could ask the replicator for anything. Picard only ever ordering "tea, Earl Grey, hot" is just his lack of imagination.

05.03.2026 18:33 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

The fascinating and challenging things are those not in the cookbook.

05.03.2026 18:17 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I love this analogy and it's fascinating. Replicators as the closest Star Trek has to generative AI? You prompt it and get the thing you asked for... maybe? If you explained well enough, or it inferred, or you're referring to an established recipe.

05.03.2026 18:17 πŸ‘ 13 πŸ” 0 πŸ’¬ 2 πŸ“Œ 1

Another example, the gpt 4.5 "explicitly" bug.

05.03.2026 07:23 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Literally been grappling with this recently. For high-effort collaborative writing, do I just put my name? That doesn't feel right. Do I say "AI assisted"? Most people will read that as "low effort slop ahead". Do I link to a long statement explaining what I do? Seems inefficient...

03.03.2026 19:28 πŸ‘ 5 πŸ” 0 πŸ’¬ 4 πŸ“Œ 0

Which is why we need a functional vocabulary to describe the differences between low-effort prompting, high-effort iterative co-creation, and traditional human authorship. Though whatever you do, definitely don't use AI and then call it homemade.

03.03.2026 19:28 πŸ‘ 10 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

And it is a medium. One of the most powerful and frankly weirdest ever created.

And it needs its champions.

It needs its innovators.

Read the full thread.

03.03.2026 10:59 πŸ‘ 15 πŸ” 3 πŸ’¬ 1 πŸ“Œ 0

I've also directly encountered people on here arguing that *any* LLM use erodes critical thinking, that it is generally impossible to learn anything using them, and that anyone who thinks they're learning *anything* is deluding themselves. Hopefully that's a rare opinion, but it exists here.

01.03.2026 19:28 πŸ‘ 7 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

But alfalfa feeds cows and cows are food. And agriculture is the perfect industry because we need to eat... except when we criticise it, but we strictly do that separately. When discussing AI, agriculture is angelic.
/sarcasm

27.02.2026 16:11 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I think I'm so used to ChatGPT, and not just the recent models, that I expect I need to be the one bringing the epistemic rigor. But you're right that if models (Claude?) expose people to epistemic humility, it might start to rub off on the median person.

26.02.2026 18:19 πŸ‘ 5 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Hmmm. I'm curious to apply this to my co-writing. Though that emerges in a different way than code does in the first place.

26.02.2026 11:41 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

bsky.app/profile/ai4g...

25.02.2026 20:18 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

The echo chamber point is interesting. I think it's partly mitigated by vigilance but there is some related practice which I can't name right now other than to refer to my rotations essay...

25.02.2026 20:17 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Interesting feedback. On 1, I suspect that while much is learnable, there are traits which will lead to inequality in what's sustainable, but that's true of everything. 2 is very fatalistic: we failed before, therefore let's not even try? Though I agree it's not easy.

25.02.2026 20:17 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0