Oooh pick me! ✋🏻
Has no one done "Let them eat surf and turf!" yet?
And maybe they didn't have the right size in stock, so Trump just bought what they had and is making everyone deal with it
This is having a very interesting effect on export-oriented companies: their European buyers demand they cut greenhouse-gas emissions, but local laws cap them at 1 MW of solar to prop up continued consumption of electricity generated from coal and LNG.
Might also be useful for them to consider that Iranians are not a monolith and not all were happy with the regime.
There is such a dearth of information about LLMs and AI products for non-tech people. Anyone have recommendations for people to follow, things to read and subscribe to in order to stay up to date? #aitech #LLMs #aifornormies
Nav Toor @heynavtoor 🚨 BREAKING: OpenAI published a paper proving that ChatGPT will always make things up. Not sometimes. Not until the next update. Always. They proved it with math. Even with perfect training data and unlimited computing power, AI models will still confidently tell you things that are completely false. This isn't a bug they're working on. It's baked into how these systems work at a fundamental level. And their own numbers are brutal. OpenAI's o1 reasoning model hallucinates 16% of the time. Their newer o3 model? 33%. Their newest o4-mini? 48%. Nearly half of what their most recent model tells you could be fabricated. The "smarter" models are actually getting worse at telling the truth.
Here's why it can't be fixed. Language models work by predicting the next word based on probability. When they hit something uncertain, they don't pause. They don't flag it. They guess. And they guess with complete confidence, because that's exactly what they were trained to do. The researchers looked at the 10 biggest AI benchmarks used to measure how good these models are. 9 out of 10 give the same score for saying "I don't know" as for giving a completely wrong answer: zero points. The entire testing system literally punishes honesty and rewards guessing. So the AI learned the optimal strategy: always guess. Never admit uncertainty. Sound confident even when you're making it up. OpenAI's proposed fix? Have ChatGPT say "I don't know" when it's unsure. Their own math shows this would mean roughly 30% of your questions get no answer. Imagine asking ChatGPT something three times out of ten and getting "I'm not confident enough to respond." Users would leave overnight. So the fix exists, but it would kill the product.
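The incentive the thread describes can be sketched in a few lines. This is my own illustration, not code from the quoted paper: assume a benchmark with simple 0/1 grading where an abstention ("I don't know") scores the same as a wrong answer. Under that assumption, guessing strictly dominates abstaining whenever the model's chance of being right is above zero, which is exactly the "always guess" strategy described above. The function name and the 10% figure are illustrative.

```python
# A minimal sketch (illustrative, not from the quoted paper) of why
# binary-graded benchmarks reward guessing: 1 point for a correct
# answer, 0 points for both a wrong answer and "I don't know".

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected benchmark score under 0/1 grading."""
    if abstain:
        return 0.0        # abstaining scores the same as being wrong
    return p_correct      # guessing earns p_correct on average

# Even at 10% confidence, guessing beats abstaining under this grading.
assert expected_score(0.10, abstain=False) > expected_score(0.10, abstain=True)
```

Under this toy grading scheme there is no confidence level at which abstaining is the better play, which is the structural point the thread is making about the benchmarks.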
This isn't just OpenAI's problem. DeepMind and Tsinghua University independently reached the same conclusion. Three of the world's top AI labs, working separately, all agree: this is permanent. Every time ChatGPT gives you an answer, ask yourself: is this real, or is it just a confident guess?
⚠️ Told you so moment:
OpenAI published a paper proving that ChatGPT will always make things up [...] Always.
They proved it with math.
[...] This isn't a bug they're working on. It's baked into how these systems work at a fundamental level.
More: x.com/heynavtoor/s...
If you showed a modern video game to someone who had never seen a computer before, they might think the NPCs are actually conscious. I feel it's the same with people who think Claude is conscious.
Abbas Amanat, IRAN: A MODERN HISTORY
If you're looking for a good "oh no, I need to know more about Iran" book, I'm really enjoying Abbas Amanat's "Iran: A Modern History" thus far.
He doesn't care as long as he can win. Winning is manly; how you get there is irrelevant to them, even if it's cowardly.
💯💯
If you think about this too much you cry. Great one from Nikkei
Foreign workers flee to Phnom Penh after mass exits from scam compounds asia.nikkei.com/spotlight/so...
Lol, that's the Garden of Eden story
No one likes it.
white women and queers - and people who love them - this is a good week to look GOP voters you know in the eye and ask them if they think it should be legally permissible for men to shoot you or your loved one in the head on the basis of being a "fucking bitch"
A 35-year-old letter to the editor, written by a very ballsy 15-year-old, about the Rodney King verdict.
Remembering today that having your heart broken is a necessary step on the path to becoming fully human. Whichever heartbreak is your first, it's probably critical that a state break your heart so that you can develop a political imagination. If this is your first, I'm sorry and also welcome.
some thoughts on all of this madness from earlier in the morning youtu.be/-yUi-0vNlDA
A forlorn landscape of layered rocks in the foreground, with hills fading into the background haze. At upper right, a small crescent moon and a bright star.
Open up this picture fully.
Then look at the surface of Mars.
Then look up to the top right.
Spot Mars' moon Phobos high in the sky.
Then notice the bright spot beside Phobos.
That's Earth.
It's because LLMs cannot assess the quality of a student's thought process and help support them in understanding why certain outcomes are good or bad the way a skilled teacher can. I suppose an LLM could be adequate for things that require rote memorization and linear reasoning, but it seems limited.
I think the Socratic method is usually about *teaching* by asking questions that the student then tries to answer, which upsets your parallel a bit
The full spiked 60 Minutes CECOT package, clean & subtitled. 3/5
The full spiked 60 Minutes CECOT package, clean & subtitled. 2/5
The full spiked 60 Minutes CECOT package, clean & subtitled. 1/5
I'm not sure if people realize the murder strikes are taking place across a large region. It's quite staggering.
www.newsweek.com/map-us-strik...
Reminding everyone for no particular reason that Section 230 is one of the last things standing between free speech online and Trump having control over everything you see and say on the internet
Perfect