RationallyDense (they/he/she/any)

@rationallydense

269
Followers
318
Following
2,800
Posts
30.09.2023
Joined
Latest posts by RationallyDense (they/he/she/any) @rationallydense

I prefer the plan where we use nukes to tunnel under the mountains of the Musandam Peninsula to make an underground canal, Atoms-for-Peace-style. It's also a bad idea, but at least it would be kind of cool.

12.03.2026 18:58 👍 1 🔁 0 💬 0 📌 0

No

11.03.2026 22:51 👍 1 🔁 0 💬 0 📌 0

You got it right, yes.

11.03.2026 15:53 👍 10 🔁 0 💬 0 📌 0

Early Royal Society-ass experiment

11.03.2026 15:29 👍 21 🔁 6 💬 0 📌 0

But it's mine!

10.03.2026 20:45 👍 1 🔁 0 💬 0 📌 0

Promise?

10.03.2026 20:13 👍 0 🔁 0 💬 0 📌 0

The past 3 were resounding successes, right?

10.03.2026 20:08 👍 0 🔁 0 💬 1 📌 0

In general, the only reason to lean towards EDT is that tracing the consequences of your actions can be hard. And so much of the time, an explicit EDT policy is a better approximation of CDT than an explicit CDT policy. But CDT is the correct decision theory in our world.

10.03.2026 17:01 👍 1 🔁 0 💬 0 📌 0

In all honesty, I think Newcomb's problem is bad except as a way to highlight some situations where CDT and EDT come apart. Otherwise, it's best summed up as "if people out there are rewarding one-boxers, you should be a one-boxer" and like, obviously. But then it's an empirical question.

10.03.2026 17:01 👍 1 🔁 0 💬 1 📌 0
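
The place where the two theories "come apart," as the posts above put it, can be sketched numerically. This is a toy model of Newcomb's problem, assuming the standard payoffs ($1M in the opaque box, $1k in the transparent one) and a predictor of accuracy `p`; the function names and numbers are illustrative, not anyone's canonical formalization.

```python
# Newcomb's problem, toy version: a predictor fills the opaque box with
# $1M only if it predicts you will take just that box ("one-box").
# EDT conditions on the action as evidence about the prediction;
# CDT treats the box contents as already fixed at decision time.

MILLION, THOUSAND = 1_000_000, 1_000

def edt_value(action, accuracy):
    """Expected payoff, conditioning on the action as evidence."""
    if action == "one-box":
        return accuracy * MILLION          # predictor likely foresaw one-boxing
    return accuracy * THOUSAND + (1 - accuracy) * (MILLION + THOUSAND)

def cdt_value(action, p_box_filled):
    """Expected payoff, holding the (already fixed) box contents constant."""
    base = p_box_filled * MILLION
    return base if action == "one-box" else base + THOUSAND

# With an accurate predictor, EDT prefers one-boxing...
assert edt_value("one-box", 0.99) > edt_value("two-box", 0.99)
# ...while CDT prefers two-boxing regardless of its beliefs about the box.
for p in (0.0, 0.5, 1.0):
    assert cdt_value("two-box", p) > cdt_value("one-box", p)
```

This is also why "if people out there are rewarding one-boxers, you should be a one-boxer" is an empirical question: the EDT side of the comparison hinges entirely on the `accuracy` parameter.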

I'm a two-boxer because there is some guy out there who hands out $2 million to the kinds of people who two-box.

10.03.2026 06:23 👍 1 🔁 0 💬 1 📌 0

I really lost a lot of respect for Snyder in the last few years. Posting idiotic stuff like this from Toronto.

08.03.2026 15:11 👍 235 🔁 21 💬 7 📌 7

No. There's no evidence for this. They started a war in Iran to destabilize Iran. That has been the pretty much open goal of Israel and the US right wing for decades. There's no reason to reach for weird conspiracy theories.

10.03.2026 05:07 👍 1 🔁 0 💬 0 📌 0

Lol

10.03.2026 05:04 👍 0 🔁 0 💬 0 📌 0

Yeah, I want evidence of a guy weirdly spending hours on a nonsensical diorama, not pictures of action figures fighting.

10.03.2026 04:44 👍 4 🔁 0 💬 1 📌 0

Distinctive talent or not, 10 people got to go home to their loved ones thanks to you. That matters. :-)

10.03.2026 04:41 👍 3 🔁 0 💬 0 📌 0

Yeah. AIUI, traditionally, the feds hate losing. So they dot their i's, cross their t's, and if they think they might lose, they back off. The aggressive immigration enforcement means they're both taking riskier cases and doing a bad job. Still surprising to see how bad they've gotten.

10.03.2026 02:57 👍 4 🔁 0 💬 1 📌 0

Soon enough, the USA.

10.03.2026 02:32 👍 0 🔁 0 💬 0 📌 0

Federal habeas petitions have a ~1% success rate normally. So either this rando averaged 34 petitions a day for the past month, or the feds suck now.

10.03.2026 02:31 👍 1 🔁 0 💬 1 📌 0
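
The arithmetic behind that post can be written out. This assumes the ~1% baseline grant rate stated above and takes "10 wins in a month" from context (the nearby post about 10 people going home); every number is rough and illustrative.

```python
import math

# At a ~1% baseline success rate, how many filings would ~10 granted
# habeas petitions in one month imply for a single petitioner?
baseline_rate = 0.01            # ~1% of federal habeas petitions succeed normally
wins = 10                       # successes attributed to one filer (from context)
days = 30                       # "the past month"

implied_filings = wins / baseline_rate        # ~1000 petitions needed
per_day = math.ceil(implied_filings / days)   # rounded up to whole petitions
assert per_day == 34                          # the "34 petitions a day" in the post
```

So either the filing volume is implausibly high, or the effective success rate against these cases is far above the baseline.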

I feel happy.

bsky.app/profile/rati...

09.03.2026 22:53 👍 3 🔁 0 💬 0 📌 0

Finally, a brief moment of happiness.

bsky.app/profile/rati...

09.03.2026 22:52 👍 0 🔁 0 💬 0 📌 0

The way I see it, the role of interpretations of QM is to resolve the apparent contradiction between the Born rule and Schrödinger's equation. The simplicity of the MWI is that it doesn't ask us to add anything to the mix. But it doesn't tell us why those are the equations.

09.03.2026 22:50 👍 0 🔁 0 💬 1 📌 0
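
For reference, the two postulates this thread is contrasting, written out side by side; as standardly stated, the first is deterministic and the second probabilistic, and neither follows from the other:

```latex
% Unitary, deterministic time evolution (Schrödinger equation):
i\hbar \,\frac{\partial}{\partial t}\,\lvert \psi(t) \rangle = \hat{H}\,\lvert \psi(t) \rangle
% Measurement probabilities (Born rule), for an eigenstate |a_i> of the
% measured observable:
P(a_i) = \bigl\lvert \langle a_i \mid \psi \rangle \bigr\rvert^{2}
```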

I don't really get what you mean. We don't expect the Schrödinger equation to fall out of MWI, right? Why should the Born rule?

09.03.2026 22:25 👍 0 🔁 0 💬 1 📌 0

Frankly, I am confused by the insistence that LLMs can't do anything. I have now been blocked by multiple people in response to stating that I have found LLMs useful in completing some tasks. I get that they are oversold, but of course they can do some things.

09.03.2026 22:13 👍 1 🔁 0 💬 0 📌 0

Why do we have to justify it? I get that some MWI proponents want to argue the Born rule falls out of the theory, but can't we just say the Born rule happens to be what it's like to become entangled with a system in a superposition?

09.03.2026 22:10 👍 0 🔁 0 💬 1 📌 0

Frankly, I'm baffled by your reaction. I have personally used LLMs to complete tasks which in my experience would otherwise have been much more work. What basis do you have to claim that's not possible? Do you think I'm lying for some reason?

09.03.2026 22:03 👍 0 🔁 0 💬 0 📌 0

1. Because in some cases, checking the solution is a lot easier than generating it. For instance, finding a library function that does something is a lot more work than reading that one function's documentation.

2. If it stops being cheap, I'll just stop using it.

09.03.2026 21:47 👍 0 🔁 0 💬 1 📌 0

It definitely has some uses outside of coding. Writing a bunch of bullet points and feeding them to an LLM yields, in my experience, a reasonable first draft for a memo or email. That's useful for a lot of corporate jobs.

But yeah, it's not useful to everyone. And I don't know why people pretend otherwise.

09.03.2026 21:05 👍 0 🔁 0 💬 0 📌 0

They definitely "know" a lot more than a green intern. That's one of their big advantages. I've been using Claude to set up a home lab, and not having to read reams of documentation to find the feature I'm looking for is very useful.

09.03.2026 20:39 👍 0 🔁 0 💬 0 📌 0

Only if this process takes more time than doing it myself in the first place on average. For some tasks, that's the case. For others, it's not. Same thing for the amount of effort exerted. That's why I don't use it for everything.

09.03.2026 20:32 👍 0 🔁 0 💬 0 📌 0

You don't need reliability if it's easy to check their answers and cheap to have them try again.

09.03.2026 20:25 👍 0 🔁 0 💬 2 📌 0
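
The pattern described across these posts — tolerating an unreliable generator by pairing it with a cheap verifier and a retry budget — can be sketched minimally. Here `generate` and `check` are hypothetical stand-ins for an LLM call and a cheap correctness check; the toy generator below just simulates an unreliable source.

```python
import random

def retry_until_verified(generate, check, max_attempts=5):
    """Keep asking an unreliable generator for candidates until one
    passes a cheap verifier, up to a fixed retry budget."""
    for _ in range(max_attempts):
        candidate = generate()
        if check(candidate):        # checking is assumed much cheaper than generating
            return candidate
    return None                     # budget exhausted: fall back to doing it yourself

# Toy stand-in: a "generator" that is right only ~30% of the time.
rng = random.Random(0)
flaky = lambda: rng.random() < 0.3  # True stands for a correct answer
result = retry_until_verified(flaky, lambda ans: ans, max_attempts=20)
assert result is True               # with 20 tries, a hit is overwhelmingly likely
```

The economics only work while both halves hold: if verification gets expensive, or retries stop being cheap, the loop costs more than doing the task directly — which is the point made a few posts up.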