Screenshot of email from OpenRouter. Text: "Your 2025 OpenRouter Wrapped has arrived! Which model was your #1?"
And here I thought OpenRouter Wrapped was a bit excessive
If you can change the world for the better, it doesn't matter whether you *deserve* to change the world. You're the only one who can: are you going to?
I think all this can lead to the same narrowness that you identify, just via a different question which makes no reference to deserving.
This is a motive that's easy to understand: "if I were in charge, things would be different."
And I think it sidesteps questions of "desert" entirely: "you don't deserve this" can be discounted because it's the wrong question. They have power, so now things *are* going to be different.
I don't think "people with power deserve power" is part of it. They can be very critical of people with power!
Which explains the second part better: (in their minds) they are especially well-positioned to use power well, and everyone else is doing a bad job. It's their *duty* to become powerful.
Some comparisons from the post.
A sense of scale is important. If ChatGPT is a major climate priority, we should treat it as one. If not, it's counterproductive for people focused on climate to devote their attention here.
For people who are concerned about the environmental impacts: I've looked into this and afaict ChatGPT uses less energy than Netflix or YouTube.
Exact numbers are hard to find, but this piece does a good job. Open to compelling sources saying otherwise!
andymasley.substack.com/p/individual...
@andymasley.bsky.social recently wrote a useful piece putting the environmental costs of chatbots into perspective: it's very low! This makes the character benefit of thanking your chatbot clearly worth it :)
bsky.app/profile/andy...
(And then for some reason there are talking animals and an uplifted space heater. Those were fun but I only vaguely saw the relevance; there was probably more to get there that I'm missing.)
It doesn't really answer those questions, just ... explicates them? Figures out what they could be asking, and what some candidate answers could be. So I felt like I understood the questions slightly better after reading it, but didn't have better answers.
I think it's genuinely trying to answer some of the philosophical questions. "Supposing we have complete automation and complete control over our own biology, would there still be any purpose or meaning? Would such a world be missing something important? If so, might it still be worth pursuing?"