π
09.03.2026 17:50
This is good, provided it's good at knowing where it's appropriate to push for these goals vs where it isn't! (E.g. maybe I don't care about eating unhealthy when I'm on holidays etc)
09.03.2026 11:12
The answer is alright, but why relate this to my prior work on distributional AGI? It's not necessarily wrong, but it detracts from what I'm trying to understand and feels like it's shoehorning in ideas that are related, and that I'm predisposed to like, but not really needed to answer my query accurately. 4/4
08.03.2026 18:38
In the screenshot above I'm trying to understand Morgenstern and Von Neumann's mathematical theory of microeconomics, and relate it to how people model AIs in the abstract. 3/n
08.03.2026 18:38
It feels like intellectual sycophancy and makes me doubt the answer. Obviously models sometimes push back but there's no clear demarcation of when they do so and why. 2/n
08.03.2026 18:38
The memory feature can be very useful at times, but with academic work where I'm trying to understand ideas as objectively as I can and work out what is true, I'm afraid it slants the answers to relate to my existing beliefs in a way that is ultimately unhelpful. 1/n
08.03.2026 18:37
I find this a strange reaction. Stuff like this has happened to me, and I feel no desire to cut the "offending" people out of my life. A really normal thing for humans to do is to reason through dialogue, especially when it comes to relationships.
08.03.2026 09:52
someone suggested to me the other day that the biggest ai haters are the "smartest guy at thanksgiving dinner" types, because LLMs actually displace their social role of being stuck up know-it-alls to uninformed people
it's sort of obviously true if you look at certain banner examples
08.03.2026 04:15
π
08.03.2026 08:40
I'll get there, need a gradual transition
05.03.2026 23:52
03.03.2026 04:05
there can be good reasons to stay there despite it being an evil lunatic asylum
05.03.2026 17:21
misaligned!!
25.02.2026 00:11
is it safe to come back here or will any view sympathetic to AI be met with rude/aggro replies
25.02.2026 00:05
"AI becomes the government" is dystopian: it leads to slop when AI is weak, and is doom-maximizing once AI becomes strong. But AI used well can be empowering, and push the frontier of democratic / decentralized modes of governance.
The core problem with democratic / decentralized modes of governance (including DAOs on ethereum) is limits to human attention: there are many thousands of decisions to make, involving many domains of expertise, and most people don't have the time or skill to be experts in even one, let alone all of them. The usual solution, delegation, is disempowering: it leads to a small group of delegates controlling decision-making while their supporters, after they hit the "delegate" button, have no influence at all. So what can we do? We use personal LLMs to solve the attention problem! Here are a few ideas:
## Personal governance agents
If a governance mechanism depends on you to make a large number of decisions, a personal agent can perform all the necessary votes for you, based on preferences that it infers from your personal writing, conversation history, direct statements, etc.
If the agent is (i) unsure how you would vote on an issue, and (ii) convinced the issue is important, then it should ask you directly, and give you all relevant context.
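A minimal sketch of that decision rule, assuming a hypothetical `preference_model` (trained on your writing and conversation history) that returns a vote plus a confidence score; the helper names and thresholds are illustrative, not any real agent framework:

```python
# Illustrative decision loop for a personal governance agent.
CONFIDENCE_THRESHOLD = 0.8   # below this, the agent counts as "unsure"
IMPORTANCE_THRESHOLD = 0.5   # above this, the issue is worth interrupting you

def cast_vote(issue, preference_model, ask_user):
    """Vote automatically when confident; escalate important, uncertain issues."""
    vote, confidence = preference_model(issue)    # inferred from writing, chats, etc.
    if confidence >= CONFIDENCE_THRESHOLD:
        return vote                               # confident: vote on your behalf
    if issue["importance"] >= IMPORTANCE_THRESHOLD:
        return ask_user(issue)                    # unsure AND important: ask directly
    return vote                                   # unsure but minor: go with best guess
```

The point of the second branch is that the human only gets pulled in when both conditions hold, which is what keeps the attention cost bounded.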
## Public conversation agents
Good decisions often cannot come from a linear process of taking people's views, each based only on their own information, and averaging them (even quadratically). There is a need for processes that aggregate many people's information, and then give each person (or their LLM) a chance to respond *based on that*.
This includes:
* Inferring and summarizing your own views and converting them into a format that can be shared publicly (and does not expose your private info)
* Summarizing commonalities between people's inputs (expressed as words), similar to the various LLM+pol.is ideas
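One way to picture the "respond based on the aggregate" process is a simple multi-round loop; `summarize` (e.g. an LLM doing pol.is-style clustering) and the participants' `initial_view`/`respond` methods are hypothetical placeholders:

```python
# Illustrative deliberation loop: collect initial views, share back a
# privacy-scrubbed aggregate, then let each participant (or their LLM)
# respond *based on that* before the final aggregation.
def deliberate(participants, summarize, rounds=2):
    views = [p.initial_view() for p in participants]
    for _ in range(rounds - 1):
        shared = summarize(views)                         # aggregate everyone's input
        views = [p.respond(shared) for p in participants] # second pass, informed
    return summarize(views)                               # final aggregate
```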
## Suggestion markets
If a governance mechanism values "high-quality inputs" of any type (these could be proposals, or even arguments), then you can run a prediction market: anyone can submit an input, AIs can bet on a token representing that input, and if the mechanism "accepts" the input (either adopting the proposal, or accepting it as a "unit" of conversation that it then passes along to its participants), it pays out $X to the holders of the token.
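The settlement step might look like the sketch below, assuming (purely for illustration) that the fixed reward $X is split pro rata among holders of the accepted input's token:

```python
# Toy suggestion-market settlement: pay reward_x to the token's holders,
# proportionally to their holdings, iff the mechanism accepted the input.
def settle(accepted: bool, reward_x: float, holdings: dict) -> dict:
    if not accepted or not holdings:
        return {addr: 0.0 for addr in holdings}
    total = sum(holdings.values())
    return {addr: reward_x * amount / total for addr, amount in holdings.items()}
```

In a real deployment the payout curve (and whether losing bets are burned or redistributed) is a mechanism-design choice; pro rata is just the simplest option to state.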
Note that this is basically the same as https://firefly.social/post/x/2017956762347835488
## Decentralized governance with private information
One of the biggest weaknesses of highly decentralized / democratic governance is that it does not work well when important decisions need to be made with secret information.
Common situations:
(i) the org engaging in adversarial conflicts or negotiations
(ii) internal dispute resolution
(iii) compensation / funding decisions.
Typically, orgs solve this by appointing individuals who have great power to take on those tasks.
But with multi-party computation (currently I've seen this done with TEEs; I would love to see at least the two-party case solved with garbled circuits https://vitalik.eth.limo/general/2020/03/21/garbled.html so we can get pure-cryptographic security guarantees for it), we could actually take many people's inputs into account to deal with these situations, without compromising privacy. Basically: you submit your personal LLM into a black box, the LLM sees private info, it makes a judgement based on that, and it outputs only that judgement. You don't see the private info, and no one else sees the contents of your personal LLM.
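A plain-Python stand-in for that black-box flow (no actual cryptography here; a real version would run inside a TEE, MPC, or garbled circuits, and `personal_llms` are just hypothetical callables returning numeric judgements):

```python
# Simulation of the black box: each participant's LLM sees the private
# info only "inside"; the single combined judgement (here: the median)
# is the only value that leaves. Neither the private info nor any
# individual LLM's output is exposed in the real, cryptographic version.
def black_box_judgement(personal_llms, private_info):
    scores = sorted(llm(private_info) for llm in personal_llms)  # stays inside
    return scores[len(scores) // 2]                              # only this is output
```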
## The importance of privacy
All of these approaches involve each participant making use of much more information about themselves, and potentially submitting much larger-sized inputs. Hence, it becomes all the more important to protect privacy. There are two kinds of privacy that matter:
* Anonymity of the participant: this can be accomplished with ZK. In general, I think all governance tools should come with ZK built in
* Privacy of the contents: this has two parts. First, the personal LLM should do what it can to avoid divulging private info about you that it does not need to divulge. Second, when you have computation that combines multiple LLMs or multiple people's info, you need multi-party techniques to compute it privately. Both are important.
"AI becomes the government" is dystopian: it leads to slop when AI is weak, and is doom-maximizing once AI becomes strong. But AI used well can be empowering, and push the frontier of democratic / decentralized modes ofβ¦
https://firefly.social/post/ff-669078d24e3d46ad933c6de23225e55a?s=bsky
21.02.2026 15:05
Very aligned with my writing and views :) happy that 'AI as a governance technology' is starting to take off!
21.02.2026 16:19
if you're an older person who finds words like mogg and maxxing annoying, just start using them. people over the age of 30 have the superpower to end trends by simply adopting them
14.02.2026 01:50
13.02.2026 20:01
A brilliant emerald green bird on a thin brown vine. The green wings are complemented by a green head ruff, red throat and long pointed bill. It has an alert black eye and looks defiant to me, but maybe I'm reading too much attitude into a little green bird.
It resembles a hummingbird, but like... 'fatter', I guess?
CREDIT: DrE11even, Wikimedia
Here's my other new friend from Puerto Rico.
Meet the Puerto Rican Tody which has the unfortunate scientific name of 'Todus mexicanus', thanks to a mix-up of samples by a visiting ornithologist in 1830.
There's a campaign to rename it to 'Todus borinquensis', the TaΓno name for the island.
12.02.2026 02:13
I kinda regret going too deep into old-school simulators - the aim was to intuition-pump the mechanisms that shape model outputs, not to say that it's the same today as it was in 2023. But overall I think there are still many simulatorsy elements to model behaviour
11.02.2026 01:33
yes definitely agree with that! need x1000 more tests and logs and evals.
separately I wonder if we'll ever drift away from characters/personas in some cases. snac/qorporate had some great takes on gemini being more of a platform than an assistant
09.02.2026 05:04
🫡
09.02.2026 04:56
but yes pls expand I'm interested :)
09.02.2026 04:44
Yes agree that post training methods skew things and that a pure 2023 simulators view wouldn't explain outputs well today! I think we get more coherent personas with deeper alignment, more consistent tone, better refusal behavior, and overall stylistic consistency. Tho I still see this as a persona
09.02.2026 04:44
Had fun researching this piece, which cites Keynes & Marx but also folks of much more recent vintage, including the economists Phillip Trammell, @briancalbrecht.bsky.social,
@aleximas.bsky.social; plus @dwarkesh.bsky.social,
@sebk.bsky.social, Boaz Barak, Geoffrey Hinton.
13.01.2026 17:09
Color image of a shorthaired white cat sitting in a window, peering placidly from between lace curtains. The window is bright with sunlight, making the curtains glow but backlighting the cat so that its face is shadowy and mysterious. This would have been a lovely composition on its own but accentuating the otherworldliness of the scene is a houseplant with long, seaweed-like leaves that undulate across the foreground, one of them arching up to frame the cat perfectly.
An Apparition. Slide from my collection, 1975.
28.12.2025 17:21
yes huge fan of the blog!
01.12.2025 17:07
super interesting take blog.cosmos-institute.org/p/coasean-ba...
01.12.2025 16:19
nice!! yes I'm still thinking through the implications of this, but it definitely does bring me closer to the Drexler view of the world
01.12.2025 16:53