I think the current weird state of the world makes more sense when you remember that we are quite literally living in the post apocalypse.
It's been like 4 years since people were suddenly dropping dead on the street, of course things are weird.
I think it's worth considering the benefits and costs of a creative world where everyone is a director, but as far as I've seen this is rarely talked about.
A really common argument is that people using Suno etc are not being creative or furthering creative culture. I find this a little silly; creativity is about bringing some new vision into the world and whether you use words or strings to do that is immaterial.
I think one thing that's missing from the AI art discourse is the acknowledgement that some creative pursuits have always been about prompting and tuning. Like film/stage directors, screenwriters, composers, fashion designers...
Projections like this make me think we have actually reached a technological singularity (as Kurzweil defines it, a point at which the accelerating pace of change makes predicting the future essentially impossible)
I feel like most people don't realize that if you have a Bluetooth or WiFi enabled device on you (earbuds, smartwatch, fitness thing) it is more or less constantly broadcasting a locator beacon.
www.kold.com/2026/02/17/f...
It's more that things like brain and behavioral complexity are not necessarily very good indicators of consciousness. There isn't a threshold where we can say "below this level you are definitely not conscious" in a way that applies to LLMs
No one wants AI to be sentient *less* than these CEOs
Ever since the Blake Lemoine incident every AI company has been emphatic that their models are *not* conscious, probably because if the models were conscious then their business model would just be slavery.
We're also much more complex than a toddler but I wouldn't say that means the toddler lacks consciousness.
Complexity/intelligence and sentience seem to be two separate things, and we don't have good ways to measure the latter.
This is basically the made-out-of-meat argument though. There are differences, sure, but no one has really given a good reason for why embodiment is *necessary* for consciousness. (And it's conspicuously absent from most of the leading models)
The assumption that consciousness could occur in systems with simpler/different brain organization than ours
Technically behaviorism is a little different--it's a branch of psych/philosophy that basically says "we are bad at mechanistic interpretability, so the best way to understand a mind is through how it behaves". But behaviorism doesn't make any claims about consciousness.
It's the same reason we assume some animals and young children have conscious experience, even though their brains are organized much differently from ours
There's also a good case to be made that the human brain contains a lot of reinforcement learning and CNN-like motifs doing various things
Yep. That explains a lot about why priming and implicit learning work.
Of course this argument is weaker for LLMs than for other humans, because they are *less* similar. But it implies that the closer to human behavior something gets, the more convinced we should be that it's conscious...and LLMs are the most humanlike entities we've ever seen
One thing is that LLMs can simulate both individual people and populations of people pretty well (given enough data). They sometimes even have the same cognitive biases.
The argument from analogical inference says that the best explanation for this is that they have minds similar to ours.
I think the only thing that we can say confidently is that they might be conscious, similar to how we don't really know whether or not an octopus is conscious.
I think Turing's point still stands though. Anyone claiming that LLMs can't be conscious because of something about their inner workings is wildly overstating how much we know about the mechanisms of consciousness.
I think it's also a good argument against the argument from incredulity. A lot of humans will say "OBVIOUSLY a transformer model can't be conscious, it has no xxx" and Bisson points out aliens could say the same thing about us.
Poster says "Fuck ICE Fundraiser for LUCE & the Massachusetts Bail Fund. 10% of all shop sales and 100% of used book and game sales, 1/28-2/1 @ SQBG"
Come by Wed-Sun. We're also matching up to $150.
^pretty sure this is what they think
Well sure, if you assume you can measure the effect of policies using data and conclude whether they're effective or ineffective like a chump.
The Senate has to approve a budget next week which means that NOW is a great time to tell your senators to strip funding from ICE. This site has a script and contact info!
5calls.org/issue/dhs-bu...
FOX: Where's the protest against the regime that's killing people in the street right now?
Trent: They are doing those today. It's called the ICE protests
FOX: Are they talking about Iran at the ICE protests?
Trent: No, I think they are talking about the regime that is shooting people in the face
@ifbookspod.bsky.social had a great debunking of this
player.fm/series/if-bo...
There's been a lot of concern about "AI psychosis" as well, but currently it's not really clear that it's even a distinct thing.
One complicating thing is psychotic/delusional people will frequently see confirmation in any form of media. Before chatbots, people talked about messages in TV or radio.
How likely is "likely"? Does "likely" have a higher probability than "probable"? I put together a quick quiz so you can see how you interpret probability phrases, then see how you compare with others: probability.kucharski.io