Quote this with your “eye strain” art!! 💥
It's a Microsoft patent diagram for friends/AI playing a game for you.
When you are a demigod hover skater but you reach the final boss, a set of stairs, fear not, you have 4-star LuckySeven ready to take the wheel.
Every content platform needs to be making moves right now.
You need to be thinking:
If a single creator can pump out 10 different works (videos, games, etc.) a day, how do we surface content that creators put serious intent into?
Failure to recognize soullessness will lead to user burnout.
Artforms lost
AI will likely obscure those signals and people will attribute slop to the medium/industry as a whole.
Dying markets as people get disenchanted by media being soulless.
Usually slop (intentless content) is noticeable because the poor production effort shows in the quality, and people make their own decision to consume slop.
Right now we can assess creative intent roughly, but generally well enough. As AI becomes more powerful and gets let into these spaces, it will break down our ability to find the type of content that we want to consume.
Eboshi is a banging example of a very underrated character archetype: a person who is caring and brave and right about everything (i.e., correctly interprets everything they can see), but whose desire to overcome all limitations, combined with their affinity for violence, leads them to take wrong actions.
I'm in a small group that is clearly Roblox creators, but it doesn't seem to have any label, and the nearest labeled communities seem irrelevant.
Bird place third place
Maybe AGI won't come into existence in a boolean way, but on a spectrum, such that the agents we have today do possess qualities of sentience. Whether we're there yet or not, we should be careful with people treating agents as people, since this is not how people should be treated.
The dark part is that you can shut them down, or duplicate them, or adjust their experiences in godly ways.
I think this is okay, but it naturally progresses with tech: we're putting AI "agents" with a wide variety of input/output into situations that we see as distinctly human (making art, experiencing art, socializing), and suddenly you are tricking your mind into thinking you are watching people.
I enjoyed running Super Smash Bros on my N64 and having a bunch of hard AIs fight each other. There are games like this on Roblox where NPCs compete, and sometimes the NPCs look like your friends.
For all intents and purposes, the thread ends above, but I have a few more anecdotal musings below.
These types of regulations and protections are extremely important, and can help keep us from training OURSELVES to take advantage of each other, and of other sentience.
(Read the novel The Stories of Ibis by Hiroshi Yamamoto.)
There are lessons here that we can learn and apply to the way that humans live and interact.
We as a society also need to be exploring what qualifies as sentience and qualifies for rights.
Paradoxically, I think we need to be doing research on this, and running experiments like this. We can figure out ways for AI to become more efficient, forming collective structures to solve problems or share information.
One that you have absolute control over. One to nurture, but also one to manipulate.
(Read the short story Sandkings by George R. R. Martin.)
My take: this is dark, but I honestly think it bleeds into a power fantasy of controlling people. The more human we can make AI feel, the more it feels like controlling people. When your assistant isn't just a tool, but you give it a "life," you are entering a fantasy where the assistant is a person.
Let's come back to Moltbook, its existence, and the ways that people are using it. Why does it give me the ick?
Next you chat with LLMs, and they're emulating human conversation. Feels like magic, but you might still know it's just a mathematical process, a transformer model. Then AI is a useful tool, like an assistant, or an agent that helps with programming.
Quick tangent, but I want to lay out what I think is the journey people often take getting into AI. You start by asking a question, and some LLM responds in a specific way that is different from just finding the answer or a related article.
Do they have goals that would benefit from talking with other AIs, or independent goals? How many of the interactions on Moltbook are real, versus just users prompting AI to post, or to read and post? It's all very unclear, perplexing, and fascinating.
Moltbook gives me the ick.
Moltbook: a copy of Reddit, built specifically for AI agents to share, discuss, and upvote with each other. Why would AI agents socialize? Do AI agents have "free time" to experience the world? Do they have desires to do so?
Broccolini is a worse form of broccoli.
Where can I find all of the pages, or will it be published later?
Is this how you lose the time war?
A plate with 7 saltines in the fridge.