Looking forward to seeing your social media agent!
@nearestnabors.com Thanks for the great video about building your own agents. I feel like encrypted private cloud storage of memories and conversations is a missing piece? If I could talk with a local LLM that knows all my context and later pull up that convo on my phone, that would be sweet.
Signals are amazing. Angular baked fetch() into a signal and it makes my life so easy. angular.dev/guide/signal... Moar declarative is moar better.
Hey folks - I'm in kinda deep shit again and need some help. I need to get enough to pay our power bill by tomorrow morning to avoid disconnection, and we don't have anything coming in for at least three more days.
PayPal.me/MaeGodHaveMercy
Cash.app/$MaeGodHaveMercy
Venmo.com/u/MaeGodHaveMercy
There are a few books named "Nexus" - is the one by Yuval Noah Harari the one you're talking about?
@simonwillison.net Thanks for that post about the SIFT method - glad to have that in my toolbox.
Someone made a GPT that applies this method: chatgpt.com/g/g-fnPHfIoJ... Nice little fact-checking bot. Listing the steps it takes may help me internalize the procedure.
Running a trained LLM (inference) is very cheap. For the same energy cost as an hour of streaming video, you could ask ChatGPT 300 questions. Training uses more, but amortized across all the usage a model gets, it's still cheap.
what-uses-more.com
andymasley.substack.com/p/a-cheat-sh...
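The 300-questions figure checks out as a back-of-envelope calculation. The per-unit numbers below are my own assumptions for illustration (they are in the ballpark of commonly cited estimates, not taken from the linked posts):

```python
# Back-of-envelope check of the "300 questions per hour of streaming" claim.
# Assumed figures (illustrative, not from the linked posts):
#   ~0.3 Wh of electricity per ChatGPT query
#   ~90 Wh of data-center + network energy per hour of streaming video
WH_PER_QUERY = 0.3
WH_PER_STREAMING_HOUR = 90.0

queries_per_streaming_hour = WH_PER_STREAMING_HOUR / WH_PER_QUERY
print(f"{queries_per_streaming_hour:.0f} queries")  # 300 queries
```

Swap in whatever per-query and per-hour estimates you trust; the point is the ratio, which stays in the hundreds under most published figures.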
In the past I've written at length about how much good we can do with the technology despite the bad character of the people in charge. (Kind of like Edison and electricity.) These days I mostly keep my mouth shut. Tired of anger, pessimism, and outrage.
That using ChatGPT destroys the environment through using a ton of water and electricity.
Maybe it depends on the extent to which you and the other person are part of a shared community: how much you expect to see them again, the degree of trust shared between you? Arguing with strangers feels unproductive if you're not an Influencer.
I am pondering how much I should engage with people spreading misinformation about AI on the internet. I commonly see people repeating untruths about how much water & electricity they use. But I've kind of burned myself out having arguments with people being Wrong On The Internet.
Lawful Good, for sure.
Kicking off a series on how AI is changing the Web: how it's built, how it's consumed, and how the economics work. agenticweb.substack.com/p/ai-future-...
Ooh, pick me!
IDK, killing bad social networks sounds like something that would be worth doing for free. I can think of worse ways to spend my time.
But companies have also achieved success by being trustworthy and acting in the customer's interest, and they've built massive customer loyalty by doing so (e.g. Valve, Costco, Apple). For a product that handles extremely sensitive personal information, I think that's the way to go.
AI companies optimizing for addictiveness in a race to the bottom is definitely a major risk. Many companies have achieved success by doing that (e.g. Facebook, TikTok).
(It could be that training an LLM on an ethics textbook to create an AI conscience is overkill. We have guardrails, the OpenAI Model Spec, and Claude's Constitution, and those work… most of the time. Maybe stronger measures are needed?)
You don't want to be overbearing with AI ethics. AI shouldn't be preachy and it should defer to human judgement up to a certain point. But there must be lines it won't cross. We trust humans based on our assessment of what they do and what they refuse to do, and I think the same will be true of AI.
Having AIs that are trustworthy could be an important competitive advantage, especially given the sensitivity of the information they'll work with and the power they'll have in people's lives.
Maybe the same is true for AI ethics. People want AIs that will follow their every command. But AIs without a strong moral compass will repeatedly fail their owners. They'll get bamboozled into revealing secrets and doing harm, and people will regret using them.
I've heard it said that "security *is* capability". If you release an insecure system, before too long someone will exploit it and you'll be worse off than the competitor that took the time to get it right.
I wonder if we could design agents to be virtuous. Like, train an LLM on some particular school of moral philosophy and stick it in the agent's decision-making loop. If a plan is unethical, require the agent to make a different plan.
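One minimal way to sketch that loop. Everything named here is a hypothetical stand-in: in a real agent, `propose_plan` and `ethics_verdict` would each be LLM calls (the latter to the philosophy-tuned model); here they're stubbed so the control flow is visible:

```python
# Sketch of an agent planning loop with an ethics check as a gate.
# All function names are hypothetical stand-ins, stubbed for illustration.

def propose_plan(goal: str, rejected: list[str]) -> str:
    # Stand-in planner: yields candidate plans, skipping vetoed ones.
    candidates = ["scrape private data", "use the public API"]
    for plan in candidates:
        if plan not in rejected:
            return plan
    raise RuntimeError("planner exhausted its candidates")

def ethics_verdict(plan: str) -> bool:
    # Stand-in for an LLM trained on a moral-philosophy corpus:
    # returns False if the plan crosses a line.
    return "private data" not in plan

def plan_with_conscience(goal: str, max_tries: int = 5) -> str:
    rejected: list[str] = []
    for _ in range(max_tries):
        plan = propose_plan(goal, rejected)
        if ethics_verdict(plan):
            return plan          # plan passes the ethics check
        rejected.append(plan)    # veto: force the planner to replan
    raise RuntimeError("no ethical plan found within the retry budget")

print(plan_with_conscience("get usage statistics"))  # use the public API
```

The design point is that the conscience has veto power but no planning power: it can only reject, which forces the planner to search for an acceptable alternative rather than letting one model rationalize its own plan.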
I'm subscribed to his Google group. groups.google.com/g/komoroske-... I don't remember how I originally found him. But he publishes weekly updates that are kind of abstract and speculative but also thought-provoking and sometimes inspiring. He has big ideas.
Addictive & unhealthy AI will certainly happen. Facebook will build it if no-one else does. All the more important that prosocial AI seize the initiative and try to outcompete it.
This guy has some interesting thoughts on how that might work: docs.google.com/document/d/1...
Okay, good to know. Thank you!
@allthingsopen.bsky.social Question about AllThingsOpen.ai: Is Tuesday attendance included when registering for the GenAI workshop, or purchased separately?
Men's briefs. The boxers are too big and the bikini cut is too small.
@maegodhavemercy.com Emailed you with a possible job lead using the address on Real Life's contact page. It's okay if you're not interested - just didn't want it to languish in your spam folder.
Thank you! I'll pass that along to her.