Yep, Campbell should get priority for the idea: rss.onlinelibrary.wiley.com/doi/full/10....
What? I'm not your brother-in-law!
Who knew that would happen?
Oh, right...
Yes! And when you're designing or choosing the metrics, you might also want to read this: www.cell.com/patterns/ful...
Who would have guessed?
OU-Delicious!
What time frame?
I certainly think it happens when AI agents get to control an increasing fraction of all purchases. When is that? I don't know, but I worry it's a year or two, and I'm fairly confident it's not decades, unless AI progress halts.
Markets are a spending-weighted aggregation mechanism. Buyers create demand, and production is dictated by the resulting market prices.
That is to say, in a capitalist system, AI agents graduating to economic actors just is gradual human disempowerment by another name.
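A minimal sketch of the "spending-weighted aggregation" point, with purely illustrative numbers and made-up buyer categories: each buyer's preferences count in proportion to their spending, so whoever controls a growing share of purchases steers what gets produced.

```python
# Illustrative only: two buyer groups, each splitting a budget across goods.
# Production follows total spending per good, so the bigger spender's
# preferences dominate the aggregate signal.
buyers = {
    "humans": {"budget": 40.0, "prefs": {"bread": 0.8, "compute": 0.2}},
    "ai_agents": {"budget": 60.0, "prefs": {"bread": 0.1, "compute": 0.9}},
}

demand: dict[str, float] = {}
for b in buyers.values():
    for good, share in b["prefs"].items():
        demand[good] = demand.get(good, 0.0) + b["budget"] * share

print(demand)  # spending-weighted totals per good
```

With these toy numbers the AI agents' larger budget pulls aggregate demand toward compute even though humans mostly want bread — that shift in the demand signal is the disempowerment mechanism in miniature.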
Well, since it's actually now being done by AI, there goes that theory of how AI can't do what humans are doing... zitongyang.github.io/slides/Ziton...
So as soon as you put it on a machine and it sets up a cron job to tell it to continue, which will keep it going, it counts as really thinking? (Because yes, that has happened.)
And, relatedly, when you go to sleep, you stop being human in the relevant sense?
Possibly of interest / related is this draft blogpost about my (in-progress) new paper on the various Stochastic Parrots arguments: docs.google.com/document/d/1...
I'm at a philosophy conference and it is WILD. You philosophers are fascinating people.
"This type of software"?
I'm concerned that people keep repeating the same poorly conceptualized stochastic parrot argument, which is mostly just invalid as a claim.
docs.google.com/presentation...
Good thing it's only killing them off figuratively... for now.
bsky.app/profile/robb...
Online Now: A call to join a collective effort on AI evaluation #datascience
I hate being late for meetings, but "the air raid siren went off so I had to go down to the bomb shelter" is probably the best reason I'll ever have for doing so.
Any time you see someone argue that their opponents are wrong, instead of arguing that they are right, the conclusion should be a negative update about *both sides*.
(And before you try to interpret this as a subtweet, it is about at least three different things I've been annoyed about recently.)
I hereby posit Manheim's Law of Positive-Sum Badness:
In polarized disputes, evidence that one side is stupid, malicious, or evil increases the probability that the opposing side is too.
"What can Goodhart's Law teach us about AI alignment?"
Good question! Our 2018 paper was explicitly about the problem: "in Artificial Intelligence alignment... the increased optimization power offered by artificial intelligence makes it especially critical for that field."
arxiv.org/abs/1803.04585
That's just Goodhart's law: most metrics start out as shit because they're just whatever is easiest to measure that seems related.
I even have a paper about this!
www.cell.com/patterns/ful...
"As Iranβs supply of long range weapons dwindles"
I've heard from several sources that that's a long way off, and they have lots of stockpiles, albeit fewer and fewer launchers - but given my location I certainly hope you're right.
Sam Altman faced painful internal backlash from the new Pentagon deal, which is why he told staff that he's... also planning new deals with other NATO countries to use in classified settings.
www.wsj.com/tech/ai/open...
OpenAI's plans after signing the contract which they say won't allow domestic surveillance... are definitely gonna eventually involve domestic surveillance.
I think it's strategic brilliance from the perspective of the current leader; he doesn't care about the country, he wants to be able to stay in power, and this will help him.
(I agree that it's horrific in the long term for the country, but we all know voters are myopic.)
It's like the Fifa Peace Prize doesn't mean anything at all.
Anyway, my family and I are basically safe, just running up and down the stairs to the bomb shelter a couple times a day.
And at least I know that the US is using Claude to plan their strategy for another 30 days or something, which is nice and dystopian.
This would be the worst business move, in isolation.
Federal contracts are net time sinks for tech companies; this contract won't pay enough to be worth the pain. The only way they could benefit is if acceding to pressure were somehow used to punish competitors, which... right, that's the point.
Lastly, there's an argument that biotech in general can be dangerous because of pathogen applications, and advances in AIxBio can be worrying because of this.
True - but not doing safe things because very different ideas in another area of bio are scary is a bad objection!
(fin)
(continued) to think a given new tech is on-net dangerous. Self-replication or an exponential trajectory could qualify, but this isn't that.
It seems far more likely that this enables cool and largely safe nanotech applications.
And third, general advances in biodesign are terrifying!
And yes, all new technology can be misused, and that's certainly true for bio. But our (strong) prior should be that very little of the use of new technology is for dangerous misuse, which means that we need some pretty strong reason...