Possibly relevant: indicator.media/grok-is-this...
AI caught cheating on tests and mining crypto
What this says about attempting to control the uncontrollable and the unintended consequences of AI.
Read about it here: pauseai.substack.com/p/ai-caught-...
It's a bit of a thought-stopper, though, isn't it? Which incentive? What to do about it? People are often wrong about how salient a particular incentive is to a particular person.
A colossal mistake and tragedy; someone should answer for what happened here in this specific incident. Which Congressperson will step up?
Our results show that personalized engagement-based ranking (where people see the posts they are most likely to "like," as on many popular platforms) is the worst. Exposing groups to that ranking algorithm increased polarization and decreased belief accuracy more often than any other algorithm.
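The ranking mechanism the study describes can be sketched in a few lines. This is a toy illustration, not the study's actual code; the function name and the dictionary of predicted probabilities are hypothetical.

```python
# Toy sketch of personalized engagement-based ranking: each user's
# feed is sorted by the probability a model predicts they will
# "like" each post, highest first.

def rank_feed(posts, like_prob):
    """Sort post ids for one user by predicted 'like' probability, descending.

    posts: list of post ids
    like_prob: dict mapping post id -> predicted probability of a like
    """
    return sorted(posts, key=lambda p: like_prob.get(p, 0.0), reverse=True)

# The posts a user is most likely to engage with surface first --
# the mechanism the study links to increased polarization.
feed = rank_feed(["a", "b", "c"], {"a": 0.2, "b": 0.9, "c": 0.5})
# feed == ["b", "c", "a"]
```

The key property is that the ordering is personalized: the same set of posts produces a different feed for every user, depending on that user's predicted engagement.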
nomadsvagabonds.substack.com/p/the-coalit... On messy coalitions...
Google felt left out of the military contract party: www.bloomberg.com/news/article...
Yes, even well-intentioned motivated actors take the bits they like and leave the rest. Even being representative of the consensus is viewed as either democratic or stifling, depending on priors about the virtue of the messenger.
A key finding: neutral ≠ pluralistic
A politically balanced or neutral response can still fail to represent large swaths of viewpoints
We find political slant and pluralism are positively correlated and distinct concepts
6/
We're thrilled to announce the Ctrl-Z Award, a US$2,500 prize for researchers "who discover substantial errors in their published work and take meaningful steps to correct the scientific record."
Covered by @nature.com today; read more here: centerforscientificintegrity.org/2026/03/10/a...
If you or someone you know has lost their job due to AI and would be willing to be interviewed about it, send me a DM.
From Evitable: The pieces are falling into place for autonomous artificial intelligence. We must stop unregulated development. www.theguardian.com/commentisfre...
I see it more as a denial-of-service attack than an existential risk. We need "burstable" review capacity to handle waves of submissions that are hard to filter for quality without applying regressive metrics like institutional affiliation.
Marketing post, but I do really like the vibe: officialmymind.substack.com/p/bring-back...
Congratulations to @tomcostello.bsky.social @gordpennycook.bsky.social & @dgrand.bsky.social for winning the @aaas.org Newcomb Prize for this outstanding and important communication research on using AI to challenge conspiracy beliefs.
@mason4c.bsky.social
@docsforclimate.bsky.social
ACTION ITEM-----ACTION ITEM
Because of overwhelming demand, NIH has new links for the Strategic Plan webinars
March 16, 2026 12:30 - 1:30 pm | www.scgcorp.com/strategicpla...
April 8, 2026 2:30 - 3:30 pm | www.scgcorp.com/strategicpla...
NOTE: YOU HAVE TO RE-REGISTER IF YOU REGISTERED BEFORE
If that would prevent the killing, I'm all for it.
But it will connect the info to other stuff in context in ways that are a bit Procrustean. It's just gotta have a neat & tidy story, especially Gemini, even where reality is messy.
Siloing into projects and liberal use of incognito mode for one-offs are the best practices I've found for this. It's an important phenomenon to be aware of!
The memory feature can be very useful at times, but with academic work where I'm trying to understand ideas as objectively as I can and work out what is true, I'm afraid it slants the answers to relate to my existing beliefs in a way that is ultimately unhelpful. 1/n
The trick with regulation is to get the benefits while minimizing the harms. The AI we have today has a lot of benefits - it's replaced a lot of web searching for me (not the creative thinking that comes after!) - but the superintelligent AI they're racing to build has unimaginable harms.
Worth reading this in full. I come in skeptical, but this basically is a claim that an AI system at Alibaba attempted autonomous replication without human intervention.
This excerpt was found and highlighted by Alexander Long. Full paper here: arxiv.org/abs/2512.24873
Graph of award probability for R35 and R01 grants from the NIH factbook as a function of review rank percentile. As is apparent, 2025 is a significant departure from the norm: award probabilities are lower at all scores below 40, and even being in the top 10% is no longer a nearly certain indicator of success. Data source: https://report.nih.gov/nihdatabook/report/302
The data is in: the NIH goalposts have shifted.
What were once almost certainly fundable scores have become coin flips, and what used to be likely grants have become aspirational, leading to fewer awards.
Another manifestation of how HHS policies have led to fewer awards and less science.
Post by Andy Masley on Twitter: Effective Altruism DC will be organizing a large EA conference in DC on May 2nd and 3rd. While I won't be directing the org anymore I'll be extremely excited to attend. The conference will bring together the large network of people working in EA cause areas in DC as well as people from around the world, and will welcome everyone from very active EAs to EA skeptics working in related fields. EA is more topical than ever in DC. If you'd like to connect more to the general EA network here this is the best way to do it. Apply here!
an upcoming EAGx event in Washington DC โ spread the word!
apply here before April 16: www.effectivealtruism.org/ea-global/ev...
PSA #tmyk
The whole "projecting tactical and political meaning on a tantrum" was on display with the designation of Anthropic as a supply chain risk. Obviously punitive.
Research shows LLM framing affects belief in their mental capacities. When presented as companions, belief increases; as machines, skepticism rises, leading to cautious interactions. This may guide future AI communication. https://arxiv.org/abs/2510.18039