
William Gunn

@metasynthesis.net

Communications and community. More here: http://synthesis.williamgunn.org/about/ Talk to me: https://calendar.app.google/z4KR3xfXTc178es47

585
Followers
454
Following
2,645
Posts
27.03.2023
Joined

Latest posts by William Gunn @metasynthesis.net

Grok-is-this-True Tracker | Indicator
Indicator is your essential guide to understanding and investigating digital deception. Sign up for free

Possibly relevant indicator.media/grok-is-this...

11.03.2026 19:53 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

AI caught cheating on tests and mining crypto

What this says about attempting to control the uncontrollable and the unintended consequences of AI.

Read about it here: pauseai.substack.com/p/ai-caught-...

11.03.2026 11:47 ๐Ÿ‘ 2 ๐Ÿ” 2 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

It's a bit of a thought-stopper, though, isn't it? Which incentive? What to do about it? People are often wrong about how salient a particular incentive is to a particular person.

11.03.2026 19:49 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

A colossal mistake and tragedy and someone should answer for what happened here in this specific incident. Which Congressperson will step up?

11.03.2026 19:43 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

Our results show that personalized engagement-based ranking (where people see the posts they are most likely to โ€œlike,โ€ as on many popular platforms) is the worst. Exposing groups to that ranking algorithm increased polarization and decreased belief accuracy more often than any other algorithm.

11.03.2026 15:18 ๐Ÿ‘ 3 ๐Ÿ” 1 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

nomadsvagabonds.substack.com/p/the-coalit... On messy coalitions...

10.03.2026 21:44 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0
Google to Provide Pentagon With AI Agents for Unclassified Work
Alphabet Inc.'s Google is introducing artificial intelligence agents across the Pentagon's three million-strong workforce to automate routine jobs, according to a senior defense official.

Google felt left out of the military contract party: www.bloomberg.com/news/article...

10.03.2026 20:25 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

Yes, even well-intentioned actors take the bits they like and leave the rest. Even representing the consensus is viewed as either democratic or stifling, depending on priors about the virtue of the messenger.

10.03.2026 20:19 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

A key finding: neutral โ‰  pluralistic

A politically balanced or neutral response can still fail to represent large swaths of viewpoints

We find political slant and pluralism are ๐™ฃ๐™š๐™œ๐™–๐™ฉ๐™ž๐™ซ๐™š๐™ก๐™ฎ ๐™˜๐™ค๐™ง๐™ง๐™š๐™ก๐™–๐™ฉ๐™š๐™™ and ๐™™๐™ž๐™จ๐™ฉ๐™ž๐™ฃ๐™˜๐™ฉ concepts

6/

10.03.2026 17:43 ๐Ÿ‘ 3 ๐Ÿ” 1 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0
Keep calm and be transparent: advice from scientists who retracted their papers
Retractions correct the scientific record, but they have stigma attached to them. Some in the research community want that to change.

We're thrilled to announce the Ctrl-Z Award, a US$2,500 prize for researchers "who discover substantial errors in their published work and take meaningful steps to correct the scientific record."
Covered by @nature.com today; read more here: centerforscientificintegrity.org/2026/03/10/a...

10.03.2026 15:37 ๐Ÿ‘ 430 ๐Ÿ” 174 ๐Ÿ’ฌ 6 ๐Ÿ“Œ 20

If you or someone you know has lost their job due to AI and would be willing to be interviewed about it, send me a DM.

10.03.2026 20:06 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0
AI agents pose untold risk to humanity. We must act to prevent that future | David Krueger
The pieces are falling into place for autonomous artificial intelligence. We must stop unregulated development

From Evitable: The pieces are falling into place for autonomous artificial intelligence. We must stop unregulated development. www.theguardian.com/commentisfre...

09.03.2026 20:17 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

I see it more as a denial-of-service attack than an existential risk. We need "burstable" review capacity to handle waves of submissions that are hard to filter for quality without applying regressive metrics like institutional affiliation.

09.03.2026 17:35 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0
Bring back the slow internet: remember when the algorithms weren't in charge?

Marketing post, but I do really like the vibe: officialmymind.substack.com/p/bring-back...

09.03.2026 17:33 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0
Durably reducing conspiracy beliefs through dialogues with AI
Conspiracy theory beliefs are notoriously persistent. Influential hypotheses propose that they fulfill important psychological needs, thus resisting counterevidence. Yet previous failures in correctin...

Congratulations to @tomcostello.bsky.social @gordpennycook.bsky.social & @dgrand.bsky.social for winning the @aaas.org Newcomb Prize for this outstanding and important communication research on using AI to challenge conspiracy beliefs.

@mason4c.bsky.social
@docsforclimate.bsky.social

08.03.2026 20:30 ๐Ÿ‘ 24 ๐Ÿ” 5 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0
Developing the NIH-Wide Strategic Plan for Fiscal Years 2027-2031

ACTION ITEM-----ACTION ITEM

Because of overwhelming demand, NIH has new links for the Strategic Plan webinars

March 16, 2026 12:30 - 1:30 pm | www.scgcorp.com/strategicpla...

April 8, 2026 2:30 - 3:30 pm | www.scgcorp.com/strategicpla...

NOTE: YOU HAVE TO RE-REGISTER IF YOU REGISTERED BEFORE

09.03.2026 15:14 ๐Ÿ‘ 25 ๐Ÿ” 17 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

If that would prevent the killing, I'm all for it.

09.03.2026 02:06 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

But it will connect the info to other stuff in context in ways that are a bit Procrustean. It's just gotta have a neat & tidy story, especially Gemini, even where reality is messy.

09.03.2026 01:59 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

Siloing into projects and liberal use of incognito mode for one-offs are the best practices I've found for this. It's an important phenomenon to be aware of!

09.03.2026 01:55 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

The memory feature can be very useful at times, but with academic work where I'm trying to understand ideas as objectively as I can and work out what is true, I'm afraid it slants the answers to relate to my existing beliefs in a way that is ultimately unhelpful. 1/n

08.03.2026 18:37 ๐Ÿ‘ 41 ๐Ÿ” 2 ๐Ÿ’ฌ 7 ๐Ÿ“Œ 0

The trick with regulation is to get the benefits while minimizing the harms. The AI we have today has a lot of benefits - it's replaced a lot of web searching for me (not the creative thinking that comes after!) - but the superintelligent AI they're racing to build has unimaginable harms.

09.03.2026 01:51 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

Worth reading this in full. I came in skeptical, but this is basically a claim that an AI system at Alibaba attempted autonomous replication without human intervention.

This excerpt was found and highlighted by Alexander Long. Full paper here: arxiv.org/abs/2512.24873

07.03.2026 03:57 ๐Ÿ‘ 45 ๐Ÿ” 8 ๐Ÿ’ฌ 2 ๐Ÿ“Œ 4
Did Alibaba's ROME AI try to break free during training?
In "Let It Flow: Agentic Crafting on Rock and Roll, Building the ROME Model within an Open Agentic Learning Ecosystem" (https://arxiv.org/abs/2512.24873), the authors describe something that happened ...

Probably nothing... manifold.markets/MaxHarms/did...

07.03.2026 17:10 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0
Graph of award probability for R35 and R01 grants from the NIH factbook as a function of review rank percentile. As is apparent, 2025 is a significant departure, with lower award probabilities at all scores <40 and significant departures from the norm, where even being in the top 10% is no longer a nearly certain indicator of success.

Data source: https://report.nih.gov/nihdatabook/report/302


The data is in: the NIH goalposts have shifted.

What were once near-certain fundable scores have become coin flips, and what used to be likely grants have become aspirational, leading to fewer awards.

Another manifestation of how HHS policies have led to fewer awards and less science.

07.03.2026 01:59 ๐Ÿ‘ 682 ๐Ÿ” 417 ๐Ÿ’ฌ 19 ๐Ÿ“Œ 60
Post by Andy Masley on Twitter:

Effective Altruism DC will be organizing a large EA conference in DC on May 2nd and 3rd. While I won't be directing the org anymore I'll be extremely excited to attend. The conference will bring together the large network of people working in EA cause areas in DC as well as people from around the world, and will welcome everyone from very active EAs to EA skeptics working in related fields.

EA is more topical than ever in DC. If you'd like to connect more to the general EA network here this is the best way to do it.

Apply here!


an upcoming EAGx event in Washington DC โ€” spread the word!

apply here before April 16: www.effectivealtruism.org/ea-global/ev...

06.03.2026 18:47 ๐Ÿ‘ 6 ๐Ÿ” 1 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

PSA #tmyk

07.03.2026 01:52 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0
Post image
06.03.2026 17:20 ๐Ÿ‘ 3 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 1

The whole "projecting tactical and political meaning on a tantrum" was on display with the designation of Anthropic as a supply chain risk. Obviously punitive.

06.03.2026 17:14 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0
Preview
Presenting Large Language Models as Companions Affects What Mental Capacities People Attribute to Them

Research shows LLM framing affects belief in their mental capacities. When presented as companions, belief increases; as machines, skepticism rises, leading to cautious interactions. This may guide future AI communication. https://arxiv.org/abs/2510.18039

06.03.2026 03:00 ๐Ÿ‘ 10 ๐Ÿ” 2 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0