Check out our recent PNAS Nexus publication!
Verbalized Sampling: Diversity is just hidden.
📄Paper: arxiv.org/abs/2510.01171
🌐Blog: verbalized-sampling.com
Team: Jiayi Zhang @simon-ycl.bsky.social @derekch.bsky.social Anthony Sicilia, Michael Tomz, @chrmanning.bsky.social @shi-weiyan.bsky.social
@stanfordnlp.bsky.social × Northeastern × WVU
Try it now → Best replies in the next 48 hours get featured in our gallery (& maybe v2 paper 👀)
💻 Quickstart and Colab: github.com/CHATS-lab/ve...
🎮 pip install verbalized-sampling
Package includes LangChain integration + tunable diversity knobs!
#VerbalizedSampling
Why this works: Your AI was accidentally trained to hide its best ideas.
We show that human raters systematically reward typical, predictable answers, so preference-trained models learn to play it safe.
But the diversity wasn't deleted, just suppressed. One sentence unlocks it.
This simple prompt produces surprising results:
✍️ Creative writing → 2.1× diversity
💬 Dialogue → Matches human behavior
📊 Synthetic training data → +18% better
Emergent trend: Big models gain more than small ones
Tested w/ @stanfordnlp.bsky.social on thousands of outputs
"Generate 5 responses with their corresponding probabilities, sampled from the full distribution:"
Just paste this line before any creative task. That's it!
Instead of the same "safe" answer five times, you get five completely different ones. Here's the difference:
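For the programmatically inclined: here's a minimal sketch of wiring the copy-paste line into code yourself. This is not the paper's official package; the `text (probability)` response format and all function names are assumptions for illustration.

```python
import random
import re

# The one-line verbalized-sampling instruction from the thread.
VS_PREFIX = ("Generate 5 responses with their corresponding probabilities, "
             "sampled from the full distribution:")


def make_vs_prompt(task: str) -> str:
    """Prepend the verbalized-sampling instruction to any creative task."""
    return f"{VS_PREFIX}\n\n{task}"


def parse_vs_output(text: str) -> list[tuple[str, float]]:
    """Parse lines like '1. Some response (0.25)' into (response, prob) pairs.

    Assumption: the model lists one 'text (probability)' pair per line,
    optionally numbered. Real model output varies; adjust as needed."""
    pairs = []
    for line in text.splitlines():
        m = re.match(r"^\s*(?:\d+[.)]\s*)?(.*?)\s*\(([\d.]+)\)\s*$", line)
        if m:
            pairs.append((m.group(1), float(m.group(2))))
    return pairs


def pick_one(pairs: list[tuple[str, float]], rng=random) -> str:
    """Sample one response, weighted by its verbalized probability."""
    texts, probs = zip(*pairs)
    return rng.choices(texts, weights=probs, k=1)[0]


# Example with a canned model reply (no API call):
reply = "1. A joke about cats (0.4)\n2. A joke about quantum physics (0.1)"
print(parse_vs_output(reply))
```

Send `make_vs_prompt("Write a blog intro about AI")` to any chat model, then run the reply through `parse_vs_output` and `pick_one` to draw from the verbalized distribution instead of taking the single mode answer.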
New paper: You can make ChatGPT 2x as creative with one sentence.
Ever notice how LLMs all sound the same?
They know 100+ jokes but only ever tell one.
Every blog intro: "In today's digital landscape..."
We figured out why – and how to unlock the rest 🔓
Copy-paste prompt: 🧵