@shi-weiyan
28 Followers · 29 Following · 7 Posts · Joined 14.02.2025

Latest posts by @shi-weiyan

Check out our recent PNAS Nexus publication!

19.02.2026 18:49 👍 0 🔁 0 💬 0 📌 0
[Post image]

Verbalized Sampling: Diversity is just hidden.

📄Paper: arxiv.org/abs/2510.01171
🌐Blog: verbalized-sampling.com

Team: Jiayi Zhang, @simon-ycl.bsky.social, @derekch.bsky.social, Anthony Sicilia, Michael Tomz, @chrmanning.bsky.social, @shi-weiyan.bsky.social
@stanfordnlp.bsky.social × Northeastern × WVU

15.10.2025 14:08 👍 2 🔁 0 💬 0 📌 1
[Link preview] GitHub - CHATS-lab/verbalized-sampling: Verbalized Sampling, a training-free prompting strategy to mitigate mode collapse in LLMs by requesting responses with probabilities. Achieves 2-3x diversity improvement while maintaining quality.

Try it now → Best replies in the next 48 hours get featured in our gallery (& maybe v2 paper 👀)

💻 Quickstart and Colab: github.com/CHATS-lab/ve...
🎮 pip install verbalized-sampling

Package includes LangChain integration + tunable diversity knobs!

#VerbalizedSampling

15.10.2025 14:08 👍 2 🔁 0 💬 1 📌 0

Why this works: Your AI was accidentally trained to hide its best ideas.

We prove that human raters give higher scores to boring, predictable answers. So models learn to play it safe.

But this diversity wasn't deleted - just suppressed. One sentence unlocks it all.

15.10.2025 14:08 👍 1 🔁 0 💬 1 📌 0
[Post image]

This simple prompt produces very surprising results:

✍️ Creative writing → 2.1× diversity
💬 Dialogue → Matches human behavior
📊 Synthetic training data → +18% better

Emergent trend: Big models gain more than small ones
Tested w/ @stanfordnlp.bsky.social on thousands of outputs

15.10.2025 14:08 👍 3 🔁 0 💬 1 📌 0
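The diversity numbers above come from the paper's own evaluation. As a rough illustration of what "2.1× diversity" could mean, here is a minimal sketch of distinct-n, a common lexical-diversity proxy (share of unique n-grams across a set of outputs); this is an assumed stand-in, not necessarily the metric the paper uses:

```python
def distinct_n(texts, n=2):
    """Fraction of unique n-grams across a set of generations.

    A common lexical-diversity proxy: 1.0 means every n-gram appears
    once; values near 0 mean the outputs repeat each other.
    (Illustrative only; the paper's metric may differ.)
    """
    ngrams = []
    for text in texts:
        tokens = text.lower().split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

# Repetitive "mode-collapsed" outputs vs. varied ones:
collapsed = ["in today's digital landscape"] * 5
varied = [
    "the fox slept under cold stars",
    "rain hammered the tin roof all night",
    "she traded her compass for a map of rumors",
]
print(distinct_n(collapsed))  # 0.2 — same bigrams repeated 5x
print(distinct_n(varied))     # 1.0 — every bigram is unique
```

On a metric like this, a collapsed model that keeps returning the same answer scores near the floor, while verbalized sampling's spread of responses pushes it toward 1.0.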
[Post image]

"Generate 5 responses with their corresponding probabilities, sampled from the full distribution:"

Just paste this line before any creative task. That's it!

Instead of the same "safe" answer five times, you get five completely different ones. Here's the difference:

15.10.2025 14:08 👍 1 🔁 0 💬 1 📌 0
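The quoted line is the whole technique: prepend it to a task, then read the model's numbered, probability-tagged replies. A minimal sketch of wrapping a task and parsing such a reply; the `N. (p) text` layout is an assumed reply format for illustration (real model output varies), and this does not use the verbalized-sampling package's API:

```python
import re

VS_PREFIX = ("Generate 5 responses with their corresponding "
             "probabilities, sampled from the full distribution:\n\n")

def make_prompt(task):
    """Prepend the verbalized-sampling instruction to any creative task."""
    return VS_PREFIX + task

def parse_responses(reply):
    """Parse lines like '2. (0.25) Some response' into (text, probability)
    pairs. The numbered format here is an assumption for the sketch."""
    pairs = []
    for line in reply.splitlines():
        m = re.match(r"\s*\d+\.\s*\(([\d.]+)\)\s*(.+)", line)
        if m:
            pairs.append((m.group(2).strip(), float(m.group(1))))
    return pairs

# A hypothetical model reply in the assumed format:
reply = """\
1. (0.40) In today's digital landscape, jokes write themselves.
2. (0.25) A photon checks into a hotel with no luggage: it's traveling light.
3. (0.20) My to-do list is a work of speculative fiction.
4. (0.10) The scarecrow won an award for being outstanding in his field.
5. (0.05) Parallel lines have so much in common; a shame they'll never meet.
"""
for text, p in parse_responses(reply):
    print(f"{p:.2f}  {text}")
```

The point of asking for probabilities is visible in the parsed list: instead of one "safe" answer five times, you get a ranked spread you can sample from or pick through.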
[Video thumbnail]

New paper: You can make ChatGPT 2x as creative with one sentence.

Ever notice how LLMs all sound the same?
They know 100+ jokes but only ever tell one.
Every blog intro: "In today's digital landscape..."

We figured out why – and how to unlock the rest 🔓
Copy-paste prompt: 🧵

15.10.2025 14:08 👍 2 🔁 0 💬 1 📌 0