Over the past year, my lab has been working on fleshing out the theory and applications of the Platonic Representation Hypothesis.
Today I want to share two new works on this topic:
Eliciting higher alignment: arxiv.org/abs/2510.02425
Unpaired learning of unified reps: arxiv.org/abs/2510.08492
1/9
10.10.2025 22:13
👍 133
🔁 33
💬 1
📌 5
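For context on what "alignment" means here: the original Platonic Representation Hypothesis paper quantifies it with a mutual nearest-neighbor metric between two models' representations of the same inputs. The sketch below is an illustrative version of that kind of metric, not the method of either new paper; all function names, shapes, and the choice of cosine similarity are assumptions for illustration.

```python
import numpy as np

def mutual_knn_alignment(feats_a: np.ndarray, feats_b: np.ndarray, k: int = 10) -> float:
    """Illustrative mutual k-NN alignment score between two representation sets.

    feats_a, feats_b: (n_samples, dim_a) and (n_samples, dim_b) features for the
    same n inputs, extracted from two different models. Returns the average
    overlap of each sample's k nearest neighbors across the two spaces.
    """
    def knn_indices(feats: np.ndarray) -> np.ndarray:
        # Cosine-similarity nearest neighbors, excluding each point itself.
        normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
        sims = normed @ normed.T
        np.fill_diagonal(sims, -np.inf)
        return np.argsort(-sims, axis=1)[:, :k]

    nn_a, nn_b = knn_indices(feats_a), knn_indices(feats_b)
    overlaps = [len(set(a) & set(b)) / k for a, b in zip(nn_a, nn_b)]
    return float(np.mean(overlaps))

# Example: two random, unrelated representations score roughly k/n (chance level).
rng = np.random.default_rng(0)
a = rng.normal(size=(512, 768))   # e.g., features from a language model
b = rng.normal(size=(512, 1024))  # e.g., features from a vision model
print(mutual_knn_alignment(a, b, k=10))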
Learn about aligning minds and machines.
Call for papers is live now—join us at @iclr_conf #ICLR2025.
representational-alignment.github.io/2025/
16.01.2025 19:38
👍 8
🔁 3
💬 0
📌 0
🚨Call for Papers🚨
The Re-Align Workshop is coming back to #ICLR2025
Our CfP is up! Come share your representational alignment work at our interdisciplinary workshop at @iclr-conf.bsky.social.
Deadline is 11:59 pm AoE on Feb 3rd.
representational-alignment.github.io
16.01.2025 19:04
👍 15
🔁 7
💬 0
📌 3
Is there any downside to having diversity, with competing methods of alignment?
If "good" and "ethics" are universal, having many different ways of arriving at them seems safer than unipolar governance.
06.01.2025 03:51
👍 0
🔁 0
💬 0
📌 0
Introducing ASAL: Automating the Search for Artificial Life with Foundation Models
Blog: sakana.ai/asal/
We propose a new method called Automated Search for Artificial Life (ASAL), which uses foundation models to automate the discovery of the most interesting and open-ended artificial lifeforms!
24.12.2024 02:58
👍 117
🔁 38
💬 1
📌 7
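As I read the announcement, the core recipe is to render states from a parameterized artificial-life simulation, embed them with a vision-language foundation model (e.g., CLIP), and search over simulation parameters for states that match a target text description or that keep producing novel embeddings. The sketch below is a minimal, assumption-laden illustration of the "match a target prompt" variant of that loop: the simulator and both encoders are stand-in stubs so the search itself runs, and none of these names come from the actual ASAL code.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Placeholders (hypothetical, for illustration only) ---------------------
# In ASAL these roles are played by a real ALife simulator (e.g., Lenia, Boids)
# and a vision-language foundation model such as CLIP; here both are stubbed.

def run_simulation(params: np.ndarray) -> np.ndarray:
    """Stub: map simulation parameters to a final rendered frame (H, W, 3)."""
    h = w = 32
    noise = rng.normal(size=(h, w, 3))
    return np.tanh(noise * params[:3].reshape(1, 1, 3) + params[3])

def embed_image(frame: np.ndarray) -> np.ndarray:
    """Stub: foundation-model image embedding (would be CLIP's image encoder)."""
    vec = frame.reshape(-1)[:128]
    return vec / (np.linalg.norm(vec) + 1e-8)

def embed_text(prompt: str) -> np.ndarray:
    """Stub: foundation-model text embedding (would be CLIP's text encoder)."""
    seed = abs(hash(prompt)) % (2**32)
    vec = np.random.default_rng(seed).normal(size=128)
    return vec / np.linalg.norm(vec)

# --- Foundation-model-guided search (simple random-search variant) ----------
def search_for_target(prompt: str, n_trials: int = 200, n_params: int = 4):
    """Sample simulation parameters and keep the ones whose rendered frame is
    most similar, in embedding space, to the target text prompt."""
    target = embed_text(prompt)
    best_params, best_score = None, -np.inf
    for _ in range(n_trials):
        params = rng.normal(size=n_params)
        score = float(embed_image(run_simulation(params)) @ target)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

params, score = search_for_target("a self-replicating cellular organism")
print(f"best similarity: {score:.3f}")
```

With real components, the stubs would be swapped for an actual simulator rollout and a pretrained image/text encoder pair, and the random search could be replaced by an evolutionary or gradient-free optimizer; the structure of the loop stays the same.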