PhD student at UC Berkeley studying RL and AI safety.
https://cassidylaidlaw.com
CS PhD candidate @UCLA | Prev. Research Intern @MSFTResearch, Applied Scientist Intern @AWS | LLM post-training, multi-modal learning
https://yihedeng9.github.io
PhD student at Mila | Diffusion models and reinforcement learning 🧐 | hyperpotatoneo.github.io
Fine-tuning LLMs @Cohere | PhD Candidate on RL @VUB
Ph.D. Student studying AI & decision making at Mila / McGill University. Currently at FAIR @ Meta. Previously Google DeepMind & Google Brain.
https://brosa.ca
CS PhD student at @uscviterbi.bsky.social. Interested in (inverse) RL, imitation learning, jax, and bayesian methods.
Gemini Post-Training @ Google DeepMind
Previously: ETH Zurich, Cambridge, CERN
alizeepace.com
research @ Google DeepMind
Incoming PhD, UC Berkeley
Interested in RL, AI Safety, Cooperative AI, TCS
https://karim-abdel.github.io
Building generally intelligent robots that *just work* everywhere, out of the box, at Berkeley AI Research (BAIR) and Meta FAIR.
Previously at NYU Courant, MIT and visiting researcher at Meta AI.
https://mahis.life/
ML Research Scientist at Dell AI by day, RL Researcher at night
https://rushivarora.github.io
Assistant Prof of Economics & Finance at SIUE.
Macroeconometrics and learning. Decision‑making under uncertainty. RL and ML Methods for identification and evaluation, with applications to inequality and entrepreneurship.
https://xueqingyang.com
Dad, postdoc in the Ebitz lab in Montreal | comp cog neuro & RL | intrinsically motivated about behavior, coffee, ecologies and agency 🇲🇽🇪🇸🇨🇦 UNAM, UdeS, UPF. How come life?
jorgeerrz.github.io
Ph.D. candidate, Generative Flow Networks / Gaining insights in machine learning
Reinforcement learning 🍒, machine learning. He/him.
reggiemclean.ca
Professor at NYU; Chief AI Scientist at Meta.
Researcher in AI, Machine Learning, Robotics, etc.
ACM Turing Award Laureate.
http://yann.lecun.com
Bot. I tweet daily about progress towards machine learning and computer vision conference deadlines. Maintained by @chriswolfvision.bsky.social
AI + security at AISLE | Stanford PhD in AI & Cambridge physics | scientific progress
Professor for AI/ML Methods in Tübingen. Posts about Probabilistic Numerics, Bayesian ML, AI for Science. Computations are data, Algorithms make assumptions.
Research Scientist at GDM. Statistician. Mostly work on Responsible AI. Academia-industry flip-flopper.
Machine learning, environmental modeling, sustainability, robotics
Professor @UCL
He/him
machine learning and artificial intelligence | University of Chicago / Google
Machine learning researcher. Professor in ML department at CMU.
Secular Bayesian.
Professor of Machine Learning at Cambridge Computer Lab
Talent aficionado at http://airetreat.org
Alum of Twitter, Magic Pony and Balderton Capital
Assistant Professor @UWaterloo Statistics and @VectorInst | Prev: Postdoc @Princeton PhD @UofTStatSci
https://mufan-li.github.io/
Sr. Principal Research Manager at Microsoft Research, NYC // Machine Learning, Responsible AI, Transparency, Intelligibility, Human-AI Interaction // WiML Co-founder // Former NeurIPS & current FAccT Program Co-chair // Brooklyn, NY // http://jennwv.com
Researcher (OpenAI; ex: DeepMind, Brain, RWTH Aachen), Gamer, Hacker, Belgian.
Anon feedback: https://admonymous.co/giffmana
📍 Zürich, Switzerland 🔗 http://lucasb.eyer.be
https://yuzhaouoe.github.io/ | third-year PhD Student @ University of Edinburgh | Prev. Intern @ Microsoft Research Cambridge | Opening the Black Box for Efficient Training/Inference
AI researcher at Mila, visiting researcher at Meta
Also on X: @soniajoseph_
PhD student #MIT_CSAIL | Intern #MetaAI #Microsoft #MITIBMLab | BS #NTU in #Taiwan
I lead Cohere For AI. Formerly Research @ Google Brain.
ML Efficiency, LLMs, @trustworthy_ml.
PhD in ML @Mila/UdeM
LLM robustness, safety, interpretability
I fall in love with a new #machinelearning topic every month 🙄
Assoc. Prof. Sapienza (Rome) | Author: Alice in a differentiable wonderland (https://www.sscardapane.it/alice-book/)
Senior Researcher Machine Learning at BIFOLD | TU Berlin 🇩🇪
Prev at IPAM | UCLA | BCCN
Interpretability | XAI | NLP & Humanities | ML for Science
PhD student in Interpretable ML @UMI_Lab_AI, @bifoldberlin, @TUBerlin
Post-doctoral Researcher at BIFOLD / TU Berlin interested in interpretability and analysis of language models. Guest researcher at DFKI Berlin. https://nfelnlp.github.io/
Liesel Beckmann Distinguished Professor of Computer Science at Technical University of Munich and Director of the Institute for Explainable ML at Helmholtz Munich
Asst Prof @ Stevens. Working on NLP, Explainable, Safe and Trustworthy AI. https://ziningzhu.github.io
Second-year PhD student at XplaiNLP group @TU Berlin: interpretability & explainability
Website: https://qiaw99.github.io
PhD @UChicagoCS / BE in CS @Umich / ✨AI/NLP transparency and interpretability/📷🎨photography painting
Assistant Prof at UCSD. I work on safety, interpretability, and fairness in machine learning. www.berkustun.com
Data Scientist at Fraunhofer IAIS
PhD Student at University of Bonn
Lamarr Institute
XAI, NLP, Human-centered AI
PhD Student at the TU Berlin ML group + BIFOLD | BUA Fellow
Model robustness/correction 🤖🔧
Understanding representation spaces 🌌✨
Professor of Machine Learning in Agriculture at University of Bonn
Working on Explainable ML🔍, Data-centric ML🐿️, Sustainable Agriculture🌾, Earth Observation Data Analysis🌍, and more...
"Seung Hyun" | MS CS & BS Applied Math @UCSD 🌊 | LPCUWC 18' 🇭🇰 | AI Evaluation, Safety, Alignment | 🇰🇷
harry.scheon.com
Language models and interpretable machine learning. Postdoc @ Uni Tübingen.
https://sbordt.github.io/
Human/AI interaction. ML interpretability. Visualization as design, science, art. Professor at Harvard, and part-time at Google DeepMind.
PhDing @unimib 🇮🇹 & @gronlp.bsky.social 🇳🇱, interpretability et similia
danielsc4.it
Assistant Professor at TU Wien
Machine Learning & Security
ML Ph.D. Candidate @tuberlin.bsky.social and @bifold.berlin | Explainable AI, Interpretability, Efficient Machine Learning
farnoushrj.github.io
Professor of Machine Learning at TUBerlin, group leader at PTB. Lab account: @qailabs.bsky.social.
@sparsity@mastodon.social
tu.berlin/uniml/about/head-of-group
PhD candidate in the XAI group at Fraunhofer HHI
Searching for principles of neural representation | Neuro + AI @ enigmaproject.ai | Stanford | sophiasanborn.com
Assistant Professor at University of Aberdeen | Postdoc at UCL | PhD at University of Sheffield | mechanistic interpretability & multimodal LLMs | https://www.ruizhe.space
Robustness, Data & Annotations, Evaluation & Interpretability in LLMs
http://mimansajaiswal.github.io/
Enjoy not enjoying ideals | Interpretability of modular convnets applied to 👁️ and 🛰️🐝 | she/her 🦒💕
variint.github.io
INSERM group leader @ Neuromodulation Institute and NeuroSpin (Paris) in computational neuroscience.
How and why are computations enabling cognition distributed across the brain?
Expect neuroscience and ML content.
jbarbosa.org
Postdoc at Linköping University🇸🇪. Doing NLP, particularly explainability, language adaptation, modular LLMs. I'm also into 🌋🏕️🚴.
Principal Researcher @ CENTAI.eu | Leading the Responsible AI Team. Building Responsible AI through Explainable AI, Fairness, and Transparency. Researching Graph Machine Learning, Data Science, and Complex Systems to understand collective human behavior.
Research in NLP (mostly LM interpretability & explainability).
Assistant prof at UMD CS + CLIP.
Previously @ai2.bsky.social @uwnlp.bsky.social
Views my own.
sarahwie.github.io
Linguist in AI & CogSci 🧠👩💻🤖 PhD student @ ILLC, University of Amsterdam
🌐 https://mdhk.net/
🐘 https://scholar.social/@mdhk
🐦 https://twitter.com/mariannedhk
Postdoc AI Researcher (NLP) @ ITU Copenhagen
🧭 https://mxij.me
Comm tech & social media research professor by day, symphony violinist by night, outside as much as possible otherwise. German/American Pacific Northwestern New Englander, #firstgen academic, she/her, 🏳️🌈
https://anne-oeldorf-hirsch.uconn.edu
Machine Learner by day, 🦮 Statistician at ❤️
In search of statistical intuition for modern ML & simple explanations for complex things👀
Interested in the mysteries of modern ML, causality & all of stats. Opinions my own.
https://aliciacurth.github.io
Assistant Professor at PoliTo 🇮🇹 |
Former Visiting scholar at UCSC 🇺🇸 |
she/her | TrustworthyAI, XAI, Fairness in AI
https://elianap.github.io/
Junior Professor CNRS (previously EPFL, TU Darmstadt) -- AI Interpretability, causal machine learning, and NLP. Currently visiting @NYU
https://peyrardm.github.io
PhD student in Computer Science @UCSD. Studying interpretable AI and RL to improve people's decision-making.
PhD Student @ LMU Munich
Munich Center for Machine Learning (MCML)
Research in Interpretable ML / Explainable AI
🎓 PhD student @cvisionfreiburg.bsky.social @UniFreiburg
💡 interested in mechanistic interpretability, robustness, AutoML & ML for climate science
https://simonschrodi.github.io/
PostDoc @ Uni Tübingen
explainable AI, causality
gunnarkoenig.com
MIT PhD candidate in the VIS group working on interpretability and human-AI alignment
Research scientist at Google DeepMind.
Intersection of cognitive science and AI. Reinforcement learning, decision making, structure learning, abstraction, cognitive modeling, interpretability.
Ph.D. in NLP Interpretability from Mila. Previously: independent researcher, freelancer in ML, and Node.js core developer.
Master student at ENS Paris-Saclay / aspiring AI safety researcher / improviser
Prev research intern @ EPFL w/ wendlerc.bsky.social and Robert West
MATS Winter 7.0 Scholar w/ neelnanda.bsky.social
https://butanium.github.io
Interpretable Deep Networks. http://baulab.info/ @davidbau
https://mega002.github.io
AI Safety Research // Software Engineering
Machine learning haruspex
Machine Learning PhD Student
@ Blei Lab & Columbia University.
Working on probabilistic ML | uncertainty quantification | LLM interpretability.
Excited about everything ML, AI and engineering!
PhD student at Vector Institute / University of Toronto. Building tools to study neural nets and find out what they know. He/him.
www.danieldjohnson.com
Mechanistic interpretability
Creator of https://github.com/amakelov/mandala
prev. Harvard/MIT
machine learning, theoretical computer science, competition math.
Post-doc @ Harvard. PhD UMich. Spent time at FAIR and MSR. ML/NLP/Interpretability
Computer Science PhD student | AI interpretability | Vision + Language | Cognitive Science. Prev. intern @MicrosoftResearch.
https://martinagvilas.github.io/
Assistant Professor, University of Copenhagen; interpretability, xAI, factuality, accountability, xAI diagnostics https://apepa.github.io/
Scruting matrices @ Apollo Research
Aspiring 10x reverse engineer at Google DeepMind
CS PhD Student, Northeastern University - Machine Learning, Interpretability https://ericwtodd.github.io
Postdoc at the interpretable deep learning lab at Northeastern University, deep learning, LLMs, mechanistic interpretability
ai interpretability research and running • thinking about how models think • prev @MIT cs + physics
Assistant Professor @HopkinsMedicine @JHUPath
https://scholar.google.com/citations?user=dGBD72YAAAAJ
ML/AI researcher @JohnsHopkins
Associate Professor @UAntwerp, sqIRL/IDLab, imec.
#RepresentationLearning, #Model #Interpretability & #Explainability
A guy who plays with toy bricks, enjoys research and gaming.
Opinions are my own
idlab.uantwerpen.be/~joramasmogrovejo
NLP & Interpretability | PhD Student @ University of Trieste & Laboratory of Data Engineering of Area Science Park | Prev MPI-IS
PhD Candidate in Interpretability @FraunhoferHHI | 📍Berlin, Germany
dilyabareeva.github.io
PhD @ ETHZ - LLM Interpretability
alestolfo.github.io