Co-founder & editor, Works in Progress. Writer, Scientific Discovery. Podcaster, Hard Drugs. Advisor, Coefficient Giving. // Previously at Our World in Data.
Newsletter: https://scientificdiscovery.dev
Podcast: https://harddrugs.worksinprogress.co
🏳️‍🌈
PhD student at the EVP lab @McGill University | Researching how social factors influence disease around the world
friendly deep sea dweller
Faculty at the ELLIS Institute Tübingen and Max Planck Institute for Intelligent Systems. Leading the AI Safety and Alignment group. PhD from EPFL supported by Google & OpenPhil PhD fellowships.
More details: https://www.andriushchenko.me/
ai safety researcher | phd ETH Zurich | https://danielpaleka.com
Making AI safer at Google DeepMind
davidlindner.me
Red-Teaming LLMs / PhD student at ETH Zurich / Prev. research intern at Meta / People call me Javi / Vegan 🌱
Website: javirando.com
Assistant professor of computer science at ETH Zürich. Interested in Security, Privacy and Machine Learning.
https://floriantramer.com
https://spylab.ai
Visiting Scientist at Schmidt Sciences. Visiting Researcher at Stanford NLP Group
Interested in AI safety and interpretability
Previously: Anthropic, AI2, Google, Meta, UNC Chapel Hill
Assistant Prof of AI & Decision-Making @MIT EECS
I run the Algorithmic Alignment Group (https://algorithmicalignment.csail.mit.edu/) in CSAIL.
I work on value (mis)alignment in AI systems.
https://people.csail.mit.edu/dhm/
born at the right time
CEO @ Astera
e-tsundoku, supplementary info: nlab fan account, arxiv surveyor, pubmed enjoyer, two culture bridger, vacuous high gossiper, dearth of any domain expertise, reluctant g theorist, gpu poor
A LLN - large language Nathan - (RL, RLHF, society, robotics), athlete, yogi, chef
Writes http://interconnects.ai
At Ai2 via HuggingFace, Berkeley, and normal places
Accelerate AI safety.
agus.sh
AI Safety Research // Software Engineering
Forever expanding my nerd/bimbo Pareto frontier. Ex-OpenAI, AGI safety and governance, fellow @rootsofprogress.
I'm mostly on superstimul.us now (profile is @sloeb) (also Twitter)
Co-founder at Asana and Good Ventures (a funding partner of Coefficient Giving). Meta delenda est. Strange looper.
AI alignment & memes | "known for his humorous and insightful tweets" - Bing/GPT-4 | prev: @FHIOxford
Globally ranked top 20 forecaster, former data scientist
As seen on TV! The Daily Show, Good Morning America
Protecting liberty and prosperity in the age of superintelligence
AI safety at Anthropic, on leave from a faculty job at NYU.
Views not employers'.
I think you should join Giving What We Can.
cims.nyu.edu/~sbowman
AI @ OpenAI, Tesla, Stanford
What would we need to understand in order to design an amazing future? Ex DeepMind, OpenAI
AI policy researcher, wife guy, fan of cute animals and sci-fi, executive director of AVERI, Substacker
https://Answer.AI & https://fast.ai founding CEO; previous: hon professor @ UQ; leader of masks4all; founding CEO Enlitic; founding president Kaggle; various other stuff…
Searching for the numinous
Australian Canadian, currently living in the US
https://michaelnotebook.com
Measure is unceasing, disagreement is a ladder.
nunosempere.com
When the going gets weird the weird turn pro. andymasley.com
I run The Update newsletter: www.update.news
Book: academic.oup.com/book/56384
Watch the SB-1047 Documentary on Youtube: https://youtu.be/JQ8zhrsLxhI
eucatastrophic myobci cyborgist. here for the coherent extrapolated volition. how fast can you take your time, kid?
Blog at thezvi.substack.com, this is a pure backup, same handle on Twitter.
Raising kids & bread & grant money. Cleaning data & diapers & fish. EA (bed nets, not light cone). Social scientist. typos. twitter.com/ryancbriggs
Posts on public policy (in 🇨🇦 or abroad), humanities, classical music, altruism effective and ineffective. Many silly posts. Toronto-adjacent.
researcher at Epoch AI. Views my own
policy for v smart things @openai. Past: PhD @HarvardSEAS/@SchmidtFutures/@MIT_CSAIL. Posts my own; on my head be it
Researching Artificial General Intelligence Safety, via thinking about neuroscience and algorithms, at Astera Institute. https://sjbyrnes.com/agi.html
Reverse engineering neural networks at Anthropic. Previously Distill, OpenAI, Google Brain. Personal account.
P(A|B) = [P(A)*P(B|A)]/P(B), all the rest is commentary. Click to read Astral Codex Ten, by Scott Alexander, a […] [bridged from astralcodexten.com on the web: https://fed.brid.gy/web/astralcodexten.com ]
AI + security at AISLE | Stanford PhD in AI & Cambridge physics | scientific progress
LLM developer, alignment-accelerationist, Fedorovist ancestor simulator, Dreamtime enjoyer.
All posts public domain under CC0 1.0.
Founder/Chief Scientist @ positron.ai
I like to write about Haskell, category theory, AI, and safety, and a whole lot of low level SIMD stuff for some reason
http://calendly.com/ekmett
http://github.com/ekmett
http://x.com/kmett
http://comonad.com/reader
Chief Scientist at the UK AI Security Institute (AISI). Previously DeepMind, OpenAI, Google Brain, etc.
Working on AI gov. Past: Technology and Security Policy @ RAND re: AI compute, telemetry database lead & 1st stage landing software @ SpaceX; AWS; Google.
PhD Student at the Max Planck School of Cognition | Harvard '23 | Interested in cognitive spaces, hippocampal replay, planning, and AI safety.
AI & natsec @cnas.bsky.social
Researching x-risks, AI alignment, complex systems, rational decision making
https://niplav.site/index
I make sure that OpenAI et al. aren't the only people who are able to study large scale AI systems.
Social policy synthesizer. www.secondbest.ca
Anthropic and Import AI. Previously OpenAI, Bloomberg, The Register. Weird futures.
AI grantmaking at Coefficient Giving
Previously 80,000 Hours
lawsen.substack.com
I'm here to make friends.
Please be patient, I'm intellectually challenged.
https://x.com/CineraVerinia
CEO at Machine Intelligence Research Institute (MIRI, @intelligence.org)
Professional reference class tennis player. I like non-fillet frozen fish, packaged medicaments, and other oily seeds.
Trying to help the world navigate potentially transformative technologies, currently via AI Governance and Policy at Coefficient Giving. Enjoyer of acoustic guitars, history books, and plant-based foods.
I run AI Plans, an AI Safety lab focused on solving AI Alignment before 2029.
For several weeks I used a stone for a pillow.
I once spent a quarter of my paycheck on cheese.
Ping me! DMs not working atm due to totalitarian UK law :(
SurpassAI
Non-profit dedicated to advancing AI safety R&D through targeted events and community initiatives. https://horizonevents.info/
Advancing AI safety through convenings, coordination, software, analysis
https://orpheuslummis.info, based in Montréal
official Bluesky account (check username)
Bugs, feature requests, feedback: support@bsky.app