
Horizon Events

@horizonevents.info

Non-profit dedicated to advancing AI safety R&D through targeted events and community initiatives. https://horizonevents.info/

28 Followers · 12 Following · 17 Posts · Joined 10.11.2023

Latest posts by Horizon Events @horizonevents.info

Towards Safe and Hallucination-Free Coding AIs – GasStationManager · Zoom · Luma
GasStationManager – Independent Researcher. Modern LLM-based AIs have exhibited great coding abilities, and have…

This event has been postponed to Thursday, September 11, 1 PM EDT. Join the discussion! luma.com/g88qnql1

04.09.2025 15:25 👍 2 🔁 1 💬 0 📌 0
Using PDDL Planning to Ensure Safety in LLM-based Agents – Agustín Martinez Suñé · YouTube video by Horizon Events

Recording available: youtu.be/anbsnwnMpf8?...

09.01.2025 21:01 👍 0 🔁 0 💬 0 📌 0
AI Safety Events & Training: 2025 week 1 update
Upcoming AI safety events, open calls, and training programs – online and in-person. Published weekly.
aisafetyeventsandtraining.substack.com/p/ai-safety-...

03.01.2025 00:21 👍 7 🔁 2 💬 0 📌 0
AI Safety Events & Training: 2024 week 51 update – and 2024 review
Upcoming AI safety events, open calls, and training programs – online and in-person. Published weekly.
aisafetyeventsandtraining.substack.com/p/ai-safety-...

21.12.2024 16:48 👍 4 🔁 2 💬 0 📌 0
Guaranteed Safe AI Seminars 2024 review
horizonomega.substack.com/p/guaranteed...

The monthly seminar series grew to 230 subscribers in 2024 and hosted 8 technical talks, with ~490 RSVPs, ~76 hours of watch time, and ~900 views of the recordings. We are seeking funding for 2025; plans include a bibliography and debates.

15.12.2024 18:16 👍 3 🔁 1 💬 0 📌 0
Using PDDL Planning to Ensure Safety in LLM-based Agents – Agustín Martinez Suñé · Zoom · Luma
Agustín Martinez Suñé – Ph.D. in Computer Science | Postdoctoral Researcher (Starting Soon), OXCAV,…

Using PDDL Planning to Ensure Safety in LLM-based Agents by Agustín Martinez Suñé
Thu January 9, 18:00-19:00 UTC
Join: lu.ma/08gr7mrs
Part of the Guaranteed Safe AI Seminars

13.12.2024 03:43 👍 3 🔁 2 💬 1 📌 0
Compact Proofs of Model Performance via Mechanistic Interpretability – Louis Jaburi · YouTube video by Horizon Events

Recording is now available: www.youtube.com/watch?v=m_2J...

13.12.2024 03:01 👍 0 🔁 0 💬 0 📌 0
Compact Proofs of Model Performance via Mechanistic Interpretability – Louis Jaburi · Zoom · Luma
Louis Jaburi – Independent researcher. Generating proofs about neural network behavior is a…

Compact Proofs of Model Performance via Mechanistic Interpretability
by Louis Jaburi
Thu December 12, 18:00-19:00 UTC
Join: lu.ma/g24bvacw

Last Guaranteed Safe AI seminar of the year

08.12.2024 15:29 👍 1 🔁 1 💬 1 📌 0
Horizon Events 2025
Non-profit facilitating progress in AI safety R&D through events

Our goals for 2025:
- Guaranteed Safe AI Seminars
- AI Safety Unconference 2025
- AI Safety Events & Training newsletter
- Monthly Montréal AI safety R&D events
- Grow partnerships

We are looking for donations to support this work. More info:
manifund.org/projects/hor...

19.11.2024 12:24 👍 2 🔁 1 💬 0 📌 0
Bayesian oracles and safety bounds – Yoshua Bengio · YouTube video by Horizon Events

Recording: www.youtube.com/watch?v=SIAZ...

15.11.2024 00:19 👍 3 🔁 1 💬 0 📌 0
Bayesian oracles and safety bounds – Yoshua Bengio · Zoom · Luma
Yoshua Bengio – Scientific Director, Mila & Full Professor, U. Montreal. Could there be safety advantages to the training of…

Today in the Guaranteed Safe AI Seminars series:

Bayesian oracles and safety bounds by Yoshua Bengio

Relevant readings:
- yoshuabengio.org/2024/08/29/b...
- arxiv.org/abs/2408.05284

Join: lu.ma/4ylbvs75

14.11.2024 12:37 👍 2 🔁 1 💬 1 📌 0
Bayesian oracles and safety bounds – Yoshua Bengio · Zoom · Luma
Yoshua Bengio – Scientific Director, Mila & Full Professor, U. Montreal. Could there be safety advantages to the training of…

Bayesian oracles and safety bounds
by Yoshua Bengio, Scientific Director, Mila & Full Professor, U. Montreal
November 14, 18:00-19:00 UTC
Join: lu.ma/4ylbvs75
Part of the Guaranteed Safe AI Seminars

11.10.2024 14:20 👍 2 🔁 1 💬 0 📌 0

Announcing the Guaranteed Safe AI Seminars. This monthly series brings together researchers to discuss and advance the field of GS AI, which aims to produce AI systems equipped with high-assurance quantitative safety guarantees.
horizonomega.substack.com/p/announcing...

17.07.2024 22:38 👍 2 🔁 1 💬 0 📌 0
Constructability: Designing plain-coded AI systems – Charbel-Raphaël Ségerie & Épiphanie Gédéon · Zoom · Luma
Charbel-Raphaël Ségerie & Épiphanie Gédéon – Executive director at CeSIA & Independent Researcher. Current AI…

Constructability: Designing plain-coded AI systems
by Charbel-Raphaël Ségerie & Épiphanie Gédéon
August 8, 17:00-18:00 UTC
Join: lu.ma/xpf046sa
As part of the Guaranteed Safe AI Seminars

12.07.2024 18:16 👍 1 🔁 1 💬 0 📌 0
Proving safety for narrow AI outputs – Evan Miyazono · Zoom · Luma
Evan Miyazono, Founder of Atlas Computing. User demand for new AI capabilities is growing even as risks from foreseeable AI…

You are invited to the Guaranteed Safe AI Seminars, July 2024 edition.

Proving safety for narrow AI outputs – Evan Miyazono, Atlas Computing

Thursday, July 11, 11:30-12:30 UTC-5
RSVP: lu.ma/2715xmzn

19.06.2024 12:56 👍 2 🔁 1 💬 0 📌 0

Introducing Horizon Events: A non-profit consultancy dedicated to advancing AI safety R&D through high-impact events and initiatives.
horizonomega.substack.com/p/introducin...

09.06.2024 22:38 👍 1 🔁 1 💬 0 📌 0
Gaia: Distributed planetary-scale AI safety · Zoom · Luma
Rafael Kaufmann, Co-founder and CTO, Gaia. In the near future, there will be billions of powerful AI agents deployed…

Next edition of the Provable AI Safety Seminars:
Gaia: Distributed planetary-scale AI safety
by Rafael Kaufmann, Co-founder and CTO, Gaia
Thursday, June 13, 13:00-14:00 Eastern, online.
Join us!
lu.ma/qn8p4wp4

10.05.2024 16:22 👍 0 🔁 0 💬 0 📌 0

You are invited to the 2nd edition of the Provable AI Safety Seminars:

**Provable AI Safety, Steve Omohundro**
May 9th, 13:00-14:00 EDT, online
lu.ma/3fz12am7

11.04.2024 18:57 👍 1 🔁 1 💬 0 📌 0

Announcing the first edition of the Provable AI Safety Seminars.

April 11th, 13:00-14:00 EDT. Monthly, on 2nd Thursday.

RSVP: lu.ma/provableaisa...

Talks:
- Synthesizing Gatekeepers for Safe Reinforcement Learning (Sefas)
- Verifying Global Properties of Neural Networks (Soletskyi)

21.03.2024 23:26 👍 1 🔁 1 💬 0 📌 0

AI Safety Events Tracker, February 2024 edition.
A newsletter listing upcoming events and open calls related to AI safety.
aisafetyeventstracker.substack.com/p/ai-safety-...

12.02.2024 09:59 👍 1 🔁 0 💬 0 📌 0

AI Safety Events Tracker, December 2023 edition
Listing upcoming events and open calls related to AI safety
aisafetyeventstracker.substack.com/p/ai-safety-...

10.12.2023 07:43 👍 6 🔁 0 💬 0 📌 0
Web3 x AI Risk, Devconnect 2023 · Luma
A participatory discussion focused on exploring and critically examining emergent risks at the intersection of AI and Web3 technologies. Join this...

At Devconnect Istanbul tomorrow and interested in AI risk? You are invited to a two-hour participatory discussion on topics at the intersection of Web3 and AI risk. lu.ma/vh9hrgme

15.11.2023 12:22 👍 1 🔁 1 💬 0 📌 0