ML Safety Newsletter
About
The ML Safety Newsletter is a free, periodically published newsletter hosted on Substack at newsletter.mlsafety.org. It was founded in October 2021 by Dan Hendrycks, Director of the Center for AI Safety (CAIS), as a resource for researchers and practitioners who want to stay current on machine learning safety research. The newsletter is a project under mlsafety.org, the broader ML Safety research community website, which describes itself as a project by the Center for AI Safety.

The newsletter covers a wide range of ML safety topics, including adversarial robustness, model alignment, interpretability, monitoring, and systemic safety concerns. Early issues focused heavily on summarizing peer-reviewed papers from major ML conferences such as ICLR. Over time, coverage expanded to policy developments, new benchmarks, and emerging research themes such as agentic AI risks, chain-of-thought monitoring, and emergent misalignment.

Contributors have included Thomas Woodside (a Yale graduate formerly at CAIS), Aidan O'Gara, Julius Simonelli, and Alice Blair, with Dan Hendrycks as the consistent primary author throughout. The newsletter went on hiatus between late 2023 and early 2025 before relaunching in February 2025 with Issue #12. As of March 2026, it has published 19 issues and has over 10,000 Substack subscribers.

The publication operates on an entirely free model, with no paid subscription tiers and no active fundraising. It functions as an outreach and field-building initiative rather than a standalone funded organization, operating under the umbrella of the Center for AI Safety.
Details
- Last Updated
- Apr 7, 2026, 7:14 PM UTC
- Created
- Apr 7, 2026, 7:14 PM UTC