AI Standards Lab
AI Standards Lab is an independent nonprofit virtual lab that brings together experts in computer science, AI research, safety engineering, standards development, and law to accelerate the writing of AI safety standards. Founded in November 2023 and incorporated as a US nonprofit in April 2024, it works closely with Holtman Systems Research, a small European company founded in November 2022 by Koen Holtman. Together they contribute to technical standards development through the CEN-CENELEC JTC21 committee supporting the EU AI Act, the General-Purpose AI Code of Practice, and global AI safety engineering frameworks. The combined effort operates with approximately five full-time equivalents and is funded by charitable grants.
Funding Details
- Annual Budget: not listed
- Monthly Burn Rate: not listed
- Current Runway: not listed
- Funding Goal: not listed
- Funding Raised to Date: $528,000
- Fiscal Sponsor: Players Philanthropy Fund, Inc.
Theory of Change
AI Standards Lab and Holtman Systems Research believe that AI safety standards must keep pace with the rapid development and deployment of AI technology, and that well-designed technical standards are a critical mechanism for ensuring safe AI outcomes. Their theory of change centers on three pathways. First, by directly contributing expert knowledge to formal standards bodies like CEN-CENELEC JTC21, they help shape the mandatory technical requirements that AI developers must meet under the EU AI Act, establishing minimum safety floors that prevent competitive pressures from degrading safety. Second, by developing frameworks like the Quality Scorecard for AI Evaluations and cataloging risk sources for general-purpose AI, they provide the analytical groundwork that standards bodies need to write effective, technically grounded standards. Third, by operating as an independent virtual lab that brings together diverse experts, they help bridge the gap between AI safety research and the standards-making process, ensuring that safety-relevant technical knowledge is translated into enforceable requirements.
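To make the scorecard idea concrete, here is a minimal, purely illustrative Python sketch of how a quality scorecard for an AI evaluation might be represented as a data structure. The class names, criterion names, 0-3 ordinal scale, and mean-score aggregation are all assumptions for illustration; they do not reflect the actual contents or scoring scheme of AI Standards Lab's Quality Scorecard.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: the real Quality Scorecard's criteria and
# scoring scheme are defined by AI Standards Lab, not reproduced here.

@dataclass
class ScorecardCriterion:
    """One quality criterion applied to an AI evaluation."""
    name: str           # e.g. "reproducibility" (illustrative)
    score: int          # illustrative 0-3 ordinal rating
    rationale: str = "" # reviewer's justification for the score

@dataclass
class EvaluationScorecard:
    """Aggregated quality assessment for a single AI evaluation."""
    evaluation_name: str
    criteria: list[ScorecardCriterion] = field(default_factory=list)

    def overall(self) -> float:
        """Mean criterion score; a real scheme might weight criteria."""
        if not self.criteria:
            return 0.0
        return sum(c.score for c in self.criteria) / len(self.criteria)

# Example usage with made-up values
card = EvaluationScorecard(
    evaluation_name="frontier-model-capability-eval",
    criteria=[
        ScorecardCriterion("reproducibility", 2, "code released, data partial"),
        ScorecardCriterion("construct validity", 1, "proxy task only"),
    ],
)
print(f"{card.evaluation_name}: {card.overall():.2f}")
```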
Grants Received
- Grant from Survival and Flourishing Fund
- Grant from Open Philanthropy
- Grant from Survival and Flourishing Fund
Projects
- No linked projects
People
- No linked people
Discussion
Case for funding: With a track record of getting its work officially incorporated into EU AI standards (via CEN-CENELEC JTC21 and the General-Purpose AI Code of Practice), AI Standards Lab can translate safety research into enforceable minimum requirements through concrete artifacts such as its Quality Scorecard and risk catalog. This is a high-leverage bottleneck in EU AI Act implementation that few organizations are positioned to fill.
Key risk: A standards-first theory of change risks dilution and harmful lock-in, producing slow, weak, or mis-specified requirements for frontier models. In addition, as a small early-stage team, their marginal influence may be limited or counterfactually replaceable by incumbent standards bodies.
Details
- Last Updated: Apr 3, 2026, 2:02 AM UTC
- Created: Mar 18, 2026, 11:18 PM UTC