International Association for Safe & Ethical AI (IASEAI)
The International Association for Safe and Ethical AI (IASEAI) is an independent 501(c)(3) nonprofit founded in 2024. Its mission is to ensure that AI systems operate safely and ethically, and to shape policy, promote research, and build understanding and community around that goal. The organization convenes researchers, policymakers, civil society, and industry through an annual international conference and regular workshops, while also running policy working groups, an education and outreach program, and an affiliate program for partner organizations.
Funding Details
- Annual Budget
- -
- Monthly Burn Rate
- -
- Current Runway
- -
- Funding Goal
- -
- Funding Raised to Date
- $1,850,000
- Fiscal Sponsor
- -
Theory of Change
IASEAI holds that the development of highly capable AI is one of the most consequential events in human history, and that current systems are being developed without adequate safeguards. By convening a high-credibility international community of researchers, policymakers, civil society representatives, and industry leaders, IASEAI works to accelerate technical AI safety research, build consensus around safety standards and governance frameworks, and increase understanding among the policymakers and public who must ultimately demand and enforce those safeguards. The causal chain runs from community-building and annual conferences that generate shared understanding, to policy working groups that produce concrete governance proposals, to international cooperation frameworks that constrain unsafe AI development, and finally to a world in which highly capable AI systems are provably beneficial rather than catastrophically harmful.
Grants Received: no grants recorded
Projects: no linked projects
People: no linked people
Details
- Last Updated
- Apr 2, 2026, 10:11 PM UTC
- Created
- Mar 19, 2026, 10:31 PM UTC
Case for funding: With Stuart Russell and Mark Nitzberg at the helm, and convening power on par with bodies like the OECD and UNESCO, IASEAI is unusually well positioned to turn high-credibility international gatherings and policy working groups into concrete safety standards and coordination mechanisms that could meaningfully constrain risky frontier AI development.