The AI Safety Institute (AISI) of South Korea is a government research body affiliated with the Electronics and Telecommunications Research Institute (ETRI), launched on November 27, 2024 following the AI Seoul Summit. Its mission is to evaluate risks arising from AI models and systems, develop technologies to prevent and mitigate those risks, and serve as Korea's hub for AI safety research connecting industry, academia, and research institutes. The institute operates three core divisions covering AI safety policy and international cooperation, AI safety assessment, and AI safety research, while aiming to position Korea as a global AI safety leader and Asia-Pacific hub.
Funding Details
- Annual Budget: $8,700,000
- Monthly Burn Rate: $725,000
- Current Runway: –
- Funding Goal: –
- Funding Raised to Date: –
- Fiscal Sponsor: –
Theory of Change
The Korea AISI believes that by systematically identifying, classifying, and evaluating AI risks at a national level, it can develop preemptive safety frameworks, evaluation tools, and policies that both protect the public and enable Korean AI companies to compete globally. By serving as a hub connecting government, industry, academia, and international partners, the institute can harmonize domestic safety standards with global norms, share knowledge through its consortium, and contribute to the international network of AI safety institutes. This coordinated approach — combining rigorous technical research on AI risks with policy guidance and international cooperation — is intended to ensure that advanced AI is developed and deployed safely in Korea and that Korea shapes global AI safety governance.
Grants Received – no grants recorded
Projects – no linked projects
People – no linked people
Details
- Last Updated: Apr 7, 2026, 8:29 PM UTC
- Created: Apr 7, 2026, 6:28 PM UTC