The ILINA Program is an African-led research initiative based in Nairobi, Kenya, focused on building talent, generating impactful research, and shaping policy to advance AI safety. Founded in 2022, the program runs two main tracks: a 12-week seminar introducing undergraduates across Africa to AI risks and governance, and a Junior Research Fellowship that immerses recent graduates in AI safety research with mentorship and scholarships. ILINA's research spans AI governance, technical alignment, biosecurity, and the role of developing countries in managing global catastrophic risks from AI.
Funding Details
- Annual Budget: —
- Monthly Burn Rate: —
- Current Runway: —
- Funding Goal: —
- Funding Raised to Date: —
- Fiscal Sponsor: Berkeley Existential Risk Initiative (BERI)
Theory of Change
ILINA believes that AI safety requires global perspectives, particularly from the Global South, which has historically played a crucial role in designing and using multilateral rules and institutions in areas like international environmental law and intellectual property law. By identifying and training talented Africans to work on AI safety research and governance, ILINA aims to build a pipeline of future AI safety leaders who can bring underrepresented perspectives to bear on the governance of highly capable AI systems. The program combines research production, talent development, and direct policy engagement to ensure that frontier AI is governed in ways that address catastrophic risks, with particular attention to the distinctive ways developing countries can shape outcomes in global AI governance.
Grants Received
- From the Survival and Flourishing Fund
Projects: no linked projects
People: no linked people
Key risk: A small Nairobi-based team may have limited counterfactual leverage over frontier AI policy and research quality. Alumni might plausibly reach similar opportunities without ILINA, and given the program's currently low funding needs, additional dollars are unlikely to materially change x-risk outcomes.
Details
- Last Updated: Apr 2, 2026, 9:49 PM UTC
- Created: Mar 18, 2026, 11:18 PM UTC
Case for funding: ILINA is uniquely positioned to build an African pipeline of AI safety researchers and governance specialists, bringing underrepresented Global South perspectives into multilateral AI norms. Early signs of alumni placement and policy outputs suggest the program could strengthen the legitimacy and robustness of catastrophic-risk governance.