University of Louisville (Dr. Roman Yampolskiy's Cyber Security Lab)
The Cyber Security Lab at the University of Louisville's J.B. Speed School of Engineering is directed by Dr. Roman V. Yampolskiy, a tenured associate professor widely credited with coining the term 'AI safety' in a 2011 publication. The lab conducts research at the intersection of AI safety, cybersecurity, and digital forensics, with a current focus on the theoretical limits to explainability, predictability, and controllability of advanced intelligent systems. Yampolskiy has authored over 200 publications and several influential books, including 'AI: Unexplainable, Unpredictable, Uncontrollable' (2024), and his work has been cited by over 10,000 scientists worldwide. The lab has received funding from NSF, NSA, DHS, EA Ventures, and the Future of Life Institute.
Funding Details
- Annual Budget
- -
- Monthly Burn Rate
- -
- Current Runway
- -
- Funding Goal
- -
- Funding Raised to Date
- -
- Fiscal Sponsor
- -
Theory of Change
Yampolskiy's research group operates on the premise that advanced AI systems are fundamentally uncontrollable, and that establishing the theoretical limits of AI controllability, explainability, and predictability is essential for informing policy and guiding the development of AI safety measures. By producing rigorous academic research demonstrating impossibility results and fundamental limitations in AI control, the lab aims to provide the scientific basis for arguments in favor of extreme caution in AI development, including potential moratoriums on building artificial superintelligence. The group also contributes to public awareness through books, media appearances, and speaking engagements, helping policymakers and the public understand the severity of existential risks from advanced AI.
Grants Received
from Survival and Flourishing Fund
Projects – no linked projects
People – no linked people
Key risk: The lab's focus on fundamental uncontrollability and its advocacy for moratoria is strategically contentious and may yield limited actionable alignment research or policy wins. Given Yampolskiy's existing visibility and low funding needs, the marginal counterfactual impact of additional funding could be small.
Details
- Last Updated
- Apr 2, 2026, 10:54 PM UTC
- Created
- Mar 18, 2026, 11:18 PM UTC
Case for funding: Yampolskiy's lab is one of the few academic groups publishing rigorous impossibility results on AI controllability along with detailed AI accident analyses. He can convert that scholarship into outsized policy and public influence through books and major media platforms (Lex Fridman, Joe Rogan), and funding PhD students offers high leverage per dollar.