Alignment of Complex Systems Research Group
The Alignment of Complex Systems Research Group (ACS) is an interdisciplinary research group based at the Center for Theoretical Study at Charles University in Prague. ACS studies multi-agent systems composed of humans and advanced AI, aiming to develop foundational understanding of how advanced AI systems are embedded in and interact with humans and human institutions. Their research spans gradual disempowerment, AI sociology and ecosystems of intelligence, hierarchical agency theory, realistic models of human cognition and values, and a broader science of intelligent systems. The group draws on methods from machine learning, information theory, network theory, active inference, philosophy of science, ecology, and evolutionary biology.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: Epistea z.s.
Theory of Change
ACS believes that AI safety cannot be solved by studying individual AI agents in isolation. Instead, the critical challenge is understanding how ecosystems of intelligence, composed of diverse AI systems, human-AI teams, and human institutions, evolve and interact at multiple scales. By developing foundational conceptual frameworks drawing on complex systems theory, evolutionary biology, active inference, and other disciplines, ACS aims to identify and mitigate systemic risks such as gradual disempowerment, where human influence over societal systems is incrementally eroded. Their theory of change posits that rigorous interdisciplinary research into hierarchical agency and multi-scale alignment will provide the conceptual tools needed to design governance strategies and alignment approaches that preserve human agency as AI systems become more deeply integrated into civilization.
Grants Received
- from Survival and Flourishing Fund
- from Survival and Flourishing Fund
- from Survival and Flourishing Fund
- from Survival and Flourishing Fund
Projects: no linked projects
People: no linked people
Details
- Last Updated: Apr 2, 2026, 9:59 PM UTC
- Created: Mar 18, 2026, 11:18 PM UTC
Case for funding: ACS is uniquely focused on civilization-scale dynamics of AI-human ecosystems, as evidenced by work such as Gradual Disempowerment and formal models of hierarchical agency and AI sociology. The group also runs HAAISS to build talent. Together, these efforts offer high-leverage conceptual tools for governance and alignment that most agent-centric safety work neglects.