The Australian AI Safety Institute (AISI) was established by the Australian Government to provide expert capability for monitoring, testing, and sharing information on emerging AI technologies and their associated risks and harms. Operating as an advisory body within the Department of Industry, Science and Resources, it is not a regulator but rather a technical and coordination hub that works alongside existing regulators such as the OAIC, ACCC, and eSafety Commissioner. The AISI collaborates with Australia's National AI Centre and participates in the International Network for Advanced AI Measurement, Evaluation and Science alongside counterparts in the UK, US, and other countries. It received AUD $29.8 million over four years from 2025-26 to support its establishment.
Funding Details
- Annual Budget: $7,450,000
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: -
Theory of Change
The AISI reduces AI-related harm by providing government with credible, independent technical capability to assess frontier AI models before and after deployment. By identifying risks early and sharing findings with ministers, regulators, and international partners, it enables timely policy and regulatory responses. Participating in the global network of AI safety institutes multiplies its impact by aligning Australia's safety standards with international norms. Voluntary pre-deployment evaluations incentivize AI developers to build safer systems, while ongoing risk monitoring ensures the government can respond to harms as they emerge in practice.
Grants Received: no grants recorded
Projects: no linked projects
People: no linked people
Details
- Last Updated: Apr 7, 2026, 8:29 PM UTC
- Created: Apr 7, 2026, 6:27 PM UTC