Voluntary AI Safety Standard (10 Guardrails)¶
Australia’s Voluntary AI Safety Standard provides ten practical guardrails organisations can adopt now to deploy and use AI safely and responsibly.
The standard was developed by the Department of Industry, Science and Resources (DISR) and first published in September 2024, alongside the Government's proposals paper on mandatory guardrails for AI in high-risk settings, with which the voluntary guardrails are designed to align.
The standard is voluntary and complements existing Australian law. It provides a practical framework for organisations to manage AI safely while future regulation is considered.
Importantly, the 10 guardrails are designed to be consistent with leading international standards and frameworks, including ISO/IEC 42001 (AI management systems) and the US NIST AI Risk Management Framework.
Status update (October 2025): In October 2025 the National AI Centre released the Guidance for AI Adoption, which sets out 6 essential practices (AI6) for responsible AI governance and adoption. This guidance is described as the first update of the Voluntary AI Safety Standard (VAISS) and is now the primary reference for organisations using AI in Australia.
The 10 guardrails on this page remain fully integrated into the new guidance and are best used as a detailed control set and historical reference, especially where contracts, risk registers or external frameworks still refer to the original VAISS guardrails.
Why this matters¶
Adopting the guardrails early helps organisations build trust, resilience, and regulatory readiness. By embedding these practices now, businesses can:
- Reduce risks from bias, errors, and misuse of AI
- Strengthen transparency and customer confidence
- Position themselves ahead of future mandatory compliance requirements
- Demonstrate leadership in responsible AI adoption
The 10 Guardrails¶
1. Establish accountability
2. Implement risk management
3. Protect data
4. Ensure transparency
5. Enable human control
6. Test reliability
7. Monitor impacts
8. Ensure accountability in the supply chain
9. Maintain records
10. Support human autonomy
What the guardrails do¶
- Encourage transparency and accountability for AI systems.
- Require risk assessment, testing, and human oversight before and after deployment.
- Promote record-keeping and supplier due-diligence across the AI supply chain.
- Emphasise stakeholder engagement and ongoing monitoring as systems evolve.
How this maps to the 6 essential practices (AI6)¶
The Guidance for AI Adoption condenses the 10 VAISS guardrails into 6 essential practices. At a high level:
| AI6 practice | Closest VAISS guardrails and themes |
|---|---|
| Decide who is accountable | 1. Establish accountability; 9. Maintain records |
| Understand impacts and plan accordingly | 2. Implement risk management; 7. Monitor impacts; 10. Support human autonomy |
| Measure and manage risks | 2. Implement risk management; 3. Protect data |
| Share essential information | 4. Ensure transparency; 8. Ensure accountability in the supply chain; elements of 7 and 10 |
| Test and monitor | 6. Test reliability; 7. Monitor impacts |
| Maintain human control | 5. Enable human control; 10. Support human autonomy |
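For teams that keep their AI policies or risk registers in code or configuration, the mapping in the table above can also be held as a simple lookup structure. The sketch below uses this page's guardrail labels and hypothetical names; it is an illustration, not an official schema.

```python
# Illustrative lookup of the AI6 practices to the 10 VAISS guardrails, taken
# from the table above. Labels follow this page; names are not an official schema.
VAISS_GUARDRAILS = {
    1: "Establish accountability",
    2: "Implement risk management",
    3: "Protect data",
    4: "Ensure transparency",
    5: "Enable human control",
    6: "Test reliability",
    7: "Monitor impacts",
    8: "Ensure accountability in the supply chain",
    9: "Maintain records",
    10: "Support human autonomy",
}

AI6_TO_VAISS = {
    "Decide who is accountable": [1, 9],
    "Understand impacts and plan accordingly": [2, 7, 10],
    "Measure and manage risks": [2, 3],
    "Share essential information": [4, 8, 7, 10],  # 7 and 10 only in part
    "Test and monitor": [6, 7],
    "Maintain human control": [5, 10],
}

def guardrails_for(practice: str) -> list[str]:
    """Return the guardrail labels mapped to a given AI6 practice."""
    return [VAISS_GUARDRAILS[n] for n in AI6_TO_VAISS.get(practice, [])]
```

A call such as `guardrails_for("Maintain human control")` then returns the guardrail names to cite in a risk register entry or supplier questionnaire.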
How to use this mapping
- Use AI6 as your top-level framework when explaining AI governance to boards, executives and regulators.
- Use the 10 guardrails on this page as a control library when:
- building or updating AI policies and standards
- designing AI risk assessments and risk registers
- writing contractual clauses and supplier questionnaires.
- Where you see references to the "Voluntary AI Safety Standard" or "guardrails" in external documents, you can confidently interpret them through this AI6 lens.
How to use this in your business¶
- Adopt the 10 guardrails as acceptance criteria for any AI initiative.
- Update policies and procurement to reflect supplier alignment with the guardrails.
- Integrate testing, documentation, and oversight into your normal change-management processes.
- Review systems at least annually, or whenever there is a material change.
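The first bullet in this list, treating the guardrails as acceptance criteria, can be made concrete with a simple review check. The sketch below assumes a project record of your own design; the field names (`owner`, `evidence`) are hypothetical.

```python
# Hypothetical acceptance-criteria check: an AI initiative passes review only when
# every guardrail has a named owner and recorded evidence. Field names are
# illustrative, not a prescribed schema.
GUARDRAILS = [
    "Establish accountability", "Implement risk management", "Protect data",
    "Ensure transparency", "Enable human control", "Test reliability",
    "Monitor impacts", "Ensure accountability in the supply chain",
    "Maintain records", "Support human autonomy",
]

def acceptance_gaps(initiative: dict) -> list[str]:
    """Return the guardrails with no recorded owner or evidence for this initiative."""
    controls = initiative.get("controls", {})
    gaps = []
    for guardrail in GUARDRAILS:
        entry = controls.get(guardrail, {})
        if not (entry.get("owner") and entry.get("evidence")):
            gaps.append(guardrail)
    return gaps

# Example: block go-live while any guardrail remains unaddressed.
initiative = {
    "name": "customer-support chatbot",
    "controls": {"Protect data": {"owner": "Security lead", "evidence": "privacy impact assessment"}},
}
if acceptance_gaps(initiative):
    print("Not ready for deployment:", acceptance_gaps(initiative))
```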
SME-Scaled Implementation Approach¶
While the 10 guardrails apply to all organisations, SMEs can adopt them at different maturity levels:
Guardrail 1: Establish accountability
- Minimum: Designate an AI responsible person
- Better: Create simple AI governance policy
- Best: Regular board/leadership AI updates
Guardrail 2: Implement risk management
- Minimum: Use SAAM risk assessment tool
- Better: Quarterly risk reviews
- Best: Integrated risk management system
Guardrail 3: Protect data
- Minimum: Follow existing cybersecurity practices
- Better: AI-specific data controls
- Best: Enhanced encryption and access controls
Guardrail 4: Ensure transparency
- Minimum: "Powered by AI" labels
- Better: Explain AI role in decisions
- Best: Full algorithmic transparency
Guardrail 5: Enable human control
- Minimum: Override capability for all AI decisions
- Better: Human review of significant decisions
- Best: Human-in-the-loop for all critical processes
Guardrail 6: Test reliability
- Minimum: Pre-deployment testing
- Better: Monthly performance monitoring
- Best: Continuous testing and validation
Guardrail 7: Monitor impacts
- Minimum: Track errors and complaints
- Better: Proactive impact assessment
- Best: Real-time monitoring dashboard
Guardrail 8: Ensure accountability in the supply chain
- Minimum: Vendor compliance check
- Better: Contractual AI requirements
- Best: Regular vendor audits
Guardrail 9: Maintain records
- Minimum: Keep AI decision logs
- Better: Comprehensive documentation
- Best: Automated compliance reporting
Guardrail 10: Support human autonomy
- Minimum: Opt-out options
- Better: User control preferences
- Best: Full user agency over AI interactions
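As one concrete example, the "minimum" level for Guardrail 9 above (keep AI decision logs) can start as a plain append-only log. A minimal sketch, assuming a local JSONL file and field names chosen for illustration:

```python
# Minimal AI decision log (Guardrail 9, "minimum" level): append one JSON line per
# AI-assisted decision. The file location and field names are illustrative only.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_decision_log.jsonl")

def log_ai_decision(system: str, input_summary: str, output_summary: str,
                    human_reviewer: str | None = None) -> None:
    """Append one decision record; keep the summaries free of personal information."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "input_summary": input_summary,
        "output_summary": output_summary,
        "human_reviewer": human_reviewer,  # also useful evidence for Guardrail 5
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example usage
log_ai_decision(
    system="loan-triage-assistant",
    input_summary="standard application, no manual notes",
    output_summary="flagged for manual review",
    human_reviewer="credit officer",
)
```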
Summary Table¶
| Guardrail | Minimum Requirement | Better Practice | Best Practice |
|---|---|---|---|
| 1. Establish accountability | Designate responsible person | Simple governance policy | Regular board/leadership AI updates |
| 2. Risk management | Use SAAM risk tool | Quarterly reviews | Integrated risk management system |
| 3. Protect data | Follow cybersecurity basics | AI-specific controls | Enhanced encryption & access controls |
| 4. Transparency | “Powered by AI” labels | Explain role in decisions | Full algorithmic transparency |
| 5. Human control | Override capability | Human review of major decisions | Human-in-the-loop for critical processes |
| 6. Reliability testing | Pre-deployment testing | Monthly monitoring | Continuous testing & validation |
| 7. Monitor impacts | Track errors & complaints | Proactive assessments | Real-time monitoring dashboards |
| 8. Supply chain accountability | Vendor compliance check | Contractual AI requirements | Regular vendor audits |
| 9. Maintain records | Keep decision logs | Comprehensive documentation | Automated compliance reporting |
| 10. Human autonomy | Opt-out options | User control preferences | Full user agency over interactions |
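In the same spirit, the "minimum" level for Guardrail 5 (an override capability for AI decisions) can be expressed as a simple gate in application code: the AI output is only a recommendation until a person confirms or overrides it. The threshold and names below are illustrative assumptions, not part of the standard.

```python
# Illustrative human-override gate (Guardrail 5, "minimum" level): the AI result is
# treated as a recommendation, and a person can accept, change, or reject it.
from dataclasses import dataclass

@dataclass
class AiRecommendation:
    value: str          # e.g. "approve", "refer", "decline"
    confidence: float   # model-reported confidence, 0.0 to 1.0

def final_decision(rec: AiRecommendation, human_override: str | None = None) -> str:
    """A human override always wins; otherwise low-confidence cases go to a person."""
    if human_override is not None:
        return human_override
    if rec.confidence < 0.8:  # the threshold is a policy choice, not a standard value
        return "refer-to-human"
    return rec.value

# Example: the operator overrides an automated "decline"
print(final_decision(AiRecommendation("decline", 0.91), human_override="refer-to-human"))
```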
Regulatory outlook and the future of the guardrails¶
As at December 2025, the position on AI-specific regulation in Australia has shifted.
The National AI Plan confirms that the Government will, for now, rely on existing technology-neutral laws and sector regulators (for example, privacy, consumer law, financial services, safety, anti-discrimination) to manage AI-related harms. Earlier proposals to introduce standalone mandatory guardrails and a dedicated AI Act for high-risk AI applications have been paused.
Instead:
- the Guidance for AI Adoption (AI6) provides voluntary but strongly endorsed governance expectations
- a new AI Safety Institute will be established from 2026 to monitor, test and advise on emerging AI capabilities, risks and harms
- regulators will continue to apply and, where necessary, adapt existing frameworks to AI use cases.
The 2024 consultation on mandatory guardrails and high-risk AI remains an important signal of how future regulation might be framed if the Government decides that existing laws are not sufficient. The principles-based definition of "high-risk AI" and the examples of sensitive domains (healthcare, employment, financial services, critical infrastructure, government decision-making, etc.) remain useful reference points for organisations when they assess and prioritise their own AI risk.
In practical terms, organisations should:
- treat the 10 guardrails and the AI6 practices as best-practice voluntary standards rather than imminent hard law
- focus on integrating these controls into existing privacy, risk, safety, security and compliance processes
- monitor future updates from the AI Safety Institute, NAIC and sector regulators, which may influence how strongly certain guardrails are expected in particular industries.
Government's principles-based definition of "high-risk AI"¶
In its earlier consultation on mandatory guardrails for high-risk settings, the Australian Government proposed a principles-based definition of "high-risk AI".
An AI application may be considered high-risk if it has a high likelihood of causing material harm in one or more of the following areas:
- human rights or freedoms
- health and safety
- legal rights or obligations
- democratic processes
- environmental outcomes
- broader societal impacts.
Risk was to be assessed based on context, severity and scale, rather than a static list of application types.
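For internal triage, this principles-based approach can be turned into a short screening step: record which harm areas are plausibly engaged and rate severity and scale. The scoring rule below is an illustrative assumption for prioritisation only, not the Government's methodology.

```python
# Illustrative high-risk screening sketch based on the principles-based approach above.
# The harm areas come from the consultation; the scoring rule is an internal assumption.
HARM_AREAS = [
    "human rights or freedoms",
    "health and safety",
    "legal rights or obligations",
    "democratic processes",
    "environmental outcomes",
    "broader societal impacts",
]

def screen_use_case(engaged_areas: list[str], severity: int, scale: int) -> str:
    """Return a provisional tier from the engaged harm areas and 1-3 severity/scale ratings."""
    unknown = set(engaged_areas) - set(HARM_AREAS)
    if unknown:
        raise ValueError(f"Unrecognised harm areas: {sorted(unknown)}")
    if not engaged_areas:
        return "lower-risk: apply normal governance"
    if severity >= 3 or (severity >= 2 and scale >= 2):
        return "potentially high-risk: escalate for full assessment"
    return "elevated: document mitigations and monitor"

# Example: an internal resume-screening pilot touching rights and legal obligations
print(screen_use_case(["human rights or freedoms", "legal rights or obligations"],
                      severity=2, scale=2))
```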
Illustrative examples from public commentary¶
The following examples are not part of any official legal definition. They are drawn from media, legal analyses, and the 2024 Senate inquiry into AI, which suggested that some systems and general-purpose models might warrant stronger oversight:
- healthcare diagnosis or treatment
- employment decisions (hiring, firing, promotion)
- financial services (loans, insurance, credit scoring)
- government service delivery and eligibility assessments
- critical infrastructure operation and security
- legal or quasi-legal decisions.
By contrast, many common "everyday" uses of AI (marketing automation, customer service chatbots, internal productivity tools, content generation and basic analytics) are generally treated as lower-risk, but must still comply with existing law and good-practice governance.
Further Reading & Official Resources¶
- Voluntary AI Safety Standard – Overview and guardrails (Department of Industry)
- The 10 guardrails – full guidance and examples (Department of Industry)
- Legal landscape for AI in Australia (Department of Industry)
- Consultation: Mandatory guardrails for high-risk settings (Australian Government)