Guidance for AI Adoption (AI6) – 6 Essential Practices
Status: Federal government guidance in effect from 21 October 2025. Non-binding, but now the primary reference for responsible AI adoption in Australia.
What is the Guidance for AI Adoption?
In October 2025 the National AI Centre (NAIC), within the Department of Industry, Science and Resources, released the Guidance for AI Adoption. It sets out six essential practices (“AI6”) for responsible AI governance and adoption by organisations operating in Australia.
This guidance updates and replaces the Voluntary AI Safety Standard as the main reference for business. The underlying 10 voluntary guardrails have been retained and integrated into the new framework instead of being discarded.
There are two versions of the guidance:
- Foundations – for organisations getting started with AI or using AI in low-risk ways.
- Implementation practices – detailed step-by-step guidance for more mature or higher-risk AI use.
You can download both versions, along with an AI screening tool, AI policy template and AI system register template, from the Department of Industry website (industry.gov.au).
The six essential practices (AI6) in plain language
1. Decide who is accountable
Every AI use case should have clear owners.
- Nominate a senior executive as the accountable official for AI across the organisation.
- Define who is responsible for approving, operating and monitoring each AI system.
- Clarify decision rights between IT, business units, risk/compliance and vendors.
- Make sure accountability appears in policies, job descriptions and governance forums (e.g. risk committee, digital steering group).
For SMEs: even if you only have a few people, write down who:
- signs off AI use
- checks risks
- responds if something goes wrong.
2. Understand impacts and plan accordingly
Before deploying AI, understand what could go wrong and who could be affected.
- Identify people and groups who may be impacted (customers, staff, suppliers, vulnerable communities).
- Consider impacts on privacy, safety, fairness, security, employment and reputation.
- Classify each use case as lower-risk or higher-risk based on context and consequences.
- Plan mitigations for high-impact scenarios, including fallback processes if the AI fails.
For SMEs: use a simple AI screening / intake form to document purpose, data, users and potential harms before you start building or buying.
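As an illustration only, the kind of screening record described above can be sketched as a small data structure. The field names and the simple risk rule here are assumptions for the sketch, not the format of the NAIC screening tool:

```python
from dataclasses import dataclass

@dataclass
class AIUseCaseScreening:
    """One intake record per proposed AI use case (field names are illustrative)."""
    name: str
    purpose: str
    data_used: list[str]
    users_affected: list[str]
    potential_harms: list[str]
    higher_risk: bool = False  # set True for impacts on safety, rights or vulnerable groups

    def classify(self) -> str:
        # Illustrative rule: any recorded harm, or an explicit flag, makes it higher-risk.
        return "higher-risk" if self.higher_risk or self.potential_harms else "lower-risk"

chatbot = AIUseCaseScreening(
    name="Customer FAQ chatbot",
    purpose="Answer routine product questions",
    data_used=["public product documentation"],
    users_affected=["customers"],
    potential_harms=[],
)
print(chatbot.classify())  # lower-risk
```

Even this much structure forces the questions that matter: what the system is for, whose data it touches, and who could be harmed.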
3. Measure and manage risks
Treat AI use like any other material business risk.
- Add AI-related risks to your enterprise risk register (privacy, cyber, safety, conduct, IP, regulatory).
- Align to existing frameworks (privacy management, cyber security, WHS, consumer law, financial services).
- Define risk appetite for AI in different areas (e.g. marketing vs safety-critical operations).
- Record controls and safeguards and review them regularly.
For SMEs: start with a small standard set of AI risks and controls and reuse them across systems rather than reinventing the wheel each time.
4. Share essential information
Be open about how you use AI and what it means for people.
- Tell people when they are interacting with, or significantly affected by, an AI system.
- Provide plain-language explanations of what the system does and its limitations.
- Maintain an AI system register covering purpose, data sources, key risks, controls and owners.
- Be transparent with suppliers and partners about expectations, and seek equivalent transparency in return.
For SMEs: a simple AI register and standard wording in privacy notices, contracts and internal policies will go a long way.
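A minimal AI system register of the kind described above can be kept as a simple table of records. The column names and entries below are illustrative assumptions (adapt them to the NAIC register template); the check at the end shows how a register makes gaps visible, here flagging any system without a named owner:

```python
# Illustrative AI system register: one record per system.
register = [
    {
        "system": "Resume screening assistant",
        "purpose": "Shortlist job applications for human review",
        "data_sources": ["applicant CVs"],
        "key_risks": ["bias against protected groups"],
        "controls": ["human review of all rejections", "quarterly bias testing"],
        "owner": "Head of People",
    },
    {
        "system": "Invoice data extraction",
        "purpose": "Extract fields from supplier invoices",
        "data_sources": ["supplier invoices"],
        "key_risks": ["extraction errors"],
        "controls": ["spot checks on 5% of invoices"],
        "owner": "",  # gap: no accountable owner recorded yet
    },
]

# Completeness check tying back to practice 1: every system needs a named owner.
missing_owner = [r["system"] for r in register if not r["owner"]]
print(missing_owner)  # ['Invoice data extraction']
```

A spreadsheet with the same columns serves the same purpose; what matters is that purpose, data, risks, controls and owner are recorded in one place.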
5. Test and monitor
AI systems must be tested before use and monitored over time.
- Test systems against accuracy, robustness, bias, security and usability criteria before going live.
- Use realistic data and scenarios, including edge cases and stress tests.
- Set up ongoing monitoring and periodic review, not “set and forget”.
- Define incident thresholds and escalation paths (when to pause, roll back or retire a system).
For SMEs: for each important AI system, agree a short list of “things we will measure” and a cadence for checking them.
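The short list of "things we will measure" can be sketched as metrics with thresholds and a review cadence. The metric names and limits below are illustrative assumptions; the point is that each measure has an agreed threshold that triggers escalation when breached:

```python
# Illustrative monitoring plan for one AI system: each metric has a current
# value, a threshold, and a review cadence.
checks = {
    "answer_accuracy": {"value": 0.93, "minimum": 0.90, "cadence": "weekly"},
    "complaint_rate": {"value": 0.04, "maximum": 0.02, "cadence": "monthly"},
}

def needs_escalation(check: dict) -> bool:
    """True when a metric falls outside its agreed threshold."""
    if "minimum" in check and check["value"] < check["minimum"]:
        return True
    if "maximum" in check and check["value"] > check["maximum"]:
        return True
    return False

breaches = [name for name, c in checks.items() if needs_escalation(c)]
print(breaches)  # ['complaint_rate']
```

What happens on a breach (pause, roll back, retire) should be written into the escalation path agreed under this practice, not decided in the moment.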
6. Maintain human control
People remain responsible for decisions and outcomes.
- Decide where humans must remain “in the loop” or “on the loop” (reviewing outputs, overruling decisions).
- Ensure staff using AI have training, guidance and authority to question or override it.
- Avoid over-reliance on AI, especially in high-stakes domains (safety, financial hardship, employment, access to services).
- Make it easy for customers and staff to challenge or appeal outcomes influenced by AI.
For SMEs: be explicit about which decisions AI can never make on its own, and build that into processes and systems.
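One way to build "decisions AI can never make on its own" into a system is a simple gate: a blocklist of decision types that always require human approval. The decision names below are illustrative assumptions:

```python
# Illustrative human-in-the-loop gate: decisions on this list can never be
# actioned on an AI recommendation alone.
HUMAN_ONLY_DECISIONS = {"deny credit", "terminate employment", "refuse essential service"}

def apply_decision(decision: str, ai_recommendation: str, human_approved: bool = False) -> str:
    """Return the actioned outcome, escalating human-only decisions without approval."""
    if decision in HUMAN_ONLY_DECISIONS and not human_approved:
        return "escalated to human reviewer"
    return ai_recommendation

print(apply_decision("deny credit", "reject application"))
# escalated to human reviewer
```

Encoding the rule in the workflow, rather than relying on staff to remember it, is what makes the control reliable.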
How AI6 relates to the 10 voluntary guardrails
The Guidance for AI Adoption is described by government as the first update of the Voluntary AI Safety Standard (VAISS). It:
- Condenses the 10 guardrails into 6 practices.
- Removes duplicated language and expands guidance for developers as well as deployers.
- Provides a “VAISS × implementation practices crosswalk” showing how every VAISS practice maps into AI6 actions.
On SafeAI-Aus:
- The “Voluntary AI Safety Standard (10 Guardrails)” page is maintained as a detailed control catalogue and historical reference.
- This AI6 page provides the current, higher-level framework that Australian organisations are encouraged to follow.
If your organisation has already aligned to the 10 guardrails, you do not need to start again. Instead:
- Map existing policies, controls and risk registers into the 6 practices.
- Use NAIC’s implementation guidance to fill gaps.
- Update internal documentation to reference AI6 as the primary framework.
Where AI6 sits in the broader Australian landscape
AI6 is designed to align with:
- The Australian Government’s AI Ethics Principles.
- Existing technology-neutral laws and regulators (privacy, consumer law, workplace, safety, anti-discrimination, financial services, etc.).
- International AI governance frameworks such as ISO/IEC 42001 and the NIST AI Risk Management Framework.
The National AI Plan, released in December 2025, confirms that Australia will, at least in the short term, rely on existing laws plus voluntary guidance (including AI6) rather than introducing a standalone AI Act or immediate mandatory guardrails. This approach will be supported by a new AI Safety Institute from 2026.
How to use this page
For most Australian organisations, a practical approach is:
- Adopt AI6 as your top-level framework for AI governance and risk management.
- Use the SafeAI-Aus toolkit (AI use policy, AI risk register, vendor checklist and project intake templates) to operationalise the 6 practices.
- Refer to the 10 guardrails when you need more granular control statements or when external documents still reference VAISS.
- Review annually against future updates from NAIC, sector regulators, privacy and consumer regulators, and the AI Safety Institute.