# AI Use Policy Template

Purpose: Establish clear guidelines for safe and responsible AI use in your organisation | Audience: All staff, contractors and third parties | Format: Policy document template
## How to Use This Template
- Copy the policy text below into your document system
- Find and replace [Organisation Name] throughout
- Customise Section 9 (Roles & Responsibilities) for your structure
- Workshop key sections with stakeholders (especially Section 6: Prohibited Use)
- Review with legal/compliance team
- Obtain executive and legal sign-off
- Plan training and communication rollout
- Publish and integrate with existing governance documentation
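The find-and-replace step above can be scripted. This is an illustrative sketch only: the filename `ai-use-policy.md`, the organisation name and the date below are example values, not part of the template.

```shell
# Illustrative sketch: substitute the template's placeholders in a working copy.
# The filename and the example values are assumptions, not part of the template.
printf '%s\n' 'Applies to: staff of [Organisation Name]' \
              'Effective date: [Insert Date]' > ai-use-policy.md

# Replace both placeholders in place (-i.bak keeps a backup copy).
sed -i.bak \
  -e 's/\[Organisation Name\]/Acme Pty Ltd/g' \
  -e 's/\[Insert Date\]/1 July 2025/g' \
  ai-use-policy.md

cat ai-use-policy.md   # placeholders are now filled in
```

In practice you would copy your own policy file rather than generating one with `printf`; the `sed` invocation is the relevant part.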
This template provides a complete AI Use Policy for Australian businesses seeking to adopt AI responsibly. It aligns with the Australian Government's Voluntary AI Safety Standard (VAISS) and references ISO/IEC 42001:2023.
## AI Use Policy (Template)

**Customisation Required**

Replace [Organisation Name] and [Insert Date] throughout this template.

Effective date: [Insert Date] | Review cycle: [e.g. Annually] | Applies to: All employees, contractors and third parties who design, procure, operate, or interact with AI systems on behalf of [Organisation Name].
## 1. Purpose
This policy governs AI use at [Organisation Name]. It sets expectations for how AI should support organisational goals, protect people and align with applicable laws.
This policy aims to:
- Support organisational objectives
- Comply with Australian laws and standards
- Protect privacy, data and intellectual property (IP)
- Align with the Australian Government's Voluntary AI Safety Standard (VAISS) - 10 Guardrails
- Align with ISO/IEC 42001:2023
## 2. Scope
This policy applies across the organisation wherever AI technologies are developed, purchased, or used. It covers both internal and external use cases, ensuring that all applications of AI are appropriately governed.
In scope:
- All AI projects, pilots and procurements
- All AI-generated outputs used for decision-making or external publication
- Any third-party AI services or APIs integrated into organisational workflows
## 3. Terms & Definitions
To ensure consistency and clarity, the following definitions apply within this policy:
- Artificial intelligence (AI): Computer systems that perform tasks normally requiring human intelligence (e.g., text generation, decision support)
- AI System: Any software, service, or model that uses AI to produce outputs or assist in decisions
- Human oversight: The requirement that a human review, and remain accountable for, consequential decisions informed by AI
- Personal Information: Information about an identifiable individual, as defined under the Privacy Act 1988 (Cth)
- Intellectual Property (IP): Creations of the mind (trade secrets, code, designs, works) owned or licensed by the organisation
- High-risk AI Use: Applications that may significantly affect people's rights, safety, or financial position (e.g. HR, medical, or safety-critical systems)
## 4. Principles
The organisation is committed to using AI in a way that is safe, transparent and aligned with community expectations. All AI systems and services must reflect the following principles:
- ✓ Have a clear purpose and benefit
- ✓ Be governed responsibly and accountably
- ✓ Apply risk controls proportionate to impact
- ✓ Embed privacy, security, and IP protection by design
- ✓ Operate safely, reliably and securely
- ✓ Be tested and evaluated before deployment
- ✓ Support social and environmental sustainability
- ✓ Maintain human oversight and contestability
- ✓ Be transparent and explainable
- ✓ Be subject to ongoing monitoring and improvement
## 5. Acceptable Use
AI technologies may be used where they support the organisation's objectives, comply with relevant laws and can be applied responsibly. Acceptable use requires staff to follow the conditions below:
- ✅ Align AI use with organisational goals and policies
- ✅ Comply with applicable laws, standards and ethics
- ✅ Ensure humans remain accountable for significant decisions
- ✅ Protect personal data and organisational IP
- ✅ Document purpose, data sources and limitations
- ✅ Use only approved and secure AI tools for sensitive workloads
## 6. Prohibited Use
To manage risks and maintain compliance, certain uses of AI are not permitted under any circumstances. These restrictions help safeguard the organisation and its stakeholders.
The following are strictly prohibited:
- ❌ Any illegal activity or violation of Australian law
- ❌ Automated decision-making without required human oversight
- ❌ Using unapproved or unvetted AI vendors
- ❌ Uploading confidential or IP-protected data into public AI tools
- ❌ Training AI models on datasets without appropriate rights or licences
## 7. Privacy, Intellectual Property & Data Rights

Respecting privacy and protecting intellectual property are central to responsible AI adoption. AI use must safeguard both personal information and organisational assets, while also respecting the rights of third parties.
The organisation requires that:
- All AI systems comply with the Privacy Act 1988 (Cth)
- Both personal information and organisational IP are protected at all times
- Third-party IP rights are respected when using datasets, models, or outputs
- Copyright or licence status is confirmed before publishing AI-generated content
## 8. Environmental & Sustainability Considerations
The organisation commits to considering the environmental impact of AI systems:
- 🌱 Energy consumption of AI models will be monitored
- 🌱 Preference for energy-efficient AI solutions where feasible
- 🌱 Regular review of computational resource usage
- 🌱 Documentation of sustainability measures in AI projects
## 9. Roles & Responsibilities
Effective governance requires clear accountabilities. Different roles within the organisation carry specific responsibilities for AI oversight and use.
| Role | Responsibility |
|---|---|
| Board or executive | Provide oversight of AI risk and ensure alignment to strategy |
| AI Governance Lead | Maintain the AI register, policies and guardrail compliance |
| Project Owners | Conduct risk assessments, testing and monitoring |
| All Staff | Complete AI literacy and security awareness training; follow this policy |
| Procurement Team | Ensure AI vendor evaluation follows approved processes |
| Training Coordinator | Schedule and track mandatory AI training completion |
| Insurance/Legal | Review liability and insurance coverage for AI deployments |
## 10. Compliance & Review
Compliance with this policy is mandatory. Breaches will be addressed in line with organisational disciplinary procedures or contractual terms.
This policy will be reviewed at least annually, or sooner if required by law, organisational change, or updates to standards (e.g. ISO/IEC 42001:2023).
## 11. Related Standards & References
This policy is guided by relevant standards and legislation that inform responsible AI practice. These include:
- Australian Government Voluntary AI Safety Standard (2024) – 10 Guardrails
- ISO/IEC 42001:2023 – AI Management System (AIMS)
- ISO/IEC 23894:2023 – AI Risk Management
- NIST AI Risk Management Framework 1.0 (2023) – U.S. AI risk guidance
- Privacy Act 1988 (Cth) – Australian Privacy Principles (APPs)
- Fair Work Act 2009 (Cth) – Employee data and workplace rights considerations
## 12. Quick Guide – Do's & Don'ts

**Quick Reference for Staff**
Do:
- ✅ Use only approved AI systems
- ✅ Keep sensitive data secure
- ✅ Double-check AI outputs before sharing externally
- ✅ Disclose AI use when interacting with customers if relevant
- ✅ Complete mandatory AI awareness training before using AI tools
- ✅ Follow approved procurement processes for new AI tools
- ✅ Consider cross-border data transfer requirements when using cloud-based AI
Don't:
- ❌ Paste confidential or client data into public AI tools
- ❌ Rely on AI for final decisions without human oversight
- ❌ Use unapproved AI vendors or unlicensed data
- ❌ Assume AI-generated content is automatically free of copyright
### Common Scenarios
| Scenario | Allowed? | Requirements |
|---|---|---|
| Using ChatGPT for draft emails | ✅ Yes | Review before sending, no confidential data |
| Customer data in public AI tools | ❌ No | Privacy breach risk |
| AI for research summaries | ✅ Yes | Fact-check all outputs |
| Automated hiring decisions | ⚠️ With controls | Human review required, bias testing mandatory |
| AI for code suggestions | ✅ Yes | Security testing required |
| Medical/legal advice via AI | ❌ No | Professional oversight essential |
## Alignment with Australian Standards

This policy implements key requirements from Australian AI frameworks:

**Standards Compliance**
✓ Decide who is accountable — Section 9 assigns clear responsibility to the Board, AI Governance Lead and Project Owners
✓ Maintain human control — Sections 5 and 6 mandate human oversight for consequential decisions
✓ Share essential information — Sections 1 and 4 communicate the organisation's AI commitments
✓ Guardrail 1 – Accountability — Section 9 establishes clear lines of responsibility
✓ Guardrail 3 – Data protection — Section 7 mandates Privacy Act compliance and IP protection
✓ Guardrail 5 – Human control — Section 6 explicitly prohibits automated decision-making without oversight
✓ Guardrail 6 – Transparency — Section 4 commits to transparent and explainable AI
## Next Steps
Where to go from here:
- 📋 Assess readiness: AI Readiness Checklist
- ⚠️ Evaluate risks: AI Risk Assessment Checklist
- 📊 Track projects: AI Project Register
Related templates:
- 🔄 AI Change Management — Plan organisational rollout
- 🗺️ AI Implementation Roadmap — Deploy AI responsibly
- 📝 AI Vendor Evaluation — Assess third-party tools
**Disclaimer & Licence**
Disclaimer: This template provides best practice guidance for Australian organisations. SafeAI-Aus has exercised care in preparation but does not guarantee accuracy, reliability, or completeness. Organisations should adapt to their specific context and may wish to seek advice from legal, governance, or compliance professionals before formal adoption.
Licence: Licensed under Creative Commons Attribution 4.0 (CC BY 4.0). You are free to copy, adapt and redistribute with attribution: "Source: SafeAI-Aus (safeaiaus.org)"