AI Incident Report Form¶
Purpose: Standardised form to document, track and respond to AI-related incidents | Audience: All staff, IT teams, security officers, risk managers | Time: 10-30 minutes per incident
AI systems can create new opportunities for businesses, but they also introduce risks — from unexpected errors and harmful outputs to privacy breaches and security incidents.
Having a clear process for recording and reporting AI incidents is an essential part of your organisation's risk and security management practices. This ensures that issues are identified quickly, lessons are learned and safeguards are improved.
When to Use This Form¶
Use this form whenever an AI-related incident occurs in your business. The information you capture will help:
- ⚡ Contain and mitigate risks early
- 🛡️ Protect staff, customers and your organisation's reputation
- ⚖️ Meet your obligations under Australian law and AI guardrails
- ✅ Build trust by showing responsible use of AI
AI Incident Report Form (Template)¶
Date of Report: ____________________
Reported By: ____________________
Contact Details: ____________________
Incident Severity: [ ] Critical [ ] High [ ] Medium [ ] Low
Severity Definitions:
- Critical: Immediate threat to safety, major data breach, or system-wide failure
- High: Significant impact on operations or multiple users affected
- Medium: Limited impact, workaround available
- Low: Minor issue, minimal impact
1. Incident ID & Date¶
Incident identification number and date/time of occurrence
☐ Evidence attached
Time of Discovery: __________
Time of Occurrence (if different): __________
Reporting Timeline:
[ ] Within 4 hours (Critical)
[ ] Within 24 hours (High)
[ ] Within 72 hours (Medium/Low)
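For teams that track incidents in software, the reporting timelines above can be expressed as a simple severity-to-deadline lookup. This is a minimal sketch, assuming deadlines are measured from the time of discovery; the function and constant names are illustrative, not part of the form:

```python
from datetime import datetime, timedelta

# Reporting deadlines keyed by severity, mirroring the
# "Reporting Timeline" checkboxes above.
REPORTING_DEADLINES = {
    "Critical": timedelta(hours=4),
    "High": timedelta(hours=24),
    "Medium": timedelta(hours=72),
    "Low": timedelta(hours=72),
}

def report_due_by(severity: str, discovered_at: datetime) -> datetime:
    """Return the latest time a report may be submitted for an incident."""
    return discovered_at + REPORTING_DEADLINES[severity]
```

For example, a Critical incident discovered at 9:00 am must be reported by 1:00 pm the same day.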
2. Reporter Details¶
Name, role and contact information of person reporting the incident
☐ Evidence attached
3. AI System Details¶
System name, version, vendor and deployment environment
☐ Evidence attached
4. Incident Description¶
Description of what happened, including inputs, outputs and observed issues
☐ Evidence attached
5. Impact Assessment¶
Actual or potential harm (individuals, organisation, or public)
☐ Evidence attached
Number of Users/Customers Affected: __________
Estimated Financial Impact: $__________
Regulatory Reporting Required: [ ] Yes [ ] No [ ] Under Review
Media/Reputation Risk: [ ] High [ ] Medium [ ] Low [ ] None
6. Data Involved¶
Personal data, sensitive information, or intellectual property affected
☐ Evidence attached
7. Immediate Actions Taken¶
Containment, mitigation, or workaround steps
☐ Evidence attached
8. Root Cause (if known)¶
Likely cause (e.g., model error, data bias, misuse)
☐ Evidence attached
9. Follow-up Actions¶
Steps to prevent recurrence or improve safeguards
☐ Evidence attached
10. Review & Approval¶
Reviewer/approver name, role and signature
☐ Evidence attached
11. Lessons Learned¶
What worked well in the response: __________
What could be improved: __________
Preventive measures identified: __________
☐ Evidence attached
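Organisations that store these reports in an incident register or database can mirror the form's sections as a simple record type. This is a minimal sketch; all field names are illustrative, not prescribed by the form:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AIIncidentReport:
    """Minimal record mirroring sections 1-11 of the form."""
    incident_id: str            # Section 1: Incident ID
    discovered_at: datetime     # Section 1: Time of Discovery
    reporter: str               # Section 2: Reporter Details
    severity: str               # "Critical" | "High" | "Medium" | "Low"
    system_name: str            # Section 3: AI System Details
    description: str            # Section 4: Incident Description
    impact: str = ""            # Section 5: Impact Assessment
    data_involved: str = ""     # Section 6: Data Involved
    immediate_actions: str = "" # Section 7: Immediate Actions Taken
    root_cause: str = ""        # Section 8: Root Cause (if known)
    follow_up_actions: str = "" # Section 9: Follow-up Actions
    approved_by: str = ""       # Section 10: Review & Approval
    lessons_learned: str = ""   # Section 11: Lessons Learned
```

Fields for sections completed after the initial report (impact, root cause, approval) default to empty so a record can be created as soon as the incident is discovered and filled in as the response progresses.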
Standards Alignment¶
This form aligns with:
- Australian AI Guardrails (2024)
- Privacy Act 1988 (APPs)
- International standards (ISO/IEC 42001, ISO/IEC 23894, NIST AI RMF)
Next Steps¶
- Submit this report to your designated AI governance officer, IT/security team, or senior manager.
- Review lessons learned and update safeguards.
- Notify regulators if required (e.g., OAIC for privacy breaches).
Escalation Matrix¶
- Critical: Immediate notification to CEO, Board and regulators as required
- High: Notification to CIO/CTO and Risk Committee within 4 hours
- Medium: Notification to Department Head within 24 hours
- Low: Logged and reviewed in weekly operations meeting
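Where escalation is automated (for example, in a ticketing system), the matrix above can be encoded as a lookup table. The roles and time windows below are taken from the matrix; the structure itself is an illustrative sketch:

```python
# Escalation routes keyed by severity, mirroring the Escalation Matrix.
# "within_hours" of 0 means immediate; None means logged and reviewed
# in the weekly operations meeting rather than actively notified.
ESCALATION = {
    "Critical": {"notify": ["CEO", "Board", "Regulators (as required)"], "within_hours": 0},
    "High": {"notify": ["CIO/CTO", "Risk Committee"], "within_hours": 4},
    "Medium": {"notify": ["Department Head"], "within_hours": 24},
    "Low": {"notify": [], "within_hours": None},
}

def escalation_for(severity: str) -> dict:
    """Return who to notify, and how quickly, for a given severity."""
    return ESCALATION[severity]
```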
Alignment with Australian Standards¶
How this form supports the guardrails:
✓ Measure and manage risks — The form provides a structured way to capture and manage risks when they materialise as incidents
✓ Test and monitor — Logging "Time of Discovery" and "Incident Description" supports monitoring practice by creating a feedback loop from real-world operation
✓ Maintain human control — The "Review & Approval" section ensures human oversight is applied to the resolution and closure of AI incidents
✓ Guardrail 7 – Monitoring impacts — Primary tool for capturing adverse impacts (Section 5, "Impact Assessment") on individuals, the organisation, or the public
✓ Guardrail 2 – Risk management — "Immediate Actions Taken" (Section 7) and "Follow-up Actions" (Section 9) support active risk mitigation and control improvement
✓ Guardrail 5 – Human control — The requirement for a "Reviewer/approver" (Section 10) ensures human accountability for incident resolution
✓ Guardrail 9 – Record-keeping — Serves as the formal record for all AI-related incidents, supporting compliance and audit trails
Related Templates¶
Where to go from here:
- 📋 Ready to evaluate AI vendors? → AI Vendor Evaluation Checklist
- 📊 Log ongoing risks: → AI Risk Register
- 🔄 Implement AI safely: → AI Implementation Roadmap
Disclaimer & Licence¶
Disclaimer: This template provides best practice guidance for Australian organisations. SafeAI-Aus has exercised care in preparation but does not guarantee accuracy, reliability, or completeness. Organisations should adapt to their specific context and may wish to seek advice from legal, governance, or compliance professionals before formal adoption.
Licence: Licensed under Creative Commons Attribution 4.0 (CC BY 4.0). You are free to copy, adapt and redistribute with attribution: "Source: SafeAI-Aus (safeaiaus.org)"