Key Takeaways
| Point | Summary |
| --- | --- |
| Total controls | 39 controls across 9 areas |
| Control areas | Policies, internal organization, resources, impact assessment, life cycle, data, information, use, third-party relationships |
| Selection method | Risk-based: select controls based on your risk assessment |
| Statement of Applicability | Document which controls apply and justify any exclusions |
| Annex B | Implementation guidance provided for each control |
| Not all required | Apply the controls appropriate to your AI activities and risks |
Quick Answer: ISO 42001 has 39 AI-specific controls in Annex A. You select applicable controls based on your risk assessment and document decisions in a Statement of Applicability. Controls cover the full AI life cycle from policies through data management, system development, and third-party relationships.
Annex A Overview
Control Areas
| Area | Title | Controls |
| --- | --- | --- |
| A.2 | Policies for AI | 2 |
| A.3 | Internal organization | 3 |
| A.4 | Resources for AI systems | 5 |
| A.5 | Assessing impacts of AI systems | 3 |
| A.6 | AI system life cycle | 10 |
| A.7 | Data for AI systems | 5 |
| A.8 | Information for interested parties | 4 |
| A.9 | Use of AI systems | 4 |
| A.10 | Third-party and customer relationships | 3 |
| Total | | 39 |
Control Selection Process
Control Selection Flow
────────────────────────────────────────────────────

1. Conduct AI risk assessment (Clause 6.1)
   └── Identify AI-specific risks

2. Review each Annex A control
   └── For each control, determine:
       ├── Is it applicable to your AI activities?
       ├── Does it address identified risks?
       └── Are there regulatory requirements?

3. Document in the Statement of Applicability
   ├── Applicable controls: how they are implemented
   └── Not applicable: justification for exclusion

4. Implement selected controls
   └── Use Annex B for implementation guidance
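To make step 3 concrete, here is a minimal sketch of how each Annex A decision could be recorded programmatically and rendered as a Statement of Applicability table. The `SoAEntry` dataclass, its field names, and the example justifications are illustrative assumptions; ISO 42001 does not prescribe a format.

```python
from dataclasses import dataclass

@dataclass
class SoAEntry:
    """One row of a Statement of Applicability (illustrative structure)."""
    control_id: str        # Annex A control, e.g. "A.9.5"
    title: str
    applicable: bool
    justification: str     # why the control is included or excluded
    implementation: str    # how it is implemented, or "" if excluded

soa = [
    SoAEntry("A.2.2", "AI policy", True,
             "Foundation of the AIMS", "Published AI policy, reviewed annually"),
    SoAEntry("A.6.2.7", "Retirement of AI system", False,
             "No systems scheduled for decommissioning in scope", ""),
]

# Render a simple markdown SoA table for the audit pack.
print("| Control | Applicable | Justification / Implementation |")
print("| --- | --- | --- |")
for e in soa:
    detail = e.implementation if e.applicable else e.justification
    print(f"| {e.control_id} {e.title} | {'Yes' if e.applicable else 'No'} | {detail} |")
```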
A.2: Policies for AI
Establishing the governance foundation for AI.
A.2.2 AI Policy
| Aspect | Requirement |
| --- | --- |
| Purpose | Document the organization's approach to responsible AI |
| Content | Commitments to responsible AI, compliance, objectives |
| Communication | Must be available to relevant parties |
| Review | Regular review and update |
AI Policy should address:
- Commitment to responsible AI development and use
- Compliance with laws and regulations (including EU AI Act)
- Ethical AI principles
- Framework for setting AI objectives
- Commitment to continual improvement
A.2.3 Responsible AI Topics in AI Policy
| Aspect | Requirement |
| --- | --- |
| Purpose | Address specific responsible AI considerations |
| Topics | Fairness, transparency, accountability, human oversight |
| Scope | Applicable across the AI system life cycle |
Responsible AI topics to include:
- Fairness and non-discrimination
- Transparency and explainability
- Accountability and governance
- Human oversight and control
- Privacy and data protection
- Safety and security
- Societal and environmental impact
A.3: Internal Organization
Organizing for effective AI management.
A.3.2 Roles and Responsibilities for AI
| Aspect | Requirement |
| --- | --- |
| Purpose | Define who is responsible for AI-related activities |
| Scope | All roles involved in AI development, deployment, and use |
| Documentation | Clear definition and communication |
Key roles to define:
- AIMS owner/manager
- AI system owners
- Data stewards
- Model developers
- Operations/deployment teams
- Risk owners
A.3.3 Reporting of AI Concerns
| Aspect | Requirement |
| --- | --- |
| Purpose | Enable personnel to report AI-related issues |
| Mechanism | Clear channel for reporting concerns |
| Protection | No retaliation for good-faith reports |
A.3.4 Impact of Organizational Changes
| Aspect | Requirement |
| --- | --- |
| Purpose | Manage AI implications of organizational changes |
| Scope | Mergers, restructuring, technology changes |
| Activities | Assess AI impact, update the AIMS as needed |
A.4: Resources for AI Systems
Ensuring adequate resources for AI management.
A.4.2 Resources Related to AI Systems
| Aspect | Requirement |
| --- | --- |
| Purpose | Provide the resources needed for AI activities |
| Types | Human, technical, financial, infrastructure |
| Planning | Consider current and future needs |
A.4.3 Competencies Related to AI Systems
| Aspect | Requirement |
| --- | --- |
| Purpose | Ensure personnel have the required AI competencies |
| Activities | Identify competencies, provide training, maintain records |
Competence areas:
- AI/ML technical skills
- Data science capabilities
- Responsible AI practices
- Risk management
- Domain expertise
A.4.4 Awareness of Responsible Use of AI Systems
| Aspect | Requirement |
| --- | --- |
| Purpose | Personnel understand responsible AI expectations |
| Topics | Policies, risks, ethical considerations, reporting |
A.4.5 Consultation
| Aspect | Requirement |
| --- | --- |
| Purpose | Engage stakeholders in AI decisions |
| When | Significant AI decisions affecting others |
A.4.6 Communication About the AI System
| Aspect | Requirement |
| --- | --- |
| Purpose | Communicate AI system information to relevant parties |
| Content | Capabilities, limitations, intended use |
A.5: Assessing Impacts of AI Systems
Understanding AI effects on individuals and society.
A.5.2 AI System Risk Assessment
| Aspect | Requirement |
| --- | --- |
| Purpose | Identify and assess risks from AI systems |
| Scope | Technical, organizational, ethical, societal risks |
| Frequency | Initial assessment and periodic reassessment |
Risk categories to assess:
- Risks to individuals (bias, privacy, safety)
- Risks to organization (liability, reputation, compliance)
- Risks to society (fairness, environmental)
- Technical risks (performance, security, reliability)
A.5.3 AI System Impact Assessment
| Aspect | Requirement |
| --- | --- |
| Purpose | Evaluate impacts on individuals and groups |
| Scope | Affected stakeholders, types of impacts |
| Output | Documented assessment and mitigation measures |
Impact assessment process:

| Step | Activities |
| --- | --- |
| 1. Identify | AI system scope, affected parties |
| 2. Assess positive impacts | Benefits to individuals and society |
| 3. Assess negative impacts | Potential harms, discrimination, privacy |
| 4. Evaluate severity | Likelihood and consequence |
| 5. Determine mitigations | Controls to reduce negative impacts |
| 6. Document | Record the assessment and decisions |
| 7. Review | Periodic reassessment |
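As a sketch of how these steps could be captured consistently across systems, the record below scores each negative impact by likelihood and consequence. The 1-5 scales, field names, and example entry are illustrative assumptions; the standard does not mandate a scoring scheme.

```python
from dataclasses import dataclass

@dataclass
class Impact:
    """One identified impact from an AI system impact assessment (illustrative)."""
    description: str
    affected_parties: list[str]
    likelihood: int          # assumed 1-5 scale
    consequence: int         # assumed 1-5 scale
    mitigation: str

    @property
    def severity(self) -> int:
        # Simple likelihood x consequence score; organizations define their own method.
        return self.likelihood * self.consequence

assessment = [
    Impact("Biased loan-approval outcomes for a protected group",
           affected_parties=["applicants"], likelihood=3, consequence=4,
           mitigation="Fairness testing before release; human review of declines"),
]

# Highest-severity impacts first, ready for step 6 (document) and step 7 (review).
for i in sorted(assessment, key=lambda x: x.severity, reverse=True):
    print(f"severity={i.severity:2d}  {i.description} -> {i.mitigation}")
```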
A.5.4 AI System Impact Assessment Documentation
| Aspect | Requirement |
| --- | --- |
| Purpose | Document impact assessment results |
| Content | Impacts identified, mitigations, residual risks |
| Maintenance | Keep current as AI systems change |
A.6: AI System Life Cycle
Managing AI systems through their entire life cycle.
A.6.2.2 Design and Development of AI System
| Aspect | Requirement |
| --- | --- |
| Purpose | Responsible design and development practices |
| Activities | Requirements, design decisions, development standards |
Design considerations:
- Intended purpose and use cases
- Performance requirements
- Fairness and bias requirements
- Transparency requirements
- Human oversight mechanisms
A.6.2.3 Training and Testing AI Model
| Aspect | Requirement |
| --- | --- |
| Purpose | Ensure AI models are properly trained and tested |
| Activities | Training methodology, testing coverage, validation |
Testing requirements:
- Functional testing
- Performance testing
- Bias and fairness testing
- Security testing
- Robustness testing
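As one concrete example of bias and fairness testing, the sketch below computes a demographic parity difference (the gap in positive-outcome rates between two groups) on a set of predictions. The group labels, toy data, and 0.1 threshold are illustrative assumptions; the right metrics and tolerances depend on the system and applicable law.

```python
def positive_rate(predictions: list[int], groups: list[str], group: str) -> float:
    """Share of positive predictions (1) for members of a given group."""
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected) if selected else 0.0

def demographic_parity_difference(predictions, groups, group_a, group_b) -> float:
    return abs(positive_rate(predictions, groups, group_a)
               - positive_rate(predictions, groups, group_b))

# Toy data, deliberately imbalanced so the check fails (illustrative only).
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

THRESHOLD = 0.1  # assumed tolerance for this example
dpd = demographic_parity_difference(preds, groups, "a", "b")
if dpd > THRESHOLD:
    print(f"FAIL: demographic parity difference {dpd:.2f} exceeds {THRESHOLD}")
else:
    print(f"PASS: demographic parity difference {dpd:.2f}")
```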
A.6.2.4 Verification and Validation of AI System
| Aspect | Requirement |
| --- | --- |
| Purpose | Confirm the AI system meets requirements |
| Activities | Verification (built correctly), validation (the right system) |
A.6.2.5 Deployment of AI System
| Aspect | Requirement |
| --- | --- |
| Purpose | Controlled deployment to production |
| Activities | Deployment planning, rollout, monitoring |
A.6.2.6 Operation and Monitoring of AI System
| Aspect | Requirement |
| --- | --- |
| Purpose | Ensure ongoing AI system performance |
| Activities | Monitoring, performance tracking, issue detection |
Monitoring areas:
- Model performance (accuracy, drift)
- Fairness metrics
- System availability
- Incident detection
- User feedback
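For model performance and drift monitoring, one common technique is the Population Stability Index (PSI), which compares the distribution of a score or feature between a reference window and live traffic. The bucketing scheme and the 0.2 alert threshold below (a widely cited rule of thumb) are illustrative assumptions, not part of the standard.

```python
import math

def psi(expected: list[float], actual: list[float], buckets: int = 10) -> float:
    """Population Stability Index between a reference sample and a live sample."""
    lo, hi = min(expected), max(expected)

    def proportions(values: list[float]) -> list[float]:
        counts = [0] * buckets
        for v in values:
            idx = int((v - lo) / (hi - lo) * buckets) if hi > lo else 0
            counts[max(0, min(idx, buckets - 1))] += 1   # clamp out-of-range live values
        # Replace empty bins with a small count to avoid log(0) / division by zero.
        return [(c if c else 0.5) / len(values) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

reference = [i / 10 for i in range(100)]       # score distribution at validation time
live = [i / 10 + 2.0 for i in range(100)]      # shifted distribution in production

score = psi(reference, live)
print(f"PSI = {score:.3f}", "-> investigate drift" if score > 0.2 else "-> stable")
```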
A.6.2.7 Retirement of AI System
| Aspect | Requirement |
| --- | --- |
| Purpose | Responsible decommissioning |
| Activities | Data handling, transition, documentation |
A.6.2.8 Responsible AI System Integration
| Aspect | Requirement |
| --- | --- |
| Purpose | Integrate AI responsibly into systems |
| Activities | Integration planning, testing, validation |
A.6.2.9 AI System Documentation
| Aspect | Requirement |
| --- | --- |
| Purpose | Maintain comprehensive AI documentation |
| Content | Design, training, testing, deployment details |
Documentation should include:
- System purpose and intended use
- Training data description
- Model architecture
- Testing methodology and results
- Known limitations
- Operating parameters
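These items map naturally onto a lightweight, version-controlled model card. The structure below is an illustrative template with invented example values; neither ISO 42001 nor Annex B defines a required format or these field names.

```python
# Illustrative AI system documentation record (field names and values are assumptions).
model_card = {
    "system": "credit-scoring-v3",
    "purpose": "Rank retail credit applications for manual review",
    "intended_use": ["internal underwriting support"],
    "out_of_scope_use": ["fully automated decline decisions"],
    "training_data": {
        "source": "internal applications 2019-2023",
        "known_gaps": "under-representation of thin-file applicants",
    },
    "architecture": "gradient-boosted trees",
    "evaluation": {"methodology": "hold-out set plus fairness testing per release"},
    "limitations": ["performance degrades for applicants with short credit history"],
    "operating_parameters": {"score_threshold": 0.62, "review_band": [0.55, 0.62]},
}
```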
A.6.2.10 Defined Use and Misuse of AI System
| Aspect | Requirement |
| --- | --- |
| Purpose | Define intended and prohibited uses |
| Communication | Clear to users and stakeholders |
A.6.2.11 Management of Third-Party AI System Components
| Aspect | Requirement |
| --- | --- |
| Purpose | Manage AI components from third parties |
| Activities | Selection, assessment, monitoring |
A.7: Data for AI Systems
Managing data used in AI systems.
A.7.2 Data for Development and Enhancement of AI System
| Aspect | Requirement |
| --- | --- |
| Purpose | Ensure appropriate data for AI development |
| Activities | Data selection, acquisition, preparation |
A.7.3 Data Quality for ML and AI Systems
| Aspect | Requirement |
| --- | --- |
| Purpose | Ensure data quality for AI systems |
| Qualities | Accuracy, completeness, relevance, timeliness |
Data quality dimensions:
- Accuracy: Data correctly represents reality
- Completeness: No critical missing data
- Relevance: Data appropriate for intended use
- Timeliness: Data is current
- Representativeness: Data reflects target population
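These dimensions can be turned into automated checks that run before each training cycle. The sketch below uses pandas with invented column names and thresholds; which checks matter, and at what tolerances, is a risk-based decision for the organization.

```python
import pandas as pd

def data_quality_report(df: pd.DataFrame, max_missing: float = 0.05) -> dict:
    """Basic completeness / timeliness / representativeness checks (illustrative)."""
    report = {}
    # Completeness: share of missing values per column.
    missing = df.isna().mean()
    report["completeness_ok"] = bool((missing <= max_missing).all())
    report["worst_columns"] = missing.sort_values(ascending=False).head(3).to_dict()
    # Timeliness: age of the newest record in days (assumes an 'event_date' column).
    if "event_date" in df.columns:
        latest = pd.to_datetime(df["event_date"]).max()
        report["days_since_latest_record"] = (pd.Timestamp.now() - latest).days
    # Representativeness: group shares for a protected attribute, if present.
    if "group" in df.columns:
        report["group_shares"] = df["group"].value_counts(normalize=True).to_dict()
    return report

df = pd.DataFrame({
    "event_date": ["2024-01-01", "2024-06-01", None],
    "group": ["a", "b", "a"],
    "amount": [100.0, None, 250.0],
})
print(data_quality_report(df))
```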
A.7.4 Data Preparation
| Aspect | Requirement |
| --- | --- |
| Purpose | Properly prepare data for AI use |
| Activities | Cleaning, labeling, transformation |
A.7.5 Data Acquisition and Collection
| Aspect | Requirement |
| --- | --- |
| Purpose | Responsible data sourcing |
| Considerations | Consent, privacy, representativeness |
A.7.6 Data Provenance
| Aspect | Requirement |
| --- | --- |
| Purpose | Track data origin and history |
| Documentation | Source, transformations, lineage |
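Provenance tracking can be as simple as an append-only log of datasets and the transformations applied to them, with each entry hashed so lineage can be verified later. The record structure and field names below are an illustrative sketch, not a prescribed format.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_entry(dataset: str, source: str, transformation: str,
                     parent_hash: str | None = None) -> dict:
    """Create one provenance record; the hash chains entries into a lineage."""
    entry = {
        "dataset": dataset,
        "source": source,
        "transformation": transformation,
        "parent": parent_hash,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry

raw = provenance_entry("applications_raw", "crm_export_2024_06", "initial load")
clean = provenance_entry("applications_clean", "applications_raw",
                         "dropped duplicates; imputed income", parent_hash=raw["hash"])
print(clean["dataset"], "<- parent", clean["parent"][:12])
```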
A.8: Information for Interested Parties
Providing information to stakeholders.
A.8.2 Informing Interested Parties About AI System Interaction
| Aspect | Requirement |
| --- | --- |
| Purpose | Notify people when they are interacting with AI systems |
| Method | Appropriate disclosure to users |
A.8.3 Informing Interested Parties About AI Outcomes
| Aspect | Requirement |
| --- | --- |
| Purpose | Enable understanding of AI decisions |
| Content | Explanation of outcomes, decision factors |
A.8.4 Access to Information About AI System Interaction
| Aspect | Requirement |
| --- | --- |
| Purpose | Provide access to interaction records |
| Scope | As required by law, contract, or policy |
A.8.5 Enabling Appropriate Human Actions in Response to AI Outputs
| Aspect | Requirement |
| --- | --- |
| Purpose | Support human decision-making with AI |
| Content | Information enabling an informed response |
A.9: Use of AI Systems
Controlling how AI systems are used.
A.9.2 Objectives for Responsible Use of AI System
| Aspect | Requirement |
| --- | --- |
| Purpose | Define responsible use expectations |
| Scope | Internal users, customers, third parties |
A.9.3 Intended Use of AI System
| Aspect | Requirement |
| --- | --- |
| Purpose | Ensure use within intended parameters |
| Activities | User guidance, use monitoring |
A.9.4 Processes for Responsible Use of AI System
| Aspect | Requirement |
| --- | --- |
| Purpose | Operational processes for responsible use |
| Activities | Human oversight, review processes |
A.9.5 Human Oversight Aspects
| Aspect | Requirement |
| --- | --- |
| Purpose | Appropriate human control over AI |
| Mechanisms | Override capability, intervention points |
Human oversight levels:
- Human-in-the-loop (a human makes or approves each decision)
- Human-on-the-loop (a human monitors operation and can intervene)
- Human-in-command (a human retains overall control, including whether and how the system is used)
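A human-in-the-loop arrangement, for example, can be enforced in code by routing low-confidence or high-impact outputs to a reviewer before any action is taken. The confidence threshold and routing logic below are illustrative assumptions; real intervention points depend on the system's risk profile.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    prediction: str
    confidence: float

def decide_with_oversight(decision: Decision,
                          human_review: Callable[[Decision], str],
                          threshold: float = 0.9) -> str:
    """Human-in-the-loop gate: low-confidence outputs go to a reviewer."""
    if decision.confidence >= threshold:
        return decision.prediction   # automated path, still logged for on-the-loop review
    return human_review(decision)    # a human decides

# Example: a stub reviewer that always routes the case to manual handling.
result = decide_with_oversight(Decision("approve", 0.72),
                               human_review=lambda d: "manual review")
print(result)  # -> "manual review"
```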
A.10: Third-Party and Customer Relationships
Managing external AI relationships.
A.10.2 Suppliers of AI System Components
| Aspect | Requirement |
| --- | --- |
| Purpose | Manage AI component suppliers |
| Activities | Selection, assessment, monitoring |
Supplier considerations:
- AI component quality and safety
- Supplier certifications (ISO 42001, ISO 27001)
- Contractual requirements
- Ongoing monitoring
A.10.3 Shared ML Models
| Aspect | Requirement |
| --- | --- |
| Purpose | Manage use of shared and pre-trained models |
| Considerations | Model provenance, suitability, limitations |
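Before adopting a shared or pre-trained model, a simple gate is to require that its accompanying model card documents provenance, licence, and known limitations. The required field names and example card below are assumptions for illustration, not a mandated checklist.

```python
# Fields the organization chooses to require before approving a shared model (illustrative).
REQUIRED_FIELDS = {"source", "license", "training_data_summary", "known_limitations"}

def vet_shared_model(card: dict) -> list[str]:
    """Return the required model-card fields that are missing (empty list = acceptable)."""
    return sorted(REQUIRED_FIELDS - card.keys())

vendor_card = {"source": "vendor-x model hub", "license": "proprietary"}
missing = vet_shared_model(vendor_card)
if missing:
    print(f"Do not adopt yet: model card is missing {missing}")
```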
A.10.4 Provision of AI System to Third Parties
| Aspect | Requirement |
| --- | --- |
| Purpose | Responsible provision to customers |
| Activities | Documentation, support, communication |
Control Implementation Priorities
High Priority (Implement First)
| Control | Reason |
| --- | --- |
| A.2.2 AI Policy | Foundation for the AIMS |
| A.5.2 AI System Risk Assessment | Core risk management |
| A.5.3 AI System Impact Assessment | Required for responsible AI |
| A.7.3 Data Quality | Fundamental to AI quality |
| A.9.5 Human Oversight | Critical for safety |
Medium Priority
| Control | Reason |
| --- | --- |
| A.3.2 Roles and Responsibilities | Organizational clarity |
| A.6.2.3 Training and Testing | Quality assurance |
| A.6.2.6 Operation and Monitoring | Ongoing management |
| A.8.3 Informing About Outcomes | Transparency |
Context-Dependent
| Control | When Critical |
| --- | --- |
| A.10.2 Suppliers | Heavy use of third-party AI |
| A.6.2.7 Retirement | Systems being decommissioned |
| A.7.5 Data Acquisition | Active data collection |
Mapping to Other Frameworks
ISO 42001 to ISO 27001
| ISO 42001 Area | ISO 27001 Overlap |
| --- | --- |
| A.2 Policies | 5.1 Policies for information security |
| A.3 Internal organization | 5.2-5.6 Organization |
| A.4 Resources | 7.1-7.3 Support |
| A.6 Life cycle | 8.25 Secure development life cycle |
| A.10 Third parties | 5.19-5.22 Supplier management |
ISO 42001 to EU AI Act
| ISO 42001 Control | EU AI Act Requirement |
| --- | --- |
| A.5.3 Impact assessment | Conformity assessment |
| A.7 Data controls | Data governance |
| A.8 Information | Transparency obligations |
| A.9.5 Human oversight | Human oversight requirements |
| A.6.2.9 Documentation | Technical documentation |
Need help implementing ISO 42001 controls? Talk to our team