ISO 42001 became the first international standard for AI management systems in late 2023. Certifications were slow to materialize at first — organizations were still mapping their internal AI governance programs, building documentation, and figuring out what the standard actually demanded in practice.
That has changed. Regulatory pressure from the EU AI Act, customer due diligence requirements, and enterprise procurement checklists have pushed ISO 42001 from "nice to have" to "explain why you don't have it." Audit firms that had been circling the standard are now actively offering certification pathways. The organizations that started their AIMS programs in 2024 are going into their first certification audits now.
And many of them are discovering the same gap.
The compliance program most orgs built is incomplete
The typical ISO 42001 implementation follows a predictable pattern: map internal AI use cases, establish governance committees, document bias assessment procedures, build an incident response playbook, and set up a records management program. All of that is necessary. None of it is sufficient.
The standard has a supplier control that most programs underestimate.
Annex A, Control A.10.3 (Suppliers) requires organizations to assess, evaluate, and continuously monitor third-party AI suppliers. When a supplier changes its model, adjusts training data, or modifies system parameters, the organization must demonstrate that it detected the change, assessed its implications, and maintained control of its AIMS.
Most organizations using third-party AI tools — OpenAI, Anthropic, Google, Microsoft, Cohere, Mistral — have no systematic mechanism for any of this. They're checking a vendor's blog occasionally, hoping someone on the team catches a changelog update, and calling it governance.
That isn't a control. It's a gap. And an auditor who knows the standard will find it.
Thirteen controls that require external vendor intelligence
A.10.3 is the most direct requirement, but it's not the only one. Here is how the gap spreads across the standard when you follow the thread:
The six direct enablers
A.10.3 — Suppliers. Continuous monitoring of AI vendor changes. This is the primary control. If you don't have a documented, systematic process for detecting and responding to vendor model updates, you don't satisfy it.
A.6.2.6 — AI System Operation and Monitoring. Operational monitoring must include upstream vendor changes. You cannot detect downstream drift in a third-party AI system if you don't know the model changed. Vendor monitoring is a prerequisite for operational monitoring to be meaningful.
A.5.2 — AI System Impact Assessment Process. The standard requires a dynamic process that triggers reassessment when conditions change. A vendor model update is a condition change. Your impact assessment process needs a mechanism to detect and respond to vendor-side triggers, not just internal ones.
A.8.2 — System Documentation and User Information. Internal users of AI tools must be informed when those tools change in material ways. This is an internal communication requirement with a direct dependency on external detection. You can't inform users about vendor changes you haven't caught.
Clause 7.3 — Awareness. Requires demonstrable AI awareness across all relevant employees, proportionate to their role. A generic email to the whole company when a vendor ships an update is not proportionate awareness — a legal professional needs different information than an engineer using the same tool.
Clause 7.2 — Competence. Required AI competencies must be defined by role, developed, and documented. Ongoing, role-relevant education about vendor developments is one competency-building mechanism that generates the documented evidence this control asks for.
Seven supporting controls
Beyond the six direct enablers, vendor change intelligence feeds seven additional controls:
- Clause 6.1.2 — AI Risk Assessment. You can't assess risk from changes you don't know about. Risk assessments that omit vendor-side changes are systematically incomplete.
- Clause 6.1.4 — AI System Impact Assessment. Vendor behavior changes have downstream impact implications that belong in your impact assessment documentation.
- Clause 6.3 — Planning of Changes. Vendor AI updates — deprecations, model replacements, capability changes — are external changes affecting your AIMS scope and must be planned for.
- A.5.3 — Documentation of AI Impact Assessment. Impact assessments must reference the specific changes assessed. Timestamped vendor change records are the evidence baseline.
- A.8.4 — Communication of Incidents. Breaking changes and significant vendor updates often qualify as incidents under a well-scoped AIMS. Structured internal communication is required.
- A.6.2.7 — Technical Documentation. When vendor model capabilities change, your technical documentation describing those capabilities becomes stale immediately. Engineering teams need the signal to know what to update.
- A.4.3 — Tooling Resources. Third-party AI tools must be inventoried and subject to oversight. Changes to those tools affect your resource baseline and inventory accuracy.
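Several of these controls (A.5.3 in particular) hinge on timestamped vendor change records as the evidence baseline. A minimal sketch of what such a record might capture is below; the field names and control mappings are illustrative assumptions, not anything the standard prescribes, so adapt them to your own AIMS documentation scheme.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class VendorChangeRecord:
    """Timestamped evidence record for a detected vendor change.

    Illustrative schema only; ISO 42001 does not mandate these fields.
    """
    vendor: str                  # e.g. the AI supplier's name
    source_url: str              # official first-party changelog entry
    summary: str                 # what changed, in one or two sentences
    detected_at: str             # ISO 8601 timestamp of detection
    material: bool               # outcome of the materiality assessment
    controls_triggered: list[str] = field(default_factory=list)  # e.g. ["A.5.2"]
    notified_roles: list[str] = field(default_factory=list)      # who was informed

def new_record(vendor: str, source_url: str, summary: str) -> VendorChangeRecord:
    """Create a record stamped with the current UTC time; materiality
    starts as False until the assessment step runs."""
    return VendorChangeRecord(
        vendor=vendor,
        source_url=source_url,
        summary=summary,
        detected_at=datetime.now(timezone.utc).isoformat(),
        material=False,
    )
```

The point of the structure is traceability: an auditor can walk from a specific vendor change to the detection timestamp, the materiality decision, the controls it triggered, and the roles who were told.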
Why GRC platforms don't solve this
The natural instinct is to add vendor monitoring as a task inside your existing GRC platform. The problem is that GRC platforms don't watch AI changelogs. They help you document processes, track controls, and manage evidence — but they have no mechanism for ingesting a continuous feed of vendor changes, structuring that information by the relevant ISO control, and communicating it to the right people at the right level of detail.
The gap isn't in your GRC workflow. The gap is upstream of it, at the point where external vendor changes enter your compliance program. You need something that monitors the vendors and translates changes into role-appropriate internal communications before your GRC platform can do anything with that information.
That's a different layer of infrastructure. And most organizations haven't built it.
What a mature vendor monitoring program looks like
Organizations that satisfy A.10.3 and the connected controls have a few things in common. They monitor vendor changelogs from official first-party sources — not aggregated news, not social media, but the actual release notes, model cards, and API changelogs. They have a process for assessing whether each change is material to their AIMS scope. And they have a structured way to communicate relevant changes to the people who need to act on them.
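The detection step can be mechanically simple. A minimal sketch of changelog change detection by content fingerprinting is below; the fetch itself (polling official release-notes pages) is assumed and left out, and the function names are hypothetical.

```python
import hashlib

def content_fingerprint(text: str) -> str:
    """Stable fingerprint of a changelog page's text."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def detect_changes(fetched: dict[str, str], seen: dict[str, str]) -> list[str]:
    """Return the vendors whose changelog pages changed since the last run.

    `fetched` maps vendor name -> the latest page text pulled from that
    vendor's official changelog (the HTTP fetch is assumed to happen
    upstream). `seen` maps vendor name -> last recorded fingerprint and
    is updated in place so each run builds on the previous one.
    """
    changed = []
    for vendor, text in fetched.items():
        fp = content_fingerprint(text)
        if seen.get(vendor) != fp:
            changed.append(vendor)
            seen[vendor] = fp
    return changed
```

A fingerprint mismatch only tells you *something* changed; the materiality assessment and the decision about who needs to know remain human (or at least human-reviewed) steps.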
Critically, that communication is role-differentiated. The legal team needs to know if a vendor's updated terms of service affect their data processing agreements. The engineering team needs to know if a model change affects API behavior. The compliance officer needs to know if a change triggers a reassessment under A.5.2. Sending everyone the same changelog excerpt doesn't meet the Clause 7.3 requirement for proportionate, role-appropriate awareness.
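One way to make role differentiation systematic rather than ad hoc is an explicit routing table from change categories to the roles that must be informed. The categories and role names below are illustrative assumptions, not a taxonomy from the standard.

```python
# Hypothetical mapping from change categories to roles that must be
# informed; both sides of the mapping are illustrative, not prescribed.
ROLE_ROUTING = {
    "terms_of_service": ["legal", "compliance"],
    "api_behavior": ["engineering"],
    "model_update": ["engineering", "compliance"],
    "deprecation": ["engineering", "compliance", "legal"],
}

def recipients_for(categories: list[str]) -> set[str]:
    """Union of roles to notify for a change touching several categories.

    Unrecognized categories fall back to compliance as the default
    owner, so no detected change goes unrouted.
    """
    roles: set[str] = set()
    for category in categories:
        roles.update(ROLE_ROUTING.get(category, ["compliance"]))
    return roles
```

Making the routing explicit also produces evidence: the table itself documents who is informed of what, which is exactly the kind of proportionate-awareness artifact Clause 7.3 asks you to demonstrate.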
This is where Changecast fits into the picture. It monitors 22 AI vendors continuously from official sources, generates role-specific briefings for 18 industry groups, and creates a documented record of what was detected, when, and who was informed. For organizations building toward ISO 42001 certification, it functions as the vendor monitoring and awareness layer that the standard requires but compliance programs rarely budget for explicitly. See the full control-by-control mapping.
How ISO 42001 compliance will evolve
The current certification wave is built primarily around organizational readiness — do you have documented policies, governance structures, and assessment procedures? That bar is achievable. The organizations getting certified in 2025 and 2026 have largely met it.
The harder bar is continuous compliance, and that's where vendor monitoring becomes non-negotiable. As AI vendors ship changes faster — and they are — the gap between organizations with systematic vendor monitoring and those without will grow. An audit that takes a point-in-time snapshot of your policies is one thing. Demonstrating that your AIMS has operated continuously between surveillance audits is another.
The organizations that will maintain certification without heroic quarterly scrambles are the ones that treat vendor monitoring as infrastructure, not a manual task. The standard was written with that expectation. It's worth building to it now.
Sources:
ISO/IEC 42001:2023 — Information technology — Artificial intelligence — Management system
ISO/IEC JTC 1/SC 42 — Artificial intelligence