Digisoul Artificial Intelligence Management System (AIMS) Manual
This Manual describes how Digisoul establishes, implements, and continuously improves its Artificial Intelligence Management System (AIMS)
in alignment with the principles and requirements of ISO/IEC 42001.
It consolidates our key AI-related policies, roles, and processes into a single,
transparent framework for clients, partners, and auditors.

This Manual is a high-level guide. Detailed procedures, records, and internal tools
are maintained within Digisoul’s internal systems and are available for review
under appropriate confidentiality where relevant.
1. Context & Scope
Digisoul operates as an AI-native digital agency delivering AI-powered marketing,
content, experience, and automation solutions for clients across multiple regions.
Our AIMS focuses on ensuring these solutions are safe, compliant, ethical, and
aligned with our strategic objectives and stakeholder expectations.
- Internal & External Issues: rapid AI evolution, data protection laws,
  platform dependencies, client regulatory constraints, reputational risk.
- Interested Parties: clients, end-users, partners, regulators,
  staff, technology providers.
- AIMS Scope: all AI-related services, components, and processes
  defined in our AI Governance & AIMS Policy.
2. Leadership & Governance
Digisoul’s leadership endorses the AI Governance & AIMS Policy and ensures that
AI use aligns with our values, legal obligations, and client commitments.
- AI Governance Council (AIGC): sets direction, allocates resources,
  reviews risks and performance.
- AIMS Lead: coordinates implementation, monitoring, and reporting.
- Policies: AI Governance & AIMS Policy, AI Data & Vendor Governance Policy,
Privacy Policy, Cookie Policy, Accessibility Statement, Social Responsibility & Ethical AI Statement.
3. Planning & Risk Management
Planning within the AIMS focuses on setting measurable objectives and addressing
AI-specific risks and opportunities.
- AIMS Objectives: trustworthy AI, client protection, regulatory alignment,
  measurable value, continuous improvement.
- Risk Framework: AI Risk & Safety Framework assessing
  privacy, bias, security, misuse, explainability, and business impact.
- Integration: AI risk linked with enterprise risk, security,
  and data-protection processes.
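The risk dimensions above are typically captured in a risk register. The following is a purely illustrative sketch of what one register entry could look like; the class name, field names, and 1–5 scoring scale are assumptions for the example, not Digisoul's actual internal schema.

```python
from dataclasses import dataclass

# Risk dimensions named in the AI Risk & Safety Framework.
DIMENSIONS = ("privacy", "bias", "security", "misuse",
              "explainability", "business_impact")

@dataclass
class RiskEntry:
    """One row in a hypothetical AI risk register (illustrative only)."""
    use_case: str
    scores: dict       # dimension -> 1 (low) .. 5 (high); an assumed scale
    owner: str = "unassigned"

    def overall(self) -> int:
        """Worst-case view: the highest score across all dimensions."""
        return max(self.scores.get(d, 0) for d in DIMENSIONS)

    def needs_treatment(self, threshold: int = 3) -> bool:
        """Flag the entry for a treatment plan at or above the threshold."""
        return self.overall() >= threshold

entry = RiskEntry(
    use_case="Client campaign copy generation",
    scores={"privacy": 2, "bias": 4, "security": 2},
    owner="AIMS Lead",
)
print(entry.overall())          # highest single dimension score -> 4
print(entry.needs_treatment())  # -> True: a treatment plan is required
```

Taking the worst single dimension (rather than an average) is one common conservative choice: a use case that scores high on any one dimension, such as bias, is escalated even if it is otherwise low-risk.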
4. Support (People, Competence, Data & Vendors)
- Competence: mandatory training for relevant staff on
  responsible AI, AIMS processes, privacy, and security.
- Awareness: internal guidance, playbooks, and regular updates
  on AI risks, controls, and best practices.
- Communication: dedicated channels (aims@digisoul.io, privacy@digisoul.io,
  ethics@digisoul.io) for concerns and clarifications.
- Documented Information: maintained in internal repositories, including
  policies, SOPs, templates, risk registers, system inventories, and incident logs.
- Data & Vendor Governance: the AI Data & Vendor Governance Policy defines
  selection criteria, DPAs/SLAs, safeguards, and ongoing reviews.
5. Operation (AI Lifecycle Controls)
Operational controls ensure that all AI systems in scope follow a defined lifecycle.
- Use Case Intake: document purpose, legal basis, data, impact, owner.
- Design & Build: privacy-by-design, security-by-design,
  bias-aware design, vendor alignment.
- Testing: functional, safety, bias, robustness, and compatibility checks.
- Deployment: controlled release with approvals and documentation.
- Monitoring: logging, KPI tracking, incident detection, periodic reviews.
- Change Management & Decommissioning: structured process for
updates and controlled retirement of AI components.
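The intake step at the top of this lifecycle can be illustrated as a structured record. The sketch below is a hypothetical example of how an intake entry might enforce that every required field (purpose, legal basis, data, impact, owner) is completed before review; the names and validation logic are assumptions for illustration, not the actual internal tooling.

```python
from dataclasses import dataclass

@dataclass
class UseCaseIntake:
    """Hypothetical intake record for a new AI use case (illustrative)."""
    purpose: str
    legal_basis: str
    data_categories: list
    impact_summary: str
    owner: str

    def missing_fields(self) -> list:
        """Return the names of any required fields left blank."""
        return [name for name, value in vars(self).items() if not value]

    def ready_for_review(self) -> bool:
        """An intake is only forwarded once every field is completed."""
        return not self.missing_fields()

intake = UseCaseIntake(
    purpose="Automated content tagging",
    legal_basis="Legitimate interest (assumed example)",
    data_categories=["public web content"],
    impact_summary="",           # left blank: intake is not yet complete
    owner="Delivery team",
)
print(intake.missing_fields())   # -> ['impact_summary']
print(intake.ready_for_review()) # -> False
```

A gating check like this reflects the intent of the intake control: a use case cannot proceed to design and build until its purpose, legal basis, data, impact, and owner are all documented.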
6. Performance Evaluation
- Monitoring: track AI performance, reliability, safety flags,
  and user feedback.
- Internal Reviews: scheduled AIMS reviews by the AIGC and AIMS Lead.
- Client & Partner Feedback: incorporated into improvements
  where AI is part of contracted services.
- Audits: internal audits of high-risk use cases, vendors,
  and processes.
7. Improvement & Incident Handling
- Nonconformities: gaps in controls or policy adherence logged
  and corrected.
- AI Incidents: defined, investigated, remediated, and
  escalated via AI governance and security processes.
- Corrective Actions: updates to models, prompts, configurations,
  vendors, training, or policies.
- PDCA Loop: lessons learned feed back into planning and operations.
Annex: ISO/IEC 42001 Alignment Matrix (High-Level)
The matrix below provides an indicative mapping between ISO/IEC 42001 thematic
requirements and Digisoul’s internal framework. It is designed to support
readiness assessments and client due diligence.
| Theme / Clause Group* | Digisoul Policy / Artefact | Key Processes |
|---|---|---|
| Context & Scope | AI Governance & AIMS Policy; AIMS Manual | Scope definition; stakeholder analysis; AI use case register |
| Leadership | AI Governance & AIMS Policy | AIGC charter; AIMS Lead role; policy approvals |
| Planning & Risk | AI Risk & Safety Framework; AI Data & Vendor Governance | Risk assessment; risk register; treatment plans |
| Support | Internal Training Guidelines; Communication Plan | Competence management; awareness; documentation control |
| Data & Vendors | AI Data & Vendor Governance Policy; Privacy Policy | Vendor due diligence; DPAs/SLAs; data classification |
| Operation | AI Governance & AIMS Policy; Solution Playbooks | Use case intake; lifecycle controls; change management |
| Performance | AIMS Manual; Internal Review Records | Monitoring; KPIs; internal audits; management review |
| Improvement | Incident & Nonconformity Procedures | Incident handling; corrective actions; PDCA loop |
*Thematic groups are indicative and do not reproduce the standard’s text.
Detailed clause-level mapping is maintained internally.
Last updated: 8 November 2025