Training Course

Artificial Intelligence Act (AI Act) & ISO 42001 AI Management System

ICAP CRIF, in collaboration with the “Hellenic Association of Risk Managers” (www.harima.gr), member of FERMA (Federation of European Risk Management Associations, www.ferma.eu), FECMA (Federation of European Credit Management Associations, www.fecma.eu) & ISSP (International Society of Sustainability Professionals, www.sustainabilityprofessionals.org), with the support of “Academics University of London Worldwide”, presents the seminar “Artificial Intelligence Act (AI Act)”. Upon completion, participants receive a Certificate of Attendance.

1. Introduction to the AI Act Regulation and ISO 42001, the new international standard for Artificial Intelligence Management Systems (AIMS)
2. Analysis of the standard's requirements for the safe, responsible, and trustworthy use of AI in organizations
3. Understanding the governance framework that ensures compliance, transparency, and accountability
4. Practical guidance on creating and implementing an AI Management System
5. The standard's relationship to other regulatory frameworks and laws (AI Act, GDPR, etc.)
6. Implementation examples, best practices, and tools for AI risk assessment
7. Preparing for successful compliance with the Regulation and/or ISO 42001 certification in your organization
Addressed to

1. Business executives who implement or design AI systems
2. Regulatory Compliance Officers, Data Protection Officers, and Risk Managers
3. IT Managers, CTOs, AI Product Owners & ML Engineers
4. Quality Consultants, ISO Consultants, and digital transformation professionals
5. Organizations that wish to be certified to ISO 42001
6. Anyone who wants to understand the new framework for assessing risk, ethics, safety, and regulatory compliance in artificial intelligence
What participants will learn

1. What the AI Act Regulation is, how it is structured, and what requirements it creates
2. How risk is rated under the Regulation
3. Who the main stakeholders of an AI system are, and the role of each
4. What ISO 42001 is and what problems it solves
5. How an AI Management System is structured and what documents/processes are required
6. How AI risks are identified, analyzed, and managed
7. How algorithm transparency, accuracy, and data quality are ensured
8. The different roles and compliance responsibilities of companies depending on whether they use, develop, introduce, or distribute AI systems
9. How continuous improvement and monitoring of AI systems is organized
10. How an organization prepares for successful ISO 42001 certification
11. Practical techniques and templates for policies, assessment reports, and AI impact evaluations
Subject Areas

  1. Basic principles of Artificial Intelligence and modern applications
  2. Challenges, risks & need for responsible use of AI
  3. The need for standards and regulatory compliance
  4. AI Act Regulation (purpose, structure, requirements)
  5. Basic Regulation Definitions
  6. Key stakeholders under the Regulation: Providers, Deployers, Importers, Manufacturers
  7. AI Act Risk Rating
  8. General Purpose AI Model with Systemic Risk
  9. Obligations for Documentation, Transparency, Monitoring, and Risk Assessment
  10. Risk Management during the Selection, Development, Purchase, Implementation, Distribution, and Change of AI Systems
  11. Supervision and Non-Compliance Penalties
  12. AI Act vs. GDPR Requirements
  13. Brief overview of ISO 42001 and the EU AI Act
  1. What is an AI Management System (AIMS)
  2. ISO 42001 Structure & High-Level Structure (HLS) Relationship
  3. Key requirements: policies, roles, responsibilities, governance
  4. Documentation and change management in AI systems
  1. AI Governance principles & ethical AI
  2. Transparency, reliability, and accountability in AI systems
  3. Data, quality, and integrity
  4. Governance Models and AI Ethics Committees
  • Methodologies for identifying and assessing AI risks
  • Operational, technical, ethical & societal risks
  • Tools and techniques for AI Risk Assessments
  • Managing bias, explainability, and robustness
  1. Risk categories: Unacceptable, High-Risk, Limited, Minimal
  2. Systemic Risk
  3. Requirements for high-risk AI systems
  4. Technical & organizational compliance requirements
  5. Preparation for Conformity Assessment

Obligations for Risk Assessment, Transparency, Accountability, Monitoring, and Compliance

  1. How ISO 42001 works as a compliance framework in the AI Act
  2. Mapping AIMS requirements ↔ AI Act obligations
  3. Integration of legal obligations into AIMS procedures
  4. Practical guidelines for organizations
  1. Requirements for data quality
  2. Transparency in the collection, processing and use of data
  3. Dataset documentation & data lineage
  4. Protection of personal data (in relation to GDPR)
  1. Model development, documentation & validation
  2. Monitoring of AI systems and failure detection
  3. Explainability, transparency and evaluability of the models
  4. Continuous improvement and retraining pipelines
  1. AI Policy, AI Risk Policy, Data Governance Policy
  2. Incident management & escalation procedures
  3. Model cards, data sheets, audit logs & technical documentation
  4. Documentation for audits and certification
  1. AIMS Application Steps
  2. Gap analysis, readiness assessment & maturity evaluation
  3. Internal audits and corrective actions
  4. Certification process & best practices

16 Hours Live Online

Course start date 7 May 2026
Attendance Certificate
Subsidized by LAEK
Early Bird: €600
Cost of Attendance: €780

