Daniel Liang: Enterprises Prepare for AI Management System and Risk Criteria
The development and application of Artificial Intelligence (AI) have garnered significant attention worldwide. Mr. Daniel Liang, General Manager of TCIC Global Certification Ltd. (TCIC), an international certification institution, provided an overview of the global trends in AI governance and highlighted the efforts of the International Organization for Standardization (ISO) in developing AI management systems.
In December 2023, the United Nations’ high-level AI Advisory Body (AIAB) released an interim report titled “Governing AI for Humanity”, with the latest report published in September 2024. Additionally, in November 2021, the United Nations Educational, Scientific and Cultural Organization (UNESCO) established the first global AI ethics standard, applicable to its 194 member states. Member states are also considering how to further advance AI governance through the Global Digital Compact. These initiatives reflect the global emphasis on AI governance.
ISO/IEC 38507 provides a framework for AI governance, while ISO/IEC 42001 focuses on AI management systems. Together, these two standards form the foundation for AI governance and management at the organizational level. AI risk management, a critical component of AI governance, is addressed by ISO/IEC 23894, which offers guidelines for AI risk management, encompassing the processes of risk identification, analysis, assessment, and treatment.
ISO/IEC 42001 delineates requirements for the establishment, implementation, maintenance, and continuous improvement of AI management systems, and is applicable to organizations of all sizes, types, and natures that provide or utilize AI systems. ISO/IEC 22989 defines the AI system lifecycle model, which comprises the phases of conception, design, development, validation, deployment, operation, and decommissioning.
ISO/IEC 22989 also identifies the various stakeholders in the AI system lifecycle, including AI providers, producers, partners, customers, subjects, and relevant authorities. The ISO/IEC TR 24027:2021 and ISO/IEC TS 12791 standards provide guidelines for identifying and addressing bias in AI systems.
In response to the inevitable progression of AI, global regulatory developments have accelerated. These include the EU AI Act, the U.S. Executive Order on AI (EO 14110), and Taiwan’s draft AI Basic Law, all of which adopt a risk-based approach to AI regulation, reflecting global regulators’ concerns about the potential risks associated with AI.
ISO/IEC 42006, currently under development, will specify requirements for bodies that audit and certify AI management systems, establishing the basis for organizations to implement AI management systems and obtain certification. Furthermore, ISO/IEC CD 27090 and ISO/IEC WD 27091.2 provide guidance on the security threats and privacy risks of AI systems, respectively, to assist organizations in identifying and mitigating these risks.
Overall, global trends in AI governance indicate that international organizations, governments, and standardization bodies are actively developing frameworks and standards to ensure the responsible development and use of AI. It is imperative for organizations to remain cognizant of these trends and consider implementing an AI governance system to effectively manage AI-related risks and opportunities.