
12 September 2024

China releases AI safety governance framework

On September 9, 2024, China’s National Technical Committee 260 on Cybersecurity released the first version of its AI Safety Governance Framework (the Framework), which was formulated to implement the Global AI Governance Initiative. The Framework acknowledges that Artificial Intelligence (AI) is “a new area of human development” that “presents significant opportunities to the world while posing various risks and challenges.” Aimed at addressing the safety, ethics, and social implications of AI, the Framework is designed around a “people-centered approach” and the “principle of developing AI for good.”

The Framework outlines principles for AI safety governance, classifies anticipated risks related to AI, identifies technological measures to mitigate those risks, and provides governance measures and safety guidelines. Our alert provides a detailed summary of the Framework.

Principles and framework for AI safety governance

The Framework prioritizes addressing ethical concerns in AI development, including safety, transparency, and accountability, and seeks to develop AI in ways that are inclusive and equitable, avoiding biased or discriminatory outcomes. It outlines control measures to address different types of AI safety risk through technological and managerial strategies, emphasizing the importance of continuously updating control measures and refining the governance framework.

Some of the identified principles include:

  • Prioritizing the innovative development of AI
  • Establishing governance mechanisms that engage all stakeholders, ensuring coordinated efforts and collaboration
  • Creating whole-process governance that covers all elements of the governance chain
  • Fostering safe, reliable, equitable, and transparent AI, and
  • Protecting the rights and interests of citizens and organizations such that AI technology benefits humanity.

The Framework indicates that those principles will be best served by:

  • Taking an innovative and inclusive approach to ensure safety in AI research, development, and application
  • Identifying and mitigating risks with agile governance
  • Integrating technology and management to prevent and address safety risks throughout AI research, development, and application, and
  • Promoting openness and cooperation on AI safety governance, with the best practices shared worldwide.

Classification of AI safety risks

The Framework focuses on proactively identifying AI safety risks throughout the technology’s development, deployment, and application, and mandates continuous monitoring to spot emerging risks and mitigate them quickly. The Framework classifies AI safety risks into two overarching categories: inherent risks arising from the technology itself, and risks posed by its application. The types of risk identified by the Framework are set out below.

Inherent safety risks

  • Risks from models and algorithms: risks of explainability; bias and discrimination; robustness; stealing and tampering; unreliable output; and adversarial attack
  • Risks from data: risks of illegal collection and use of data; improper content and poisoning in training data; unregulated training data annotation; and data leakage
  • Risks from AI systems: risks of exploitation through defects and backdoors; computing infrastructure security; and supply chain security

Safety risks in AI applications

  • Cyberspace risks: risks of information and content safety; confusing facts, misleading users, and bypassing authentication; information leakage due to improper usage; abuse for cyberattacks; and security flaw transmission caused by model reuse
  • Real-world risks: inducing traditional economic and social security risks; risks of using AI in illegal and criminal activities; and risks of misuse of dual-use items and technologies
  • Cognitive risks: risks of amplifying the effects of "information cocoons"; and risks of usage in launching cognitive warfare
  • Ethical risks: risks of exacerbating social discrimination and prejudice, and widening the intelligence divide; challenging traditional social order; and AI becoming uncontrollable in the future
Unlike the EU AI Act, which divides AI systems into four risk levels (unacceptable, high, limited, and minimal) and imposes level-specific regulatory requirements, the Framework only categorizes AI safety risks (ie, it identifies the areas in which AI systems could trigger risks) and does not grade their level (ie, the severity of the negative impact or consequences that AI systems could cause).

Nevertheless, the EU AI Act's risk level-based model of regulation is briefly mentioned in the Framework as a proposal for future regulation. Under this proposal, only AI systems whose computing and reasoning capacities reach thresholds to be prescribed, or that are applied in industries and sectors to be prescribed, would need to be assessed and approved by the Chinese authorities before being placed on the market. This differs from the current regulatory approach, which requires all AI systems to be assessed and approved and thus (strictly speaking) makes all overseas-hosted AI systems illegal and subject to the risk of being blocked in China.
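
To illustrate the logic of that proposal, the sketch below expresses the either/or trigger in Python. The function name and parameters are our own hypothetical constructs, and the threshold values and prescribed sectors are deliberately left as inputs because they have yet to be prescribed by the regulators.

```python
# Hypothetical sketch of the proposed trigger for pre-market assessment and
# approval. Threshold values and the list of prescribed sectors are left as
# parameters because they are yet to be prescribed.
def needs_premarket_assessment(computing_capacity: float,
                               sector: str,
                               capacity_threshold: float,
                               prescribed_sectors: set[str]) -> bool:
    """An AI system would need assessment and approval if either condition is met."""
    return computing_capacity >= capacity_threshold or sector in prescribed_sectors
```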

Technological measures to address risks

The Framework calls for AI developers, service providers, and system users to take technological measures to address the risks identified above, and proposes technical countermeasures to improve the safety, fairness, reliability, and robustness of AI. These measures focus on enhancing development practices, improving data quality, and conducting more rigorous evaluation to ensure that AI systems perform reliably and safely. The technological measures generally track the identified risks, proposing mitigation techniques for each identified category of risk. The Framework provides a chart linking the risks to the proposed mitigations.

In particular, with regard to training data, the Framework emphasizes that sensitive data in high-risk fields such as nuclear, biological, and chemical weapons and missiles must not be used in training. When using personal data and “important data” (data that may affect national security or public interests and that will be prescribed in catalogues formulated by sector regulators), all existing privacy and data protection laws must be complied with.

Considering that Chinese data protection law is mainly consent-based, requires exhaustive disclosure of processing details (eg, data categories, processing purposes, and third parties involved), and imposes strong restrictions on cross-border data transfers, ensuring training data compliance can be an extremely challenging task for developers. Moreover, much industrial and production data may constitute “important data” and thus must not leave China without government approval. To use such data to train algorithms, developers would have to host the relevant servers within China.
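
To make the data screening step concrete, the sketch below filters a training corpus against the prohibited high-risk fields and against personal data lacking a lawful basis. This is a minimal Python illustration; the record fields, the upstream classifiers they imply, and the function names are our own assumptions, and the Framework does not prescribe any particular implementation.

```python
# Hypothetical pre-training data screening along the lines the Framework
# describes. Field names, category labels, and helper functions are
# illustrative placeholders, not terms defined by the Framework.
from dataclasses import dataclass

# Topics the Framework says must not appear in training data.
PROHIBITED_FIELDS = {"nuclear", "biological", "chemical weapons", "missiles"}

@dataclass
class TrainingRecord:
    text: str
    topic_tags: set[str]          # assumed output of an upstream topic classifier
    contains_personal_data: bool  # assumed output of an upstream PII detector
    consent_obtained: bool        # provenance metadata kept by the developer

def is_usable_for_training(record: TrainingRecord) -> bool:
    """Apply the screening rules to a single record."""
    if record.topic_tags & PROHIBITED_FIELDS:
        return False  # high-risk field content: excluded from training outright
    if record.contains_personal_data and not record.consent_obtained:
        return False  # personal data without a lawful basis: excluded
    return True

def screen_corpus(corpus: list[TrainingRecord]) -> list[TrainingRecord]:
    """Keep only the records that pass screening."""
    return [r for r in corpus if is_usable_for_training(r)]
```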

Comprehensive governance measures

In addition to technological controls, the Framework emphasizes the need for adaptive control measures, recognizing that, as AI technologies evolve, so must the mechanisms that govern them. The Framework makes clear that comprehensive governance measures rely on multiple stakeholders (eg, technology R&D institutions, service providers, users, government authorities, industry associations, and social organizations) to identify, prevent, and respond to risks.

According to the Framework, comprehensive governance mechanisms and regulations should:

  • Implement tiered and category-based management of AI applications based on risk level classification, with higher-risk use cases requiring enhanced control and oversight (see the illustrative sketch after this list)
  • Develop a traceability management system for AI services
  • Improve data security and personal information protection regulations in various stages such as AI training, labeling, utilization, and output
  • Create a responsible AI R&D and application system by proposing instructions and best practices that uphold the people-centered approach and adhere to the principle of developing AI for good, which includes establishing ethical standards, norms, and guidelines to improve the ethical review system
  • Strengthen AI supply chain security by promoting knowledge sharing in AI
  • Advance research on AI explainability, transparency, and trustworthiness
  • Track and analyze security vulnerabilities, defects, risks, threats, and safety incidents related to AI, and establish emergency response mechanisms for such incidents to ensure a rapid and effective response
  • Enhance and promote AI safety education and training, and
  • Establish and improve mechanisms for AI safety education, industry self-regulation, and social supervision.
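
The first measure in the list above contemplates tiered, category-based management, but the Framework does not define the tiers or the controls attached to them. The sketch below is a hypothetical Python illustration of how higher-risk use cases could inherit progressively stronger oversight; the tier names and controls are our own examples.

```python
# Hypothetical tier-to-controls mapping; tier names and controls are
# illustrative examples, not tiers defined by the Framework.
OVERSIGHT_BY_TIER = {
    "low": ["self-assessment", "basic logging"],
    "medium": ["periodic audit", "incident reporting", "traceability records"],
    "high": ["pre-deployment assessment", "continuous risk monitoring",
             "human-in-the-loop review", "regulator notification"],
}

TIER_ORDER = ["low", "medium", "high"]

def required_controls(tier: str) -> list[str]:
    """Higher-risk tiers inherit every control required of the lower tiers."""
    controls: list[str] = []
    for t in TIER_ORDER[: TIER_ORDER.index(tier) + 1]:
        controls.extend(OVERSIGHT_BY_TIER[t])
    return controls

# Example: a high-risk use case attracts the low- and medium-tier controls too.
print(required_controls("high"))
```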

While the Framework is focused on China, it emphasizes the need to align AI governance with global norms and standards. According to the Framework, cross-border collaboration is necessary for addressing global challenges like cybersecurity, ethical usage, and safety.

Safety guidelines for AI development and application

The Framework further outlines safety guidelines for the development and application of AI that are specific to developers, service providers, and types of users. Parties along the value chain are expected to comply with regulatory requirements, including by developing mechanisms for regular compliance checks, audits, and reviews. In addition to these high-level themes that apply across the value chain, the Framework also provides role-specific guidelines.

According to the Framework:

  • Developers should adhere to ethics, strengthen data security and protection, guarantee the security of training environments, assess potential biases, evaluate readiness of products and services, regularly conduct safety and security evaluations, and generate and analyze testing reports.
  • Service providers should publicize information and disclosures related to their AI use, obtain user consent, establish and improve real-time risk monitoring and management systems, report safety and security incidents and vulnerabilities, and assess the impact of AI products on users.
  • Users in key areas (including government departments, critical information infrastructure, and areas directly affecting public safety and people’s health and safety) should assess impacts of applying AI technology, conduct risk assessments, regularly perform system audits, fully understand data processing and privacy protection measures, enhance network and supply chain security, limit data access, and avoid complete reliance on AI for decision making without human intervention.
  • General users should raise their awareness of potential safety risks associated with AI, carefully review all terms of service, enhance awareness of personal information protection, become informed about data processing practices and cybersecurity risks, and be aware of the potential impact of AI products on minors.

As a key policy and cultural consideration reflected in many of the mechanisms mentioned above, AI developers and service providers are encouraged to monitor and screen both the training data and the outputs generated by AI systems, and to put in place effective control measures to prevent information that is inconsistent with China’s core ideology of socialism, or that has public opinion manipulation or negative social mobilization capabilities, from being processed by AI systems. Put simply, companies are encouraged to comply with China’s content monitoring regulations.
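
By way of a purely hypothetical illustration, an output-screening hook of the kind encouraged here might resemble the sketch below. The classifier interface and the restricted-content category labels are placeholders we have assumed; in practice the categories would be driven by the applicable content regulations.

```python
# Hypothetical output-screening hook. The classifier and category labels are
# placeholders; actual categories would be set by the applicable regulations.
from typing import Callable

RESTRICTED_CATEGORIES = {"restricted_political_content", "public_opinion_manipulation"}

def screen_output(text: str, classify: Callable[[str], set[str]]) -> str:
    """Return generated text if it passes screening; otherwise withhold it."""
    if classify(text) & RESTRICTED_CATEGORIES:
        return "[response withheld by content controls]"
    return text
```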

Takeaways

Like other international regulations, the Framework takes a risk-based approach to AI governance and ties the identified risks to specific mitigations, both technological and governance-based. It also stresses the need for ongoing assessments of AI systems to ensure they meet safety standards and do not pose unintended risks. The Framework is part of China’s stated goal of becoming a global leader in AI by 2030, and purports to reflect a balancing act between the desire for innovation and the need to regulate the development and use of AI so that it is safe and ethical.

DLA Piper is here to help

DLA Piper’s team of lawyers and data scientists assist organizations in navigating the complex workings of their AI systems to ensure compliance with current and developing regulatory requirements. We continuously monitor updates and developments arising in AI and its impacts on industry across the world. Moreover, DLA Piper has experience helping insurers navigate the emerging global legal and regulatory landscape, including testing of their AI systems for bias or other harms.

As part of the Financial Times’s 2023 North America Innovative Lawyer awards, DLA Piper was conferred the Innovative Lawyers in Technology award for its AI and Data Analytics practice.

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.

Gain insights and perspectives that will help shape your AI Strategy through our newly released AI ChatRoom series.

For further information or if you have any questions, please contact any of the authors.
