
28 October 2024 · 8 minute read

Understanding AI Regulations in Japan

Current Status and Future Prospects
Introduction

AI-related technologies are advancing rapidly, and generative AI in particular is gaining significant attention due to its accessibility and diverse applications.

But as the use and scope of AI technologies expand, the risks associated with AI are also becoming more varied and pronounced. In response, many countries are swiftly developing rules to govern the use of AI.

Japan is also moving toward introducing AI-related laws and regulations. In this newsletter, we provide an overview of current AI regulations and expected new rules for AI-related businesses in Japan.

 

Summary

AI has long been regulated by soft law in Japan. On 19 April 2024, the Ministry of Internal Affairs and Communications (MIC) and the Ministry of Economy, Trade and Industry (METI) jointly issued the “AI Guidelines for Business Ver1.0.” The guidelines provide guidance on developing, providing and using AI for all entities involved in these activities, including public organizations such as national and local governments.

Although not mandatory, the guidelines, together with their appendix, suggest specific desirable approaches for AI-related business entities in Japan.

The Japanese government is also discussing establishing enforceable AI laws and regulations. On 16 February 2024, the ruling party's project team disclosed the rough draft of the “Basic Law for the Promotion of Responsible AI” (AI Act).

Although there are no binding laws or regulations governing AI development in Japan, it's possible that regulations will be enacted in the future. AI developers in Japan should continue to closely monitor the legislative landscape.

 

AI Legislation in Japan

Legislative Situation

In Japan, no laws or regulations have yet been enacted that explicitly regulate AI. However, discussions are underway to establish such laws. On 16 February 2024, the ruling party's project team on the “Evolution and Implementation of AI” released the rough draft of the AI Act, which proposes legal governance for frontier AI models.

Frontier AI models are high-performance, general-purpose AI models capable of performing a wide variety of tasks and are as capable as, or more capable than, today's most advanced models.

The AI Act seeks to minimize risks and maximize benefits through appropriate AI governance. Although the discussion is ongoing, the AI Act is expected to include the following provisions:

  • Designation of “Developer of Specific AI Infrastructure Model.” The AI Act will designate AI developers of a certain business size and specific business purpose as a “Developer of Specific AI Infrastructure Model” (Designated Developer). Entities meeting the criteria for a Designated Developer will have to file a notification with the competent authority.
  • Obligation for Designated Developers to establish a system to ensure the safety of AI development. Private business operators that fall under the scope of Designated Developer will have to establish a system to ensure the safety of AI development. Although the details of such obligations are still under discussion, they may include conducting safety verification before developing AI in high-risk areas, sharing risk information with the government, and notifying users of generative AI of certain matters.
  • Obligation to report the status of compliance. It's expected that Designated Developers will have to periodically report their compliance status to the government or a third-party organization.

  • Government review, audits and corrective orders. The government will review reports from Designated Developers. If necessary, the government will audit the Designated Developers, which may include hearing from interested parties. The government is expected to publicize the results of the audits and, in certain cases, order the Designated Developers to take necessary corrective measures. Additionally, the AI Act will allow the government to collect reports and conduct on-site inspections if a Designated Developer fails to comply with its obligations or if an incident occurs.

  • Penalties. While the details of the penalties are yet to be clarified, the government is considering imposing surcharges or penalties for violating obligations and orders under the AI Act.

Guidelines

Previously, there were three guidelines regarding the use of AI: the AI R&D Guidelines for International Discussions, the AI Utilization Guidelines, and the Governance Guidelines for Implementation of AI Principles. On 19 April 2024, MIC and METI integrated and updated these three guidelines, releasing them as the AI Guidelines for Business Ver1.0 (AI Guidelines).

Although the AI Guidelines are non-binding, they aim to establish guiding principles for business operators that promote innovation and the use of AI while reducing its social risks.

The AI Guidelines define three categories of AI-related business entities: “AI Developers,” “AI Providers,” and “AI Business Users,” and set out desirable voluntary actions for each category.

For example, AI Developers, who can directly design and modify AI models, have a significant impact on society. The AI Guidelines state that it's important for AI Developers to assess, to the extent possible, the potential impact of their AI in advance and take measures to address that impact. The AI Guidelines also emphasize the importance of taking safety into account when developing AI and taking measures to prevent bias in training data.

For AI Business Users, the AI Guidelines stress the importance of complying with the usage guidelines set by the AI Providers, using the AI system and service within the intended scope, and using it appropriately with consideration for its safety. It is also important for AI Business Users to be aware of the risk that the input data or prompts may contain bias, and to take measures to prevent inappropriate input of personal information and privacy violations.

The AI Guidelines themselves are fairly abstract, outlining basic principles and approaches for ensuring the safe use of AI while maximizing its benefits. To address this, the appendix to the AI Guidelines provides specific desirable approaches for AI-related business entities. For example, to secure the safe use of AI, the appendix suggests the following specific methods for AI Business Users: periodically confirming that AI is being used within the appropriate scope and methods; updating AI systems to the latest version; regularly inspecting AI and, if necessary, repairing it or requesting repairs from AI Providers; and providing feedback to AI Providers or AI Developers on any incidents.

In addition to the AI Guidelines, there are several guidelines that establish legal interpretations of AI-related matters within the existing legal framework. For example, on 15 March 2024, the Agency for Cultural Affairs released the “General Understanding on AI and Copyright in Japan” to clarify its view on the copyrightability of AI-generated works. The copyrightability of such works is determined on a case-by-case basis, considering factors such as the quantity and content of instructions and inputs (such as prompts), the number of attempts to generate works, and the selection process from among multiple generated works.

Model contracts

METI and the Japan Patent Office have published several model contracts for AI-related transactions. These model contracts are prepared with specific use cases in mind and include many practical tips. So, although they are not binding and need to be modified as necessary for individual situations, they are useful in practice.

  • Contract Guidelines on Utilization of AI and Data (Contract Guidelines). The Contract Guidelines consist of two parts: the data section and the AI section. Contracts relating to the use of data and AI (for example, contracts in which data held by one party is provided to the other, contracts in which a vendor that develops a trained model delivers it to users, or contracts in which a vendor provides AI technology to users for their use) often need to reflect the latest discussions regarding AI and tend to be complex, so contracts drafted solely between the parties often result in incomplete agreements. The Contract Guidelines explain the typical issues in AI-related contracts and set forth a model contract to facilitate reasonable contract negotiations and execution. Although not binding, the Contract Guidelines provide detailed explanations of matters that should be addressed in contracts, so they're useful during contract negotiations.
  • Model Contracts Ver. 2.1 for Promoting Open Innovation (Model Contracts for Open Innovation). When startups engage in joint research and development with large companies, their lack of understanding of the legal aspects of technology transactions often hinders open innovation. The Model Contracts for Open Innovation clarify the key issues in concluding NDAs, PoC agreements, joint development agreements, and terms of use along the timeline of the collaborative R&D process. They also provide concrete measures to deal with typical problems faced by startups.

Future prospects

The AI Act is expected to be submitted to the regular Diet session in 2025. As discussed above, the AI Act could have a significant impact on AI-related businesses. So, it's advisable for AI development companies operating in Japan to continuously monitor the legislative developments.
