Co-authored by Nguyen Thai Cuong (Academic Advisor) and Alexandre Ho Thanh (Head of Advisory Services, RBA Group)
Abstract
This article comments on and analyzes the interaction between the Law on Digital Technology Industry No. 71/2025/QH15 (DTI Law), effective on 01 June 2026, and the emerging legal policy framework governing artificial intelligence (AI) in Viet Nam. The DTI Law is primarily designed as an economic and industrial policy instrument, focusing on digital infrastructure, support for foundational technologies (including semiconductors, cloud computing, and telecommunications), and the provision of fiscal and non-fiscal incentives to accelerate innovation. Through these mechanisms, the DTI Law supplements and institutionalizes State policies to promote the digital technology industry as a foundational sector, thereby contributing to the acceleration of the country's industrialization and modernization.
Additionally, to address emerging challenges, technology-specific risks, and urgent issues relating to governance, ethics, and safety that are not fully covered by the existing legal framework under the DTI Law, the Law on Artificial Intelligence No. 134/2025/QH15 (AI Law), which will take effect on 01 March 2026, introduces a more detailed regulatory regime for AI. The AI Law is intended to replace certain AI-related provisions previously stipulated in the DTI Law and is expected to operate as a flexible and adaptive legal framework capable of responding to rapid technological developments.
In this context, the two newly enacted laws play a crucial role in regulating issues related to the development of science and technology and in establishing a legal framework for practical applications. The DTI Law governs the development of the digital technology industry, the semiconductor industry, artificial intelligence (prior to the effective date of the AI Law), and digital assets, as well as the rights and responsibilities of relevant agencies, organizations, and individuals. Meanwhile, the AI Law provides a more comprehensive regulatory framework for artificial intelligence, covering the research, development, provision, deployment, and use of AI systems; the rights and obligations of relevant organizations and individuals; and the State management of AI activities in Viet Nam.
Introduction: Establishing a comprehensive legal system to regulate issues related to industry, digital technology, and AI
Viet Nam is developing a comprehensive legal framework for the digital economy. Within this framework, the DTI Law serves as a cornerstone instrument aimed at macroeconomic incentive policies and infrastructure development. In parallel, AI presents novel regulatory challenges that exceed the scope of traditional technology legislation, necessitating the development of dedicated regulatory principles, potentially through a standalone AI law or sector-specific decrees.
This article examines how the DTI Law and the emerging AI policy directions interact, complement one another, and collectively shape the regulatory environment governing the AI ecosystem in Viet Nam.
Analysis of the Complementary Roles of the DTI Law and AI-Specific Regulation
The DTI Law: Infrastructure and Incentive-Driven Enablement
The DTI Law's primary function is to create a favorable material and economic environment. To that end, it defines key terminology for the sector and establishes the legal framework for the development of digital technologies and industry (DTI), the development of DTI enterprises, controlled testing (sandbox) mechanisms based on experience from developing countries, the semiconductor industry, and State management. Through the DTI Law, the State also provides incentives focusing on investment, digital infrastructure, and human resources, such as:
- Investment incentives: A range of incentive mechanisms, including tax preferences, land-use rent reductions, preferential credit, and other support for the research, testing, development, production, and application of digital technology products and services.
- Digital infrastructure: Laying the groundwork for foundational infrastructure for the digital technology industry, including investment in research and development, technology transfer, and the establishment of shared digital technology infrastructure at the national and regional levels. Digital data is recognized as a strategic production resource underpinning innovation and industry growth.
- Human resources development: Emphasizing the development of digital technology human resources through education and training systems, together with special and preferential mechanisms to attract, utilize, and retain high-quality talent. The Law also facilitates innovation by introducing liability exemptions for participants in controlled testing (sandbox) mechanisms for digital technology products and services.
Commentary: While the DTI Law effectively addresses the question of how digital technologies should be developed and scaled, it does not comprehensively regulate how such technologies should be used, particularly in relation to the societal, ethical, and legal risks posed by AI systems.
AI Legal Policy Directions: Risk, Responsibility, and Ethics
The AI Law is expected to address AI-specific regulatory issues and, together with the DTI Law, to contribute to the establishment of a comprehensive legal framework. The AI Law provides key definitions relating to artificial intelligence, AI systems, relevant parties, and serious incidents. One of its primary principles is the adoption of a human-centered approach, emphasizing the protection of human rights, the objective of serving human interests, and the principle that AI must not replace human authority or responsibility. In addition, the AI Law classifies AI systems into three risk levels (low, medium, and high), thereby enabling differentiated regulatory treatment based on risk.
To align the legal framework with AI development, the AI Law promotes the national AI strategy, the AI ecosystem and market, and establishes an AI regulatory sandbox, while requiring compliance with ethical and responsibility standards. It also provides support for AI startups and small and medium-sized enterprises, with details to be specified by the Government.
Commentary: The legislative approach emphasizes that AI is intended to serve humans and must not replace human authority. AI risk classification is to be conducted by providers prior to deployment, in a transparent and responsible manner as stipulated by law.
Legal Challenges at the Intersection of the Two Instruments
The interaction between these two instruments raises three major challenges.
Responsibility for Training Data and Text-and-Data Mining
The DTI Law encourages AI development, which in practice requires massive volumes of training data, while copyright and text-and-data mining (TDM) are governed by intellectual property law.
Challenge: Incorporating guiding principles or cross-references that permit TDM for non-commercial research purposes, while supporting licensing mechanisms for commercial TDM, so as not to create barriers to the AI development that the DTI Law is intended to promote.
A Shared Sandbox Mechanism
Sandbox mechanisms are essential for disruptive technologies.
Challenge: The sandbox established under the DTI Law should operate as a general mechanism. However, an AI sandbox requires specific criteria focused on algorithmic risks (for example, testing model fairness). It is necessary to clearly define which authority is competent to supervise each AI domain within the sandbox (e.g., the Ministry of Information and Communications, the State Bank of Viet Nam, the Ministry of Health, etc.) in order to avoid overlaps.
Risk-Based AI Classification
International practice increasingly relies on risk-based AI governance models (for example, the EU model distinguishes minimal risk, high risk, and unacceptable risk).
Challenge: The DTI Law should provide the foundation for this classification, while the AI Law should operationalize legal requirements for each risk level. For instance, high-risk AI should be subject to strict requirements concerning training-data retention, activity logging, and transparency reporting.
Recommendations to Improve Viet Nam’s AI Legal Framework
- Regulatory harmonization and exceptions: Ensure that key definitions are consistent across instruments and introduce targeted TDM exceptions linked to DTI Law objectives.
- Centralized AI supervisory authority: Establish a dedicated body or an inter-ministerial committee to coordinate and oversee ethical compliance, liability frameworks, and sandbox implementation.
- Principle-based regulation: Prioritize enduring regulatory principles such as transparency, explainability, and fairness over technology-specific rules to ensure long-term adaptability.
Conclusion
The DTI Law and the emerging AI regulatory instruments form two interdependent pillars of Viet Nam’s digital legal framework. The DTI Law delivers economic incentives and infrastructure, while AI-specific regulation addresses conduct-related, ethical, and liability concerns. Synchronizing the development of these instruments is critical to ensuring regulatory certainty, technological flexibility, and the establishment of a safe, trustworthy, and sustainable AI ecosystem in Viet Nam.