In the ever-evolving corporate landscape, mastery of AI, especially game-changing Generative AI ("GenAI"), has become a non-negotiable skill for professionals and businesses alike. Unlocking its potential can supercharge efficiency and productivity by automating repetitive tasks, freeing up valuable time for high-value activities. This not only drives operational excellence but also indirectly strengthens competitiveness.
"AI" is defined by the Organisation for Economic Co-operation and Development (OECD) as a machine-based system that can influence the environment by producing an output (predictions, recommendations or decisions) for a given set of objectives. GenAI models (e.g. ChatGPT, Bard) are the most commonly used form of AI. Easy-to-use features, free access and a wide range of uses have contributed to the rapid adoption and spread of this technology. GenAI creates responses and provides solutions based on the prompts it receives as instructions and on the data with which it was trained.
While Vietnam currently has no AI-specific regulations, the government has taken a proactive stance. Decision 749/QD-TTg dated 3 June 2020 and Decision 127/QD-TTg dated 26 January 2021, which address national digital transformation and the national AI strategy respectively, clearly acknowledge the need for AI-specific legal frameworks. Vietnam is expected to chart its own AI regulatory course, likely drawing inspiration from established approaches in other countries. In the meantime, general laws apply to AI use and development in Vietnam, with grey areas already promising challenges.
Despite AI's potential benefits, its use does not come without risks. This article provides an overview of the three main risks that GenAI users in Vietnam may encounter, together with the legal mitigating actions that businesses and professionals can implement to minimize those risks and leverage the power of this new technology while safeguarding their interests.
Main risks of Generative AI use in Vietnam
1. Confidentiality Breach
GenAI operates by collecting information from the internet. This information is two-fold: (a) the data on which it is trained (and which it uses to produce outputs), and (b) the prompts it receives and the information contained therein. Most GenAI models are trained on publicly available information. A prompt that includes confidential information (sometimes inadvertently) potentially makes that information widely available: once confidential data has been fed to the GenAI, little can be done to remove it from the machine-learning algorithm. Such confidential information can include a business's trade secrets, personal data, proprietary information, know-how, copyrighted materials, etc.
To mitigate the risk of confidential information disclosure, businesses should develop and implement internal policies guiding employees on what content is permissible in prompts. Training sessions on these internal policies are also essential for users to understand the disclosure risks and the importance of safeguarding confidential information. In addition to guidelines, a repository of template prompts, to be adapted to each use case, could be made available. Lastly, businesses should continuously monitor the development of good practices in AI use to keep their policies and training materials up to date.
2. Inaccuracy
While it has access to a wide range of diverse data, GenAI cannot guarantee the accuracy of its output. Inaccuracies are often due to the model being trained on insufficient, outdated or inaccurate data. In some cases, GenAI has even been found, when faced with missing information, to invent facts or cite documents that do not exist in order to answer a prompt. These non-existent documents appear genuine on their face (with reference numbers, specific details, etc.), but any attempt to locate a copy would prove impossible. GenAI can also disregard relevant information or analysis, producing biased or incomplete results. Blind reliance on an AI output is therefore very risky, as it can be misleading.
To address these risks, companies should emphasize the importance of systematic verification using critical-thinking skills and impose procedural requirements on employees using GenAI at work. At a minimum, users should (a) prompt the GenAI to provide the sources of its answers and (b) manually verify those sources or cross-reference the output against reliable external sources. Training staff will be key to enhancing their AI skills and their capacity to critically review AI outputs.
3. Intellectual Property Violations
The attribution of copyright for AI-generated works is a complex issue in intellectual property law, starting with the determination of the author and rightful copyright owner, a question that has not yet been resolved in Vietnam. The answer differs from one jurisdiction to another: (i) the programmer (who coded the AI) or the AI owner may hold the copyright in all AI outputs; (ii) the prompt may be considered a creative work of the user, without which the AI would not have produced that exact output, so that the user holds the copyright; or even (iii) the AI itself may be the author, which would imply recognizing the AI's independent thinking, a concept that remains theoretical. This divergence makes it difficult to predict the approach that will be adopted in Vietnam.
From a Vietnamese perspective, obtaining copyright in an AI-generated work would be difficult for a user, as in most cases the produced work would not be considered the user's original work. Law No. 50/2005/QH11 dated 29 November 2005 on Intellectual Property and Law No. 07/2022/QH15 dated 16 June 2022 amending certain articles of the Law on Intellectual Property define an author as a person who directly creates the work, and clarify that a person who merely gives instructions, ideas or materials cannot be considered an author or co-author of the work. Hence, an application by an AI user to register copyright in an AI-created output would likely be rejected by the authorities.
In addition to the risk of being denied copyright, users may also violate the copyright of a work's true owner when using a GenAI output. As indicated above, the GenAI may not be able to indicate whether the information included in its output is protected and whether proper attribution is required. This is all the more important as copyright infringement lawsuits against AI owners are multiplying. A mitigating action for businesses and users is to rephrase and redraft GenAI outputs to ensure that the new draft does not include extracts copied verbatim from copyrighted materials without proper attribution. The source verification and accuracy checks described in section 2 above will also shed light on where acknowledgements are required.
Other risks should be considered depending on the intended use of the GenAI output. For example, GenAI-generated content used by businesses to boost their Search Engine Optimization (SEO) could be classified as spam by Google in the absence of a human review guaranteeing its quality and relevance. This would lead to lower SEO rankings and could significantly reduce a website's traffic. AI detectors are also being developed that could expose companies using AI-generated content to attract customers. It is still too early to say whether this would affect customer trust or a company's reputation.
Notwithstanding the non-negligible benefits of being an early adopter of AI, businesses and employees alike should not underestimate the value of human contextual understanding and insight into a company's dynamics and strategic thinking. Businesses should stay informed about AI risks and establish clear in-house guidelines for AI usage in the workplace. They should make it clear that users are responsible for protecting the company's confidential information, thoroughly examining the information provided by the AI, verifying the accuracy of the output, and independently identifying and acknowledging any protected works incorporated into the generated content.
If you have any questions or require additional information, please contact Nguyen Thi Nhat Nguyet or your usual contact at KPMG Law in Vietnam.