Applying artificial intelligence (AI) at work has become an inevitable trend, improving efficiency and saving time.
Behind that convenience, however, lie real risks to data security, most notably the phenomenon of "Shadow AI": employees using AI tools carelessly and without oversight, inadvertently putting confidential company data onto public platforms.
Dependence and initial concerns
Thanh Huyen (21 years old), a Content Marketing employee at a cosmetics and dietary supplement company, shared that she relies on AI for 90% of her work, from planning and building content to designing images and videos.
But after she came across an article strikingly similar to an idea she had discussed with ChatGPT, Huyen began to worry about AI's ability to remember and share the information she had entered.

Applying AI at work has become almost the default in many industries today (Photo: Bao Ngoc).
While Huyen's story may be a coincidence, the undeniable fact is that feeding information to AI means allowing these tools to collect and store that data, often to train their models.
The problem becomes serious if an AI platform is hacked or poorly secured, leading to information leaks and their fallout.
"Shadow AI" - A Potential Danger
HM (20 years old), a customer service specialist, regularly feeds lists of customer information (full name, date of birth, phone number, purchase history) into AI tools for analysis, saving time and boosting performance.
M. believes this poses no real risk because the company is small and the data will not be disclosed. Yet M.'s case is a textbook example of "Shadow AI": employees using AI without the approval or supervision of the IT or cybersecurity department.
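As a rough illustration of how such analysis could be done with less exposure, the minimal Python sketch below pseudonymizes direct identifiers before a customer record ever leaves the company. The field names and the salt are assumptions for illustration, not any company's actual pipeline.

```python
import hashlib

# Hypothetical sketch: pseudonymize customer records before they are sent
# to any external AI tool. Field names and the salt are illustrative
# assumptions, not part of a real pipeline.
SALT = "rotate-me-and-keep-me-secret"

def pseudonym(value: str) -> str:
    """Replace an identifier with a stable, non-reversible token."""
    digest = hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()
    return f"CUST_{digest[:8]}"

def redact_record(record: dict) -> dict:
    """Strip direct identifiers; keep only fields needed for analysis."""
    return {
        "customer_id": pseudonym(record["full_name"] + record["phone"]),
        # Generalize date of birth to a birth year to reduce re-identification risk.
        "birth_year": record["date_of_birth"][:4],
        "purchase_history": record["purchase_history"],
    }

record = {
    "full_name": "Nguyen Van A",
    "date_of_birth": "2004-05-17",
    "phone": "0901234567",
    "purchase_history": ["serum", "sunscreen"],
}
print(redact_record(record))  # only this redacted version would leave the company
```

Because the tokens are salted hashes, they stay stable across analyses without revealing identities, provided the salt itself never leaves the company's systems.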

The phenomenon of "Shadow AI" refers to employees using AI in the workplace without any control or oversight (Illustration: CV).
A Cisco report found that 62% of organizations in Vietnam are not confident they can detect employees' unregulated use of AI. A UK survey likewise found that 75% of cybersecurity managers are concerned about insider threats such as Shadow AI.
This shows that Shadow AI is becoming a widespread threat, especially as many businesses still have no clear policies on AI use and employees lack awareness of data security.
Accepting trade-offs to avoid being left behind
Despite being aware of the risks, Thanh Huyen still chooses to "bet" on AI. She believes the immediate benefits are simply too great: AI helps her produce dozens of articles and ideas every day.
That convenience, speed, and boost to performance have made Huyen willing to hand sensitive, confidential information to AI, including her superiors' personal information.

Despite the risks of uncontrolled AI use, many employees still accept the trade-off for immediate benefits (Illustration: CV).
Similarly, Trung Hieu (20 years old), a Content Marketing employee, also regularly feeds internal documents and business information to AI chatbots.
Hieu has seen a significant increase in productivity, which he considers a competitive advantage, and believes the practice does not meaningfully affect the company's operations.
These cases show that Shadow AI is spreading because companies lack clear policies and employees underestimate the risks to security, content quality, and dependency.
A series of risks from trusting AI too much
Mr. Nguyen Viet Hung, CEO of an AI application software development company, attributes the spread of Shadow AI to three factors: AI makes work faster and more efficient; the habit of depending on AI forms quickly; and businesses lack warnings and training about the risks.

Mr. Nguyen Viet Hung, CEO of an AI application software development company (Photo: Cong Khanh).
Experts warn that Shadow AI can lead to leaks of customer and internal data when that information is entered into free AI tools.
In addition, AI-generated content often goes unverified, which can introduce bias and distort business decisions.
More seriously, uncontrolled AI use can open security vulnerabilities that IT systems struggle to monitor and respond to promptly, and it blurs accountability when incidents occur.
Blind spots that are hard to control
Monitoring and managing employees' use of AI is a long, coordinated effort. Experts note that because employees already have access to internal data, it is hard to stop them from passing that data through personal AI tools.
Furthermore, AI tools are now so easy to access that IT departments struggle to detect or manage their use.

Easy access to AI tools is one of the barriers to controlling the information employees put on AI platforms (Illustration: CV).
To cope, Mr. Nguyen Viet Hung suggests that businesses step up employee training on the risks of AI use and build a deeper understanding of "Shadow AI".
At the same time, they should promptly issue regulations and internal policies. Another key measure is strengthening security capabilities within the enterprise, including behavioral monitoring, data access control, and granting each employee only the permissions they actually need.
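To make that idea concrete, here is a minimal sketch, in Python, of the kind of egress check an enterprise gateway might apply before a prompt reaches an AI service. The approved host and the PII patterns are illustrative assumptions, not a production ruleset.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of company-approved AI endpoints (assumption).
APPROVED_AI_HOSTS = {"ai.internal.example.com"}

# Very rough PII detectors for demonstration only.
PII_PATTERNS = [
    re.compile(r"\b\d{9,11}\b"),             # phone-number-like digit runs
    re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),    # ISO dates (e.g. birth dates)
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
]

def check_outbound(url: str, prompt: str) -> tuple[bool, str]:
    """Allow a request only if the destination is approved and no PII is found."""
    host = urlparse(url).hostname or ""
    if host not in APPROVED_AI_HOSTS:
        return False, f"blocked: {host} is not an approved AI tool"
    for pattern in PII_PATTERNS:
        if pattern.search(prompt):
            return False, "blocked: prompt appears to contain personal data"
    return True, "allowed"

print(check_outbound("https://chat.example.org/api", "summarize Q3 report"))
print(check_outbound("https://ai.internal.example.com/v1", "DOB 2004-05-17"))
```

In practice a check like this would sit alongside, not replace, proper data loss prevention tooling; the point is simply that both the destination and the content of a request can be inspected before anything leaves the network.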
Currently, many companies still have no clear process for using AI and simply encourage its use to boost productivity. To counter Shadow AI, many large enterprises have begun deploying internal AI platforms or requiring employees to use approved tools governed by clear information security policies.
Source: https://dantri.com.vn/cong-nghe/sep-buong-long-nhan-vien-than-nhien-cap-du-lieu-mat-cho-ai-20250806090132034.htm