

Navigating the Privacy and Security Risks of AI and AGI in the Workplace

This article explores the privacy and security risks of using AI tools like ChatGPT and Copilot in the workplace.

By the Intelligence Desk · 6 min read

- Generative AI tools like ChatGPT and Copilot are becoming common in businesses, but they raise privacy and security concerns.
- These tools can inadvertently expose sensitive data, be targeted by hackers, or be used to monitor employees.
- To mitigate risks, avoid sharing confidential information, use AI as a first draft, validate AI-generated content, and apply the "least privilege" principle.

The Rise of AI in the Workplace and Its Potential Risks

AI tools such as OpenAI's ChatGPT and Microsoft's Copilot are increasingly being integrated into everyday business life. However, as these tools evolve, concerns about privacy and security issues, particularly in the workplace, are growing.

Privacy Concerns with Microsoft's Recall and OpenAI's ChatGPT

In May, privacy advocates raised concerns about Microsoft's new Recall tool, dubbing it a potential "privacy nightmare" due to its ability to take screenshots of a user's laptop every few seconds. The UK's Information Commissioner's Office is seeking more information from Microsoft about the safety of the product, which will soon be launched in its Copilot+ PCs.

Similarly, concerns are mounting over OpenAI's ChatGPT, which has demonstrated screenshotting abilities in its upcoming macOS app. Privacy experts warn that this could result in the capture of sensitive data.

Bans and Warnings

The US House of Representatives has banned the use of Microsoft's Copilot among staff members due to the risk of leaking House data to non-House-approved cloud services. Market analyst Gartner has also cautioned that using Copilot for Microsoft 365 risks exposing sensitive data and content both internally and externally.

The Risks of Inadvertent Data Exposure

One of the biggest challenges for those using generative AI at work is the risk of inadvertently exposing sensitive data. Most generative AI systems are "essentially big sponges," according to Camden Woollven, group head of AI at risk management firm GRC International Group. They absorb vast amounts of information from the internet to train their language models.

The Threat of AI Systems Being Targeted by Hackers

Another concern is the potential for AI systems themselves to be targeted by hackers. Woollven warns that if an attacker managed to gain access to the large language model (LLM) that powers a company's AI tools, they could siphon off sensitive data, plant false or misleading outputs, or use the AI to spread malware.

Proprietary AI Offerings and Their Potential Risks

While consumer-grade AI tools pose obvious risks, a growing number of issues are arising with "proprietary" AI offerings broadly considered safe for work, such as Microsoft Copilot, according to Phil Robinson, principal consultant at security consultancy Prism Infosec.

AI Tools and Employee Monitoring

Another concern is the potential use of AI tools to monitor staff, which could infringe on their privacy. While Microsoft states that snapshots taken by its Recall feature stay locally on a user's PC and are under their control, it's not difficult to imagine this technology being used for employee monitoring, according to Steve Elcock, CEO and founder of software firm Elementsuite.

Mitigating the Risks of AI in the Workplace

To improve privacy and security when using AI tools, businesses and individual employees should avoid putting confidential information into a prompt for a publicly available tool such as ChatGPT or Google's Gemini, according to Lisa Avvocato, vice president of marketing and community at data firm Sama.

Crafting Safe Prompts

When crafting a prompt, be generic to avoid sharing too much. For example, ask, "Write a proposal template for budget expenditure," not "Here is my budget, write a proposal for expenditure on a sensitive project," Avvocato advises.
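The "be generic" advice can also be enforced mechanically. The sketch below, assuming hypothetical patterns and codenames for illustration, redacts known-sensitive strings before a prompt ever reaches a public AI tool:

```python
import re

# Hypothetical patterns for data that should never leave the organisation.
# Real deployments would maintain this list per company policy.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # US SSN-style identifiers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),   # email addresses
    re.compile(r"(?i)\bproject\s+nightingale\b"), # internal codename (made up)
]

def redact(prompt: str, placeholder: str = "[REDACTED]") -> str:
    """Replace matches of known sensitive patterns so the prompt
    sent to a public tool stays generic."""
    for pattern in SENSITIVE_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

safe = redact("Email jane.doe@example.com about Project Nightingale")
```

A filter like this is a backstop, not a substitute for the habit Avvocato describes; pattern lists inevitably miss context-dependent secrets.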

Validating AI-Generated Content

If you use AI for research, avoid issues such as those seen with Google's AI Overviews by validating what it provides. Ask it to provide references and links to its sources. If you ask AI to write code, you still need to review it, rather than assuming it's good to go, Avvocato says.
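The "ask for sources" step can be partly automated. A minimal sketch, assuming a simple URL-based notion of a citation, that rejects AI research answers which cite nothing a human could go on to verify:

```python
import re

URL_RE = re.compile(r"https?://\S+")

def has_citations(answer: str, minimum: int = 1) -> bool:
    """Accept an AI-generated research answer only if it includes at
    least `minimum` source links for a human reviewer to check."""
    return len(URL_RE.findall(answer)) >= minimum
```

This only checks that links are present; a human still has to confirm the sources exist and actually support the claims.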

Applying the "Least Privilege" Principle

Microsoft emphasizes that Copilot needs to be configured correctly and the "least privilege" principle—the concept that users should only have access to the information they need—should be applied. This is a crucial point, according to Prism Infosec's Robinson. Organizations must lay the groundwork for these systems and not just trust the technology and assume everything will be OK. The National Institute of Standards and Technology (NIST) provides detailed guidelines on Zero Trust Architecture, which embodies this principle.
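In code, least privilege for an AI assistant often reduces to filtering what the assistant can retrieve by the requesting user's entitlements. A minimal sketch, with made-up roles and documents rather than any real product's access model:

```python
# Illustrative role assignments and document ACLs -- not from any real system.
USER_ROLES = {
    "alice": {"finance"},
    "bob": {"engineering"},
}

DOCUMENT_ACL = {
    "q3-budget.xlsx": "finance",
    "payroll.csv": "finance",
    "design-doc.md": "engineering",
}

def retrievable(user: str) -> list[str]:
    """Return only the documents this user's roles entitle them to;
    everything else stays invisible to the AI assistant."""
    roles = USER_ROLES.get(user, set())
    return [doc for doc, role in DOCUMENT_ACL.items() if role in roles]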

Assurances from AI Firms

The firms integrating generative AI into their products say they're doing everything they can to protect security and privacy. Microsoft outlines the security and privacy considerations of its Recall product, including the ability to control the feature in Settings > Privacy & security > Recall & snapshots.

Google says generative AI in Workspace "does not change our foundational privacy protections for giving users choice and control over their data," and stipulates that information is not used for advertising.

OpenAI reiterates how it maintains security and privacy in its products, while enterprise versions are available with extra controls. "We want our AI models to learn about the world, not private individuals—and we take steps to protect people's data and privacy," an OpenAI spokesperson tells WIRED.

OpenAI says it offers ways to control how data is used, including self-service tools to access, export, and delete personal information, as well as the ability to opt out of having content used to improve its models. Data and conversations from ChatGPT Team, ChatGPT Enterprise, and its API are not used to train its models, and its models don't learn from usage by default, according to the company.

The Future of AI in the Workplace

As AI and AGI systems become more sophisticated and omnipresent in the workplace, the risks are only going to intensify, according to Woollven. With this in mind, people and businesses need to get into the mindset of treating AI like any other third-party service. "Don't share anything you wouldn't want publicly broadcasted," Woollven advises.
