Microsoft Admits Copilot Chat Bug Exposed Confidential Emails in AI Assistant Glitch
Frank Ocansey
Editor, PulseView
Microsoft has acknowledged a technical error in its AI-powered workplace assistant, Microsoft 365 Copilot, that caused some enterprise users to see summaries of confidential emails stored in their Draft and Sent folders.
The issue affected Microsoft 365 Copilot Chat, a generative AI tool integrated into workplace applications such as Outlook and Teams. The company confirmed that the bug led to certain emails — including those marked with confidentiality or sensitivity labels — being processed and summarised in ways that were not intended.
Microsoft says it has now deployed a global configuration update to resolve the problem.
What Went Wrong?
In a statement to BBC News, a Microsoft spokesperson said the company had “identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labelled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop.”
Although the summaries drew only on emails users were already authorised to access, Microsoft acknowledged that the behaviour did not align with the intended Copilot experience.
“While our access controls and data protection policies remained intact, this behaviour did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access,” the spokesperson added.
The company emphasised that no unauthorised individuals gained access to confidential information as a result of the bug.
First Reported by Tech Media
The issue was initially reported by tech news outlet Bleeping Computer, which cited a Microsoft service alert confirming that confidential emails were being “incorrectly processed” by Copilot Chat.
According to the report, a “work” tab within the AI tool had summarised messages from users’ Draft and Sent folders — even when those emails carried sensitivity labels and data loss prevention (DLP) policies designed to restrict access.
Microsoft is believed to have first become aware of the issue in January. The root cause was later attributed to a “code issue,” according to notices shared on enterprise support dashboards, including one used by NHS IT services in England.
The NHS said that while it may have been affected, patient information was not exposed, and any processed content remained accessible only to the original email authors.
The Growing Risks of Enterprise AI
Microsoft 365 Copilot Chat has been marketed as a secure enterprise-grade AI assistant, offering stricter data protections compared to consumer-facing AI tools. It is available to organisations subscribing to Microsoft 365 services and is designed to help summarise documents, generate responses, and answer workplace-related queries.
However, experts warn that as companies race to deploy generative AI features, errors like this may become increasingly common.
Nader Henein, a data protection and AI governance analyst at Gartner, described such incidents as “unavoidable,” given the rapid rollout of new AI capabilities.
“Under normal circumstances, organisations would simply switch off the feature and wait till governance caught up,” he said. “Unfortunately, the pressure caused by the torrent of AI hype makes that near-impossible.”

Cybersecurity expert Professor Alan Woodward of the University of Surrey echoed those concerns, noting that even with strong security frameworks, fast-paced AI development can introduce unintended vulnerabilities.
“There will inevitably be bugs in these tools, not least as they advance at break-neck speed,” he said. “Even though data leakage may not be intentional, it will happen.”
Balancing Innovation and Security
The incident highlights a broader tension facing technology companies and enterprise customers: how to integrate powerful AI tools into sensitive work environments without compromising privacy and data protection standards.
While Microsoft insists its core access controls remained intact, the episode underscores the importance of privacy-by-default settings and careful governance when deploying AI tools in sectors such as healthcare, finance, and government.
As businesses continue adopting AI assistants to streamline workflows and improve productivity, experts say robust oversight, transparency, and cautious rollout strategies will be essential to prevent similar incidents in the future.
For now, Microsoft maintains that the issue has been resolved — but the episode serves as a reminder that even enterprise-grade AI systems are not immune to mistakes in an era of rapid technological transformation.