Microsoft Admits Copilot Chat Bug Exposed Confidential Emails in AI Assistant Glitch
Frank Ocansey
Editor, PulseView
Microsoft has acknowledged a technical error in its AI-powered workplace assistant, Microsoft 365 Copilot, that caused some enterprise users to see summaries of confidential emails stored in their Drafts and Sent Items folders in Outlook.
The issue affected Microsoft 365 Copilot Chat, a generative AI tool integrated into workplace applications such as Outlook and Teams. The company confirmed that the bug led to certain emails — including those marked with confidentiality or sensitivity labels — being processed and summarised in ways that were not intended.
Microsoft says it has now deployed a global configuration update to resolve the problem.
What Went Wrong?
In a statement to BBC News, a Microsoft spokesperson said the company had “identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labelled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop.”
Although the surfaced emails were limited to content the affected users were already authorised to access, Microsoft acknowledged that the behaviour did not align with the intended Copilot experience.
“While our access controls and data protection policies remained intact, this behaviour did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access,” the spokesperson added.
The company emphasised that no unauthorised individuals gained access to confidential information as a result of the bug.
First Reported by Tech Media
The issue was initially reported by tech news outlet Bleeping Computer, which cited a Microsoft service alert confirming that confidential emails were being “incorrectly processed” by Copilot Chat.
According to the report, a “work” tab within the AI tool had summarised messages from users’ Drafts and Sent Items folders — even when those emails carried sensitivity labels and were covered by data loss prevention (DLP) policies designed to restrict how such content is handled.
Microsoft is believed to have first become aware of the issue in January. The root cause was later attributed to a “code issue,” according to notices shared on enterprise support dashboards, including one used by NHS IT services in England.
The NHS said that while it may have been affected, patient information was not exposed, and any processed content remained accessible only to the original email authors.
The Growing Risks of Enterprise AI
Microsoft 365 Copilot Chat has been marketed as a secure enterprise-grade AI assistant, offering stricter data protections compared to consumer-facing AI tools. It is available to organisations subscribing to Microsoft 365 services and is designed to help summarise documents, generate responses, and answer workplace-related queries.
However, experts warn that as companies race to deploy generative AI features, errors like this may become increasingly common.
Nader Henein, a data protection and AI governance analyst at Gartner, described such incidents as “unavoidable,” given the rapid rollout of new AI capabilities.
“Under normal circumstances, organisations would simply switch off the feature and wait till governance caught up,” he said. “Unfortunately, the pressure caused by the torrent of AI hype makes that near-impossible.”

Cybersecurity expert Professor Alan Woodward of the University of Surrey echoed those concerns, noting that even with strong security frameworks, fast-paced AI development can introduce unintended vulnerabilities.
“There will inevitably be bugs in these tools, not least as they advance at break-neck speed,” he said. “Even though data leakage may not be intentional, it will happen.”
Balancing Innovation and Security
The incident highlights a broader tension facing technology companies and enterprise customers: how to integrate powerful AI tools into sensitive work environments without compromising privacy and data protection standards.
While Microsoft insists its core access controls remained intact, the episode underscores the importance of privacy-by-default settings and careful governance when deploying AI tools in sectors such as healthcare, finance, and government.
As businesses continue adopting AI assistants to streamline workflows and improve productivity, experts say robust oversight, transparency, and cautious rollout strategies will be essential to prevent similar incidents in the future.
For now, Microsoft maintains that the issue has been resolved — but the incident serves as a reminder that even enterprise-grade AI systems are not immune to mistakes in an era of rapid technological transformation.