Microsoft Teams New Option Enables Users to Flag Malicious Messages

Original news source: cybersecuritynews.com


Microsoft is significantly expanding the threat detection capabilities within Microsoft Teams by granting Defender for Office 365 Plan 1 users the ability to report suspicious messages directly.

This update, tracked under Roadmap ID 531760, marks a shift in Microsoft’s security strategy by democratizing threat intelligence gathering, a feature previously reserved only for higher-tier Plan 2 subscribers.

According to an update released on February 9, 2026, this rollout addresses the growing need to secure collaboration platforms as aggressively as email environments.

With the lines between internal communication and external threats blurring, empowering end-users to act as the first line of defense has become a critical component of a robust security posture.

Previously, the ability to report messages within Teams, whether in direct chats, channels, or meeting logs, was exclusive to organizations with Defender for Office 365 Plan 2. This left Plan 1 environments relying solely on automated backend protections without the benefit of real-time user feedback.

The new update unifies this experience. Once the rollout is complete in late March 2026, Plan 1 users will be able to tag messages in two distinct categories:

  • Security Risk: For content suspected of being phishing, malware, or spam.
  • Not a Security Risk: For legitimate messages that were incorrectly flagged by automated filters (false positives).

These user-generated signals are vital for security operations centers (SOCs). They provide immediate visibility into potential breaches and help train Microsoft’s detection algorithms to better recognize nuance in conversational attacks, such as business email compromise (BEC) attempts launched via chat.

While this feature enhances security, it requires administrative action to function: Microsoft has emphasized that reporting is an opt-in capability that respects the organization’s existing “User reported” settings.

To prepare for the mid-March launch, security administrators should navigate to the Microsoft Defender portal and enable the “User reported” settings; once these are enabled, the toggles for Teams reporting activate automatically.

Reports submitted by users will subsequently flow into the “User reported” page in the Defender portal or to a specific mailbox configured by the IT team, allowing for centralized triage and investigation.
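For administrators who prefer scripting over the portal, the same configuration can be sketched with the ExchangeOnlineManagement PowerShell module. This is a hedged example, not an official procedure: it assumes the current `Set-ReportSubmissionPolicy` / `Set-ReportSubmissionRule` cmdlet surface applies to the new Teams toggles, and the mailbox address is a placeholder.

```powershell
# Connect to Exchange Online (requires the ExchangeOnlineManagement module
# and an account with sufficient security-admin rights).
Connect-ExchangeOnline

# Enable user reporting of Teams messages and send copies of reports
# to a customized address in addition to the Defender portal.
Set-ReportSubmissionPolicy -Identity DefaultReportSubmissionPolicy `
    -ReportChatMessageEnabled $true `
    -ReportChatMessageToCustomizedAddressEnabled $true

# The destination triage mailbox is set on the submission rule
# (address below is a placeholder for your SOC mailbox).
Set-ReportSubmissionRule -Identity DefaultReportSubmissionRule `
    -SentTo "secops-reports@contoso.com"

# Verify the resulting Teams-reporting configuration.
Get-ReportSubmissionPolicy | Format-List ReportChatMessage*
```

Routing reports to a dedicated mailbox keeps user submissions visible to the SOC even when analysts triage outside the Defender portal.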

This move arrives at a time when attackers are increasingly pivoting away from traditional email phishing toward collaboration tools. Platforms like Teams are often viewed as “trusted spaces” by employees, making them susceptible to social engineering attacks.

By integrating user feedback directly into the Defender detection loop, organizations can respond faster to campaigns that bypass automated filters.

Security teams are advised to update their internal documentation and communicate these changes to staff, ensuring employees know how and when to flag suspicious activity when the feature goes live next month.
