The Rise of Shadow AI in the Enterprise
AI applications have gone from experimental to mainstream in a matter of months. As David noted, “We went from zero to almost 100% of our customers using some kind of AI application somewhere.” ChatGPT, Copilot, and a long tail of other generative AI platforms are being embraced at every level of the business, often without the IT team’s knowledge or approval.
This uncontrolled adoption introduces a new kind of shadow IT: shadow AI. Unlike traditional shadow IT, shadow AI tools can exfiltrate sensitive data through the prompts users send to external servers, raising concerns around regulatory compliance, IP protection, and overall data governance.
Visibility First: Discovering AI in the Wild
Microsoft’s first step in addressing shadow AI is comprehensive discovery. Defender for Cloud Apps can now identify and classify over 1,100 generative AI applications—far more than just the usual suspects like ChatGPT or Gemini.
Key features include:
App discovery dashboards showing usage patterns, top users, and geographic access
Risk scoring for each app based on compliance, security posture, and data handling practices
Compliance flags tied to legal obligations like GDPR
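To get a feel for what this discovery data looks like, a query along these lines can be run in Microsoft Defender advanced hunting. It is a minimal sketch: the DeviceNetworkEvents table is standard, but the short domain list is only illustrative and stands in for the 1,100+ applications the service actually catalogues.

```kql
// Minimal sketch: surface devices talking to a handful of well-known generative AI endpoints.
// The domain list is illustrative only; Defender for Cloud Apps tracks far more applications.
let GenAIDomains = dynamic(["chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"]);
DeviceNetworkEvents
| where Timestamp > ago(30d)
| where RemoteUrl has_any (GenAIDomains)
| summarize Connections = count(), Devices = dcount(DeviceId),
            FirstSeen = min(Timestamp), LastSeen = max(Timestamp) by RemoteUrl
| order by Connections desc
```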
As David explained, “It’s not just about knowing you’re using an app—it’s about understanding the risks that come with it.”
Control Mechanisms: From Warnings to Blocks
Once discovery is in place, organisations can take action. Defender for Cloud Apps provides a spectrum of control options:
Sanction/unsanction tags to allow or block applications
Monitor mode to warn users when they’re about to use unapproved tools
Policy-based automation to scale enforcement without daily manual oversight
This allows teams to take a graduated approach. For example, if ChatGPT is already in use, in-context warnings can guide users toward Microsoft 365 Copilot instead.
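The tagging and monitor-mode controls themselves are configured in the Defender for Cloud Apps portal, but a quick hunting query can help decide where to start. The sketch below, which again assumes the ChatGPT endpoints shown, lists the accounts and devices generating the most traffic to an unapproved tool so those users can be nudged first.

```kql
// Sketch: find the heaviest users of an unapproved AI tool before rolling out warnings.
// The ChatGPT endpoints below are assumptions; substitute the app you are targeting.
DeviceNetworkEvents
| where Timestamp > ago(14d)
| where RemoteUrl has_any ("chat.openai.com", "chatgpt.com")
| summarize Requests = count(), LastSeen = max(Timestamp)
          by InitiatingProcessAccountName, DeviceName
| top 20 by Requests desc
```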
David summed it up perfectly: “Our goal is not to decide for our customers. Our goal is to empower them with the right information.”
Real-Time Detection and Deep Integration
Managing shadow AI is not just about awareness; it is also about prevention. Microsoft has gone further by integrating Defender for Cloud Apps with Microsoft 365 and Defender for Endpoint. This allows for:
Real-time blocking at the device level, regardless of location
Detection of risky behaviours, such as sensitive data queries in Copilot
Custom KQL queries for hunting threats related to specific projects or terms (e.g., Project Obsidian)
In one demonstration, David showed how an attacker could exfiltrate financial data using a single Copilot prompt—and how the system could detect and respond to that behaviour.
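A hunt along the lines David described might look like the sketch below. It assumes that Copilot interactions are audited into the CloudAppEvents table with a CopilotInteraction action type and that prompt context lands in the raw event payload; the column and value names should be verified against your own tenant before relying on the results.

```kql
// Sketch: hunt for Copilot interactions that mention a sensitive project term.
// Assumptions: Copilot audit events appear in CloudAppEvents with ActionType "CopilotInteraction"
// and prompt context is carried in RawEventData. "Project Obsidian" is the session's example term.
CloudAppEvents
| where Timestamp > ago(7d)
| where ActionType == "CopilotInteraction"
| where tostring(RawEventData) contains "Project Obsidian"
| project Timestamp, AccountDisplayName, IPAddress, ActionType
| order by Timestamp desc
```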
What's Next: Evolving to Meet the AI Threat
The landscape is changing rapidly. David shared insights into Microsoft’s roadmap, which includes expanding coverage to:
Custom-built AI apps
Low-code/no-code platforms
More granular prompt analysis and data loss prevention
With dedicated research teams now focusing on AI-specific threats, the Defender ecosystem is evolving in lockstep with enterprise AI adoption.
As Ru put it, “This is no longer about just blocking apps. It’s about giving organisations the visibility, context, and control they need to manage a fundamentally new category of risk.”
Immediate Actions for Cyber Security Teams
If you already have Microsoft Defender for Cloud Apps:
Start with discovery—your AI app usage data is already available
Review incidents and alerts for Copilot and other AI tools (a sample hunting query follows this list)
Configure integration with Defender for Endpoint to enable policy enforcement
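As a starting point for that alert review, a query like this one can be run in advanced hunting. The AlertEvidence and AlertInfo tables are standard, but the application names in the filter are assumptions about how these tools show up in your tenant’s alert evidence.

```kql
// Sketch: recent alerts whose evidence references common AI applications.
// The application names below are illustrative; adjust them to match your tenant.
AlertEvidence
| where Timestamp > ago(30d)
| where Application has_any ("ChatGPT", "OpenAI", "Copilot", "Gemini")
| join kind=inner (AlertInfo) on AlertId
| project Timestamp, Title, Severity, ServiceSource, Application, AccountUpn
| order by Timestamp desc
```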
If you haven’t yet configured Defender for Cloud Apps:
Begin with visibility
Enable Defender for Endpoint integration
Scale from monitoring to policy-driven enforcement
