Generative AI offers organisations the opportunity to enhance operational efficiency and foster innovation. However, these advancements introduce a new range of cyber security demands, particularly concerning data security, which are prompting security teams to refocus their attention. Crucially, there is a balance to be struck between identifying and managing the risks of generative AI and utilising its powerful capabilities to streamline existing security processes.
The 2024 Microsoft Data Security Index highlights key statistics and provides actionable takeaways to safeguard the data utilised by generative AI applications. The 2024 index gathers responses from 1,300 security professionals, up from 800 in 2023, uncovering fresh insights into data security and AI practices.
The 2024 Microsoft Data Security Index: Headline Insights
User adoption of generative AI increases the risks surrounding sensitive data.
Of the organisations surveyed, 84 per cent reported seeking greater levels of confidence in managing and discovering data input into AI applications and tools.
While concerned about its implications, decision-makers are, on the whole, optimistic about AI’s future potential to boost their organisations’ data security effectiveness.
The Risks of Complex Data Security Solutions
With complexity comes risk. Fragmented solutions hinder a clear understanding of data security posture, as siloed data and disconnected workflows limit visibility into potential risks. Without seamless integration, security teams are forced to manually correlate data to construct a unified view of threats, which often results in blind spots, making detecting and mitigating risks more difficult.
On average, organisations manage 12 different data security solutions, adding more layers for security teams to parse, analyse, and action. This challenge is especially pronounced in larger organisations, which are more likely to deploy a greater number of tools: medium enterprises use nine tools on average, large enterprises use 11, and extra-large enterprises manage 14.
Microsoft’s data reveals a notable connection between the number of security tools used and the frequency of incidents. In 2024, organisations using 11 or more tools reported an average of 202 security incidents, compared to 139 incidents for those using 10 tools or fewer. And the challenge is felt internally – 21 per cent of decision-makers identify the lack of consolidated and comprehensive visibility caused by disparate tools as their primary security challenge.
Compounding this issue is the rise in security incidents linked to AI applications, which surged from 27 per cent in 2023 to 40 per cent in 2024. These attacks not only compromise sensitive data but also disrupt the functionality of AI systems, which are often built into users’ productivity workflows, exacerbating an already fragmented security environment.
The growing complexity of data security, coupled with the rise of AI-related threats, underscores the urgent need for more integrated and cohesive strategies to address both traditional and emerging risks.

Generative AI and Data Security Risk
According to Microsoft, the adoption of generative AI significantly increases the risk of exposing sensitive data. As AI becomes more ingrained in daily operations, organisations are recognising the need for stronger safeguards. In fact, 96 per cent of surveyed organisations acknowledged some level of concern regarding employee use of generative AI.
Unauthorised AI applications pose a serious threat to security by accessing and misusing data, often through employees logging in with personal credentials or using personal devices for work-related tasks. Alarmingly, 65 per cent of organisations admit that their employees are using unsanctioned AI apps, while 93 per cent of organisations surveyed reported making a proactive effort to manage employee use of these tools.
To address these concerns, organisations must implement robust data security measures to mitigate risks and ensure responsible AI use. Currently, 43 per cent are focused on preventing the upload of sensitive data into AI apps, 42 per cent log all activities and content for incident response and investigations, and 42 per cent block access to unauthorised tools and invest in employee training on secure AI practices.
Effective security controls require organisations to enhance visibility into AI application usage and the data flowing through these tools. Additionally, they need mechanisms to assess the risk levels of emerging generative AI applications and enforce conditional access policies based on user risk profiles.
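As an illustration of what such a conditional access policy might look like in practice, here is a minimal sketch in Python. The risk tiers, thresholds, and decision outcomes are hypothetical and purely illustrative; they do not represent the scoring model of any specific product.

```python
from dataclasses import dataclass

# Hypothetical risk tiers -- illustrative only.
RISK_LEVELS = {"low": 0, "medium": 1, "high": 2}

@dataclass
class AccessRequest:
    user: str
    user_risk: str       # e.g. derived from sign-in anomalies or device health
    app_risk: str        # e.g. the assessed risk level of the generative AI app
    app_sanctioned: bool # whether the app is approved for organisational use

def decide_access(req: AccessRequest) -> str:
    """Return a conditional access decision: 'allow', 'monitor', or 'block'."""
    if not req.app_sanctioned:
        return "block"          # unsanctioned AI apps are denied outright
    combined = RISK_LEVELS[req.user_risk] + RISK_LEVELS[req.app_risk]
    if combined >= 3:
        return "block"          # high combined risk: deny access
    if combined >= 1:
        return "monitor"        # allow, but log activity for investigation
    return "allow"
```

For example, a low-risk user opening a sanctioned low-risk app would be allowed, while any request to an unsanctioned app would be blocked regardless of user risk, mirroring the policy goals described above.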
To maintain and maximise security, organisations need access to detailed audit logs and the ability to generate comprehensive reports, so they can evaluate overall risk, maintain transparency, and ensure compliance with regulatory requirements.
AI’s Potential Security Benefits
Traditional data security measures often struggle to keep pace with the massive volumes of data generated in today’s digital landscape. AI, however, offers a transformative advantage by analysing this data, identifying patterns, and detecting anomalies that could signal security threats.
AI’s capability to analyse vast datasets, detect anomalies, and respond to threats in real-time is proving invaluable, driving optimism and investment in AI-powered data security technologies. These tools are expected to play a critical role in shaping future security strategies.
While not without its risks (which must be understood and mitigated), organisations already leveraging AI in their security operations report tangible benefits, including a reduction in alert volumes. On average, AI-enabled organisations receive 47 alerts per day, compared to 79 alerts for those without AI-based solutions.
Looking ahead, organisations are prioritising ways to streamline the discovery and labelling of sensitive data, enhance the precision of alerts, simplify investigations, and make informed recommendations to secure their environments more effectively. Ultimately, these efforts aim to reduce the frequency of data security incidents and strengthen overall security posture.
Threatscape’s complimentary Microsoft Purview Advisory Service helps you to understand the data security protections available within your Microsoft 365 license. With a no-obligation consultation with one of our award-winning Microsoft security experts, you’ll receive advice and recommendations on the type of data security risks companies face, and insight into how Purview and other capabilities within Microsoft 365 help defend against those risks.