Understand how generative AI tools create new security, privacy and compliance risks at work—and how to control shadow AI usage.
Image Alt Text: Office worker using a generative AI tool on a computer screen with documents nearby
Generative AI tools have moved from experimentation to everyday business use in a very short time. Employees now paste source code, contract clauses, customer emails and even board papers into AI assistants to get faster answers, drafts and ideas. This productivity boost comes with a cost: a new, largely unmanaged layer of cybersecurity and data protection risk.
IBM’s 2025 Cost of a Data Breach Report notes that around one in five breaches now involve “shadow AI” usage—unsanctioned or poorly governed AI tools used outside formal IT controls. At the same time, Gartner predicts that by 2030, 40 percent of enterprises will experience a security or compliance incident linked to shadow AI.
This article walks through the main cybersecurity risks of generative AI tools in the workplace and practical steps to manage them without killing innovation.
How Generative AI Introduces New Cyber Risks
Data leakage into external AI services
The most immediate risk is sensitive data leaving your environment:
Research highlighted by Harmonic Security shows that roughly 8–9 percent of prompts to major GenAI platforms contain sensitive information. When scaled across thousands of employees, that becomes a significant leakage channel.
Shadow AI and unapproved tools
Shadow AI is the AI-era equivalent of shadow IT: staff using unsanctioned tools and browser plugins to get work done. UK and global studies cited by GartnerIT Pro indicate that a majority of knowledge workers have used AI tools without formal approval, often for critical tasks.
Risks include:
Prompt injection and model abuse
The UK’s National Cyber Security Centre (NCSC) warns that prompt injection, malicious instructions designed to override or subvert an AI model’s behaviour, could drive data breaches at a scale comparable to past waves of injection attacks if misunderstood.
Examples include:
AI tools amplifying existing security weaknesses
Generative AI does not operate in a vacuum. It runs on top of your existing IAM, data classification and access control landscape:
National security agencies such as the NPSA and NCSC emphasise that AI security must be treated as an extension of core cyber hygiene: identity, access, configuration and monitoring.
Regulatory and Compliance Pressure Around GenAI
Regulators are moving quickly to catch up with AI adoption:
For organisations subject to GDPR, PDPA or other privacy regimes, GenAI raises questions such as:
Governance, not prohibition, is becoming the expectation. Boards are increasingly asking CISOs to show how AI usage is inventoried, risk-assessed and controlled.
Building a Practical GenAI Security Strategy
1. Inventory AI usage and shadow AI
You cannot protect what you cannot see. Start by:
DACTA often pairs this discovery phase with a broader Risk Assessment so AI risk is managed alongside other technology and operational risks rather than in isolation.
2. Define guardrails and acceptable use
Next, translate your findings into clear, pragmatic policies:
Policies should be short, understandable and accompanied by training. DACTA’s work on Cybersecurity Awareness Training for Professionals shows that scenario-based examples are more effective than generic AI warnings.
3. Architect secure GenAI integrations
When you integrate AI into your own applications or workflows:
4. Strengthen monitoring and incident response for AI
AI-related incidents should not sit outside your existing detection and response capabilities:
DACTA’s Managed Detection & Response (MDR) and Incident Response services increasingly include AI-centric scenarios, reflecting how quickly this risk surface is expanding.
5. Educate employees to use AI securely
Many AI-related breaches begin with well-intentioned employees trying to save time. Effective awareness programmes should cover:
Rather than telling staff “Do not use AI,” focus on “Here is how to use AI safely and where the red lines are.” Align this with your broader cybersecurity awareness content so AI becomes part of the security culture, not a separate topic.
Conclusion: Enable AI Innovation Without Losing Control
GenAI tools are here to stay. They can accelerate work, improve customer experiences and help security teams themselves become more effective. But unmanaged AI use—especially shadow AI—creates real security, privacy and compliance exposures that adversaries are already learning to exploit.
The organisations that will benefit most from GenAI in the long term are those that treat AI governance as a core element of their cybersecurity strategy. DACTA Global supports clients in building that governance, combining risk assessment, enterprise security architecture and managed security services to keep AI adoption on a secure, compliant path.
If you're experiencing an active security incident and need immediate assistance, contact the DACTA Incident Response Team (IRT) at support@dactaglobal.com.