
The Cybersecurity Risks of GenAI Tools in the Workplace

May 7, 2025

Understand how generative AI tools create new security, privacy and compliance risks at work—and how to control shadow AI usage.


Generative AI tools have moved from experimentation to everyday business use in a very short time. Employees now paste source code, contract clauses, customer emails and even board papers into AI assistants to get faster answers, drafts and ideas. This productivity boost comes with a cost: a new, largely unmanaged layer of cybersecurity and data protection risk.

IBM’s 2025 Cost of a Data Breach Report notes that around one in five breaches now involves “shadow AI”: unsanctioned or poorly governed AI tools used outside formal IT controls. At the same time, Gartner predicts that by 2030, 40 percent of enterprises will experience a security or compliance incident linked to shadow AI.

This article walks through the main cybersecurity risks of generative AI tools in the workplace and practical steps to manage them without killing innovation.

How Generative AI Introduces New Cyber Risks

Data leakage into external AI services

The most immediate risk is sensitive data leaving your environment:

  • Employees paste personal data, financial models or source code into public AI tools.
  • AI providers may retain prompts and responses for model training, logging or product improvement.
  • Even with contractual restrictions, misconfigurations or future changes on the provider side can expose this data.

Research highlighted by Harmonic Security shows that roughly 8–9 percent of prompts to major GenAI platforms contain sensitive information. When scaled across thousands of employees, that becomes a significant leakage channel.
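
One lightweight control at this layer is to screen prompts for obviously sensitive content before they leave your environment, for example at a proxy or browser extension. The sketch below is illustrative only: the regular-expression patterns are assumptions and are no substitute for proper DLP tooling.

```python
import re

# Illustrative patterns only; a real deployment would rely on dedicated DLP tooling.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def is_prompt_allowed(prompt: str) -> bool:
    """Block the prompt if any sensitive pattern is detected."""
    findings = scan_prompt(prompt)
    if findings:
        print(f"Blocked prompt: contains {', '.join(findings)}")
        return False
    return True

if __name__ == "__main__":
    print(is_prompt_allowed("Summarise this contract for me."))          # True
    print(is_prompt_allowed("My key is AKIA1234567890ABCDEF, fix it."))  # False
```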

Shadow AI and unapproved tools

Shadow AI is the AI-era equivalent of shadow IT: staff using unsanctioned tools and browser plugins to get work done. UK and global studies cited by Gartner and IT Pro indicate that a majority of knowledge workers have used AI tools without formal approval, often for critical tasks.

Risks include:

  • No security review of the provider’s architecture or controls
  • Unknown data retention and jurisdiction
  • No logging or visibility into what is being sent to the tool
  • Fragmented usage that bypasses central governance and risk assessments

Prompt injection and model abuse

The UK’s National Cyber Security Centre (NCSC) warns that prompt injection, in which malicious instructions are crafted to override or subvert an AI model’s behaviour, could drive data breaches on a scale comparable to past waves of injection attacks if the risk is misunderstood.

Examples include:

  • A user opens an internal document that contains hidden instructions telling their AI assistant to exfiltrate data.
  • A third-party system connected via plugins or actions manipulates prompts to gain broader access.
  • Attackers craft emails specifically to trick AI-assisted email triage tools into taking unsafe actions.
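
Robust defence against prompt injection is architectural: limit what the model is allowed to do, and treat retrieved content as data rather than instructions. As a purely illustrative first step, untrusted content can be screened for instruction-like phrases before it reaches the model; the phrase list and the prepare_context helper below are assumptions for illustration, not a reliable defence on their own.

```python
import re

# Illustrative heuristics only; determined attackers can evade simple keyword checks.
SUSPICIOUS_INSTRUCTIONS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard the system prompt",
    r"exfiltrate|send (this|the) (data|file) to",
    r"you are now (in )?developer mode",
]

def flag_untrusted_content(text: str) -> list[str]:
    """Return any instruction-like phrases found in retrieved content."""
    return [p for p in SUSPICIOUS_INSTRUCTIONS if re.search(p, text, re.IGNORECASE)]

def prepare_context(document_text: str) -> str:
    """Quarantine suspicious documents instead of passing them to the model."""
    hits = flag_untrusted_content(document_text)
    if hits:
        raise ValueError(f"Document quarantined for review; matched: {hits}")
    # Clearly mark external content as untrusted data, not as instructions to follow.
    return f"<untrusted_document>\n{document_text}\n</untrusted_document>"
```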

AI tools amplifying existing security weaknesses

Generative AI does not operate in a vacuum. It runs on top of your existing IAM, data classification and access control landscape:

  • If an employee already has excessive permissions, an AI assistant integrated into your stack can expose even more data faster.
  • Inaccurate or incomplete data classification means AI tools cannot reliably enforce “do not show this” rules.
  • Poor logging or monitoring makes it difficult to investigate suspicious AI-mediated actions.

National security agencies such as the NPSA and NCSC emphasise that AI security must be treated as an extension of core cyber hygiene: identity, access, configuration and monitoring.
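
As a simple illustration of that dependency, an internal assistant should reuse the caller’s existing entitlements and each document’s classification label before anything enters the model’s context. The classification levels, roles and helper below are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical classification levels, lowest to highest sensitivity.
CLASSIFICATION_ORDER = ["public", "internal", "confidential", "restricted"]

@dataclass
class User:
    user_id: str
    clearance: str            # highest classification this user may access
    groups: set[str]

@dataclass
class Document:
    doc_id: str
    classification: str
    allowed_groups: set[str]  # mirrors the existing ACL, not a new AI-specific one

def can_include_in_context(user: User, doc: Document) -> bool:
    """The assistant must never see more than the user is already entitled to."""
    within_clearance = (
        CLASSIFICATION_ORDER.index(doc.classification)
        <= CLASSIFICATION_ORDER.index(user.clearance)
    )
    in_acl = bool(user.groups & doc.allowed_groups)
    return within_clearance and in_acl
```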

Regulatory and Compliance Pressure Around GenAI

Regulators are moving quickly to catch up with AI adoption:

  • The UK government’s AI Cyber Security Code of Practice sets out 13 principles for secure AI, covering the full lifecycle from design to deployment.
  • ENISA highlights AI as a defining factor in its 2025 threat landscape, including AI-supported phishing and social engineering.
  • Sectoral regulators in finance, healthcare and critical infrastructure are beginning to explicitly reference AI in their ICT and risk management guidance.

For organisations subject to GDPR, PDPA or other privacy regimes, GenAI raises questions such as:

  • What personal data is being sent to external AI providers?
  • Is there a clear lawful basis and appropriate data processing agreement?
  • Can data subjects exercise their rights if data is stored in opaque AI platforms?

Governance, not prohibition, is becoming the expectation. Boards are increasingly asking CISOs to show how AI usage is inventoried, risk-assessed and controlled.

Building a Practical GenAI Security Strategy

1. Inventory AI usage and shadow AI

You cannot protect what you cannot see. Start by:

  • Using network and CASB tooling to discover which AI services are being accessed from corporate devices.
  • Running short surveys and workshops with key business units to understand how staff are using AI day-to-day.
  • Classifying use cases by risk (for example, “low-risk drafting” versus “high-risk customer data analysis”).

DACTA often pairs this discovery phase with a broader Risk Assessment so AI risk is managed alongside other technology and operational risks rather than in isolation.
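
If you already export web proxy or secure gateway logs, even a short script can give a first-pass view of which GenAI services are in use and by whom. The domain list and log format below are assumptions and would need adapting to your own tooling.

```python
import csv
from collections import Counter

# Illustrative list of GenAI-related domains; extend to match your environment.
GENAI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}

def summarise_ai_usage(proxy_log_csv: str) -> Counter:
    """Count requests to known GenAI domains per user from a proxy log export.

    Assumes a CSV export with 'user' and 'destination_host' columns.
    """
    usage: Counter = Counter()
    with open(proxy_log_csv, newline="") as handle:
        for row in csv.DictReader(handle):
            host = row.get("destination_host", "").lower()
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                usage[(row.get("user", "unknown"), host)] += 1
    return usage
```

Commercial CASB or SSE platforms provide this discovery out of the box; the value of even a rough script is a quick baseline that helps prioritise the risk conversation.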

2. Define guardrails and acceptable use

Next, translate your findings into clear, pragmatic policies:

  • Specify which AI tools are approved for use and for what types of data.
  • Prohibit input of certain categories of information (for example, secrets, credentials, special-category personal data) into external tools.
  • Require the use of enterprise AI offerings with tenant-isolated data where possible.
  • Clarify that employees remain accountable for the outputs they use and must review them for accuracy and bias.

Policies should be short, understandable and accompanied by training. DACTA’s work on Cybersecurity Awareness Training for Professionals shows that scenario-based examples are more effective than generic AI warnings.
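
Where an internal AI gateway or proxy is in place, the same guardrails can also be captured in machine-readable form so they are enforced consistently rather than left to individual judgement. The tool names and data categories in this sketch are placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolPolicy:
    tool_name: str
    approved: bool
    # Data categories that must never be sent to this tool (placeholder values).
    prohibited_data: set[str] = field(default_factory=set)

# Illustrative policy register; in practice this would live in your GRC tooling.
POLICY_REGISTER = {
    "enterprise-assistant": AIToolPolicy(
        "enterprise-assistant", approved=True,
        prohibited_data={"credentials", "special_category_personal_data"},
    ),
    "consumer-chatbot": AIToolPolicy(
        "consumer-chatbot", approved=False,
        prohibited_data={"credentials", "personal_data", "source_code", "financials"},
    ),
}

def is_request_allowed(tool: str, data_categories: set[str]) -> bool:
    """Allow a request only for approved tools and permitted data categories."""
    policy = POLICY_REGISTER.get(tool)
    if policy is None or not policy.approved:
        return False
    return not (data_categories & policy.prohibited_data)
```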

3. Architect secure GenAI integrations

When you integrate AI into your own applications or workflows:

  • Follow secure development and deployment guidance such as the NCSC’s “Guidelines for secure AI system development”.
  • Place AI services behind API gateways with strong authentication, rate limiting and detailed logging.
  • Enforce data minimisation: send only the fields needed for the specific task.
  • Apply output filtering and policy enforcement (for example, stripping secrets before responses reach users).
  • Design explicit mitigations for prompt injection, including content validation and limiting the AI’s ability to perform sensitive actions directly.
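
The sketch below pulls several of these controls together in a minimal gateway-style wrapper: it forwards only the fields a task needs, redacts anything that looks like a secret from the response, and logs the exchange. The call_model function, field names and secret patterns are placeholders for whichever AI service and data you actually use.

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_gateway")

# Illustrative secret patterns for output filtering.
SECRET_PATTERNS = [
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----[\s\S]+?-----END .*?PRIVATE KEY-----"),
]

def call_model(prompt: str) -> str:
    """Placeholder for the approved AI service behind your API gateway."""
    # Replace with a real client call; a canned answer keeps this sketch runnable.
    return "Draft reply for the ticket. (placeholder response)"

def minimise(record: dict, allowed_fields: set[str]) -> dict:
    """Data minimisation: forward only the fields the task actually needs."""
    return {k: v for k, v in record.items() if k in allowed_fields}

def filter_output(text: str) -> str:
    """Redact anything that looks like a secret before it reaches the user."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def handle_request(user_id: str, record: dict, task_prompt: str) -> str:
    payload = minimise(record, allowed_fields={"ticket_id", "summary"})
    logger.info("user=%s fields=%s", user_id, sorted(payload))
    response = call_model(f"{task_prompt}\n\nData: {payload}")
    return filter_output(response)
```

In a real deployment these functions would sit behind the API gateway alongside authentication, rate limiting and full request logging, so every AI-bound call passes through the same controls.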

4. Strengthen monitoring and incident response for AI

AI-related incidents should not sit outside your existing detection and response capabilities:

  • Extend SIEM, UEBA and MDR visibility to AI-related logs (prompt history, access tokens, plugin actions).
  • Add AI-specific scenarios to incident response playbooks: for example, “shadow AI data leak” or “prompt injection in internal assistant”.
  • Test AI-themed incidents as part of your tabletop exercises with legal, HR and communications.
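
As one concrete example of extending detection to AI activity, a simple rule over AI gateway logs can flag unusually large prompts or out-of-hours usage for analyst review. The thresholds and log fields below are assumptions to be tuned against your own baseline.

```python
from datetime import datetime

# Illustrative thresholds; tune these to your own usage baseline.
MAX_PROMPT_CHARS = 20_000
BUSINESS_HOURS = range(7, 20)  # 07:00-19:59 local time

def flag_ai_log_event(event: dict) -> list[str]:
    """Return reasons an AI gateway log event deserves analyst review.

    Assumes events shaped like:
    {"user": "alice", "timestamp": "2025-05-07T02:14:00",
     "prompt_chars": 45000, "tool": "enterprise-assistant"}
    """
    reasons = []
    if event.get("prompt_chars", 0) > MAX_PROMPT_CHARS:
        reasons.append("unusually large prompt (possible bulk data paste)")
    hour = datetime.fromisoformat(event["timestamp"]).hour
    if hour not in BUSINESS_HOURS:
        reasons.append("AI usage outside business hours")
    return reasons
```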

DACTA’s Managed Detection & Response (MDR) and Incident Response services increasingly include AI-centric scenarios, reflecting how quickly this risk surface is expanding.

5. Educate employees to use AI securely

Many AI-related breaches begin with well-intentioned employees trying to save time. Effective awareness programmes should cover:

  • Concrete examples of what not to paste into AI tools
  • How to recognise malicious prompts or AI-mediated phishing
  • The difference between approved enterprise tools and consumer applications
  • When to escalate concerns to security or privacy teams

Rather than telling staff “Do not use AI,” focus on “Here is how to use AI safely and where the red lines are.” Align this with your broader cybersecurity awareness content so AI becomes part of the security culture, not a separate topic.

Conclusion: Enable AI Innovation Without Losing Control

GenAI tools are here to stay. They can accelerate work, improve customer experiences and help security teams themselves become more effective. But unmanaged AI use—especially shadow AI—creates real security, privacy and compliance exposures that adversaries are already learning to exploit.

The organisations that will benefit most from GenAI in the long term are those that treat AI governance as a core element of their cybersecurity strategy. DACTA Global supports clients in building that governance, combining risk assessment, enterprise security architecture and managed security services to keep AI adoption on a secure, compliant path.

Under attack or experiencing a security incident?

If you're experiencing an active security incident and need immediate assistance, contact the DACTA Incident Response Team (IRT) at support@dactaglobal.com.
