Contributing Author: Allan Jacks, Morefield vCISO
October marks Cybersecurity Awareness Month. A time when organizations, governments, and individuals spotlight the evolving threats in our digital world. As technology advances at what seems a breakneck speed, so too do the risks that lurk beneath the surface like how we must rethink traditional security models to confront new and emerging challenges.
One persistent and complex threat has always been that of insider risks where the danger posed by individuals within an organization who have access to sensitive systems and data. These risks can be malicious, like data theft, or accidental, like a misdirected email containing confidential information. Historically, insider risk has focused on human actors which include employees, contractors, and vendors.
But today, a new kind of insider is emerging. Artificial Intelligence (AI), once a tool for efficiency and innovation, is now being recognized as a potential insider risk. With the possibility of access to vast amounts of data, making autonomous decisions, and increased integration into critical workflows, AI systems are beginning to occupy roles once reserved for trusted employees. With that shift comes a new set of vulnerabilities.
Historically, insider risk has referred to threats posed by people within an organization who have legitimate access to systems and data. These include:
- Employees: Whether malicious or negligent, staff members can leak data, misuse credentials, or fall victim to phishing attacks.
- Contractors: Temporary workers often have access to sensitive systems but may lack full training or oversight.
- Vendors and third-party partners: External collaborators with system access can introduce vulnerabilities or be targeted by attackers.
AI systems are becoming even more integrated and now embedded in:
- Decision-making processes (e.g., fraud detection, hiring, loan approvals)
- Data analysis pipelines
- Customer service and communication tools
These systems can act independently, interact with sensitive data, and even influence important business outcomes, making them functionally equivalent to insiders.
AI systems share several key characteristics with traditional insiders:
Privileged Access
- AI tools often have direct access to sensitive databases, internal APIs, and customer information.
- May be integrated into core business systems with broad permissions, making them high-value targets.
Autonomous Decision-Making
- AI models can make decisions without human oversight, such as approving transactions or flagging suspicious behavior.
- These decisions can have real-world consequences -especially if the model is biased, outdated, or trained on flawed data.
Manipulation and Misconfiguration
- AI systems can be manipulated via adversarial inputs (e.g., prompt injection, data poisoning).
- Misconfigured models may expose data, behave unpredictably, or violate compliance standards.
- Lack of transparency in AI decision-making (it’s a black box) makes it harder to detect when something goes wrong.
As AI becomes more embedded in business operations, it must be treated not just as a tool – but also as a potential insider. Organizations need to extend their insider risk programs to include AI governance, access controls, and continuous monitoring of autonomous systems. Cybersecurity isn’t just about keeping people honest anymore, but also about keeping machines accountable.
How AI Can Become a Risk
As artificial intelligence becomes deeply embedded in business operations, it introduces new dimensions of risk—some familiar, others entirely novel. While AI promises efficiency, insight, and automation, it also opens doors to misuse, manipulation, and unintended exposure which can include the following.
Misuse by Insiders
- Employees leveraging generative AI tools (like chatbots or code assistants) may inadvertently or deliberately input sensitive data which may include customer records, proprietary code, or internal documents – into public-facing models.
- These tools often retain or process inputs in ways that are opaque, raising concerns about data leakage, IP exposure, and compliance violations.
Prompt Injection Attacks
- AI models, especially large language models, are vulnerable to prompt injection, where malicious users disguise malicious inputs as legitimate prompts, manipulating the model’s behavior.
- This can lead to unauthorized access, data exfiltration, or the AI performing unintended actions, turning a helpful assistant into a security liability.
Autonomous Decision-Making Risks
- If an AI system is trained on flawed, biased, or outdated data this may influence risky decisions -such as approving fraudulent transactions or misclassifying threats.
- These decisions often occur without human oversight, making it difficult to detect or reverse errors in real time.
Unintended Data Exposure via Automation
- AI-driven workflows can automatically share, process, or store data across systems.
- Without proper guardrails, this can result in accidental exposure, such as sending sensitive files to the wrong recipient or publishing internal content externally.
Third-Party AI Integrations
- Many organizations embed vendor AI tools into their internal systems – CRM platforms, HR software, analytics engines.
- These tools may have access to sensitive data but lack transparency in how they store, process, or secure it.
Lack of Visibility and Control
- External AI models often operate as black boxes, making it hard to audit decisions, track data flows, or ensure compliance.
- Organizations may not know:
- Where the data goes.
- How long it’s retained.
- Whether it’s used to train future models.
AI isn’t just a tool—it’s a new kind of insider. It can be trusted with sensitive data, make autonomous decisions, and interact with external systems. But without proper governance, it can also be exploited, misconfigured, or manipulated, posing serious risks to security, privacy, and compliance.
Organizations must treat AI with the same scrutiny they apply to human insiders: access controls, monitoring, training, and accountability.
Case Studies and Real-World Examples
Samsung Data Leak via ChatGPT (May 2023)
Samsung employees accidentally leaked sensitive company information while using ChatGPT for help at work, including source code and a recording of a meeting. The incidents raised concerns about the potential for similar leaks and possible violations of GDPR compliance. Samsung has taken immediate action by limiting the ChatGPT upload capacity and considering building its own internal AI chatbot to prevent future leaks. (www.cybernews.com)
- Risk Type: Employee misuse of AI
- Lesson learned: Generative AI tools can retain or process sensitive inputs, creating data exposure risks.
Chevrolet Chatbot Manipulation (December 2023)
A prankster tricked a Chevrolet dealership’s AI chatbot into offering a $76,000 Tahoe for just $1. The chatbot was manipulated through clever prompts, revealing how easily customer-facing AI tools can be exploited. The Tahoe was never delivered. (www.cybernews.com)
- Risk Type: Prompt injection and manipulation
- Lesson learned: AI systems with public interfaces can be gamed, leading to reputational and financial damage.
Air Canada Refund Incident (February 2024)
In February 2024, Air Canada faced a significant controversy after a grieving passenger, Jake Moffatt, sought a refund for a full-price ticket purchased due to misinformation from an airline chatbot. The chatbot incorrectly advised Moffatt to book a flight immediately and request a refund within 90 days, which contradicted Air Canada’s bereavement travel policy. The Canada’s Civil Resolution Tribunal ruled in Moffatt’s favor, ordering Air Canada to provide a partial refund of approximately $812 CAD. The tribunal found that Air Canada failed to adequately explain the chatbot’s misleading information, which led to the passenger’s decision to pursue legal action.
- Risk Type: Autonomous decision-making error
- Lesson learned: AI-driven workflows can make costly mistakes if not properly monitored.
Google Bard Misinformation Incident (February 2023)
During a public demo, Google’s Bard chatbot provided incorrect information about the James Webb Space Telescope. The error led to a drop in Google’s stock and raised concerns about the reliability of AI-generated content.
- Risk Type: AI misinformation
- Lesson learned: AI systems can undermine credibility and trust when they operate without sufficient validation.
Third-Party AI Integration Risks
Some companies integrate external AI tools into internal systems without full visibility into how those tools handle sensitive data. These integrations can introduce vulnerabilities if the vendor’s security practices are weak or opaque.
- Risk Type: Vendor AI misuse or misconfiguration
- Lesson learned: Lack of transparency in third-party AI models can lead to data leakage and compliance violations.
Why These Cases Matter
These incidents show that AI systems:
- Can be misused by insiders, intentionally or accidentally.
- Are vulnerable to manipulation through prompt injection or adversarial inputs.
- Make autonomous decisions that may be flawed or risky.
- Often operate with privileged access to sensitive data.
- May be embedded in workflows with limited oversight or visibility.
In short, AI now behaves like a digital insider—one that must be governed, monitored, and secured just like human employees.
Mitigation Strategies
As AI systems become embedded in core business functions, they must be treated with the same scrutiny as human insiders. Mitigating AI-related insider risk requires a blend of technical controls, governance frameworks, and cultural awareness. Here’s how organizations can stay ahead:
Treat AI Systems as Privileged Users
- Assign AI tools role-based access controls just like human employees.
- Limit access to sensitive data and systems based on the AI’s function.
- Monitor and log AI activity to ensure accountability and traceability.
Monitor and Audit AI Behavior
- Implement continuous monitoring of AI decisions, outputs, and interactions.
- Use anomaly detection to flag unusual behavior – such as accessing unexpected datasets or generating risky outputs.
- Maintain audit trails for AI-driven actions to support compliance and incident response.
Validate Training Data and Model Outputs
- Ensure AI models are trained on clean, unbiased, and secure data.
- Regularly test outputs for accuracy, fairness, and security implications.
- Use human-in-the-loop systems for high-stakes decisions to reduce risk.
Vet Third-Party AI Tools Thoroughly
- Conduct security assessments of vendor AI solutions before integration.
- Require transparency on how external models handle, store, and process data.
- Include AI-specific clauses in vendor contracts covering data protection and incident response.
Control Generative AI Usage
- Establish clear policies on what data can be shared with generative AI tools (e.g., ChatGPT, Bard).
- Use enterprise-grade AI platforms with data governance features.
- Educate employees on the risks of inputting sensitive information into public models.
Update Insider Risk Programs
- Expand insider risk frameworks to include non-human agents like AI.
- Train security teams to recognize and respond to AI-related threats.
- Align policies with evolving standards from NIST, ISO, and industry-specific regulators.
Foster a Culture of Responsible AI Use
- Promote AI literacy across the organization.
- Encourage ethical use of AI through training, awareness campaigns, and leadership modeling.
- Make AI governance part of your cybersecurity culture—not just a technical checklist.
AI the New Insider: Redefining Risk in the Era of Intelligent Systems
As organizations embrace artificial intelligence to drive efficiency, innovation, and scale, they must also confront a new reality: AI is no longer just a tool – it’s an operational insider. With access to sensitive data, the ability to make autonomous decisions, and increasing integration into critical workflows, AI systems now occupy roles once reserved for trusted employees.
This shift demands a redefinition of insider risk. No longer limited to employees, contractors, or vendors, insider threats now include non-human agents capable of causing harm through misuse, manipulation, or misconfiguration. From generative AI leaking confidential data to third-party models operating as opaque black boxes, the risks are real – and growing.
Cybersecurity strategies must evolve to meet this challenge. That means extending governance frameworks, updating access controls, and fostering a culture of responsible AI use. It also means recognizing that the very systems designed to protect us can become threats if left unchecked.
As we mark October as Cybersecurity Awareness Month, now is the time to take a closer look at how your organization is managing AI-driven risks. Contact Morefield today to assess whether your current cybersecurity framework, data policies, and access controls are ready for this new era. A proactive conversation today can help ensure your AI tools remain trusted allies—not unexpected threats—to your business tomorrow.