What if your team’s go-to productivity tool is also one of your biggest security and privacy blind spots? Artificial Intelligence (AI) is here to stay and is rapidly reshaping the business landscape, with Generative AI (Gen AI) at the forefront. Gen AI, which is AI that can create content in response to a user’s request or prompt, sits at the centre of the latest AI buzz, and rightly so. Think of generative AI as that ever-resourceful work buddy you always wished you had: an extra set of hands and an extra brain. It can serve as a second pair of eyes on that important email you have reread for the umpteenth time, professionalise the tone of your business communication, generate outlines, and review content for business proposals and reports. It answers questions intelligently using models trained on millions of data points and records, saving you the time and energy you would have spent scouring through screens of Google search results.
Shadow AI refers to the use of artificial intelligence tools by employees of an organisation to carry out work-related tasks without the approval or oversight of the IT department. Most employees use these tools with the good intention of enhancing their productivity.
The most common AI tools used by employees without the IT department’s consent are generative AI chatbots. Generative AI chatbots, including Google’s Gemini and OpenAI’s ChatGPT, generate intelligent, human-like dialogue in response to prompts using natural language processing.
They are also easily accessible: most Gen AI tools offer a free tier or trial and can be reached via a web browser or mobile app.
The use of unauthorised AI for work-related tasks is increasing rapidly, with employees sharing organisational data on generative AI chatbots. Many employees turn to Gen AI chatbots for efficiency, often without considering the security and privacy implications for organisational or even personal data. As employees experience the intelligent responses these tools produce, they trust them more and lean on them to refine and improve their deliverables. They share emails for review, data and reports for analysis or summarisation, and other portions of their day-to-day work. Often without realising it, they are handing over sensitive information: emails, reports, client records. The consequences include security, privacy, and regulatory breaches, as well as potential fines.
A 2023 report by Fishbowl, a social network for professionals, found that 43% of professionals have used AI tools, including ChatGPT, for work-related tasks, and 68% of those professionals are doing so without their boss’s knowledge. This unauthorised use may expose confidential company data, putting trade secrets, private client data, and other organisational information at risk.
Organisational information entered into these chatbots may be revealed when other users, who may be competitors or even malicious actors, query the models on similar subjects. While generative AI applications are not designed to exploit organisational data for malicious purposes, data entered into them may be used to train future versions of the models. After all, that is how they got so smart: by learning from trillions of data points.
The rise of shadow AI introduces security and compliance risks. Large Language Models (LLMs), which sit at the core of AI chatbot applications, learn from large data sets drawn from a variety of sources, including publicly available web content and data received from users. Sharing organisational information on unauthorised AI platforms therefore exposes the organisation and its customers to breaches of data confidentiality, integrity, and privacy.
Sensitive personally identifiable information (PII) or protected health information (PHI) shared on these platforms is also at risk, as prompt injection can be used to manipulate AI chatbots into releasing it. Moreover, sharing system and application configurations can expose the organisation’s systems to compromise.
Another concern is that AI models hallucinate. Due to factors like high model complexity and training data inaccuracies, AI models sometimes generate fabricated, inaccurate, and misleading information that may seem factual but is not. Employees need to exercise due care when incorporating the output of these bots into their deliverables to avoid spreading misinformation. Furthermore, cybercriminals are jailbreaking LLMs to make them bypass their own guardrails and produce harmful, false, or flawed information.
Organisations are responding to these risks, but not at a fast enough pace. According to the 2024 Littler AI C-Suite Survey Report on balancing risk and opportunity in AI decision-making, fewer than half of executives (44%) say their organisations currently have policies regarding generative AI in place. This number needs to increase significantly.
While some organisations may respond to these risks by banning generative AI outright, a ban is rarely effective because employees can still use AI discreetly on their personal devices to handle work-related content. A better approach is to help employees understand the risks and guide them in using the technology conscientiously. To tackle the challenge head-on, organisations first need a comprehensive AI strategy comprising proactive risk management, governance frameworks, and robust technical controls.
Organisations need to assess the risk and impact of indiscriminate AI usage for work tasks and then, based on the outcome of the assessment, develop a policy on its acceptable use tailored to their organisational realities.
The policy should be augmented by guidelines that define acceptable and unacceptable use cases, including which AI tools may be used, which must not be used, and which categories of organisational information must never be shared with unauthorised generative AI applications. The guidelines should also name the team within the organisation that employees can contact for additional clarity and guidance. To reinforce the policy, controls such as access monitoring, employee reporting, automated monitoring, and audits can be applied. Organisations will also have to train employees on the organisation’s stance on the use of unauthorised AI for work. This training can be incorporated into the organisation’s ongoing security education programme and helps employees understand the risks of shadow AI as well as how to use AI responsibly and safely.
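To make the idea of automated monitoring concrete, here is a minimal sketch of the kind of check a security team might run: scanning web proxy logs for requests to popular generative AI services and counting them per user. The log format, file path, and domain list are illustrative assumptions only; in practice, an organisation would rely on its own proxy, CASB, or data loss prevention tooling rather than a standalone script.

```python
# Illustrative sketch only: flag access to generative AI services in web proxy logs.
# Assumes a simple space-separated log format of "timestamp user destination_host";
# a real environment would use its proxy/CASB vendor's export format instead.
from collections import Counter

# Hypothetical list of generative AI domains to watch for.
GEN_AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"}

def flag_gen_ai_access(log_path: str) -> Counter:
    """Count requests per user to known generative AI domains."""
    hits = Counter()
    with open(log_path) as log_file:
        for line in log_file:
            parts = line.split()
            if len(parts) < 3:
                continue  # skip malformed lines
            _timestamp, user, host = parts[:3]
            if host.lower() in GEN_AI_DOMAINS:
                hits[user] += 1
    return hits

if __name__ == "__main__":
    for user, count in flag_gen_ai_access("proxy.log").most_common():
        print(f"{user}: {count} request(s) to generative AI services")
```

A report like this is not meant to police employees; it simply gives the security team visibility into how widely Gen AI tools are being used, so that training and policy can be targeted where they are needed most.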
Organisations need to act urgently. Implementing these guardrails will enable employees to leverage this valuable technology to enhance their productivity without jeopardising their organisation’s and clients’ competitiveness, security, and privacy.
Ofoma, Cybersecurity GRC (Governance, Risk Management, and Compliance) expert, Atlanta, USA



