Is AI Casting A Shadow Over Your Employees And Company Practices?
By Alan Taylor · September 12, 2023
Artificial Intelligence (AI) now sits behind almost every piece of software we use, so by extension it's affecting the way we live, both at work and in our leisure time.
Imagine you’re working or taking a vacation in an unfamiliar town, and you want to find a restaurant.
If you Google the phrase 'restaurants near me', then straight away Google will draw upon your previous search history, your social media posts and dozens of other online trails you've left over the years to suggest the type of restaurant that Google 'thinks' you'll prefer.
Whether you realize it or not, AI has just heavily influenced where you’re going to eat tonight.
That same process applies equally to employees in the workplace: if those individuals are using their own devices, or their own choice of software, to perform whatever tasks they've been asked to complete, someone's third-party AI will probably be part of that process.
From a copywriter using MS Word's spell check to a marketing intern choosing a stock photo for their company's social media post, an instance of AI will be influencing your employees' decision-making to some extent.
Shadow Boxing
Accordingly, the presence of such 'shadow AI' within a company's procedures and systems should be recognized and audited, to ensure that its use isn't harming the company's systems or, at the very least, isn't portraying the organization in a way it wouldn't choose.
The term 'Shadow AI' was coined from the concept of 'Shadow IT', which refers to the use of IT systems that aren't sanctioned by an organization's IT department and policies.
The risks of Shadow AI include potential data privacy issues and exposure to hacking and phishing attacks, together with unintended consequences that might be as seemingly insignificant as spell-checking a document for publication in American English rather than, say, UK English.
Employees might also use AI-driven software that generates misinformation, then act upon that output with potentially far-reaching consequences.
One example of this would be exposing proprietary company information to a Large Language Model (LLM) such as Bard or ChatGPT and taking the output as unquestionable truth.
Sometimes LLMs are plain wrong, and if they’re ‘unsure’, they’ve been known to ‘make up’ results and appear to be ‘confidently wrong’.
This is somewhat worrying, on more than one level. Firstly, it's behavior that incompetent humans display when they don't want to appear stupid, which raises the question: if LLMs don't 'wish' to appear stupid, do they possess some sort of self-awareness? That's a little scary in itself.
AI Isn’t Infallible By Any Means…
Secondly, if people simply assume that LLMs 'must be right' because they're seen as unquestionably hyper-intelligent, the consequences could be dire when those models give out false information.
This worrisome scenario has already become sufficiently commonplace that the California government is considering legal parameters for where and how AI can be used in certain aspects of people's daily lives.
Whatever the pros and cons of using Shadow AI in the workplace, what’s most important is to have a policy in place and enforce it.
In small businesses with only a handful of employees, say a tech startup, it's likely that the employees themselves are sufficiently tech-savvy to agree among themselves on which software and procedures to use in their daily tasks.
However, a larger organization shouldn't operate in such a democratic way. Irrespective of employees' fears of job losses due to automation, board-level leadership should agree on a set of regulations covering how certain things are done in the workplace and with which tools, enforcing those regulations through employee disciplinary procedures if necessary.
Another way to help ensure that employees aren't finding their own ways of doing things is to make sure they all use standardized software and are properly trained to use it.
In turn, an excellent way of ensuring that workers are sufficiently educated in that software is for their employer to install a digital adoption platform (DAP).
A DAP is a secondary layer of software that acts as a teaching assistant, constantly running in the background of the primary application to which it's assigned.
Harnessing ‘Real’ Human Intelligence Using Digital Adoption Platforms
Crucially, DAPs reduce the frequency of mistakes caused by poor digital adoption by using their own AI, which hyper-personalizes guidance to each individual employee's interactions with the software as they work.
Imagine, for example, that employee A logs onto a workstation and opens their accounts filing software.
A new and unfamiliar screen appears after a software upgrade, and the employee hesitates for a few moments because they can't work out what they're required to input.
The DAP would know that this operator has never seen this particular screen before, and would display tooltips over each field, explaining what figures need to be entered where.
However, if the employee logs off that workstation and another person takes over the same machine, the DAP would know that this fresh operator has already used the new, updated screen several times, and wouldn't display any potentially irritating tooltips unless that employee asks for help or repeatedly hesitates.
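To make that behavior concrete, here's a minimal sketch in TypeScript of the kind of per-user, per-screen logic a DAP overlay could apply. The names, thresholds and structure are illustrative assumptions on my part, not any particular vendor's implementation.

```typescript
// Hypothetical sketch of per-user, per-screen guidance logic for a DAP overlay.
// None of these names or thresholds come from a real DAP product.

interface UserScreenStats {
  completions: number; // how many times this user has successfully finished this screen
}

const HESITATION_THRESHOLD_MS = 8_000; // idle time treated as "stuck" (assumed value)
const FAMILIARITY_THRESHOLD = 3;       // completions after which tooltips stay hidden

// Keyed by "userId:screenId" so two people sharing a workstation get different guidance.
const stats = new Map<string, UserScreenStats>();

function keyFor(userId: string, screenId: string): string {
  return `${userId}:${screenId}`;
}

// Decide whether to overlay field-level tooltips for this user on this screen.
function shouldShowTooltips(
  userId: string,
  screenId: string,
  idleMs: number,        // how long the user has been inactive on the screen
  askedForHelp: boolean, // user clicked an explicit "help" control
): boolean {
  const record = stats.get(keyFor(userId, screenId)) ?? { completions: 0 };

  if (askedForHelp) return true;                                // explicit request always wins
  if (record.completions < FAMILIARITY_THRESHOLD) return true;  // screen is still new to them
  return idleMs >= HESITATION_THRESHOLD_MS;                     // familiar user, but they seem stuck
}

// Record a successful submission so guidance fades out as the user gains experience.
function recordCompletion(userId: string, screenId: string): void {
  const key = keyFor(userId, screenId);
  const record = stats.get(key) ?? { completions: 0 };
  stats.set(key, { completions: record.completions + 1 });
}

// Example: employee A has never seen the upgraded screen; employee B has used it three times.
recordCompletion("employeeB", "accounts-filing-v2");
recordCompletion("employeeB", "accounts-filing-v2");
recordCompletion("employeeB", "accounts-filing-v2");

console.log(shouldShowTooltips("employeeA", "accounts-filing-v2", 0, false));     // true: unfamiliar screen
console.log(shouldShowTooltips("employeeB", "accounts-filing-v2", 2_000, false)); // false: familiar, not stuck
console.log(shouldShowTooltips("employeeB", "accounts-filing-v2", 9_000, false)); // true: repeated hesitation
```

The design point this sketch illustrates is that guidance decisions are keyed to the person and the screen rather than to the workstation, which is why a second employee at the same machine sees no tooltips until they ask for help or visibly hesitate.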
Having a DAP running alongside employees' daily screens is like seating them next to a friendly, helpful and experienced colleague who only offers help when the worker is either unfamiliar with a new process or appears to have forgotten how to run a task smoothly.
Having AI in the workplace is certainly not a bad thing in itself, but it should be AI that has been specifically trained to work in the company's closed environment, rather than third-party 'Shadow AI' brought into the company by a struggling employee taking procedural matters into their own hands.