Opinions

Understanding And Addressing Staff Concerns About AI

AI is in its infancy in terms of its potential. Its intelligence, and the scope of its abilities, are growing exponentially.

In the wake of the pandemic, Brexit, geo-political conflicts, climate change and economic turmoil, our businesses are in the midst of a perfect storm, many struggling to survive, let alone thrive.

But fear of AI and its implications is now causing huge concern, uncertainty and stress among the very people we all depend on, just as reports of mental health issues and falling productivity mount by the day.

The introduction of Generative AI changed everything.

It is still technology and it is still a tool, but generative AI models are a major new generation of AI tool and they come with new abilities. They can now learn, make decisions independent of human input and, increasingly, act upon what they have learned.

They can generate original text and images and can formulate answers to complex questions using their own experience, training and logic - at speed, 24/7. Unlike us, they don’t need to sleep or take a break.

In an open letter dated 22nd March 2023, more than 1,100 senior technology leaders and researchers, including Elon Musk, CEO of Tesla and SpaceX, Steve Wozniak, co-founder of Apple, and Tristan Harris, executive director of the Centre for Humane Technology, warned about the impact of AI tools.

The letter stated: “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.”

AI is scary

I have been working in the tech sector for more than 40 years and AI is scary – we are entering the unknown. In addition to the risk to our jobs, AI tech can be used to deceive and defraud. Advances in deepfake technologies are causing us to question whether what we see is real.

And AI is in its infancy in terms of its potential. Its intelligence, and the scope of its abilities, are growing – exponentially.

However, on balance, when applied by good people to ethical projects, I believe the evolution of AI will be positive for mankind. A tool cannot be good or bad – as always, that is the domain of the people who use and control it. Companies like Intel are already working on tech to identify video fraud.

Personally, I am excited and a little worried, in equal measure. However, control over the future of AI, and how we and our staff engage with it, is still in our hands. More importantly, we will all play a part in how the balance of risk and reward pans out.

A positive outcome starts with all of us – business leaders, managers and staff – staying abreast of developments in AI. How our staff feel about the rapid evolution of AI’s abilities, and how they behave when we ask them to engage with it, will be decided by the example we set, the governance we put in place and the way our businesses approach this increasingly capable and intelligent child.

Getting a handle on staff concerns

As a matter of urgency, we have to get a handle on staff concerns about AI and help them better understand and recognise the opportunities this new technology brings. We need to encourage them to believe that the risks fuelling their fears can and will be contained.

So how do we reduce staff fears and encourage them to engage with AI, so they use it safely, productively and ethically?

Here is a 5-point plan which will help.

  1. Formulate and publish an AI strategy and AI use policy

A clear, transparent strategy outlining your plans for AI within the business, and an AI policy detailing how staff should engage with AI tools, will go a long way towards helping them get comfortable using the tech while staying in line with company governance.

  2. Recognise AI’s limitations

AI is open to bias and error – as generative AI models will tell you if you ask them, they are all still learning. Their reference points are growing but remain anchored to their original training. AI tools are only as good as the code used to instruct them and the data to which they have access – like all technology, they have faults and are prone to error.

  3. Limit AI’s control

AI offers huge potential but it is far from perfect – and it may well never be. AI is getting smarter but that doesn’t necessarily mean it can be trusted to ‘take control’ - particularly over functions or systems which are mission critical to the operation of our businesses. Test and retest, and where possible, simulate, before granting access to key systems.

  4. Continually monitor AI’s use and impact on your business, and on your employees

Even the tech experts are nervous about the potential impact of AI. Your staff will be listening to the news, the warnings and the horror stories.

Publish clear guidelines on ethical, responsible and safe use of AI and monitor its use, highlighting any issues as they arise. Adapt employee surveys to capture employee experiences and feelings towards AI and consider running surveys a little more frequently – as we know, AI is evolving fast.

  5. Educate, explore, motivate and celebrate

Offer staff an AI Bootcamp where they can learn the basics, and provide them with a safe sandbox environment where they can engage with AI and explore its potential. Incentivise and motivate (reward) staff to make suggestions about ways AI could improve the business - and celebrate whenever applying AI results in an improvement in efficiency or a financial gain.

If we aren’t engaging with AI to improve the efficiency of our business operations, our customers, suppliers and competitors will be. We have a choice – to be reactive bystanders or to engage and explore.

The former approach risks us being left behind as we lose our ability to digitally engage with our clients and partners - or worse, it exposes us to AI-powered cyber attacks we aren’t able to defend against.

The latter strategy introduces us and our staff to a whole new world of opportunities and the ability to actively participate in a major new tech-driven industrial revolution.

Grant Price is CEO at YOHO Workplace Strategy.
