How AI And Data Can Be Ethically And Productively Used To Drive Business Strategy

AI is a tool to help drive productivity; however, its use needs to be aligned with the risk and ethical culture of the organisation.


If used, built and implemented in the right way, AI will support the delivery of outcomes aligned to the business strategy. But it is not a silver bullet.

At the individual level

To use the technology productively, we need to stop talking about ‘AI’ in the abstract. Instead, we should talk about the specific AI component we are referring to.

Using ChatGPT, Copilot or similar AI tools to help us write a draft response, summarise meeting minutes or define actions discussed in a meeting is not the same as an AI model used to understand customer sentiment.

But if it is all AI, what is the problem? We are continuing to talk about AI in a way that does not help us unlock the productivity benefits it can deliver.

The issue is that, because AI is often referred to without any real context, many people are scared of it and/or don’t understand it. This is why we need to start looking at AI components as discrete productivity tools.

When using ChatGPT or Copilot to improve productivity, we must talk about it in terms of productivity gains. For example, using AI writing and summarisation has helped reduce the time I spend writing letters, meeting notes and summaries by 90%. Leading with the benefits helps implement the tools organisation-wide and ultimately drives the business strategy. Instead of training on AI, train on how to become more productive in all facets of business communication, using AI tools as an enabler.

When assessing the use of general AI productivity tools like ChatGPT and Copilot, the language and outcomes need to be defined in a positive way to encourage people to share their experiences and to understand what the tools are and why they are beneficial. Furthermore, too often the training and rollout of these tools is seen as a technology activity. It should be understood, communicated and delivered as a way to give employees time to think more and to use their knowledge to add value in the organisation. Creating this open forum for discussion around AI use will help to ensure open, ethical and productive implementation.

At the organisational and cultural level

Now consider the technology- and data-heavy AI models that are built, maintained, monitored and implemented within an organisation; to most people these will be opaque. A tool used to understand customer sentiment or optimise a supply chain, if implemented well, will provide the user with quality information to help them work productively, whether by making better-informed decisions or by automating some of the decisions in an end-to-end process. The mechanics of how it works and the monitoring of its efficacy must be closely managed in the background. This is where ethics, AI Ops and production integration are critical.
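As a minimal sketch of what that background monitoring can look like in practice, the example below compares a deployed sentiment model’s recent accuracy on human-reviewed samples against a baseline and flags it for review when it drifts. The function names, baseline and tolerance are illustrative assumptions, not a reference to any particular platform or the author’s own tooling.

```python
# Minimal sketch of background efficacy monitoring for a deployed model.
# Baseline, tolerance and the weekly sample are illustrative assumptions.

def accuracy(predictions, labels):
    """Fraction of predictions that match the human-reviewed labels."""
    if not labels:
        return 0.0
    return sum(1 for p, y in zip(predictions, labels) if p == y) / len(labels)

def check_model_efficacy(recent_predictions, recent_labels,
                         baseline_accuracy=0.90, tolerance=0.05):
    """Flag the model for review if recent accuracy drifts below the baseline."""
    current = accuracy(recent_predictions, recent_labels)
    return {"current_accuracy": current,
            "needs_review": current < baseline_accuracy - tolerance}

# Example: a weekly sample of sentiment predictions vs. reviewed labels.
result = check_model_efficacy(
    recent_predictions=["positive", "negative", "negative", "positive"],
    recent_labels=["positive", "negative", "positive", "positive"],
)
if result["needs_review"]:
    print("Alert: sentiment model accuracy has drifted; trigger a review.")
```

The point is not the specific metric but that the check runs routinely, out of the user’s way, and escalates to a human when the model stops serving the business objective.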

Data culture and understanding are fundamental both to creating these models and to ensuring that they are tested, validated and monitored so that the results stay in line with the business objective they serve. This broader culture is needed not just in the data teams, but across the business. Simple, business-focused communication and inclusion in mandatory and regulatory training are fundamental tools for getting the message across that data culture is nothing more than a component of the broader business culture: how you create, deliver and manage products and services for customers.

This integration of data into the fabric of your general business culture is also important to ensure that risk in all its forms is considered and understood. Making decisions based on outcomes that unfairly disadvantage a specific group or reinforce existing bias is an operational, reputational and strategic risk. An environment in which decisions are made with AI but without knowledge of, or consideration for, bias makes for a dangerous cocktail.

A great example of this is the book “Weapons of Math Destruction”, in which Cathy O’Neil provides numerous examples of how poorly thought-through models using proxies to create outcomes can cause more harm than good. Often the outcomes seem “logical” but have not been thought through using the head (logic), heart (does it feel right) and gut (experience and intuition).
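To make the bias risk concrete, one simple check many teams run is a comparison of favourable-outcome rates across groups, sometimes called a disparate-impact ratio. The sketch below is a minimal illustration using assumed group labels and the commonly cited four-fifths threshold; it is not a complete fairness framework.

```python
# Minimal sketch of a disparate-impact style check on automated decisions.
# Group labels, outcomes and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratios(decisions, reference_group):
    """Ratio of each group's approval rate to the reference group's rate."""
    rates = approval_rates(decisions)
    reference = rates[reference_group]
    return {g: rate / reference for g, rate in rates.items()}

decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]
for group, ratio in disparate_impact_ratios(decisions, "group_a").items():
    if ratio < 0.8:  # the "four-fifths" rule of thumb
        print(f"Review outcomes for {group}: ratio {ratio:.2f} vs reference")
```

A passing ratio does not prove a model is fair, but a check like this, run before and after deployment, at least forces the head, heart and gut conversation the book argues for.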

Aligning at the strategic level

Data culture and strategy are important, and their role in delivering the business capabilities of an organisation is critical to creating uniformity and understanding. Understanding the interdependencies of delivery across an organisation prompts people to challenge what they think they know. Too often a singular, narrow focus stops diversity of thinking, with people considering only what is in front of them. This is true for the data teams, but also more broadly for the wider organisation.

Every business unit should understand the business capabilities they are responsible for and how they form part of a broader ecosystem of delivering products and services. If they don’t, helping to build this understanding is a key way to drive unity at a level that is easily grasped across the organisation. Statutory reporting in finance, for example, can be explained as “reporting needed to keep regulators and shareholders informed about how we perform”. The detail of how this is done is relevant only to a section of the organisation, but the broader capability should be understood by all.

In conclusion

To be effective, AI requires the basic building blocks of excellence in process, organisation design and general business management. While each application is specific, ethics, monitoring, testing, communication and management are always needed. If AI is implemented unethically and unproductively, costs increase and the likelihood of delivering against the business strategy is reduced. Whilst the technology landscape may shift, some things never change: good business needs good people doing the right things.

Barry Green is a future-thinking transformation leader who is passionate about using data, AI and technology to drive digital efficiencies in both our business and personal lives. Alongside Jason Foster, Barry is the co-author of Data Means Business: A practical guide to creating value from data and AI.
