How Businesses Can Safeguard Against Poor AI-Based Decision Making

Privacy, data and permissions: will artificial intelligence mean ripping up the rule book and starting again?

The possibilities of AI are vast, and its current capabilities are already impressive: it can review MRI and PET scans to identify malignant tumours with greater accuracy than doctors, recognise and track endangered wildlife, steer driverless cars and, more mundanely, read and extract key information from thousands of legal contracts in minutes.

However, AI poses new questions of accountability when something goes wrong: the data provider, the software developer, the implementation partner and the company operating the technology may all potentially be liable for a bad outcome.

AI presents a particular challenge for the existing legal framework, in which liability is established by reference to reasonably expected standards of behaviour and the foreseeability of an outcome. Hence, we counsel businesses to take a cautious and considered approach when rolling out AI solutions.

1. Vendor and Product Selection

AI breaks new ground, so proper preparation is essential from the very earliest stage of a new implementation. It is important to understand potential suppliers and their offerings, and to match their capabilities against the business’s requirements.

Before starting any project, set clear objectives, including the strategic and tactical outcomes sought from the procurement. Where appropriate, run pilots (or proofs of concept) to demonstrate capability and use case, and to verify service performance.

Throughout, bring the right cross-business team together: business owners, external consultants, change management experts, IT, risk, compliance, procurement, legal, finance and HR.

2. Pricing Models

Select the right model. Potential options include:

· software licence fees, which could be configured in different ways: per bot, per named user, linked to turnover, or usage-based

· professional services fees for installation, configuration, implementation and training

· ongoing maintenance and support fees

Increasingly, we are seeing traditional licence fees replaced or supplemented by transaction-based or value-based models with risk/reward elements that allow gain- or pain-sharing. In a competitive landscape, one leading automation vendor offers an innovative bot store (an online marketplace) and a no-cost starter package for new business customers.

Whatever the components, customers should ensure that the pricing metrics are carefully understood and that they have the flexibility to scale up or down across the enterprise as business requirements change.
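
To make the point concrete, here is a minimal sketch in Python of how a transaction-based fee with a gain/pain-share element might be modelled and sense-checked as volumes scale. Every rate, baseline and share in it is a hypothetical assumption, not a market figure.

```python
# Hypothetical illustration of a transaction-based fee with a
# gain/pain-share element. Every rate, baseline and share below is
# invented; a real contract will define its own pricing metrics.

def monthly_fee(transactions: int,
                rate_per_txn: float = 0.05,       # assumed price per transaction
                baseline_saving: float = 10_000,  # agreed target saving
                actual_saving: float = 12_000,    # measured saving this period
                share: float = 0.2) -> float:
    """Usage-based fee plus a share of savings above (or below) target."""
    usage_fee = transactions * rate_per_txn
    gain_or_pain = (actual_saving - baseline_saving) * share  # positive or negative
    return usage_fee + gain_or_pain

# Sense-check how the fee scales as business volumes change
for volume in (50_000, 100_000, 200_000):
    print(f"{volume} transactions -> fee {monthly_fee(volume):,.2f}")
```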

Fees have to be clear and flexible, in line with demand

3. Contract Approach

Finalising an AI agreement will involve tried and tested legal drafting, blended with provisions specific to the new technology, which as yet has few market standards and little specific regulation. In such a new environment, deploying a flexible, agile methodology to contract drafting can help ensure the AI solution fits the required purpose.

Some aspects of contracts may differ dramatically from past service arrangements: when it comes to reporting and service levels, AI could, and probably should, change the game. AI systems record what they did and why, and to a large degree they do not make mistakes, so customers may argue that they should expect perfect service.

Particular attention should be paid to providing for knowledge transfer services and exit support in the case of contract expiry or unforeseen early termination – otherwise the ‘black box’ nature of AI’s inner workings risks supplier lock-in.

4. Trust

A degree of trust in AI is essential, because AI’s processes often lack transparency and cannot easily be understood by humans. But trust must be earned. The developers of AlphaGo, Google DeepMind’s system, could not explain why it made certain complicated moves in beating the human world champion of the board game Go.

If we can’t easily comprehend AI’s conclusions, how can we be sure that automated processes are playing fair with their decision-making?

This makes it difficult to assess risk accurately, but businesses must still consider the environment in which the technology is being used: systems running critical infrastructure, such as nuclear power stations, must set the highest bar for what is considered safe.

Before adopting AI, businesses may need to convince a regulator, perhaps by using software to monitor the technology – algorithmic auditors that will hunt for undue bias and discrimination.

This will likely impact performance, since the system will divert processing power to self-analysis, but it could mean the difference between the system being rejected or approved for commissioning. In time, though, we predict that trust in proven and widely adopted AI will become second nature; as the computer scientist John McCarthy observed, “[a]s soon as it works, no one calls it AI anymore”.
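
As a rough sketch of what such an algorithmic auditor might test, the Python fragment below computes the ratio of approval rates between a protected group and everyone else. The data, group labels and 0.8 threshold (which echoes the US ‘four-fifths’ rule of thumb) are illustrative assumptions; real audits would apply many more fairness measures.

```python
# Minimal sketch of an "algorithmic auditor" testing for disparate
# impact. The decisions, group labels and 0.8 threshold are
# illustrative only.

def impact_ratio(decisions, groups, protected):
    """Approval rate of the protected group relative to everyone else."""
    prot = [d for d, g in zip(decisions, groups) if g == protected]
    rest = [d for d, g in zip(decisions, groups) if g != protected]
    return (sum(prot) / len(prot)) / (sum(rest) / len(rest))

decisions = [1, 1, 1, 1, 0, 1, 0, 0, 0, 1]   # 1 = approved, 0 = declined
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = impact_ratio(decisions, groups, protected="B")
print(f"impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("warning: possible adverse impact on the protected group")
```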

5. Privacy

Where personal data is involved, data protection and privacy laws and regulations need to be considered in great detail. Big data is the enabler of AI, raising significant concerns about fairness and accountability.

AI’s powerful ability to recognise patterns that people cannot detect threatens our right to privacy, whilst the GDPR will introduce the right for individuals to question automated decisions and contest the results. Privacy impact assessments will often be required at the outset of each implementation project.

A smart business will need to think carefully before procuring a system which cannot explain itself.
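
By way of a hedged illustration of what a system that can explain itself might mean in practice, the toy Python model below logs each automated decision alongside the contribution of every input, producing a record an individual could later question or contest. Its features, weights and threshold are entirely invented.

```python
# Illustrative only: a toy linear scoring model that logs every
# automated decision together with each input's contribution. The
# features, weights and threshold are all hypothetical.

WEIGHTS = {"income": 0.4, "years_at_address": 0.2, "missed_payments": -0.7}
THRESHOLD = 1.0

def decide_and_explain(applicant: dict) -> dict:
    contributions = {f: w * applicant[f] for f, w in WEIGHTS.items()}
    score = sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": round(score, 2),
        "contributions": contributions,  # the audit trail for this decision
    }

record = decide_and_explain(
    {"income": 3.0, "years_at_address": 2.0, "missed_payments": 1.0}
)
print(record)  # shows missed_payments as the decisive negative factor
```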

Privacy and data management are increasingly sought after

6. Big Bad Data

In AI, “rubbish in, rubbish out” rules: outcomes will only ever be as good as the quality of the data on which decisions are based. Many variables determine the quality of source data: are the data sets ‘big’ enough? Is ‘real-world’ data being used? Is the data corrupt, biased or discriminatory?
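
These questions can be turned into automated checks that run before any data reaches the system. The Python sketch below is one minimal version, assuming a simple list-of-records data set; its field names and thresholds are illustrative, not industry standards.

```python
# Minimal data-quality gate run before any training or decision-making.
# Field names and thresholds are illustrative assumptions.

MIN_ROWS = 1_000          # is the data set 'big' enough?
MAX_MISSING = 0.05        # tolerate at most 5% missing values per field
MIN_GROUP_SHARE = 0.10    # flag groups represented below 10%

def quality_report(rows, group_field):
    issues = []
    if len(rows) < MIN_ROWS:
        issues.append(f"only {len(rows)} rows; data set may be too small")
    for field in rows[0]:
        missing = sum(1 for r in rows if r.get(field) is None) / len(rows)
        if missing > MAX_MISSING:
            issues.append(f"{field}: {missing:.0%} missing")
    groups = [r[group_field] for r in rows if r.get(group_field) is not None]
    for g in set(groups):
        share = groups.count(g) / len(groups)
        if share < MIN_GROUP_SHARE:
            issues.append(f"group {g!r} under-represented ({share:.0%})")
    return issues

sample = [{"age": 34, "region": "north"}, {"age": None, "region": "south"}]
print(quality_report(sample, "region"))
```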

With so much potential uncertainty, businesses should make every effort to minimise the risks.

Where data is sourced from a third party, contracts should require transparency around lineage, acquisition methods and model assumptions, both initially and on an ongoing basis where the data set is dynamic. They should also mandate security procedures around the data, to prevent loss, tampering and the introduction of malware, all reinforced by comprehensive rights to audit, to seek injunctive relief and to terminate.

A common-sense approach means that businesses should not rely too heavily on a limited number of data points and should combine big data analytics with other decision-making tools.

A corollary of these concerns is the tremendous power they confer on those who own large repositories of accurate personal data. We expect this issue to become a significant focus for regulatory and contractual protection in the coming years.

Isaac Asimov, the famous science fiction writer, once laid down a series of rules to protect humanity from what we would now describe as AI-powered robots. Perhaps it is time businesses did the same through adoption of policies which promote the ethical use of AI.

After all, we can’t know the future, but we can prepare for it. And with AI, the future is now.

Tim Wright is a partner and Antony Bott is a global sourcing consultant at Pillsbury Winthrop Shaw Pittman.
