Emerging risks - Artificial Intelligence

Mar 05, 2019

When you hear the phrases "artificial intelligence", "machine learning" or "autonomous systems", what images do they conjure up?

You might be imagining a world with endless possibility. The eternal optimist?

Or perhaps a dystopian future - a robotic society. The less optimistic?

Or maybe something in between. The balanced view?

Whatever it is, there is little doubt that artificial intelligence (AI) has entered the mainstream and is not going away.

 
If you use Google to search - and who doesn't - you're already using AI.*
 
*Search engines typically use machine learning, a branch of AI.
 

The opportunities are vast, but they come with risk

What are some of the associated risks we need to be aware of (for our careers and for organisational success)?

 

Let's start with some of the organisational risks:

 

1. Customer protection:

First do no harm. Using AI incorrectly can be catastrophic - just ask the teams developing self-driving cars. AI also needs to be carefully crafted to respect, and protect, customer privacy.

 

2. Customer expectations:

Matt Turck, a partner at FirstMark (a VC firm with an impressive portfolio), has said that "Customers expect your AI to be superhuman." Customers expect your products and services to work, so any AI you use needs to work too.

There is tolerance for human mistakes, but far less for algorithmic errors. We all know how sophisticated Google search is, yet when it fails we get frustrated.

 

3. Customer trust:

Your customers are growing in sophistication. If you add too much colour to your AI adoption claims, those claims are unlikely to translate into better customer outcomes.

Eventually your customers will notice the overstatement and call you on it. In practice, "calling you on it" may mean voting with their feet - leaving for competitors that do not exaggerate their claims. Humans prefer honesty.

 

4. Technology:

Traditional technology risk and control practices extend to AI. Among them are:

  • cyber/security protection (e.g., preventing hackers from accessing the AI brain).

  • change control (e.g., ensuring that algorithm changes are tested - see the sketch after this list).

  • third-party oversight (e.g., monitoring API performance, data cache retention and access).
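
To make the change-control point concrete, here is a minimal Python sketch - my own illustration, not any particular toolchain - of a release gate: a changed algorithm is approved only if it performs no worse than the current one on the same fixed holdout set. The models, data and tolerance below are hypothetical placeholders.

# Illustrative sketch only: a regression-style check that an algorithm
# change does not degrade accuracy on a fixed holdout set before release.
# Models, holdout data and tolerance are hypothetical placeholders.

def accuracy(predict, holdout):
    """Fraction of holdout examples the predictor labels correctly."""
    hits = sum(1 for features, label in holdout if predict(features) == label)
    return hits / len(holdout)

def approve_change(current_model, candidate_model, holdout, tolerance=0.01):
    """Approve the candidate only if it scores no worse than the
    current model (within tolerance) on the same holdout data."""
    baseline = accuracy(current_model, holdout)
    candidate = accuracy(candidate_model, holdout)
    return candidate >= baseline - tolerance

# Trivial stand-in models to show the gate in action:
holdout = [((0,), 0), ((1,), 1), ((2,), 0), ((3,), 1)]
current = lambda x: x[0] % 2         # existing algorithm: perfect here
candidate = lambda x: 1 - x[0] % 2   # proposed change: worse here
print(approve_change(current, candidate, holdout))  # False - block the release

The point is the process, not the code: no algorithm change ships without being measured against the version it replaces.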

5. Data:

AI cannot work well with poorly managed data.

Data governance is important across the board, covering quality, privacy, confidentiality and compliance.

But data quality is particularly important - the lower the quality, the higher the risk of failure.
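
As a rough illustration of what a quality gate might look like - the field names and thresholds here are hypothetical, not from any particular standard - here is a minimal Python sketch that counts missing and implausible values in a batch of records and refuses to train when the defect rate is too high.

# Illustrative sketch only: minimal data-quality gates run before
# records are used to train a model. Field names and thresholds
# are hypothetical placeholders.

def quality_report(records, required_fields=("customer_id", "age", "income")):
    """Count missing fields and out-of-range values in a batch of records."""
    issues = {"missing": 0, "out_of_range": 0}
    for rec in records:
        if any(rec.get(f) is None for f in required_fields):
            issues["missing"] += 1
        age = rec.get("age")
        if age is not None and not (0 <= age <= 120):
            issues["out_of_range"] += 1
    return issues

def fit_for_training(records, max_defect_rate=0.05):
    """Refuse to train if too large a share of the batch is defective."""
    defects = sum(quality_report(records).values())
    return defects / len(records) <= max_defect_rate

records = [
    {"customer_id": 1, "age": 34, "income": 72000},
    {"customer_id": 2, "age": None, "income": 51000},  # missing value
    {"customer_id": 3, "age": 214, "income": 48000},   # implausible age
]
print(quality_report(records))    # {'missing': 1, 'out_of_range': 1}
print(fit_for_training(records))  # False - 2 of 3 records are defective

The design choice is deliberate: fail the batch early rather than let defective records quietly degrade the model.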

 

6. People:

Your people are not ready, and they either know it (morale risk) or don't know it (efficiency risk).

Or your people want to explore, but they are blocked from doing so, and you risk losing them to competitors.

 

7. Failure to adopt:

If you have not yet started to experiment with AI, you are already behind the curve. While there are services and tools that can accelerate adoption, it takes time to build AI literacy across the organisation. Sustainable adoption does not happen overnight.

 

And personal/career risks?

Here are some of the key risks and opportunities to consider:

 

1. Leading self:

Have we closed ourselves off to the possibilities, or are we open to understanding and embracing the inevitable changes?

 

2. Leading others:

Are we enabling our teams to understand, explore and adapt?

 

3. Leading the organisation:

How do we ensure that our organisation is preparing?

 

The 10 risk considerations outlined here are clearly not a comprehensive set, but they could help fill gaps in your AI risk profile, or perhaps start the AI conversation.

 

How are you managing the risks associated with AI?

 
