Risks Beyond the Inflection Point - The Downside of Artificial Intelligence

Strategy

A funny thing happened at home recently. Not my home, but that of the Neitzel family in Texas, who had recently installed Amazon’s voice-activated digital assistant, Echo. Six-year-old Brooke wanted to play dolls, and so asked the ever-so-helpful assistant, “Can you play dollhouse with me and get me a dollhouse?”

Echo complied, ordering a KidKraft Sparkle Mansion dollhouse. For some inexplicable reason, Echo also ordered “four pounds of sugar cookies”. How good is that? From a six-year-old’s perspective, SENsational (emphasis intended).

But the story goes on…

…to a San Diego TV station, where the incident was reported using the wake word that activates the digital assistant. That word is ‘Alexa’, as in, “Alexa, tell me the weather forecast”. When the TV announcer uttered the word Alexa in relation to the dollhouse story, several viewers found their own Echos waking up and, you guessed it, ordering yet more dollhouses. A boon for KidKraft and a potential delight for many unsuspecting six-year-olds. Not so for the viewers who called in to complain.
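
The failure mode is simple enough to sketch in a few lines of code. What follows is a hypothetical illustration only, not Amazon’s implementation: every class, function and keyword check here is invented. It shows why an always-listening device that acts on any utterance of its wake word, with no purchase confirmation and no check on who is speaking, responds the same way to a child, an adult or a TV announcer.

```python
# Hypothetical sketch of the wake-word failure mode described above.
# Not Amazon's implementation; all names are invented for illustration.

WAKE_WORD = "alexa"


class OrderService:
    def place_order(self, item: str) -> None:
        print(f"Order placed: {item}")


class VoiceAssistant:
    def __init__(self, order_service: OrderService) -> None:
        self.order_service = order_service

    def on_audio(self, transcript: str) -> None:
        """Handle a transcribed utterance from the always-on microphone."""
        text = transcript.lower()
        if WAKE_WORD not in text:
            return  # ignore everything until the wake word is heard
        if "get me" in text:
            item = text.split("get me")[-1].strip()
            # The risky step: the order goes through with no confirmation
            # and no idea whether the speaker is a child, an owner, or a TV.
            self.order_service.place_order(item)


assistant = VoiceAssistant(OrderService())
# A six-year-old at home and a news anchor on television produce the same result:
assistant.on_audio("Alexa, can you play dollhouse with me and get me a dollhouse")
```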

Multiple orders of unwanted dollhouses were a minor, if humorous, transgression by Alexa; but it could be much more serious. These intelligent digital assistants, forerunners to Artificial Intelligence (AI), are not only listening all the time, they are also learning. In Arkansas, police sought to use data from a man’s Amazon Echo as evidence in a murder case against him, potentially making Alexa a key witness to murder.

The implications are profound and have not gone unnoticed by well-respected minds.

In the medium term, Artificial Intelligence is likely to polarise the workforce in ways never before seen. During a visit to Australia last month, Paul Hermelin, CEO of global tech consulting firm Capgemini, told The Australian newspaper, “We will see a polarisation of our society between extremely qualified people and a lot of people doing low-qualified tasks.

“Companies that are at the forefront of tech like Capgemini should be part of the social reflections on all of this.”

Where technology has previously replaced society’s low-qualified jobs, with AI it is likely to be the middle management roles that are replaced. Organising, scheduling, overseeing quality assurance and managing the performance of humans is the perfect domain for machines that can monitor and react instantly and precisely, that learn, and that never forget.

Machines are already assuming middle management tasks, sometimes with poor outcomes. Just before Christmas, an Uber driver in Sydney who had completed more than 2,500 trips received an automated message telling him he had been ‘deactivated’ because his ride cancellation rate had exceeded 40 per cent. He told the Australian Financial Review, “I'm hiring a car at $290 a week, I've got teenage kids. At the end of the year, I've got bills. How else, at short notice, could I pay that bill? At 46 years old, I don't have that option."

This message severing his income a fortnight before Christmas came not from a hard-assed middle manager, but directly from Uber’s computers. Already gone are the middle managers who might previously have wrestled with such a decision.
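Stripped to its essentials, such an “algorithmic middle manager” can be nothing more than a threshold rule applied with no human review. The sketch below is hypothetical; only the 40 per cent threshold and the trip count come from the story above, and none of the names reflect Uber’s actual system. It shows how a single automated policy, with no appeal step and no awareness of context, makes the decision a human manager might once have wrestled with.

```python
# Hypothetical sketch of an automated "middle manager": one threshold rule,
# no human review. The 40% threshold and trip count come from the story above;
# the code itself is invented for illustration and is not Uber's system.

from dataclasses import dataclass

CANCELLATION_THRESHOLD = 0.40  # deactivate above a 40% cancellation rate


@dataclass
class Driver:
    name: str
    completed_trips: int
    cancelled_trips: int
    active: bool = True

    @property
    def cancellation_rate(self) -> float:
        total = self.completed_trips + self.cancelled_trips
        return self.cancelled_trips / total if total else 0.0


def review_driver(driver: Driver) -> None:
    """Apply the policy automatically -- no appeal, no context, no manager."""
    if driver.cancellation_rate > CANCELLATION_THRESHOLD:
        driver.active = False
        print(f"{driver.name} deactivated "
              f"(cancellation rate {driver.cancellation_rate:.0%})")


# A driver with 2,500 completed trips can still trip the rule two weeks before Christmas:
review_driver(Driver(name="driver-001", completed_trips=2500, cancelled_trips=1700))
```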

With computing power and machine intelligence growing at exponential rates, these isolated examples are likely to increase in number and severity at similar rates.

In 2014, in a speech to the AeroAstro Centennial Symposium at MIT, Elon Musk said, “I think we should be very careful about Artificial Intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful,” said Musk. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”

He went on to say, “With Artificial Intelligence we are summoning the demon.”

Bill Gates is also concerned about Artificial Intelligence, as are Stephen Hawking and some 8,000 other people who have signed an open letter on the research needed to ensure humanity can reap the benefits of AI while avoiding the pitfalls. The letter states, “We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.”

As with any endeavour, the future of AI is a cocktail of opportunity and risk; in this case untold opportunity combined with profound risk.

The President of the Future of Life Institute, Max Tegmark, sums it up thus:

Everything we love about civilisation is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilisation flourish like never before – as long as we manage to keep the technology beneficial.
