Why Artificial Intelligence Will Change Everything

Strategy

I am sitting in a New York apartment, looking out across the Manhattan skyline. The sweeping view is impressive, the apartment trendy. Floor-to-ceiling windows, minimalist furniture and some uber-cool high-tech gear create a futuristic ambience. 

In front of the couch is a low-slung coffee table. With a laser pointer I click on the glass top, transforming it into two-dimensional tiles, each representing a stockmarket industry sector.

When I click on “Tech”, a 3D hologram appears above the table showing the key stocks that are rising, illustrated with green ascending columns. Those that have fallen are shown as descending red bars. Pointing at one column reveals further detail about the selected company’s 12-month performance history.

Welcome to 2017. I could get used to living this way.

But the reality is, this is not reality at all. I was immersed in virtual reality (VR) and for half an hour New York was my world.

From there I travelled across the globe to the Zaatari Refugee Camp in Jordan, home to 130,000 Syrians fleeing violence and war. Children make up half the camp's population, and I meet Sidra, a delightful 12-year-old girl who acts as my guide. Standing in the desert, surrounded by razor wire, I watch children play happily and have a fleeting but all too real sense of what it might be like to be so incarcerated.

I will reflect on that desert experience when I read my next news story of refugee camps. That story will be a two-dimensional delivery of information into my brain. My brief time in Zaatari will stay with me not as content, but as a memory of somewhere I existed within. This movie was more than 3D, more than a virtual form of reality.

The development of virtual reality

Clouds Over Sidra so vividly portrays refugee camp life that it is set to become a classic VR film. Being literally in the movie, I wrote my own script: I could look 360 degrees around me, at any individual or group, at any time. Directors Gabo Arora and Chris Milk of immersive story studio Within had no control over where or when I looked around the camp. This immersive storytelling is an entirely new experience for film producers.

It is certainly a new experience for us as consumers. As Chris Milk, co-director of the film and co-founder of Within, says: “Suddenly the challenge is no longer suspending our disbelief, but remembering that what we’re experiencing isn’t real.”

He adds: “I believe that virtual reality marks an inflection point. That this is much more than the latest gizmo or fad, but rather the genesis of a fundamentally new technology platform, one that will change how we communicate, connect and tell stories. VR is just its first manifestation.”1

Milk notes that the last communications shift of similar magnitude to VR was in the 19th century with the invention of technologies that enabled the recording and broadcasting of moving pictures and sound. From this emanated “radio, cinema, television, the telephone, the internet — spawning a multitude of new industries, media formats and storytelling languages in the process”.

My view is that we are at an inflection point of not just communications, but of technology generally. This fundamental pivot is being driven principally by developments in artificial intelligence (AI). 

The development of artificial intelligence

The first discussion of AI was at Dartmouth College in 1956, and the next 20 years saw significant research investment and some progress. However, lack of computing power, inadequate data and the inability of machines to learn over time severely limited AI capabilities.

With progress stalling, and perhaps as a result of the unfulfilled promises, funding was substantially scaled back.

When I joined the IT industry in the early 1980s there was a resurgent boom in AI through the new field of expert and knowledge-based systems, principally led by the Japanese with their Fifth Generation Computer Systems project.

However, by 1990 the ambitious goals set by the project, such as being able to carry on a casual conversation, had not been met.

Nonetheless, the industry did progress. In 1997 IBM’s Deep Blue computer beat reigning world chess champion Garry Kasparov. Deep Blue evaluated 200 million positions per second, more than twice as many as the earlier version that had lost the first match in 1996. Success followed success, and in 2005 a Stanford robot drove autonomously for 210 kilometres across an unknown desert track. Six years later, in 2011, IBM’s question-answering system, Watson, beat two all-time great Jeopardy! champions.

These successes were mainly due to the substantial performance improvements in computing power that were available to process massive data sets.

But computing power alone would not deliver on the promise of true artificial intelligence. In what is known as Moravec’s paradox, solving mathematical problems and proving theorems is relatively straightforward for computers, yet recognising a face or crossing a room without bumping into furniture is extraordinarily difficult. It turns out both brawn and brain are required.

A significant breakthrough came in 1988 with the publication of Judea Pearl’s seminal text on probabilistic reasoning, which introduced probability and decision theory into AI. Alongside this work, researchers advanced neural networks, large collections of neural units that solve problems in a way loosely analogous to the human brain, and evolutionary algorithms that model biological processes such as reproduction, mutation, recombination and selection.
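To make that selection-and-mutation loop concrete, here is a minimal sketch of an evolutionary algorithm in Python. The names, the toy target and the parameters are all my own for illustration; it is not drawn from any particular library or from Pearl’s work.

```python
import random

# Toy evolutionary algorithm: evolve lists of digits towards a target.
# The target and all parameters are illustrative only.
TARGET = [3, 1, 4, 1, 5, 9, 2, 6]

def fitness(candidate):
    # Higher is better: negative total distance from the target.
    return -sum(abs(c - t) for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.2):
    # Mutation: randomly nudge some genes up or down.
    return [g + random.choice([-1, 1]) if random.random() < rate else g
            for g in candidate]

def crossover(a, b):
    # Recombination: splice two parents at a random point.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

# Start from a random population of candidate solutions.
population = [[random.randint(0, 9) for _ in TARGET] for _ in range(50)]

for generation in range(200):
    # Selection: keep the fittest half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    # Reproduction: refill the population with mutated offspring.
    offspring = [mutate(crossover(random.choice(survivors),
                                  random.choice(survivors)))
                 for _ in range(25)]
    population = survivors + offspring

print(population[0], fitness(population[0]))
```

After a couple of hundred generations the fittest candidate typically matches the target, purely through repeated rounds of selection, recombination and mutation.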

These new algorithms, coupled with exponential growth in computing power, enabled machine learning and deep learning to deliver practical AI applications. Deep learning passes data through multiple layers of artificial neural networks, an approach that has cracked previously intractable challenges such as facial recognition, facial expression recognition and speech recognition.
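As an illustration of what “multiple layers” means in practice, here is a minimal forward pass through a two-layer network in Python. The weights are random placeholders (a real network learns them from data), and the shapes and labels are assumptions of mine, not from any production system.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common non-linearity: keep positive signals, zero out the rest.
    return np.maximum(0, x)

x = rng.normal(size=4)        # input features, e.g. pixel intensities
W1 = rng.normal(size=(8, 4))  # layer 1: 4 inputs -> 8 hidden units
W2 = rng.normal(size=(2, 8))  # layer 2: 8 hidden units -> 2 outputs

hidden = relu(W1 @ x)         # each layer: weighted sum plus non-linearity
output = W2 @ hidden          # e.g. scores for "face" vs "not a face"
print(output)
```

Stacking many such layers, with weights learned from millions of examples rather than drawn at random, is what lets deep networks recognise faces and speech.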

The power of deep learning is why we don’t have to teach Apple’s Siri to understand our voice, unlike earlier generations of speech recognition software. Readers who experienced the frustrations of early versions of Dragon NaturallySpeaking speech-to-text software will appreciate how far the technology has progressed.2 Later releases are far more adept at understanding natural language, and Google Assistant, on my new Google Pixel phone, is extraordinarily accurate even amidst background noise. Assistant remembers my previous questions and its answers, and builds on this knowledge in subsequent responses.

The decision-making algorithms, coupled with the substantial power of today’s smartphones and massive cloud computing capacity, are behind the plethora of useful apps that employ artificial intelligence and virtual reality. This software is beginning to blur the lines between what is human and what is technology. Such blurring will continue as computing power accelerates at exponential rates.

Developments in computing power

Google’s Chief Futurist, Ray Kurzweil, whom I quoted in a recent blog, proposes the Law of Accelerating Returns to describe how technology, along with other evolutionary systems, tends to increase exponentially. Kurzweil predicts that paradigm shifts will become increasingly common, leading to “technological change so rapid and profound it represents a rupture in the fabric of human history”.3

Thus, Kurzweil says, “we will not experience 100 years of progress in the 21st century; it will be more like 20,000 years of progress (at today's rate)”. The chart4 below shows the exponential growth in computing power.

The dramatic slope of the curve illustrates Kurzweil’s prediction that within the next few years we will be approaching the processing capacity of the human brain.
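As a back-of-the-envelope check on that figure (my own arithmetic, not Kurzweil’s), assume the rate of progress doubles every decade, starting from one “year of progress” per calendar year today:

```python
# Sum a century of progress if the annual rate doubles every decade.
total = sum(2 ** (year / 10) for year in range(100))
print(f"{total:,.0f} years of progress at today's rate")  # ~14,000
```

Even that simple assumption yields roughly 14,000 years of progress, the same order of magnitude as Kurzweil’s 20,000-year estimate, which rests on a somewhat faster acceleration.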

Quantum leaps

Even before this point is reached, and to a far greater extent than at any previous time, we are increasingly employing and enjoying the benefits bestowed by these quantum leaps in computing capabilities. A simple measure of this advanced technology’s ubiquity is the number of smartphone users in the world, which was forecast to surpass two billion in 2016.5 Exponentially increasing AI capabilities are already in the hands of many of these users. So is advanced virtual reality, with accessible and comfortable VR headsets now available at just over $100.

The New York Times reported late last year that interest in AI had reached a “frenzy”6 with Amazon, Google, Microsoft, IBM and Apple each investing billions of dollars. Two-thirds of IBM’s AI effort is in healthcare, and at the University of North Carolina the Watson system has been trained to read medical literature. 

To prepare for 1,000 cancer diagnoses, it read 25 million published medical papers in approximately one week and scanned the web for the latest scientific research. In 99 per cent of the 1,000 cases analysed, Watson recommended the same treatment as the oncologists. In 30 per cent of cases, Watson’s research uncovered further treatment options that the medicos had not considered.

This progress in computing power and the application of advanced algorithms has produced greater AI advances in the past five years than in the previous 50. It is propelling medicine, education, military technology and business.

AI is now a reality, but we are only just beginning. After 50 years of preparation, we are now at a pivot point that sees technology interfacing with human intelligence in the same way our very consciousness interacts with the world.

Such a fundamental pivot is too great an opportunity to let pass.

AI’s potential holds profound implications for civilisation in the long run. In coming blogs, I will briefly address some of the existential concerns, before returning to how we may best exploit these exponential developments in technology to counter our cognitive limitations and biases to improve our strategic decision making.

For now, I’ll leave you to find your own reality, virtual or strategic. I’m buckling in to take off in the virtual “slot” position of an F/A-18 Hornet jet with the Blue Angels.

  • John Barrington is a leading strategy and governance expert, and founder of Barrington Consulting Group. For more information, email john.barrington@barrington.com.au
 

[2] For a humorous account of Siri’s limitations, see Ros Thomas on hitting the Panic Button

[4] Courtesy of Ray Kurzweil and Kurzweil Technologies, Inc., en:PPTExponentialGrowthofComputing.jpg, CC BY 1.0, https://commons.wikimedia.org/w/index.php?curid=3324354

[6] Steve Lohr (October 17, 2016), "IBM Is Counting on Its Bet on Watson, and Paying Big Money for It", New York Times.
