Artificial Intelligence (AI) isn’t magic; it’s just mathematics, albeit hard mathematics. AI has come roaring out of the research laboratories where it was invented and now dominates the Research and Development (R&D) agendas of every big tech company on the planet. From Apple’s virtual assistant Siri to self-driving cars, AI is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI encompasses everything from Google’s search algorithms to the electrocardiogram (ECG) in your watch, right through to autonomous weapons.

 

Artificial intelligence has been around for a long time - the Greek myths contain stories of mechanical men designed to mimic our own behaviour - yet surprisingly, the history of AI is largely a history of failure. That harsh judgement sums up 50 years of trying to get computers to think.

 

The beginning of AI has a clear birthday: the summer of 1956, when a small group of computer science researchers came together at Dartmouth College in New Hampshire. They brought with them their wild beards, thick glasses, sense of humour and, most importantly, their optimistic big minds. They gathered with the explicit goal of programming computers to reason - in other words, to play chess, solve algebra problems or diagnose disease. They reasoned that if they could teach a computer all the discrete parts of human intelligence - representing knowledge about the real world, using language and reasoning logically - then a generalised intelligence would emerge, exhibiting behaviour that wasn’t explicitly programmed, perhaps even emotional intelligence, intuition and creativity.

 

The dream was a fully autonomous, thinking, interacting robot, not unlike C-3PO in Star Wars. Although they didn’t create Robby the Robot or HAL, they were to some degree successful in laying down the fundamental research that has inspired so many to continue their visionary work today.

 

After Dartmouth came a series of boom and bust cycles in which researchers would produce a super compelling demo that attracted attention and funding, but then deliver less than impressive results: a discouraging machine translation effort that failed to capture semantics or meaning, and a talk-therapist program that was easily tripped up. This led to an AI winter (akin to a nuclear winter, in which nothing could grow), and funding would dry up for years.

 

This frustrating cycle - ups and downs, twists and turns, friends and rivals, successes and failures - continued until the late ’80s, before a field that had been so difficult to prove blossomed into the belle of the ball.

 


The next approach was based on a new line of thinking: if we can’t teach computers from the top down, by handing them expert rules, then perhaps they can learn from the bottom up, the way our kids do. This thinking led to the spectacular recent breakthrough called deep learning, which, simply put, is a class of machine learning algorithms loosely modelled on the way a biological nervous system behaves. Many of the earlier techniques were attempts to program computers by figuring out how human experts behave, codifying that behaviour as rules and feeding the rules into a computer, which yielded limited results.

 

By contrast, deep learning feeds the network data and lets it learn to categorise the inputs itself; the computer learns how to classify the data without any guidance from an expert and without any hand-written rules. This is the heart of the revolution: the ‘data up’ approach is the big breakthrough in artificial intelligence.
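
 

To make the ‘data up’ idea concrete, here is a minimal sketch in Python. It uses scikit-learn, a library the article itself doesn’t mention, so treat it as an illustration of the idea rather than anyone’s actual system: a small neural network is shown labelled images of handwritten digits and learns to classify new ones, and at no point does anyone write a rule describing what a ‘7’ looks like.

```python
# A minimal 'data up' sketch: no hand-written rules, just labelled examples.
# Assumes scikit-learn is installed (pip install scikit-learn).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Load 8x8 pixel images of handwritten digits, each labelled 0-9.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A small neural network: we never tell it what any digit looks like.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)  # the network finds its own internal features

print(f"Accuracy on unseen digits: {model.score(X_test, y_test):.2%}")
```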

 

So, how is that useful in everyday life? If you followed a BuzzFeed headline to a BuzzFeed article, that headline was tuned using deep learning. If you’re an Airbnb host, the price the system recommends you list your property at is a product of machine learning. And it won’t be long until your beer can drive itself from the brewery to the store and then to your house in an automated delivery cart.

 

Over the past decade, machine learning has given us an ever-growing list of capabilities: self-driving cars, practical speech recognition, effective web search and a vastly improved understanding of the human genome. Deep learning techniques make it possible to diagnose diseases, spot tumours in x-rays and detect cancer in blood. Speech-to-text algorithms now achieve a word error rate below 4%, lower than the human rate, which is why Siri can understand what you’re saying. Using those algorithms, YouTube has put subtitles on a billion videos without any human involvement, and can even describe sound effects.
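
 

As an aside, the ‘word error rate’ quoted above is a standard metric: the number of word substitutions, deletions and insertions needed to turn a transcript into the correct reference, divided by the number of words in the reference. A small, self-contained Python sketch follows; the example sentences are made up for illustration.

```python
# Word error rate (WER): word-level edit distance divided by reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1   # substitution?
            dp[i][j] = min(dp[i - 1][j] + 1,              # deletion
                           dp[i][j - 1] + 1,              # insertion
                           dp[i - 1][j - 1] + cost)       # match/substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One wrong word out of eight gives a 12.5% word error rate.
print(wer("the quick brown fox jumps over the dog",
          "the quick brown fox jumped over the dog"))  # 0.125
```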

 

We’re in an AI spring, and in the future robots will take over more and more of the jobs currently performed by people. Artificial intelligence expert Kai-Fu Lee writes in his recent book AI Superpowers that as many as 40% of professions will be automated in the next 15 years, yet this change will bring a lot of good. Automation will take over many of the tedious, mind-numbing or dangerous chores that few people enjoy doing, but the transition will be disruptive, with some jobs safer than others. Among the safest professions, Lee says, are creative ones, because no algorithm can replicate human creativity.

 

Robots will also create work: they need programming, mechanical parts, precision processing, dealership networks and maintenance services. And while some experts still guess that human-level AI is centuries away, most AI researchers at the 2015 Puerto Rico conference guessed that it would happen before 2060.

 

But what about the evil conscious robots with weapons and red eyes, I hear you say? Thankfully, that scenario is science fiction and doesn’t worry researchers. AI experts agree that the real problem is competence: making sure that an AI’s goals are aligned with human ones. Can AI learn human values - who wins medals, who goes to jail, and why drinking coffee makes you happy? There is going to be, in some sense, a values industry, and there is a huge economic incentive to get it right.

 

Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes vital that an AI system does what you want it to do when it controls your car, an aeroplane, a pacemaker, an automated trading system or the power grid.

 

If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit - not what you wanted, but literally what you asked for. Safety factors will play a role in how quickly new system efficiencies take off. Futurist Lars Thomsen says, “New technologies, like autonomous aircraft, will only be allowed if these systems are, by at least a factor of 10, safer than if you would have humans at the wheel, but I believe that if you have automated traffic control and object avoidance and a new paradigm of how you are managing the lower air space, then I think autonomous aircraft will be even safer than driving a car or anything else.”

 

Since it may take decades to complete the required AI safety research, it is prudent to start it now. Our civilisation will flourish if we win the race between the growing power of technology and the wisdom with which we manage it.

 

If we get it right, AI will be less like The Terminator’s Skynet and more like Pinocchio’s little guide on his shoulder, his helpful little friend - Jiminy Cricket. Ultimately, I’m convinced that the development of artificial intelligence will be a good thing for humankind. AI can liberate us from the mundane and enable us to focus on more engaging, creative, and fulfilling work. We just need to prepare ourselves and our children for the future… and we must start right now.

 

 
