
Artificial Intelligence is just around the corner. Of course, it’s been just around the corner for decades, but that’s partly down to our own tendency to move the goalposts about what ‘intelligence’ is. Once, playing chess was one of the smartest things you could do. Now that a computer can easily beat a grandmaster, we’ve reclassified chess as standard computation, not something requiring proper thinking skills.

With the rise of deep learning and the proliferation of machine learning analytics, we edge ever closer to the moment when a computer system will be able to accomplish anything and everything better than a human can. So should we start worrying about SkyNet? Yes and no.

Rule of the Human Overlords

Early use of artificial intelligence will probably look a lot like how we use machine learning today. We’ll see ‘AI-empowered humans’ acting as Human Overlords to their robot servants. These AIs are smart enough to come up with the ‘best options’ for addressing human problems, but haven’t been given the capability to execute them. Think about Google Maps – there, an extremely ‘intelligent’ program works out the quickest route to get you from point A to point B. But it doesn’t force you to take it – you get to decide which of the options offered best suits your needs. Working alongside the first AIs will likely look much the same.
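To make that pattern concrete, here’s a minimal sketch of ‘suggest, but don’t execute’ in Python. Everything here is hypothetical and illustrative – these aren’t real Google Maps functions – the point is simply that the AI ranks options while only a person can commit to one:

```python
# A hypothetical "Human Overlord" loop: the AI proposes and ranks
# options, but only the human can actually act on one.
from dataclasses import dataclass


@dataclass
class Route:
    name: str
    minutes: int


def ai_suggest_routes(origin: str, destination: str) -> list[Route]:
    """Stand-in for the 'intelligent' part: generate and rank candidates."""
    candidates = [
        Route("Motorway", 25),
        Route("Back streets", 32),
        Route("Scenic coast road", 48),
    ]
    return sorted(candidates, key=lambda r: r.minutes)


def human_chooses(options: list[Route]) -> Route:
    """The human stays in the loop: the AI never executes on its own."""
    for i, route in enumerate(options):
        print(f"{i}: {route.name} ({route.minutes} min)")
    pick = int(input("Which route? "))
    return options[pick]


chosen = human_chooses(ai_suggest_routes("A", "B"))
print(f"Driving via {chosen.name}")
```

The design choice worth noticing is the seam between the two functions: the moment we wire `ai_suggest_routes` straight into an actuator and delete `human_chooses`, the Overlord has quietly abdicated.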

Rise of the Driverless Car

The problem is that we are almost certainly going to see the power of AI increase exponentially – and any human greenlighting will become an increasingly inefficient part of the system. In much the same way that we’ll let the Google Maps AI start making decisions for us when we let it drive our driverless cars, we’ll likely hand more and more of our decisions over to AI to take responsibility for.

Super-smart AI will also likely comprehend things that humans simply can’t. The mass of data it has analysed will be beyond any one human’s ability to judge effectively. Even today, financial algorithms make instantaneous decisions about the stock market, with humans just clicking ‘yes’ because the computer knows best. And we’ve already seen electronic trading glitches throw markets into chaos – the 2010 ‘flash crash’ was only six years ago! Just how much responsibility might we end up turning over to smart machines?

The Need to Solve Ethics

If we’ve given an AI the power to make decisions for us, we’ll want to ensure it has our best interests at heart, right? It’s vital to program some sort of ethical system into our AI – the problem is, humans aren’t very good at deciding what is and isn’t ethical! Take a simple and seemingly universal rule like ‘don’t kill people’. Now think about all the ways we disagree about when it’s okay to break that rule – in self-defence, in executing dangerous criminals, to end suffering, in combat. Imagine trying to code all of that into an AI, covering every moral variation. It might well be beyond human capacity.
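To see why, here’s a toy sketch – purely illustrative, not a serious proposal for machine ethics – of what enumerating the exceptions starts to look like. Every condition name is made up, and every branch immediately raises questions the code can’t answer:

```python
# A toy rule-based encoding of 'don't kill people'. The 'simple' rule
# sprouts exceptions, and each exception sprouts exceptions of its own.
def killing_permitted(context: dict) -> bool:
    """Naive attempt to hard-code one moral rule and its exceptions."""
    if context.get("self_defence"):
        return True  # ...but proportionate to the threat? Judged by whom?
    if context.get("lawful_execution"):
        return True  # ...except in jurisdictions that abolished it
    if context.get("ending_suffering"):
        return True  # ...with consent? Whose? Certified how?
    if context.get("armed_combat"):
        return True  # ...but not civilians, prisoners, the surrendering...
    return False  # and this is one rule, in one culture, at one moment


print(killing_permitted({"self_defence": True}))  # True -- but always?
```

Each comment marks a branch where reasonable people disagree, and the disagreements multiply faster than the branches do. That combinatorial explosion, not the typing, is the hard part.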

And as for right and wrong, well, we’ve had thousands of years of debate about that and we still can’t agree exactly what is and isn’t ethical. So how can we hope to program a morality system we’d be happy to give to an increasingly powerful AI?

Avoiding SkyNet

It may seem a little ridiculous to start worrying about the existential threat of AI when your machine learning algorithms keep bugging out on you constantly. And certainly, the possibilities offered by AI are amazing – more intelligence means faster, cheaper, and more effective solutions to humanity’s problems. So despite the risk of being outpaced by alien machine minds that have no concept of our human value system, we must always balance that risk against the amazing potential rewards. Perhaps what’s most important is simply not to be blasé about what ‘super-intelligent’ really means for AI.

And frankly, I can’t remember how I lived before Google Maps.
