Thursday, January 10, 2019

Some Issues with Artificial Intelligence

The Real Problems with Artificial Intelligence

Sabine Hossenfelder, in a blog post:

January 9, 2019 -- In recent years many prominent people have expressed worries about artificial intelligence (AI). Elon Musk thinks it’s the “biggest existential threat.” Stephen Hawking said it could “be the worst event in the history of our civilization.” Steve Wozniak believes that AIs will “get rid of the slow humans to run companies more efficiently,” and Bill Gates, too, put himself in “the camp that is concerned about super intelligence.”

In 2015, the Future of Life Institute published an open letter calling for caution and laying out a list of research priorities. It was signed by more than 8,000 people.

Such worries are not unfounded. Artificial intelligence, like any new technology, brings risks. While we are far from creating machines even remotely as intelligent as humans, it’s only smart to think about how to handle them sooner rather than later.

However, these worries neglect the more immediate problems that AI will bring.

Artificially intelligent machines won’t get rid of humans any time soon because they’ll need us for quite a while yet. The human brain may not be the best thinking apparatus, but it has a distinct advantage over all machines we have built so far: It functions for decades. It’s robust. It repairs itself.

A few million years of evolution optimized our bodies, and while the result could certainly be further improved (damn those knees), it’s still more durable than any silicon-based thinking apparatus we have created. Some AI researchers have even argued that a body of some kind is necessary to reach human-level intelligence, which – if correct – would vastly increase the problem of AI fragility.

Whenever I bring up this issue with AI enthusiasts, they tell me that AIs will learn to repair themselves, and even if not, they will just upload themselves to another platform. Indeed, much of the perceived AI threat comes from the expectation that AIs will replicate quickly and easily while being basically immortal. I don’t think that’s how it will go.

Artificial intelligences will at first be few and one-of-a-kind, and that’s how it will remain for a long time. It will take large groups of people and many years to build and train an AI. Copying them will not be any easier than copying a human brain. They’ll be difficult to fix once broken, because, as with the human brain, we won’t be able to separate their hardware from the software. The early ones will die quickly for reasons we will not even comprehend.

We see the beginning of this trend already. Your computer isn’t like my computer. Even if you have the same model, even if you run the same software, they’re not the same. Trackers exploit these differences between computers to follow your internet activity. Canvas fingerprinting, for example, is a method in which a website asks your browser to render some text and then reads back the resulting image. The exact way your computer performs this task depends on both your hardware and your software, so the output can be used to identify a device.
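To make the mechanism concrete, here is a minimal sketch of how a canvas fingerprint can be computed in the browser. The drawn text, colors, and the hashing step are illustrative choices of my own, not taken from any particular tracking script.

```typescript
// Minimal canvas-fingerprinting sketch (illustrative, not from any real tracker).
// Rendering details such as anti-aliasing, font hinting, and GPU/driver quirks
// differ slightly between machines, so the rendered pixels act as an identifier.
async function canvasFingerprint(): Promise<string> {
  const canvas = document.createElement("canvas");
  canvas.width = 240;
  canvas.height = 60;
  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("2D canvas not supported");

  // Draw some text and shapes to an off-screen canvas.
  ctx.textBaseline = "top";
  ctx.font = "16px Arial";
  ctx.fillStyle = "#f60";
  ctx.fillRect(0, 0, 240, 60);
  ctx.fillStyle = "#069";
  ctx.fillText("example fingerprint text", 2, 2);

  // Read the rendered image back and hash it; the hash serves as a
  // (partial) device identifier because the pixel data varies per machine.
  const dataUrl = canvas.toDataURL();
  const bytes = new TextEncoder().encode(dataUrl);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Usage: canvasFingerprint().then((id) => console.log("canvas hash:", id));
```

In practice such a hash is combined with other signals (screen size, installed fonts, time zone) to narrow a visitor down further, but the canvas step alone already illustrates how hardware and software differences leak into observable output.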

Presently, you do not notice these subtle differences between computers all that much (except possibly when you spend hours browsing help forums thinking “someone must have had this problem before” and turn up nothing). But the more complex computers get, the more obvious the differences will become. One day, they will be individuals with irreproducible quirks and bugs – like you and me.

So we have AI fragility plus the trend for increasingly complex hardware and software to become unique. Now extrapolate this some decades into the future. We will have a few large companies, governments, and maybe some billionaires who will be able to afford their own AI. Those AIs will be delicate and need constant attention from a crew of dedicated humans.

This brings up various immediate problems:

  1. Who gets to ask questions and what questions?
This may not be up for discussion for privately owned AIs, but what about those produced by scientists or bought by governments? Does everyone get the right to one question per month? Do difficult questions have to be approved by parliament? Who is in charge?

  2. How do you know that you are dealing with an AI?
The moment you start relying on AIs, there’s a risk that humans will use them to push an agenda by passing off their own opinions as the AI’s. This problem will occur well before AIs are intelligent enough to develop their own goals.

  3. How can you tell that an AI is any good at giving answers?
If you only have a few AIs, each trained for entirely different purposes, it may not be possible to reproduce any of their results. So how do you know you can trust them? It could be a good idea to require that all AIs share a common area of expertise that can be used to compare their performance.

  4. How do you prevent limited access to AI from increasing inequality, both within and between nations?
Having an AI to answer difficult questions can be a great advantage, but left to market forces alone it’s likely to make the rich richer and leave the poor even further behind. If this is not something that we want – and I certainly don’t – we should think about how to deal with it.
