Thursday, June 23, 2016

Will machine intelligence threaten life, liberty and the pursuit of happiness?

This post has nothing to do with the influence of political party machines on current election campaigns.

As some readers will already know, Nick Bostrom’s book, Superintelligence, discusses the challenge presented by the potential for machine brains to surpass human brains in general intelligence. Bostrom does not claim that this is imminent, but he suggests it is somewhat likely to happen sometime this century. Once AI has surpassed human intelligence, he fears, an initial superintelligence might soon obtain a decisive strategic advantage and pose a threat to human life. There have been several good reviews of Superintelligence, including one by Ronald Bailey in Reason.

How could a machine programmed by humans come to threaten human life? Some examples mentioned by Bostrom suggest that it could happen quite easily by accident. For example, a machine given the simple objective of maximizing the production of paperclips could seek to acquire an unlimited amount of physical resources and to eliminate potential threats, including humans who might try to prevent it from achieving its goal.

People like Bill Gates and Elon Musk, who can hardly be viewed as technophobes, argue that the threats posed by superintelligence should be taken seriously. That didn’t stop me asking myself why anyone in their right mind would program a machine to maximize the number of paperclips. Any sensible businessman would ensure that the machine was programmed with a profit-making objective rather than a production objective. I have to acknowledge, however, that this would still leave the problem of ensuring that the superintelligence does not use unethical means to eliminate competitors.
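To make that contrast concrete, here is a toy sketch of my own in Python (nothing like this appears in the book, and all the names and numbers are made up). A pure production objective rewards raw output without limit, whereas a profit objective at least makes resource use costly:

    def production_objective(paperclips_made):
        # "Maximize paperclips": utility grows without bound, so turning
        # ever more resources into paperclips always looks like progress.
        return paperclips_made

    def profit_objective(paperclips_made, demand=1_000_000,
                         price=0.05, unit_cost=0.04):
        # "Maximize profit": every unit made costs something, but only
        # units the market absorbs earn revenue, so overproduction hurts.
        sold = min(paperclips_made, demand)
        return sold * price - paperclips_made * unit_cost

Under the profit objective the agent is best off producing exactly what the market will absorb; under the production objective there is no such stopping point.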

There is also the problem that some of the people developing AI might be crazy, or antipathetic towards humans. For example, it does not seem beyond the bounds of possibility that a group of extreme Greenies might seek to develop a superintelligence that would pursue the selfless goal of restoring the natural environment to its condition prior to the Anthropocene.

Most of Superintelligence is devoted to a discussion of the difficulty of designing a superintelligence that would not be a threat to humans. While reading the book I felt at times that I was reading about the problems of designing a god – an enormously powerful entity that would govern our lives. For example, if the AI is given the seemingly innocuous goal of making us all happy, it might arrange for us to have electrodes implanted in the pleasure centres of our brains, or perhaps even upload our minds to computers and then administer the digital equivalent of a drug to make us ecstatically happy all the time.

At other times I felt the problems being discussed were more like those involved in establishing the characteristics of a good society. Bostrom seems to favour giving the AI a goal such as maximizing our coherent extrapolated volition (CEV). As I understand it, the CEV concept implies that if we knew more and thought faster, our individual views about the nature of a good society would converge, so that a consensus could be discovered. The author explains that the CEV approach does not require all ways of life, moral codes, or personal values to be blended together into a stew. The CEV dynamic “is only supposed to act when our wishes cohere”.
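To help myself picture that, here is a toy sketch of my own in Python (my reading of the idea, certainly not Bostrom’s formal proposal): tally each person’s extrapolated first preference and act only where a strong consensus exists.

    from collections import Counter

    def cev_choice(top_preferences, threshold=0.9):
        # top_preferences: each person's extrapolated first preference.
        option, votes = Counter(top_preferences).most_common(1)[0]
        if votes / len(top_preferences) >= threshold:
            return option   # wishes cohere: act on the consensus
        return None         # no consensus: the dynamic stays silent

    cev_choice(["peace"] * 95 + ["conflict"] * 5)     # -> "peace"
    cev_choice(["liberty"] * 50 + ["equality"] * 50)  # -> None

On near-unanimous wishes the dynamic acts; on evenly split wishes it stays silent.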

The CEV concept has some appeal to me because it seems consistent with my own efforts to describe the characteristics of a good society in the most popular post on this blog. However, it does not require superintelligence to identify those characteristics. It would not be difficult to establish through existing survey methods that the vast majority of humans want to live in peace, to have opportunities to live happy lives, and to have some degree of security against misfortune. The problem lies in ensuring that a superintelligence would interpret such objectives in a manner consistent with individual human flourishing.

The main reservation I have about Superintelligence is that it does not contain much discussion of defence against malevolent AI. As I see it, collaborative efforts to avoid the accidental development of machine intelligence in ways that might not be benign are probably worthwhile. But such efforts are unlikely to prevent AI from being used unethically by people with nefarious objectives. Our defences against cyber-attack will need to be strengthened to protect against malevolent AI.


We need a Superintelligence dedicated to defending our individual rights. But we should be careful what we wish for! Once upon a time, a few centuries ago, some enlightened people set about establishing forms of government dedicated to protecting life, liberty and the pursuit of happiness. We ended up with warfare/welfare states.
