“I’m sorry, Dave. I’m afraid I can’t do that.” Ever since the release of Stanley Kubrick’s film “2001: A Space Odyssey” more than 50 years ago, the sentient supercomputer HAL 9000 has embodied the fear of a superintelligent computer that frees itself from human oversight in order to complete its mission. In the classic film, HAL was depicted as growing increasingly unstable. But today, powerful artificial intelligence is no longer a futuristic concept, and the idea that a superintelligent computer might wrest control from humans in order to fulfill its tasks with the highest possible efficiency, as judged by its superhuman “mind”, is beginning to look less like science fiction and more like a plausible and dangerous scenario.
In a research paper published in the Journal of Artificial Intelligence Research in January 2021, the authors show that recent advances in artificial intelligence (AI) have brought us closer to the scenario of a hypothetical machine intelligence that far surpasses human capabilities. Not only is AI becoming pervasive and selectively shaping our outlook on the world; it is also growing steadily more powerful. This, the authors argue, creates potentially catastrophic risks for all of humanity.
Already today, AI is used in more applications than many people realize, from traffic control and autonomous hunter-killer drones to policymaking. However, superintelligent computers pose an even greater risk, according to the authors, because “ethics engineering” alone is insufficient to ensure control of such powerful systems: “A superintelligence poses a fundamentally different problem than those typically studied under the banner of ‘robot ethics’. This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilizing a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable,” the authors note.
One way of controlling an intelligence that far exceeds our ability to comprehend it would be to isolate the AI, eliminating any possibility of communication or restricting it to “yes/no” answers. However, this would defeat the purpose of developing such a powerful artificial brain, which researchers hope could help humankind solve problems such as climate change, poverty, or incurable diseases. An AI limited to binary responses would be of little or no use. But unleashing a superintelligent AI that can take in the full state of the world as its input carries the risk of catastrophic outcomes that could put the survival of our species itself in jeopardy.
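The impossibility result the paper draws on comes from computability theory: any total procedure that claims to predict a program’s behavior can be defied by a program that consults the predictor and does the opposite. As a hedged illustration only (this is not the authors’ own formalism; the names `contradicts`, `oracle_is_wrong`, and `spite` are invented for this sketch, and the infinite loop is simulated by an exception), the self-referential trick can be demonstrated in a few lines of Python:

```python
def contradicts(claimed_halts):
    """Build a program that does the opposite of whatever the
    claimed halting oracle predicts about it (diagonalization)."""
    def spite():
        if claimed_halts(spite):           # oracle predicts: "spite halts"
            raise RuntimeError("loop")     # ...so loop forever (simulated)
        return "halted"                    # oracle predicts "loops" -> halt
    return spite

def oracle_is_wrong(claimed_halts):
    """True if the oracle mispredicts its own nemesis program."""
    spite = contradicts(claimed_halts)
    prediction = claimed_halts(spite)      # does the oracle say spite halts?
    try:
        spite()
        actually_halted = True
    except RuntimeError:
        actually_halted = False
    return prediction != actually_halted   # the prediction never matches
```

Whatever answer a claimed oracle gives about `spite`, the program’s actual behavior refutes it. The paper lifts this style of argument from halting to “harming humans”, concluding that a perfect containment check for a superintelligence is undecidable in principle.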
At Supertrends, we are co-creating the future – and you can be a part of it
Powerful algorithms are shaping our lives in more ways than many people realize. When, if ever, will highly advanced AI reach a level of superintelligence that can no longer be contained by humans? Visit the Supertrends App to share your own predictions on milestones like this and many more. Not an App user yet? Visit the Supertrends Pro page to learn about your benefits and request a free trial!
© 2021 Supertrends
Alfonseca, M., et al. (2021). Superintelligence Cannot be Contained: Lessons from Computability Theory. Journal of Artificial Intelligence Research, 70, 65–76. https://jair.org/index.php/jair/article/view/12202/26642