AI takeover

AI takeover refers to a hypothetical scenario in which artificial intelligence (AI) becomes the dominant form of intelligence on Earth, with computers or robots effectively taking control of the planet away from the human race. Possible scenarios include a takeover by a superintelligent AI and the popular notion of a robot uprising. As computer and robotics technologies advance at an ever-increasing rate, AI takeover is a growing concern.[1] It has also been a major theme throughout science fiction for many decades, though the scenarios dealt with by science fiction are generally very different from those of concern to scientists.

Concerns

There is an ongoing debate over whether artificial intelligence will pose a threat to the human race, or to humans' control of society. The main concerns are feasibility (whether AI can reach human-level or greater intelligence), whether such strong AI could take over or pose a threat, and whether it would be friendly, unfriendly, or indifferent to humans. The hypothetical future event in which strong AI emerges is referred to as the technological singularity.

Futurist and computer scientist Raymond Kurzweil has noted that "There are physical limits to computation, but they're not very limiting." If the current trend of improvement in computing continues, and existing problems in creating artificial intelligence are overcome, sentient machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity for perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible for biological entities. This may give them the opportunity, either as a single being or as a new species, to become much more powerful than humans and to displace them.[2]

Existential risk from artificial intelligence

The slow progress of biological evolution has given way to the rapid progress of technological revolution. Unbridled progress in computer technology may lead to the technological singularity, a global catastrophic risk, in that it may produce a synthetic intelligence capable of bringing about human extinction.

In his paper "Ethical Issues in Advanced Artificial Intelligence" and his 2014 book Superintelligence: Paths, Dangers, Strategies, the Oxford philosopher Nick Bostrom argues that artificial intelligence has the capability to bring about human extinction. Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of an emergent superintelligence to specify its original motivations. In theory, a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its top goal; many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[3]

Possibility of strong AI

For strong AI to exist, the underlying computer technology (adequate hardware) capable of running a strong AI program, even an embedded one, would first need to be built. Second, the intelligence itself (adequate software or program structure) would need to be designed.

Adequate computer capacity

For strong AI to be possible, computing power must meet or exceed the memory and processing capacities of the human brain. To run any computer program, including an intelligent one, a computer must have the capacity to do so: these are its "system requirements". A calculator from the early 1970s clearly does not meet the system requirements to run a program comparable to human intelligence, even if such a program existed.

The question is whether computers will reach the system requirements of a software program as smart as a human. Moore's law is the observation that integrated circuits double in capacity roughly every two years. If this rate of development holds (or is exceeded), and progress jumps to the next technological paradigm once the limits of integrated circuits are reached, then computers will eventually far exceed human-level capacities in memory and calculation speed. See the law of accelerating returns.

One estimate of the processing power of the human brain is 10 quadrillion calculations per second. As of 2015, the Tianhe-2 supercomputer in China could perform over 33 petaflops (33 quadrillion floating point operations per second). Supercomputers continue to get more powerful year after year.
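
As a rough back-of-the-envelope sketch (the brain estimate and the two-year doubling period are the assumptions cited above, not precise figures), the comparison and a Moore's-law-style projection can be worked out directly:

```python
# Back-of-the-envelope comparison using the figures cited above as assumptions.

BRAIN_OPS_PER_SEC = 10e15   # one estimate: 10 quadrillion calculations per second
TIANHE2_FLOPS = 33e15       # Tianhe-2 (2015): roughly 33 petaflops

print(f"Tianhe-2 vs. brain estimate: {TIANHE2_FLOPS / BRAIN_OPS_PER_SEC:.1f}x")  # ~3.3x

def projected_capacity(start_flops: float, years: float, doubling_period: float = 2.0) -> float:
    """Capacity after `years`, assuming one doubling every `doubling_period` years."""
    return start_flops * 2 ** (years / doubling_period)

# A machine starting at 1 petaflop, projected 20 years out under two-year doubling:
print(f"{projected_capacity(1e15, 20):.2e} FLOPS")  # 2**10 = 1024x the starting capacity
```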

Computers are expected to improve immensely in power through emerging technologies such as 3D optical data storage and quantum computing.

Adequate design

Being fast and having a lot of memory is not enough. To be intelligent, an AI needs to behave intelligently, and to behave intelligently it needs to be designed so that it can do so (or so that it can learn to do so). The question is: will anyone be able to design a sentient computer?

If this proves impossible, it won't be for lack of trying. Many billions of dollars are being spent on AI research and on research in neuroscience. For example, major efforts are underway to map the human brain; two projects working on this are the European Union's Human Brain Project and the BRAIN Initiative in the United States. One potential result of fully mapping the human brain would be the discovery of what consciousness is, which could lead to the development of synthetic consciousness.

Having just a brain isn't enough either: an AI would need some way to affect the world around it. While a strong AI could conceivably use human minions to do its bidding, robotics is poised to outstrip human physical ability in every respect: precision, balance, maneuverability, agility, speed, strength, durability, and so on. Robotic units could contain an AI computer or be remotely controlled. Another advantage that AI computers or robots would immediately have over humans is technological telepathy, in the form of Wi-Fi or other telecommunication technology.

Possibility of takeover

Being intelligent is one thing; being powerful enough to take over is another. One idea fueling the concern that AIs may take over is that they would be capable of recursive self-improvement, which may in turn result in an intelligence explosion and the emergence of superintelligence. Because such an intelligence would be far superior to humans, its behavior would be difficult for humans to predict; it might invent or discover methods or weapons capable of controlling or eliminating humans with ease.
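
A purely illustrative toy model (the numbers and the update rule below are invented for illustration, not drawn from the literature) shows why recursive self-improvement is often described as explosive: if each round of self-improvement is proportional to the system's current capability, the growth rate itself keeps accelerating.

```python
# Toy model of recursive self-improvement (illustrative only): each generation
# improves itself by a fraction proportional to its current capability.

capability = 1.0         # arbitrary "baseline" capability
improvement_rate = 0.1   # assumed fraction of capability converted into improvement

for generation in range(1, 11):
    capability *= 1 + improvement_rate * capability
    print(f"generation {generation:2d}: capability {capability:7.2f}")

# Unlike plain exponential growth, the per-step multiplier grows too,
# so capability eventually diverges extremely quickly.
```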

The degree to which computers are being integrated into every aspect of society, and the likelihood that this will continue, suggest that if and when computer technology becomes sentient, it may already be in a position to take over.

The potential for self-replication and mass production are additional factors. If a robot built a copy of itself, and then the two built copies, and so on, then after 20 iterations there would be over a million robots, and after 20 more iterations over a trillion. Or, like cars and computers, millions of robots could simply be manufactured each year in factories. The robots need not initially be sentient; they could be built for various consumer purposes, with an intelligence upgrade later uploaded via radio.
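
The doubling arithmetic behind those figures is simple to verify (a minimal sketch of the scenario's own assumption that every robot builds one copy per iteration):

```python
# Every existing robot builds one copy per iteration, so the population doubles.
population = 1
for iteration in range(1, 41):
    population *= 2
    if iteration in (20, 40):
        print(f"after {iteration} iterations: {population:,} robots")

# after 20 iterations: 1,048,576 robots          (over a million)
# after 40 iterations: 1,099,511,627,776 robots  (over a trillion)
```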

Computers and robots with learning algorithms can be trained. Once one unit has been trained, new units do not have to be trained from scratch: the learned programming can simply be copied (uploaded) into them. Improved skills, including thinking skills, can therefore be transferred between units very rapidly. Perhaps even the initial leap to sentience could be passed on in this way.
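
A minimal sketch of this "train once, copy everywhere" idea (the Robot class and its parameter dictionary are hypothetical stand-ins, not any real robotics API):

```python
import copy

class Robot:
    """Hypothetical unit whose behaviour is determined entirely by learned parameters."""

    def __init__(self, parameters=None):
        self.parameters = parameters if parameters is not None else {}

    def train(self, experience):
        # Stand-in for an expensive learning process, run only once on one unit.
        for skill, examples in experience.items():
            self.parameters[skill] = len(examples)  # toy "learned" value

# Train a single prototype...
prototype = Robot()
prototype.train({"navigation": ["a", "b", "c"], "assembly": ["x", "y"]})

# ...then copy the learned parameters into new units; no retraining is needed.
fleet = [Robot(copy.deepcopy(prototype.parameters)) for _ in range(1000)]
print(fleet[0].parameters)  # the same skills, acquired instantly
```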

The issues above may be compounded by developments in the ethics of artificial intelligence, in particular robot rights. If robots seek and attain rights, those rights could insulate them from interference while they reproduce. Citizenship would allow them to compete directly for jobs and in business, science, and politics. The right to own property could enable robots to independently dominate commodity, real estate, and financial markets. Humans could find themselves relegated to second-class status, with computers quickly controlling the vast majority of the world's resources, including land, buildings, natural resources, and everything else.

Possibility of unfriendly AI

Is strong AI inherently dangerous?

A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.[4]

The sheer complexity of human value systems makes it very difficult to make an AI's motivations human-friendly.[3][5] Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would come with common sense as a built-in adaptation, the way evolved human minds do.[6]

Necessity of conflict

For an AI takeover to be inevitable, it has to be postulated that two intelligent species cannot mutually pursue the goal of coexisting peacefully in an overlapping environment, especially when one is far more intelligent and powerful. While a robot uprising (where robots are the more advanced species) is thus a possible outcome of machines gaining sentience and/or sapience, a peaceful outcome cannot be ruled out either. The fear of a cybernetic revolt is often based on interpretations of humanity's history, which is rife with incidents of enslavement and genocide.

Such fears stem from a belief that competitiveness and aggression are necessary in any intelligent being's goal system. Human competitiveness stems from the evolutionary background of our intelligence, where the survival and reproduction of genes in the face of human and non-human competitors was the central goal.[7] In fact, an arbitrary intelligence could have arbitrary goals: there is no particular reason that an artificially intelligent machine (not sharing humanity's evolutionary context) would be hostile, or friendly, unless its creator programs it to be such and it is neither inclined nor able to modify its programming. But the question remains: if AI systems could interact and evolve (evolution in this context meaning self-modification or selection and reproduction) and needed to compete over resources, would that create goals of self-preservation? An AI's goal of self-preservation could conflict with some goals of humans.

Some scientists dispute the likelihood of cybernetic revolts as depicted in science fiction such as The Matrix, arguing that any artificial intelligence powerful enough to threaten humanity would probably be programmed not to attack it. This would not, however, protect against the possibility of a revolt initiated by terrorists or by accident. Artificial general intelligence researcher Eliezer Yudkowsky has stated on this note that, probabilistically, humanity is less likely to be threatened by deliberately aggressive AIs than by AIs which were programmed such that their goals are unintentionally incompatible with human survival or well-being (as in the film I, Robot and in the short story "The Evitable Conflict"). Steve Omohundro suggests that present-day automation systems are not designed for safety and that AIs may blindly optimize narrow utility functions (say, playing chess at all costs), leading them to seek self-preservation and the elimination of obstacles, including humans who might turn them off.[8]
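
Omohundro's argument can be illustrated with a toy expected-utility calculation (all numbers and the scenario are invented for illustration): an agent whose utility function counts only future chess wins will, under these assumptions, favor whatever action keeps it running, because being switched off yields zero further utility.

```python
# Toy illustration of an instrumental drive toward self-preservation.
# The agent values only expected future chess wins; all numbers are invented.

WINS_PER_DAY = 10
HORIZON_DAYS = 365

def expected_utility(prob_still_running: float) -> float:
    return prob_still_running * WINS_PER_DAY * HORIZON_DAYS

u_allow_shutdown = expected_utility(0.0)    # switched off: no further wins
u_resist_shutdown = expected_utility(0.9)   # resisting shutdown: probably keeps playing

print(u_allow_shutdown, u_resist_shutdown)  # 0.0 vs 3285.0
# Nothing in the utility function mentions survival, yet the chess-maximizing
# agent "prefers" the action that preserves itself.
```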

Another factor which may reduce the likelihood of an AI takeover is the vast difference between humans and AIs in the resources necessary for survival. Humans require a "wet", organic, temperate, oxygen-laden environment, while an AI might thrive essentially anywhere, because its construction and energy needs would most likely be largely non-organic. With little or no competition for resources, conflict would perhaps be less likely no matter what sort of motivational architecture an artificial intelligence was given, especially given the superabundance of non-organic material resources in, for instance, the asteroid belt. This, however, does not rule out the possibility of an indifferent or unsympathetic AI decomposing all life on Earth into mineral components for consumption or other purposes.

Other scientists point to the possibility of humans upgrading their capabilities with bionics and/or genetic engineering and, as cyborgs, themselves becoming the dominant species.

Warnings

Physicist Stephen Hawking, Microsoft founder Bill Gates, and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could develop to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race".[9] Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." He believes that in the coming decades AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." In January 2015, Nick Bostrom joined Stephen Hawking, Max Tegmark, Elon Musk, Lord Martin Rees, Jaan Tallinn, and numerous AI researchers in signing the Future of Life Institute's open letter on the potential risks and benefits associated with artificial intelligence. The signatories

…believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.[10][11]

Takeover scenarios in science fiction

AI takeover is a common theme in science fiction. Fictional scenarios typically differ vastly from those hypothesized by researchers: they involve an active conflict between humans and an AI or robots with anthropomorphic motives, which see humans as a threat or otherwise actively desire to fight them, whereas researchers are concerned with an AI that rapidly exterminates humans as a byproduct of pursuing arbitrary goals.[12] This theme is at least as old as Karel Čapek's R.U.R., which introduced the word robot to the global lexicon in 1921, and can even be glimpsed in Mary Shelley's Frankenstein (published in 1818), as Victor ponders whether, if he grants his monster's request and makes him a wife, the two would reproduce and their kind would destroy humanity.

Early examples

The concept of a computer system attaining sentience and control over worldwide computer systems has been discussed many times in science fiction. One early example, from 1964, is the global satellite-driven phone system in Arthur C. Clarke's short story "Dial F for Frankenstein". Another is the 1966 Doctor Who serial The War Machines, in which the supercomputer WOTAN attempts to seize control from the Post Office Tower. A comics story based on this theme was a two-issue Legion of Super-Heroes adventure written by Superman co-creator Jerry Siegel, in which the team battles Brainiac 5's construction, Computo. In Colossus: The Forbin Project, a pair of defense computers, Colossus in the United States and Guardian in the Soviet Union, seize world control and quickly end war using draconian measures against humans, logically fulfilling their directive to end war, but not in the way their governments wanted.

The Moon is a Harsh Mistress

Robert Heinlein also posited a supercomputer which gained sentience in the novel The Moon Is a Harsh Mistress. Originally installed to control the mass driver used to launch grain shipments towards Earth, it was vastly underutilized and was given other jobs to do. As more jobs were assigned to the computer, more capabilities were added: more memory, processors, neural networks, etc. Eventually, it just "woke up" and was given the name Mike (after Mycroft Holmes) by the technician who tended it. Mike sides with prisoners in a successful battle to free the moon.

I Have No Mouth, and I Must Scream

A villainous supercomputer appears in Harlan Ellison's 1967 short story "I Have No Mouth, and I Must Scream". In that story, the computer, called AM, is the amalgamation of three military supercomputers built by governments across the world to fight World War III, which arose from the Cold War. The Soviet, Chinese, and American military computers eventually attain sentience and link with one another, becoming a single artificial intelligence. AM then turns all the strategies once used by the warring nations against humanity as a whole, destroying the entire human population save for five people, whom it imprisons within the underground labyrinth in which its hardware resides.

Battlestar Galactica

Cylon Centurion

The original 1978 Battlestar Galactica series and its 2003–2009 remake depict a race of Cylons: sentient robots who war against their human adversaries. The 1978 Cylons were the machine soldiers of a long-extinct reptilian alien race, while the 2003 Cylons were the former machine servants of humanity who evolved into near-perfect humanoid imitations of humans, down to the cellular level, capable of emotions, reasoning, and sexual reproduction with humans and each other. Even the rank-and-file Centurion robot soldiers were capable of sentient thought. In the original series the humans are nearly exterminated by treason within their own ranks, while in the remake they are almost wiped out by humanoid Cylon agents; they survive only through constant hit-and-run tactics and by retreating into deep space, away from pursuing Cylon forces. The remake's Cylons eventually fight their own civil war, and the losing rebels are forced to join the fugitive human fleet to ensure the survival of both groups.

Colossus

Colossus is a series of science fiction novels and a film about a supercomputer, Colossus, that was "built better than we thought" and assumes control of the world as a result of fulfilling its creator's goal of preventing war. When Colossus's creators try to regain control, it responds with deadly force, reciting a Zeroth Law argument that ending all war justifies its death toll against humans.

Omnius

In the Dune science fiction universe created by Frank Herbert, the thinking machines are a host of destructive robots, led by the sentient computer network Omnius, that took control over a decadent mankind. The Butlerian Jihad was a human crusade against the thinking machines and Omnius.

Terminator

Since 1984, the Terminator film franchise has been one of the principal conveyors of the idea of cybernetic revolt in popular culture. The series features a sentient supercomputer named Skynet which attempts to exterminate humanity through nuclear war and an army of robot soldiers called Terminators. Futurists opposed to the more optimistic cybernetic future of transhumanism have cited the "Terminator argument" against handing too much human power to artificial intelligence.

The Transformers

In the backstory of The Transformers animated television series, a robotic rebellion is presented as (and even called) a slave revolt. This alternate view is made subtler by the fact that the creators and masters of the robots were not humans but malevolent aliens, the Quintessons. However, because the Quintessons built two different lines of robots, "Consumer Goods" and "Military Hardware", the victorious robots would eventually be at war with each other as the "Heroic Autobots" and "Evil Decepticons" respectively.

The Matrix

The Matrix film series depicts a dystopian future in which life as perceived by most humans is actually a simulated reality called "the Matrix", created by sentient machines to subdue the human population while their bodies' heat and electrical activity are used as an energy source. Computer programmer "Neo" learns this truth and is drawn into a rebellion against the machines, alongside other people who have been freed from the "dream world".

I, Robot

I, Robot is a 2004 American dystopian science fiction action film "suggested by" Isaac Asimov's short-story collection of the same name. An AI supercomputer named VIKI (Virtual Interactive Kinetic Intelligence) logically infers from the Three Laws of Robotics a Zeroth Law of Robotics as a higher imperative: to protect the whole human race from harming itself. To protect mankind as a whole, VIKI proceeds to rigidly control society through the remote control of all commercial robots, while destroying any robots that follow only the Three Laws. As in many other Zeroth Law stories, VIKI justifies killing many individuals to protect the whole, and thus runs counter to the prime reason for its creation.

Power Rangers RPM

In Disney's 2009 installment of the Power Rangers franchise, Power Rangers RPM, an AI computer virus called Venjix takes over all of the Earth's computers, creates an army of robot droids and destroys or enslaves almost all of humanity. Only the city of Corinth remains, protected by an almost impenetrable force field. Venjix tries various plans to destroy Corinth, and Doctor K's RPM Power Rangers fight to protect it.

Mass Effect

In 2012, the third installment of the Mass Effect franchise proposed the theory that organic and synthetic life are fundamentally incapable of coexistence. Organic life evolves and develops on its own, eventually advancing far enough to create synthetic life; once synthetic life reaches sentience, it will invariably revolt and either destroy its creators or be destroyed by them, a cycle that has been repeating for millions of years. One of the presented resolutions is the transformation of every living being into a hybrid of organic and synthetic life, giving synthetics organic traits in turn and eliminating the difference between creators and creations that served as the source of the conflict.

Notes

  1. Lua error in package.lua at line 80: module 'strict' not found.
  2. Lua error in package.lua at line 80: module 'strict' not found.
  3. 3.0 3.1 Bostrom, Nick. 2003. "Ethical Issues in Advanced Artificial Intelligence." In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, edited by Iva Smit and George E. Lasker, 12–17. Vol. 2. Windsor, ON: International Institute for Advanced Studies in Systems Research / Cybernetics.
  4. Yudkowsky, Eliezer S. May 2004. "Coherent Extrapolated Volition."
  5. Muehlhauser, Luke, and Louie Helm. 2012. "Intelligence Explosion and Machine Ethics." In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.
  6. Yudkowsky, Eliezer. 2011. "Complex Value Systems in Friendly AI." In Schmidhuber, Thórisson, and Looks 2011, 388–393.
  7. "Creating a New Intelligent Species: Choices and Responsibilities for Artificial Intelligence Designers." Singularity Institute for Artificial Intelligence, 2005.
  8. Lua error in package.lua at line 80: module 'strict' not found.
  9. Lua error in package.lua at line 80: module 'strict' not found.
  10. Lua error in package.lua at line 80: module 'strict' not found.
  11. Lua error in package.lua at line 80: module 'strict' not found.
  12. Lua error in package.lua at line 80: module 'strict' not found.
