Recursive self-improvement

Recursive self-improvement is the speculative ability of a strong artificial intelligence to rewrite and improve its own software, with each improved version then repeating the process.

This is sometimes also referred to as Seed AI because, if an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve the design of its constituent software and hardware. Having undergone these improvements, it would then be better able to find further ways of optimizing its structure and improving its abilities. It is speculated that, over many iterations, such an AI would far surpass human cognitive abilities.

History

The notion of an "intelligence explosion" was first described by Good (1965), who speculated on the effects of superhuman machines:


Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

Compilers

A limited example is that programming language compilers are often used to compile themselves. As a compiler becomes more optimized, it can recompile itself, and the newly built compiler binary will run faster when compiling.

However, the recompiled compiler does not generate better code than its predecessor did, so this provides only a very limited, one-step self-improvement. Existing optimizers can transform code into a functionally equivalent, more efficient form, but they cannot identify the intent of an algorithm and rewrite it for more effective results. The optimized version of a given compiler may compile faster, but it cannot compile better: an optimized build of a compiler will never spot new optimization tricks that earlier versions failed to see, nor innovate new ways of improving its own program.
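
The point can be made concrete with a small sketch (not from the original article; the class and numbers below are purely illustrative assumptions). Recompiling a compiler with itself yields a faster compiler binary, but the set of optimizations the compiler knows is fixed by its source code, so a second round of self-compilation gains nothing further:

    # Toy model: a compiler binary is characterized by the optimizations its
    # source code describes and by how fast the binary itself runs.
    from dataclasses import dataclass

    @dataclass
    class Compiler:
        known_optimizations: int   # fixed by the compiler's source code
        own_speed: float           # how fast this particular binary compiles

        def compile_compiler(self, source_optimizations: int) -> "Compiler":
            # The new binary runs as fast as this compiler's optimizations allow,
            # but it only "knows" the optimizations written into its source.
            return Compiler(
                known_optimizations=source_optimizations,
                own_speed=1.0 + 0.1 * self.known_optimizations,
            )

    naive = Compiler(known_optimizations=5, own_speed=1.0)    # built by a simple bootstrap compiler
    better = naive.compile_compiler(source_optimizations=5)   # self-compiled once: faster binary
    same = better.compile_compiler(source_optimizations=5)    # self-compiled again: no further gain
    print(naive.own_speed, better.own_speed, same.own_speed)  # 1.0 1.5 1.5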

A seed AI, by contrast, must be able to understand the purpose behind the various elements of its design and devise entirely new modules that will make it genuinely more intelligent and more effective in fulfilling its purpose.

Hard vs. soft takeoff

A "hard takeoff" refers to the scenario in which a single AI project rapidly self-improves, on a timescale of a few years or even days. A "soft takeoff" refers to a longer-term process of integrating gradual AI improvements into society more broadly.[1] Eliezer Yudkowsky and Robin Hanson have extensively debated these positions, with Yudkowsky arguing for the realistic possibility of hard takeoff, while Hanson believes its probability is less than 1%.[2]

Ramez Naam argues against a hard takeoff by pointing out that we already see recursive self-improvement by superintelligences, such as corporations. For instance, Intel has "the collective brainpower of tens of thousands of humans and probably millions of CPU cores to ... design better CPUs!" However, this has not led to a hard takeoff; rather, it has led to a soft takeoff in the form of Moore's law.[3] Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that "creating a mind of intelligence 2 is probably more than twice as hard as creating a mind of intelligence 1."[4] William Hertling replies that while he agrees there won't be a hard takeoff, he expects that Moore's law and the ability to copy computers may still thoroughly change the world sooner than most people expect. He suggests that when we postpone the predicted arrival date of these changes, "we're less likely as a society to examine both AI progress and take steps to reduce the risks of AGI."[5]
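
The shape of that difficulty curve is what separates the two scenarios. A toy calculation (the exponent and constants here are assumptions chosen only for illustration, not figures from Naam or Hertling) shows the effect: if an AI of intelligence I works at a speed proportional to I but the next unit of intelligence costs effort proportional to I^k, each step takes time proportional to I^(k-1), so super-linear difficulty (k > 1) makes successive improvements slower rather than faster:

    # Illustrative toy model of recursive self-improvement under different
    # difficulty exponents k; larger k means later improvements cost more effort.
    def takeoff_time(k: float, steps: int = 20, start: float = 1.0) -> float:
        """Total time to climb `steps` intelligence levels with difficulty exponent k."""
        intelligence, total_time = start, 0.0
        for _ in range(steps):
            total_time += intelligence ** (k - 1)  # effort I**k done at speed I
            intelligence += 1.0
        return total_time

    print(takeoff_time(k=0.5))  # sub-linear difficulty: steps speed up (hard-takeoff-like)
    print(takeoff_time(k=1.0))  # linear difficulty: constant pace
    print(takeoff_time(k=2.0))  # super-linear difficulty: steps slow down (soft takeoff)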

J. Storrs Hall believes that "many of the more commonly seen scenarios for overnight hard takeoff are circular – they seem to assume hyperhuman capabilities at the starting point of the self-improvement process" in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff. Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world.[6]

Ben Goertzel agrees with Hall's suggestion that a new human-level AI would do well to use its intelligence to accumulate wealth.[7] The AI's talents might inspire companies and governments to disperse its software throughout society.[7] The AI might buy out a country like Azerbaijan and use that as its base to build power and improve its algorithms.[7] Goertzel is skeptical of a very hard, 5-minute takeoff but thinks a takeoff from human to superhuman level on the order of 5 years is reasonable. He calls this a "semihard takeoff".[7] Elsewhere Goertzel has argued that his OpenCog architecture "very likely possesses the needed properties to enable hard takeoff."[8]

In a 1993 article, Vernor Vinge discussed the concept of a "singularity", i.e., a hard takeoff:[9]

When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities -- on a still-shorter time scale.

Vinge notes that humans can "solve many problems thousands of times faster than natural selection" because we can perform quick simulations of the world in our heads.[9] Robin Hanson collected 13 replies to Vinge, some agreeing with his singularity notion and others disputing it.[10]

In one of those replies, Max More argues that if there were only a few superfast human-level AIs, they wouldn't radically change the world, because they would still depend on other people to get things done and would still have human cognitive constraints.[11] Even if all such superfast AIs worked on intelligence augmentation, it is not clear why they would do discontinuously better than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase.[11] More also argues that a superintelligence would not transform the world overnight, because it would need to engage with existing, slow human systems to have physical effects on the world.[11] "The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years."[11]

Organizations

Creating seed AI is the goal of several organizations. The Machine Intelligence Research Institute is the most prominent of those explicitly working to create seed AI[12] and ensure its safety.[13] Others include the Artificial General Intelligence Research Institute, creator of the Novamente AI engine, Adaptive Artificial Intelligence Incorporated, Texai.org, and Consolidated Robotics.

