Singleton (global governance)


In futurology, a singleton is a hypothetical world order in which there is a single decision-making agency at the highest level, capable of exerting effective control over its domain, and permanently preventing both internal and external threats to its supremacy. The term was first defined by Nick Bostrom.[1]

An artificial general intelligence that has undergone an intelligence explosion could form a singleton, as could a world government armed with mind control and social surveillance technologies. A singleton need not directly micromanage everything in its domain; it could allow diverse forms of organization within itself, provided that they are guaranteed to function within strict parameters. A singleton need not support a civilization, and could in fact obliterate one upon coming to power.

A singleton has both potential risks and potential benefits. Notably, a suitable singleton could solve world coordination problems that would not otherwise be solvable, opening up developmental trajectories for civilization that would otherwise be unavailable. For example, Ben Goertzel, an AGI researcher, suggests that humans may decide to create an "AI Nanny" with "mildly superhuman intelligence and surveillance powers" to protect the human race from existential risks such as nanotechnology, and to delay the development of other (unfriendly) artificial intelligences until and unless the safety issues are solved.[2] Furthermore, Bostrom suggests that a singleton could hold Darwinian evolutionary pressures in check, preventing agents interested only in reproduction from coming to dominate.[3]

Yet Bostrom also regards the possibility of a stable, repressive, totalitarian global regime as a serious existential risk.[4] The very stability of a singleton makes the installation of a bad singleton especially catastrophic, since the consequences can never be undone. Bryan Caplan writes that "perhaps an eternity of totalitarianism would be worse than extinction".[5]

References

  1. Nick Bostrom (2006). "What is a Singleton?". Linguistic and Philosophical Investigations 5(2): 48-54.
  2. Ben Goertzel (2012). "Should Humanity Build a Global AI Nanny to Delay the Singularity Until It's Better Understood?". Journal of Consciousness Studies 19(1-2): 96-111.
  3. Nick Bostrom (2004). "The Future of Human Evolution". In Charles Tandy (ed.), Death and Anti-Death: Two Hundred Years After Kant, Fifty Years After Turing (Palo Alto, California: Ria University Press): 339-371.
  4. Nick Bostrom (2002). "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards". Journal of Evolution and Technology 9(1).
  5. Bryan Caplan (2008). "The Totalitarian Threat". In Nick Bostrom & Milan M. Ćirković (eds.), Global Catastrophic Risks (Oxford University Press): 504-519. ISBN 9780198570509.