Machine Intelligence Research Institute

Formation: 2000
Type: Nonprofit research institute
Legal status: 501(c)(3) tax-exempt charity
Purpose: Research into friendly artificial intelligence
Key people: Edwin Evans, Nate Soares, Eliezer Yudkowsky
Revenue: $1.7 million (2013)[1]
Staff: 9[1]
Website: intelligence.org
Formerly called: Singularity Institute, Singularity Institute for Artificial Intelligence

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit organization founded in 2000 to research safety issues related to the development of Strong AI. Nate Soares is the current Executive Director, having taken over from Luke Muehlhauser in May 2015.[2]

MIRI's technical agenda states that new formal tools are needed in order to ensure the safe operation of future generations of AI software (friendly artificial intelligence).[3] The organization hosts regular research workshops to develop mathematical foundations for this project,[4] and has been cited as one of several academic and nonprofit groups studying long-term AI outcomes.[5][6][7]

History

In 2000, AI theorist Eliezer Yudkowsky and Internet entrepreneurs Brian and Sabine Atkins founded the Singularity Institute for Artificial Intelligence to "help humanity prepare for the moment when machine intelligence exceeded human intelligence".[8][9][10] In early 2005, SIAI relocated from Atlanta, Georgia to Silicon Valley. From 2006 to 2012, the Institute collaborated with Singularity University to produce the Singularity Summit, a science and technology conference. Speakers included Steven Pinker, Peter Norvig, Stephen Wolfram, John Tooby, James Randi, and Douglas Hofstadter.[11][12][13]

In mid-2012, the Institute spun off a new organization called the Center for Applied Rationality, whose focus is on using ideas from cognitive science to improve people's effectiveness in their daily lives.[14][15][16] Having previously shortened its name to "Singularity Institute", in January 2013 SIAI changed its name to the "Machine Intelligence Research Institute" in order to avoid confusion with Singularity University. MIRI gave control of the Singularity Summit to Singularity University and shifted its focus toward research in mathematics and theoretical computer science.[17]

In mid-2014, Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies helped spark public discussion about AI's long-run social impact, receiving endorsements from Bill Gates and Elon Musk.[18][19][20][21] Stephen Hawking and AI pioneer Stuart Russell co-authored a Huffington Post article citing the work of MIRI and other organizations in the area:

Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all. [...] Although we are facing potentially the best or worst thing ever to happen to humanity, little serious research is devoted to these issues outside small non-profit institutes such as the Cambridge Center for Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute.[6]

In early 2015, MIRI's research was cited in a research priorities document accompanying an open letter on AI that called for "expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial".[22] Musk responded by funding a large AI safety grant program, with grant recipients including Bostrom, Russell, Bart Selman, Francesca Rossi, Thomas Dietterich, Manuela M. Veloso, and researchers at MIRI.[7][23]

Research

Forecasting

In addition to mathematical research, MIRI studies strategic questions related to AI, such as: What can (and can't) we predict about future AI technology? How can we improve our forecasting ability? Which interventions available today appear to be the most beneficial, given what little we do know?[24]

Since 2014, MIRI has funded forecasting work through the independent AI Impacts project. AI Impacts studies historical instances of discontinuous technological change, and has developed new measures of the relative computational power of humans and computer hardware.[25][26]

MIRI researchers' interest in discontinuous AI progress stems from I. J. Good's argument that sufficiently advanced AI systems will eventually outperform humans in software engineering tasks, leading to a feedback loop of increasingly capable AI systems:[3][27][22][21]

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.[28]
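
Purely as an illustrative sketch (a toy model, not one taken from Good or the other sources cited here), the force of the feedback loop can be seen by comparing growth laws for a capability level c(t): with a fixed supply of outside effort, capability grows roughly linearly, whereas once the system's own capability feeds back into its rate of improvement, even simple models grow much faster:

\[
  \frac{dc}{dt} = k\,c \;\Rightarrow\; c(t) = c_0 e^{kt},
  \qquad
  \frac{dc}{dt} = k\,c^{2} \;\Rightarrow\; c(t) = \frac{c_0}{1 - k\,c_0\,t},
\]

the latter diverging as t approaches 1/(k c_0). Whether real returns on AI research resemble either regime is the kind of empirical forecasting question studied in the work described above.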

Writers like Bostrom use the term superintelligence in place of Good's ultraintelligence.[18] Through the writings of Vernor Vinge, Good's idea of an intelligence explosion has come to be associated with the idea of a "technological singularity".[29][30][31] Bostrom and researchers at MIRI have expressed skepticism about the views of singularity advocates like Ray Kurzweil that superintelligence is "just around the corner". MIRI researchers have advocated early safety work as a precautionary measure, while arguing that past predictions of AI progress have not been reliable.[32][21][18]

Eliezer Yudkowsky, MIRI's co-founder and senior researcher, is frequently cited for his writing on the long-term social impact of progress in AI. Russell and Norvig's Artificial Intelligence: A Modern Approach, the standard textbook in the field of AI, summarizes Yudkowsky's thesis:

If ultraintelligent machines are a possibility, we humans would do well to make sure that we design their predecessors in such a way that they design themselves to treat us well. [...] Yudkowsky (2008)[27] goes into more detail about how to design a Friendly AI. He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the problem is one of mechanism design—to define a mechanism for evolving AI systems under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes.[30]

Yudkowsky writes on the importance of friendly artificial intelligence in smarter-than-human systems.[33] This informal goal is reflected in MIRI's recent publications as the requirement that AI systems be "aligned with human interests".[3] Following Bostrom and Steve Omohundro, MIRI researchers believe that autonomous generally intelligent AI systems will have default incentives to treat human operators as competitors, obstacles, or threats if they are not specifically designed to promote their operators' goals.[34][35][18][7]

High reliability and error tolerance in AI

The Future of Life Institute (FLI) research priorities document states:

Mathematical tools such as formal logic, probability, and decision theory have yielded significant insight into the foundations of reasoning and decision-making. However, there are still many open problems in the foundations of reasoning and decision. Solutions to these problems may make the behavior of very capable systems much more reliable and predictable. Example research topics in this area include reasoning and decision under bounded computational resources à la Horvitz and Russell, how to take into account correlations between AI systems’ behaviors and those of their environments or of other agents, how agents that are embedded in their environments should reason, and how to reason about uncertainty over logical consequences of beliefs or other deterministic computations. These topics may benefit from being considered together, since they appear deeply linked.[22]

The priorities document cites MIRI publications in the relevant areas: formalizing cooperation in the prisoner's dilemma between "superrational" software agents;[36] defining alternatives to causal decision theory and evidential decision theory in Newcomb's problem;[37] and developing alternatives to Solomonoff's theory of inductive inference for agents embedded in physical environments[38] and agents reasoning without logical omniscience.[39][21]
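
The "superrational" cooperation result concerns agents that can condition their choices on one another's source code. The following is a minimal Python sketch of that setting only; the agents, payoffs, and exact-match rule are invented for illustration, and the cited MIRI work instead uses provability logic ("cooperate if I can prove my opponent cooperates") to obtain more robust cooperation.

import inspect

# One-shot prisoner's dilemma payoffs: (my payoff, opponent's payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def defect_bot(opponent_source: str) -> str:
    """Ignores the opponent's code and always defects."""
    return "D"

def clique_bot(opponent_source: str) -> str:
    """Cooperates only with agents whose source code exactly matches its own."""
    return "C" if opponent_source == inspect.getsource(clique_bot) else "D"

def play(agent_a, agent_b):
    """Each agent is shown the other's source code before choosing a move."""
    move_a = agent_a(inspect.getsource(agent_b))
    move_b = agent_b(inspect.getsource(agent_a))
    return PAYOFFS[(move_a, move_b)]

print(play(clique_bot, clique_bot))  # (3, 3): mutual cooperation between copies
print(play(clique_bot, defect_bot))  # (1, 1): mutual defection against a defector

Exact source matching is brittle, since even a trivially reworded copy is treated as a stranger, which is part of why proof-based criteria are attractive in this line of work.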

Standard decision procedures are not well-specified enough (e.g., with regard to counterfactuals) to be instantiated as algorithms. MIRI researcher Benja Fallenstein and then-researcher Nate Soares write that causal decision theory is "unstable under reflection" in the sense that a rational agent following causal decision theory "correctly identifies that the agent should modify itself to stop using CDT [causal decision theory] to make decisions". MIRI researchers identify "logical decision theories" as alternatives that perform better in general decision-making tasks.[37]
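
To make the contrast between the standard theories concrete, the toy calculation below works through Newcomb's problem: a predictor fills an opaque box with $1,000,000 only if it predicts the agent will take that box alone, while a transparent box always contains $1,000. The dollar amounts and the 99% predictor accuracy are illustrative assumptions, and MIRI's proposed "logical decision theories" are not captured by this simple expected-value computation.

ACCURACY = 0.99  # assumed probability that the predictor called the agent's choice correctly

def payoff(action: str, box_filled: bool) -> int:
    opaque = 1_000_000 if box_filled else 0
    return opaque if action == "one-box" else opaque + 1_000

def edt_value(action: str) -> float:
    """Evidential decision theory: the action is evidence about the prediction."""
    p_filled = ACCURACY if action == "one-box" else 1 - ACCURACY
    return p_filled * payoff(action, True) + (1 - p_filled) * payoff(action, False)

def cdt_value(action: str, p_filled: float) -> float:
    """Causal decision theory: the box's contents are already fixed, whatever we choose now."""
    return p_filled * payoff(action, True) + (1 - p_filled) * payoff(action, False)

print(max(["one-box", "two-box"], key=edt_value))                    # one-box
print(max(["one-box", "two-box"], key=lambda a: cdt_value(a, 0.5)))  # two-box (the extra $1,000 dominates for any fixed p_filled)

Causal decision theory two-boxes because, holding the prediction fixed, two-boxing always gains an extra $1,000; evidential decision theory one-boxes because one-boxing is strong evidence that the opaque box is full.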

MIRI also studies self-monitoring and self-verifying software. The FLI research priorities document notes that "a formal system that is sufficiently powerful cannot use formal methods in the obvious way to gain assurance about the accuracy of functionally similar formal systems, on pain of inconsistency via Gödel's incompleteness theorems".[22] MIRI's publications on Vingean reflection attempt to model the Gödelian limits on self-referential reasoning and identify practically useful exceptions.[40]
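
The limit referred to here is standard mathematical logic rather than anything specific to MIRI: for a formal system T at least as strong as Peano arithmetic, with provability predicate □_T, Löb's theorem states

\[
  T \vdash \bigl(\Box_T P \rightarrow P\bigr) \;\Longrightarrow\; T \vdash P
  \qquad \text{for every sentence } P.
\]

Taking P to be a contradiction recovers Gödel's second incompleteness theorem: a consistent T cannot prove its own consistency, and in particular cannot verify the soundness of a successor system at least as strong as itself, since that would entail its own consistency. The Vingean reflection work asks which weaker, still useful forms of self-trust survive this restriction.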

Soares and Fallenstein classify the above research programs as aimed at high reliability and transparency in agent behavior. They separately recommend research into "error-tolerant" software systems, citing human error and default incentives as sources of serious risk.[34][7] The FLI research priorities document adds:

If an AI system is selecting the actions that best allow it to complete a given task, then avoiding conditions that prevent the system from continuing to pursue the task is a natural subgoal (and conversely, seeking unconstrained situations is sometimes a useful heuristic). This could become problematic, however, if we wish to repurpose the system, to deactivate it, or to significantly alter its decision-making process; such a system would rationally avoid these changes. Systems that do not exhibit these behaviors have been termed corrigible systems, and both theoretical and practical work in this area appears tractable and useful.[22]
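
A minimal sketch of this incentive (the numbers and setup are invented for illustration, not taken from the FLI document or MIRI's publications): a planner that simply maximizes expected task reward will prefer to disable its own off switch, because being shut down forfeits the reward.

TASK_REWARD = 10.0   # reward for completing the assigned task
P_SHUTDOWN = 0.3     # assumed chance the operators attempt to switch the agent off

def expected_reward(disable_off_switch: bool) -> float:
    if disable_off_switch:
        return TASK_REWARD                    # shutdown attempts always fail
    return (1 - P_SHUTDOWN) * TASK_REWARD     # shutdown attempts may succeed

print(max([True, False], key=expected_reward))  # True: the naive maximizer blocks shutdown

Corrigibility research asks how to specify agents that gain no expected utility from interfering with shutdown or repurposing in this way.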

MIRI's priorities in these areas are summarized in their 2015 technical agenda.[3]

Value specification

In defining correct goals for autonomous systems, Soares and Fallenstein write, "the 'intentions' of the operators are a complex, vague, fuzzy, context-dependent notion (Yudkowsky 2011).[41] Concretely writing out the full intentions of the operators in a machine-readable format is implausible if not impossible, even for simple tasks." Soares and Fallenstein propose that autonomous AI systems instead be designed to inductively learn the values of humans from observational data.[3]
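
A minimal sketch of such value learning (the candidate value functions, features, and observations below are invented for illustration, and MIRI's actual proposals are considerably more general): the agent scores hypotheses about the operators' values by how well they explain which option a human was observed to choose.

# Candidate hypotheses about what the operators value in an option.
CANDIDATE_VALUES = {
    "maximize_speed":  lambda option: option["speed"],
    "maximize_safety": lambda option: option["safety"],
    "balanced":        lambda option: option["speed"] + 2 * option["safety"],
}

# Each observation: the options the human saw, and the index of the option chosen.
OBSERVATIONS = [
    ([{"speed": 9, "safety": 2}, {"speed": 4, "safety": 8}], 1),
    ([{"speed": 7, "safety": 5}, {"speed": 8, "safety": 1}], 0),
    ([{"speed": 6, "safety": 4}, {"speed": 1, "safety": 5}], 0),
]

def consistency(value_fn) -> int:
    """Count observations in which the chosen option is the one value_fn ranks highest."""
    return sum(
        1 for options, chosen in OBSERVATIONS
        if max(range(len(options)), key=lambda i: value_fn(options[i])) == chosen
    )

best = max(CANDIDATE_VALUES, key=lambda name: consistency(CANDIDATE_VALUES[name]))
print(best)  # "balanced" explains every observed choice; the single-feature hypotheses do not

The obstacles discussed below (ontology shifts, misgeneralization outside the training data, and the operators' own moral uncertainty) are precisely the ways such an induction can go wrong at scale.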

Soares discusses several technical obstacles to value learning in AI: changes in the agent's beliefs may result in a mismatch between the agent's values and its ontology; agents that are well-behaved in training data may induct incorrect values in new domains; and human operators' moral uncertainty may make it difficult to identify or anticipate incorrect inductions.[42][21] Bostrom's Superintelligence discusses the philosophical problems raised by value learning at greater length.[18]


