
Roboethics is a short expression for the ethics of robotics. It is often used in the sense that it concerns the behavior of humans, that is, how humans design, construct, use, and treat robots and other artificially intelligent beings, whereas machine ethics is concerned with the behavior of robots themselves, whether or not they are considered artificial moral agents (AMAs).

While the issue is as old as the word robot, the term roboethics was probably first used by roboticist Gianmarco Veruggio in 2002, who also served as chair of an Atelier funded by the European Robotics Research Network to outline areas where research may be needed. The resulting road map effectively divided the ethics of artificial intelligence into two sub-fields to accommodate researchers' differing interests.[1]

Main positions on roboethics

Since the First International Symposium on Roboethics (Sanremo, Italy, 2004), three main ethical positions have emerged from the robotics community (D. Cerqui, 2004):

  • Not interested in ethics (the attitude of those who consider their actions strictly technical and do not think they have a social or moral responsibility in their work)
  • Interested in short-term ethical questions (the attitude of those who express their ethical concerns in terms of "good" or "bad" and who refer to certain cultural values and social conventions)
  • Interested in long-term ethical concerns (the attitude of those who express their ethical concerns in terms of global, long-term questions)

Disciplines involved in roboethics

Roboethics requires the combined commitment of experts from several disciplines who, working in transnational projects, committees, and commissions, must adjust laws and regulations to the problems arising from scientific and technological achievements in robotics and AI.

In all likelihood, new curricula and specialties will emerge to manage so complex a subject, just as happened with forensic medicine. The main fields involved in roboethics are robotics, computer science, artificial intelligence, philosophy, ethics, theology, biology, physiology, cognitive science, neuroscience, law, sociology, psychology, and industrial design.


History
Since antiquity, ethics in relation to the treatment of non-human and even non-living things, and their potential "spirituality", has been discussed. With the development of machinery and eventually robots, this philosophy was also applied to robotics. The first publication directly addressing roboethics was Isaac Asimov's Three Laws of Robotics (1942), introduced in the context of his science fiction works, although the term "roboethics" itself was coined by Gianmarco Veruggio in 2002.

Roboethics guidelines have been developed during several important robotics events and projects.

In popular culture

Roboethics as a scientific or philosophical topic has not made a strong cultural impact,[citation needed] but it is a common theme in science fiction literature and film. One of the most popular films depicting the potential misuse of robotic and AI technology is The Matrix, which portrays a future where the lack of roboethics brought about the destruction of the human race. An animated film based on The Matrix, The Animatrix, focused heavily on the potential ethical issues between humans and robots. Many of The Animatrix's animated shorts are also named after Isaac Asimov's fictional stories.

Although not a part of roboethics per se, the ethical behavior of robots themselves has also become a related issue in popular culture. The Terminator series focuses on robots run by an uncontrolled AI program with no restraint on the termination of its enemies; this series shares the same futuristic premise as The Matrix series, in which robots have taken control. The most famous case of a robot or computer without programmed ethics is HAL 9000 in the Space Odyssey series, in which HAL (a computer with advanced AI capabilities that monitors and assists the humans on a spacecraft) kills the humans on board to ensure the success of the assigned mission after his own existence is threatened.


References
  1. Veruggio, Gianmarco (2007). "The Roboethics Roadmap" (PDF). Scuola di Robotica: 2. Retrieved 2011-04-28.


Further reading

  • Lin, Patrick; Abney, Keith; Bekey, George A. (December 2011). Robot Ethics: The Ethical and Social Implications of Robotics. MIT Press.
  • Gunkel, David J. (July 2012). The Machine Question: Critical Perspectives on AI, Robotics, and Ethics. MIT Press.
