Normative ethics

Normative ethics is the study of ethical action. It is the branch of philosophical ethics that investigates the set of questions that arise when considering how one ought to act, morally speaking. Normative ethics is distinct from meta-ethics because it examines standards for the rightness and wrongness of actions, while meta-ethics studies the meaning of moral language and the metaphysics of moral facts. Normative ethics is also distinct from descriptive ethics, as the latter is an empirical investigation of people’s moral beliefs. To put it another way, descriptive ethics would be concerned with determining what proportion of people believe that killing is always wrong, while normative ethics is concerned with whether it is correct to hold such a belief. Hence, normative ethics is sometimes called prescriptive, rather than descriptive. However, on certain versions of the meta-ethical view called moral realism, moral facts are both descriptive and prescriptive at the same time.[1]

Most traditional moral theories rest on principles that determine whether an action is right or wrong. Classical theories in this vein include utilitarianism, Kantianism, and some forms of contractarianism. These theories mainly offer overarching moral principles that can be applied to resolve difficult moral decisions.[citation needed]

Normative ethical theories

There are disagreements about what precisely gives an action, rule, or disposition its ethical force. Broadly speaking, there are three competing views on how moral questions should be answered, along with hybrid positions that combine some elements of each. Virtue ethics focuses on the character of those who are acting, while both deontological ethics and consequentialism focus on the status of the action, rule, or disposition itself. The latter two conceptions of ethics themselves come in various forms.

  • Deontology argues that decisions should be made by considering one's duties and the rights of others; classical deontological theories include Kantianism and some forms of contractarianism.
  • Consequentialism (teleology) argues that the morality of an action is contingent on the action's outcome or result. Consequentialist theories, which differ in what they consider valuable (their axiology), include:
    • Utilitarianism, which holds that an action is right if it leads to the most happiness for the greatest number of people. (Historical Note: Prior to the coining of the term "consequentialism" by Anscombe in 1958 and the adoption of that term in the literature that followed, "utilitarianism" was the generic term for consequentialism, referring to all theories that promoted maximizing any form of utility, not just those that promoted maximizing happiness.)
    • State consequentialism or Mohist consequentialism, which holds that an action is right if it leads to state welfare, through order, material wealth, and population growth.
    • Egoism, the belief that the moral person is the self-interested person, holds that an action is right if it maximizes good for the self.
    • Situation ethics, which holds that the correct action is the one that creates the most loving result, and that love should always be our goal.
    • Intellectualism, which dictates that the best action is the one that best fosters and promotes knowledge.
    • Welfarism, which argues that the best action is the one that most increases economic well-being or welfare.
    • Preference utilitarianism, which holds that the best action is the one that leads to the most overall preference satisfaction.
  • Ethics of care or relational ethics, founded by feminist theorists, notably Carol Gilligan, argues that morality arises out of the experiences of empathy and compassion. It emphasizes the importance of interdependence and relationships in achieving ethical goals.
  • Pragmatic ethics is difficult to classify fully within any of the preceding conceptions. This view argues that moral correctness evolves similarly to scientific knowledge: socially, over the course of many lifetimes. Thus, we should prioritize social reform over concern with consequences, individual virtue, or duty (although these may be worthwhile concerns, provided social reform is also addressed). Charles Sanders Peirce, William James, and John Dewey are known as the founders of pragmatism.

Binding force

It can be unclear what it means to say that a person "ought to do X because it is moral, whether they like it or not". Morality is sometimes presumed to have some kind of special binding force on behaviour, but some philosophers think that, used this way, the word "ought" seems to wrongly attribute magic powers to morality. For instance, G. E. M. Anscombe worries that "ought" has become "a word of mere mesmeric force".[2] The British ethicist Philippa Foot elaborates that morality does not seem to have any special binding force, arguing that people behave morally only when motivated by other factors.


If he is an amoral man he may deny that he has any reason to trouble his head over this or any other moral demand. Of course, he may be mistaken, and his life as well as others' lives may be most sadly spoiled by his selfishness. But this is not what is urged by those who think they can close the matter by an emphatic use of 'ought'. My argument is that they are relying on an illusion, as if trying to give the moral 'ought' a magic force.

– Philippa Foot[3]

Foot says "People talk, for instance, about the 'binding force' of morality, but it is not clear what this means if not that we feel ourselves unable to escape."[3] The idea is that, faced with an opportunity to steal a book because we can get away with it, moral obligation itself has no power to stop us unless we feel an obligation. Morality may therefore have no binding force beyond regular human motivations, and people must be motivated to behave morally. The question then arises: what role does reason play in motivating moral behaviour?

Motivating morality

See also: Causes of good behaviour

The categorical imperative perspective suggests that proper reason always leads to particular moral behaviour. As mentioned above, Foot instead believes that humans are actually motivated by desires. Proper reason, on this view, allows humans to discover actions that get them what they want (i.e., hypothetical imperatives)—not necessarily actions that are moral.

Social structure and motivation can make morality binding in a sense, but only because they make moral norms feel inescapable, according to Foot.[3] John Stuart Mill adds that external pressures, such as the desire to please others, also influence this felt binding force, which he calls human "conscience". Mill says that humans must first reason about what is moral, and then try to bring the feelings of conscience in line with that reasoning.[4] At the same time, Mill says that a good moral system (in his case, utilitarianism) ultimately appeals to aspects of human nature that must themselves be nurtured during upbringing. Mill explains:

This firm foundation is that of the social feelings of mankind; the desire to be in unity with our fellow creatures, which is already a powerful principle in human nature, and happily one of those which tend to become stronger, even without express inculcation, from the influences of advancing civilisation.

Mill thus believes that it is important to appreciate that feelings drive moral behaviour, but also that such feelings may not be present in some people (e.g. psychopaths). Mill goes on to describe the factors that help ensure people develop a conscience and behave morally, and thinkers such as Joseph Daleiden describe how societies can use science to make moral behaviour more likely.

References

  1.
  2. Anscombe, G. E. M. (1958). "Modern Moral Philosophy". Philosophy 33 (124): 1–19.
  3. Foot, Philippa (2009). "Morality as a System of Hypothetical Imperatives". In S. M. Cahn & P. Markie (Eds.), Ethics: History, Theory, and Contemporary Issues (pp. 556–561). New York: Oxford University Press.
  4. Mill, John Stuart (1863). Utilitarianism, Chapter 3: "Of the Ultimate Sanction of the Principle of Utility".
