Multi-objective optimization


Multi-objective optimization (also known as multi-objective programming, vector optimization, multicriteria optimization, multiattribute optimization or Pareto optimization) is an area of multiple criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization has been applied in many fields of science, including engineering, economics and logistics (see the section on applications for detailed examples), where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. Minimizing cost while maximizing comfort when buying a car, and maximizing performance while minimizing fuel consumption and emission of pollutants of a vehicle, are examples of multi-objective optimization problems involving two and three objectives, respectively. In practical problems, there can be more than three objectives.

For a nontrivial multi-objective optimization problem, there does not exist a single solution that simultaneously optimizes each objective. In that case, the objective functions are said to be conflicting, and there exists a (possibly infinite) number of Pareto optimal solutions. A solution is called nondominated, Pareto optimal, Pareto efficient or noninferior if none of the objective functions can be improved in value without degrading some of the other objective values. Without additional subjective preference information, all Pareto optimal solutions are considered equally good (as vectors cannot be ordered completely). Researchers study multi-objective optimization problems from different viewpoints and, thus, there exist different solution philosophies and goals when setting and solving them. The goal may be to find a representative set of Pareto optimal solutions, and/or to quantify the trade-offs in satisfying the different objectives, and/or to find a single solution that satisfies the subjective preferences of a human decision maker (DM).

Introduction

A multi-objective optimization problem is an optimization problem that involves multiple objective functions.[1][2] In mathematical terms, a multi-objective optimization problem can be formulated as


\begin{align}
\min &\left(f_1(x), f_2(x),\ldots, f_k(x) \right)  \\
\text{s.t. } &x\in X,
\end{align}

where the integer k\geq 2 is the number of objectives and the set X is the feasible set of decision vectors. The feasible set is typically defined by some constraint functions. In addition, the vector-valued objective function is often defined as

f:X\to\mathbb R^k, \ f(x)= (f_1(x),\ldots,f_k(x))^T. If some objective function is to be maximized, it is equivalent to minimize its negative. The image of X is denoted by Y \subseteq \mathbb R^k.

An element x^*\in X is called a feasible solution or a feasible decision. A vector z^* := f(x^*)\in \mathbb R^k for a feasible solution x^* is called an objective vector or an outcome. In multi-objective optimization, there does not typically exist a feasible solution that minimizes all objective functions simultaneously. Therefore, attention is paid to Pareto optimal solutions; that is, solutions that cannot be improved in any of the objectives without degrading at least one of the other objectives. In mathematical terms, a feasible solution x^1\in X is said to (Pareto) dominate another solution x^2\in X, if

  1. f_i(x^1)\leq f_i(x^2) for all indices i \in \left\{ {1,2,\dots,k } \right\} and
  2. f_j(x^1) < f_j(x^2) for at least one index j \in \left\{ {1,2,\dots,k } \right\}.

A solution x^1\in X (and the corresponding outcome f(x^1)) is called Pareto optimal if there does not exist another solution that dominates it. The set of Pareto optimal outcomes is often called the Pareto front or Pareto boundary.
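To make the dominance test above concrete, the following sketch (illustrative code, not part of the original article) applies it to a finite sample of objective vectors, with all objectives to be minimized:

```python
import numpy as np

def dominates(z1, z2):
    """True if outcome z1 Pareto-dominates outcome z2: no worse in every objective
    and strictly better in at least one (all objectives minimized)."""
    z1, z2 = np.asarray(z1, dtype=float), np.asarray(z2, dtype=float)
    return bool(np.all(z1 <= z2) and np.any(z1 < z2))

def pareto_filter(outcomes):
    """Return the non-dominated rows of an (n, k) array of objective vectors."""
    Z = np.asarray(outcomes, dtype=float)
    keep = [not any(dominates(other, z) for other in Z) for z in Z]
    return Z[keep]

# Three outcomes of a bi-objective problem; the third is dominated by the second.
print(pareto_filter([[1.0, 4.0], [2.0, 3.0], [2.5, 3.5]]))
```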

The Pareto front of a multi-objective optimization problem is bounded by a so-called nadir objective vector  z^{nad} and an ideal objective vector  z^{ideal} , if these are finite. The nadir objective vector is defined as

 z^{nad}_i= \sup_{x\in X\text{ is Pareto optimal}} f_i(x) \text{ for all } i=1,\ldots,k

and the ideal objective vector as

 z^{ideal}_i=\inf_{x\in X}{f_i(x)}\text{ for all } i=1,\ldots,k.

In other words, the components of a nadir and an ideal objective vector define upper and lower bounds for the objective function values of Pareto optimal solutions, respectively. In practice, the nadir objective vector can only be approximated as, typically, the whole Pareto optimal set is unknown. In addition, a utopian objective vector z^{utopian} with

 z^{utopian}_i = z^{ideal}_{i}-\epsilon \text{ for all } i=1,\ldots,k,

where \epsilon>0 is a small constant, is often defined for numerical reasons.
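As a small illustration (assuming a finite approximation of the Pareto optimal outcome set is available as an array of objective vectors, all to be minimized), these vectors can be estimated componentwise:

```python
import numpy as np

def ideal_nadir_utopian(pareto_outcomes, eps=1e-6):
    """Estimate the ideal, nadir and utopian objective vectors from a finite
    approximation of the Pareto optimal outcome set (all objectives minimized)."""
    Z = np.asarray(pareto_outcomes, dtype=float)
    z_ideal = Z.min(axis=0)      # componentwise lower bounds
    z_nadir = Z.max(axis=0)      # componentwise upper bounds over the Pareto set
    return z_ideal, z_nadir, z_ideal - eps

# Illustrative sampled Pareto optimal outcomes of a bi-objective problem.
Z = [[0.0, 4.0], [1.0, 1.0], [4.0, 0.0]]
print(ideal_nadir_utopian(Z))    # ideal (0, 0), nadir (4, 4), utopian (-1e-6, -1e-6)
```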

Examples of multi-objective optimization applications

Economics

In economics, many problems involve multiple objectives along with constraints on what combinations of those objectives are attainable. For example, a consumer's demand for various goods is determined by the process of maximization of the utilities derived from those goods, subject to a constraint based on how much income is available to spend on those goods and on the prices of those goods. This constraint allows more of one good to be purchased only at the sacrifice of consuming less of another good; therefore, the various objectives (more consumption of each good is preferred) are in conflict with each other. A common method for analyzing such a problem is to use a graph of indifference curves, representing preferences, and a budget constraint, representing the trade-offs that the consumer is faced with.

Another example involves the production possibilities frontier, which specifies what combinations of various types of goods can be produced by a society with certain amounts of various resources. The frontier specifies the trade-offs that the society is faced with — if the society is fully utilizing its resources, more of one good can be produced only at the expense of producing less of another good. A society must then use some process to choose among the possibilities on the frontier.

Macroeconomic policy-making is a context requiring multi-objective optimization. Typically a central bank must choose a stance for monetary policy that balances competing objectives such as low inflation, low unemployment and a low balance of trade deficit. To do this, the central bank uses a model of the economy that quantitatively describes the various causal linkages in the economy; it simulates the model repeatedly under various possible stances of monetary policy, in order to obtain a menu of possible predicted outcomes for the various variables of interest. Then in principle it can use an aggregate objective function to rate the alternative sets of predicted outcomes, although in practice central banks use a non-quantitative, judgement-based process for ranking the alternatives and making the policy choice.

Finance

In finance, a common problem is to choose a portfolio when there are two conflicting objectives — the desire to have the expected value of portfolio returns be as high as possible, and the desire to have risk, often measured by the standard deviation of portfolio returns, be as low as possible. This problem is often represented by a graph in which the efficient frontier shows the best combinations of risk and expected return that are available, and in which indifference curves show the investor's preferences for various risk-expected return combinations. The problem of optimizing a function of the expected value (first moment) and the standard deviation (square root of the second central moment) of portfolio return is called a two-moment decision model.

Optimal control


In engineering and economics, many problems involve multiple objectives which are not describable as the-more-the-better or the-less-the-better; instead, there is an ideal target value for each objective, and the desire is to get as close as possible to the desired value of each objective. For example, energy systems typically have a trade-off between performance and cost;[3][4] one might want to adjust a rocket's fuel usage and orientation so that it arrives both at a specified place and at a specified time; or one might want to conduct open market operations so that both the inflation rate and the unemployment rate are as close as possible to their desired values.

Often such problems are subject to linear equality constraints that prevent all objectives from being simultaneously perfectly met, especially when the number of controllable variables is less than the number of objectives and when the presence of random shocks generates uncertainty. Commonly a multi-objective quadratic objective function is used, with the cost associated with an objective rising quadratically with the distance of the objective from its ideal value. Since these problems typically involve adjusting the controlled variables at various points in time and/or evaluating the objectives at various points in time, intertemporal optimization techniques are employed.[5][6]
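A minimal sketch of this quadratic "distance from target" formulation, with made-up numbers: three target values depend linearly on only two controllable variables, so the targets cannot all be met exactly, and the weighted quadratic cost is minimized by a weighted least-squares solve (the matrix A, the target vector and the weights are illustrative assumptions, not data from any source):

```python
import numpy as np

A = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [1.0, 1.0]])            # 3 objectives depend on 2 controllable variables
y_target = np.array([1.0, 2.0, 2.0])  # ideal (target) value for each objective
w = np.array([1.0, 1.0, 4.0])         # relative importance of each objective

# Minimize sum_i w_i * (A x - y_target)_i^2 via a weighted least-squares solve.
W = np.sqrt(w)
x, *_ = np.linalg.lstsq(A * W[:, None], y_target * W, rcond=None)
print(x, A @ x)                        # controls and the resulting objective values
```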

Optimal design

Product and process design can be greatly improved using modern modeling, simulation and optimization techniques. The key question in optimal design is the measure of what is good or desirable about a design. Before looking for optimal designs, it is important to identify the characteristics which contribute the most to the overall value of the design. A good design typically involves multiple criteria/objectives such as capital cost/investment, operating cost, profit, quality and/or recovery of the product, efficiency, process safety, operation time etc. Therefore, in practical applications, the performance of process and product design is often measured with respect to multiple objectives. These objectives are typically conflicting, i.e. achieving the optimal value for one objective requires some compromise on one or more of the other objectives.

For example, in the paper industry, when designing a paper mill one can seek to decrease the amount of capital invested in the mill and to enhance the quality of the paper simultaneously. If the design of the paper mill is defined by large storage volumes and paper quality is defined by quality parameters, then the problem of optimal design of a paper mill can include objectives such as: i) minimization of the expected variation of those quality parameters from their nominal values, ii) minimization of the expected time of breaks and iii) minimization of the investment cost of the storage volumes. Here, the maximum volumes of the storage towers are the design variables. This example of optimal design of a paper mill is a simplification of the model used in [7]. Multi-objective design optimization has also been implemented in engineering systems, e.g. the design of nano-CMOS semiconductors,[8] the design of solar-powered irrigation systems,[9] optimization of sand mould systems,[10][11] engine design,[12][13] optimal sensor deployment[14] and optimal controller design.[15][16]

Process optimization

Multi-objective optimization has also been increasingly employed in chemical engineering. In Fiandaca and Fraga (2009),[17] a multi-objective genetic algorithm (MOGA) was used to optimize the design of the pressure swing adsorption process (a cyclic separation process). The design problem considered in that work involved the bi-objective maximization of nitrogen recovery and nitrogen purity, and the results provided a good approximation of the Pareto frontier with acceptable trade-offs between the objectives. A multi-objective problem for the thermal processing of food was solved by Sendín et al. (2010).[18] In that work, two case studies (a bi-objective and a tri-objective problem) with nonlinear dynamic models were tackled using a hybrid approach consisting of the weighted Tchebycheff and Normal Boundary Intersection approaches; this hybrid approach was successful in constructing a Pareto optimal set for the thermal processing of foods. The multi-objective optimization of combined carbon dioxide reforming and partial oxidation of methane was carried out by Ganesan et al. (2013).[19] In that work, the objective functions were methane conversion, carbon monoxide selectivity and the hydrogen to carbon monoxide ratio; the problem was tackled using the Normal Boundary Intersection (NBI) method in conjunction with two swarm-based techniques, the Gravitational Search Algorithm (GSA) and Particle Swarm Optimization (PSO). Similar multi-objective problems have been formulated and solved for applications involving chemical extraction[20] and bioethanol production processes.[21]

Radio resource management

The purpose of radio resource management is to satisfy the data rates that are requested by the users of a cellular network.[22] The main resources are time intervals, frequency blocks, and transmit powers. Each user has its own objective function that, for example, can represent some combination of the data rate, latency, and energy efficiency. These objectives are conflicting since the frequency resources are very scarce; thus, there is a need for tight spatial frequency reuse, which causes immense inter-user interference if not properly controlled. Multi-user MIMO techniques are nowadays used to reduce the interference by adaptive precoding. The network operator would like to provide both great coverage and high data rates; thus, the operator would like to find a Pareto optimal solution that balances the total network data throughput and the user fairness in an appropriate subjective manner.

Radio resource management is often solved by scalarization; that is, selection of a network utility function that tries to balance throughput and user fairness. The choice of utility function has a large impact on the computational complexity of the resulting single-objective optimization problem.[22] For example, the common utility of weighted sum rate gives an NP-hard problem with a complexity that scales exponentially with the number of users, while the weighted max-min fairness utility results in a quasi-convex optimization problem with only a polynomial scaling with the number of users.[23]

Electric power systems

Reconfiguration, by exchanging the functional links between the elements of the system, represents one of the most important measures which can improve the operational performance of a distribution system. The problem of optimization through the reconfiguration of a power distribution system has historically been defined as a single-objective problem with constraints. Since 1975, when Merlin and Back[24] introduced the idea of distribution system reconfiguration for active power loss reduction, many researchers have proposed diverse methods and algorithms to solve the reconfiguration problem as a single-objective problem. Some authors have proposed Pareto optimality based approaches (including active power losses and reliability indices as objectives). For this purpose, different artificial-intelligence-based methods have been used: microgenetic,[25] branch exchange,[26] particle swarm optimization[27] and non-dominated sorting genetic algorithm.[28]

Solving a multi-objective optimization problem

As there usually exist multiple Pareto optimal solutions for multi-objective optimization problems, what it means to solve such a problem is not as straightforward as it is for a conventional single-objective optimization problem. Therefore, different researchers have defined the term "solving a multi-objective optimization problem" in various ways. This section summarizes some of them and the contexts in which they are used. Many methods convert the original problem with multiple objectives into a single-objective optimization problem. This is called a scalarized problem. If scalarization is done carefully, Pareto optimality of the solutions obtained can be guaranteed.

Solving a multi-objective optimization problem is sometimes understood as approximating or computing all or a representative set of Pareto optimal solutions.[29][30]

When decision making is emphasized, the objective of solving a multi-objective optimization problem is referred to as supporting a decision maker in finding the most preferred Pareto optimal solution according to his/her subjective preferences.[1][31] The underlying assumption is that one solution to the problem must be identified to be implemented in practice. Here, a human decision maker (DM) plays an important role. The DM is expected to be an expert in the problem domain.

The most preferred solution can be found using different philosophies. Multi-objective optimization methods can be divided into four classes.[2] In so-called no-preference methods, no DM is expected to be available, but a neutral compromise solution is identified without preference information.[1] The other classes are the so-called a priori, a posteriori and interactive methods, and they all involve preference information from the DM in different ways.

In a priori methods, preference information is first asked from the DM and then a solution best satisfying these preferences is found. In a posteriori methods, a representative set of Pareto optimal solutions is first found and then the DM must choose one of them. In interactive methods, the decision maker is allowed to iteratively search for the most preferred solution. In each iteration of the interactive method, the DM is shown Pareto optimal solution(s) and describes how the solution(s) could be improved. The information given by the decision maker is then taken into account while generating new Pareto optimal solution(s) for the DM to study in the next iteration. In this way, the DM learns about the feasibility of his/her wishes and can concentrate on solutions that are interesting to him/her. The DM may stop the search whenever he/she wants to. More information and examples of different methods in the four classes are given in the following sections.

Scalarizing multi-objective optimization problems

Scalarizing a multi-objective optimization problem means formulating a single-objective optimization problem such that optimal solutions to the single-objective optimization problem are Pareto optimal solutions to the multi-objective optimization problem.[2] In addition, it is often required that every Pareto optimal solution can be reached with some parameters of the scalarization.[2] With different parameters for the scalarization, different Pareto optimal solutions are produced. A general formulation for a scalarization of a multi-objective optimization problem is thus


\begin{array}{ll}
\min & g(f_1(x),\ldots,f_k(x),\theta)\\
\text{s.t. } & x\in X_\theta,
\end{array}

where \theta is a vector parameter, the set X_\theta\subseteq X is a set depending on the parameter \theta and g:\mathbb R^{k+1}\to\mathbb R is a function.

Very well-known examples are the so-called

  • linear scalarization

\min_{x\in X} \sum_{i=1}^k w_if_i(x),
where the weights of the objectives w_i>0 are the parameters of the scalarization, and the
  • \epsilon-constraint method (see, e.g.[1])

\begin{array}{ll}
\min & f_j(x)\\
\text{s.t. }&x \in X\\
            &f_i(x)\leq \epsilon_i \text{ for }i\in\{1,\ldots,k\}\setminus\{j\},
\end{array}
where the upper bounds \epsilon_i are parameters as above and f_j is the objective to be minimized. Both of these scalarizations are illustrated in the code sketch below.
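The following sketch applies the two scalarizations with a general-purpose solver; the bi-objective problem f1(x) = x^2, f2(x) = (x - 2)^2 on X = [-5, 5] and all weights and bounds are illustrative choices only, and SciPy is assumed to be available:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative bi-objective problem: f1(x) = x^2, f2(x) = (x - 2)^2 on X = [-5, 5].
f = [lambda x: float(x[0] ** 2), lambda x: float((x[0] - 2.0) ** 2)]
bounds = [(-5.0, 5.0)]

def weighted_sum(w):
    """Linear scalarization: minimize the weighted sum of the objectives."""
    res = minimize(lambda x: sum(wi * fi(x) for wi, fi in zip(w, f)),
                   x0=[0.0], bounds=bounds)
    return res.x

def eps_constraint(j, eps):
    """Epsilon-constraint method: minimize f_j subject to f_i <= eps[i] for i != j."""
    cons = [{'type': 'ineq', 'fun': (lambda x, i=i: eps[i] - f[i](x))}
            for i in range(len(f)) if i != j]
    res = minimize(f[j], x0=[1.5], bounds=bounds, constraints=cons, method='SLSQP')
    return res.x

print(weighted_sum([0.5, 0.5]))        # equal weights give a compromise near x = 1
print(eps_constraint(0, {1: 1.0}))     # minimize f1 subject to f2(x) <= 1, so x -> 1
```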

Somewhat more advanced examples are the achievement scalarizing problems of Wierzbicki.[32] One example of an achievement scalarizing problem can be formulated as


\begin{array}{ll}
\min & \max_{i=1,\ldots,k} \left[ \frac{f_i(x)-\bar z_i}{z^{\text{nad}}_i-z_i^{\text{utopian}}}\right] + \rho\sum_{i=1}^k\frac{f_i(x)}{z_i^{\text{nad}}-z_i^{\text{utopian}}}\\
\text{subject to }& x\in X,
\end{array}

where the term \rho\sum_{i=1}^k\frac{f_i(x)}{z_i^{\text{nad}}-z_i^{\text{utopian}}} is called the augmentation term, \rho>0 is a small constant, and z^{\text{nad}} and z^{\text{utopian}} are the nadir and utopian objective vectors, respectively. In the above problem, the parameter is the so-called reference point \bar z, which represents objective function values preferred by the decision maker.
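A minimal sketch of minimizing such an achievement scalarizing function, reusing the illustrative bi-objective problem from the previous sketch; the reference, nadir and utopian vectors are hand-picked illustrative values, and a reasonably recent SciPy is assumed (the bounded, derivative-free Powell method is used because of the nonsmooth max term):

```python
import numpy as np
from scipy.optimize import minimize

def achievement(fvals, z_ref, z_nad, z_uto, rho=1e-6):
    """Augmented achievement scalarizing value of one outcome for reference point z_ref."""
    fvals = np.asarray(fvals, dtype=float)
    scale = np.asarray(z_nad, dtype=float) - np.asarray(z_uto, dtype=float)
    return float(np.max((fvals - z_ref) / scale) + rho * np.sum(fvals / scale))

# Illustrative bi-objective problem f1 = x^2, f2 = (x - 2)^2 on [-5, 5].
f = lambda x: np.array([x[0] ** 2, (x[0] - 2.0) ** 2])
z_ref = np.array([0.5, 1.5])                 # decision maker's reference point (assumed)
z_nad, z_uto = np.array([4.0, 4.0]), np.array([-1e-6, -1e-6])

res = minimize(lambda x: achievement(f(x), z_ref, z_nad, z_uto),
               x0=[1.0], bounds=[(-5.0, 5.0)], method='Powell')
print(res.x, f(res.x))   # a Pareto optimal outcome reflecting the reference point
```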

For example, portfolio optimization is often conducted in terms of mean-variance analysis. In this context, the efficient set is a subset of the portfolios parametrized by the portfolio mean return \mu_P in the problem of choosing portfolio shares so as to minimize the portfolio's variance of return \sigma_P subject to a given value of \mu_P; see Mutual fund separation theorem for details. Alternatively, the efficient set can be specified by choosing the portfolio shares so as to maximize the function \mu_P - b \sigma_P ; the set of efficient portfolios consists of the solutions as b ranges from zero to infinity.
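A minimal sketch of this scalarization (maximizing \mu_P - b \sigma_P), using made-up expected returns and covariances for three assets and, purely for simplicity, long-only weights that sum to one; SciPy is assumed to be available:

```python
import numpy as np
from scipy.optimize import minimize

# Made-up expected returns and covariance matrix for three assets (illustration only).
mu = np.array([0.08, 0.12, 0.10])
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.15, 0.03],
                  [0.01, 0.03, 0.12]])

def efficient_portfolio(b):
    """Maximize mu_P - b * sigma_P over long-only portfolios whose weights sum to one."""
    n = len(mu)
    neg_obj = lambda w: -(mu @ w - b * np.sqrt(w @ Sigma @ w))
    cons = [{'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0}]
    res = minimize(neg_obj, x0=np.full(n, 1.0 / n), bounds=[(0.0, 1.0)] * n,
                   constraints=cons, method='SLSQP')
    return res.x

# Sweeping the risk-aversion parameter b traces out points of the efficient set.
for b in (0.1, 1.0, 5.0):
    print(b, np.round(efficient_portfolio(b), 3))
```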

No-preference methods

Multi-objective optimization methods that do not require any preference information to be explicitly articulated by a decision maker can be classified as no-preference methods.[2] A well-known example is the method of global criterion,[33] in which a scalarized problem of the form


\begin{align}
\min&\|f(x)-z^{ideal}\|\\
\text{s.t. }&x\in X
\end{align}

is solved. In the above problem, \|\cdot\| can be any L_p norm, with common choices including L_1, L_2 and L_\infty.[1] The method of global criterion is sensitive to the scaling of the objective functions, and thus, it is recommended that the objectives are normalized into a uniform, dimensionless scale.[1][31]
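A minimal sketch of the method of global criterion with the L_2 norm, again on the illustrative problem f1(x) = x^2, f2(x) = (x - 2)^2 on [-5, 5], whose ideal point is (0, 0); SciPy is assumed:

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: np.array([x[0] ** 2, (x[0] - 2.0) ** 2])
z_ideal = np.array([0.0, 0.0])     # each objective's individual minimum is 0 here

res = minimize(lambda x: np.linalg.norm(f(x) - z_ideal),   # L2 distance to the ideal
               x0=[0.5], bounds=[(-5.0, 5.0)])
print(res.x)   # the feasible solution whose outcome is closest to the ideal point
```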

A priori methods

A priori methods require that sufficient preference information is expressed before the solution process.[2] Well-known examples of a priori methods include the utility function method, lexicographic method, and goal programming.

In the utility function method, it is assumed that the decision maker's utility function is available. A mapping u\colon Y\rightarrow\mathbb{R} is a utility function if for all \mathbf{y}^1,\mathbf{y}^2\in Y it holds that u(\mathbf{y}^1)>u(\mathbf{y}^2) if the decision maker prefers \mathbf{y}^1 to \mathbf{y}^2, and u(\mathbf{y}^1)=u(\mathbf{y}^2) if the decision maker is indifferent between \mathbf{y}^1 and \mathbf{y}^2. The utility function specifies an ordering of the decision vectors (recall that vectors can be ordered in many different ways). Once u is obtained, it suffices to solve

    \max\;u(\mathbf{f}(\mathbf{x}))\text{ subject to }\mathbf{x}\in X,

but in practice it is very difficult to construct a utility function that would accurately represent the decision maker's preferences,[1] particularly since the Pareto front is unknown before the optimization begins.
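For illustration only, assuming a hypothetical utility u(y) = -(2 y_1 + y_2) for the same illustrative problem as in the earlier sketches, the utility function method reduces to a single single-objective solve (SciPy assumed):

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: np.array([x[0] ** 2, (x[0] - 2.0) ** 2])
u = lambda y: -(2.0 * y[0] + y[1])         # assumed utility (higher is better)

res = minimize(lambda x: -u(f(x)), x0=[0.0], bounds=[(-5.0, 5.0)])
print(res.x)    # the most preferred solution under this particular (assumed) utility
```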

The lexicographic method assumes that the objectives can be ranked in order of importance. We can assume, without loss of generality, that the objective functions are in the order of importance so that f_1 is the most important and f_k the least important to the decision maker. The lexicographic method consists of solving a sequence of single-objective optimization problems of the form


\begin{align}
    \min&f_l(\mathbf{x})\\
    \text{s.t. }&f_j(\mathbf{x})\leq\mathbf{y}^*_j,\;j=1,\dotsc,l-1,\\
    &\mathbf{x}\in X,
\end{align}

where \mathbf{y}^*_j is the optimal value of the above problem with l=j. Thus, \mathbf{y}^*_1:=\min\{f_1(\mathbf{x})\mid\mathbf{x}\in X\}, and each new problem in the sequence adds one new constraint as l goes from 1 to k.
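A minimal sketch of this sequence of problems, for two illustrative objectives ordered by importance (f1 = (x - 1)^2 most important, f2 = (x - 3)^2 least important, on [-5, 5]; SciPy assumed):

```python
import numpy as np
from scipy.optimize import minimize

fs = [lambda x: (x[0] - 1.0) ** 2, lambda x: (x[0] - 3.0) ** 2]
bounds = [(-5.0, 5.0)]

def lexicographic(fs, x0):
    x, cons = np.array(x0), []
    for fl in fs:
        res = minimize(fl, x0=x, bounds=bounds, constraints=cons, method='SLSQP')
        x = res.x
        # Fix the optimal value of this objective (plus a small numerical tolerance)
        # as a constraint for all later, less important objectives.
        cons = cons + [{'type': 'ineq',
                        'fun': (lambda x, fl=fl, y=res.fun: y + 1e-8 - fl(x))}]
    return x

print(lexicographic(fs, [0.0]))   # x stays at 1: f1's optimum leaves no freedom for f2
```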

A posteriori methods

A posteriori methods aim at producing all the Pareto optimal solutions or a representative subset of the Pareto optimal solutions. Most a posteriori methods fall into one of the following two classes: mathematical programming-based a posteriori methods, where an algorithm is repeated and each run of the algorithm produces one Pareto optimal solution, and evolutionary algorithms, where one run of the algorithm produces a set of Pareto optimal solutions.

Well-known examples of mathematical programming-based a posteriori methods are the Normal Boundary Intersection (NBI),[34] Modified Normal Boundary Intersection (NBIm),[35] Normal Constraint (NC),[36][37] Successive Pareto Optimization (SPO)[38] and Directed Search Domain (DSD)[39] methods, which solve the multi-objective optimization problem by constructing several scalarizations. The solution to each scalarization yields a Pareto optimal solution, whether locally or globally. The scalarizations of the NBI, NBIm, NC and DSD methods are constructed with the aim of obtaining evenly distributed Pareto points that give a good approximation of the real set of Pareto points.

Evolutionary algorithms are popular approaches to generating Pareto optimal solutions to a multi-objective optimization problem. Currently, most evolutionary multi-objective optimization (EMO) algorithms apply Pareto-based ranking schemes. Evolutionary algorithms such as the Non-dominated Sorting Genetic Algorithm-II (NSGA-II)[40] and Strength Pareto Evolutionary Algorithm 2 (SPEA-2)[41] have become standard approaches, although some schemes based on particle swarm optimization and simulated annealing[42] are significant. The main advantage of evolutionary algorithms, when applied to solve multi-objective optimization problems, is the fact that they typically generate sets of solutions, allowing computation of an approximation of the entire Pareto front. Their main disadvantages are their lower speed and the fact that Pareto optimality of the generated solutions cannot be guaranteed: it is only known that none of the generated solutions dominates the others.
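As a brief illustration of the Pareto-based ranking idea mentioned above (a plain sketch, not the bookkeeping of any specific published algorithm such as NSGA-II), a population of objective vectors can be peeled into successive non-dominated fronts:

```python
import numpy as np

def dominates(z1, z2):
    """True if z1 is no worse in every objective and strictly better in at least one."""
    z1, z2 = np.asarray(z1, dtype=float), np.asarray(z2, dtype=float)
    return bool(np.all(z1 <= z2) and np.any(z1 < z2))

def nondominated_fronts(Z):
    """Rank a population of objective vectors into successive non-dominated fronts."""
    remaining = list(range(len(Z)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(Z[j], Z[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts          # fronts[0] holds the indices of the current non-dominated set

Z = [[1, 4], [2, 3], [3, 3], [4, 1], [3, 4]]
print(nondominated_fronts(Z))     # [[0, 1, 3], [2, 4]]
```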

Commonly known a posteriori methods are listed below:

Interactive methods

In interactive methods, the solution process is iterative and the decision maker continuously interacts with the method when searching for the most preferred solution (see e.g.[1][47]). In other words, the decision maker is expected to express preferences at each iteration in order to get Pareto optimal solutions that are of interest to him/her and learn what kind of solutions are attainable. The following steps are commonly present in interactive methods:[47]

  1. initialize (e.g. calculate ideal and approximated nadir objective vectors and show them to the decision maker)
  2. generate a Pareto optimal starting point (by using e.g. some no-preference method or solution given by the decision maker)
  3. ask for preference information from the decision maker (e.g. aspiration levels or number of new solutions to be generated)
  4. generate new Pareto optimal solution(s) according to the preferences and show it/them and possibly some other information about the problem to the decision maker
  5. if several solutions were generated, ask the decision maker to select the best solution so far
  6. stop, if the decision maker wants to; otherwise, go to step 3).

Above, aspiration levels refer to desirable objective function values forming a reference point. Instead of mathematical convergence that is often used as a stopping criterion in mathematical optimization methods, a psychological convergence is emphasized in interactive methods. Generally speaking, a method is terminated when the decision maker is confident that (s)he has found the most preferred solution available.

Different interactive methods involve different types of preference information. For example, three types can be identified: methods based on 1) trade-off information, 2) reference points and 3) classification of objective functions.[47] On the other hand, a fourth type, based on generating a small sample of solutions, is included in [48] and [49]. An example of an interactive method utilizing trade-off information is the Zionts-Wallenius method,[50] where the decision maker is shown several objective trade-offs at each iteration, and (s)he is expected to say whether (s)he likes, dislikes or is indifferent with respect to each trade-off. In reference point based methods (see, e.g.,[51][52]), the decision maker is expected at each iteration to specify a reference point consisting of desired values for each objective; a corresponding Pareto optimal solution (or solutions) is then computed and shown to him/her for analysis. In classification based interactive methods, the decision maker is assumed to give preferences in the form of classifying the objectives at the current Pareto optimal solution into different classes indicating how the values of the objectives should be changed to get a more preferred solution. The classification information given is then taken into account when new (more preferred) Pareto optimal solution(s) are computed. In the satisficing trade-off method (STOM),[53] three classes are used: objectives whose values 1) should be improved, 2) can be relaxed, and 3) are acceptable as such. In the NIMBUS method,[54][55] two additional classes are also used: objectives whose values 4) should be improved until a given bound and 5) can be relaxed until a given bound.
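A rough sketch of a reference point based interactive loop is given below; the decision maker is simulated here by a hard-coded sequence of reference points, and the achievement scalarizing function, problem and parameter values are the illustrative ones used in the earlier sketches (SciPy assumed):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative bi-objective problem and hand-picked nadir/utopian vectors.
f = lambda x: np.array([x[0] ** 2, (x[0] - 2.0) ** 2])
z_nad, z_uto = np.array([4.0, 4.0]), np.array([-1e-6, -1e-6])

def solve_for_reference_point(z_ref, rho=1e-6):
    """One iteration: project the reference point onto the Pareto front via an
    achievement scalarizing function (Wierzbicki style)."""
    scale = z_nad - z_uto
    asf = lambda x: np.max((f(x) - z_ref) / scale) + rho * np.sum(f(x) / scale)
    return minimize(asf, x0=[1.0], bounds=[(-5.0, 5.0)], method='Powell').x

# Simulated preference information from the decision maker, one reference point per
# iteration; in a real interactive method these would be elicited, not hard-coded.
for z_ref in (np.array([0.0, 0.0]), np.array([0.2, 1.0]), np.array([1.0, 0.2])):
    x = solve_for_reference_point(z_ref)
    print('reference', z_ref, '-> outcome', np.round(f(x), 3))
```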

Hybrid methods

Different hybrid methods exist, but here we consider hybridizing MCDM (multi-criteria decision making) and EMO (evolutionary multi-objective optimization). A hybrid algorithm in the context of multi-objective optimization is a combination of algorithms/approaches from these two fields (see, e.g.,[47]). Hybrid algorithms of EMO and MCDM are mainly used to overcome the shortcomings of one field by utilizing the strengths of the other. Several types of hybrid algorithms have been proposed in the literature, e.g. incorporating MCDM approaches into EMO algorithms as a local search operator, or using MCDM approaches to lead a DM to the most preferred solution(s). A local search operator is mainly used to enhance the rate of convergence of EMO algorithms.

The roots of hybrid multi-objective optimization can be traced to the first Dagstuhl seminar organized in November 2004. There, some of the leading researchers in EMO (Professor Kalyanmoy Deb, Professor Jürgen Branke etc.) and MCDM (Professor Kaisa Miettinen, Professor Ralph E. Steuer etc.) realized the potential in combining ideas and approaches of the MCDM and EMO fields to prepare hybrids of them. Subsequently, many more Dagstuhl seminars have been arranged to foster collaboration. Recently, hybrid multi-objective optimization has become an important theme in several international conferences in the area of EMO and MCDM (see, e.g.,[56][57]).

Visualization of the Pareto front

Visualization of the Pareto front is one of the a posteriori preference techniques of multi-objective optimization. The a posteriori preference techniques (see, for example,[1]) provide an important class of multi-objective optimization techniques. Usually the a posteriori preference techniques include four steps: (1) the computer approximates the Pareto front, i.e. the Pareto optimal set in the objective space; (2) the decision maker studies the Pareto front approximation; (3) the decision maker identifies the preferred point on the Pareto front; (4) the computer provides the Pareto optimal decision whose output coincides with the objective point identified by the decision maker. From the point of view of the decision maker, the second step is the most complicated one. There are two main approaches to informing the decision maker. First, a number of points of the Pareto front can be provided in the form of a list (an interesting discussion and references are given in [58]) or using heatmaps.[59] An alternative idea is to visualize the Pareto front itself.

Visualization in bi-objective problems: tradeoff curve

In the case of bi-objective problems, informing the decision maker about the Pareto front is usually carried out by its visualization: the Pareto front, often named the tradeoff curve in this case, can be drawn in the objective plane. The tradeoff curve gives full information on objective values and on objective tradeoffs, which indicate how improving one objective is related to deteriorating the second one while moving along the curve. The decision maker takes this information into account while specifying the preferred Pareto optimal objective point. The idea of approximating and visualizing the Pareto front was introduced for linear bi-objective decision problems by S. Gass and T. Saaty.[60] This idea was developed and applied in environmental problems by J.L. Cohon.[61] A review of methods for approximating the Pareto front for various decision problems with a small number of objectives (mainly two) is provided in [62].
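For the illustrative bi-objective problem used in the earlier sketches (f1(x) = x^2, f2(x) = (x - 2)^2, whose Pareto optimal decisions are exactly x in [0, 2]), the tradeoff curve can be drawn directly; matplotlib is assumed to be available:

```python
import numpy as np
import matplotlib.pyplot as plt

# The Pareto optimal decisions of this problem are x in [0, 2]; plotting their
# outcomes in the objective plane gives the tradeoff curve.
x = np.linspace(0.0, 2.0, 200)
f1, f2 = x ** 2, (x - 2.0) ** 2

plt.plot(f1, f2)
plt.xlabel('f1(x) = x^2')
plt.ylabel('f2(x) = (x - 2)^2')
plt.title('Tradeoff curve (Pareto front) in the objective plane')
plt.show()
```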

Visualization in high-order multi-objective optimization problems

There are two generic ideas for visualizing the Pareto front in high-order multi-objective decision problems (problems with more than two objectives). One of them, which is applicable in the case of a relatively small number of objective points that represent the Pareto front, is based on using the visualization techniques developed in statistics (various diagrams, etc.; see the corresponding subsection below). The second idea is to display bi-objective cross-sections (slices) of the Pareto front. It was introduced by W.S. Meisel in 1973,[63] who argued that such slices inform the decision maker on objective tradeoffs. Figures that display a series of bi-objective slices of the Pareto front for three-objective problems are known as decision maps. They give a clear picture of the tradeoffs between three criteria. Disadvantages of such an approach are related to the two following facts. First, the computational procedures for constructing the bi-objective slices of the Pareto front are not stable since the Pareto front is usually not stable. Secondly, it is applicable only in the case of three objectives. In the 1980s, the idea of W.S. Meisel was implemented in a different form, namely as the Interactive Decision Maps (IDM) technique.[64]

Multi-objective optimization software

Name (alphabetically) | License | Brief info
BENSOLVE | GPL | free VLP (vector linear programs) solver; an implementation of Benson's algorithm for solving vector linear programs,[65] in particular multiobjective linear programs
Distributed Evolutionary Algorithms in Python | LGPL | A novel evolutionary computation framework for rapid prototyping and testing of ideas. It seeks to make algorithms explicit and data structures transparent. It works in perfect harmony with different parallelisation mechanisms.
MOEA Framework | LGPL | Java-based framework for multi-objective optimization with real, discrete, grammatical, or program representations.
Decisionarium | free for academic | global space for decision support (for academic use)
GUIMOO | LGPL | Graphical User Interface for Multi Objective Optimization from INRIA
IDSS Software | free for non-profit activities | MCDM software of the Laboratory of Intelligent Decision Support Systems (University of Poznan, Poland)
IND-NIMBUS | proprietary | implementation of the interactive NIMBUS method that can be connected with different simulation and modelling tools
interalg | BSD | solver with specifiable accuracy from OpenOpt, a free universal cross-platform numerical optimization framework written in Python using NumPy arrays; see its MOP page and the other problems involved
jMetal | LGPL | Java-based framework for multi-objective optimization with metaheuristics
MakeItRational | proprietary | AHP-based decision software
Midacomo | proprietary/free | Multi-Objective extension for MIDACO (in Matlab, Python, C/C++ and Fortran); solves (constrained) problems with continuous, discrete and mixed integer variables
MOIP_AIRA | free for academic | An improved recursive algorithm for multi-objective integer programming which uses an extended LP file format with an arbitrary number of objectives and returns the set of nondominated solutions.[66]
Collection of Multiple Criteria Decision Support Software | various | by Dr. Roland Weistroffer
MGO (Multiple Goal Optimization) | GNU Affero GPLv3 | MGO is a Scala library based on the cake pattern for multi-objective evolutionary algorithms; it also works with OpenMOLE using NetLogo and other agent-based model software (see the tutorial "EA using Netlogo")
WWW-NIMBUS | free for academic | for solving nonlinear (and even nondifferentiable) multiobjective optimization problems in an interactive way; operates via the Internet
ParadisEO-MOEO | CeCILL | module specifically devoted to multiobjective optimization in ParadisEO, a software framework for the design and implementation of metaheuristics, hybrid methods as well as parallel and distributed models, from INRIA
PaGMO / PyGMO | free | Parallel Global Multiobjective Optimizer (and its Python alter ego PyGMO) offers user-friendly access to a wide array of global and local optimization algorithms and problems. The main purpose of the software is to provide a parallelization engine common to all algorithms through the 'generalized island model'. Initially developed within the European Space Agency, the code was intended to help the automated design of interplanetary trajectories and spacecraft transfers in general. The user can implement his own problem and algorithm both in C++ and in Python.

Weistroffer et al. have written a book chapter[67] on multi-objective optimization software.


References

  1. (reference unavailable)
  2. (reference unavailable)
  3. (reference unavailable)
  4. (reference unavailable)
  5. (reference unavailable)
  6. (reference unavailable)
  7. (reference unavailable)
  8. (reference unavailable)
  9. (reference unavailable)
  10. (reference unavailable)
  11. (reference unavailable)
  12. (reference unavailable)
  13. (reference unavailable)
  14. (reference unavailable)
  15. (reference unavailable)
  16. (reference unavailable)
  17. (reference unavailable)
  18. (reference unavailable)
  19. (reference unavailable)
  20. (reference unavailable)
  21. (reference unavailable)
  22. E. Björnson and E. Jorswieck, "Optimal Resource Allocation in Coordinated Multi-Cell Systems", Foundations and Trends in Communications and Information Theory, vol. 9, no. 2-3, pp. 113–381, 2013.
  23. Z.-Q. Luo and S. Zhang, "Dynamic spectrum management: Complexity and duality", IEEE Journal of Selected Topics in Signal Processing, vol. 2, no. 1, pp. 57–73, 2008.
  24. Merlin, A.; Back, H. "Search for a Minimal-Loss Operating Spanning Tree Configuration in an Urban Power Distribution System". In Proceedings of the 1975 Fifth Power Systems Computer Conference (PSCC), Cambridge, UK, 1–5 September 1975; pp. 1–18.
  25. Mendoza, J.E.; Lopez, M.E.; Coello, C.A.; Lopez, E.A. "Microgenetic multiobjective reconfiguration algorithm considering power losses and reliability indices for medium voltage distribution network". IET Gener. Transm. Distrib. 2009, 3, 825–840.
  26. Bernardon, D.P.; Garcia, V.J.; Ferreira, A.S.Q.; Canha, L.N. "Multicriteria distribution network reconfiguration considering subtransmission analysis". IEEE Trans. Power Deliv. 2010, 25, 2684–2691.
  27. Amanulla, B.; Chakrabarti, S.; Singh, S.N. "Reconfiguration of power distribution systems considering reliability and power loss". IEEE Trans. Power Deliv. 2012, 27, 918–926.
  28. Tomoiagă, B.; Chindriş, M.; Sumper, A.; Sudria-Andreu, A.; Villafafila-Robles, R. "Pareto Optimal Reconfiguration of Power Distribution Systems Using a Genetic Algorithm Based on NSGA-II". Energies 2013, 6, 1439–1455.
  29. (reference unavailable)
  30. (reference unavailable)
  31. (reference unavailable)
  32. (reference unavailable)
  33. (reference unavailable)
  34. (reference unavailable)
  35. (reference unavailable)
  36. (reference unavailable)
  37. (reference unavailable)
  38. (reference unavailable)
  39. (reference unavailable)
  40. (reference unavailable)
  41. Zitzler, E.; Laumanns, M.; Thiele, L. "SPEA2: Improving the Performance of the Strength Pareto Evolutionary Algorithm", Technical Report 103, Computer Engineering and Communication Networks Lab (TIK), Swiss Federal Institute of Technology (ETH) Zurich, 2001.
  42. (reference unavailable)
  43. (reference unavailable)
  44. (reference unavailable)
  45. (reference unavailable)
  46. (reference unavailable)
  47. (reference unavailable)
  48. (reference unavailable)
  49. (reference unavailable)
  50. (reference unavailable)
  51. (reference unavailable)
  52. (reference unavailable)
  53. (reference unavailable)
  54. (reference unavailable)
  55. (reference unavailable)
  56. (reference unavailable)
  57. (reference unavailable)
  58. (reference unavailable)
  59. (reference unavailable)
  60. (reference unavailable)
  61. (reference unavailable)
  62. (reference unavailable)
  63. (reference unavailable)
  64. (reference unavailable)
  65. (reference unavailable)
  66. (reference unavailable)
  67. (reference unavailable)
