Linear separability

In Euclidean geometry, linear separability is a geometric property of a pair of sets of points. This is most easily visualized in two dimensions (the Euclidean plane) by thinking of one set of points as being colored blue and the other set of points as being colored red. These two sets are linearly separable if there exists at least one line in the plane with all of the blue points on one side of the line and all the red points on the other side. This idea immediately generalizes to higher-dimensional Euclidean spaces if the line is replaced by a hyperplane.

The problem of determining whether a pair of sets is linearly separable, and of finding a separating hyperplane when one exists, arises in several areas. In statistics and machine learning, classifying certain types of data is a problem for which good algorithms based on this concept exist.

Mathematical definition

Let X_{0} and X_{1} be two sets of points in an n-dimensional Euclidean space. Then X_{0} and X_{1} are linearly separable if there exist n + 1 real numbers w_{1}, w_{2}, ..., w_{n}, k such that every point x \in X_{0} satisfies \sum^{n}_{i=1} w_{i}x_{i} > k and every point x \in X_{1} satisfies \sum^{n}_{i=1} w_{i}x_{i} < k, where x_{i} is the i-th component of x.

Equivalently, two finite sets of points are linearly separable precisely when their respective convex hulls are disjoint (colloquially, do not overlap).
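
Because the definition above asks only for real numbers w_{1}, ..., w_{n}, k satisfying a system of linear inequalities, separability can be tested with an off-the-shelf linear-programming solver. The following Python sketch does this with scipy.optimize.linprog; the helper name find_separator is ours, not part of the article:

    import numpy as np
    from scipy.optimize import linprog

    def find_separator(X0, X1):
        """Return (w, k) with w.x > k on X0 and w.x < k on X1, or None.

        Any strictly separating (w, k) can be rescaled so that
        w.x - k >= 1 on X0 and w.x - k <= -1 on X1, so the sets are
        linearly separable exactly when this linear program is feasible.
        """
        X0, X1 = np.asarray(X0, dtype=float), np.asarray(X1, dtype=float)
        n = X0.shape[1]
        # Unknowns z = (w_1, ..., w_n, k); linprog enforces A_ub @ z <= b_ub.
        A_ub = np.vstack([
            np.hstack([-X0, np.ones((len(X0), 1))]),   # -(w.x - k) <= -1
            np.hstack([X1, -np.ones((len(X1), 1))]),   #   w.x - k  <= -1
        ])
        b_ub = -np.ones(len(X0) + len(X1))
        res = linprog(np.zeros(n + 1), A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * (n + 1))
        return (res.x[:n], res.x[n]) if res.success else None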

Examples

Three non-collinear points in two classes ('+' and '-') are always linearly separable in two dimensions. This is illustrated by the three examples in the following figure (the all '+' case is not shown, but is similar to the all '-' case):

[Figures VC1.svg, VC2.svg, VC3.svg: three linearly separable labelings of three non-collinear points]

However, not all sets of four points, no three collinear, are linearly separable in two dimensions. The following example would need two straight lines and thus is not linearly separable:

[Figure VC4.svg: four points, no three collinear, that are not linearly separable]
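
Reusing the find_separator sketch from the definition section, this four-point configuration (the XOR pattern on the corners of a square) is reported as inseparable:

    # Opposite corners of the unit square share a class: the XOR pattern.
    plus = [(0, 0), (1, 1)]
    minus = [(0, 1), (1, 0)]
    print(find_separator(plus, minus))  # None: no separating line exists

    # By contrast, a labeling of three non-collinear points is separable.
    print(find_separator([(0, 0), (1, 1)], [(0, 1)]) is not None)  # True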

Notice that three collinear points labeled "+", "-", "+" (with the "-" point lying between the two "+" points) are also not linearly separable.

Linear separability of Boolean functions in n variables

A Boolean function in n variables can be thought of as an assignment of 0 or 1 to each vertex of a Boolean hypercube in n dimensions. This gives a natural division of the vertices into two sets. The Boolean function is said to be linearly separable provided these two sets of points are linearly separable.

Number of linearly separable Boolean functions in each dimension[1] (sequence A000609 in OEIS)

    Number of variables    Linearly separable Boolean functions
    2                      14
    3                      104
    4                      1882
    5                      94572
    6                      15028134
    7                      8378070864
    8                      17561539552946
    9                      144130531453121108
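
The first rows of this table can be reproduced by brute force: enumerate all 2^(2^n) Boolean functions of n variables and test the pair of vertex sets each one induces, reusing the find_separator sketch above. (A constant function leaves one of the two sets empty and counts as separable.) A minimal check in Python:

    from itertools import product

    def count_separable(n):
        """Count Boolean functions of n variables whose 1-vertices and
        0-vertices of the hypercube are linearly separable."""
        vertices = list(product([0, 1], repeat=n))
        count = 0
        for values in product([0, 1], repeat=len(vertices)):
            X0 = [v for v, bit in zip(vertices, values) if bit == 1]
            X1 = [v for v, bit in zip(vertices, values) if bit == 0]
            if not X0 or not X1 or find_separator(X0, X1) is not None:
                count += 1
        return count

    print(count_separable(2))  # 14
    print(count_separable(3))  # 104 (256 small linear programs)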

Support vector machines

[Figure: H1 does not separate the sets. H2 does, but only with a small margin. H3 separates them with the maximum margin.]

Classifying data is a common task in machine learning. Suppose some data points, each belonging to one of two sets, are given and we wish to create a model that will decide which set a new data point will be in. In the case of support vector machines, a data point is viewed as a p-dimensional vector (a list of p numbers), and we want to know whether we can separate such points with a (p − 1)-dimensional hyperplane. This is called a linear classifier. There are many hyperplanes that might classify (separate) the data. One reasonable choice as the best hyperplane is the one that represents the largest separation, or margin, between the two sets. So we choose the hyperplane so that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as the maximum-margin hyperplane and the linear classifier it defines is known as a maximum margin classifier.

More formally, given some training data \mathcal{D}, a set of n points of the form

\mathcal{D} = \left\{ (\mathbf{x}_i, y_i)\mid\mathbf{x}_i \in \mathbb{R}^p,\, y_i \in \{-1,1\}\right\}_{i=1}^n

where each y_{i} is either 1 or −1, indicating the set to which the point \mathbf{x}_i belongs. Each \mathbf{x}_i is a p-dimensional real vector. We want to find the maximum-margin hyperplane that divides the points having y_i=1 from those having y_i=-1. Any hyperplane can be written as the set of points \mathbf{x} satisfying

\mathbf{w}\cdot\mathbf{x} - b = 0,

where \cdot denotes the dot product and {\mathbf{w}} the (not necessarily normalized) normal vector to the hyperplane. The parameter \tfrac{b}{\|\mathbf{w}\|} determines the offset of the hyperplane from the origin along the normal vector {\mathbf{w}}.

If the training data are linearly separable, we can select two parallel hyperplanes that separate the data such that no points lie between them, and then maximize the distance between them; the maximum-margin hyperplane is the one halfway between these two.
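
As a concrete sketch of this in practice, scikit-learn's SVC with a linear kernel fits such a classifier; the toy data points below are illustrative, and a very large C is used to approximate the hard-margin (maximum-margin) case on separable data:

    import numpy as np
    from sklearn.svm import SVC

    # Two linearly separable clusters in the plane, labeled -1 and +1.
    X = np.array([[0.0, 0.0], [0.5, 0.5], [3.0, 3.0], [3.5, 2.5]])
    y = np.array([-1, -1, 1, 1])

    # A very large C leaves essentially no slack, approximating the
    # hard-margin maximum-margin classifier.
    clf = SVC(kernel="linear", C=1e6).fit(X, y)

    w = clf.coef_[0]        # normal vector w in  w.x - b = 0
    b = -clf.intercept_[0]  # sklearn uses w.x + intercept, so b = -intercept
    print(w, b)
    print(2 / np.linalg.norm(w))  # width of the slab between the two hyperplanes
    print(clf.predict([[1.0, 1.0], [3.0, 2.0]]))  # [-1  1]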

References

  1. Gruzling, Nicolle (2006). Linear separability of the vertices of an n-dimensional hypercube. M.Sc. thesis, University of Northern British Columbia.