Transitive reduction

In mathematics, a transitive reduction of a directed graph is a graph with as few edges as possible that has the same reachability relation as the given graph. Equivalently, the given graph and its transitive reduction should have the same transitive closure as each other, and its transitive reduction should have as few edges as possible among all graphs with this property. Transitive reductions were introduced by Aho, Garey & Ullman (1972), who provided tight bounds on the computational complexity of constructing them.

If a given graph is a finite directed acyclic graph, its transitive reduction is unique, and is a subgraph of the given graph. However, uniqueness is not guaranteed for graphs with cycles, and for infinite graphs not even existence is guaranteed. The closely related concept of a minimum equivalent graph is a subgraph of the given graph that has the same reachability relation and as few edges as possible.[1] For finite directed acyclic graphs, the minimum equivalent graph is the same as the transitive reduction. However, for graphs that may contain cycles, minimum equivalent graphs are NP-hard to construct, while transitive reductions can still be constructed in polynomial time. Transitive reductions can also be defined for more abstract binary relations on sets, by interpreting the pairs of the relation as arcs in a graph.

In directed acyclic graphs

The transitive reduction of a finite directed graph G is a graph with the fewest possible edges that has the same reachability relation as the original graph. That is, if there is a path from a vertex x to a vertex y in graph G, there must also be a path from x to y in the transitive reduction of G, and vice versa. The following image displays drawings of graphs corresponding to a non-transitive binary relation (on the left) and its transitive reduction (on the right).

The transitive reduction of a finite directed acyclic graph G is unique, and consists of the edges of G that form the only path between their endpoints. In particular, it is always a subgraph of the given graph. For this reason, the transitive reduction coincides with the minimum equivalent graph in this case.
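
A minimal sketch of this characterization, assuming the DAG is given as a Python dictionary mapping each vertex to a set of successors (the representation and function names are choices made here, not part of the article): an edge is kept exactly when it is the only way to reach its head from its tail.

  def transitive_reduction_dag(graph):
      """Transitive reduction of a DAG given as {vertex: set of successors}."""
      def reachable(src, dst, skipped_edge):
          # Depth-first search that ignores the single edge `skipped_edge`.
          stack, seen = [src], {src}
          while stack:
              node = stack.pop()
              if node == dst:
                  return True
              for nxt in graph.get(node, ()):
                  if (node, nxt) == skipped_edge or nxt in seen:
                      continue
                  seen.add(nxt)
                  stack.append(nxt)
          return False

      # Keep (u, v) only if no other path from u to v exists.
      return {u: {v for v in graph[u] if not reachable(u, v, (u, v))}
              for u in graph}

  # Example: the edge 1 -> 4 is redundant because of the paths through 2 and 3.
  g = {1: {2, 3, 4}, 2: {4}, 3: {4}, 4: set()}
  print(transitive_reduction_dag(g))   # {1: {2, 3}, 2: {4}, 3: {4}, 4: set()}

This brute-force check is only meant to make the definition concrete; faster approaches are discussed under Computational complexity below.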

In the mathematical theory of binary relations, any relation R on a set X may be thought of as a directed graph that has the set X as its vertex set and that has an arc xy for every ordered pair of elements that are related in R. In particular, this method lets partially ordered sets be reinterpreted as directed acyclic graphs, in which there is an arc xy in the graph whenever there is an order relation x < y between the given pair of elements of the partial order. When the transitive reduction operation is applied to a directed acyclic graph that has been constructed in this way, it generates the covering relation of the partial order, which is frequently given visual expression by means of a Hasse diagram.
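
As a small worked illustration (the example is hypothetical, not taken from the article), the covering relation of the divisibility order on the divisors of 12 can be extracted directly from the order: a pair is a cover when nothing lies strictly between its two elements.

  divisors = [1, 2, 3, 4, 6, 12]

  def less(a, b):
      # The strict partial order: a < b when a properly divides b.
      return a != b and b % a == 0

  covers = [(a, b) for a in divisors for b in divisors
            if less(a, b)
            and not any(less(a, c) and less(c, b) for c in divisors)]
  print(covers)   # [(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)]

These seven pairs are exactly the edges one would draw in the Hasse diagram of the divisors of 12.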

Transitive reduction has been used on networks which can be represented as directed acyclic graphs (e.g. citation networks) to reveal structural differences between networks.[2]

In graphs with cycles

In a finite graph that may have cycles, the transitive reduction is not uniquely defined: there may be more than one graph on the same vertex set that has a minimal number of edges and has the same reachability relation as the given graph. Additionally, it may be the case that none of these minimal graphs is a subgraph of the given graph. Nevertheless, it is straightforward to characterize the minimal graphs with the same reachability relation as the given graph G.[3] If G is an arbitrary directed graph, and H is a graph with the minimum possible number of edges having the same reachability relation as G, then H consists of

  • A directed cycle for each strongly connected component of G, connecting together the vertices in this component
  • An edge xy for each edge XY of the transitive reduction of the condensation of G, where X and Y are two strongly connected components of G that are connected by an edge in the condensation, x is any vertex in component X, and y is any vertex in component Y. The condensation of G is a directed acyclic graph that has a vertex for every strongly connected component of G and an edge for every two components that are connected by an edge in G. In particular, because it is acyclic, its transitive reduction can be defined as in the previous section.

The total number of edges in this type of transitive reduction is then equal to the number of edges in the transitive reduction of the condensation, plus the number of vertices in nontrivial strongly connected components (components with more than one vertex).
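
A sketch of this construction (using the networkx library, an implementation choice rather than part of the original presentation): one directed cycle is threaded through each nontrivial strongly connected component, and one edge is added per edge of the transitive reduction of the condensation, between arbitrarily chosen representatives of the two components.

  import networkx as nx

  def minimal_reachability_equivalent(G):
      """A graph with the fewest edges having the same reachability as G."""
      cond = nx.condensation(G)                 # DAG of strongly connected components
      reduced_cond = nx.transitive_reduction(cond)

      H = nx.DiGraph()
      H.add_nodes_from(G.nodes)

      # A directed cycle through the vertices of every nontrivial component.
      for comp in cond.nodes:
          members = list(cond.nodes[comp]["members"])
          if len(members) > 1:
              H.add_edges_from(zip(members, members[1:] + members[:1]))

      # One edge per edge of the reduced condensation, routed between
      # arbitrary representatives of the two components.
      for X, Y in reduced_cond.edges:
          x = next(iter(cond.nodes[X]["members"]))
          y = next(iter(cond.nodes[Y]["members"]))
          H.add_edge(x, y)
      return H

As the text notes, the graph H produced this way is generally not a subgraph of G, because the cycle chosen for a component may use edges that G does not contain.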

The edges of the transitive reduction that correspond to condensation edges can always be chosen to be a subgraph of the given graph G. However, the cycle within each strongly connected component can only be chosen to be a subgraph of G if that component has a Hamiltonian cycle, something that is not always true and is difficult to check. Because of this difficulty, it is NP-hard to find the smallest subgraph of a given graph G with the same reachability (its minimum equivalent graph).[3]

Computational complexity

As Aho et al. show,[3] when the time complexity of graph algorithms is measured only as a function of the number n of vertices in the graph, and not as a function of the number of edges, transitive closure and transitive reduction of directed acyclic graphs have the same complexity. It had already been shown that transitive closure and multiplication of Boolean matrices of size n × n had the same complexity as each other,[4] so this result put transitive reduction into the same class. The fastest known exact algorithms for matrix multiplication, as of 2015, take time O(n^2.3729),[5] and this gives the fastest known worst-case time bound for transitive reduction in dense graphs.

Computing the reduction using the closure

To prove that transitive reduction is as easy as transitive closure, Aho et al. rely on the already-known equivalence with Boolean matrix multiplication. They let A be the adjacency matrix of the given graph, and B be the adjacency matrix of its transitive closure (computed using any standard transitive closure algorithm). Then an edge uv belongs to the transitive reduction if and only if there is a nonzero entry in row u and column v of matrix A, and there is not a nonzero entry in the same position of the matrix product AB. In this construction, the nonzero elements of the matrix AB represent pairs of vertices connected by paths of length two or more.[3]
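
The matrix formulation translates almost directly into code. In the following numpy sketch (the library and function name are choices made here, not taken from the paper), A and B are Boolean adjacency matrices of a DAG and of its transitive closure, and a nonzero entry of AB marks a pair of vertices joined by a path of length two or more.

  import numpy as np

  def reduction_from_closure(A, B):
      # Boolean matrix product AB: pairs connected by paths of length >= 2.
      two_or_more = (A.astype(int) @ B.astype(int)) > 0
      # Keep an edge exactly when it is in A but not implied by a longer path.
      return A & ~two_or_more

  # Example: the path 0 -> 1 -> 2 plus the redundant shortcut 0 -> 2.
  A = np.array([[0, 1, 1],
                [0, 0, 1],
                [0, 0, 0]], dtype=bool)
  B = A.copy()          # here A happens to equal its own transitive closure
  print(reduction_from_closure(A, B).astype(int))
  # [[0 1 0]
  #  [0 0 1]
  #  [0 0 0]]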

Computing the closure using the reduction

To prove that transitive reduction is as hard as transitive closure, Aho et al. construct from a given directed acyclic graph G another graph H, in which each vertex of G is replaced by a path of three vertices, and each edge of G corresponds to an edge in H connecting the corresponding middle vertices of these paths. In addition, in the graph H, Aho et al. add an edge from every path start to every path end. In the transitive reduction of H, there is an edge from the path start for u to the path end for v, if and only if edge uv does not belong to the transitive closure of G. Therefore, if the transitive reduction of H can be computed efficiently, the transitive closure of G can be read off directly from it.[3]
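
A sketch of this gadget (the vertex-labelling scheme and helper name are hypothetical) builds the edge set of H from the vertex and edge lists of G:

  def build_H(vertices, edges):
      """Edge set of the graph H constructed from G = (vertices, edges)."""
      H = set()
      for v in vertices:
          # Replace each vertex by a three-vertex path start -> mid -> end.
          H.add((("start", v), ("mid", v)))
          H.add((("mid", v), ("end", v)))
      for u, v in edges:
          # Each edge of G joins the corresponding middle vertices.
          H.add((("mid", u), ("mid", v)))
      for u in vertices:
          for v in vertices:
              # An edge from every path start to every path end.
              H.add((("start", u), ("end", v)))
      return H

Computing the transitive reduction of H and checking which start-to-end edges survive then reveals exactly which pairs are absent from the transitive closure of G.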

Computing the reduction in sparse graphs

When measured both in terms of the number n of vertices and the number m of edges in a directed acyclic graph, transitive reductions can also be found in time O(nm), a bound that may be faster than the matrix multiplication methods for sparse graphs. To do so, keep exactly the edges (u,v) for which the longest-path distance from u to v is one, computing those distances by a linear-time search from each possible starting vertex u. This O(nm) time bound matches the complexity of constructing transitive closures by using depth-first search or breadth-first search to find the vertices reachable from every choice of starting vertex, so again under these assumptions transitive closures and transitive reductions can be found in the same amount of time.
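
A sketch of this approach, assuming the DAG is given as a dictionary mapping each vertex to a set of successors (a representation chosen here, not prescribed by the article): for every start vertex u, one sweep over the vertices in topological order yields all longest-path distances from u, and the edges kept are those at distance exactly one.

  def topological_order(graph):
      # Kahn's algorithm; graph maps each vertex to an iterable of successors.
      indegree = {v: 0 for v in graph}
      for u in graph:
          for v in graph[u]:
              indegree[v] += 1
      queue = [v for v in graph if indegree[v] == 0]
      order = []
      while queue:
          u = queue.pop()
          order.append(u)
          for v in graph[u]:
              indegree[v] -= 1
              if indegree[v] == 0:
                  queue.append(v)
      return order

  def transitive_reduction_sparse(graph):
      order = topological_order(graph)
      position = {v: i for i, v in enumerate(order)}
      reduction = {}
      for u in graph:
          # Longest-path distances from u, by one sweep in topological order.
          longest = {v: float("-inf") for v in graph}
          longest[u] = 0
          for w in order[position[u]:]:
              if longest[w] == float("-inf"):
                  continue
              for v in graph[w]:
                  longest[v] = max(longest[v], longest[w] + 1)
          # An edge survives exactly when its longest-path distance is one.
          reduction[u] = {v for v in graph[u] if longest[v] == 1}
      return reduction

Each sweep takes time linear in the size of the graph, so the whole computation runs in O(n(n + m)) time, which is the O(nm) bound above whenever the graph has at least as many edges as vertices.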

Notes

  1. Moyles & Thompson (1969).
  2. Clough et al. (2014).
  3. Aho, Garey & Ullman (1972).
  4. Aho et al. credit this result to an unpublished 1971 manuscript of Ian Munro, and to a 1970 Russian-language paper by M. E. Furman.
  5. Le Gall (2014).

References

  • Aho, A. V.; Garey, M. R.; Ullman, J. D. (1972), "The transitive reduction of a directed graph", SIAM Journal on Computing, 1 (2): 131–137.
  • Clough, J. R.; Gollings, J.; Loach, T. V.; Evans, T. S. (2014), "Transitive reduction of citation networks", Journal of Complex Networks.
  • Le Gall, F. (2014), "Powers of tensors and fast matrix multiplication", Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation (ISSAC 2014).
  • Moyles, D. M.; Thompson, G. L. (1969), "An algorithm for finding a minimum equivalent graph of a digraph", Journal of the ACM, 16 (3): 455–460.
