Knuth–Bendix completion algorithm

The Knuth–Bendix completion algorithm (named after Donald Knuth and Peter Bendix[1]) is a semi-decision[2][3] algorithm for transforming a set of equations (over terms) into a confluent term rewriting system. When the algorithm succeeds, it effectively solves the word problem for the specified algebra.

Buchberger's algorithm for computing Gröbner bases is a very similar algorithm. Although developed independently, it may also be seen as the instantiation of the Knuth–Bendix algorithm in the theory of polynomial rings.

Introduction

For a set E of equations, its deductive closure (⟷*E) is the set of all equations that can be derived by applying equations from E in any order. Formally, E is considered a binary relation, (⟶E) is its rewrite closure, and (⟷*E) is the equivalence closure of (⟶E). For a set R of rewrite rules, its deductive closure (⟶*R ∘ ⟵*R) is the set of all equations that can be confirmed by applying rules from R left-to-right to both sides until they are literally equal. Formally, R is again viewed as a binary relation, (⟶R) is its rewrite closure, (⟵R) is its converse, and (⟶*R ∘ ⟵*R) is the relation composition of their reflexive transitive closures (⟶*R and ⟵*R).

For example, if E = {1⋅x = x, x⁻¹⋅x = 1, (x⋅y)⋅z = x⋅(y⋅z)} are the group axioms, the derivation chain

a⁻¹⋅(a⋅b)   ⟷E   (a⁻¹⋅a)⋅b   ⟷E   1⋅b   ⟷E   b

demonstrates that a⁻¹⋅(a⋅b) ⟷*E b is a member of E's deductive closure. If R = { 1⋅x → x, x⁻¹⋅x → 1, (x⋅y)⋅z → x⋅(y⋅z) } is a "rewrite rule" version of E, the derivation chains

(a⁻¹⋅a)⋅b   ⟶R   1⋅b   ⟶R   b       and       b   ⟵R   1⋅b

demonstrate that (a⁻¹⋅a)⋅b ⟶*R ∘ ⟵*R 1⋅b is a member of R's deductive closure. However, there is no way to derive a⁻¹⋅(a⋅b) ⟶*R ∘ ⟵*R b in a similar way, since a right-to-left application of the rule (x⋅y)⋅z → x⋅(y⋅z) is not allowed.

The Knuth–Bendix algorithm takes a set E of equations between terms, and a reduction ordering (>) on the set of all terms, and attempts to construct a confluent and terminating term rewriting system R that has the same deductive closure as E. While proving consequences from E often requires human intuition, proving consequences from R does not. For more details, see Confluence (abstract rewriting)#Motivating examples, which gives an example proof from group theory, performed both using E and using R.

Rules

Given a set E of equations between terms, the following inference rules can be used to transform it into an equivalent convergent term rewrite system (if possible):[4][5] They are based on a user-given reduction ordering (>) on the set of all terms; it is lifted to a well-founded ordering (▻) on the set of rewrite rules by defining (s → t) ▻ (l → r) if

  • s ⊳ l, where (⊳) denotes strict encompassment (some subterm of s is an instance of l, but no subterm of l is an instance of s), or
  • s and l are equal up to renaming of variables, and t > r.

Delete      ⟨ E ∪ {s = s}, R ⟩   ⊢   ⟨ E, R ⟩
Compose     ⟨ E, R ∪ {s → t} ⟩   ⊢   ⟨ E, R ∪ {s → u} ⟩      if t ⟶R u
Simplify    ⟨ E ∪ {s = t}, R ⟩   ⊢   ⟨ E ∪ {s = u}, R ⟩      if t ⟶R u
Orient      ⟨ E ∪ {s = t}, R ⟩   ⊢   ⟨ E, R ∪ {s → t} ⟩      if s > t
Collapse    ⟨ E, R ∪ {s → t} ⟩   ⊢   ⟨ E ∪ {u = t}, R ⟩      if s ⟶R u by l → r with (s → t) ▻ (l → r)
Deduce      ⟨ E, R ⟩             ⊢   ⟨ E ∪ {s = t}, R ⟩      if (s, t) is a critical pair of R
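
For instance, with the rule set R = { 1⋅x → x, x⁻¹⋅x → 1, (x⋅y)⋅z → x⋅(y⋅z) } from the introduction, the left-hand sides x⁻¹⋅x and (x⋅y)⋅z overlap in the term (x⁻¹⋅x)⋅z: it rewrites to 1⋅z by the second rule and to x⁻¹⋅(x⋅z) by the third, so ⟨1⋅z, x⁻¹⋅(x⋅z)⟩ is a critical pair of R, and Deduce adds the equation 1⋅z = x⁻¹⋅(x⋅z) to E (compare clause 7 in the example run below, which arises from the analogous overlap between the inverse axiom and associativity).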

Example

The following example run, obtained from the E theorem prover, computes a completion of the (additive) group axioms as in Knuth, Bendix (1970). It starts with the three initial equations for the group (neutral element 0, inverse elements, associativity), using f(X,Y) for X+Y, and i(X) for −X. The 10 equations marked with "final" represent the resulting convergent rewrite system. "pm" is short for "paramodulation", implementing deduce. Critical pair computation is an instance of paramodulation for equational unit clauses. "rw" is rewriting, implementing compose, collapse, and simplify. Orienting of equations is done implicitly and not recorded.

      1 :  : [++equal(f(X1,0), X1)] : initial("GROUP.lop", at_line_9_column_1)
      2 :  : [++equal(f(X1,i(X1)), 0)] : initial("GROUP.lop", at_line_12_column_1)
      3 :  : [++equal(f(f(X1,X2),X3), f(X1,f(X2,X3)))] : initial("GROUP.lop", at_line_15_column_1)
      5 :  : [++equal(f(X1,X2), f(X1,f(0,X2)))] : pm(3,1)
      6 :  : [++equal(f(X1,f(X2,i(f(X1,X2)))), 0)] : pm(2,3)
      7 :  : [++equal(f(0,X2), f(X1,f(i(X1),X2)))] : pm(3,2)
     27 :  : [++equal(f(X1,0), f(0,i(i(X1))))] : pm(7,2)
     36 :  : [++equal(X1, f(0,i(i(X1))))] : rw(27,1)
     46 :  : [++equal(f(X1,X2), f(X1,i(i(X2))))] : pm(5,36)
     52 :  : [++equal(f(0,X1), X1)] : rw(36,46)
     60 :  : [++equal(i(0), 0)] : pm(2,52)
     63 :  : [++equal(i(i(X1)), f(0,X1))] : pm(46,52)
     64 :  : [++equal(f(X1,f(i(X1),X2)), X2)] : rw(7,52)
     67 :  : [++equal(i(i(X1)), X1)] : rw(63,52)
     74 :  : [++equal(f(i(X1),X1), 0)] : pm(2,67)
     79 :  : [++equal(f(0,X2), f(i(X1),f(X1,X2)))] : pm(3,74)
     83 :  : [++equal(X2, f(i(X1),f(X1,X2)))] : rw(79,52)
    134 :  : [++equal(f(i(X1),0), f(X2,i(f(X1,X2))))] : pm(83,6)
    151 :  : [++equal(i(X1), f(X2,i(f(X1,X2))))] : rw(134,1)
    165 :  : [++equal(f(i(X1),i(X2)), i(f(X2,X1)))] : pm(83,151)
    239 :  : [++equal(f(X1,0), X1)] : 1 : 'final'
    240 :  : [++equal(f(X1,i(X1)), 0)] : 2 : 'final'
    241 :  : [++equal(f(f(X1,X2),X3), f(X1,f(X2,X3)))] : 3 : 'final'
    242 :  : [++equal(f(0,X1), X1)] : 52 : 'final'
    243 :  : [++equal(i(0), 0)] : 60 : 'final'
    244 :  : [++equal(i(i(X1)), X1)] : 67 : 'final'
    245 :  : [++equal(f(i(X1),X1), 0)] : 74 : 'final'
    246 :  : [++equal(f(X1,f(i(X1),X2)), X2)] : 64 : 'final'
    247 :  : [++equal(f(i(X1),f(X1,X2)), X2)] : 83 : 'final'
    248 :  : [++equal(i(f(X1,X2)), f(i(X2),i(X1)))] : 165 : 'final'

See also Word problem (mathematics) for another presentation of this example.

String rewriting systems in group theory

An important case in computational group theory is that of string rewriting systems, which can be used to give canonical labels to elements or cosets of a finitely presented group as products of the generators. This special case is the focus of this section.

Motivation in group theory

The critical pair lemma states that a term rewriting system is locally confluent (or weakly confluent) if and only if all its critical pairs are convergent. Furthermore, Newman's lemma states that if an (abstract) rewriting system is strongly normalizing and weakly confluent, then it is confluent. So, if we can add rules to the term rewriting system in order to force all critical pairs to be convergent while maintaining strong normalization, then the resulting rewriting system will be confluent.

Consider a finitely presented monoid M = \langle X \mid R \rangle where X is a finite set of generators and R is a set of defining relations on X. Let X* be the set of all words in X (i.e. the free monoid generated by X). Since the relations R generate an equivalence relation on X*, one can consider elements of M to be the equivalence classes of X* under R. For each class {w1, w2, ... } it is desirable to choose a standard representative wk. This representative is called the canonical or normal form of every word in the class. If there is a computable method to determine the normal form of any given word, then the word problem is easily solved. A confluent rewriting system allows one to do precisely this.
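
As an illustration, the following minimal Python sketch (an encoding chosen here for concreteness, not part of the original presentation) computes normal forms for a string rewriting system given as a list of (lhs, rhs) pairs; if the system is convergent, the result does not depend on the order in which rules are applied.

    def normal_form(word, rules):
        """Rewrite `word` with the first applicable rule, repeatedly,
        until no left-hand side occurs any more; for a convergent
        system this yields the unique normal form of `word`."""
        changed = True
        while changed:
            changed = False
            for lhs, rhs in rules:
                i = word.find(lhs)
                if i != -1:
                    word = word[:i] + rhs + word[i + len(lhs):]
                    changed = True
                    break
        return word

    # e.g. the (convergent) one-rule system for the cyclic group <x | x^3 = 1>,
    # writing the identity as the empty string:
    print(normal_form("xxxxx", [("xxx", "")]))   # prints 'xx'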

Although the choice of a canonical form can in principle be made in an arbitrary fashion, such an approach is generally not computable. (Consider that an equivalence relation on a language can produce an infinite number of infinite classes.) If the language is well-ordered then the order < gives a consistent method for defining minimal representatives; however, computing these representatives may still not be possible. In particular, if a rewriting system is used to calculate minimal representatives then the order < should also have the property:

A < B → XAY < XBY for all words A,B,X,Y

This property is called translation invariance. An order that is both translation-invariant and a well-order is called a reduction order.
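
The shortlex (length-then-lexicographic) order used in the examples below is such a reduction order: it is translation-invariant and a well-order on the words over a finite ordered alphabet. A minimal Python sketch of the comparison, assuming the ordering of the generators is given by the string alphabet:

    def shortlex_less(u, v, alphabet="xy"):
        """True if u < v in shortlex: shorter words come first, and words
        of equal length are compared lexicographically using the order of
        the generators in `alphabet`."""
        if len(u) != len(v):
            return len(u) < len(v)
        key = {c: i for i, c in enumerate(alphabet)}
        return [key[c] for c in u] < [key[c] for c in v]

    assert shortlex_less("xy", "yx")        # same length, x comes before y
    assert shortlex_less("yxyy", "yyxy")    # translation invariance: y·xy·y < y·yx·y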

From the presentation of the monoid it is possible to define a rewriting system given by the relations R. If A = B is in R then either A < B, in which case B → A is a rule in the rewriting system, or A > B, in which case A → B is a rule. Since < is a reduction order, a given word W can be reduced W > W_1 > ... > W_n, where W_n is irreducible under the rewriting system. However, depending on the rules applied at each step W_i → W_{i+1}, it is possible to end up with two different irreducible words W_n ≠ W'_m for the same starting word W. If the rewriting system given by the relations is converted to a confluent rewriting system via the Knuth–Bendix algorithm, then all reductions are guaranteed to produce the same irreducible word, namely the normal form for that word.
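
This can be seen with the normal_form sketch from above, using the not-yet-completed rules x^3 → 1, y^3 → 1, (xy)^3 → 1 of the terminating example below (with the identity written as the empty string); which rule is tried first determines which irreducible word is reached:

    # the three oriented relations x^3 -> 1, y^3 -> 1, (xy)^3 -> 1
    r1, r2, r3 = ("xxx", ""), ("yyy", ""), ("xyxyxy", "")

    w = "xxxyxyxy"                          # the word x^3·yxyxy from the example below
    print(normal_form(w, [r1, r2, r3]))     # 'yxyxy'  (x^3 -> 1 is applied first)
    print(normal_form(w, [r3, r1, r2]))     # 'xx'     ((xy)^3 -> 1 is applied first)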

Description of the algorithm for finitely presented monoids

Suppose we are given a presentation  \langle X \mid R \rangle , where  X is a set of generators and  R is a set of relations giving the rewriting system. Suppose further that we have a reduction ordering  < among the words generated by  X (e.g., shortlex order). For each relation  P_i = Q_i in  R , suppose  Q_i < P_i . Thus we begin with the set of reductions  P_i \rightarrow Q_i .

First, if any relation  P_i = Q_i can be reduced, replace  P_i and  Q_i with the reductions.

Next, we add more reductions (that is, rewriting rules) to eliminate possible failures of confluence. Suppose that  P_i and  P_j , where  i \neq j , overlap.

  1. Case 1: either the prefix of  P_i equals the suffix of  P_j , or vice versa. In the former case, we can write  P_i = BC and  P_j = AB ; in the latter case,  P_i = AB and  P_j = BC .
  2. Case 2: either  P_i is completely contained in (surrounded by)  P_j , or vice versa. In the former case, we can write  P_i = B and  P_j = ABC ; in the latter case,  P_i = ABC and  P_j = B .

Reduce the word  ABC using  P_i first, then using  P_j first. Call the results  r_1, r_2 , respectively. If  r_1 \neq r_2 , then we have an instance where confluence could fail. Hence, add the reduction  \max(r_1, r_2) \rightarrow \min(r_1, r_2) (taken with respect to the reduction ordering) to  R .

After adding a rule to  R , remove any rules in  R that might have reducible left sides.

Repeat the procedure until all overlapping left sides have been checked.
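
The procedure just described can be sketched in Python for finite string rewriting systems. The sketch below is illustrative only (the function names and the brute-force search strategy are ours): words are Python strings, the identity is the empty string, rules are (lhs, rhs) pairs oriented by the shortlex order induced by alphabet, and a safety limit guards against runs that do not terminate (see the non-terminating example below). The normal_form helper from above is repeated so that the sketch is self-contained.

    def shortlex_key(w, alphabet):
        """Key realising the shortlex order: length first, then letters."""
        return (len(w), [alphabet.index(c) for c in w])

    def normal_form(word, rules):
        """Rewrite with the first applicable rule until irreducible."""
        changed = True
        while changed:
            changed = False
            for lhs, rhs in rules:
                i = word.find(lhs)
                if i != -1:
                    word = word[:i] + rhs + word[i + len(lhs):]
                    changed = True
                    break
        return word

    def orient(u, v, alphabet):
        """Turn the equation u = v into a rule bigger -> smaller (or None)."""
        if u == v:
            return None
        return (u, v) if shortlex_key(u, alphabet) > shortlex_key(v, alphabet) else (v, u)

    def rewrite_at(w, i, lhs, rhs):
        return w[:i] + rhs + w[i + len(lhs):]

    def overlaps(l1, l2):
        """Words in which occurrences of l1 and l2 overlap, yielded as
        (word, position of l1, position of l2)."""
        for k in range(1, min(len(l1), len(l2)) + 1):   # case 1: a suffix of l1 is a prefix of l2
            if l1[len(l1) - k:] == l2[:k]:
                yield l1 + l2[k:], 0, len(l1) - k
        for i in range(len(l1) - len(l2) + 1):          # case 2: l2 occurs inside l1
            if l1[i:i + len(l2)] == l2:
                yield l1, 0, i

    def interreduce(rules, alphabet):
        """Keep every rule normalised with respect to the others (a crude
        version of the 'simplify' / 'collapse' / 'delete' steps)."""
        changed = True
        while changed:
            changed = False
            for r in list(rules):
                rest = [s for s in rules if s != r]
                simplified = (normal_form(r[0], rest), normal_form(r[1], rest))
                if simplified != r:
                    rules = rest
                    new = orient(simplified[0], simplified[1], alphabet)
                    if new:
                        rules.append(new)
                    changed = True
                    break
        return rules

    def kb_complete(relations, alphabet, limit=50):
        """Try to complete the relations into a convergent system; give up
        once more than `limit` rules have been generated."""
        rules = []
        for u, v in relations:
            rule = orient(u, v, alphabet)
            if rule:
                rules.append(rule)
        while True:
            rules = interreduce(rules, alphabet)
            new_rule = None
            for l1, r1 in rules:                    # examine all overlapping
                for l2, r2 in rules:                # left-hand sides ('deduce')
                    for w, i1, i2 in overlaps(l1, l2):
                        a = normal_form(rewrite_at(w, i1, l1, r1), rules)
                        b = normal_form(rewrite_at(w, i2, l2, r2), rules)
                        if a != b:                  # non-convergent critical pair
                            new_rule = orient(a, b, alphabet)
                            break
                    if new_rule:
                        break
                if new_rule:
                    break
            if new_rule is None:                    # every critical pair converges
                return sorted(rules, key=lambda r: shortlex_key(r[0], alphabet))
            rules.append(new_rule)
            if len(rules) > limit:
                raise RuntimeError("no convergent system found within the limit")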

Examples

A terminating example

Consider the monoid:  \langle x, y \mid x^3 = y^3 = (xy)^3 = 1 \rangle . We use the shortlex order. This is an infinite monoid but nevertheless, the Knuth–Bendix algorithm is able to solve the word problem.

Our beginning three reductions are therefore (1)  x^3 \rightarrow 1 , (2)  y^3 \rightarrow 1 , and (3)  (xy)^3 \rightarrow 1 .

Consider the word  x^3yxyxy . Reducing using (1), we get  yxyxy . Reducing using (3), we get  x^2 . Hence, we get  yxyxy = x^2 , giving the reduction rule (4)  yxyxy \rightarrow x^2 .

Similarly, using  xyxyxy^3 and reducing using (2) and (3), we get  xyxyx = y^2 . Hence the reduction (5)  xyxyx \rightarrow y^2 .

Both of these rules obsolete (3), so we remove it.

Next, consider  x^3yxyx by overlapping (1) and (5). Reducing we get  yxyx = x^2y^2 , so we add the rule (6)  yxyx \rightarrow x^2y^2. Considering  xyxyx^3 by overlapping (1) and (5), we get  xyxy = y^2x^2 , so we add the rule (7)  y^2x^2 \rightarrow xyxy . These obsolete rules (4) and (5), so we remove them.
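
These overlap computations can be checked with the normal_form sketch given earlier; putting the rule that should fire first at the front of the rule list steers the reduction (an illustrative check, using the rule numbering above):

    rules_now = [("xxx", ""), ("yyy", ""), ("yxyxy", "xx"), ("xyxyx", "yy")]   # rules (1), (2), (4), (5)
    steer5    = [("xyxyx", "yy"), ("xxx", ""), ("yyy", ""), ("yxyxy", "xx")]   # same rules, (5) tried first

    print(normal_form("xxxyxyx", rules_now))   # 'yxyx' -- x^3·yxyx reduced with (1) first
    print(normal_form("xxxyxyx", steer5))      # 'xxyy' -- the same word reduced with (5) first, i.e. x^2y^2
    print(normal_form("xyxyxxx", rules_now))   # 'xyxy' -- xyxy·x^3 reduced with (1) first
    print(normal_form("xyxyxxx", steer5))      # 'yyxx' -- the same word reduced with (5) first, i.e. y^2x^2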

Now, we are left with the rewriting system

  • (1)  x^3 \rightarrow 1
  • (2)  y^3 \rightarrow 1
  • (6)  yxyx \rightarrow x^2y^2
  • (7)  y^2 x^2 \rightarrow xyxy

Checking the overlaps of these rules, we find no potential failures of confluence. Therefore, we have a confluent rewriting system, and the algorithm terminates successfully.
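
Running the kb_complete sketch from the previous section on this presentation (shortlex with x ordered before y) is expected to reproduce exactly this system, with 1 written as the empty string:

    relations = [("xxx", ""), ("yyy", ""), ("xyxyxy", "")]     # x^3 = y^3 = (xy)^3 = 1
    rules = kb_complete(relations, alphabet="xy")
    print(rules)
    # expected: [('xxx', ''), ('yyy', ''), ('yxyx', 'xxyy'), ('yyxx', 'xyxy')]
    # i.e. rules (1), (2), (6) and (7) above

    # the completed system decides the word problem by comparing normal forms:
    print(normal_form("xyxyxy", rules))   # ''   : (xy)^3 equals the identity
    print(normal_form("xyxyx", rules))    # 'yy' : matches the derivation of rule (5)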

A non-terminating example

The order of the generators may crucially affect whether the Knuth–Bendix completion terminates. As an example, consider the free Abelian group given by the monoid presentation:

\langle x, y, x^{-1}, y^{-1}\, |\, xy=yx, xx^{-1} = x^{-1}x = yy^{-1} = y^{-1}y = 1 \rangle  .

With respect to the lexicographic order x < x^{-1} < y < y^{-1}, the Knuth–Bendix completion finishes with a convergent system; however, with the length-lexicographic order x < y < x^{-1} < y^{-1} it does not finish, since there is no finite convergent system compatible with this latter order.[6]

Generalizations

If Knuth–Bendix does not succeed, it will either run forever or fail when it encounters an unorientable equation (i.e. an equation that it cannot turn into a rewrite rule). The enhanced completion without failure (also known as unfailing completion) will not fail on unorientable equations and provides a semi-decision procedure for the word problem.

The notion of logged rewriting discussed in the paper by Heyworth and Wensley listed below allows some recording or logging of the rewriting process as it proceeds. This is useful for computing identities among relations for presentations of groups.

References

  1. D. Knuth, "The Genesis of Attribute Grammars"
  2.
  3. p. 55
  4.
  5. Sect. 8.1, p. 293
  6.
  • C. Sims. 'Computations with finitely presented groups.' Cambridge, 1994.
  • Anne Heyworth and C.D. Wensley. "Logged rewriting and identities among relators." Groups St. Andrews 2001 in Oxford. Vol. I, 256–276, London Math. Soc. Lecture Note Ser., 304, Cambridge Univ. Press, Cambridge, 2003.
