/export/starexec/sandbox2/solver/bin/starexec_run_FirstOrder /export/starexec/sandbox2/benchmark/theBenchmark.xml /export/starexec/sandbox2/output/output_files
--------------------------------------------------------------------------------
YES

We consider the system theBenchmark. We are asked to determine termination of the following first-order TRS.

  0 : [] --> o
  app : [o * o] --> o
  cons : [o * o] --> o
  nil : [] --> o
  plus : [o * o] --> o
  pred : [o] --> o
  s : [o] --> o
  sum : [o] --> o

  app(nil, X) => X
  app(X, nil) => X
  app(cons(X, Y), Z) => cons(X, app(Y, Z))
  sum(cons(X, nil)) => cons(X, nil)
  sum(cons(X, cons(Y, Z))) => sum(cons(plus(X, Y), Z))
  sum(app(X, cons(Y, cons(Z, U)))) => sum(app(X, sum(cons(Y, cons(Z, U)))))
  plus(0, X) => X
  plus(s(X), Y) => s(plus(X, Y))
  sum(plus(cons(0, X), cons(Y, Z))) => pred(sum(cons(s(X), cons(Y, Z))))
  pred(cons(s(X), nil)) => cons(X, nil)

We use rule removal, following [Kop12, Theorem 2.23]. This gives the following requirements (possibly using Theorems 2.25 and 2.26 in [Kop12]):

  app(nil, X) >? X
  app(X, nil) >? X
  app(cons(X, Y), Z) >? cons(X, app(Y, Z))
  sum(cons(X, nil)) >? cons(X, nil)
  sum(cons(X, cons(Y, Z))) >? sum(cons(plus(X, Y), Z))
  sum(app(X, cons(Y, cons(Z, U)))) >? sum(app(X, sum(cons(Y, cons(Z, U)))))
  plus(0, X) >? X
  plus(s(X), Y) >? s(plus(X, Y))
  sum(plus(cons(0, X), cons(Y, Z))) >? pred(sum(cons(s(X), cons(Y, Z))))
  pred(cons(s(X), nil)) >? cons(X, nil)

We orient these requirements with a polynomial interpretation in the natural numbers.
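Read operationally, the system computes the sum of a list of Peano numerals. A minimal Python analogue of the core rules (an informal reading for intuition only, not part of the proof; function names are illustrative) is:

```python
# plus(0, X) => X ; plus(s(X), Y) => s(plus(X, Y)), with Peano numerals as ints.
def plus(x, y):
    return y if x == 0 else 1 + plus(x - 1, y)

# sum(cons(X, nil)) => cons(X, nil)
# sum(cons(X, cons(Y, Z))) => sum(cons(plus(X, Y), Z))
def trs_sum(xs):
    if len(xs) == 1:
        return xs          # a singleton list is already summed
    x, y, *rest = xs
    return trs_sum([plus(x, y)] + rest)
```

For example, `trs_sum([1, 2, 3])` folds the list to `[6]`, mirroring the repeated `sum(cons(X, cons(Y, Z)))` step.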
The following interpretation satisfies the requirements:

  0 = 3
  app = \y0y1.y1 + 3y0
  cons = \y0y1.y0 + y1
  nil = 0
  plus = \y0y1.y0 + y1
  pred = \y0.y0
  s = \y0.y0
  sum = \y0.y0

Using this interpretation, the requirements translate to:

  [[app(nil, _x0)]] = x0 >= x0 = [[_x0]]
  [[app(_x0, nil)]] = 3x0 >= x0 = [[_x0]]
  [[app(cons(_x0, _x1), _x2)]] = x2 + 3x0 + 3x1 >= x0 + x2 + 3x1 = [[cons(_x0, app(_x1, _x2))]]
  [[sum(cons(_x0, nil))]] = x0 >= x0 = [[cons(_x0, nil)]]
  [[sum(cons(_x0, cons(_x1, _x2)))]] = x0 + x1 + x2 >= x0 + x1 + x2 = [[sum(cons(plus(_x0, _x1), _x2))]]
  [[sum(app(_x0, cons(_x1, cons(_x2, _x3))))]] = x1 + x2 + x3 + 3x0 >= x1 + x2 + x3 + 3x0 = [[sum(app(_x0, sum(cons(_x1, cons(_x2, _x3)))))]]
  [[plus(0, _x0)]] = 3 + x0 > x0 = [[_x0]]
  [[plus(s(_x0), _x1)]] = x0 + x1 >= x0 + x1 = [[s(plus(_x0, _x1))]]
  [[sum(plus(cons(0, _x0), cons(_x1, _x2)))]] = 3 + x0 + x1 + x2 > x0 + x1 + x2 = [[pred(sum(cons(s(_x0), cons(_x1, _x2))))]]
  [[pred(cons(s(_x0), nil))]] = x0 >= x0 = [[cons(_x0, nil)]]

We can thus remove the following rules:

  plus(0, X) => X
  sum(plus(cons(0, X), cons(Y, Z))) => pred(sum(cons(s(X), cons(Y, Z))))

We use rule removal, following [Kop12, Theorem 2.23]. This gives the following requirements (possibly using Theorems 2.25 and 2.26 in [Kop12]):

  app(nil, X) >? X
  app(X, nil) >? X
  app(cons(X, Y), Z) >? cons(X, app(Y, Z))
  sum(cons(X, nil)) >? cons(X, nil)
  sum(cons(X, cons(Y, Z))) >? sum(cons(plus(X, Y), Z))
  sum(app(X, cons(Y, cons(Z, U)))) >? sum(app(X, sum(cons(Y, cons(Z, U)))))
  plus(s(X), Y) >? s(plus(X, Y))
  pred(cons(s(X), nil)) >? cons(X, nil)

We orient these requirements with a polynomial interpretation in the natural numbers.
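The comparisons above all follow one criterion: since every interpretation here is linear with natural-number coefficients, l >= r holds for all assignments exactly when each coefficient of r is dominated by the matching coefficient of l, and l > r additionally needs a strictly larger constant term. A small Python sketch of that check (the dictionary encoding is this sketch's assumption):

```python
# Linear polynomials over the naturals, encoded as {variable: coefficient},
# with the key "1" holding the constant term.
def geq(l, r):
    # l >= r for every natural assignment iff l dominates r coefficient-wise.
    keys = set(l) | set(r)
    return all(l.get(k, 0) >= r.get(k, 0) for k in keys)

def gt(l, r):
    # Strict orientation: weak domination plus a strictly larger constant.
    return geq(l, r) and l.get("1", 0) > r.get("1", 0)

# [[plus(0, _x0)]] = 3 + x0  versus  [[_x0]] = x0: oriented strictly.
lhs = {"1": 3, "x0": 1}
rhs = {"x0": 1}
```

Here `gt(lhs, rhs)` holds, so the rule plus(0, X) => X may be removed, while pairs that agree coefficient-for-coefficient (such as [[app(nil, _x0)]] versus [[_x0]]) satisfy only `geq` and stay.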
The following interpretation satisfies the requirements:

  app = \y0y1.y1 + 3y0
  cons = \y0y1.y0 + y1
  nil = 1
  plus = \y0y1.y0 + y1
  pred = \y0.3 + 3y0
  s = \y0.y0
  sum = \y0.y0

Using this interpretation, the requirements translate to:

  [[app(nil, _x0)]] = 3 + x0 > x0 = [[_x0]]
  [[app(_x0, nil)]] = 1 + 3x0 > x0 = [[_x0]]
  [[app(cons(_x0, _x1), _x2)]] = x2 + 3x0 + 3x1 >= x0 + x2 + 3x1 = [[cons(_x0, app(_x1, _x2))]]
  [[sum(cons(_x0, nil))]] = 1 + x0 >= 1 + x0 = [[cons(_x0, nil)]]
  [[sum(cons(_x0, cons(_x1, _x2)))]] = x0 + x1 + x2 >= x0 + x1 + x2 = [[sum(cons(plus(_x0, _x1), _x2))]]
  [[sum(app(_x0, cons(_x1, cons(_x2, _x3))))]] = x1 + x2 + x3 + 3x0 >= x1 + x2 + x3 + 3x0 = [[sum(app(_x0, sum(cons(_x1, cons(_x2, _x3)))))]]
  [[plus(s(_x0), _x1)]] = x0 + x1 >= x0 + x1 = [[s(plus(_x0, _x1))]]
  [[pred(cons(s(_x0), nil))]] = 6 + 3x0 > 1 + x0 = [[cons(_x0, nil)]]

We can thus remove the following rules:

  app(nil, X) => X
  app(X, nil) => X
  pred(cons(s(X), nil)) => cons(X, nil)

We use rule removal, following [Kop12, Theorem 2.23]. This gives the following requirements (possibly using Theorems 2.25 and 2.26 in [Kop12]):

  app(cons(X, Y), Z) >? cons(X, app(Y, Z))
  sum(cons(X, nil)) >? cons(X, nil)
  sum(cons(X, cons(Y, Z))) >? sum(cons(plus(X, Y), Z))
  sum(app(X, cons(Y, cons(Z, U)))) >? sum(app(X, sum(cons(Y, cons(Z, U)))))
  plus(s(X), Y) >? s(plus(X, Y))

We orient these requirements with a polynomial interpretation in the natural numbers.
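As an informal cross-check of the strict orientation of pred(cons(s(X), nil)) under this interpretation, one can evaluate both sides on sample naturals (illustrative only; the coefficient-wise argument in the proof is what actually justifies the removal):

```python
# Second interpretation from the proof: nil = 1, s = identity,
# pred = 3 + 3y, cons = y0 + y1.
nil_ = 1
s_ = lambda y: y
pred_ = lambda y: 3 + 3 * y
cons_ = lambda a, b: a + b

lhs = lambda x: pred_(cons_(s_(x), nil_))  # 6 + 3x
rhs = lambda x: cons_(x, nil_)             # 1 + x
```

The gap `lhs(x) - rhs(x) = 5 + 2x` stays positive for every natural x, matching the strict comparison 6 + 3x0 > 1 + x0 in the proof.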
The following interpretation satisfies the requirements:

  app = \y0y1.2y1 + 3y0
  cons = \y0y1.2 + y0 + y1
  nil = 0
  plus = \y0y1.y0 + y1
  s = \y0.y0
  sum = \y0.y0

Using this interpretation, the requirements translate to:

  [[app(cons(_x0, _x1), _x2)]] = 6 + 2x2 + 3x0 + 3x1 > 2 + x0 + 2x2 + 3x1 = [[cons(_x0, app(_x1, _x2))]]
  [[sum(cons(_x0, nil))]] = 2 + x0 >= 2 + x0 = [[cons(_x0, nil)]]
  [[sum(cons(_x0, cons(_x1, _x2)))]] = 4 + x0 + x1 + x2 > 2 + x0 + x1 + x2 = [[sum(cons(plus(_x0, _x1), _x2))]]
  [[sum(app(_x0, cons(_x1, cons(_x2, _x3))))]] = 8 + 2x1 + 2x2 + 2x3 + 3x0 >= 8 + 2x1 + 2x2 + 2x3 + 3x0 = [[sum(app(_x0, sum(cons(_x1, cons(_x2, _x3)))))]]
  [[plus(s(_x0), _x1)]] = x0 + x1 >= x0 + x1 = [[s(plus(_x0, _x1))]]

We can thus remove the following rules:

  app(cons(X, Y), Z) => cons(X, app(Y, Z))
  sum(cons(X, cons(Y, Z))) => sum(cons(plus(X, Y), Z))

We use rule removal, following [Kop12, Theorem 2.23]. This gives the following requirements (possibly using Theorems 2.25 and 2.26 in [Kop12]):

  sum(cons(X, nil)) >? cons(X, nil)
  sum(app(X, cons(Y, cons(Z, U)))) >? sum(app(X, sum(cons(Y, cons(Z, U)))))
  plus(s(X), Y) >? s(plus(X, Y))

We orient these requirements with a polynomial interpretation in the natural numbers. The following interpretation satisfies the requirements:

  app = \y0y1.y0 + 2y1
  cons = \y0y1.2y0 + 2y1
  nil = 0
  plus = \y0y1.2y1 + 3y0
  s = \y0.3 + y0
  sum = \y0.y0

Using this interpretation, the requirements translate to:

  [[sum(cons(_x0, nil))]] = 2x0 >= 2x0 = [[cons(_x0, nil)]]
  [[sum(app(_x0, cons(_x1, cons(_x2, _x3))))]] = x0 + 4x1 + 8x2 + 8x3 >= x0 + 4x1 + 8x2 + 8x3 = [[sum(app(_x0, sum(cons(_x1, cons(_x2, _x3)))))]]
  [[plus(s(_x0), _x1)]] = 9 + 2x1 + 3x0 > 3 + 2x1 + 3x0 = [[s(plus(_x0, _x1))]]

We can thus remove the following rules:

  plus(s(X), Y) => s(plus(X, Y))

We use the dependency pair framework as described in [Kop12, Ch. 6/7], with static dependency pairs (see [KusIsoSakBla09] and the adaptation for AFSMs in [Kop12, Ch. 7.8]).
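Such translations can be made mechanical with a tiny term evaluator: interpret each function symbol under the final interpretation of this step and compare both sides on concrete naturals. A sketch (the nested-tuple term encoding is an assumption of the sketch):

```python
# Final interpretation of this rule-removal step, as Python functions.
INTERP = {
    "app":  lambda a, b: a + 2 * b,
    "cons": lambda a, b: 2 * a + 2 * b,
    "nil":  lambda: 0,
    "plus": lambda a, b: 2 * b + 3 * a,
    "s":    lambda a: 3 + a,
    "sum":  lambda a: a,
}

# Terms are nested tuples; variables are strings looked up in an environment.
def value(term, env):
    if isinstance(term, str):
        return env[term]
    head, *args = term
    return INTERP[head](*(value(a, env) for a in args))

lhs = ("plus", ("s", "X"), "Y")   # [[plus(s(X), Y)]] = 9 + 3X + 2Y
rhs = ("s", ("plus", "X", "Y"))   # [[s(plus(X, Y))]] = 3 + 3X + 2Y
```

Evaluating `lhs` and `rhs` over a grid of small X, Y shows the constant gap of 6 reported in the proof; for linear interpretations that gap is uniform in the variables.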
We thus obtain the following dependency pair problem (P_0, R_0, minimal, formative):

Dependency Pairs P_0:

  0] sum#(app(X, cons(Y, cons(Z, U)))) =#> sum#(app(X, sum(cons(Y, cons(Z, U)))))
  1] sum#(app(X, cons(Y, cons(Z, U)))) =#> sum#(cons(Y, cons(Z, U)))

Rules R_0:

  sum(cons(X, nil)) => cons(X, nil)
  sum(app(X, cons(Y, cons(Z, U)))) => sum(app(X, sum(cons(Y, cons(Z, U)))))

Thus, the original system is terminating if (P_0, R_0, minimal, formative) is finite.

We consider the dependency pair problem (P_0, R_0, minimal, formative). We place the elements of P in a dependency graph approximation G (see e.g. [Kop12, Thm. 7.27, 7.29]), as follows:

  * 0 : 0, 1
  * 1 :

This graph has the following strongly connected components:

  P_1:
    sum#(app(X, cons(Y, cons(Z, U)))) =#> sum#(app(X, sum(cons(Y, cons(Z, U)))))

By [Kop12, Thm. 7.31], we may replace any dependency pair problem (P_0, R_0, m, f) by (P_1, R_0, m, f). Thus, the original system is terminating if (P_1, R_0, minimal, formative) is finite.

We consider the dependency pair problem (P_1, R_0, minimal, formative). We will use the reduction pair processor [Kop12, Thm. 7.16]. It suffices to find a standard reduction pair [Kop12, Def. 6.69]. Thus, we must orient:

  sum#(app(X, cons(Y, cons(Z, U)))) >? sum#(app(X, sum(cons(Y, cons(Z, U)))))
  sum(cons(X, nil)) >= cons(X, nil)
  sum(app(X, cons(Y, cons(Z, U)))) >= sum(app(X, sum(cons(Y, cons(Z, U)))))

We orient these requirements with a polynomial interpretation in the natural numbers.
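The SCC step can be reproduced with a standard Tarjan pass over the approximated graph, keeping only components that contain an edge (node i stands for dependency pair i; the adjacency-dict encoding is this sketch's assumption):

```python
def sccs(graph):
    # Tarjan's algorithm: returns the strongly connected components of graph,
    # given as {node: [successor, ...]}.
    index, low, stack, on_stack, out = {}, {}, [], set(), []
    counter = [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of a component
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            out.append(comp)

    for v in graph:
        if v not in index:
            visit(v)
    return out

# Graph approximation from the proof: pair 0 reaches 0 and 1; pair 1 reaches nothing.
G = {0: [0, 1], 1: []}

# Only SCCs containing an edge matter: a multi-node component or a self-loop.
cycles = [c for c in sccs(G) if len(c) > 1 or c[0] in G.get(c[0], [])]
```

Here `cycles` is `[[0]]`: only pair 0, with its self-loop, survives, which is exactly the component P_1 kept by the proof.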
We consider usable_rules with respect to the following argument filtering:

This leaves the following ordering requirements:

  sum#(app(X, cons(Y, cons(Z, U)))) > sum#(app(X, sum(cons(Y, cons(Z, U)))))

The following interpretation satisfies the requirements:

  app = \y0y1.3y1
  cons = \y0y1.3 + 3y0 + 3y1
  nil = 0
  sum = \y0.0
  sum# = \y0.3y0

Using this interpretation, the requirements translate to:

  [[sum#(app(_x0, cons(_x1, cons(_x2, _x3))))]] = 108 + 27x1 + 81x2 + 81x3 > 0 = [[sum#(app(_x0, sum(cons(_x1, cons(_x2, _x3)))))]]

By the observations in [Kop12, Sec. 6.6], this reduction pair suffices; we may thus replace the dependency pair problem (P_1, R_0) by ({}, R_0). By the empty set processor [Kop12, Thm. 7.15] this problem may be immediately removed.

As all dependency pair problems were successfully simplified with sound (and complete) processors until nothing remained, we conclude termination.

+++ Citations +++

[Kop12] C. Kop. Higher Order Termination. PhD Thesis, 2012.
[KusIsoSakBla09] K. Kusakari, Y. Isogai, M. Sakai, and F. Blanqui. Static Dependency Pair Method Based On Strong Computability for Higher-Order Rewrite Systems. In volume 92(10) of IEICE Transactions on Information and Systems, pages 2007--2015, 2009.