/export/starexec/sandbox/solver/bin/starexec_run_FirstOrder /export/starexec/sandbox/benchmark/theBenchmark.xml /export/starexec/sandbox/output/output_files
--------------------------------------------------------------------------------
YES

We consider the system theBenchmark. We are asked to determine termination of the following first-order TRS.

  and : [o * o] --> o
  apply : [o * o] --> o
  cons : [o * o] --> o
  eq : [o * o] --> o
  false : [] --> o
  if : [o * o * o] --> o
  lambda : [o * o] --> o
  nil : [] --> o
  ren : [o * o * o] --> o
  true : [] --> o
  var : [o] --> o

  and(true, X) => X
  and(false, X) => false
  eq(nil, nil) => true
  eq(cons(X, Y), nil) => false
  eq(nil, cons(X, Y)) => false
  eq(cons(X, Y), cons(Z, U)) => and(eq(X, Z), eq(Y, U))
  eq(var(X), var(Y)) => eq(X, Y)
  eq(var(X), apply(Y, Z)) => false
  eq(var(X), lambda(Y, Z)) => false
  eq(apply(X, Y), var(Z)) => false
  eq(apply(X, Y), apply(Z, U)) => and(eq(X, Z), eq(Y, U))
  eq(apply(X, Y), lambda(Z, X)) => false
  eq(lambda(X, Y), var(Z)) => false
  eq(lambda(X, Y), apply(Y, Z)) => false
  eq(lambda(X, Y), lambda(Z, U)) => and(eq(X, Z), eq(Y, U))
  if(true, var(X), var(Y)) => var(X)
  if(false, var(X), var(Y)) => var(Y)
  ren(var(X), var(Y), var(Z)) => if(eq(X, Z), var(Y), var(Z))
  ren(X, Y, apply(Z, U)) => apply(ren(X, Y, Z), ren(X, Y, U))
  ren(X, Y, lambda(Z, U)) => lambda(var(cons(X, cons(Y, cons(lambda(Z, U), nil)))), ren(X, Y, ren(Z, var(cons(X, cons(Y, cons(lambda(Z, U), nil)))), U)))

We use the dependency pair framework as described in [Kop12, Ch. 6/7], with static dependency pairs (see [KusIsoSakBla09] and the adaptation for AFSMs in [Kop12, Ch. 7.8]).
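As an aside (not part of the prover output): the eq and and rules compute structural equality of terms, and can be run directly. A minimal Python sketch, with terms encoded as nested tuples and the constants true/false modelled by Python booleans; both encodings are assumptions, and the sketch answers false for head-symbol mismatches the rules leave undefined (e.g. cons against var):

```python
NIL = ("nil",)  # assumed term encoding: constants are 1-tuples

def and_(b, x):
    # and(true, X) => X ; and(false, X) => false
    return x if b else False

def eq(s, t):
    # eq(nil, nil) => true ; nil against cons(...) => false
    if s == NIL or t == NIL:
        return s == t
    # every rule with distinct head symbols rewrites to false
    if s[0] != t[0]:
        return False
    # eq(var(X), var(Y)) => eq(X, Y)
    if s[0] == "var":
        return eq(s[1], t[1])
    # cons / apply / lambda: and(eq(X, Z), eq(Y, U))
    return and_(eq(s[1], t[1]), eq(s[2], t[2]))

t1 = ("apply", ("var", NIL), ("lambda", ("var", NIL), ("var", NIL)))
print(eq(t1, t1))             # True
print(eq(t1, ("var", NIL)))   # False
```

The recursion mirrors the rules exactly: each call consumes one constructor on both sides, which is the descent the dependency pair analysis below makes precise.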
We thus obtain the following dependency pair problem (P_0, R_0, minimal, all):

Dependency Pairs P_0:

  0] eq#(cons(X, Y), cons(Z, U)) =#> and#(eq(X, Z), eq(Y, U))
  1] eq#(cons(X, Y), cons(Z, U)) =#> eq#(X, Z)
  2] eq#(cons(X, Y), cons(Z, U)) =#> eq#(Y, U)
  3] eq#(var(X), var(Y)) =#> eq#(X, Y)
  4] eq#(apply(X, Y), apply(Z, U)) =#> and#(eq(X, Z), eq(Y, U))
  5] eq#(apply(X, Y), apply(Z, U)) =#> eq#(X, Z)
  6] eq#(apply(X, Y), apply(Z, U)) =#> eq#(Y, U)
  7] eq#(lambda(X, Y), lambda(Z, U)) =#> and#(eq(X, Z), eq(Y, U))
  8] eq#(lambda(X, Y), lambda(Z, U)) =#> eq#(X, Z)
  9] eq#(lambda(X, Y), lambda(Z, U)) =#> eq#(Y, U)
  10] ren#(var(X), var(Y), var(Z)) =#> if#(eq(X, Z), var(Y), var(Z))
  11] ren#(var(X), var(Y), var(Z)) =#> eq#(X, Z)
  12] ren#(X, Y, apply(Z, U)) =#> ren#(X, Y, Z)
  13] ren#(X, Y, apply(Z, U)) =#> ren#(X, Y, U)
  14] ren#(X, Y, lambda(Z, U)) =#> ren#(X, Y, ren(Z, var(cons(X, cons(Y, cons(lambda(Z, U), nil)))), U))
  15] ren#(X, Y, lambda(Z, U)) =#> ren#(Z, var(cons(X, cons(Y, cons(lambda(Z, U), nil)))), U)

Rules R_0:

  and(true, X) => X
  and(false, X) => false
  eq(nil, nil) => true
  eq(cons(X, Y), nil) => false
  eq(nil, cons(X, Y)) => false
  eq(cons(X, Y), cons(Z, U)) => and(eq(X, Z), eq(Y, U))
  eq(var(X), var(Y)) => eq(X, Y)
  eq(var(X), apply(Y, Z)) => false
  eq(var(X), lambda(Y, Z)) => false
  eq(apply(X, Y), var(Z)) => false
  eq(apply(X, Y), apply(Z, U)) => and(eq(X, Z), eq(Y, U))
  eq(apply(X, Y), lambda(Z, X)) => false
  eq(lambda(X, Y), var(Z)) => false
  eq(lambda(X, Y), apply(Y, Z)) => false
  eq(lambda(X, Y), lambda(Z, U)) => and(eq(X, Z), eq(Y, U))
  if(true, var(X), var(Y)) => var(X)
  if(false, var(X), var(Y)) => var(Y)
  ren(var(X), var(Y), var(Z)) => if(eq(X, Z), var(Y), var(Z))
  ren(X, Y, apply(Z, U)) => apply(ren(X, Y, Z), ren(X, Y, U))
  ren(X, Y, lambda(Z, U)) => lambda(var(cons(X, cons(Y, cons(lambda(Z, U), nil)))), ren(X, Y, ren(Z, var(cons(X, cons(Y, cons(lambda(Z, U), nil)))), U)))

Thus, the original system is terminating if (P_0, R_0, minimal, all) is finite.
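For illustration only (this is not the prover's code): a static dependency pair arises as l# =#> s# for every rule l => r and every subterm s of r headed by a defined symbol, i.e. a symbol that is the root of some left-hand side. A minimal Python sketch on three representative rules of R_0, with terms as nested tuples and variables as plain strings (both encodings are assumptions):

```python
def subterms(t):
    # yield t and all its subterms; variables (strings) have none
    yield t
    if isinstance(t, tuple):
        for arg in t[1:]:
            yield from subterms(arg)

def mark(t):
    # attach the dependency pair marker: f(...) becomes f#(...)
    return (t[0] + "#",) + t[1:]

def dependency_pairs(rules):
    defined = {l[0] for l, _ in rules}   # roots of left-hand sides
    return [(mark(l), mark(s))
            for l, r in rules
            for s in subterms(r)
            if isinstance(s, tuple) and s[0] in defined]

rules = [
    (("eq", ("cons", "X", "Y"), ("cons", "Z", "U")),
     ("and", ("eq", "X", "Z"), ("eq", "Y", "U"))),
    (("and", ("true",), "X"), "X"),
    (("ren", "X", "Y", ("apply", "Z", "U")),
     ("apply", ("ren", "X", "Y", "Z"), ("ren", "X", "Y", "U"))),
]
pairs = dependency_pairs(rules)
# the first rule yields pairs 0-2 of P_0; the third yields pairs 12 and 13
```

Running this on all twenty rules of R_0 would reproduce the sixteen pairs of P_0 above.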
We consider the dependency pair problem (P_0, R_0, minimal, all). We place the elements of P_0 in a dependency graph approximation G (see e.g. [Kop12, Thm. 7.27, 7.29]), as follows:

  * 0 :
  * 1 : 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
  * 2 : 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
  * 3 : 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
  * 4 :
  * 5 : 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
  * 6 : 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
  * 7 :
  * 8 : 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
  * 9 : 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
  * 10 :
  * 11 : 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
  * 12 : 10, 11, 12, 13, 14, 15
  * 13 : 10, 11, 12, 13, 14, 15
  * 14 : 10, 11, 12, 13, 14, 15
  * 15 : 10, 11, 12, 13, 14, 15

This graph has the following strongly connected components:

  P_1:
    eq#(cons(X, Y), cons(Z, U)) =#> eq#(X, Z)
    eq#(cons(X, Y), cons(Z, U)) =#> eq#(Y, U)
    eq#(var(X), var(Y)) =#> eq#(X, Y)
    eq#(apply(X, Y), apply(Z, U)) =#> eq#(X, Z)
    eq#(apply(X, Y), apply(Z, U)) =#> eq#(Y, U)
    eq#(lambda(X, Y), lambda(Z, U)) =#> eq#(X, Z)
    eq#(lambda(X, Y), lambda(Z, U)) =#> eq#(Y, U)

  P_2:
    ren#(X, Y, apply(Z, U)) =#> ren#(X, Y, Z)
    ren#(X, Y, apply(Z, U)) =#> ren#(X, Y, U)
    ren#(X, Y, lambda(Z, U)) =#> ren#(X, Y, ren(Z, var(cons(X, cons(Y, cons(lambda(Z, U), nil)))), U))
    ren#(X, Y, lambda(Z, U)) =#> ren#(Z, var(cons(X, cons(Y, cons(lambda(Z, U), nil)))), U)

By [Kop12, Thm. 7.31], we may replace any dependency pair problem (P_0, R_0, m, f) by (P_1, R_0, m, f) and (P_2, R_0, m, f). Thus, the original system is terminating if each of (P_1, R_0, minimal, all) and (P_2, R_0, minimal, all) is finite.

We consider the dependency pair problem (P_2, R_0, minimal, all). We will use the reduction pair processor [Kop12, Thm. 7.16]. It suffices to find a standard reduction pair [Kop12, Def. 6.69]. Thus, we must orient:

  ren#(X, Y, apply(Z, U)) >? ren#(X, Y, Z)
  ren#(X, Y, apply(Z, U)) >? ren#(X, Y, U)
  ren#(X, Y, lambda(Z, U)) >? ren#(X, Y, ren(Z, var(cons(X, cons(Y, cons(lambda(Z, U), nil)))), U))
  ren#(X, Y, lambda(Z, U)) >?
    ren#(Z, var(cons(X, cons(Y, cons(lambda(Z, U), nil)))), U)
  and(true, X) >= X
  and(false, X) >= false
  eq(nil, nil) >= true
  eq(cons(X, Y), nil) >= false
  eq(nil, cons(X, Y)) >= false
  eq(cons(X, Y), cons(Z, U)) >= and(eq(X, Z), eq(Y, U))
  eq(var(X), var(Y)) >= eq(X, Y)
  eq(var(X), apply(Y, Z)) >= false
  eq(var(X), lambda(Y, Z)) >= false
  eq(apply(X, Y), var(Z)) >= false
  eq(apply(X, Y), apply(Z, U)) >= and(eq(X, Z), eq(Y, U))
  eq(apply(X, Y), lambda(Z, X)) >= false
  eq(lambda(X, Y), var(Z)) >= false
  eq(lambda(X, Y), apply(Y, Z)) >= false
  eq(lambda(X, Y), lambda(Z, U)) >= and(eq(X, Z), eq(Y, U))
  if(true, var(X), var(Y)) >= var(X)
  if(false, var(X), var(Y)) >= var(Y)
  ren(var(X), var(Y), var(Z)) >= if(eq(X, Z), var(Y), var(Z))
  ren(X, Y, apply(Z, U)) >= apply(ren(X, Y, Z), ren(X, Y, U))
  ren(X, Y, lambda(Z, U)) >= lambda(var(cons(X, cons(Y, cons(lambda(Z, U), nil)))), ren(X, Y, ren(Z, var(cons(X, cons(Y, cons(lambda(Z, U), nil)))), U)))

We orient these requirements with a polynomial interpretation in the natural numbers.
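The prover searches for such an interpretation automatically, but its claim is easy to spot-check by evaluation. A minimal Python sketch (not part of the prover's toolchain; the tuple term encoding and the small test range are assumptions) that confirms the two strict inequalities of the interpretation reported below on all small natural-number assignments:

```python
from itertools import product

I = {                                   # the interpretation reported by the prover
    "and":    lambda a, b: 0,
    "apply":  lambda a, b: 1 + a + b,
    "cons":   lambda a, b: 0,
    "eq":     lambda a, b: 0,
    "false":  0,
    "if":     lambda a, b, c: 0,
    "lambda": lambda a, b: 2 * b,
    "nil":    0,
    "ren":    lambda a, b, c: c,
    "ren#":   lambda a, b, c: 2 * b + 2 * c,
    "true":   0,
    "var":    lambda a: 0,
}

def val(t, env):
    """Evaluate term t under I; env maps variable names to naturals."""
    if isinstance(t, str):              # variable
        return env[t]
    f, *args = t
    i = I[f]
    return i(*(val(a, env) for a in args)) if callable(i) else i

lhs = ("ren#", "X", "Y", ("apply", "Z", "U"))
for rhs in (("ren#", "X", "Y", "Z"), ("ren#", "X", "Y", "U")):
    for vs in product(range(4), repeat=4):
        env = dict(zip("XYZU", vs))
        assert val(lhs, env) > val(rhs, env)   # strict decrease holds
```

A finite-range check like this is only evidence, of course; the prover's argument is symbolic (the "+2" from apply dominates for all naturals), and holds unconditionally.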
We consider usable rules with respect to the following argument filtering:

  if(x_1,x_2,x_3) = if(x_2,x_3)

This leaves the following ordering requirements:

  ren#(X, Y, apply(Z, U)) >= ren#(X, Y, Z)
  ren#(X, Y, apply(Z, U)) > ren#(X, Y, U)
  ren#(X, Y, lambda(Z, U)) >= ren#(X, Y, ren(Z, var(cons(X, cons(Y, cons(lambda(Z, U), nil)))), U))
  ren#(X, Y, lambda(Z, U)) >= ren#(Z, var(cons(X, cons(Y, cons(lambda(Z, U), nil)))), U)
  if(true, var(X), var(Y)) >= var(X)
  if(false, var(X), var(Y)) >= var(Y)
  ren(var(X), var(Y), var(Z)) >= if(eq(X, Z), var(Y), var(Z))
  ren(X, Y, apply(Z, U)) >= apply(ren(X, Y, Z), ren(X, Y, U))
  ren(X, Y, lambda(Z, U)) >= lambda(var(cons(X, cons(Y, cons(lambda(Z, U), nil)))), ren(X, Y, ren(Z, var(cons(X, cons(Y, cons(lambda(Z, U), nil)))), U)))

The following interpretation satisfies the requirements:

  and = \y0y1.0
  apply = \y0y1.1 + y0 + y1
  cons = \y0y1.0
  eq = \y0y1.0
  false = 0
  if = \y0y1y2.0
  lambda = \y0y1.2y1
  nil = 0
  ren = \y0y1y2.y2
  ren# = \y0y1y2.2y1 + 2y2
  true = 0
  var = \y0.0

Using this interpretation, the requirements translate to:

  [[ren#(_x0, _x1, apply(_x2, _x3))]] = 2 + 2x1 + 2x2 + 2x3 > 2x1 + 2x2 = [[ren#(_x0, _x1, _x2)]]
  [[ren#(_x0, _x1, apply(_x2, _x3))]] = 2 + 2x1 + 2x2 + 2x3 > 2x1 + 2x3 = [[ren#(_x0, _x1, _x3)]]
  [[ren#(_x0, _x1, lambda(_x2, _x3))]] = 2x1 + 4x3 >= 2x1 + 2x3 = [[ren#(_x0, _x1, ren(_x2, var(cons(_x0, cons(_x1, cons(lambda(_x2, _x3), nil)))), _x3))]]
  [[ren#(_x0, _x1, lambda(_x2, _x3))]] = 2x1 + 4x3 >= 2x3 = [[ren#(_x2, var(cons(_x0, cons(_x1, cons(lambda(_x2, _x3), nil)))), _x3)]]
  [[if(true, var(_x0), var(_x1))]] = 0 >= 0 = [[var(_x0)]]
  [[if(false, var(_x0), var(_x1))]] = 0 >= 0 = [[var(_x1)]]
  [[ren(var(_x0), var(_x1), var(_x2))]] = 0 >= 0 = [[if(eq(_x0, _x2), var(_x1), var(_x2))]]
  [[ren(_x0, _x1, apply(_x2, _x3))]] = 1 + x2 + x3 >= 1 + x2 + x3 = [[apply(ren(_x0, _x1, _x2), ren(_x0, _x1, _x3))]]
  [[ren(_x0, _x1, lambda(_x2, _x3))]] = 2x3 >= 2x3 = [[lambda(var(cons(_x0, cons(_x1, cons(lambda(_x2, _x3), nil)))), ren(_x0, _x1, ren(_x2,
var(cons(_x0, cons(_x1, cons(lambda(_x2, _x3), nil)))), _x3)))]]

By the observations in [Kop12, Sec. 6.6], this reduction pair suffices; we may thus replace the dependency pair problem (P_2, R_0, minimal, all) by (P_3, R_0, minimal, all), where P_3 consists of:

  ren#(X, Y, lambda(Z, U)) =#> ren#(X, Y, ren(Z, var(cons(X, cons(Y, cons(lambda(Z, U), nil)))), U))
  ren#(X, Y, lambda(Z, U)) =#> ren#(Z, var(cons(X, cons(Y, cons(lambda(Z, U), nil)))), U)

Thus, the original system is terminating if each of (P_1, R_0, minimal, all) and (P_3, R_0, minimal, all) is finite.

We consider the dependency pair problem (P_3, R_0, minimal, all). We will use the reduction pair processor [Kop12, Thm. 7.16]. It suffices to find a standard reduction pair [Kop12, Def. 6.69]. Thus, we must orient:

  ren#(X, Y, lambda(Z, U)) >? ren#(X, Y, ren(Z, var(cons(X, cons(Y, cons(lambda(Z, U), nil)))), U))
  ren#(X, Y, lambda(Z, U)) >? ren#(Z, var(cons(X, cons(Y, cons(lambda(Z, U), nil)))), U)
  and(true, X) >= X
  and(false, X) >= false
  eq(nil, nil) >= true
  eq(cons(X, Y), nil) >= false
  eq(nil, cons(X, Y)) >= false
  eq(cons(X, Y), cons(Z, U)) >= and(eq(X, Z), eq(Y, U))
  eq(var(X), var(Y)) >= eq(X, Y)
  eq(var(X), apply(Y, Z)) >= false
  eq(var(X), lambda(Y, Z)) >= false
  eq(apply(X, Y), var(Z)) >= false
  eq(apply(X, Y), apply(Z, U)) >= and(eq(X, Z), eq(Y, U))
  eq(apply(X, Y), lambda(Z, X)) >= false
  eq(lambda(X, Y), var(Z)) >= false
  eq(lambda(X, Y), apply(Y, Z)) >= false
  eq(lambda(X, Y), lambda(Z, U)) >= and(eq(X, Z), eq(Y, U))
  if(true, var(X), var(Y)) >= var(X)
  if(false, var(X), var(Y)) >= var(Y)
  ren(var(X), var(Y), var(Z)) >= if(eq(X, Z), var(Y), var(Z))
  ren(X, Y, apply(Z, U)) >= apply(ren(X, Y, Z), ren(X, Y, U))
  ren(X, Y, lambda(Z, U)) >= lambda(var(cons(X, cons(Y, cons(lambda(Z, U), nil)))), ren(X, Y, ren(Z, var(cons(X, cons(Y, cons(lambda(Z, U), nil)))), U)))

We orient these requirements with a polynomial interpretation in the natural numbers.
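As before, the orientation claim can be spot-checked numerically; the sketch below (again not the prover's code; tuple encoding and the finite test range are assumptions) evaluates the two dependency pairs of P_3 under the interpretation reported below, this time spelling out the full var(cons(...)) context:

```python
from itertools import product

I2 = {                                  # the second interpretation reported below
    "and":    lambda a, b: b,
    "apply":  lambda a, b: 0,
    "cons":   lambda a, b: 0,
    "eq":     lambda a, b: 0,
    "false":  0,
    "if":     lambda a, b, c: 0,
    "lambda": lambda a, b: 2 + 2 * b,
    "nil":    0,
    "ren":    lambda a, b, c: c,
    "ren#":   lambda a, b, c: c + 2 * b,
    "true":   0,
    "var":    lambda a: 0,
}

def val2(t, env):
    if isinstance(t, str):              # variable
        return env[t]
    f, *args = t
    i = I2[f]
    return i(*(val2(a, env) for a in args)) if callable(i) else i

ctx  = ("var", ("cons", "X", ("cons", "Y", ("cons", ("lambda", "Z", "U"), ("nil",)))))
lhs  = ("ren#", "X", "Y", ("lambda", "Z", "U"))
rhs1 = ("ren#", "X", "Y", ("ren", "Z", ctx, "U"))
rhs2 = ("ren#", "Z", ctx, "U")
for vs in product(range(4), repeat=4):
    env = dict(zip("XYZU", vs))
    assert val2(lhs, env) > val2(rhs1, env)
    assert val2(lhs, env) > val2(rhs2, env)
```

The key design choice in this interpretation is lambda = \y0y1.2 + 2y1 paired with ren = \y0y1y2.y2 and var = \y0.0: the constant 2 pays for unfolding one lambda, while the ren and var contexts on the right contribute nothing.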
The following interpretation satisfies the requirements:

  and = \y0y1.y1
  apply = \y0y1.0
  cons = \y0y1.0
  eq = \y0y1.0
  false = 0
  if = \y0y1y2.0
  lambda = \y0y1.2 + 2y1
  nil = 0
  ren = \y0y1y2.y2
  ren# = \y0y1y2.y2 + 2y1
  true = 0
  var = \y0.0

Using this interpretation, the requirements translate to:

  [[ren#(_x0, _x1, lambda(_x2, _x3))]] = 2 + 2x1 + 2x3 > x3 + 2x1 = [[ren#(_x0, _x1, ren(_x2, var(cons(_x0, cons(_x1, cons(lambda(_x2, _x3), nil)))), _x3))]]
  [[ren#(_x0, _x1, lambda(_x2, _x3))]] = 2 + 2x1 + 2x3 > x3 = [[ren#(_x2, var(cons(_x0, cons(_x1, cons(lambda(_x2, _x3), nil)))), _x3)]]
  [[and(true, _x0)]] = x0 >= x0 = [[_x0]]
  [[and(false, _x0)]] = x0 >= 0 = [[false]]
  [[eq(nil, nil)]] = 0 >= 0 = [[true]]
  [[eq(cons(_x0, _x1), nil)]] = 0 >= 0 = [[false]]
  [[eq(nil, cons(_x0, _x1))]] = 0 >= 0 = [[false]]
  [[eq(cons(_x0, _x1), cons(_x2, _x3))]] = 0 >= 0 = [[and(eq(_x0, _x2), eq(_x1, _x3))]]
  [[eq(var(_x0), var(_x1))]] = 0 >= 0 = [[eq(_x0, _x1)]]
  [[eq(var(_x0), apply(_x1, _x2))]] = 0 >= 0 = [[false]]
  [[eq(var(_x0), lambda(_x1, _x2))]] = 0 >= 0 = [[false]]
  [[eq(apply(_x0, _x1), var(_x2))]] = 0 >= 0 = [[false]]
  [[eq(apply(_x0, _x1), apply(_x2, _x3))]] = 0 >= 0 = [[and(eq(_x0, _x2), eq(_x1, _x3))]]
  [[eq(apply(_x0, _x1), lambda(_x2, _x0))]] = 0 >= 0 = [[false]]
  [[eq(lambda(_x0, _x1), var(_x2))]] = 0 >= 0 = [[false]]
  [[eq(lambda(_x0, _x1), apply(_x1, _x2))]] = 0 >= 0 = [[false]]
  [[eq(lambda(_x0, _x1), lambda(_x2, _x3))]] = 0 >= 0 = [[and(eq(_x0, _x2), eq(_x1, _x3))]]
  [[if(true, var(_x0), var(_x1))]] = 0 >= 0 = [[var(_x0)]]
  [[if(false, var(_x0), var(_x1))]] = 0 >= 0 = [[var(_x1)]]
  [[ren(var(_x0), var(_x1), var(_x2))]] = 0 >= 0 = [[if(eq(_x0, _x2), var(_x1), var(_x2))]]
  [[ren(_x0, _x1, apply(_x2, _x3))]] = 0 >= 0 = [[apply(ren(_x0, _x1, _x2), ren(_x0, _x1, _x3))]]
  [[ren(_x0, _x1, lambda(_x2, _x3))]] = 2 + 2x3 >= 2 + 2x3 = [[lambda(var(cons(_x0, cons(_x1, cons(lambda(_x2, _x3), nil)))), ren(_x0, _x1, ren(_x2, var(cons(_x0, cons(_x1, cons(lambda(_x2, _x3), nil)))), _x3)))]]

By the observations
in [Kop12, Sec. 6.6], this reduction pair suffices; we may thus replace the dependency pair problem (P_3, R_0, minimal, all) by ({}, R_0, minimal, all). By the empty set processor [Kop12, Thm. 7.15] this problem may be immediately removed.

Thus, the original system is terminating if (P_1, R_0, minimal, all) is finite.

We consider the dependency pair problem (P_1, R_0, minimal, all). We apply the subterm criterion with the following projection function:

  nu(eq#) = 1

Thus, we can orient the dependency pairs as follows:

  nu(eq#(cons(X, Y), cons(Z, U))) = cons(X, Y) |> X = nu(eq#(X, Z))
  nu(eq#(cons(X, Y), cons(Z, U))) = cons(X, Y) |> Y = nu(eq#(Y, U))
  nu(eq#(var(X), var(Y))) = var(X) |> X = nu(eq#(X, Y))
  nu(eq#(apply(X, Y), apply(Z, U))) = apply(X, Y) |> X = nu(eq#(X, Z))
  nu(eq#(apply(X, Y), apply(Z, U))) = apply(X, Y) |> Y = nu(eq#(Y, U))
  nu(eq#(lambda(X, Y), lambda(Z, U))) = lambda(X, Y) |> X = nu(eq#(X, Z))
  nu(eq#(lambda(X, Y), lambda(Z, U))) = lambda(X, Y) |> Y = nu(eq#(Y, U))

By [Kop12, Thm. 7.35], we may replace a dependency pair problem (P_1, R_0, minimal, f) by ({}, R_0, minimal, f). By the empty set processor [Kop12, Thm. 7.15] this problem may be immediately removed.

As all dependency pair problems were successfully simplified with sound (and complete) processors until nothing remained, we conclude termination.

+++ Citations +++

[Kop12] C. Kop. Higher Order Termination. PhD Thesis, 2012.

[KusIsoSakBla09] K. Kusakari, Y. Isogai, M. Sakai, and F. Blanqui. Static Dependency Pair Method Based On Strong Computability for Higher-Order Rewrite Systems. In volume 92(10) of IEICE Transactions on Information and Systems, pages 2007--2015, 2009.