Theory LambdaCalculus

chapter "The Lambda Calculus"

  text ‹
    In this second part of the article, we apply the residuated transition system framework
    developed in the first part to the theory of reductions in Church's λ›-calculus.
    The underlying idea is to exhibit λ›-terms as states (identities) of an RTS,
    with reduction steps as non-identity transitions.  We represent both states and transitions
    in a unified, variable-free syntax based on de Bruijn indices.
    A difficulty one faces in regarding the λ›-calculus as an RTS is that
    ``elementary reductions'', in which just one redex is contracted, are not preserved by
    residuation: an elementary reduction can have zero or more residuals along another
    elementary reduction.  However, ``parallel reductions'', which permit multiple redexes
    existing in a term to be contracted in a single step, are preserved
    by residuation.  For this reason, in our syntax each term represents a parallel reduction
    of zero or more redexes, with a parallel reduction of zero redexes representing an identity.
    We have syntactic constructors for variables, λ›-abstractions, and applications.
    An additional constructor represents a β›-redex that has been marked for contraction.
    This is a slightly different approach than that taken by other authors
    (\emph{e.g.}~cite"barendregt" or cite"huet-residual-theory"), in which it is the
    application constructor that is marked to indicate a redex to be contracted,
    but it seems more natural in the present setting in which a single syntax is used to
    represent both terms and reductions.

    Once the syntax has been defined, we define the residuation operation and prove
    that it satisfies the conditions for a weakly extensional RTS.  In this RTS, the source
    of a term is obtained by ``erasing'' the markings on redexes, leaving an identity term.
    The target of a term is the contractum of the parallel reduction it represents.
    As the definition of residuation involves the use of substitution, a necessary prerequisite
    is to develop the theory of substitution using de Bruijn indices.
    In addition, various properties concerning the commutation of residuation and substitution
    have to be proved.  This part of the work has benefited greatly from previous work
    of Huet cite"huet-residual-theory", in which the theory of residuation was formalized
    in the proof assistant Coq.  In particular, it was very helpful to have already available
    known-correct statements of various lemmas regarding indices, substitution, and residuation.
    The development of the theory culminates in the proof of L\'{e}vy's ``Cube Lemma''
    cite"levy", which is the key axiom in the definition of RTS.

    Once reductions in the λ›-calculus have been cast as transitions of an RTS,
    we are able to take advantage of generic results already proved for RTS's; in particular,
    the construction of the RTS of paths, which represent reduction sequences.
    Very little additional effort is required at this point to prove the Church-Rosser Theorem.
    Then, after proving a series of miscellaneous lemmas about reduction paths,
    we turn to the study of developments.  A development of a term is a reduction path from
    that term in which the only redexes that are contracted are those that are residuals of
    redexes in the original term.  We prove the Finite Developments Theorem: all developments
    are finite.  The proof given here follows that given by de Vrijer cite"deVrijer",
    except that here we make the adaptations necessary for a syntax based on de Bruijn
    indices, rather than the classical named-variable syntax used by de Vrijer.
    Using the Finite Developments Theorem, we define a function that takes a term and constructs
    a ``complete development'' of that term, which is a development in which no residuals of
    original redexes remain to be contracted.

    We then turn our attention to ``standard reduction paths'', which are reduction paths in
    which redexes are contracted in a left-to-right order, perhaps with some skips.
    After giving a definition of standard reduction paths, we define a function that takes a
    term and constructs a complete development that is also standard.
    Using this function as a base case, we then define a function that takes an arbitrary
    parallel reduction path and transforms it into a standard reduction path that is congruent
    to the given path.  The algorithm used is roughly analogous to insertion sort.
    We use this function to prove a strong form of the Standardization Theorem: every reduction
    path is congruent to a standard reduction path.  As a corollary of the Standardization
    Theorem, we prove the Leftmost Reduction Theorem: leftmost reduction is a normalizing
    reduction strategy.

    It should be noted that, in this article, we consider only the λβ›-calculus.
    In the early stages of this work, I made an exploratory attempt to incorporate η›-reduction
    as well, but after encountering some unanticipated difficulties I decided not to attempt that
    extension until the β›-only case had been well-developed.
  ›

theory LambdaCalculus
imports Main ResiduatedTransitionSystem
begin

  section "Syntax"

  locale lambda_calculus
  begin

    text ‹
      The syntax of terms has constructors Var› for variables, Lam› for λ›-abstraction,
      and App› for application.  In addition, there is a constructor Beta› which is used
      to represent a β›-redex that has been marked for contraction.  The idea is that
      a term Beta t u› represents a marked version of the term App (Lam t) u›.
      Finally, there is a constructor Nil› which is used to represent the null element
      required for the residuation operation.
    ›

    datatype (discs_sels) lambda =
      Nil
    | Var nat
    | Lam lambda
    | App lambda lambda
    | Beta lambda lambda
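
    text ‹
      For example, the classical term (λx. x x) (λz. z)›, written with named variables and
      with its top-level redex marked for contraction, is represented here by
      Beta (App (Var 0) (Var 0)) (Lam (Var 0))›, whereas the same term with the redex left
      unmarked is represented by App (Lam (App (Var 0) (Var 0))) (Lam (Var 0))›.
    ›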

    text ‹
      The following notation renders Beta t u› as a ``marked'' version of App (Lam t) u›,
      even though the former involves only a single constructor, whereas the latter
      involves two.
    ›

    notation Nil  ("")
    notation Var  ("«_»")
    notation Lam  ("λ[_]")
    notation App  (infixl "" 55)
    notation Beta ("(λ[_]  _)" [55, 56] 55)

    text ‹
      The following function computes the set of free variables of a term.
      Note that since variables are represented by numeric indices, this is a set of numbers.
    ›

    fun FV
    where "FV  = {}"
        | "FV «i» = {i}"
        | "FV λ[t] = (λn. n - 1) ` (FV t - {0})"
        | "FV (t  u) = FV t  FV u"
        | "FV (λ[t]  u) = (λn. n - 1) ` (FV t - {0})  FV u"

    subsection "Some Orderings for Induction"

    text ‹
      We will need to do some simultaneous inductions on pairs and triples of subterms
      of given terms.  We prove the well-foundedness of the associated relations using
      the following size measure.
    ›

    fun size :: "lambda  nat"
    where "size  = 0"
        | "size «_» = 1"
        | "size λ[t] = size t + 1"
        | "size (t  u) = size t + size u + 1"
        | "size (λ[t]  u) = (size t + 1) + size u + 1"

    lemma wf_if_img_lt:
    fixes r :: "('a * 'a) set" and f :: "'a  nat"
    assumes "x y. (x, y)  r  f x < f y"
    shows "wf r"
      using assms
      by (metis in_measure wf_iff_no_infinite_down_chain wf_measure)

    inductive subterm
    where "t. subterm t λ[t]"
        | "t u. subterm t (t  u)"
        | "t u. subterm u (t  u)"
        | "t u. subterm t (λ[t]  u)"
        | "t u. subterm u (λ[t]  u)"
        | "t u v. subterm t u; subterm u v  subterm t v"

    lemma subterm_implies_smaller:
    shows "subterm t u  size t < size u"
      by (induct rule: subterm.induct) auto

    abbreviation subterm_rel
    where "subterm_rel  {(t, u). subterm t u}"

    lemma wf_subterm_rel:
    shows "wf subterm_rel"
      using subterm_implies_smaller wf_if_img_lt
      by (metis case_prod_conv mem_Collect_eq)

    abbreviation subterm_pair_rel
    where "subterm_pair_rel  {((t1, t2), u1, u2). subterm t1 u1  subterm t2 u2}"

    lemma wf_subterm_pair_rel:
    shows "wf subterm_pair_rel"
      using subterm_implies_smaller
            wf_if_img_lt [of subterm_pair_rel "λ(t1, t2). max (size t1) (size t2)"]
      by fastforce

    abbreviation subterm_triple_rel
    where "subterm_triple_rel 
           {((t1, t2, t3), u1, u2, u3). subterm t1 u1  subterm t2 u2  subterm t3 u3}"

    lemma wf_subterm_triple_rel:
    shows "wf subterm_triple_rel"
      using subterm_implies_smaller
            wf_if_img_lt [of subterm_triple_rel
                             "λ(t1, t2, t3). max (max (size t1) (size t2)) (size t3)"]
      by fastforce

    lemma subterm_lemmas:
    shows "subterm t λ[t]"
    and "subterm t (λ[t]  u)  subterm u (λ[t]  u)"
    and "subterm t (t  u)  subterm u (t  u)"
    and "subterm t (λ[t]  u)  subterm u (λ[t]  u)"
      by (metis subterm.simps)+

    subsection "Arrows and Identities"

    text ‹
      Here we define some special classes of terms.
      An ``arrow'' is a term that contains no occurrences of Nil›.
      An ``identity'' is an arrow that contains no occurrences of Beta›.
      It will be important for the commutation of substitution and residuation later on
      that substitution not be used in a way that could create any marked redexes;
      for example, we don't want the substitution of Lam (Var 0)› for Var 0› in an
      application App (Var 0) (Var 0)› to create a new ``marked'' redex.
      The use of the separate constructor Beta› for marked redexes automatically avoids this.
    ›

    fun Arr
    where "Arr  = False"
        | "Arr «_» = True"
        | "Arr λ[t] = Arr t"
        | "Arr (t  u) = (Arr t  Arr u)"
        | "Arr (λ[t]  u) = (Arr t  Arr u)"

    lemma Arr_not_Nil:
    assumes "Arr t"
    shows "t  "
      using assms by auto

    fun Ide
    where "Ide  = False"
        | "Ide «_» = True"
        | "Ide λ[t] = Ide t"
        | "Ide (t  u) = (Ide t  Ide u)"
        | "Ide (λ[t]  u) = False"

    lemma Ide_implies_Arr:
    shows "Ide t  Arr t"
      by (induct t) auto

    lemma ArrE [elim]:
    assumes "Arr t"
    and "i. t = «i»  T"
    and "u. t = λ[u]  T"
    and "u v. t = u  v  T"
    and "u v. t = λ[u]  v  T"
    shows T
      using assms
      by (cases t) auto

    subsection "Raising Indices"

    text ‹
      For substitution, we need to be able to raise the indices of all free variables
      in a subterm by a specified amount.  To do this recursively, we need to keep track
      of the depth of nesting of λ›'s and only raise the indices of variables that are
      already greater than or equal to that depth, as these are the variables that are free
      in the current context.  This leads to defining a function Raise› that has two arguments:
      the depth threshold d› and the increment n› to be added to indices at or above that threshold.
    ›

    fun Raise
    where "Raise _ _  = "
        | "Raise d n «i» = (if i  d then «i+n» else «i»)"
        | "Raise d n λ[t] = λ[Raise (Suc d) n t]"
        | "Raise d n (t  u) = Raise d n t  Raise d n u"
        | "Raise d n (λ[t]  u) = λ[Raise (Suc d) n t]  Raise d n u"

    text ‹
      Ultimately, the definition of substitution will only directly involve the function
      that raises the indices of all variables that are free in the outermost context
      of a term, so we introduce an abbreviation for this special case.
    ›

    abbreviation raise
    where "raise == Raise 0"

    lemma size_Raise:
    shows "d. size (Raise d n t) = size t"
      by (induct t) auto

    lemma Raise_not_Nil:
    assumes "t  "
    shows "Raise d n t  "
      using assms
      by (cases t) auto

    lemma FV_Raise:
    shows "FV (Raise d n t) = (λx. if x  d then x + n else x) ` FV t"
      apply (induct t arbitrary: d n)
          apply auto[3]
            apply force
           apply force
          apply force
         apply force
        apply fastforce
    proof -
      fix t u d n
      assume ind1: "d n. FV (Raise d n t) = (λx. if d  x then x + n else x) ` FV t"
      assume ind2: "d n. FV (Raise d n u) = (λx. if d  x then x + n else x) ` FV u"
      have "FV (Raise d n (λ[t]  u)) = 
            (λx. x - Suc 0) ` ((λx. x + n) `
              (FV t  {x. Suc d  x})  FV t  {x. ¬ Suc d  x} - {0}) 
            ((λx. x + n) ` (FV u  {x. d  x})  FV u  {x. ¬ d  x})"
        using ind1 ind2 by simp
      also have "... = (λx. if d  x then x + n else x) ` FV (λ[t]  u)"
        apply simp
        by force
      finally show "FV (Raise d n (λ[t]  u)) =
                    (λx. if d  x then x + n else x) ` FV (λ[t]  u)"
        by blast
    qed

    lemma Arr_Raise:
    shows "Arr t  Arr (Raise d n t)"
      using FV_Raise
      by (induct t arbitrary: d n) auto

    lemma Ide_Raise:
    shows "Ide t  Ide (Raise d n t)"
      by (induct t arbitrary: d n) auto

    lemma Raise_0:
    shows "Raise d 0 t = t"
      by (induct t arbitrary: d) auto

    lemma Raise_Suc:
    shows "Raise d (Suc n) t = Raise d 1 (Raise d n t)"
      by (induct t arbitrary: d n) auto

    lemma Raise_Var:
    shows "Raise d n «i» = «if i < d then i else i + n»"
      by auto

    text ‹
      The following development of the properties of raising indices, substitution, and
      residuation has benefited greatly from the previous work by Huet cite"huet-residual-theory".
      In particular, it was very helpful to have correct statements of various lemmas
      available, rather than having to reconstruct them.
    ›

    lemma Raise_plus:
    shows "Raise d (m + n) t = Raise (d + m) n (Raise d m t)"
      by (induct t arbitrary: d m n) auto

    lemma Raise_plus':
    shows "d'  d + n; d  d'  Raise d (m + n) t = Raise d' m (Raise d n t)"
      by (induct t arbitrary: n m d d') auto

    lemma Raise_Raise:
    shows "i  n  Raise i p (Raise n k t) = Raise (p + n) k (Raise i p t)"
      by (induct t arbitrary: i k n p) auto

    lemma raise_plus:
    shows "d  n  raise (m + n) t = Raise d m (raise n t)"
      using Raise_plus' by auto

    lemma raise_Raise:
    shows "raise p (Raise n k t) = Raise (p + n) k (raise p t)"
      by (simp add: Raise_Raise)

    lemma Raise_inj:
    shows "Raise d n t = Raise d n u  t = u"
    proof (induct t arbitrary: d n u)
      show "d n u. Raise d n  = Raise d n u   = u"
        by (metis Raise.simps(1) Raise_not_Nil)
      show "x d n. Raise d n «x» = Raise d n u  «x» = u" for u
        using Raise_Var
        apply (cases u, auto)
        by (metis add_lessD1 add_right_imp_eq)
      show "t d n. d n u'. Raise d n t = Raise d n u'  t = u';
                      Raise d n λ[t] = Raise d n u
                         λ[t] = u"
        for u
        apply (cases u, auto)
        by (metis lambda.distinct(9))
      show "t1 t2 d n. d n u'. Raise d n t1 = Raise d n u'  t1 = u';
                         d n u'. Raise d n t2 = Raise d n u'  t2 = u';
                         Raise d n (t1  t2) = Raise d n u
                            t1  t2 = u"
        for u
        apply (cases u, auto)
        by (metis lambda.distinct(11))
      show "t1 t2 d n. d n u'. Raise d n t1 = Raise d n u'  t1 = u';
                         d n u'. Raise d n t2 = Raise d n u'  t2 = u';
                         Raise d n (λ[t1]  t2) = Raise d n u
                            λ[t1]  t2 = u"
        for u
        apply (cases u, auto)
        by (metis lambda.distinct(13))
    qed

    subsection "Substitution"

    text ‹
      Following cite"huet-residual-theory", we now define a generalized substitution operation
      with adjustment of indices.  The ultimate goal is to define the result of contraction
      of a marked redex Beta t u› to be subst u t›.  However, to be able to give a proper
      recursive definition of subst›, we need to introduce a parameter n› to keep track of the
      depth of nesting of Lam›'s as we descend into the term t›.  So, instead of subst u t›
      simply substituting u› for occurrences of Var 0›, Subst n u t› will be substituting
      for occurrences of Var n›, and the term u› will have the indices of its free variables
      raised by n› before replacing Var n›.  In addition, any variables in t› that have
      indices greater than n› will have these indices lowered by one, to account for the
      outermost Lam› that is being removed by the contraction.  We can then define
      subst u t› to be Subst 0 u t›.
    ›

    fun Subst
    where "Subst _ _  = "
        | "Subst n v «i» = (if n < i then «i-1» else if n = i then raise n v else «i»)"
        | "Subst n v λ[t] = λ[Subst (Suc n) v t]"
        | "Subst n v (t  u) = Subst n v t  Subst n v u"
        | "Subst n v (λ[t]  u) = λ[Subst (Suc n) v t]  Subst n v u"

    abbreviation subst
    where "subst  Subst 0"

    lemma Subst_Nil:
    shows "Subst n v  = "
      by (cases "v = ") auto

    lemma Subst_not_Nil:
    assumes "v  " and "t  "
    shows "t    Subst n v t  "
      using assms Raise_not_Nil
      by (induct t) auto

    text ‹
      The following expression summarizes how the set of free variables of a term Subst d v t›,
      obtained by substituting v› into t› at depth d›, relates to the sets of free variables
      of t› and v›.  This expression is not used in the subsequent formal development,
      but it has been left here as an aid to understanding.
    ›

    abbreviation FVS
    where "FVS d v t  (FV t  {x. x < d}) 
                        (λx. x - 1) ` {x. x > d  x  FV t} 
                        (λx. x + d) ` {x. d  FV t  x  FV v}"

    lemma FV_Subst:
    shows "FV (Subst d v t) = FVS d v t"
    proof (induct t arbitrary: d v)
      have "d t v. (λx. x - 1) ` (FVS (Suc d) v t - {0}) = FVS d v λ[t]"
        by auto force+  (* 8 sec *)
      thus "d t v. (d v. FV (Subst d v t) = FVS d v t)
                               FV (Subst d v λ[t]) = FVS d v λ[t]"
        by simp
      have "t u v d. (λx. x - 1) ` (FVS (Suc d) v t - {0})  FVS d v u = FVS d v (λ[t]  u)"
        by auto force+  (* 25 sec *)
      thus "t u v d. d v. FV (Subst d v t) = FVS d v t;
                       d v. FV (Subst d v u) = FVS d v u
                             FV (Subst d v (λ[t]  u)) = FVS d v (λ[t]  u)"
        by simp
    qed (auto simp add: FV_Raise)

    lemma Arr_Subst:
    assumes "Arr v"
    shows "Arr t  Arr (Subst n v t)"
      using assms Arr_Raise FV_Subst
      by (induct t arbitrary: n) auto

    lemma vacuous_Subst:
    shows "Arr v; i  FV t  Raise i 1 (Subst i v t) = t"
      apply (induct t arbitrary: i v, auto)
      by force+

    lemma Ide_Subst_iff:
    shows "Ide (Subst n v t)  Ide t  (n  FV t  Ide v)"
      using Ide_Raise vacuous_Subst
      apply (induct t arbitrary: n)
          apply auto[5]
       apply fastforce
      by (metis Diff_empty Diff_insert0 One_nat_def diff_Suc_1 image_iff insertE
                insert_Diff nat.distinct(1))

    lemma Ide_Subst:
    shows "Ide t; Ide v  Ide (Subst n v t)"
      using Ide_Raise
      by (induct t arbitrary: n) auto

    lemma Raise_Subst:
    shows "Raise (p + n) k (Subst p v t) = Subst p (Raise n k v) (Raise (Suc (p + n)) k t)"
      using raise_Raise
      apply (induct t arbitrary: v k n p, auto)
      by (metis add_Suc)+

    lemma Raise_Subst':
    assumes "t  "
    shows "v  ; k  n  Raise k p (Subst n v t) = Subst (p + n) v (Raise k p t)"
      using assms raise_plus
      apply (induct t arbitrary: v k n p, auto)
          apply (metis Raise.simps(1) Subst_Nil Suc_le_mono add_Suc_right)
         apply fastforce
        apply fastforce
       apply (metis Raise.simps(1) Subst_Nil Suc_le_mono add_Suc_right)
      by fastforce

    lemma Raise_subst:
    shows "Raise n k (subst v t) = subst (Raise n k v) (Raise (Suc n) k t)"
      using Raise_0
      apply (induct t arbitrary: v k n, auto)
      by (metis One_nat_def Raise_Subst plus_1_eq_Suc)+

    lemma raise_Subst:
    assumes "t  "
    shows "v    raise p (Subst n v t) = Subst (p + n) v (raise p t)"
      using assms Raise_plus raise_Raise Raise_Subst'
      apply (induct t arbitrary: v n p)
      by (meson zero_le)+

    lemma Subst_Raise:
    shows "v  ; d  m; m  n + d  Subst m v (Raise d (Suc n) t) = Raise d n t"
      by (induct t arbitrary: v m n d) auto

    lemma Subst_raise:
    shows "v  ; m  n  Subst m v (raise (Suc n) t) = raise n t"
      using Subst_Raise
      by (induct t arbitrary: v m n) auto

    lemma Subst_Subst:
    shows "v  ; w   
             Subst (m + n) w (Subst m v t) = Subst m (Subst n w v) (Subst (Suc (m + n)) w t)"
      using Raise_0 raise_Subst Subst_not_Nil Subst_raise
      apply (induct t arbitrary: v w m n, auto)
      by (metis add_Suc)+

    text ‹
      The Substitution Lemma, as given by Huet cite"huet-residual-theory".
    ›

    lemma substitution_lemma:
    shows "v  ; w    Subst n v (subst w t) = subst (Subst n v w) (Subst (Suc n) v t)"
      by (metis Subst_Subst add_0)

    section "Lambda-Calculus as an RTS"

    subsection "Residuation"

    text ‹
      We now define residuation on terms.
      Residuation is an operation which, when defined for terms t› and u›,
      produces terms t \ u› and u \ t› that represent, respectively, what remains
      of the reductions of t› after performing the reductions in u›,
      and what remains of the reductions of u› after performing the reductions in t›.

      The definition ensures that, if residuation is defined for two terms, then those
      terms must be arrows that are \emph{coinitial} (\emph{i.e.}~they are the same
      after erasing marks on redexes).  The residual t \ u› then has marked redexes at
      positions corresponding to redexes that were originally marked in t› and that
      were not contracted by any of the reductions of u›.

      This definition has also benefited from the presentation in cite"huet-residual-theory".
    ›

    fun resid  (infix "\\" 70)
    where "«i» \\ «i'» = (if i = i' then «i» else )"
        | "λ[t] \\ λ[t'] = (if t \\ t' =  then  else λ[t \\ t'])"
        | "(t  u) \\ (t' u') = (if t \\ t' =   u \\ u' =  then  else (t \\ t')  (u \\ u'))"
        | "(λ[t]  u) \\ (λ[t']  u') = (if t \\ t' =   u \\ u' =  then  else subst (u \\ u') (t \\ t'))"
        | "(λ[t]  u) \\ (λ[t']  u') = (if t \\ t' =   u \\ u' =  then  else subst (u \\ u') (t \\ t'))"
        | "(λ[t]  u) \\ (λ[t']  u') = (if t \\ t' =   u \\ u' =  then  else λ[t \\ t']  (u \\ u'))"
        | "resid _  _ = "

    text ‹
      Terms t and u are \emph{consistent} if residuation is defined for them.
    ›

    abbreviation Con  (infix "" 50)
    where "Con t u  resid t u  "

    lemma ConE [elim]:
    assumes "t  t'"
    and "i. t = «i»; t' = «i»; resid t t' = «i»  T"
    and "u u'. t = λ[u]; t' = λ[u']; u  u'; t \\ t' = λ[u \\ u']  T"
    and "u v u' v'. t = u  v; t' = u'  v'; u  u'; v  v';
                      t \\ t' = (u \\ u')  (v \\ v')  T"
    and "u v u' v'. t = λ[u]  v; t' = λ[u']  v'; u  u'; v  v';
                      t \\ t' = subst (v \\ v') (u \\ u')  T"
    and "u v u' v'. t = λ[u]  v; t' = Beta u' v'; u  u'; v  v';
                      t \\ t' = subst (v \\ v') (u \\ u')  T"
    and "u v u' v'. t = λ[u]  v; t' = λ[u']  v'; u  u'; v  v';
                      t \\ t' = λ[u \\ u']  (v \\ v')  T"
    shows T
      using assms
      apply (cases t; cases t')
                     apply simp_all
           apply metis
          apply metis
         apply metis
        apply (cases "un_App1 t", simp_all)
        apply metis
       apply (cases "un_App1 t'", simp_all)
       apply metis
      by metis

    text ‹
      A term can only be consistent with another if both terms are ``arrows''.
    ›

    lemma Con_implies_Arr1:
    shows "t  u  Arr t"
    proof (induct t arbitrary: u)
      fix u v t'
      assume ind1: "u'. u  u'  Arr u"
      assume ind2: "v'. v  v'  Arr v"
      show "u  v  t'  Arr (u  v)"
        using ind1 ind2
        apply (cases t', simp_all)
         apply metis
        apply (cases u, simp_all)
        by (metis lambda.distinct(3) resid.simps(2))
      show "λ[u]  v  t'  Arr (λ[u]  v)"
        using ind1 ind2
        apply (cases t', simp_all)
         apply (cases "un_App1 t'", simp_all)
        by metis+
    qed auto

    lemma Con_implies_Arr2:
    shows "t  u  Arr u"
    proof (induct u arbitrary: t)
      fix u' v' t
      assume ind1: "u. u  u'  Arr u'"
      assume ind2: "v. v  v'  Arr v'"
      show "t  u'  v'  Arr (u'  v')"
        using ind1 ind2
        apply (cases t, simp_all)
         apply metis
        apply (cases u', simp_all)
        by (metis lambda.distinct(3) resid.simps(2))
      show "t  (λ[u']  v')  Arr (λ[u']  v')"
        using ind1 ind2
        apply (cases t, simp_all)
        apply (cases "un_App1 t", simp_all)
        by metis+
    qed auto

    lemma ConD:
    shows "t  u  t'  u'  t  t'  u  u'"
    and "λ[v]  u  λ[v']  u'  λ[v]  λ[v']  u  u'"
    and "λ[v]  u  t'  u'  λ[v]  t'  u  u'"
    and "t  u  λ[v']  u'  t  λ[v']  u  u'"
      by auto

    text ‹
      Residuation on consistent terms preserves arrows.
    ›

    lemma Arr_resid:
    shows "t  u  Arr (t \\ u)"
    proof (induct t arbitrary: u)
      fix t1 t2 u
      assume ind1: "u. t1  u  Arr (t1 \\ u)"
      assume ind2: "u. t2  u  Arr (t2 \\ u)"
      show "t1  t2  u  Arr ((t1  t2) \\ u)"
        using ind1 ind2 Arr_Subst
        apply (cases u, auto)
        apply (cases t1, auto)
        by (metis Arr.simps(3) ConD(2) resid.simps(2) resid.simps(4))
      show "λ[t1]  t2  u  Arr ((λ[t1]  t2) \\ u)"
        using ind1 ind2 Arr_Subst
        by (cases u) auto
    qed auto

    subsection "Source and Target"

    text ‹
      Here we give syntactic versions of the \emph{source} and \emph{target} of a term.
      These will later be shown to agree (on arrows) with the versions derived from the residuation.
      The underlying idea here is that a term stands for a reduction sequence in which
      all marked redexes (corresponding to instances of the constructor Beta›) are contracted
      in a bottom-up fashion.  A term without any marked redexes stands for an empty reduction
      sequence; such terms will be shown to be the identities derived from the residuation.
      The source of a term is the identity obtained by erasing all markings; that is, by replacing
      all subterms of the form Beta t u› by App (Lam t) u›.  The target of a term is the
      identity that is the result of contracting all the marked redexes.
    ›

    fun Src
    where "Src  = "
        | "Src «i» = «i»"
        | "Src λ[t] = λ[Src t]"
        | "Src (t  u) = Src t  Src u"
        | "Src (λ[t]  u) = λ[Src t]  Src u"

    fun Trg
    where "Trg «i» = «i»"
        | "Trg λ[t] = λ[Trg t]"
        | "Trg (t  u) = Trg t  Trg u"
        | "Trg (λ[t]  u) = subst (Trg u) (Trg t)"
        | "Trg _ = "

    lemma Ide_Src:
    shows "Arr t  Ide (Src t)"
      by (induct t) auto

    lemma Ide_iff_Src_self:
    assumes "Arr t"
    shows "Ide t  Src t = t"
      using assms Ide_Src
      by (induct t) auto

    lemma Arr_Src [simp]:
    assumes "Arr t"
    shows "Arr (Src t)"
      using assms Ide_Src Ide_implies_Arr by blast

    lemma Con_Src:
    shows "size t + size u  n; t  u  Src t  Src u"
      by (induct n arbitrary: t u) auto

    lemma Src_eq_iff:
    shows "Src «i» = Src «i'»  i = i'"
    and "Src (t  u) = Src (t'  u')  Src t = Src t'  Src u = Src u'"
    and "Src (λ[t]  u) = Src (λ[t']  u')  Src t = Src t'  Src u = Src u'"
    and "Src (λ[t]  u) = Src (λ[t']  u')  Src t = Src t'  Src u = Src u'"
      by auto

    lemma Src_Raise:
    shows "Src (Raise d n t) = Raise d n (Src t)"
      by (induct t arbitrary: d) auto

    lemma Src_Subst [simp]:
    shows "Arr t; Arr u  Src (Subst d t u) = Subst d (Src t) (Src u)"
      using Src_Raise
      by (induct u arbitrary: d X) auto

    lemma Ide_Trg:
    shows "Arr t  Ide (Trg t)"
      using Ide_Subst
      by (induct t) auto

    lemma Ide_iff_Trg_self:
    shows "Arr t  Ide t  Trg t = t"
      apply (induct t)
          apply auto
      by (metis Ide.simps(5) Ide_Subst Ide_Trg)+

    lemma Arr_Trg [simp]:
    assumes "Arr X"
    shows "Arr (Trg X)"
      using assms Ide_Trg Ide_implies_Arr by blast

    lemma Src_Src [simp]:
    assumes "Arr t"
    shows "Src (Src t) = Src t"
      using assms Ide_Src Ide_iff_Src_self Ide_implies_Arr by blast

    lemma Trg_Src [simp]:
    assumes "Arr t"
    shows "Trg (Src t) = Src t"
      using assms Ide_Src Ide_iff_Trg_self Ide_implies_Arr by blast

    lemma Trg_Trg [simp]:
    assumes "Arr t"
    shows "Trg (Trg t) = Trg t"
      using assms Ide_Trg Ide_iff_Trg_self Ide_implies_Arr by blast

    lemma Src_Trg [simp]:
    assumes "Arr t"
    shows "Src (Trg t) = Trg t"
      using assms Ide_Trg Ide_iff_Src_self Ide_implies_Arr by blast

    text ‹
      Two terms are syntactically \emph{coinitial} if they are arrows with the same source;
      that is, they represent two reductions from the same starting term.
    ›

    abbreviation Coinitial
    where "Coinitial t u  Arr t  Arr u  Src t = Src u"

    text ‹
      We now show that terms are consistent if and only if they are coinitial.
    ›

    lemma Coinitial_cases:
    assumes "Arr t" and "Arr t'" and "Src t = Src t'"
    shows "(t =   t' = ) 
           (x. t = «x»  t' = «x») 
           (u u'. t = λ[u]  t' = λ[u']) 
           (u v u' v'. t = u  v  t' = u'  v') 
           (u v u' v'. t = λ[u]  v  t' = λ[u']  v') 
           (u v u' v'. t = λ[u]  v  t' = λ[u']  v') 
           (u v u' v'. t = λ[u]  v  t' = λ[u']  v')"
      using assms
      by (cases t; cases t') auto

    lemma Con_implies_Coinitial_ind:
    shows "size t + size u  n; t  u  Coinitial t u"
      using Con_implies_Arr1 Con_implies_Arr2
      by (induct n arbitrary: t u) auto

    lemma Coinitial_implies_Con_ind:
    shows "size (Src t)  n; Coinitial t u  t  u"
    proof (induct n arbitrary: t u)
      show "t u. size (Src t)  0; Coinitial t u  t  u"
        by auto
      fix n t u
      assume Coinitial: "Coinitial t u"
      assume n: "size (Src t)  Suc n"
      assume ind: "t u. size (Src t)  n; Coinitial t u  t  u"
      show "t  u"
        using n ind Coinitial Coinitial_cases [of t u] Subst_not_Nil by auto
    qed

    lemma Coinitial_iff_Con:
    shows "Coinitial t u  t  u"
      using Coinitial_implies_Con_ind Con_implies_Coinitial_ind by blast

    lemma Coinitial_Raise_Raise:
    shows "Coinitial t u  Coinitial (Raise d n t) (Raise d n u)"
      using Arr_Raise Src_Raise
      apply (induct t arbitrary: d n u, auto)
      by (metis Raise.simps(3-4))

    lemma Con_sym:
    shows "t  u  u  t"
      by (metis Coinitial_iff_Con)

    lemma ConI [intro, simp]:
    assumes "Arr t" and "Arr u" and "Src t = Src u"
    shows "Con t u"
      using assms Coinitial_iff_Con by blast

    lemma Con_Arr_Src [simp]:
    assumes "Arr t"
    shows "t  Src t" and "Src t  t"
      using assms
      by (auto simp add: Ide_Src Ide_implies_Arr)

    lemma resid_Arr_self:
    shows "Arr t  t \\ t = Trg t"
      by (induct t) auto

    text ‹
      The following result is not used in the formal development that follows,
      but it requires some proof and might eventually be useful.
    ›

    lemma finite_branching:
    shows "Ide a  finite {t. Arr t  Src t = a}"
    proof (induct a)
      show "Ide   finite {t. Arr t  Src t = }"
        by simp
      fix x
      have "t. Src t = «x»  t = «x»"
        using Src.elims by blast
      thus "finite {t. Arr t  Src t = «x»}"
        by simp
      next
      fix a
      assume a: "Ide λ[a]"
      assume ind: "Ide a  finite {t. Arr t  Src t = a}"
      have "{t. Arr t  Src t = λ[a]} = Lam ` {t. Arr t  Src t = a}"
        using Coinitial_cases by fastforce
      thus "finite {t. Arr t  Src t = λ[a]}"
        using a ind by simp
      next
      fix a1 a2
      assume ind1: "Ide a1  finite {t. Arr t  Src t = a1}"
      assume ind2: "Ide a2  finite {t. Arr t  Src t = a2}"
      assume a: "Ide (λ[a1]  a2)"
      show "finite {t. Arr t  Src t = λ[a1]  a2}"
        using a ind1 ind2 by simp
      next
      fix a1 a2
      assume ind1: "Ide a1  finite {t. Arr t  Src t = a1}"
      assume ind2: "Ide a2  finite {t. Arr t  Src t = a2}"
      assume a: "Ide (a1  a2)"
      have "{t. Arr t  Src t = a1  a2} =
            ({t. is_App t}  ({t. Arr t  Src (un_App1 t) = a1  Src (un_App2 t) = a2})) 
            ({t. is_Beta t  is_Lam a1} 
             ({t. Arr t  is_Lam a1  Src (un_Beta1 t) = un_Lam a1  Src (un_Beta2 t) = a2}))"
        by fastforce
      have "{t. Arr t  Src t = a1  a2} =
            (λ(t1, t2). t1  t2) ` ({t1. Arr t1  Src t1 = a1} × {t2. Arr t2  Src t2 = a2}) 
            (λ(t1, t2). λ[t1]  t2) `
              ({t1t2. is_Lam a1} 
                 {t1. Arr t1  Src t1 = un_Lam a1} × {t2. Arr t2  Src t2 = a2})"
      proof
        show "(λ(t1, t2). t1  t2) ` ({t1. Arr t1  Src t1 = a1} × {t2. Arr t2  Src t2 = a2}) 
              (λ(t1, t2). λ[t1]  t2) `
                ({t1t2. is_Lam a1} 
                   {t1. Arr t1  Src t1 = un_Lam a1} × {t2. Arr t2  Src t2 = a2})
                 {t. Arr t  Src t = a1  a2}"
          by auto
        show "{t. Arr t  Src t = a1  a2}
                 (λ(t1, t2). t1  t2) `
                    ({t1. Arr t1  Src t1 = a1} × {t2. Arr t2  Src t2 = a2}) 
                  (λ(t1, t2). λ[t1]  t2) `
                    ({t1t2. is_Lam a1} 
                       {t1. Arr t1  Src t1 = un_Lam a1} × {t2. Arr t2  Src t2 = a2})"
        proof
          fix t
          assume t: "t  {t. Arr t  Src t = a1  a2}"
          have "is_App t  is_Beta t"
            using t by auto
          moreover have "is_App t  t  (λ(t1, t2). t1  t2) `
                                        ({t1. Arr t1  Src t1 = a1} × {t2. Arr t2  Src t2 = a2})"
            using t image_iff is_App_def by fastforce
          moreover have "is_Beta t 
                           t  (λ(t1, t2). λ[t1]  t2) `
                             ({t1t2. is_Lam a1} 
                                {t1. Arr t1  Src t1 = un_Lam a1} × {t2. Arr t2  Src t2 = a2})"
            using t is_Beta_def by fastforce
          ultimately show "t  (λ(t1, t2). t1  t2) `
                                 ({t1. Arr t1  Src t1 = a1} × {t2. Arr t2  Src t2 = a2}) 
                               (λ(t1, t2). λ[t1]  t2) `
                                 ({t1t2. is_Lam a1} 
                                    {t1. Arr t1  Src t1 = un_Lam a1} × {t2. Arr t2  Src t2 = a2})"
            by blast
        qed
      qed
      moreover have "finite ({t1. Arr t1  Src t1 = a1} × {t2. Arr t2  Src t2 = a2})"
        using a ind1 ind2 Ide.simps(4) by blast
      moreover have "is_Lam a1 
                     finite ({t1. Arr t1  Src t1 = un_Lam a1} × {t2. Arr t2  Src t2 = a2})"
      proof -
        assume a1: "is_Lam a1"
        have "Ide (un_Lam a1)"
          using a a1 is_Lam_def by force
        have "Lam ` {t1. Arr t1  Src t1 = un_Lam a1} = {t. Arr t  Src t = a1}"
        proof
          show "Lam ` {t1. Arr t1  Src t1 = un_Lam a1}  {t. Arr t  Src t = a1}"
            using a1 by fastforce
          show "{t. Arr t  Src t = a1}  Lam ` {t1. Arr t1  Src t1 = un_Lam a1}"
          proof
            fix t
            assume t: "t  {t. Arr t  Src t = a1}"
            have "is_Lam t"
              using a1 t by auto
            hence "un_Lam t  {t1. Arr t1  Src t1 = un_Lam a1}"
              using is_Lam_def t by force
            thus "t  Lam ` {t1. Arr t1  Src t1 = un_Lam a1}"
              by (metis is_Lam t lambda.collapse(2) rev_image_eqI)
          qed
        qed
        moreover have "inj Lam"
          using inj_on_def by blast
        ultimately have "finite {t1. Arr t1  Src t1 = un_Lam a1}"
          by (metis (mono_tags, lifting) Ide.simps(4) a finite_imageD ind1 injD inj_onI)
        moreover have "finite {t2. Arr t2  Src t2 = a2}"
          using Ide.simps(4) a ind2 by blast
        ultimately
        show "finite ({t1. Arr t1  Src t1 = un_Lam a1} × {t2. Arr t2  Src t2 = a2})"
          by blast
      qed
      ultimately show "finite {t. Arr t  Src t = a1  a2}"
        using a ind1 ind2 by simp
    qed

    subsection "Residuation and Substitution"

    text ‹
      We now develop a series of lemmas that involve the interaction of residuation
      and substitution.
    ›

    lemma Raise_resid:
    shows "t  u  Raise k n (t \\ u) = Raise k n t \\ Raise k n u"
    proof -
      (*
       * Note: This proof uses subterm induction because the hypothesis Con t u yields
       * cases in which App and Beta terms are compared, so that the first argument to App
       * needs to be unfolded.
       *)
      let ?P = "λ(t, u). k n. t  u  Raise k n (t \\ u) = Raise k n t \\ Raise k n u"
      have "t u.
               t' u'. ((t', u'), (t, u))  subterm_pair_rel 
                         (k n. t'  u' 
                                Raise k n (t' \\ u') = Raise k n t' \\ Raise k n u') 
               (k n. t  u  Raise k n (t \\ u) = Raise k n t \\ Raise k n u)"
        using subterm_lemmas Coinitial_iff_Con Coinitial_Raise_Raise Raise_subst by auto
      thus "t  u  Raise k n (t \\ u) = Raise k n t \\ Raise k n u"
        using wf_subterm_pair_rel wf_induct [of subterm_pair_rel ?P] by blast
    qed

    lemma Con_Raise:
    shows "t  u  Raise d n t  Raise d n u"
      by (metis Raise_not_Nil Raise_resid)

    text ‹
      The following is Huet's Commutation Theorem cite"huet-residual-theory":
      ``substitution commutes with residuation''.
    ›

    lemma resid_Subst:
    assumes "t  t'" and "u  u'"
    shows "Subst n t u \\ Subst n t' u' = Subst n (t \\ t') (u \\ u')"
    proof -
      let ?P = "λ(u, u'). n t t'. t  t'  u  u' 
                                     Subst n t u \\ Subst n t' u' = Subst n (t \\ t') (u \\ u')"
      have "u u'. w w'. ((w, w'), (u, u'))  subterm_pair_rel 
                             (n v v'. v  v'  w  w' 
                               Subst n v w \\ Subst n v' w' = Subst n (v \\ v') (w \\ w')) 
                   n t t'. t  t'  u  u' 
                              Subst n t u \\ Subst n t' u' = Subst n (t \\ t') (u \\ u')"
        using subterm_lemmas Raise_resid Subst_not_Nil Con_Raise Raise_subst substitution_lemma
        by auto
      thus ?thesis
        using assms wf_subterm_pair_rel wf_induct [of subterm_pair_rel ?P] by auto
    qed

    lemma Trg_Subst [simp]:
    shows "Arr t; Arr u  Trg (Subst d t u) = Subst d (Trg t) (Trg u)"
      by (metis Arr_Subst Arr_Trg Arr_not_Nil resid_Arr_self resid_Subst)

    lemma Src_resid:
    shows "t  u  Src (t \\ u) = Trg u"
    proof (induct u arbitrary: t, auto simp add: Arr_resid)
      fix t t1'
      show "t2'. t1. t1  t1'  Src (t1 \\ t1') = Trg t1';
                   t2. t2  t2'  Src (t2 \\ t2') = Trg t2';
                   t  t1'  t2'
                       Src (t \\ (t1'  t2')) = Trg t1'  Trg t2'"
        apply (cases t; cases t1')
                            apply auto
        by (metis Src.simps(3) lambda.distinct(3) lambda.sel(2) resid.simps(2))
    qed

    lemma Coinitial_resid_resid:
    assumes "t  v" and "u  v"
    shows "Coinitial (t \\ v) (u \\ v)"
      using assms Src_resid Arr_resid Coinitial_iff_Con by presburger

    lemma Con_implies_is_Lam_iff_is_Lam:
    assumes "t  u"
    shows "is_Lam t  is_Lam u"
      using assms by auto

    lemma Con_implies_Coinitial3:
    assumes "t \\ v  u \\ v"
    shows "Coinitial v u" and "Coinitial v t" and "Coinitial u t"
      using assms
      by (metis Coinitial_iff_Con resid.simps(7))+

    text ‹
      We can now prove L\'{e}vy's ``Cube Lemma'' cite"levy", which is the key axiom
      for a residuated transition system.
    ›

    lemma Cube:
    shows "v \\ t  u \\ t  (v \\ t) \\ (u \\ t) = (v \\ u) \\ (t \\ u)"
    proof -
      let ?P = "λ(t, u, v). v \\ t  u \\ t  (v \\ t) \\ (u \\ t) = (v \\ u) \\ (t \\ u)"
      have "t u v.
               t' u' v'.
                 ((t', u', v'), (t, u, v))  subterm_triple_rel  ?P (t', u', v') 
                   v \\ t  u \\ t  (v \\ t) \\ (u \\ t) = (v \\ u) \\ (t \\ u)"
      proof -
        fix t u v
        assume ind: "t' u' v'. 
                       ((t', u', v'), (t, u, v))  subterm_triple_rel  ?P (t', u', v')"
        show "v \\ t  u \\ t  (v \\ t) \\ (u \\ t) = (v \\ u) \\ (t \\ u)"
        proof (intro impI)
          assume con: "v \\ t  u \\ t"
          have "Con v t"
            using con by auto
          moreover have "Con u t"
            using con by auto
          ultimately show "(v \\ t) \\ (u \\ t) = (v \\ u) \\ (t \\ u)"
            using subterm_lemmas ind Coinitial_iff_Con Coinitial_resid_resid resid_Subst
            apply (elim ConE [of v t] ConE [of u t])
                                apply simp_all
                    apply metis
                   apply metis
                  apply (cases "un_App1 t"; cases "un_App1 v", simp_all)
                  apply metis
                 apply metis
                apply metis
               apply metis
              apply metis
             apply (cases "un_App1 u", simp_all)
             apply metis
            by metis
        qed
      qed
      hence "?P (t, u, v)"
        using wf_subterm_triple_rel wf_induct [of subterm_triple_rel ?P] by blast
      thus "v \\ t  u \\ t  (v \\ t) \\ (u \\ t) = (v \\ u) \\ (t \\ u)"
        by simp
    qed

    subsection "Residuation Determines an RTS"

    text ‹
      We are now in a position to verify that the residuation operation that we have defined
      satisfies the axioms for a residuated transition system, and that various notions
      which we have defined syntactically above (\emph{e.g.}~arrow, source, target) agree
      with the versions derived abstractly from residuation.
    ›

    sublocale partial_magma resid
      apply unfold_locales
      by (metis Arr.simps(1) Coinitial_iff_Con)

    lemma null_char [simp]:
    shows "null = "
      using null_def
      by (metis null_is_zero(2) resid.simps(7))

    sublocale residuation resid
      using null_char Arr_resid Coinitial_iff_Con Cube
      apply (unfold_locales, auto)
      by metis+

    (* TODO: Try to understand when notation is and is not inherited. *)
    notation resid  (infix "\\" 70)

    lemma resid_is_residuation:
    shows "residuation resid"
      ..

    lemma arr_char [iff]:
    shows "arr t  Arr t"
      using Coinitial_iff_Con arr_def con_def null_char by auto

    lemma ide_char [iff]:
    shows "ide t  Ide t"
      by (metis Ide_iff_Trg_self Ide_implies_Arr arr_char arr_resid_iff_con ide_def
          resid_Arr_self)

    lemma resid_Arr_Ide:
    shows "Ide a; Coinitial t a  t \\ a = t"
      using Ide_iff_Src_self
      by (induct t arbitrary: a, auto)

    lemma resid_Ide_Arr:
    shows "Ide a; Coinitial a t  Ide (a \\ t)"
      by (metis Coinitial_resid_resid ConI Ide_iff_Trg_self cube resid_Arr_Ide resid_Arr_self)

    lemma resid_Arr_Src [simp]:
    assumes "Arr t"
    shows "t \\ Src t = t"
      using assms Ide_Src
      by (simp add: Ide_implies_Arr resid_Arr_Ide)

    lemma resid_Src_Arr [simp]:
    assumes "Arr t"
    shows "Src t \\ t = Trg t"
      using assms
      by (metis (full_types) Con_Arr_Src(2) Con_implies_Arr1 Src_Src Src_resid cube
          resid_Arr_Src resid_Arr_self)

    sublocale rts resid
    proof
      show "a t. ide a; con t a  t \\ a = t"
        using ide_char resid_Arr_Ide
        by (metis Coinitial_iff_Con con_def null_char)
      show "t. arr t  ide (trg t)"
        by (simp add: Ide_Trg resid_Arr_self trg_def)
      show "a t. ide a; con a t  ide (resid a t)"
        using ide_char null_char resid_Ide_Arr Coinitial_iff_Con con_def by force
      show "t u. con t u  a. ide a  con a t  con a u"
        by (metis Coinitial_iff_Con Ide_Src Ide_iff_Src_self Ide_implies_Arr con_def
            ide_char null_char)
      show "t u v. ide (resid t u); con u v  con (resid t u) (resid v u)"
        by (metis Coinitial_resid_resid ide_char not_arr_null null_char resid_Ide_Arr
                  con_def con_sym ide_implies_arr)
    qed

    lemma is_rts:
    shows "rts resid"
      ..

    lemma sources_charΛ:
    shows "sources t = (if Arr t then {Src t} else {})"
    proof (cases "Arr t")
      show "¬ Arr t  ?thesis"
        using arr_char arr_iff_has_source by auto
      assume t: "Arr t"
      have 1: "{Src t}  sources t"
        using t Ide_Src by force
      moreover have "sources t  {Src t}"
        by (metis Coinitial_iff_Con Ide_iff_Src_self ide_char in_sourcesE null_char
                  con_def singleton_iff subsetI)
      ultimately show ?thesis
        using t by auto
    qed

    lemma sources_simp [simp]:
    assumes "Arr t"
    shows "sources t = {Src t}"
      using assms sources_charΛ by auto

    lemma sources_simps [simp]:
    shows "sources  = {}"
    and "sources «x» = {«x»}"
    and "arr t  sources λ[t] = {λ[Src t]}"
    and "arr t; arr u  sources (t  u) = {Src t  Src u}"
    and "arr t; arr u  sources (λ[t]  u) = {λ[Src t]  Src u}"
      using sources_charΛ by auto

    lemma targets_charΛ:
    shows "targets t = (if Arr t then {Trg t} else {})"
    proof (cases "Arr t")
      show "¬ Arr t  ?thesis"
        by (meson arr_char arr_iff_has_target)
      assume t: "Arr t"
      have 1: "{Trg t}  targets t"
        using t resid_Arr_self trg_def trg_in_targets by force
      moreover have "targets t  {Trg t}"
        by (metis 1 Ide_iff_Src_self arr_char ide_char ide_implies_arr
            in_targetsE insert_subset prfx_implies_con resid_Arr_self
            sources_resid sources_simp t)
      ultimately show ?thesis
        using t by auto
    qed

    lemma targets_simp [simp]:
    assumes "Arr t"
    shows "targets t = {Trg t}"
      using assms targets_charΛ by auto

    lemma targets_simps [simp]:
    shows "targets  = {}"
    and "targets «x» = {«x»}"
    and "arr t  targets λ[t] = {λ[Trg t]}"
    and "arr t; arr u  targets (t  u) = {Trg t  Trg u}"
    and "arr t; arr u  targets (λ[t]  u) = {subst (Trg u) (Trg t)}"
      using targets_charΛ by auto

    lemma seq_char:
    shows "seq t u  Arr t  Arr u  Trg t = Src u"
      using seq_def arr_char sources_charΛ targets_charΛ by force

    lemma seqIΛ [intro, simp]:
    assumes "Arr t" and "Arr u" and "Trg t = Src u"
    shows "seq t u"
      using assms seq_char by simp

    lemma seqEΛ [elim]:
    assumes "seq t u"
    and "Arr t; Arr u; Trg t = Src u  T"
    shows T
      using assms seq_char by blast

    text ‹
      The following classifies the ways that transitions can be sequential.  It is useful
      for later proofs by case analysis.
    ›

    lemma seq_cases:
    assumes "seq t u"
    shows "(is_Var t  is_Var u) 
           (is_Lam t  is_Lam u) 
           (is_App t  is_App u) 
           (is_App t  is_Beta u  is_Lam (un_App1 t)) 
           (is_App t  is_Beta u  is_Beta (un_App1 t)) 
           is_Beta t"
      using assms seq_char
      by (cases t; cases u) auto

    sublocale confluent_rts resid
      by (unfold_locales) fastforce

    lemma is_confluent_rts:
    shows "confluent_rts resid"
      ..

    lemma con_char [iff]:
    shows "con t u  Con t u"
      by fastforce

    lemma coinitial_char [iff]:
    shows "coinitial t u  Coinitial t u"
      by fastforce

    lemma sources_Raise:
    assumes "Arr t"
    shows "sources (Raise d n t) = {Raise d n (Src t)}"
      using assms
      by (simp add: Coinitial_Raise_Raise Src_Raise)

    lemma targets_Raise:
    assumes "Arr t"
    shows "targets (Raise d n t) = {Raise d n (Trg t)}"
      using assms
      by (metis Arr_Raise ConI Raise_resid resid_Arr_self targets_charΛ)

    lemma sources_subst [simp]:
    assumes "Arr t" and "Arr u"
    shows "sources (subst t u) = {subst (Src t) (Src u)}"
      using assms sources_charΛ Arr_Subst arr_char by simp

    lemma targets_subst [simp]:
    assumes "Arr t" and "Arr u"
    shows "targets (subst t u) = {subst (Trg t) (Trg u)}"
      using assms targets_charΛ Arr_Subst arr_char by simp

    notation prfx  (infix "" 50)
    notation cong  (infix "" 50)

    lemma prfx_char [iff]:
    shows "t  u  Ide (t \\ u)"
      using ide_char by simp

    lemma prfx_Var_iff:
    shows "u  «i»  u = «i»"
      by (metis Arr.simps(2) Coinitial_iff_Con Ide.simps(1) Ide_iff_Src_self Src.simps(2)
          ide_char resid_Arr_Ide)

    lemma prfx_Lam_iff:
    shows "u  Lam t  is_Lam u  un_Lam u  t"
      using ide_char Arr_not_Nil Con_implies_is_Lam_iff_is_Lam Ide_implies_Arr is_Lam_def
      by fastforce

    lemma prfx_App_iff:
    shows "u  t1  t2  is_App u  un_App1 u  t1  un_App2 u  t2"
      using ide_char
      by (cases u; cases t1) auto

    lemma prfx_Beta_iff:
    shows "u  λ[t1]  t2  
           (is_App u  un_App1 u  λ[t1]  un_App2 u  t2 
             (0  FV (un_Lam (un_App1 u) \\ t1)  un_App2 u  t2)) 
           (is_Beta u  un_Beta1 u  t1  un_Beta2 u  t2 
             (0  FV (un_Beta1 u \\ t1)  un_Beta2 u  t2))"
      using ide_char Ide_Subst_iff
      by (cases u; cases "un_App1 u") auto

    lemma cong_Ide_are_eq:
    assumes "t  u" and "Ide t" and "Ide u"
    shows "t = u"
      using assms
      by (metis Coinitial_iff_Con Ide_iff_Src_self con_char prfx_implies_con)

    lemma eq_Ide_are_cong:
    assumes "t = u" and "Ide t"
    shows "t  u"
      using assms Ide_implies_Arr resid_Ide_Arr by blast

    sublocale weakly_extensional_rts resid
      apply unfold_locales
      by (metis Coinitial_iff_Con Ide_iff_Src_self Ide_implies_Arr ide_char ide_def)

    lemma is_weakly_extensional_rts:
    shows "weakly_extensional_rts resid"
      ..

    lemma src_char [simp]:
    shows "src t = (if Arr t then Src t else )"
      using src_def by force

    lemma trg_char [simp]:
    shows "trg t = (if Arr t then Trg t else )"
      by (metis Coinitial_iff_Con resid_Arr_self trg_def)

    text ‹
      We ``almost'' have an extensional RTS.
      The case that fails is λ[t1]  t2 ∼ u ⟹ λ[t1]  t2 = u›.
      This is because t1› might ignore its argument, so that subst t2 t1 = subst t2' t1›,
      with both sides being identities, even if t2 ≠ t2'›.

      The following gives a concrete example of such a situation.
    ›

    abbreviation non_extensional_ex1
    where "non_extensional_ex1  λ[λ[«0»]  λ[«0»]]  (λ[«0»]  λ[«0»])"

    abbreviation non_extensional_ex2
    where "non_extensional_ex2  λ[λ[«0»]  λ[«0»]]  (λ[«0»]  λ[«0»])"

    lemma non_extensional:
    shows "λ[«1»]  non_extensional_ex1  λ[«1»]  non_extensional_ex2"
    and "λ[«1»]   non_extensional_ex1  λ[«1»]  non_extensional_ex2"
      by auto

    text ‹
      The following gives an example of two terms that are both coinitial and coterminal,
      but which are not congruent.
    ›

    abbreviation cong_nontrivial_ex1
    where "cong_nontrivial_ex1 
           λ[«0»  «0»]  λ[«0»  «0»]  (λ[«0»  «0»]  λ[«0»  «0»])"

    abbreviation cong_nontrivial_ex2
    where "cong_nontrivial_ex2 
           λ[«0»  «0»]  λ[«0»  «0»]  (λ[«0»  «0»]  λ[«0»  «0»])"

    lemma cong_nontrivial:
    shows "coinitial cong_nontrivial_ex1 cong_nontrivial_ex2"
    and "coterminal cong_nontrivial_ex1 cong_nontrivial_ex2"
    and "¬ cong cong_nontrivial_ex1 cong_nontrivial_ex2"
      by auto

    text ‹
      Any two coinitial transitions have a join, obtained structurally by taking the union of their sets
      of marked redexes.
    ›

    fun Join  (infix "" 52)
    where "«x»  «x'» = (if x = x' then «x» else )"
        | "λ[t]  λ[t'] = λ[t  t']"
        | "λ[t]  u  λ[t']  u' = λ[(t  t')]  (u  u')"
        | "λ[t]  u  λ[t']  u' = λ[(t  t'