Theory UserGuide

(*  Author:      Norbert Schirmer
    Maintainer:  Norbert Schirmer, norbert.schirmer at web de

Copyright (C) 2004-2008 Norbert Schirmer
Copyright (c) 2022 Apple Inc. All rights reserved.
*)

section ‹User Guide \label{sec:UserGuide}›
(*<*)
theory UserGuide
imports HeapList Vcg
  "HOL-Statespace.StateSpaceSyntax" "HOL-Library.LaTeXsugar"
begin
(*>*)

(*<*)
syntax
 "_statespace_updates" :: "('a  'b)  updbinds  ('a  'b)" ("__" [900,0] 900)
(*>*)



text ‹
We introduce the verification environment with a couple
of examples that illustrate how to use the different
bits and pieces to verify programs.
›


subsection ‹Basics›

text ‹

First of all we have to decide how to represent the state space. There
are currently two implementations. One is based on records, the other
on the concept called `statespace' that was introduced with
Isabelle 2007 (see \texttt{HOL/Statespace}). In contrast to records, a
`statespace' does not define a new type, but provides a notion of state
based on locales. Logically
the state is modelled as a function from (abstract) names to
(abstract) values, and the statespace infrastructure organises
distinctness of names and projection/injection of concrete values into
the abstract ones. Towards the user the interface of records and
statespaces is quite similar. However, statespaces offer more
flexibility, inherited from the locale infrastructure, in
particular multiple inheritance and renaming of components.

In this user guide we prefer statespaces, but give some comments on
the usage of records in Section \ref{sec:records}.


›

hoarestate vars =
  A :: nat
  I :: nat
  M :: nat
  N :: nat
  R :: nat
  S :: nat

text (in vars) ‹The command \isacommand{hoarestate} is a simple preprocessor
for the command \isacommand{statespace} which decorates the state
components with the suffix ‹_'›, to avoid cluttering the
namespace. Also note that underscores are printed as hyphens in this
documentation. So what you see as @{term "A_'"} in this document is
actually \texttt{A\_'}.  Every component name becomes a fixed variable in
the locale vars› and can no longer be used for logical
variables.

Lookup of a component @{term "A_'"} in a state @{term "s"} is written as
@{term "s⋅A_'"}, and update with a value @{term "v"} as @{term "s⟨A_' := v⟩"}.

To deal with local and global variables in the context of procedures the
program state is organised as a record containing the two components @{const "locals"}
and @{const "globals"}. The variables defined in hoarestate ‹vars› reside
in the @{const "locals"} part.

›
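
text (in vars) ‹
As a small illustration of this notation (a sketch only, not needed in the
following proofs): ‹s⟨A_' := 2⟩› denotes the state ‹s› with component ‹A_'›
updated to ‹2›, and ‹(s⟨A_' := 2⟩)⋅A_'› looks this component up again, so one
expects ‹(s⟨A_' := 2⟩)⋅A_' = 2› to hold by the statespace simplification rules.
›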

text ‹
  Here is a first example.
›

lemma (in vars) "Γ ´N = 5 ´N :== 2 * ´N ´N = 10"
  apply vcg
  txt @{subgoals}
  apply simp
  txt @{subgoals}
  done

text ‹We enable the locale of statespace vars› by the
\texttt{in vars} directive.  The verification condition generator is
invoked via the vcg› method and leaves us with the expected
subgoal that can be proved by simplification.›

text (in vars) ‹
 If we refer to components (variables) of the state-space of the program
 we always mark these with ´› (in assertions and also in the
 program itself). It is the acute-symbol and is present on
 most keyboards. The assertions of the Hoare tuple are
 ordinary Isabelle sets. As we usually want to refer to the state space
 in the assertions, we provide special brackets for them. They can be written
 as {\verb+{| |}+} in ASCII or ⦃ ⦄› with symbols. Internally,
 marking variables has two effects. First of all we refer to the implicit
 state and secondary we get rid of the suffix _'›.
 So the assertion @{term "{|´N = 5|}"} internally gets expanded to
 {s. locals s ⋅N_' = 5}› written in ordinary set comprehension notation of
 Isabelle. It describes the set of states where the N_'› component
 is equal to 5›.
 An empty context and an empty postcondition for abrupt termination can be
 omitted. The lemma above is a shorthand for
  Γ,{}⊢ ⦃´N = 5⦄ ´N :== 2 * ´N ⦃´N = 10⦄,{}›.
›

text ‹We can step through verification condition generation by the
method vcg_step›.
›

lemma (in vars) "Γ,{} ´N = 5 ´N :== 2 * ´N ´N = 10"
  apply vcg_step
  txt @{subgoals}
  txt ‹The last step of verification condition generation,
         transforms the inclusion of state sets to the corresponding
         predicate on components of the state space.
›
  apply vcg_step
  txt ‹@{subgoals}›
  by simp

text ‹
Although our assertions work semantically on the state space, stepping
through verification condition generation ``feels'' like the expected
syntactic substitutions of traditional Hoare logic. This is achieved
by light simplification on the assertions calculated by the Hoare rules.
›

lemma (in vars) "Γ ´N = 5 ´N :== 2 * ´N ´N = 10"
  apply (rule HoarePartial.Basic)
  txt @{subgoals}
  apply (simp only: mem_Collect_eq)
  txt @{subgoals}
  apply (tactic
    Hoare.BasicSimpTac @{context} Hoare.Function false
       [] (K all_tac) 1)
  txt @{subgoals}
  by simp


text ‹The next example shows how we deal with the while loop. Note the
invariant annotation.
›

lemma (in vars)
  "Γ,{} ´M = 0  ´S = 0
          WHILE ´M  a
          INV ´S = ´M * b
          DO ´S :== ´S + b;; ´M :== ´M + 1 OD
          ´S = a * b"
  apply vcg
  txt @{subgoals [display]}
  txt ‹The verification condition generator gives us three proof obligations,
         stemming from the path from the precondition to the invariant,
         from the invariant together with loop condition through the
         loop body to the invariant, and finally from the invariant together
         with the negated loop condition to the postcondition.›
  apply auto
  done

subsection ‹Procedures›

subsubsection ‹Declaration›

text ‹
Our first procedure is a simple square procedure. We provide the
command \isacommand{procedures} to declare and define a
procedure.
›

procedures
  Square (N::nat|R::nat)
  where I::nat in
  "´R :== ´N * ´N"



text ‹A procedure is given by the signature of the procedure
followed by the procedure body.  The signature consists of the name of
the procedure and a list of parameters together with their types. The
parameters in front of the pipe ‹|› are value parameters and those
behind the pipe are result parameters. Value parameters model call-by-value
semantics. The value of a result parameter at the end of the
procedure is passed back to the caller. Local variables follow the
‹where›. If there are no local variables the ‹where …
in› can be omitted. The variable @{term "I"} is actually unused in
the body, but is used in the examples below.›
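
text ‹
For illustration, a declaration with two value parameters, one result parameter
and no local variables could look as follows (a hypothetical sketch that is not
used in the remainder of this guide):

\isacommand{procedures} ‹Add (A::nat, B::nat | R::nat)› ‹"´R :== ´A + ´B"›
›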


text ‹
The procedures command provides convenient syntax
for procedure calls (that creates the proper @{term init}, @{term return} and
@{term result} functions on the fly) and creates locales and statespaces to
reason about the procedure. The purpose of locales is to set up logical contexts
to support modular reasoning. Locales can be seen as freeze-dried proof contexts that
come alive as you set up a new lemma or theorem (\cite{Ballarin-04-locales}).
The locale the user deals with is named ‹Square_impl›.
 It defines the procedure name (internally   @{term "Square_'proc"}), the procedure body
(named Square_body›) and the statespaces for parameters and local and
global variables.
Moreover it contains the
assumption @{term "Γ Square_'proc = Some Square_body"}, which states
that the procedure is properly defined in the procedure context.

The purpose of the locale is to give us easy means to set up the context
in which we prove programs correct.
In this locale the procedure context @{term "Γ"} is fixed.
So we always use this letter for the procedure
specification. This is crucial, if we prove programs under the
assumption of some procedure specifications.
›

(*<*)
context Square_impl
begin
(*>*)
text ‹The procedures command generates syntax, so that we can
either write CALL Square(´I,´R)› or @{term "´I :== CALL
Square(´R)"} for the procedure call. The internal term is the
following:
›

(*<*) declare [[hoare_use_call_tr' = false]] (*>*)
text ‹\small @{term [display] "CALL Square(´I,´R)"}›
(*<*) declare [[hoare_use_call_tr' = true]] (*>*)

text ‹Note the
        additional decoration (with the procedure name) of the parameter and
         local variable names.›
(*<*)
end
(*>*)

text ‹The abstract syntax for the
procedure call is @{term "call init p return result"}.  The @{term
"init"} function copies the values of the actual parameters to the
formal parameters, the @{term return} function copies the global
variables back (in our case there are no global variables), and the
@{term "result"} function additionally copies the values of the formal
result parameters to the actual locations. Actual value parameters can
be all kinds of expressions, since we only need their value. But result
parameters must be proper ``lvalues'': variables (including
dereferenced pointers) or array locations, since we have to assign
values to them.
›

subsubsection ‹Verification›

text (in Square_impl) ‹
A procedure specification is an ordinary Hoare tuple.
We use the parameterless
call for the specification; ´R :== PROC Square(´N)› is syntactic sugar
for Call Square_'proc›. This emphasises that the specification
describes the internal behaviour of the procedure, whereas parameter passing
corresponds to the procedure call.
The following precondition fixes the current value ´N› to the logical
variable @{term n}.
Universal quantification of @{term "n"} enables us to adapt
the specification to an actual parameter. The specification is
used in the rule for procedure call when we come upon a call to @{term Square}.
Thus @{term "n"} plays the role of the auxiliary variable @{term "Z"}.
›


text ‹To verify the procedure we need to verify the body. We use
a derived variant of the general recursion rule, tailored for non-recursive procedures:
@{thm [source] HoarePartial.ProcNoRec1}:
\begin{center}
@{thm [mode=Rule,mode=ParenStmt] HoarePartial.ProcNoRec1 [no_vars]}
\end{center}
The naming convention for the rule
is the following: The ‹1› expresses that we look at one
procedure, and ‹NoRec› that the procedure is
non-recursive.
›


lemma (in Square_impl)
shows "n. Γ´N = n  ´R :== PROC Square(´N) ´R = n * n"
txt ‹The directive ‹in› has the effect that
the context of the locale @{term "Square_impl"} is included in the current
lemma, and that the lemma is added as a fact to the locale, after it is proven. The
next time locale @{term "Square_impl"} is invoked this lemma is immediately available
as a fact, which the verification condition generator can use.
›
apply (hoare_rule HoarePartial.ProcNoRec1)
 txt "@{subgoals[display]}"
 txt ‹The method ‹hoare_rule›, like ‹rule›, applies a
     single rule, but additionally does some ``obvious'' steps:
     It solves the canonical side-conditions of various Hoare-rules and it
     automatically expands the
     procedure body: With @{thm [source] Square_impl}:  @{thm [names_short] Square_impl [no_vars]} we
     get the procedure body out of the procedure context @{term "Γ"};
     with @{thm [source] Square_body_def}: @{thm [names_short] Square_body_def [no_vars]} we
     can unfold the definition of the body.

     The proof is finished by the vcg and simp.
›
txt "@{subgoals[display]}"
by vcg simp

text ‹If the procedure is non-recursive and there is no specification given, the
verification condition generator automatically expands the body.›

lemma (in Square_impl) Square_spec:
shows "n. Γ´N = n  ´R :== PROC Square(´N) ´R = n * n"
  by vcg simp

text ‹An important naming convention is to name the specification as
<procedure-name>_spec›. The verification condition generator refers to
this name in order to search for a specification in the theorem database.
›

subsubsection ‹Usage›


text‹Let us see how we can use procedure specifications.›
(* fixme: maybe don't show this at all *)
lemma (in Square_impl)
  shows "Γ´I = 2 ´R :== CALL Square(´I) ´R = 4"
  txt ‹Remember that we have already proven @{thm [source] "Square_spec"} in the locale
  Square_impl›. This is crucial for
  verification condition generation. When reaching a procedure call,
  it looks for the specification (by its name) and applies the
  rule @{thm [source,mode=ParenStmt] HoarePartial.ProcSpec}
instantiated with the specification
  (as last premise).
  Before we apply the verification condition generator, let us
  take some time to think of what we can expect.
  Let's look at the specification @{thm [source] Square_spec} again:
  @{thm [display] Square_spec [no_vars]}
  The specification talks about the formal parameters @{term "N"} and
  @{term R}. The precondition @{term "´N = n"} just fixes the initial
  value of N›.
  The actual parameters are @{term "I"} and @{term "R"}. We
  have to adapt the specification to this calling context.
  @{term "n. Γ ´I = n ´R :== CALL Square(´I) ´R = n * n"}.
  From the postcondition @{term "´R = n * n"} we
  have to derive the actual postcondition @{term "´R = 4"}. So
  we gain something like: @{term "n * n = (4::nat)"}.
  The precondition is @{term "´I = 2"} and the specification
  tells us @{term "´I = n"} for the pre-state. So the value of @{term n}
  is the value of @{term I} in the pre-state. So we arrive at
  @{term "´I = 2  ´I * ´I = 4"}.
›
  apply vcg_step
  txt "@{subgoals[display]}"
  txt ‹
  The second set looks slightly more involved:
    @{term "∀t. ⇗t⇖R = ´I * ´I ⟶ ´I * ´I = 4"}; this is an artefact of the
  procedure call rule. Originally ‹´I * ´I = 4› was ‹⇗t⇖R = 4›. Here
  @{term "t"} denotes the final state of the procedure and the superscript notation
  allows us to select a component from a particular state.
›
  apply vcg_step
  txt "@{subgoals[display]}"
  by simp

text ‹
The adaptation of the procedure specification to the actual calling
context is done due to the @{term init}, @{term return} and @{term result} functions
in the rule @{thm [source] HoarePartial.ProcSpec} (or in the variant
@{thm [source] HoarePartial.ProcSpecNoAbrupt} which already
incorporates the fact that the postcondition for abrupt termination
is the empty set). For the readers interested in the internals,
here is a version without vcg.
›
lemma (in Square_impl)
  shows "Γ´I = 2 ´R :== CALL Square(´I) ´R = 4"
  apply (rule HoarePartial.ProcSpecNoAbrupt [OF _ _ Square_spec])
  txt "@{subgoals[display]}"
  txt ‹This is the raw verification condition.
         It is interesting to see how the auxiliary variable @{term "Z"} is
         actually used. It is unified with @{term n} of the specification and
         fixes the state after parameter passing.
›
  apply simp
  txt "@{subgoals[display]}"
  prefer 2
  apply vcg_step
  txt "@{subgoals[display]}"
  apply (auto intro: ext)
  done



subsubsection ‹Recursion›

text ‹We want to define a procedure for the factorial. We first
define a HOL function that calculates it, to specify the procedure later on.
›

primrec fac:: "nat  nat"
where
"fac 0 = 1" |
"fac (Suc n) = (Suc n) * fac n"

(*<*)
lemma fac_simp [simp]: "0 < i ⟹ fac i = i * fac (i - 1)"
  by (cases i) simp_all
(*>*)

text ‹Now we define the procedure.›
procedures
  Fac (N::nat | R::nat)
  "IF ´N = 0 THEN ´R :== 1
   ELSE ´R :== CALL Fac(´N - 1);;
        ´R :== ´N * ´R
   FI"

text ‹
Now let us prove that our implementation of @{term "Fac"} meets its specification.
›

lemma (in Fac_impl)
shows "n. Γ ´N = n ´R :== PROC Fac(´N) ´R = fac n"
apply (hoare_rule HoarePartial.ProcRec1)
txt "@{subgoals[display]}"
apply vcg
txt "@{subgoals[display]}"
apply simp
done

text ‹
Since the factorial is implemented recursively,
the main ingredient of this proof is to assume that the specification holds for
the recursive call of @{term Fac} and prove the body correct.
The assumption for recursive calls is added to the context by
the rule @{thm [source] HoarePartial.ProcRec1}
(also derived from the general rule for mutually recursive procedures):
\begin{center}
@{thm [mode=Rule,mode=ParenStmt] HoarePartial.ProcRec1 [no_vars]}
\end{center}
The verification condition generator infers the specification out of the
context @{term "Θ"} when it encounters a recursive call of the factorial.
›

subsection ‹Global Variables and Heap \label{sec:VcgHeap}›

text ‹
Now we define and verify some procedures on heap-lists. We consider
list structures consisting of two fields, a content element @{term "cont"} and
a reference to the next list element @{term "next"}. We model this by the
following state space where every field has its own heap.
›

hoarestate globals_heap =
  "next" :: "ref  ref"
  cont :: "ref  nat"

text ‹It is mandatory to start the state name with `globals'. This is exploited
by the syntax translations to store the components in the @{const globals} part
of the state.
›

text ‹Updates to global components inside a procedure are
always propagated to the caller. This is implicitly done by the
parameter passing syntax translations.
›

text ‹We first define an append function on lists. It takes two
references as parameters. It appends the list referred to by the second
parameter to the list referred to by the first parameter. The statespace
of the global variables has to be imported.
›

procedures (imports globals_heap)
  append(p :: ref, q::ref | p::ref)
    "IF ´p=Null THEN ´p :== ´q
     ELSE ´p→´next :== CALL append(´p→´next,´q) FI"

(*<*)
context append_impl
begin
(*>*)
text ‹
The difference between a global and a local variable is that global
variables are automatically copied back to the procedure caller.
We can study this effect on the translation of @{term "´p :== CALL append(´p,´q)"}:
›
(*<*)
declare [[hoare_use_call_tr' = false]]
(*>*)
text ‹@{term [display] "´p :== CALL append(´p,´q)"}›
(*<*)
declare [[hoare_use_call_tr' = true]]
end
(*>*)

text ‹Below we give two specifications this time.
One captures the functional behaviour and focuses on the
entities that are potentially modified by the procedure; the second one
is a pure frame condition.
›



text ‹
The functional specification below introduces two logical variables besides the
state space variable @{term "σ"}, namely @{term "Ps"} and @{term "Qs"}.
They are universally quantified and range over both the pre-and the postcondition, so
that we are able to properly instantiate the specification
during the proofs. The syntax ⦃σ. …⦄› is a shorthand to fix the current
state: {s. σ = s …}›. Moreover σx› abbreviates
the lookup of variable x› in the state
σ›.

The approach to specify procedures on lists
basically follows \cite{MehtaN-CADE03}. From the pointer structure
in the heap we (relationally) abstract to HOL lists of references. Then
we can specify further properties on the level of HOL lists, rather then
on the heap. The basic abstractions are:

@{thm [display] Path.simps [no_vars]}

@{term [show_types] "Path x h y ps"}: @{term ps} is a list of references that we can obtain
out of the heap @{term h} by starting with the reference @{term x}, following
the references in @{term h} up to the reference @{term y}.


@{thm [display] List_def [no_vars]}

A list @{term "List p h ps"} is a path starting in @{term p} and ending up
in @{term Null}.
›


lemma (in append_impl) append_spec1:
shows "σ Ps Qs.
  Γ σ. List ´p ´next Ps   List ´q ´next Qs  set Ps  set Qs = {}
       ´p :== PROC append(´p,´q)
     List ´p ´next (Ps@Qs)  (x. xset Ps  ´next x = σnext x)"
apply (hoare_rule HoarePartial.ProcRec1)
txt ‹@{subgoals [margin=80,display]}
Note that @{term "hoare_rule"} takes care of multiple auxiliary variables!
@{thm [source] HoarePartial.ProcRec1} has only one auxiliary variable, namely @{term Z}.
But the type of @{term Z} can be instantiated arbitrarily. So hoare_rule›
instantiates @{term Z} with the tuple @{term "(σ,Ps,Qs)"} and derives a proper variant
of the rule. Therefore hoare_rule› depends on the proper quantification of
auxiliary variables!
›
apply vcg
txt ‹@{subgoals [display]}
For each branch of the IF› statement we have one conjunct to prove. The
THEN› branch starts with p = Null ⟶ …› and the ELSE› branch
with p ≠ Null ⟶ …›. Let us focus on the ELSE› branch, were the
recursive call to append occurs. First of all we have to prove that the precondition for
the recursive call is fulfilled. That means we have to provide some witnesses for
the lists @{term Psa} and @{term Qsa} which are referenced by p→next› (now
written as @{term "next p"}) and @{term q}. Then we have to show that we can
derive the overall postcondition from the postcondition of the recursive call. The
state components that have changed by the recursive call are the ones with the suffix
a›, like nexta› and pa›.
›
apply fastforce
done


text ‹If the verification condition generator works on a procedure
call it checks whether it can find a modifies clause in the
context. If one is present the procedure call is simplified before the
Hoare rule @{thm [source] HoarePartial.ProcSpec} is
applied. Simplification of the procedure call means that the ``copy
back'' of the global components is simplified. Only those components
that occur in the modifies clause are actually copied back.  This
simplification is justified by the rule @{thm [source]
HoarePartial.ProcModifyReturn}.
So after this simplification all global
components that do not appear in the modifies clause are treated
as local variables.›

text ‹We study the effect of the modifies clause on the following
examples, where we want to prove that @{term "append"} does not change
the @{term "cont"} part of the heap.
›
lemma (in append_impl)
shows "Γ ´cont=c ´p :== CALL append(Null,Null) ´cont=c"
proof -
  note append_spec = append_spec1
  show ?thesis
    apply vcg
    txt ‹@{subgoals [display]}›
    txt ‹Only focus on the very last line: @{term conta} is the heap component
      after the procedure call,
      and @{term cont} the heap component before the procedure call. Since
      we have not added the modifies clause we do not know that they have
      to be equal.
›
    oops

text ‹
We now add the frame condition.
The list in the modifies clause names all global state components that
may be changed by the procedure. Note that we know from the modifies clause
that the @{term cont} parts are not changed. Also a small
side note on the syntax. We use ordinary brackets in the postcondition
of the modifies clause, and also the state components do not carry the
acute, because we explicitly note the state @{term t} here.
›


lemma (in append_impl) append_modifies:
  shows "σ. Γ⊢⇘/UNIV{σ} ´p :== PROC append(´p,´q)
             {t. t may_only_modify_globals σ in [next]}"
  apply (hoare_rule HoarePartial.ProcRec1)
  apply (vcg spec=modifies)
  done

text ‹We tell the verification condition generator to use only the
modifies clauses and not to search for functional specifications by
the parameter spec=modifies›. It also tries to solve the
verification conditions automatically. Again it is crucial to name
the lemma with this naming scheme, since the verification condition
generator searches for these names.
›

text ‹The modifies clause is equal to a state update specification
of the following form.
›

lemma (in append_impl) shows "{t. t may_only_modify_globals Z in [next]}
       =
       {t. ∃next. globals t=update id id next_' (K_statefun next) (globals Z)}"
  apply (unfold mex_def meq_def)
  apply simp
  done

text ‹Now that we have proven the frame-condition, it is available within
the locale append_impl› and the vcg› exploits it.›

lemma (in append_impl)
shows "Γ ´cont=c ´p :== CALL append(Null,Null) ´cont=c"
proof -
  note append_spec = append_spec1
  show ?thesis
    apply vcg
    txt ‹@{subgoals [display]}›
    txt ‹With a modifies clause present we know that no change to @{term cont}
      has occurred.
›
    by simp
qed


text ‹
Of course we could add the modifies clause to the functional specification as
well. But separating both has the advantage that we split up the verification
work. We can make use of the modifies clause before we apply the
functional specification in a fully automatic fashion.
›


text ‹
To prove that a procedure respects the modifies clause, we only need
the modifies clauses of the procedures called in the body. We do not need
the functional specifications. So we can always prove the modifies
clause without functional specifications, but we may need the modifies
clause to prove the functional specifications. So usually the modifies clause is
proved before the proof of the functional specification, so that it can already be used
by the verification condition generator.
›



subsection ‹Total Correctness›

text ‹When proving total correctness the additional proof burden to
the user is to come up with a well-founded relation and to prove that
certain states get smaller according to this relation. Proving that a
relation is well-founded can be quite hard. But fortunately there are
ways to construct and stick together relations so that they are
well-founded by construction. This infrastructure is already present
in Isabelle/HOL.  For example, @{term "measure f"} is always well-founded;
the lexicographic product of two well-founded relations is again
well-founded and the inverse image construction @{term "inv_image"} of
a well-founded relation is again well-founded. The constructions are
best explained by some equations:

@{thm in_measure_iff [no_vars]}\\
@{thm in_lex_iff [no_vars]}\\
@{thm in_inv_image_iff [no_vars]}

Another useful construction is <*mlex*>› which is a combination
of a measure and a lexicographic product:

@{thm in_mlex_iff [no_vars]}\\
In contrast to the lexicographic product it does not construct a product type.
The state may either decrease according to the measure function @{term f} or the
measure stays the same and the state decreases because of the relation @{term r}.
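
As a quick sanity check of these constructions (a purely illustrative
calculation): ‹((1, 7), (2, 3)) ∈ measure fst› holds because ‹1 < 2›, and
‹((2, 3), (2, 7)) ∈ fst <*mlex*> measure snd› holds because the first
components are equal and ‹3 < 7›.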

Let's look at a loop:
›

lemma (in vars)
  "Γt ´M = 0  ´S = 0
       WHILE ´M  a
       INV ´S = ´M * b  ´M  a
       VAR MEASURE a - ´M
       DO ´S :== ´S + b;; ´M :== ´M + 1 OD
       ´S = a * b"
apply vcg
txt ‹@{subgoals [display]}
The first conjunct of the second subgoal is the proof obligation that the
variant decreases in the loop body.
›
by auto



text ‹The variant annotation is preceded by ‹VAR›. The capital ‹MEASURE›
is a shorthand for ‹measure (λs. a - ⇗s⇖M)›. Analogously there is a capital
‹<*MLEX*>›.
›

lemma (in Fac_impl) Fac_spec':
shows "σ. Γt {σ} ´R :==  PROC Fac(´N) ´R = fac σN"
apply (hoare_rule HoareTotal.ProcRec1 [where r="measure (λ(s,p). sN)"])
txt ‹In case of the factorial the parameter @{term N} decreases in every call. This
is easily expressed by the measure function. Note that the well-founded relation for
recursive procedures is formally defined on tuples
containing the state space and the procedure name.
›
txt ‹@{subgoals [display]}
The initial call to the factorial is in state @{term "σ"}. Note that in the
precondition @{term "{σ} ∩ {σ'}"}, @{term "σ'"} stems from the lemma we want to prove
and @{term "σ"} stems from the recursion rule for total correctness. Both are
synonyms for the initial state. To use the assumption in the Hoare context we
have to show that the call to the factorial is invoked on a smaller @{term N} compared
to the initial ‹⇗σ⇖N›.
›
apply vcg
txt ‹@{subgoals [display]}
The tribute to termination is that we have to show ‹N - 1 < N› in case of
the recursive call.
›
by simp

lemma (in append_impl) append_spec2:
shows "σ Ps Qs. Γt
  σ. List ´p ´next Ps   List ´q ´next Qs  set Ps  set Qs = {}
       ´p :== PROC append(´p,´q)
  List ´p ´next (Ps@Qs)  (x. xset Ps  ´next x = σnext x)"
apply (hoare_rule HoareTotal.ProcRec1
           [where r="measure (λ(s,p). length (list sp snext))"])
txt ‹In case of the append function the length of the list referenced by @{term p}
decreases in every recursive call.
›
txt ‹@{subgoals [margin=80,display]}›
apply vcg
apply (fastforce simp add: List_list)
done

text ‹
In case of the lists above, we have used a relational list abstraction @{term List}
to construct the HOL lists @{term Ps} and @{term Qs} for the pre- and postcondition.
To supply a proper measure function we use a functional abstraction @{term list}.
The functional abstraction can be defined by means of the relational list abstraction,
since the lists are already uniquely determined by the relational abstraction:

@{thm islist_def [no_vars]}\\
@{thm list_def [no_vars]}

\isacommand{lemma} @{thm List_conv_islist_list [no_vars]}
›
text ‹
The next contrived example is taken from \cite{Homeier-95-vcg}, to illustrate
a more complex termination criterion for mutually recursive procedures. The procedures
do not calculate anything useful.

›


procedures
  pedal(N::nat,M::nat)
  "IF 0 < ´N THEN
     IF 0 < ´M THEN
       CALL coast(´N- 1,´M- 1) FI;;
       CALL pedal(´N- 1,´M)
     FI"
  and

  coast(N::nat,M::nat)
  "CALL pedal(´N,´M);;
   IF 0 < ´M THEN CALL coast(´N,´M- 1) FI"


text ‹
In the recursive calls in procedure pedal› the first argument always decreases.
In the body of coast› in the recursive call of coast› the second
argument decreases, but in the call to pedal› no argument decreases.
Therefore a relation only on the state space is insufficient. We have to
take the procedure names into account, too.
We consider the procedure coast› to be ``bigger'' than pedal›
when we construct a well-founded relation on the product of state space and procedure
names.
›

ML ‹ML_Thms.bind_thm ("HoareTotal_ProcRec2", Hoare.gen_proc_rec @{context} Hoare.Total 2)›


text ‹
  We provide the ML function {\tt gen\_proc\_rec} to
automatically derive a convenient rule for recursion for a given number of mutually
recursive procedures.
›


lemma (in pedal_coast_clique)
shows "(σ. Γt {σ} PROC pedal(´N,´M) UNIV) 
         (σ. Γt {σ} PROC coast(´N,´M) UNIV)"
apply (hoare_rule HoareTotal_ProcRec2
            [where r= "((λ(s,p). sN) <*mlex*>
                           (λ(s,p). sM) <*mlex*>
                           measure (λ(s,p). if p = coast_'proc then 1 else 0))"])
  txt ‹We can directly express the termination condition described above with
  the <*mlex*>› construction. Either state component N› decreases,
  or it stays the same and M› decreases or this also stays the same, but
  then the procedure name has to decrease.›
  txt @{subgoals [margin=80,display]}
apply  simp_all
  txt @{subgoals [margin=75,display]}
by (vcg,simp)+

text ‹We can achieve the same effect without ‹<*mlex*>› by using
 the ordinary lexicographic product ‹<*lex*>›, ‹inv_image› and
 ‹measure›:
›
lemma (in pedal_coast_clique)
shows "(σ. Γt {σ} PROC pedal(´N,´M) UNIV) 
         (σ. Γt {σ} PROC coast(´N,´M) UNIV)"
apply (hoare_rule HoareTotal_ProcRec2
          [where r= "inv_image (measure (λm. m) <*lex*>
                                        measure (λm. m) <*lex*>
                                        measure (λp. if p = coast_'proc then 1 else 0))
                           (λ(s,p). (⇗s⇖N,⇗s⇖M,p))"])
  txt ‹With the lexicographic product we construct a well-founded relation on
         triples of type @{typ "(nat×nat×string)"}. With @{term inv_image} we project
         the components out of the state-space and the procedure names to this
         triple.
›
  txt @{subgoals [margin=75,display]}
apply simp_all
by (vcg,simp)+

text ‹By doing some arithmetic we can express the termination condition with a single
measure function.
›

lemma (in pedal_coast_clique)
shows "(σ. Γt {σ} PROC pedal(´N,´M) UNIV) 
         (σ. Γt {σ} PROC coast(´N,´M) UNIV)"
apply(hoare_rule HoareTotal_ProcRec2
       [where r= "measure (λ(s,p). sN + sM + (if p = coast_'proc then 1 else 0))"])
apply simp_all
txt @{subgoals [margin=75,display]}
by (vcg,simp,arith?)+


subsection ‹Guards›

text (in vars) ‹The purpose of a guard is to guard the {\bf (sub-) expressions} of a
statement against runtime faults. Typical runtime faults are array bound violations,
dereferencing null pointers or arithmetical overflow. Guards make the potential
runtime faults explicit, since the expressions themselves never ``fail'' because
they are ordinary HOL expressions. To relieve the user from typing in lots of standard
guards for every subexpression, we supply some input syntax for the common
language constructs that automatically generate the guards.
For example the guarded assignment ‹´M :==g (´M + 1) div ´N› gets expanded to
the guarded command @{term "´M :==g (´M + 1) div ´N"}. Here @{term "in_range"} is
left uninterpreted for now.
›

lemma (in vars) "ΓTrue ´M :==g (´M + 1) div ´N True"
apply vcg
txt ‹@{subgoals}›
oops

text ‹
The user can supply an (overloaded) definition of ‹in_range›
to fit his needs.

Currently guards are generated for:

\begin{itemize}
\item overflow and underflow of numbers (in_range›). For subtraction of
      natural numbers a - b› the guard b ≤ a› is generated instead
      of in_range› to guard against underflows.
\item division by 0›
\item dereferencing of @{term Null} pointers
\item array bound violations
\end{itemize}

The following (input) variants of guarded statements are available:

\begin{itemize}
\item Assignment: … :==g …›
\item If: IFg …›
\item While: WHILEg …›
\item Call: CALLg …› or … :== CALLg …›
\end{itemize}
›
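
text ‹
To make the guard generation concrete (an illustrative sketch): instantiating
the subtraction rule above, for the guarded assignment ‹´M :==g ´M - 1› on
natural numbers one would expect the guard ‹1 ≤ ´M› rather than an ‹in_range›
guard, and a guarded conditional like ‹IFg ´M div ´N < 10 THEN SKIP FI› would
additionally be guarded against division by zero because of the ‹div› in its
condition.
›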

subsection ‹Miscellaneous Techniques›



subsubsection ‹Modifies Clause›

text ‹We look at some issues regarding the modifies clause with the example
of insertion sort for heap lists.
›

primrec sorted:: "('a  'a  bool)  'a list   bool"
where
"sorted le [] = True" |
"sorted le (x#xs) = ((yset xs. le x y)  sorted le xs)"

procedures (imports globals_heap)
  insert(r::ref,p::ref | p::ref)
    "IF ´r=Null THEN SKIP
     ELSE IF ´p=Null THEN ´p :== ´r;; ´p→´next :== Null
          ELSE IF ´r→´cont ≤ ´p→´cont
               THEN ´r→´next :== ´p;; ´p:==´r
               ELSE ´p→´next :== CALL insert(´r,´p→´next)
               FI
          FI
     FI"

lemma (in insert_impl) insert_modifies:
   "σ. Γ⊢⇘/UNIV{σ} ´p :== PROC insert(´r,´p)
        {t. t may_only_modify_globals σ in [next]}"
  by (hoare_rule HoarePartial.ProcRec1) (vcg spec=modifies)

lemma (in insert_impl) insert_spec:
  "σ Ps . Γ
  σ. List ´p ´next Ps  sorted (≤) (map ´cont Ps) 
      ´r  Null  ´r  set Ps
    ´p :== PROC insert(´r,´p)
  Qs. List ´p ´next Qs  sorted (≤) (map σcont  Qs) 
        set Qs = insert σr (set Ps) 
        (x. x  set Qs  ´next x = σnext x)"
  (*<*)
  apply (hoare_rule HoarePartial.ProcRec1)
  apply vcg
  apply (intro conjI impI)
  apply    fastforce
  apply   fastforce
  apply  fastforce
  apply (clarsimp)
  apply force
  done
  (*>*)

text ‹
In the postcondition of the functional specification there is a small but
important subtlety. Whenever we talk about the @{term "cont"} part we refer to
the one of the pre-state.
The reason is that we have separated out the information that @{term "cont"} is not
modified by the procedure, to the modifies clause. So whenever we talk about unmodified
parts in the postcondition we have to use the pre-state part, or explicitly
state an equality in the postcondition.
The reason is simple. If the postcondition talked about ‹´cont›
instead of \mbox{‹⇗σ⇖cont›}, we would get a new instance of ‹cont› during
verification and the postcondition would only state something about this
new instance. But as the verification condition generator uses the
modifies clause, the caller of @{term "insert"} still has the
old ‹cont› after the call. That is the point of the modifies clause.
So the caller and the specification simply talk about two different things,
without being able to relate them (unless an explicit equality is added to
the specification).
›


subsubsection ‹Annotations›

text ‹
Annotations (like loop invariants)
are mere syntactic sugar of statements that are used by the vcg›.
Logically a statement with an annotation is
equal to the statement without it. Hence annotations can be introduced by the user
while building a proof:

@{thm [source] HoarePartial.annotateI}: @{thm [mode=Rule] HoarePartial.annotateI [no_vars]}

When introducing annotations it can easily happen that these mess around with the
nesting of sequential composition. Then after stripping the annotations the resulting
statement is no longer syntactically identical to the original one, only equivalent
modulo associativity of sequential composition. The following rule also deals with this case:

@{thm [source] HoarePartial.annotate_normI}: @{thm [mode=Rule] HoarePartial.annotate_normI [no_vars]}
›
text_raw ‹\paragraph{Loop Annotations}
\mbox{}
\medskip

\mbox{}
›

procedures (imports globals_heap)
  insertSort(p::ref| p::ref)
  where r::ref q::ref in
    "´r:==Null;;
     WHILE (´p ≠ Null) DO
       ´q :== ´p;;
       ´p :== ´p→´next;;
       ´r :== CALL insert(´q,´r)
     OD;;
     ´p:==´r"

lemma (in insertSort_impl) insertSort_modifies:
  shows
   "σ. Γ⊢⇘/UNIV{σ} ´p :== PROC insertSort(´p)
    {t. t may_only_modify_globals σ in [next]}"
apply (hoare_rule HoarePartial.ProcRec1)
apply (vcg spec=modifies)
done




text ‹Insertion sort is not implemented recursively here, but with a
loop. Note that the while loop is not annotated with an invariant in the
procedure definition. The invariant only comes into play during verification.
Therefore we annotate the loop first, before we run the vcg›.
›

lemma (in insertSort_impl) insertSort_spec:
shows "σ Ps.
  Γ σ. List ´p ´next Ps 
       ´p :== PROC insertSort(´p)
     Qs. List ´p ´next Qs  sorted (≤) (map σcont Qs) 
           set Qs = set Ps"
apply (hoare_rule HoarePartial.ProcRec1)
apply (hoare_rule anno=
         "´r :== Null;;
         WHILE ´p ≠ Null
         INV ⦃∃Qs Rs. List ´p ´next Qs ∧ List ´r ´next Rs ∧
                  set Qs ∩ set Rs = {} ∧
                  sorted (≤) (map ´cont Rs) ∧ set Qs ∪ set Rs = set Ps ∧
                  ´cont = ⇗σ⇖cont⦄
          DO ´q :== ´p;; ´p :== ´p→´next;; ´r :== CALL insert(´q,´r) OD;;
          ´p :== ´r" in HoarePartial.annotateI)
apply vcg
txt …›
(*<*)

  apply   fastforce
  prefer 2
  apply  fastforce
  apply (clarsimp)
  apply (rule_tac x=ps in exI)
  apply (intro conjI)
  apply    (rule heap_eq_ListI1)
  apply     assumption
  apply    clarsimp
  apply    (subgoal_tac "x≠p ⟶ x ∉ set Rs")
  apply     auto
  done
(*>*)

text ‹The method hoare_rule› automatically solves the side-condition
        that the annotated
        program is the same as the original one after stripping the annotations.›

text_raw ‹\paragraph{Specification Annotations}
\mbox{}
\medskip

\mbox{}
›



text ‹
When verifying a larger block of program text, it might be useful to split up
the block and to prove the parts in isolation. This is especially useful to
isolate loops. On the level of the Hoare calculus
the parts can then be combined with the consequence rule. To automate this
process we introduce the derived command @{term specAnno}, which allows us to introduce
a Hoare tuple (including auxiliary variables) in the program text:

@{thm specAnno_def [no_vars]}

The whole annotation reduces to the body @{term "c undefined"}. The
type of the assertions @{term "P"}, @{term "Q"} and @{term "A"} is
@{typ "'a ⇒ 's set"} and the type of command @{term c} is @{typ "'a ⇒ ('s,'p,'f) com"}.
All entities formally depend on an auxiliary (logical) variable of type @{typ "'a"}.
The body @{term "c"} formally also depends on this variable, since a nested annotation
or loop invariant may also depend on this logical variable. But the raw body without
annotations does not depend on the logical variable. The logical variable is only
used by the verification condition generator. We express this by defining the
whole @{term specAnno} to be equivalent with the body applied to an arbitrary
variable.

The Hoare rule for specAnno› is mainly an instance of the consequence rule:

@{thm [mode=Rule,mode=ParenStmt] HoarePartial.SpecAnno [no_vars]}

The side-condition @{term "∀Z. c Z = c undefined"} expresses the intention of body @{term c}
explained above: The raw body is independent of the auxiliary variable. This
side-condition is solved automatically by the vcg›. The concrete syntax for
this specification annotation is shown in the following example:
›

lemma (in vars) "Γ {σ}
            ´I :== ´M;;
            ANNO τ. τ. ´I = σM
                         ´M :== ´N;; ´N :== ´I
                        ´M = τN  ´N = τI
           ´M = σN  ´N = σM"
txt ‹With the annotation we can name an intermediate state @{term τ}. Since the
       postcondition refers to @{term "σ"} we have to link the information about
       the equivalence of ‹⇗τ⇖I› and ‹⇗σ⇖M› in the specification in order
       to be able to derive the postcondition.
›
apply vcg_step
apply   vcg_step
txt ‹@{subgoals [display]}›
txt ‹The first subgoal is the isolated Hoare tuple. The second one is the
       side-condition of the consequence rule that allows us to derive the outermost
       pre/post condition from our inserted specification.
       ‹´I = ⇗σ⇖M› is the precondition of the specification.
       The second conjunct is a simplified version of
       ‹∀t. ⇗t⇖M = ´N ∧ ⇗t⇖N = ´I ⟶ ⇗t⇖M = ⇗σ⇖N ∧ ⇗t⇖N = ⇗σ⇖M› expressing that the
       postcondition of the specification implies the outermost postcondition.
›
apply  vcg
txt @{subgoals [display]}
apply  simp
apply vcg
txt @{subgoals [display]}
by simp


lemma (in vars)
  "Γ {σ}
  ´I :== ´M;;
  ANNO τ. τ. ´I = σM
    ´M :== ´N;; ´N :== ´I
    ´M = τN  ´N = τI
  ´M = σN  ´N = σM"
apply vcg
txt @{subgoals [display]}
by simp_all

text ‹Note that vcg_step› changes the order of sequential composition, to
allow the user to decompose sequences by repeated calls to vcg_step›, whereas
vcg› preserves the order.

The above example illustrates how we can introduce a new logical state variable
@{term "τ"}. You can introduce multiple variables by using a tuple:



›


lemma (in vars)
  "Γ {σ}
   ´I :== ´M;;
   ANNO (n,i,m). ´I = σM  ´N=n  ´I=i  ´M=m
     ´M :== ´N;; ´N :== ´I
   ´M = n  ´N = i
  ´M = σN  ´N = σM"
apply vcg
txt @{subgoals [display]}
by simp_all

text_raw ‹\paragraph{Lemma Annotations}
\mbox{}
\medskip

\mbox{}

›

text ‹
The specification annotations described before split the verification
into several Hoare triples which result in several subgoals. If we
instead want to prove the Hoare triples independently as
separate lemmas we can use the ‹LEMMA› annotation to plug together the
lemmas. It
inserts the lemma in the same fashion as the specification annotation.
›
lemma (in vars) foo_lemma:
  "n m. Γ ´N = n  ´M = m ´N :== ´N + 1;; ´M :== ´M + 1
             ´N = n + 1  ´M = m + 1"
  apply vcg
  apply simp
  done

lemma (in vars)
  "Γ ´N = n  ´M = m
       LEMMA foo_lemma
             ´N :== ´N + 1;; ´M :== ´M + 1
       END;;
       ´N :== ´N + 1
       ´N = n + 2  ´M = m + 1"
  apply vcg
  apply simp
  done

lemma (in vars)
  "Γ ´N = n  ´M = m
           LEMMA foo_lemma
              ´N :== ´N + 1;; ´M :== ´M + 1
           END;;
           LEMMA foo_lemma
              ´N :== ´N + 1;; ´M :== ´M + 1
           END
      ´N = n + 2  ´M = m + 2"
  apply vcg
  apply simp
  done

lemma (in vars)
  "Γ ´N = n  ´M = m
              ´N :== ´N + 1;; ´M :== ´M + 1;;
              ´N :== ´N + 1;; ´M :== ´M + 1
      ´N = n + 2  ´M = m + 2"
  apply (hoare_rule anno=
          "LEMMA foo_lemma
              ´N :== ´N + 1;; ´M :== ´M + 1
           END;;
           LEMMA foo_lemma
              ´N :== ´N + 1;; ´M :== ´M + 1
           END"
          in HoarePartial.annotate_normI)
  apply vcg
  apply simp
  done


subsubsection ‹Total Correctness of Nested Loops›

text ‹
When proving termination of nested loops it is sometimes necessary to express that
the loop variable of the outer loop is not modified in the inner loop. To express this
one has to fix the value of the outer loop variable before the inner loop and use this value
in the invariant of the inner loop. This can be achieved by surrounding the inner while loop
with an ‹ANNO› specification as explained previously. However, this
leads to repeating the invariant of the inner loop three times: in the invariant itself and
in the pre- and postcondition of the ‹ANNO› specification. Moreover one has
to deal with the additional subgoal introduced by ‹ANNO› that expresses how
the pre- and postcondition is connected to the invariant. To avoid this extra specification
and verification work, we introduce a variant of the annotated while-loop, where one can
introduce logical variables by ‹FIX›. As for the ‹ANNO› specification
multiple logical variables can be introduced via a tuple (‹FIX (a,b,c).›).

The Hoare logic rule for the augmented while-loop is a mixture of the invariant rule for
loops and the consequence rule for ANNO›:

\begin{center}
@{thm [mode=Rule,mode=ParenStmt] HoareTotal.WhileAnnoFix' [no_vars]}
\end{center}

The first premise expresses that the precondition implies the invariant and that
the invariant together with the negated loop condition implies the postcondition. Since
both implications may depend on the choice of the auxiliary variable @{term "Z"} these two
implications are expressed in a single premise and not in two of them as for the usual while
rule. The second premise is the preservation of the invariant by the loop body. And the third
premise is the side-condition that the computational part of the body does not depend on
the auxiliary variable. Finally the last premise is the well-foundedness of the variant.
The last two premises are usually discharged automatically by the verification condition
generator. Hence usually two subgoals remain for the user, stemming from the first two
premises.

The following example illustrates the usage of this rule. The outer loop increments the
loop variable @{term "M"} while the inner loop increments @{term "N"}. To discharge the
proof obligation for the termination of the outer loop, we need to know that the inner loop
does not mess around with @{term "M"}. This is expressed by introducing the logical variable
@{term "m"} and fixing the value of @{term "M"} to it.
›


lemma (in vars)
  "Γt ´M=0  ´N=0
      WHILE (´M < i)
      INV ´M  i  (´M  0  ´N = j)  ´N  j
      VAR MEASURE (i - ´M)
      DO
         ´N :== 0;;
         WHILE (´N < j)
         FIX m.
         INV ´M=m  ´N  j
         VAR MEASURE (j - ´N)
         DO
           ´N :== ´N + 1
         OD;;
       ´M :== ´M + 1
       OD
       ´M=i  (´M0  ´N=j)"
apply vcg
txt ‹@{subgoals [display]}

The first subgoal is from the precondition to the invariant of the outer loop.
The fourth subgoal is from the invariant together with the negated loop condition
of the outer loop to the postcondition. The subgoals two and three are from the body
of the outer while loop which is mainly the inner while loop. Because we introduce the
logical variable @{term "m"} here, the while Rule described above is used instead of the
ordinary while Rule. That is why we end up with two subgoals for the inner loop. Subgoal
two is from the invariant and the loop condition of the outer loop to the invariant
of the inner loop. And at the same time from the invariant of the inner loop to the
invariant of the outer loop (together with the proof obligation that the measure of the
outer loop decreases). The universal quantified variables @{term "Ma"} and @{term "N"} are
the ``fresh'' state variables introduced for the final state of the inner loop.
The equality @{term "Ma=M"} is the result of the equality ´M=m› in the inner
invariant. Subgoal three is the preservation of the invariant by the
inner loop body (together with the proof obligation that the measure of
the inner loop decreases).
›
(*<*)
apply    (simp)
apply   (simp,arith)
apply  (simp,arith)
done
(*>*)

subsection ‹Functional Correctness, Termination and Runtime Faults›

text ‹
Total correctness of a program with guards conceptually leads to three verification
tasks.
\begin{itemize}
\item functional (partial) correctness
\item absence of runtime faults
\item termination
\end{itemize}

In case of a modifies specification the functional correctness part
can be solved automatically. But the absence of runtime faults and
termination may be non-trivial.  Fortunately the modifies clause is
usually just a helpful companion of another specification that
expresses the ``real'' functional behaviour. Therefore the task to
prove the absence of runtime faults and termination can be dealt with
during the proof of this functional specification. In most cases the
absence of runtime faults and termination heavily build on the
functional specification parts.  So after all there is no reason why
we should again prove the absence of runtime faults and termination
for the modifies clause. Therefore it suffices to have partial
correctness of the modifies clause for a program where all guards are
ignored.  This leads to the following pattern:›



procedures foo (N::nat|M::nat)
  "´M :== ´M
   ― ‹think of body with guards instead›"

  foo_spec: "∀σ. Γ⊢t (P σ) ´M :== PROC foo(´N) (Q σ)"
  foo_modifies: "∀σ. Γ⊢/UNIV {σ} ´M :== PROC foo(´N)
                   {t. t may_only_modify_globals σ in []}"

text ‹
The verification condition generator can solve those modifies clauses automatically
and can use them to simplify calls to foo› even in the context of total
correctness.
›

subsection ‹Procedures and Locales \label{sec:Locales}›



text ‹
Verification of a larger program is organised on the granularity of procedures.
We prove the procedures in a bottom-up fashion.  Of course you can also always use Isabelle's
dummy proof ‹sorry› to prototype your formalisation. So you can write the
theory in a bottom-up fashion but actually prove the lemmas in any other order.

Here are some explanations of handling of locales. In the examples below, consider
proc1 and proc2 to be ``leaf'' procedures, which do not call any
other procedure.
Procedure proc› directly calls proc1 and proc2.

\isacommand{lemma} (\isacommand{in} proc1_impl›) proc1_modifies›:\\
\isacommand{shows} …›

After the proof of proc1_modifies›, the \isacommand{in} directive
stores the lemma in the
locale proc1_impl›. When we later on include proc1_impl› or prove
another theorem in locale proc1_impl› the lemma proc1_modifies›
will already be available as fact.

\isacommand{lemma} (\isacommand{in} proc1_impl›) proc1_spec›:\\
\isacommand{shows} …›

\isacommand{lemma} (\isacommand{in} proc2_impl›) proc2_modifies›:\\
\isacommand{shows} …›

\isacommand{lemma} (\isacommand{in} proc2_impl›) proc2_spec›:\\
\isacommand{shows} …›


\isacommand{lemma} (\isacommand{in} proc_impl›) proc_modifies›:\\
\isacommand{shows} …›

Note that we do not explicitly include anything about proc1 or
proc2 here. This is handled automatically. When defining
an impl›-locale it imports all impl›-locales of procedures that are
called in the body. In case of proc_impl› this means, that proc1_impl›
and proc2_impl› are imported. This has the neat effect that all theorems that
are proven in proc1_impl› and proc2_impl› are also present
in proc_impl›.

\isacommand{lemma} (\isacommand{in} proc_impl›) proc_spec›:\\
\isacommand{shows} …›

As we have seen in this example you only have to prove a procedure in its own
impl› locale. You do not have to include any other locale.
›

subsection ‹Records \label{sec:records}›

text ‹
Before @{term "statespaces"} where introduced the state was represented as a @{term "record"}.
This is still supported. Compared to the flexibility of statespaces there are some drawbacks
in particular with respect to modularity. Even names of local variables and
parameters are globally visible and records can only be extended in a linear fashion, whereas
statespaces also allow multiple inheritance. The usage of records is quite similar to the usage of statespaces.
We repeat the example of an append function for heap lists.
First we define the global components.
Again the appearance of the prefix `globals' is mandatory. This is the way the syntax layer distinguishes local and global variables.
›
record globals_list =
  next_' :: "ref ⇒ ref"
  cont_' :: "ref ⇒ nat"


text ‹The local variables also have to be defined as a record before the actual definition
of the procedure. The parent record state› defines a generic @{term "globals"}
field as a place-holder for the record of global components. In contrast to the
statespace approach there is no single @{term "locals"} slot. The local components are
just added to the record.
›
record 'g list_vars = "'g state" +
  p_'    :: "ref"
  q_'    :: "ref"
  r_'    :: "ref"
  root_' :: "ref"
  tmp_'  :: "ref"

text ‹Since the parameters and local variables are determined by the record, there are
no type annotations or definitions of local variables while defining a procedure.
›

procedures
  append'(p,q|p) =
    "IF ´p=Null THEN ´p :== ´q
     ELSE ´p →´next:== CALL append'(´p→´next,´q) FI"

text ‹As in the statespace approach, a locale called append'_impl› is created.
Note that we do not give any explicit information which global or local state-record to use.
Since the records are already defined we rely on Isabelle's type inference.
Dealing with the locale is analogous to the case with statespaces.
›

lemma (in append'_impl) append'_modifies:
  shows
   "σ. Γ {σ} ´p :== PROC append'(´p,´q)
        {t. t may_only_modify_globals σ in [next]}"
  apply (hoare_rule HoarePartial.ProcRec1)
  apply (vcg spec=modifies)
  done

lemma (in append'_impl) append'_spec:
  shows "σ Ps Qs. Γ
            σ. List ´p ´next Ps   List ´q ´next Qs  set Ps  set Qs = {}
                ´p :== PROC append'(´p,´q)
            List ´p ´next (Ps@Qs)  (x. xset Ps  ´next x = σnext x)"
  apply (hoare_rule HoarePartial.ProcRec1)
  apply vcg
  apply fastforce
  done


text ‹
However, in some corner cases the inferred state type in a procedure definition
can be too general, which raises problems when attempting to prove suitable
specifications in the locale.
Consider for example the simple procedure body @{term "´p :== Null"} for a procedure
init›.
›

procedures init (|p) =
  "´p:== Null"


text ‹
Here Isabelle can only
infer the local variable record. Since no reference to any global variable is
made, the type fixed for the global variables (in the locale ‹init_impl›) is a
type variable, say @{typ "'g"}, and not a @{term "globals_list"} record. Any specification
mentioning @{term "next"} or @{term "cont"} restricts the state type and cannot be
added to the locale ‹init_impl›. Hence we have to restrict the body
@{term "´p :== Null"} in the first place by adding a typing annotation:
›

procedures init' (|p) =
  "´p:== Null::(('a globals_list_scheme, 'b) list_vars_scheme, char list, 'c) com"


subsubsection ‹Extending State Spaces›
text ‹
The records in Isabelle are
extensible \cite{Nipkow-02-hol,NaraschewskiW-TPHOLs98}. In principle this can be exploited
during verification. The state space can be extended while we add procedures.
But there is one major drawback:
\begin{itemize}
  \item records can only be extended in a linear fashion (there is no multiple inheritance)
\end{itemize}

You can extend both the main state record as well as the record for the global variables.
›

subsubsection ‹Mapping Variables to Record Fields›

text ‹
Generally the state space (global and local variables) is flat and all components
are accessible from everywhere. Locality or globality of variables is achieved by
the proper init› and return›/result› functions in procedure
calls. What is the best way to map programming language variables to the state records?
One way is to disambiguate all names, by using the procedure names as prefix or the
structure names for heap components. This leads to long names and lots of
record components. But for local variables this is not necessary, since
variable @{term i} of procedure @{term A} and variable @{term "i"} of procedure @{term B}
can be mapped to the same record component, without any harm, provided they have the
same logical type. Therefore for local variables it is preferable to map them per type. You
only have to distinguish variables with the same name if they have different types.
Note that all pointers just have logical type ‹ref›. So you do not even
have to distinguish between a pointer ‹p› to an integer and a pointer ‹p› to
a list.
For global components (global variables and heap structures) you have to disambiguate the
name. But hopefully the field names of structures have different names anyway.
Also note that there is no notion of hiding of a global component by a local one in
the logic. You have to disambiguate global and local names!
As the names of the components show up in the specifications and the
proof obligations, names are even more important than for programming. Try to
find meaningful and short names, to avoid cluttering up your reasoning.
›
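
text ‹
For example, following this per-type scheme a single record of local variables
can serve several procedures. A hypothetical sketch, in the style of the
‹list_vars› record above:

\isacommand{record} ‹'g common_vars = "'g state" +›\\
\hspace*{2ex}‹i_' :: nat›\\
\hspace*{2ex}‹p_' :: ref›

A procedure ‹A› with a counter ‹i› and a procedure ‹B› with its own, unrelated
counter ‹i› can both be mapped to the single field ‹i_'›, since locality is
achieved by the ‹init› and ‹return›/‹result› functions of the calls, not by
the record itself.
›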

(*<*)
text ‹
in locales, includes, spec or impl?
Names: per type not per procedure…
downgrading total to partial…
›
(*>*)
text ‹›
(*<*)
end
(*>*)