Theory ToyList

theory ToyList = PreList:

text {* We begin with the definitions of the toy list
theory, taken verbatim from the tutorial. *}


datatype 'a list = Nil                 ("[]")
                 | Cons 'a "'a list"   (infixr "#" 65)

consts app :: "'a list => 'a list => 'a list" (infixr "@" 65)
       rev :: "'a list => 'a list"

primrec
"[] @ ys = ys"
app_rdef:
"(x#xs) @ ys = x # (xs @ys)"

primrec
"rev [] = []"
rev_rdef:
"rev (x#xs) = (rev xs) @ (x # [])"

text {* Now we need to do an inductive proof.  Nowhere
that I have found is the basic idiom for induction in Isar
explained, yet we need it right off the bat.  Here is a
boiled-down version.  It is a little more verbose than the
Isabelle version but the ``is'' construct really helps. *}

lemma app_assoc [simp]: "(xs @ ys) @ zs = xs @ (ys @ zs)" (is "?P xs")
proof (induct xs)
  show "?P []" by simp
next
  fix x xs assume "?P xs"
  show "?P (x#xs)" by simp
qed

text {* Here's the deal.  The ``is'' clause tells the system to
treat the entire theorem statement as a predicate (truth-valued
function) of ``xs''.  This is nothing more than shorthand, but
it really helps.

The induct method splits the current goal into one subgoal per
case, using an induction theorem determined by the type of the
variable you name.  You clear each subgoal by showing that it is
true.  Having named the predicate saves me from having to type
the whole thing in again.

Isar is philosophically committed to the idea that the proof
script should be human-readable.  In Isabelle you can just say
``induct\_tac xs, simp'' and go on to work on the inductive case,
but a reader would have to know that the simplifier had eliminated
the base case in order to follow your proof script. *}
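
text {* For comparison, here is roughly what that tactic-style
proof looks like when written out.  This is only an illustrative
sketch (the name app\_assoc\_tac is mine, not the tutorial's), and
since app\_assoc is already a simp rule at this point, simp could
in fact finish the goal on its own. *}

lemma app_assoc_tac: "(xs @ ys) @ zs = xs @ (ys @ zs)"
apply (induct_tac xs)
 apply simp   (* the simplifier disposes of the base case *)
apply simp    (* ... and of the inductive step, using the induction hypothesis *)
done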

lemma app_Nil2 [simp]: "xs @ [] = xs" (is "?P xs")
proof (induct xs)
  show "?P []" by simp
next
  fix x xs assume "?P xs"
  show "?P (x#xs)" by simp
qed

text {* This next lemma requires something new.  Even though ``auto''
works in the Isabelle script, here we have to guide the proof step
by step.  This will turn out to be a good thing later, when blind
simplification can seriously break a proof.  So we dive immediately
into the calculational (``also have'') style. *}

lemma rev_app [simp]: "rev(xs@ys) = (rev ys)@(rev xs)" (is "?P xs")
proof (induct xs)
  show "?P []" by simp
next
  fix x xs assume "?P xs"
  show "?P (x#xs)" (is "?LHS = ?RHS")
  proof -
    have "?RHS = (rev ys)@(rev xs)@(x#[])" by simp
    also from prems have "... = ?LHS" by simp
    finally show "?LHS = ?RHS" by simp
  qed
qed

text {* A lot of experimentation went into the writing
of that little proof.  Except for the sugar word ``next''
and the "is" clauses, nothing can be left out.  Play around
with it. 

Note that the ``fix'' clause must name x as well as xs: drop the x
and the proof won't go through.  On the other hand, ys need not be
fixed at all.
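Expanding the ``is'' shorthand in the Cons case by hand shows why
(this is the same step as above, spelled out, not a new proof):

  fix x xs
  assume "rev (xs @ ys) = rev ys @ rev xs"
  show "rev ((x\#xs) @ ys) = rev ys @ rev (x\#xs)"

Both x and xs occur in the goal because the Cons case of the list
induction rule generalises over the head as well as the tail, so
both must be fixed; ys, by contrast, is a free variable of the
whole lemma and stays the same throughout the induction.

Now for the main theorem of this section. *}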


theorem rev_rev [simp]: "rev(rev xs) = xs" (is "?P xs")
proof (induct xs)
  show "?P []" by simp
next
  fix x xs assume "?P xs"
  show "?P (x#xs)" by simp
qed

end

  
