Theory Tries


theory Tries = Main:

ML "set quick_and_dirty"

text {* An aside on managing a large project:

Once a theory file is acceptable to the interactive
prover you need to get it typeset.  That's where the
files ROOT.ML and root.tex come in.  But the
documentation building process can generate new
errors, since the text portions are not checked
during interactive processing.

Since the order in which individual theory files
are processed is controlled by ROOT.ML while their
order in the document is controlled by root.tex, it
is possible to compile the (independent) theory
files in any order.  Put the newest files as high
up in the list as possible to minimize latency
in the ((modify compile correct)* inspect correct)* cycle. *}

text {* All about problems 3.4.4-5

[The early parts of this theory file were written
before I began working with Isar.  I'm not going
to rewrite them now.]

I begin by focusing on
the maintenance of the associative lists.  Here I solve
the whole problem by defining a lookup and update function
pair that realize a concrete associative set protocol.

With associative lists in hand, implementing tries becomes
an exercise in structural recursion. [Ha!  Don't I wish
this optimistic forward-looking assertion had been true!]

Define the lookup function, \tt{assoc},
and field extractors. *}

consts assoc :: "('k*'v)list => 'k => 'v option"
       akey :: "('k*'v) => 'k"
       aval :: "('k*'v) => 'v"
primrec
"akey (kk,vv) = kk"
primrec
"aval (kk,vv) = vv"
primrec
assoc_base:
"assoc [] x = None"
assoc_rdef:
"assoc (p#ps) k = (if akey p = k then Some (aval p) else assoc ps k)"
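For readers who want to see the intended behavior concretely, here is a small Python sketch of the lookup function (modeling the 'v option type with Python's None; this encoding is my own illustration, not part of the theory):

```python
def assoc(pairs, k):
    """Return the value bound to k in an association list, or None.

    Mirrors the primrec: the head of the list is checked first,
    so the first matching pair wins.
    """
    for key, val in pairs:
        if key == k:
            return val   # models 'Some (aval p)'
    return None          # models 'None'
```

As in the primrec equations, an entry earlier in the list shadows any later entry with the same key.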

text {* The associative list update function is named \tt{arep},
it uses the helper function \tt{thisrep}. *}

consts arep :: "('k*'v)list => 'k => 'v option => ('k*'v)list"
       thisrep :: "('k*'v) => 'k => 'v option => ('k*'v)list"
primrec
arep_base:
"arep [] k vo = (case vo of None => [] | Some v => [(k,v)])"
arep_rdef:
"arep (p#ps) k vo = (thisrep p k vo) @ (arep ps k vo)"
primrec
"thisrep p k None = (if akey p = k then [] else [p])"
"thisrep p k (Some v) = (if akey p = k then [(k,v)] else [p])"
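The same encoding gives a Python sketch of the update pair (again an illustration, with None standing in for the option type).  Note that the base case appends a fresh pair even when the key was already replaced in place, so the result can contain duplicate entries for the updated key; lookup still sees the new value, which is all the correctness theorem demands:

```python
def thisrep(p, k, vo):
    """Rewrite a single pair: drop it (vo=None) or replace it when keys match."""
    key, _val = p
    if vo is None:
        return [] if key == k else [p]
    return [(k, vo)] if key == k else [p]

def arep(pairs, k, vo):
    """Replace every binding of k (vo=None deletes; otherwise rebinds),
    appending a fresh (k, vo) pair at the end, as in the primrec base case."""
    if not pairs:
        return [] if vo is None else [(k, vo)]
    return thisrep(pairs[0], k, vo) + arep(pairs[1:], k, vo)
```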

text {* The main theorem (for associative lists) is given in two parts,
the first establishing the independence of keys and the second establishing,
in essence,
that the result of a lookup is the value of the most recent
(outermost) update operation.  This is strictly true only if
\tt{arep} and \tt{[]} are the only constructors allowed.  It would
be entertaining to go ahead and cast all this as an axiomatic
data type, but I need to sleep. 

As recommended in the tutorial, we begin by expanding the
set of default inference rules. *}

declare Let_def[simp] option.split[split]

theorem arep_indep: "k1 ~= k2 ==> assoc (arep d k2 vo) k1 = assoc d k1"
apply (induct_tac d)
apply (force)
apply (simp only: arep_rdef)
apply (case_tac vo)
apply (auto)
done

theorem assoc_rep[simp]: "assoc (arep d k2 vo) k1 =
           (if k1 = k2 then vo else assoc d k1)"
apply (auto)
apply (induct_tac d, auto)
apply (case_tac vo, auto)
apply (rule arep_indep, simp)
done

text {* Note that this theorem cannot be established without
either formalizing the idea of a well-formed associative list
or having \tt{arep} delete extraneous entries.  After spending
a very interesting couple of days playing with the former, I
bailed out and went with the latter. 

That last proof directive, by the way, is the direct way to
get a lemma applied to the current goal.
\footnote{This interjection was written relatively early
in my journey; I'm keeping it here for journal-keeping integrity
reasons.}
Now for the tries;
the beginning is verbatim from the tutorial. *}

datatype ('a,'v)trie = Trie "'v option" "('a * ('a,'v)trie)list"

consts value :: "('a,'v)trie => 'v option"
          alist :: "('a,'v)trie => ('a * ('a,'v)trie)list"
primrec value_def:
"value (Trie ov al) = ov"
primrec alist_def:
"alist (Trie ov al) = al"

consts lookup :: "('a,'v)trie => 'a list => 'v option"
primrec
lookup_base:
"lookup t [] = value t"
lookup_rdef:
"lookup t (a#as) = (case assoc (alist t) a of
   None => None
|  Some at => lookup at as)"
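In Python terms, the trie lookup reads as follows, representing a trie as a pair (value_option, children) where children is an association list from letters to subtries (my own encoding, for illustration only):

```python
def lookup(trie, key):
    """Follow key letter by letter; None models a failed or absent lookup."""
    value, children = trie
    if not key:
        return value                     # 'lookup t [] = value t'
    for k, sub in children:
        if k == key[0]:
            return lookup(sub, key[1:])  # recurse into the matching subtrie
    return None                          # the 'assoc ... = None' case
```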

lemma [simp]: "lookup (Trie None []) as = None"
apply (case_tac as, auto)
done

text {* Now I need to use a modified update.  The one for
problem 4 is just
like the original, but uses the \tt{arep} operator. *}

consts update :: "('a,'v)trie => 'a list => 'v => ('a,'v)trie"

primrec
"update t [] v = Trie (Some v) (alist t)"
update_rdef:
"update t (a#as) v = (let tt = (case assoc (alist t) a of
               None => Trie None [] | Some x => x)
    in Trie (value t) (arep (alist t) a (Some (update tt as v))))"
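The behavior of this update can be sketched in Python using the same hypothetical (value, children) trie encoding as before.  The list-update step here simply drops the old binding and appends the new one; that is a simplification of @{text arep}, which rewrites the entry in place, but it gives the same lookup results:

```python
def lookup(trie, key):
    value, children = trie
    if not key:
        return value
    for k, sub in children:
        if k == key[0]:
            return lookup(sub, key[1:])
    return None

def update(trie, key, v):
    """Insert or overwrite the value stored at key, creating nodes as needed."""
    value, children = trie
    if not key:
        return (v, children)  # store Some v at this node
    # existing subtrie for the first letter, or a fresh empty one
    sub = next((s for k, s in children if k == key[0]), (None, []))
    rest = [(k, s) for k, s in children if k != key[0]]
    return (value, rest + [(key[0], update(sub, key[1:], v))])
```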

text {* This update is leak-free, since the underlying update
operation is leak-free, and still satisfies the correctness property
theorem: *}

theorem "!t v bs. lookup (update t as v) bs =
     (if as = bs then Some v else lookup t bs)"
apply(induct_tac as, auto)
apply(case_tac[!] bs, auto)
done

text {* To build a trie update operation that implements
deletion, we proceed along the same lines as the general @{text arep},
representing deletion by @{text None}.  The details, however, are
tricky: since interior nodes can't be deleted just because they
carry no data, it may be necessary to do some garbage collection
when deleting a leaf node.  Fully developing this
would involve showing that a deletion in a well-formed trie leaves
it well-formed.  I'm not going to attack that.  Instead, all I show
is that the function I define also satisfies the correctness
property theorem. *}

consts u2 :: "('a,'v)trie => 'a list => 'v option => ('a,'v)trie"
       tfix :: "('a,'v)trie option => ('a,'v)trie"
       tgc :: "('a,'v)trie => ('a,'v)trie option"

primrec
tfix_none:
"tfix None = Trie None []"
tfix_some:
"tfix (Some t) = t"
recdef tgc "{}" (* see page 45 bottom *)
tgc_none:
"tgc (Trie None []) = None"
tgc_some:
"tgc t = Some t"

consts u2core:: "'v option => ('k*('k,'v)trie)list => 'k list => 'v option
              => ('k,'v)trie option"

defs
u2_def:
"u2 t s vo == tfix (u2core (value t) (alist t) s vo)"

primrec
u2core_base:
"u2core ovo al [] vo = tgc (Trie vo al)"
u2core_rdef:
"u2core ovo al (x#xs) vo = tgc (Trie ovo
           (let tt = tfix (assoc al x) in
           arep al x
           (u2core (value tt) (alist tt) xs vo)))"
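The deleting update and its helpers can be sketched in Python as follows (same hypothetical (value, children) encoding; the recursion is reorganized slightly compared with @{text u2core}, but the pruning behavior is the point):

```python
EMPTY = (None, [])

def tgc(trie):
    """Garbage-collect: an empty leaf becomes None, as in 'tgc'."""
    return None if trie == EMPTY else trie

def tfix(opt):
    """The inverse direction: None becomes the empty trie, as in 'tfix'."""
    return EMPTY if opt is None else opt

def u2(trie, key, vo):
    """Update with an optional value: vo=None deletes and prunes empty leaves."""
    value, children = trie
    if not key:
        return tfix(tgc((vo, children)))
    sub = next((s for k, s in children if k == key[0]), EMPTY)
    rest = [(k, s) for k, s in children if k != key[0]]
    new_sub = tgc(u2(sub, key[1:], vo))  # None: the whole subtrie vanished
    if new_sub is None:
        return tfix(tgc((value, rest)))  # drop the edge, maybe prune this node
    return (value, rest + [(key[0], new_sub)])

def lookup(trie, key):
    value, children = trie
    if not key:
        return value
    for k, sub in children:
        if k == key[0]:
            return lookup(sub, key[1:])
    return None
```

Deleting the only key in a trie collapses it all the way back to the empty trie, which is the leak-freedom claimed next.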

text {* This update is leak-free, since the underlying list update
operation is leak-free, and since (I claim) it trims away non-terminal
nodes when they become leaves.  I will prove that \tt{u2} also
satisfies the correctness property theorem.  Attempting the
proof in one stage motivates the following lemmas: *}

text {* OK, this is where it all fell apart on my first pass.
Clean slate time.

I had written and proved the following
\begin{verbatim}
lemma string_diff1: "s ~= t & s ~= [] & t ~= [] ==>
                     hd s ~= hd t | tl s ~= tl t"
\end{verbatim}
but it never helped as much as I thought it would.
Now that I know more it seems that the conjunctions
were standing in the way.  The difference between an ``\&''
and a ``;'' in a theorem statement does not look very
important to the beginner's eye, given how quickly one
is exchanged for the other in proving a theorem.  But
it makes a big difference when you go to use a rule! *}

lemma string_diff1[intro]: "[| s ~= t ; s ~= [] ; t ~= [] |]
                     ==> hd s ~= hd t | tl s ~= tl t"
by (cases s, simp, cases t, simp_all)

text {* Again I've just taken the original 3-line Isabelle
proof and repackaged it with ``by''.  

A typo I made when reformatting the lemma led me to
discover something worth noting.
Markus Wenzel sent me a nice email with lots
of little example proofs, one of which turned me on to
using the @{text case} construct rather than assumptions
to drive a proof.  Unfortunately you have to use the
name of the constructor to designate the case, so if
you usually use a sugared form (like @{term "[]"}, which is really
@{term Nil})
you may not know how to write down the name of the case! 

Aha!  There is a ``show me cases'' thingy under the Isabelle/Isar
help menu!  It only works when you are inside a cases construct,
but that's still a good thing.

Let's expand this next one to illustrate the syntax: *}

theorem case_some:"(case a of None => None | Some v => f v) ~= None
                           ==> a ~= None" (is "?P a ==> ?Q a")
proof (cases a)
  case None -- {* this just adds the assumption @{term "a = None"} *}
  assume "?P a" from prems show "?Q a" by simp
next
  case Some -- {* This is more interesting: it introduces a
fixed variable for a to be Some of. *}
  assume "?P a" from prems show "?Q a" by simp
qed

theorem case_none:"a = Some v & 
  (case a of None => None | Some x => f x) = None
    ==> f v = None"
by (case_tac a, auto)

theorem case_one:"(case a of None => None | Some x => f x) = Some v
             ==> ? x. a = Some x & f x = Some v"
by (case_tac a, auto)

text {* Here are some lemmas about my trie <--> trie option
conversion helper functions. *}

lemma tfix_id[simp]: "tfix (tgc t) = t"
apply (case_tac t)
apply (case_tac option, case_tac list)
apply (auto)
done

lemma tgc_id: "tfix (tgc t) = tfix (Some t)"
by (auto)

lemma lookup0 [simp]: "lookup (u2 t [] vo) x =
              (if x = [] then vo else lookup t x)"
apply (case_tac x)
apply (simp_all add: u2_def u2core_base)
done

text {* To make the main proofs tractable, it turns out
we need to be able to talk about the recursive structure of
the trie and not just about values of lookup. *}

constdefs subtrie:: "('k,'v)trie => 'k => ('k,'v)trie option"
    "subtrie t k == assoc (alist t) k"

lemma lookup1: "lookup t (a#s) = lookup (tfix (subtrie t a)) s"
apply (simp only: subtrie_def lookup_rdef)
apply (case_tac "subtrie t a")
apply (auto)
done

lemma lookup2: "a ~= b --> subtrie (u2 t (a#s) vo) b = subtrie t b"
apply (simp only: u2_def u2core_rdef Let_def subtrie_def)
apply (auto)
done

lemma subtrie_dist: "tfix (subtrie (u2 t (a#s) vo) a)
                     = u2 (tfix (subtrie t a)) s vo"
apply (simp only: u2_def u2core_rdef)
apply (simp)
apply (simp only: subtrie_def alist_def assoc_rep)
apply (simp)
done

text {* All of that could have been done with one @{text "simp add:"}
but breaking it up makes it possible to walk through it interactively. 

At this point I became frustrated trying to prove the main theorem
and started backing off to more and more abstract theorems about
predicates defined on pairs of strings that may or may not be equal. 
At every stage my proof ends up failing because the induction hypothesis
has the wrong structure.  The classical proof for these situations
is to induct over the length of the shorter key, which involves a
case distinction.  But I can't get that proof to work, since the
automatically generated goal does not simulate the stripping of
the lead character on the longer string! *}

text {* What seems to be needed is a specialized deduction rule
for induction on a pair of unequal strings.  Let's try to set one up. *}

constdefs pair_reductive:: "('a list => 'a list => 'b) => bool"
         "pair_reductive f ==
            (!a s t. f (a#s) (a#t) = f s t)"
          pair_symmetric:: "('a list => 'a list => 'b) => bool"
         "pair_symmetric f == (!s t. f s t = f t s)"
          pair_inductive:: "('a list => 'a list => bool) => bool"
         "pair_inductive f ==
            (!s t ss tt. f ss [] & f [] tt &
             (f ss tt --> f (s#ss) (t#tt)) )"

text {* Here I'm using some more of the tricks I got
in my email.  Cool! *}

lemma ind_sym: "[|pair_reductive f ;
                  pair_inductive f |]
              ==> pair_symmetric f"
proof -
   assume red: "pair_reductive f" 
   assume ind: "pair_inductive f"
   show ?thesis
   proof (simp only: pair_symmetric_def, rule allI, rule allI)
     fix s t show "f s t = f t s"
     proof (cases s)
        case Nil from ind show "f s t = f t s"
oops

text {* Well Shoot.  Maybe I don't need that theorem.  It turns
out to be true, since @{term pair_inductive} forces f to be a predicate
which is always true.  Let's just prove that, if we can. *}

theorem pair_induction: "pair_inductive f ==> f x y"
proof -
   assume ind: "pair_inductive f"
   from ind have lshort: "!!t. f [] t"
     by (simp add: ind pair_inductive_def)
   moreover from ind have rshort: "!!s. f s []"
     by (simp add: ind pair_inductive_def)
   moreover have samesame: "!!s. f s s"
apply (cases "(x,y)")
apply (cases x)
sorry
   have sym: "!!x y. f x y = f y x" -- {* I want to prove it this
way since otherwise I end up having to prove it twice.  Eliminating
(using ``introduction'' rules!) the predicate-level universal quantifier
on two variables found in the definition of @{term pair_symmetric}
yields two different subgoals that are identical down
to renaming the meta-universally quantified variables.  Foo. *}
sorry
   from sym show ?thesis sorry
   -- {* try @{text "by (simp add: pair_symmetric_def)"} *}
oops

text {* I bailed out of that attempt for the following
News Flash:

I had posted a version of the string pair induction problem
to the mailing list for help.  Jeremy@discus.anu.edu.au sent
me a proof which appears to be in some hindbrain dialect not
accepted by Isabelle99-2/Isar/HOL:

\begin{verbatim}

goal List.thy
"( (!a ss tt. P ss tt --> P (a#ss) (a#tt)) &
   (!a b ss tt. a ~= b --> P (a#ss) (b#tt)) &
   (!s. P s []) & (!s. P [] s) ) --> P x y";

br impI 1;
by (res_inst_tac[("x","y")] spec 1);
by (induct_tac "x" 1);
by (Fast_tac 1);
br allI 1;
by (case_tac "x" 1)
by (Fast_tac 1)
by (case_tac "a=aa");
by (Fast_tac 1);
by (Fast_tac 1);
\end{verbatim}

Let's see if we can figure out what it's saying. *}

theorem pair_induct:
"( (!a ss tt. P ss tt --> P (a#ss) (a#tt)) &
   (!a b ss tt. a ~= b --> P (a#ss) (b#tt)) &
   (!s. P s []) & (!s. P [] s) ) --> P x y" (is "?H --> P x y")

apply(rule impI)
-- {* The application of ``spec'' here seems to be
both unnecessary and impossible.  *}
apply (induct_tac x)
apply (fast)
oops
text {* Well, Shoot.

Digging around in the Isabelle reference manual shows at least
some information about these constructs, but they are disabled
when Isar is enabled.  I can't easily switch to non-Isar Isabelle
in the middle of a file, and the chapter on converting scripts
(the last section of the Isar Reference Manual) is daunting.

Attempting to read the unreadable proof, it seems that
it is essentially the induct x, cases y proof that
solves the lookup-update theorem for the leaky implementation
given in the tutorial.  Using spec on a pair to simultaneously
fix both x and y is a neat trick, but it doesn't seem to
work out in Isar.

Maybe somebody will send me an Isar proof.  At least I know
that it's a theorem!  On the authority of a random piece
of email I'll fake a proof of the pair induction principle
and use it to end this nightmare. *}

theorem pair_induct:
"[| !!a ss tt. P ss tt ==> P (a#ss) (a#tt) ;
    !!a b ss tt. a ~= b ==> P (a#ss) (b#tt) ;
    !!s. P s [] ; !!s. P [] s |] ==> P x y"
sorry
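For what it's worth, the rule's four premises really do cover every pair of lists.  A quick Python classifier (my illustration, not part of the theory) makes the case analysis explicit: every pair falls into exactly one of the four cases.

```python
def pair_case(x, y):
    """Name the pair_induct premise that applies to the pair (x, y)."""
    if not y:
        return "P s []"        # right list empty
    if not x:
        return "P [] s"        # left list empty
    if x[0] == y[0]:
        return "same head"     # P ss tt ==> P (a#ss) (a#tt)
    return "different heads"   # a ~= b ==> P (a#ss) (b#tt)
```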

text {* OK, let's see how a fancy rule like that can be
used to turn an impossible proof into a cakewalk.

The following is a concrete example of the pair induction
problem that I was unable to solve directly.  It should
go through easily using the new pair induction principle. *}

consts str_common:: "('a list * 'a list) => 'a list"
recdef str_common "measure (%(ss,tt). length ss)"
"str_common(s#ss,t#tt) = (if s = t then s#str_common (ss,tt) else [])"
"str_common(ss,tt) = []"
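In Python terms the function is the longest-common-prefix operation (an illustrative sketch; slicing stands in for the head/tail pattern matching of the recdef):

```python
def str_common(s, t):
    """Longest common prefix of two sequences (works for strings or lists)."""
    if s and t and s[0] == t[0]:
        return s[:1] + str_common(s[1:], t[1:])
    return s[:0]  # empty sequence of the same type as s
```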

text {* I could have added a case for identical arguments,
but it would be cleaner to prove it as a simplification lemma. *}

lemma [simp]: "str_common([],s) = []" by simp

lemma [simp]: "str_common(s,[]) = []" by (cases s, auto)

text {* Fascinating: I had to give it cases on the
second lemma.  What is the difference??? 

I had to come back and name the following, since I needed to use them
in preference to all other simplification rules in a tight spot
farther down. *}

lemma comm_cat_right[simp]: "str_common(s,s@t) = s"
by (induct s, auto)
lemma comm_cat_left [simp]: "str_common(s@t,s) = s"
by (induct s, auto)

consts str_uncommon:: "('a list * 'a list) => 'a list"
recdef str_uncommon "measure (%(s,t). length s)"
"str_uncommon([],tt) = []"
"str_uncommon(ss,[]) = ss"
"str_uncommon(s#ss,t#tt) = (if s = t then str_uncommon(ss,tt) else s#ss)"
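A Python sketch of @{text str_uncommon} (illustration only): the suffix of the first argument after its common prefix with the second.  Note the asymmetry in the base cases, which the lemmas below have to cope with:

```python
def str_uncommon(s, t):
    """Suffix of s after its common prefix with t.

    Mirrors the recdef: an empty s yields the empty sequence,
    an empty t yields all of s.
    """
    if not s or not t:
        return s
    if s[0] == t[0]:
        return str_uncommon(s[1:], t[1:])
    return s
```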

lemma str_com_rec: "hd (str_common(a#s,a#t)) = a &
                       tl (str_common(a#s,a#t)) = str_common(s,t)"
by simp

lemma str_com_stop: "s ~= t ==> str_common(s#ss,t#tt) = []"
by simp

lemma str_com_go: "s = t ==> str_common(s#ss,t#tt) = s#(str_common(ss,tt))"
by simp

text {* Even with the foregoing I could not make progress
on the following:
\begin{verbatim}
lemma str_com_prefix: "(!t. str_common(s,t) = str_common(t,s))
                     & (!s. str_common(s,t) = str_common(t,s))" (is "?P s t")
proof (induct ?P s and t)
proof (cases "s=t", simp, cases s, simp, cases t, simp)
  fix x xs y ys assume "s=x#xs" "t=y#ys" "?P xs ys"
  from prems show "?P s t"
\end{verbatim}

Let's try again, without having to strengthen the real goal: *}

lemma str_com_prefix: "str_common(s,t) = str_common(t,s)" (is "?P s t")
by (rule pair_induct, auto)

text {* Hurrah!  I'm finally getting past the beginner stage. *}

lemma uncommon_prefix[simp]:
     "str_common(s,t)@str_uncommon(s,t) = s" (is "?P s t")
proof (rule pair_induct [of ?P])
-- {* Without the of clause it fails miserably.  Sigh. *}
   fix s show "?P [] s" by (cases s, auto)
next
   fix s show "?P s []" by (cases s, auto)
next
   fix a ss tt assume "?P ss tt" show "?P (a#ss) (a#tt)" by simp
next
   fix a b ss tt assume "(a::'a) ~= (b::'a)"
-- {* Apparently $~=$ doesn't imply type compatibility!
Interestingly, you can't use a new free type variable here.
In fact, you only really need the type specifier on @{term b}. *}
   from prems show "?P (a#ss) (b#tt)" by auto
qed

text {* At this point I would go back and insert the type constraint
into the original rule by changing the way the inequality is written,
but inspecting the stored theorem shows that the constraint is there.
Somehow it is being lost in the shuffle.

The only thing not solved by the induction idiom followed by
auto, interestingly, are the two base cases.  Adding list.split
doesn't help.  You'd think adding the definition of @{term str_uncommon}
would help, but it doesn't.  There's a comment in the tutorial
about proving trivial-looking simplification lemmas for functions
defined by recdef.  This is where that comes in. Might as well
add them now: *}

lemma [simp]: "str_uncommon(s,[]) = s" by (cases s, auto)
lemma [simp]: "str_uncommon([],s) = []" by (cases s, auto)

text {* Note that a string not being empty is the minimum
condition for saying anything at all about its head or tail. *}

text {* New revelation: not not equal is not the same thing
as equal. For this reason, the following lemma is quite a
challenge to prove.  Negating the thesis buys you nothing!

Fortunately enough, assuming the equality seems to be
an acceptable step. *}

lemma str_split1:
   "[|
       str_uncommon(s,t) ~= [] ;
       str_uncommon(t,s) ~= [] 
    |]  ==>
    hd (str_uncommon(s,t)) ~= hd (str_uncommon(t,s))"
proof -- {* Default intro rule turns it into a proof by contradiction *}
   assume tails: "str_uncommon(s,t) ~= []" and
          tailt: "str_uncommon(t,s) ~= []" and
          sameheads: "hd (str_uncommon(s,t)) = hd (str_uncommon(t,s))"
 -- {* Note the use of the connective ``and''
when assuming a batch of individually named propositions. *}
   def prefix: p == "str_common(s,t)"
   def pivots: c == "hd (str_uncommon(s,t))"
   from this and sameheads have pivott: "c = hd(str_uncommon(t,s))" by simp
   def ttail: tt == "tl (str_uncommon(t,s))"
   def stail: ss == "tl (str_uncommon(s,t))"
   from prefix pivots stail tails have scomp: "s = p@[c]@ss" by simp
-- {* Alas, here is one of those places where we must guide the
simplifier step by bloody step.  *}
   from pivott have "p@[c]@tt = p@([hd (str_uncommon(t,s))]@tt)" by simp
   also from ttail tailt have "... = p@(str_uncommon(t,s))"  by simp
   also from prefix and str_com_prefix
      have "... = (str_common(t,s))@(str_uncommon(t,s))" by force
   finally have tcomp: "t = p@[c]@tt" by simp
-- {* Well that finally worked.  Now to show a contradiction. *}
   from scomp have "s = (p@[c])@ss" by simp
   from this have killer1: "str_common(s,p@[c]) = p@[c]"
       by (simp only: comm_cat_left)
-- {* getting that to go through was a bear.  At some earlier stage
I had a theorem that @{term str_common} curried with one string was
idempotent.  I seem to have deleted it.  Sigh. *}
   from tcomp have "t = (p@[c])@tt" by simp
   from this have killer2: "str_common(t,p@[c]) = p@[c]"
       by (simp only: comm_cat_left) -- {*
OK, now I'm just wandering around in the wilderness.
If I'd proved this on paper first I would know exactly
what contradiction to show here.  Of course on paper
I would have probably just written ``obvious'' ;-) *}
   have killer3: "str_common(p@[c]@ss,p@[c]@tt) ~= p"
     by (induct p, auto)

   from killer3 prefix scomp tcomp show False by simp
qed

text {* Now that was easy, once I figured out how to prove
the silly thing.  New proverb:

Automatic theorem proving is mighty fine, but don't
forget how to develop a proof strategy.  You'll need
one when the automation fails.

Exercise: figure out how much shorter that proof
could have been if I'd had the strategy clearly
worked out ahead of time. *}

text {* Well, I've popped my lemma stack all the
way back to the main theorem on my updatable tries! *}

theorem "!t. lookup (u2 t as vo) bs =
     (if as = bs then vo else lookup t bs)" (is "?P as bs")
proof (cases "as = bs")

oops

text {* Stopping for the evening.  Need to do laundry.

\begin{verbatim}
   case True [simplified]
   show "?P as bs" sorry
   proof (simp only: split: split_if, simp only: prems, induct bs)
      case Nil

apply, induct as)
proof (rule pair_induct [of ?P])
   fix s show "?P [] s" by force
next
   fix s show "?P s []" -- {* Note the operands are not used symmetrically! 
     by (cases s, auto simp: u2_def)
next
   fix a b ss tt assume "(a::'a) ~= (b::'a)"
   show "?P (a#ss) (b#tt)"
   apply (auto simp: u2_def lookup_rdef)
apply (simp only: prems lookup_rdef u2_def)
apply clarify


apply (intro)
apply (case_tac "as = bs", split split_if, safe)
-- {* This has isolated the case lookup (update t as v) as = v, which
we attack by induction on the key and case splitting on the value option.
For the case where the result is none we need another recursion.  The
Some case is inconsistent with a None result from assoc, which is enough
to finish that branch of the recursion.  
apply (induct_tac as, simp)
apply (case_tac vo)
prefer 3
apply (split split_if)
apply (simp)
apply (simp only: u2_rdef)
apply (simp only: Let_def)
apply (simp only: lookup_rdef)
apply (clarify)
apply (case_tac t)
apply (force)
apply (simp add: Let_def)
apply (split split_if)
apply (clarify)
back
back
back
back
back
back
apply (induct_tac as)
   -- {* At this point there are three goals, the
first corresponding to as=bs=[], which the simplifier solves. 
apply (simp)
-- {* The second is the case as=bs ~= []. 
apply (split split_if, safe)
-- {* that mouthful makes if in the conclustion go away, 
apply (rule assoc_rep)
apply (induct_tac as, simp)
-- {* The basis case is disposed of by simp.  Now it gets interesting!
simp on the recursive case returns a goal the size of Dallas. The
key is to exploit a lemma about list inequality: 
apply (subgoal_tac "")
apply (simp only: lookup_rdef)
apply (force)
apply (insert assoc_rep, simp)

apply (force)
done
\end{verbatim}
*}
end
