theory HOL_Specific
  imports
    Main
    "HOL-Library.Old_Datatype"
    "HOL-Library.Old_Recdef"
    "HOL-Library.Dlist"
    "HOL-Library.FSet"
    Base
begin
chapter ‹Higher-Order Logic›
text ‹
  Isabelle/HOL is based on Higher-Order Logic, a polymorphic version of
  Church's Simple Theory of Types. HOL can be best understood as a
  simply-typed version of classical set theory. The logic was first
  implemented in Gordon's HOL system \<^cite>‹"mgordon-hol"›. It extends
  Church's original logic \<^cite>‹"church40"› by explicit type variables (naive
  polymorphism) and a sound axiomatization scheme for new types based on
  subsets of existing types.
  Andrews's book \<^cite>‹andrews86› is a full description of the original
  Church-style higher-order logic, with proofs of correctness and completeness
  wrt.\ certain set-theoretic interpretations. The particular extensions of
  Gordon-style HOL are explained semantically in two chapters of the 1993 HOL
  book \<^cite>‹pitts93›.
  Experience with HOL over decades has demonstrated that higher-order logic is
  widely applicable in many areas of mathematics and computer science. In a
  sense, Higher-Order Logic is simpler than First-Order Logic, because there
  are fewer restrictions and special cases. Note that HOL is ∗‹weaker› than
  FOL with axioms for ZF set theory, which is traditionally considered the
  standard foundation of regular mathematics, but for most applications this
  does not matter. If you prefer ML to Lisp, you will probably prefer HOL to
  ZF.
  ┉ The syntax of HOL follows ‹λ›-calculus and functional programming.
  Function application is curried. To apply the function ‹f› of type ‹τ⇩1 ⇒
  τ⇩2 ⇒ τ⇩3› to the arguments ‹a› and ‹b› in HOL, you simply write ‹f a b› (as
  in ML or Haskell). There is no ``apply'' operator; the existing application
of the Pure ‹λ›-calculus is re-used. Note that in HOL ‹f (a, b)› means ``‹f›
applied to the pair ‹(a, b)›'' (which is notation for ‹Pair a b›). Tupled
arguments typically introduce extra formal effort that can be avoided by
currying functions by default. Explicit tuples are as infrequent in HOL
  formalizations as in good ML or Haskell programs.
  ┉ Isabelle/HOL has a distinct feel, compared to other object-logics like
  Isabelle/ZF. It identifies object-level types with meta-level types, taking
  advantage of the default type-inference mechanism of Isabelle/Pure. HOL
  fully identifies object-level functions with meta-level functions, with
  native abstraction and application.
  These identifications allow Isabelle to support HOL particularly nicely, but
they also mean that HOL requires some sophistication from the user. In
particular, it demands an understanding of Hindley-Milner type-inference with
type-classes, both of which are used extensively in the standard libraries
and applications.
›
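text ‹
  As a minimal sketch of the difference (using the hypothetical names
  ‹plus_c› and ‹plus_t›), the following contrasts a curried function with a
  tupled one; only the former admits partial application such as ‹plus_c 2›.
›

experiment begin

definition plus_c :: "nat ⇒ nat ⇒ nat"
  where "plus_c a b = a + b"

definition plus_t :: "nat × nat ⇒ nat"
  where "plus_t p = fst p + snd p"

lemma "plus_c 2 3 = plus_t (2, 3)"
  by (simp add: plus_c_def plus_t_def)

end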
chapter ‹Derived specification elements›
section ‹Inductive and coinductive definitions \label{sec:hol-inductive}›
text ‹
  \begin{matharray}{rcl}
    @{command_def (HOL) "inductive"} & : & ‹local_theory → local_theory› \\
    @{command_def (HOL) "inductive_set"} & : & ‹local_theory → local_theory› \\
    @{command_def (HOL) "coinductive"} & : & ‹local_theory → local_theory› \\
    @{command_def (HOL) "coinductive_set"} & : & ‹local_theory → local_theory› \\
    @{command_def "print_inductives"}‹⇧*› & : & ‹context →› \\
    @{attribute_def (HOL) mono} & : & ‹attribute› \\
  \end{matharray}
  An ∗‹inductive definition› specifies the least predicate or set ‹R› closed
  under given rules: applying a rule to elements of ‹R› yields a result within
  ‹R›. For example, a structural operational semantics is an inductive
  definition of an evaluation relation.
  Dually, a ∗‹coinductive definition› specifies the greatest predicate or set
  ‹R› that is consistent with given rules: every element of ‹R› can be seen as
  arising by applying a rule to elements of ‹R›. An important example is using
  bisimulation relations to formalise equivalence of processes and infinite
  data structures.
  Both inductive and coinductive definitions are based on the Knaster-Tarski
  fixed-point theorem for complete lattices. The collection of introduction
  rules given by the user determines a functor on subsets of set-theoretic
  relations. The required monotonicity of the recursion scheme is proven as a
  prerequisite to the fixed-point definition and the resulting consequences.
  This works by pushing inclusion through logical connectives and any other
  operator that might be wrapped around recursive occurrences of the defined
  relation: there must be a monotonicity theorem of the form ‹A ≤ B ⟹ ℳ A ≤ ℳ
  B›, for each premise ‹ℳ R t› in an introduction rule. The default rule
  declarations of Isabelle/HOL already take care of most common situations.
  \<^rail>‹
    (@@{command (HOL) inductive} | @@{command (HOL) inductive_set} |
      @@{command (HOL) coinductive} | @@{command (HOL) coinductive_set})
      @{syntax vars} @{syntax for_fixes} ⏎
      (@'where' @{syntax multi_specs})? (@'monos' @{syntax thms})?
    ;
    @@{command print_inductives} ('!'?)
    ;
    @@{attribute (HOL) mono} (() | 'add' | 'del')
  ›
  ➧ @{command (HOL) "inductive"} and @{command (HOL) "coinductive"} define
  (co)inductive predicates from the introduction rules.
  The propositions given as ‹clauses› in the @{keyword "where"} part are
  either rules of the usual ‹⋀/⟹› format (with arbitrary nesting), or
  equalities using ‹≡›. The latter specifies extra-logical abbreviations in
  the sense of @{command_ref abbreviation}. Introducing abstract syntax
  simultaneously with the actual introduction rules is occasionally useful for
  complex specifications.
  The optional @{keyword "for"} part contains a list of parameters of the
  (co)inductive predicates that remain fixed throughout the definition, in
  contrast to arguments of the relation that may vary in each occurrence
  within the given ‹clauses›.
  The optional @{keyword "monos"} declaration contains additional
  ∗‹monotonicity theorems›, which are required for each operator applied to a
  recursive set in the introduction rules.
  ➧ @{command (HOL) "inductive_set"} and @{command (HOL) "coinductive_set"}
  are wrappers for the previous commands for native HOL predicates. This
  allows (co)inductive sets to be defined, where multiple arguments are
  simulated via tuples.
  ➧ @{command "print_inductives"} prints (co)inductive definitions and
  monotonicity rules; the ``‹!›'' option indicates extra verbosity.
  ➧ @{attribute (HOL) mono} declares monotonicity rules in the context. These
  rules are involved in the automated monotonicity proof of the above inductive
  and coinductive definitions.
›
subsection ‹Derived rules›
text ‹
  A (co)inductive definition of ‹R› provides the following main theorems:
  ➧ ‹R.intros› is the list of introduction rules as proven theorems, for the
  recursive predicates (or sets). The rules are also available individually,
  using the names given them in the theory file;
  ➧ ‹R.cases› is the case analysis (or elimination) rule;
  ➧ ‹R.induct› or ‹R.coinduct› is the (co)induction rule;
  ➧ ‹R.simps› is the equation unrolling the fixpoint of the predicate one
  step.
  When several predicates ‹R⇩1, …, R⇩n› are defined simultaneously, the list
  of introduction rules is called ‹R⇩1_…_R⇩n.intros›, the case analysis rules
  are called ‹R⇩1.cases, …, R⇩n.cases›, and the list of mutual induction rules
  is called ‹R⇩1_…_R⇩n.inducts›.
›
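text ‹
  For example, a hypothetical predicate ‹ev› for even numbers provides
  ‹ev.intros›, ‹ev.cases›, and ‹ev.induct›; a brief sketch of their use:
›

experiment begin

inductive ev :: "nat ⇒ bool"
where
  ev0: "ev 0"
| evSS: "ev n ⟹ ev (Suc (Suc n))"

lemma "ev (Suc (Suc 0))"
  by (intro ev.intros)

lemma "ev (Suc (Suc n)) ⟹ ev n"
  by (erule ev.cases) auto

end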
subsection ‹Monotonicity theorems›
text ‹
  The context maintains a default set of theorems that are used in
  monotonicity proofs. New rules can be declared via the @{attribute (HOL)
  mono} attribute. See the main Isabelle/HOL sources for some examples. The
  general format of such monotonicity theorems is as follows:
  ▪ Theorems of the form ‹A ≤ B ⟹ ℳ A ≤ ℳ B›, for proving monotonicity of
  inductive definitions whose introduction rules have premises involving terms
  such as ‹ℳ R t›.
  ▪ Monotonicity theorems for logical operators, which are of the general form
  ‹(… ⟶ …) ⟹ … (… ⟶ …) ⟹ … ⟶ …›. For example, in the case of the operator ‹∨›,
  the corresponding theorem is
  \[
  \infer{‹P⇩1 ∨ P⇩2 ⟶ Q⇩1 ∨ Q⇩2›}{‹P⇩1 ⟶ Q⇩1› & ‹P⇩2 ⟶ Q⇩2›}
  \]
  ▪ De Morgan style equations for reasoning about the ``polarity'' of
  expressions, e.g.
  \[
  \<^prop>‹¬ ¬ P ⟷ P› \qquad\qquad
  \<^prop>‹¬ (P ∧ Q) ⟷ ¬ P ∨ ¬ Q›
  \]
  ▪ Equations for reducing complex operators to more primitive ones whose
  monotonicity can easily be proved, e.g.
  \[
  \<^prop>‹(P ⟶ Q) ⟷ ¬ P ∨ Q› \qquad\qquad
  \<^prop>‹Ball A P ≡ ∀x. x ∈ A ⟶ P x›
  \]
›
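text ‹
  For example, a monotonicity rule of the logical-operator format may be
  proved and declared as follows (a sketch only: Isabelle/HOL already
  declares the common rules by default):
›

experiment begin

lemma disj_mono_example [mono]:
  "(P⇩1 ⟶ Q⇩1) ⟹ (P⇩2 ⟶ Q⇩2) ⟹ P⇩1 ∨ P⇩2 ⟶ Q⇩1 ∨ Q⇩2"
  by blast

end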
subsubsection ‹Examples›
text ‹The finite powerset operator can be defined inductively like this:›
experiment begin
inductive_set Fin :: "'a set ⇒ 'a set set" for A :: "'a set"
where
  empty: "{} ∈ Fin A"
| insert: "a ∈ A ⟹ B ∈ Fin A ⟹ insert a B ∈ Fin A"
text ‹The accessible part of a relation is defined as follows:›
inductive acc :: "('a ⇒ 'a ⇒ bool) ⇒ 'a ⇒ bool"
  for r :: "'a ⇒ 'a ⇒ bool"  (infix ‹≺› 50)
where acc: "(⋀y. y ≺ x ⟹ acc r y) ⟹ acc r x"
end
text ‹
  Common logical connectives can be easily characterized as non-recursive
  inductive definitions with parameters, but without arguments.
›
experiment begin
inductive AND for A B :: bool
where "A ⟹ B ⟹ AND A B"
inductive OR for A B :: bool
where "A ⟹ OR A B"
  | "B ⟹ OR A B"
inductive EXISTS for B :: "'a ⇒ bool"
where "B a ⟹ EXISTS B"
end
text ‹
  Here the ‹cases› or ‹induct› rules produced by the @{command inductive}
  package coincide with the expected elimination rules for Natural Deduction.
  Already the original article by Gerhard Gentzen \<^cite>‹"Gentzen:1935"›
  hints that each connective can be characterized by its introduction rules,
  and that the corresponding elimination rule can be constructed
  systematically.
›
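text ‹
  For instance, the ‹cases› rule generated for ‹OR› is literally the rule of
  disjunction elimination, as the following sketch demonstrates (repeating
  the definition inside a fresh ‹experiment›):
›

experiment begin

inductive OR for A B :: bool
where "A ⟹ OR A B"
  | "B ⟹ OR A B"

lemma
  assumes "OR A B" and "A ⟹ C" and "B ⟹ C"
  shows C
  using assms by (rule OR.cases)

end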
section ‹Recursive functions \label{sec:recursion}›
text ‹
  \begin{matharray}{rcl}
    @{command_def (HOL) "primrec"} & : & ‹local_theory → local_theory› \\
    @{command_def (HOL) "fun"} & : & ‹local_theory → local_theory› \\
    @{command_def (HOL) "function"} & : & ‹local_theory → proof(prove)› \\
    @{command_def (HOL) "termination"} & : & ‹local_theory → proof(prove)› \\
    @{command_def (HOL) "fun_cases"} & : & ‹local_theory → local_theory› \\
  \end{matharray}
  \<^rail>‹
    @@{command (HOL) primrec} @{syntax specification}
    ;
    (@@{command (HOL) fun} | @@{command (HOL) function}) opts? @{syntax specification}
    ;
    opts: '(' (('sequential' | 'domintros') + ',') ')'
    ;
    @@{command (HOL) termination} @{syntax term}?
    ;
    @@{command (HOL) fun_cases} (@{syntax thmdecl}? @{syntax prop} + @'and')
  ›
  ➧ @{command (HOL) "primrec"} defines primitive recursive functions over
  datatypes (see also @{command_ref (HOL) datatype}). The given ‹equations›
  specify reduction rules that are produced by instantiating the generic
  combinator for primitive recursion that is available for each datatype.
  Each equation needs to be of the form:
  @{text [display] "f x⇩1 … x⇩m (C y⇩1 … y⇩k) z⇩1 … z⇩n = rhs"}
  such that ‹C› is a datatype constructor, ‹rhs› contains only the free
  variables on the left-hand side (or from the context), and all recursive
  occurrences of ‹f› in ‹rhs› are of the form ‹f … y⇩i …› for some ‹i›. At
  most one reduction rule for each constructor can be given. The order does
  not matter. For missing constructors, the function is defined to return a
  default value, but this equation is made difficult to access for users.
  The reduction rules are declared as @{attribute simp} by default, which
  enables standard proof methods like @{method simp} and @{method auto} to
  normalize expressions of ‹f› applied to datatype constructions, by
  simulating symbolic computation via rewriting.
  ➧ @{command (HOL) "function"} defines functions by general wellfounded
  recursion. A detailed description with examples can be found in \<^cite>‹"isabelle-function"›. The function is specified by a set of (possibly
  conditional) recursive equations with arbitrary pattern matching. The
  command generates proof obligations for the completeness and the
  compatibility of patterns.
  The defined function is considered partial, and the resulting simplification
  rules (named ‹f.psimps›) and induction rule (named ‹f.pinduct›) are guarded
  by a generated domain predicate ‹f_dom›. The @{command (HOL) "termination"}
  command can then be used to establish that the function is total.
  ➧ @{command (HOL) "fun"} is a shorthand notation for ``@{command (HOL)
  "function"}~‹(sequential)›'', followed by automated proof attempts regarding
  pattern matching and termination. See \<^cite>‹"isabelle-function"› for
  further details.
  ➧ @{command (HOL) "termination"}~‹f› commences a termination proof for the
  previously defined function ‹f›. If this is omitted, the command refers to
  the most recent function definition. After the proof is closed, the
  recursive equations and the induction principle are established.
  ➧ @{command (HOL) "fun_cases"} generates specialized elimination rules for
  function equations. It expects one or more function equations and produces
  rules that eliminate the given equalities, following the cases given in the
  function definition.
  Recursive definitions introduced by the @{command (HOL) "function"} command
  accommodate reasoning by induction (cf.\ @{method induct}): rule ‹f.induct›
  refers to a specific induction rule, with parameters named according to the
  user-specified equations. Cases are numbered starting from 1. For @{command
  (HOL) "primrec"}, the induction principle coincides with structural
  recursion on the datatype where the recursion is carried out.
  The equations provided by these packages may be referred to later as the
  theorem list ‹f.simps›, where ‹f› is the (collective) name of the functions
  defined.
  Individual equations may be named explicitly as well.
  The @{command (HOL) "function"} command accepts the following options.
  ➧ ‹sequential› enables a preprocessor which disambiguates overlapping
  patterns by making them mutually disjoint. Earlier equations take precedence
  over later ones. This allows the specification to be given in a format very
  similar to functional programming. Note that the resulting simplification
  and induction rules correspond to the transformed specification, not the one
  given originally. This usually means that each equation given by the user
  may result in several theorems. Also note that this automatic transformation
  only works for ML-style datatype patterns.
  ➧ ‹domintros› enables the automated generation of introduction rules for the
  domain predicate. While mostly not needed, they can be helpful in some
  proofs about partial functions.
›
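text ‹
  The following sketch, adapted from the examples in \<^cite>‹"isabelle-function"›
  (the name ‹sum_up› is ours), illustrates @{command (HOL) "function"} and
  @{command (HOL) "termination"}: pattern completeness and compatibility are
  discharged by ‹pat_completeness› and ‹auto›, and totality is established by
  exhibiting a suitable measure on the arguments.
›

experiment begin

function sum_up :: "nat ⇒ nat ⇒ nat"
where "sum_up i N = (if i > N then 0 else i + sum_up (Suc i) N)"
  by pat_completeness auto

termination
  by (relation "measure (λ(i, N). N + 1 - i)") auto

end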
subsubsection ‹Example: evaluation of expressions›
text ‹
  Subsequently, we define mutual datatypes for arithmetic and boolean
  expressions, and use @{command primrec} for evaluation functions that follow
  the same recursive structure.
›
experiment begin