1 Logic


1.1 Reading

Discrete Mathematics Using a Computer (O’Donnell, Hall, Page) Chapters 6 and 7

The Haskell Road to Logic, Math and Programming (Doets, van Eijck) Chapters 2 and 3

Discrete Mathematics with Applications - Metric Edition (Epp) Chapters 2 and 3

https://www.cs.carleton.edu/faculty/dln/book/ch03_logic_2021_September_08.pdf

https://www.cs.yale.edu/homes/aspnes/classes/202/notes.pdf Chapter 2

https://runestone.academy/ns/books/published/dmoi-4/ch_logic.html
https://runestone.academy/ns/books/published/dmoi-4/sec_logic-statements.html
https://runestone.academy/ns/books/published/dmoi-4/sec_logic-implications.html
https://runestone.academy/ns/books/published/dmoi-4/sec_logic-rules.html

https://runestone.academy/ns/books/published/ads/chapter3.html
https://runestone.academy/ns/books/published/ads/s-propositions-logic-operators.html
https://runestone.academy/ns/books/published/ads/s-truth-tables.html
https://runestone.academy/ns/books/published/ads/s-equivalence-implication.html
https://runestone.academy/ns/books/published/ads/s-logic-laws.html
https://runestone.academy/ns/books/published/ads/s-math-systems.html
https://runestone.academy/ns/books/published/ads/s-propositions-over-universe.html
https://runestone.academy/ns/books/published/ads/s-induction.html
https://runestone.academy/ns/books/published/ads/s-quantifiers.html
https://runestone.academy/ns/books/published/ads/s-proof-review.html

https://runestone.academy/ns/books/published/ads/chapter_13.html
https://runestone.academy/ns/books/published/ads/s-posets-revisited.html
https://runestone.academy/ns/books/published/ads/s-lattices.html
https://runestone.academy/ns/books/published/ads/s-boolean-algebras.html
https://runestone.academy/ns/books/published/ads/s-atoms-of-a-boolean-algebra.html
https://runestone.academy/ns/books/published/ads/s-finite-boolean-algebras-ntuples.html
https://runestone.academy/ns/books/published/ads/s-boolean-expressions.html
https://runestone.academy/ns/books/published/ads/s-logic-design.html

https://runestone.academy/ns/books/published/DiscreteMathText/chapter2.html
https://runestone.academy/ns/books/published/DiscreteMathText/logical_form2-1.html
https://runestone.academy/ns/books/published/DiscreteMathText/conditional2-2.html
https://runestone.academy/ns/books/published/DiscreteMathText/arguments2-3.html

https://runestone.academy/ns/books/published/DiscreteMathText/chapter3.html
https://runestone.academy/ns/books/published/DiscreteMathText/quantifiersI3-1.html
https://runestone.academy/ns/books/published/DiscreteMathText/negquant3-2.html
https://runestone.academy/ns/books/published/DiscreteMathText/multiplequant3-3.html
https://runestone.academy/ns/books/published/DiscreteMathText/argumentsquant3-4.html

https://en.wikipedia.org/wiki/Mathematical_logic
https://en.wikipedia.org/wiki/Propositional_calculus
https://en.wikipedia.org/wiki/First-order_logic
https://en.wikipedia.org/wiki/Truth_function
https://en.wikipedia.org/wiki/Boolean_function
https://en.wikipedia.org/wiki/Truth_table
https://en.wikipedia.org/wiki/Rule_of_replacement
https://en.wikipedia.org/wiki/Rule_of_inference
https://en.wikipedia.org/wiki/List_of_rules_of_inference

1.2 Why logic?

Logic provides a powerful tool for reasoning correctly about mathematics, algorithms, and computers.
You need to understand its basic concepts in order to study many of the more advanced subjects in computing.

In software engineering,
it is good practice to specify what a system should do before starting to code it.
Logic is frequently used for software specifications.

In safety-critical applications,
it is essential to establish that a program is correct.
Conventional debugging isn’t enough.
What we want is a proof of correctness.
Formal logic is the foundation of program correctness proofs.

In information retrieval,
including Web search engines,
logical propositions are used to specify the properties that should (or should not) be present
in a piece of information in order for it to be considered relevant.

In artificial intelligence,
formal logic is used to simulate intelligent thought processes.
People don’t do their ordinary reasoning using mathematical logic,
but logic is a convenient tool for implementing certain forms of reasoning.

In digital circuit design and computer architecture,
logic is the language used to describe the signal values that are produced by components.
A common problem is that a first-draft circuit design written by an engineer is too slow,
so it has to be transformed into an equivalent circuit that is more efficient.
This process is often quite tricky, and logic provides the framework for doing it.

In database systems,
complex queries are built up from simpler components.
It’s essential to have a clear, precise way to express such queries,
so that users of the database can be sure they’re getting the right information.
Logic is the key to expressing such queries.

In compiler construction,
the typechecking phase uses logic.
It must determine whether the program being translated uses any variables or functions inconsistently.
It turns out that the method for doing this is similar to the method for performing logical inference.
Algorithms designed originally to perform calculations in mathematical logic are now embedded in modern compilers.

In programming language design,
a commonly used method for specifying the meaning of a computer program is the lambda calculus,
which is actually a formal logical system,
originally invented by a mathematician for purely theoretical research.

In computability theory,
logic is used both to specify abstract machine models and to reason about their capabilities.
There has been an extremely close interaction between mathematical logicians and theoretical computer scientists,
in developing a variety of machine models.

In scientific ontologies,
logic is used to construct knowledge bases,
and to perform reasoning upon them.

1.3 Theory

See basic table here:
https://en.wikipedia.org/wiki/Truth_table#Truth_table_for_most_commonly_used_logical_operators

True / T / 1
False / F / 0

Logic/logic-truth-table.jpg

Logical And (∧)
The logical and operator corresponds to the English conjunction ‘and’:
it is used to claim that two statements are both true.
Sometimes it is called ‘logical conjunction’, and its mathematical symbol is ∧.

Inclusive Logical Or (∨)
The logical or operator corresponds to the most common usage of the English word or.
It takes two arguments and returns True if either argument (or both) is True;
otherwise it returns False.
Other commonly used names for this operation are logical disjunction and inclusive or,
and the symbol for the operation is ∨.

Exclusive Logical Or (⊕)
In English, we often think of ‘A or B’ as meaning:
‘either A or B—one or the other, but not both’,
excluding the possibility that A is true at the same time as B.
This meaning of the word ‘or’ is called exclusive or,
because if one of the alternatives is true,
then this excludes the possibility that the other is also true.

Logical Not (¬)
The English word ‘not’ is used to assert that a statement is false.
It corresponds to the logical not (also called logical negation) operator,
whose symbol is ¬.
It takes one argument and returns the opposite truth value.

Logical Implication (→)
The logical implication operator → corresponds to conditional phrases in English.
For example, an English sentence in the form ‘If A is true, then B is true.’
would be translated into the proposition A → B.
Logical implication mirrors the if … then … construct in programming.
Logical implication says nothing about cause-and-effect relationships.
It may seem strange to you to define False → False to be True.

English sentences contain all sorts of connotations,
but logical propositions mean nothing more than what the defining truth tables say.
The truth table for → is just a definition.
It is meaningless to argue about whether the definition given above is ‘correct’;
definitions aren’t either right or wrong, they are just definitions.

The pertinent question to ask is:
Why is it convenient to define → so that False → False is true?
Logicians have been refining their definitions for hundreds of years,
and based on their experience, they feel this is the best way to define →.

However, here’s a more computer-oriented way to think about it:
consider the programming language statement if A then B.
If A turns out to be true,
then the statement B will be executed,
but if A is false,
then the machine won’t even look at B.
It makes no difference at all what B says.
The definition of → captures this indifference to B in the cases where A is false.
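This reading is easy to sketch in Haskell (the name `implies` is a throwaway helper of ours, not the codebase's own operator): when the antecedent is False, the consequent is never even evaluated.

```haskell
-- Hypothetical helper: implication rendered as a conditional.
-- If a is False, b is never inspected, mirroring "if A then B".
implies :: Bool -> Bool -> Bool
implies a b = if a then b else True

-- Because b is never forced when a is False, even undefined is fine:
-- >>> implies False undefined
-- True
```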

Logical Equivalence (↔︎)
The logical equivalence operator ↔︎ is used to claim that two propositions have
the same value—either both are true, or both are false.
The statement A ↔︎ B could also be expressed by writing (A → B) ∧ (B → A).
Sometimes ↔︎ is considered to be an abbreviation for conjunction of two ordinary implications,
rather than a fundamental logical operator.
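The claim that A ↔︎ B agrees with (A → B) ∧ (B → A) is easy to check exhaustively; `iff` and `implies` below are our own throwaway helpers, not part of the codebase.

```haskell
-- Throwaway helpers for the check:
implies, iff :: Bool -> Bool -> Bool
implies a b = not a || b
iff a b = a == b

-- A <-> B agrees with (A -> B) /\ (B -> A) on all four inputs:
agree :: Bool
agree =
  and
    [ iff a b == (implies a b && implies b a)
    | a <- [False, True], b <- [False, True]
    ]

-- >>> agree
-- True
```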

There are two distinct mathematical ways to claim propositions A and B have the same value.
One way is to write the equation A = B,
and the other way is to claim that the proposition A ↔︎ B has the value True.
Logical equivalence is similar to but not the same as ordinary equality.
The difference between them is important, and we’ll say more about it later.

1.3.1 Haskell Boolean basics

¬x is written in Haskell as not x;

a ∧ b is written either as a && b,
or in this codebase, also as a /\ b;

a ∨ b is written either as a || b,
or in this codebase, also as a \/ b;

a → b, in this codebase, is written as a ==> b;

a ↔︎ b, in this codebase, is written as a <=> b.

This is the way that Booleans and Boolean functions are defined in Prelude.
I stole this directly from the Haskell source code:
Logic/DataBool.hs

-- This is copied from the actual implementation in Haskell.
-- So I hide the actual implementation from this namespace:
import qualified Prelude

-- This is all a boolean is in Haskell!
-- It's amazingly simple.
data Bool = False | True
  deriving (Prelude.Show)

-- | Boolean "and", lazy in the second argument
-- >>> True && True
-- True

-- | Boolean "and", lazy in the second argument
-- >>> True && False
-- False

-- | Boolean "and", lazy in the second argument
-- >>> False && True
-- False

-- | Boolean "and", lazy in the second argument
-- >>> False && False
-- False
(&&) :: Bool -> Bool -> Bool
True && x = x
False && _ = False

-- | Boolean "or", lazy in the second argument
-- >>> True || True
-- True

-- | Boolean "or", lazy in the second argument
-- >>> True || False
-- True

-- | Boolean "or", lazy in the second argument
-- >>> False || True
-- True

-- | Boolean "or", lazy in the second argument
-- >>> False || False
-- False
(||) :: Bool -> Bool -> Bool
True || _ = True
False || x = x

-- | Boolean "not"
-- >>> not True
-- False

-- | Boolean "not"
-- >>> not False
-- True
not :: Bool -> Bool
not True = False
not False = True

We define some extra logic operators for the sake of this class:
Code-DMUC/Stdm06LogicOperators.hs

-- # Software Tools for Discrete Mathematics
module Stdm06LogicOperators where

-- ∨ Disjunction / or defined for symbolic convenience.
infix 3 \/

(\/) = (||)

-- ∧ Conjunction / and defined for symbolic convenience.
infix 4 /\

(/\) = (&&)

-- ↔ Equivalence / If and only if
infix 1 <=>

-- |
-- >>> True <=> True
-- True

-- |
-- >>> True <=> False
-- False

-- |
-- >>> False <=> False
-- True

-- |
-- >>> False <=> True
-- False
(<=>) :: Bool -> Bool -> Bool
(<=>) a b = a == b

-- → Implication / Therefore / if
infix 2 ==>

-- |
-- >>> False ==> False
-- True

-- |
-- >>> False ==> True
-- True

-- |
-- >>> True ==> False
-- False

-- |
-- >>> True ==> True
-- True
(==>) :: Bool -> Bool -> Bool
(==>) True False = False
(==>) a b = True

-- Exclusive or
infixr 2 <+>

-- |
-- >>> False <+> False
-- False

-- |
-- >>> False <+> True
-- True

-- |
-- >>> True <+> False
-- True

-- |
-- >>> True <+> True
-- False
(<+>) :: Bool -> Bool -> Bool
x <+> y = x /= y

1.3.2 Substitutions and inference

To preview where we are headed:
just as in elementary arithmetic,
substitutions can be performed on logical statements:

Natural inference:
Logic/logic-inference.png

Boolean algebra:
Logic/logic-laws.png

This will enable complex reasoning and proof-checking systems.
We will build toward a full understanding of each entry in the table above.

1.3.3 Types of formal logic

There are several different kinds of formal logic,
and for now we will consider just the simplest one, called propositional logic.
After looking at the language of propositional logic,
we will detail three different mathematical systems for reasoning formally about propositions:

  1. Truth tables,
  2. Natural deduction, and
  3. Boolean algebra.

1.3.4 1. Truth tables

Truth tables define the meanings of the logical operators,
and they can be used to calculate the values of expressions
and to prove that two propositions are logically equivalent.
As truth tables directly express the underlying meaning of propositions,
they are a semantic technique.
Truth tables are easy to understand for small problems,
but they become impossibly large for most realistic problems.

1.3.5 2. Natural deduction

Natural deduction is a formalisation of the basic principles of reasoning.
It provides a set of inference rules
that specify exactly what new facts you are allowed to deduce from some given facts.
There is no notion of the ‘value’ of propositions;
everything in this system is encapsulated in the inference rules.
Since the rules are based on the forms of propositions,
and we don’t work with truth values,
inference is a purely syntactic approach to logic.
Recent techniques in programming language theory employ more advanced logical systems
that are related to natural deduction.

1.3.6 3. Boolean algebra

Boolean algebra is another syntactic formalisation of logic,
using a set of equations, the laws of Boolean algebra,
to specify that certain propositions are equal to each other.
Boolean algebra is an axiomatic approach,
similar to elementary algebra and geometry.
It provides an effective set of laws for manipulating propositions,
and is an essential tool for digital circuit design.

1.3.7 Propositional logic

A proposition is just a symbolic variable
whose value must be either True or False,
and which stands for an English statement
that could be either true or false.

1.3.8 Well-Formed Formulae (WFF)

Before spending a lot of effort arguing about a statement somebody claims to be true,
it’s prudent to make sure the statement actually makes sense,
and to check that all the participants in the debate agree about what it means.

It might be fun to discuss:
‘International trade creates jobs’,

but there’s no point wasting time on:
‘Quickly raspberries running green’.
That one isn’t either true or false—it’s just gibberish.

A big advantage of using logic rather than English
is that we define propositional logic formally, precisely, and unambiguously.
This allows us to check that a proposition makes sense,
before expending any further work deciding whether it’s true.

Because there is no formal definition of natural languages,
it's fundamentally impossible to do that for English.
It isn't even possible in all cases
to decide definitively whether a statement in a natural language is grammatical:
English has no formal grammar.
(English-language ontologies do have formal definitions,
but those are a more advanced topic in logic!)

A proposition that ‘makes sense’ is called a well-formed formula,
which is often abbreviated WFF (pronounced ‘woof’).
Every WFF has a well-defined meaning,
and you can work out whether it’s true or false,
given the values of the propositional variables.

For example, (A → (B ∧ (¬A))) is a well-formed formula,
so we can calculate what it means,
and it's worth spending a little time deciding
whether there are any possible values of A and B for which it's true.
(It's True exactly when A is False.)
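The parenthetical claim can be confirmed by brute force; `rows` and `implies` are local helpers invented for this check, not part of the codebase.

```haskell
implies :: Bool -> Bool -> Bool
implies a b = not a || b

-- All (A, B, value) rows of the truth table for (A -> (B /\ (not A))):
rows :: [(Bool, Bool, Bool)]
rows =
  [ (a, b, a `implies` (b && not a))
  | a <- [False, True], b <- [False, True]
  ]

-- >>> [v | (_, _, v) <- rows]
-- [True,True,False,False]
```

The proposition is True on exactly the two rows where A is False.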

On the other hand, what about ∨ A B ¬C?
This isn’t a WFF.
It’s just nonsense, so it has no truth value.

A term in propositional logic is sometimes called a formula,
and a term that is constructed correctly, following all the syntax rules,
is called a well-formed formula, abbreviated WFF.

We will never consider the meaning of ill-formed formulas;
these are simply considered to be meaningless.

For example, the formula (P ∧ (¬P)) is well-formed;
this means that we can go on to consider its meaning
(this WFF happens to be False).

However, the formula P ∨ → Q violates the syntax rules of the language,
so we refuse to be drawn into debates about whether it is True or False.

The situation is similar to that of programming languages:
we can talk about the behaviour of a program that is syntactically correct
(and the behaviour might be correct or incorrect),
but we never talk about the behaviour of a program with syntax errors.
The compiler would refuse to translate the program, so there is no run-time behaviour.

1.3.9 Inductive formalization of the WFF syntax

The set of well-formed formulas (WFFs) can be defined,
by saying precisely how you can construct them.

The constants False and True are WFFs.
The only examples are: False, True.

Any propositional variable is a WFF.
Examples: P, Q, R.

If a is a WFF, then (¬a) is a WFF.
Examples: (¬P), (¬Q), (¬(¬P)).

If a and b are WFFs, then (a ∧ b) is a WFF.
Examples: (P ∧ Q), ((¬P) ∧ (P ∧ Q)).

If a and b are WFFs, then (a ∨ b) is a WFF.
Examples: (P ∨ Q), ((P ∧ Q) ∨ (¬Q)).

If a and b are WFFs, then (a → b) is a WFF.
Examples: (P → Q), ((P ∧ Q) → (P ∨ Q)).

If a and b are WFFs, then (a ↔︎ b) is a WFF.
Examples: (P ↔︎ Q), ((P ∧ Q) ↔︎ (Q ∧ P)).

Any formula that cannot be constructed using these rules is not a WFF.
Examples: (P ∨ ∧Q), P → ¬.

The rules may be used repeatedly to build nested formulas.
This process is recursive!

For example, here is a recursive demonstration
that (P → (Q ∧ R)) is a WFF:
1. P, Q, and R are propositional variables, so they are all WFFs.
2. Because Q and R are WFFs, (Q ∧ R) is also a WFF.
3. Because P and (Q ∧ R) are both WFFs, so is (P → (Q ∧ R)).
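The inductive rules translate directly into a Haskell datatype, with one constructor per formation rule; the constructor names below are our own choice, not from the codebase.

```haskell
-- One constructor per formation rule:
data WFF
  = Lit Bool      -- the constants False and True
  | Var String    -- propositional variables P, Q, R, ...
  | Not WFF       -- (¬a)
  | And WFF WFF   -- (a ∧ b)
  | Or WFF WFF    -- (a ∨ b)
  | Imp WFF WFF   -- (a → b)
  | Iff WFF WFF   -- (a ↔ b)
  deriving (Show)

-- (P → (Q ∧ R)), built exactly as in the three steps above:
example :: WFF
example = Imp (Var "P") (And (Var "Q") (Var "R"))
```

Ill-formed strings such as ∨ A B ¬C simply cannot be expressed this way: Or cannot be applied without two complete sub-WFFs.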

1.3.10 Precedence of Logical Operators

Well-formed formulas are fully parenthesised,
so there is no ambiguity in their interpretation.
Often, however, it’s more convenient to omit some of the parentheses,
for the sake of readability.
For example, we may prefer to write:
P → ¬Q ∧ R rather than
(P → ((¬Q) ∧ R)).

The syntax rules given below define what an expression means,
when some of the parentheses are omitted.
These conventions are analogous to those of elementary algebra,
as well as most programming languages,
where there is a precedence rule that says:
a + b × c means a + (b × c) rather than (a + b) × c.
The syntax rules for propositional logic are straightforward:

  1. The most tightly binding operator is ¬.
    For example, ¬P ∧ Q means (¬P ) ∧ Q.
    Furthermore, ¬¬P means ¬(¬P ).

  2. The second highest precedence is the ∧ operator.
    In expressions combining ∧ and ∨, the ∧ operations come first.
    For example, P ∨ Q ∧ R means P ∨ (Q ∧ R).
    If there are several ∧ operations in a sequence,
    then they are performed left to right;
    for example, P ∧ Q ∧ R ∧ S means (((P ∧ Q) ∧ R) ∧ S).
    This property is described by saying ‘∧ associates to the left.’

  3. The ∨ operator has the next level of precedence,
    and it associates to the left.
    For example, P ∧ Q ∨ R ∨ U ∧ V means (((P ∧ Q) ∨ R) ∨ (U ∧ V)).
    This example would be more readable,
    if a few of the parentheses are removed:
    (P ∧ Q) ∨ R ∨ (U ∧ V ).

  4. The → operator has the next lower level of precedence.
    For example, P ∧ Q → P ∨ Q means (P ∧ Q) → (P ∨ Q).
    The → operator associates to the right;
    thus P → Q → R → S means (P → (Q → (R → S))).

  5. The ↔︎ operator has the lowest level of precedence,
    and it associates to the right,
    but we recommend that you use parentheses,
    rather than relying on the associativity.

In general it’s a good idea to omit parentheses that are clearly redundant,
but to include parentheses that improve readability.
Too much reliance on these syntax conventions leads to inscrutable expressions.
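These precedence conventions can be encoded as Haskell fixity declarations. The Stdm06LogicOperators module shown earlier declares its operators non-associative; the sketch below uses hypothetical fixities of our own that also capture the associativity in rules 1-5 (a higher number binds more tightly).

```haskell
-- Hypothetical fixities mirroring the precedence table:
-- /\ binds tighter than \/, which binds tighter than ==>, then <=>.
infixl 4 /\
infixl 3 \/
infixr 2 ==>
infixr 1 <=>

(/\), (\/), (==>), (<=>) :: Bool -> Bool -> Bool
(/\) = (&&)
(\/) = (||)
a ==> b = not a || b
a <=> b = a == b

-- With these fixities, p /\ q \/ r parses as (p /\ q) \/ r,
-- and p ==> q ==> r parses as p ==> (q ==> r):
check :: Bool
check =
  and
    [ (p /\ q \/ r) == ((p /\ q) \/ r)
        && (p ==> q ==> r) == (p ==> (q ==> r))
    | p <- bs, q <- bs, r <- bs
    ]
  where
    bs = [False, True]
```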

1.3.11 Object Language and Meta-Language

Propositional logic is a precisely defined language of well-formed formulas.
We need another, richer language to use
when we are reasoning about well-formed formulas.
This means we will be working simultaneously with two distinct languages
that must be kept separate.
The WFFs of propositional logic will be called the object language,
because the objects we will be talking about are sentences in propositional logic.
The algebraic language of equations, substitutions, and justifications resembles arithmetic;
we will call it the meta-language,
because it ‘surrounds’ the object language,
and can be used to talk about propositions.

This distinction between object language and meta-language is common.
For example, there are many programming languages,
and we need both to write programs in them,
and also to make statements about them.
We have already seen all the operators in the propositional logic object language;
these are ¬, ∧, ∨, → and ↔︎.

Later we will use several operators that belong to the meta-language:
\(\vDash\) (which will be used in the section on truth tables);
\(\vdash\) (which will be used in the section on natural deduction); and
= (which will be used in the section on Boolean algebra).
After studying each of those three major systems,
we will look briefly at some of the metalogical properties of:
\(\vDash\), \(\vdash\), and = when discussing meta-logic.

1.3.12 Method 1. Truth tables

The first method of reasoning in propositional logic is the semantic method of truth tables.
Truth tables provide an easy method for reasoning about propositions
(as long as they don’t contain too many variables).
The truth table approach treats propositions as expressions to be evaluated.
All the expressions have type Bool, the constants are True and False,
and the variables A, B, … stand for unknowns that must be either True or False.

When you write out truth tables by hand,
it’s really too tedious to keep writing True and False.
We recommend abbreviating True as 1 and False as 0;
as pointed out earlier,
T and F are also common abbreviations, but are harder to read.
However, it should be stressed that 0 and 1 are just abbreviations,
for the logical constants, whose type is Bool.
Truth tables don’t contain numbers.
Only a few truth tables will appear below,
so we will forgo the abbreviations,
and stick safely with True and False.
However, when truth tables are used for reasoning about digital circuits,
it is standard practice to use 0 and 1.

We have already been using truth tables to define the logical operators,
so the format should be clear by now.
Nothing could be simpler than using a truth table to evaluate a propositional expression:
you just calculate the proposition for all possible values of the variables,
write down the results in a table, and see what happens.
Generally, it’s useful to give several of the intermediate results in the truth table,
not just the final value of the entire proposition.

For example, let’s consider the proposition ((A → B) ∧ ¬B) → ¬A and find out when it’s true.
On the left side of the table we list all the possible values of A and B,
on the right side we give the value of the full proposition,
and in the middle (separated by double vertical lines) we give the values of the subexpressions.

For example, to solve this by hand: ((A → B) ∧ ¬B) → ¬A

A       B       A → B       ¬B      (A → B) ∧ ¬B        ¬A      ((A → B) ∧ ¬B) → ¬A
0       0       1           1       1                   1       1
0       1       1           0       0                   1       1
1       0       0           1       0                   0       1
1       1       1           0       0                   0       1

Code-DMUC/logicTableExample.hs

import Stdm06LogicOperators

-- |
-- >>> aBooleanProposition True True
-- True

-- |
-- >>> aBooleanProposition True False
-- True

-- |
-- >>> aBooleanProposition False True
-- True

-- |
-- >>> aBooleanProposition False False
-- True
aBooleanProposition a b = ((a ==> b) /\ not b) ==> not a

1.3.12.1 Tautologies and Contradictions

A tautology is a proposition that is always True, regardless of the values of its variables.

A contradiction is a proposition that is always False, regardless of the values of its variables.

You can find out whether a proposition is a tautology or a contradiction (or neither),
by writing down its truth table:
If a column contains nothing but True, then the proposition is a tautology.
If there is nothing but False, then it’s a contradiction.
If there is a mixture of True and False, then the proposition is neither.

For example,
P ∨ ¬P is a tautology,
but P ∧ ¬P is a contradiction,
and the following truth table proves it:

P       ¬P      P ∨ ¬P      P ∧ ¬P
0       1       1           0
1       0       1           0

Logic/tautologyContradiction.hs

-- |
-- >>> tautology True
-- True

-- |
-- >>> tautology False
-- True
tautology p = p || not p

-- |
-- >>> contradiction True
-- False

-- |
-- >>> contradiction False
-- False
contradiction p = p && not p

1.3.12.2 Meta-language

There is a special notation for expressing statements about propositions.
As it’s used to make statements about propositions,
this notation belongs to the meta-language,
and it uses the meta-operator \(\vDash\),
which is often pronounced ‘double-turnstile’
(to distinguish it from \(\vdash\), which you’ll see later,
and which is pronounced ‘turnstile’).

The notation P1, P2, …, Pn \(\vDash\) Q means:
if all of the propositions P1, P2, …, Pn are true,
then the proposition Q is also true.

The \(\vDash\) meta-operator makes a statement about the actual meanings of the propositions;
it’s concerned with which propositions are True and which are False.
Truth tables can be used to prove statements containing \(\vDash\).
The meaning of a proposition is called its semantics.
The set of basic truth values (True and False),
along with a method for calculating the meaning of any well-formed formula,
is called a model of the logical system.

There may be any number n of propositions P1, P2, …, Pn in the list of assumptions.
If n = 1 then the statement looks like P \(\vDash\) Q.
A particularly important case is when n = 0, so there are no assumptions at all, and the statement becomes simply \(\vDash\) Q.
This means that Q is always true, regardless of the values of the propositional variables inside it;
in other words, Q is a tautology.
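The n = 0 case can be mechanized directly: ⊨ Q holds exactly when Q evaluates to True under every assignment. Below is a sketch for two-variable propositions; `models0` is a made-up name, not standard notation.

```haskell
-- |= Q for a two-variable proposition Q: True iff Q is a tautology.
models0 :: (Bool -> Bool -> Bool) -> Bool
models0 q = and [q a b | a <- [False, True], b <- [False, True]]

-- |= (A ∨ ¬A) holds:
-- >>> models0 (\a _ -> a || not a)
-- True

-- A ∧ B is not a tautology:
-- >>> models0 (\a b -> a && b)
-- False
```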

1.3.12.3 Limitations of Truth Tables

The truth table method is straightforward to use;
it just requires some brute-force calculation.
You don't need any insight into the proposition,
which may be always true, always false, or sometimes true.
This is both a strength and a weakness of the method.
It’s comforting to know that you can reliably crank out a proof, given enough time.
However, a big advantage of the other two logical systems we will study,
natural deduction and Boolean algebra,
is that they give much better insight into why a theorem is true.

The truth table method requires one line for each combination of variable values.
If there are k variables, this means there are 2^k lines in the table.
For one variable, you get 2^1 = 2 lines (for example, the truth table for the ¬ operator).
For two variables, there are 2^2 = 4 lines (like the definitions of ∨, ∧, → and ↔︎).
For three variables there are 2^3 = 8 lines, which is about as much as anybody can handle.

Since the number of lines in the table grows exponentially in the number of variables,
the truth table method becomes unwieldy for most interesting problems.
Later, we will prove a theorem whose truth table has 2^129 lines.
Writing out a table that size is utterly infeasible,
so you'll be relieved to know that we will skip the truth table and use more abstract, efficient methods.
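The exponential blow-up is easy to observe by generating the assignments themselves; `assignments` is our helper, built on replicateM from the standard library.

```haskell
import Control.Monad (replicateM)

-- All truth assignments to k propositional variables.
assignments :: Int -> [[Bool]]
assignments k = replicateM k [False, True]

-- One table line per assignment: 2^k lines for k variables.
-- >>> map (length . assignments) [1, 2, 3, 10]
-- [2,4,8,1024]
tableSizes :: [Int]
tableSizes = map (length . assignments) [1, 2, 3, 10]
```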

See how we might use functions to brute-force truth tables,
and why that is not an ideal solution:
Code-HRLMP/HL02TAMO.hs

{-# LANGUAGE FlexibleInstances #-}

module HL02TAMO where

-- Implication / Therefore / if
infix 1 ==>

-- |
-- >>> False ==> False
-- True

-- |
-- >>> False ==> True
-- True

-- |
-- >>> True ==> False
-- False

-- |
-- >>> True ==> True
-- True
(==>) :: Bool -> Bool -> Bool
x ==> y = (not x) || y

-- Equivalence / If and only if
infix 1 <=>

-- |
-- >>> True <=> True
-- True

-- |
-- >>> True <=> False
-- False

-- |
-- >>> False <=> False
-- True

-- |
-- >>> False <=> True
-- False
(<=>) :: Bool -> Bool -> Bool
x <=> y = x == y

-- Exclusive or
infixr 2 <+>

-- |
-- >>> False <+> False
-- False

-- |
-- >>> False <+> True
-- True

-- |
-- >>> True <+> False
-- True

-- |
-- >>> True <+> True
-- False
(<+>) :: Bool -> Bool -> Bool
x <+> y = x /= y

-- Examples:
p = True

q = False

-- |
-- >>> formula1
-- False
formula1 = (not p) && (p ==> q) <=> not (q && (not p))

-- | To prove a statement true, we can brute force check all combinations of True and False:
-- >>> formula2 True True
-- False

-- |
-- >>> formula2 True False
-- False

-- |
-- >>> formula2 False True
-- False

-- |
-- >>> formula2 False False
-- True
formula2 p q = ((not p) && (p ==> q) <=> not (q && (not p)))

-- | This checks any boolean formula (bf) which takes one value, for example:
-- >>> valid1 excluded_middle
-- True
valid1 :: (Bool -> Bool) -> Bool
valid1 bf = (bf True) && (bf False)

excluded_middle :: Bool -> Bool
excluded_middle p = p || not p

-- | This checks any logical proposition with two values, for example:
-- >>> valid2 form1
-- True

-- |
-- >>> valid2 form2
-- False
valid2 :: (Bool -> Bool -> Bool) -> Bool
valid2 bf =
  (bf True True)
    && (bf True False)
    && (bf False True)
    && (bf False False)

form1 p q = p ==> (q ==> p)

form2 p q = (p ==> q) ==> p

-- | This checks any logical statement with three values, for example:
-- >>> valid3 (\p q r -> (p && q) ==> p)
-- True
valid3 :: (Bool -> Bool -> Bool -> Bool) -> Bool
valid3 bf =
  and
    [ bf p q r | p <- [True, False], q <- [True, False], r <- [True, False]
    ]

-- | This checks any logical statement with four values, for example:
-- >>> valid4 (\p q r s -> (p && q) ==> (p || s))
-- True
valid4 :: (Bool -> Bool -> Bool -> Bool -> Bool) -> Bool
valid4 bf =
  and
    [ bf p q r s | p <- [True, False], q <- [True, False], r <- [True, False], s <- [True, False]
    ]

-- | Logical equivalence of boolean formulae with one input can be tested as follows:
-- >>> logEquiv1 bfBasic1a bfBasic1b
-- True
logEquiv1 :: (Bool -> Bool) -> (Bool -> Bool) -> Bool
logEquiv1 bf1 bf2 =
  (bf1 True <=> bf2 True) && (bf1 False <=> bf2 False)

bfBasic1a :: (Bool -> Bool)
bfBasic1a b = b || not b

bfBasic1b :: (Bool -> Bool)
bfBasic1b b = not b || b

-- | Logical equivalence of boolean formulae with two inputs can be tested as follows:
-- >>> logEquiv2 formula3 formula4
-- True

-- | Logical equivalence of boolean formulae with two inputs can be tested as follows:
-- >>> logEquiv2 formula3 formula5
-- False

-- | Wrapping it up with a validity check:
-- >>> valid2 formula5
-- True
logEquiv2 ::
  (Bool -> Bool -> Bool) ->
  (Bool -> Bool -> Bool) ->
  Bool
logEquiv2 bf1 bf2 =
  and
    [ (bf1 p q) <=> (bf2 p q) | p <- [True, False], q <- [True, False]
    ]

formula3 p q = p

formula4 p q = (p <+> q) <+> q

formula5 p q = p <=> ((p <+> q) <+> q)

-- | Logical equivalence of boolean formulae with three inputs can be tested as follows:
-- >>> logEquiv3 (\p q r -> p && (q && r)) (\p q r -> (p && q) && r)
-- True
logEquiv3 ::
  (Bool -> Bool -> Bool -> Bool) ->
  (Bool -> Bool -> Bool -> Bool) ->
  Bool
logEquiv3 bf1 bf2 =
  and
    [ (bf1 p q r) <=> (bf2 p q r) | p <- [True, False], q <- [True, False], r <- [True, False]
    ]

-- For defining equivalence laws (Theorem 2.10).
-- Show mathy definition in book (page 45/56) next to code:

class TF p where
  valid :: p -> Bool
  lequiv :: p -> p -> Bool

instance TF Bool where
  valid = id
  lequiv f g = f == g

instance (TF p) => TF (Bool -> p) where
  valid f = valid (f True) && valid (f False)
  lequiv f g =
    (f True) `lequiv` (g True)
      && (f False) `lequiv` (g False)
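The payoff of the TF class is that a single `valid` and a single `lequiv` work at every arity. A self-contained sketch (repeating the class so the fragment compiles on its own, and assuming `==>` is material implication as in HL02TAMO):

```haskell
-- Assumption: ==> is material implication, as defined in The Haskell Road.
infix 1 ==>
(==>) :: Bool -> Bool -> Bool
p ==> q = not p || q

class TF p where
  valid  :: p -> Bool
  lequiv :: p -> p -> Bool

instance TF Bool where
  valid      = id
  lequiv f g = f == g

instance (TF p) => TF (Bool -> p) where
  valid f    = valid (f True) && valid (f False)
  lequiv f g = lequiv (f True) (g True) && lequiv (f False) (g False)

-- One 'valid' call checks a two-place tautology...
checkK :: Bool
checkK = valid (\p q -> p ==> (q ==> p))

-- ...and the same 'lequiv' checks a three-place equivalence (distributivity).
checkDist :: Bool
checkDist =
  lequiv (\p q r -> p && (q || r))
         (\p q r -> (p && q) || (p && r))
```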

-- Law of double negation
-- P ≡ ¬¬P
test1 = lequiv id (\p -> not (not p))

-- Laws of idempotence
-- P ∧ P ≡ P
-- P ∨ P ≡ P
test2a = lequiv id (\p -> p && p)

test2b = lequiv id (\p -> p || p)

-- (P ⇒ Q) ≡ ¬P ∨ Q
-- ¬(P ⇒ Q) ≡ P ∧ ¬Q
test3a = lequiv (\p q -> p ==> q) (\p q -> not p || q)

test3b = lequiv (\p q -> not (p ==> q)) (\p q -> p && not q)

-- Laws of contraposition
-- (¬P ⇒ ¬Q) ≡ (Q ⇒ P )
-- (P ⇒ ¬Q) ≡ (Q ⇒ ¬P )
-- (¬P ⇒ Q) ≡ (¬Q ⇒ P )
test4a = lequiv (\p q -> not p ==> not q) (\p q -> q ==> p)

test4b = lequiv (\p q -> p ==> not q) (\p q -> q ==> not p)

test4c = lequiv (\p q -> not p ==> q) (\p q -> not q ==> p)

-- (P ⇔ Q) ≡ ((P ⇒ Q) ∧ (Q ⇒ P )) ≡ ((P ∧ Q) ∨ (¬P ∧ ¬Q))
test5a =
  lequiv
    (\p q -> p <=> q)
    (\p q -> (p ==> q) && (q ==> p))

test5b =
  lequiv
    (\p q -> p <=> q)
    (\p q -> (p && q) || (not p && not q))

-- Laws of commutativity
-- P ∧ Q ≡ Q ∧ P
-- P ∨ Q ≡ Q ∨ P
test6a = lequiv (\p q -> p && q) (\p q -> q && p)

test6b = lequiv (\p q -> p || q) (\p q -> q || p)

-- DeMorgan laws
-- ¬(P ∧ Q) ≡ ¬P ∨ ¬Q
-- ¬(P ∨ Q) ≡ ¬P ∧ ¬Q
test7a =
  lequiv
    (\p q -> not (p && q))
    (\p q -> not p || not q)

test7b =
  lequiv
    (\p q -> not (p || q))
    (\p q -> not p && not q)

-- Laws of associativity
-- P ∧ (Q ∧ R) ≡ (P ∧ Q) ∧ R
-- P ∨ (Q ∨ R) ≡ (P ∨ Q) ∨ R
test8a =
  lequiv
    (\p q r -> p && (q && r))
    (\p q r -> (p && q) && r)

test8b =
  lequiv
    (\p q r -> p || (q || r))
    (\p q r -> (p || q) || r)

-- Distribution laws
-- P ∧ (Q ∨ R) ≡ (P ∧ Q) ∨ (P ∧ R)
-- P ∨ (Q ∧ R) ≡ (P ∨ Q) ∧ (P ∨ R)
test9a =
  lequiv
    (\p q r -> p && (q || r))
    (\p q r -> (p && q) || (p && r))

test9b =
  lequiv
    (\p q r -> p || (q && r))
    (\p q r -> (p || q) && (p || r))

-- Proof-related code (not for logic section):
square1 :: Integer -> Integer
square1 x = x ^ 2

square2 :: Integer -> Integer
square2 = \x -> x ^ 2

m1 :: Integer -> Integer -> Integer
m1 = \x -> \y -> x * y

m2 :: Integer -> Integer -> Integer
m2 = \x y -> x * y

solveQdr :: (Float, Float, Float) -> (Float, Float)
solveQdr = \(a, b, c) ->
  if a == 0
    then error "not quadratic"
    else
      let d = b ^ 2 - 4 * a * c
       in if d < 0
            then error "no real solutions"
            else
              ( (-b + sqrt d) / (2 * a),
                (-b - sqrt d) / (2 * a)
              )

every, some :: [a] -> (a -> Bool) -> Bool
every xs p = all p xs
some xs p = any p xs

Code-HRLMP/Sol02.hs

module Sol02 where

import HL01GS
import HL02TAMO

-- 2.13 Theorem 2.12 p48/59

-- ¬⊤ ≡ ⊥; ¬⊥ ≡ ⊤
tst1a = not True <=> False

tst1b = not False <=> True

-- P ⇒ ⊥ ≡ ¬P
tst2 = logEquiv1 (\p -> p ==> False) (\p -> not p)

-- P ∨ ⊤ ≡ ⊤; P ∧ ⊥ ≡ ⊥ (dominance laws)
tst3a = logEquiv1 (\p -> p || True) (const True)

tst3b = logEquiv1 (\p -> p && False) (const False)

-- P ∨ ⊥ ≡ P; P ∧ ⊤ ≡ P (identity laws)
tst4a = logEquiv1 (\p -> p || False) id

tst4b = logEquiv1 (\p -> p && True) id

-- P ∨ ¬P ≡ ⊤ (law of excluded middle)
tst5 = logEquiv1 excluded_middle (const True)

-- P ∧ ¬P ≡ ⊥ (contradiction)
tst6 = logEquiv1 (\p -> p && not p) (const False)

-- The implementation uses excluded_middle; this is defined in Chapter 2 as a name for the function
-- (\ p -> p || not p). const k is the function which gives value k for any argument.
-- Note that the implementation of tst4a and tst4b makes clear why P ∨ ⊥ ≡ P and P ∧ ⊤ ≡ P are called laws of identity.

-- 2.15 A propositional contradiction is a formula that yields false for every combination of truth values for its proposition letters. Write Haskell definitions of contradiction tests for propositional functions with one, two and three variables.
contrad1 :: (Bool -> Bool) -> Bool
contrad1 bf = not (bf True) && not (bf False)

contrad2 :: (Bool -> Bool -> Bool) -> Bool
contrad2 bf = and [not (bf p q) | p <- [True, False], q <- [True, False]]

contrad3 :: (Bool -> Bool -> Bool -> Bool) -> Bool
contrad3 bf =
  and [not (bf p q r) | p <- [True, False], q <- [True, False], r <- [True, False]]

-- 2.51 Define a function unique :: (a -> Bool) -> [a] -> Bool that gives True for unique p xs just in case there is exactly one object among xs that satisfies p.
unique :: (a -> Bool) -> [a] -> Bool
unique p xs = length (filter p xs) == 1

-- 2.52 Define a function parity :: [Bool] -> Bool that gives True for parity xs just in case an even number of the elements of xs equals True.
parity :: [Bool] -> Bool
parity [] = True
parity (x : xs) = x /= (parity xs)

-- 2.53 Define a function evenNR :: (a -> Bool) -> [a] -> Bool that gives True for evenNR p xs just in case an even number of the elements of xs have property p. (Use the parity function from the previous exercise.)
evenNR :: (a -> Bool) -> [a] -> Bool
evenNR p = parity . map p

-- Alternative solution:
evenNR' :: (a -> Bool) -> [a] -> Bool
evenNR' p xs = even (length (filter p xs))
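As a sanity check, the two solutions can be compared on a few sample lists; a self-contained sketch (repeating both definitions so the fragment stands alone):

```haskell
-- The two solutions to exercise 2.53, repeated so this fragment stands alone.
parity :: [Bool] -> Bool
parity [] = True
parity (x : xs) = x /= parity xs

evenNR, evenNR' :: (a -> Bool) -> [a] -> Bool
evenNR p = parity . map p
evenNR' p xs = even (length (filter p xs))

-- Both agree on some sample inputs (an empty list has zero, i.e. an even,
-- number of elements satisfying p).
agree :: Bool
agree =
  and [ evenNR even xs == evenNR' even xs
      | xs <- [[], [1], [1, 2], [2, 4], [1, 2, 3, 4]] :: [[Int]] ]
```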

1.3.13 Method 2. Natural Deduction: Inference Reasoning

Natural deduction is a formal logical system
that allows you to reason directly with logical propositions using inference,
without having to substitute truth values for variables or evaluate expressions.
Natural deduction provides a solid theoretical foundation for logic,
and it helps focus attention on why logical propositions are true or false.
Natural deduction is well suited for automatic proof checking by a program,
and it has a variety of applications in computer science.

In normal English usage,
the verb ‘infer’ means to reason about some statements,
in order to reach a conclusion.
This is an informal process,
and it can run into problems due to the ambiguities of English,
but the intent is to establish that the conclusion is definitely true.

Logical inference means reasoning formally about a set of statements,
in order to decide, beyond all shadow of doubt, what is true.
In order to make the reasoning absolutely clear cut,
we need to do several things:

The set of statements we’re reasoning about must be defined.
This is called the object language,
and a typical choice would be propositional expressions.

The methods for inferring new facts from given information must be specified precisely.
These are called the inference rules.

The form of argument must be defined precisely,
so that if anybody claims to have an argument that proves a fact,
we can determine whether it actually is a valid argument.
This defines a meta-language,
in which proofs of statements in the object language can be written.
Every step in the reasoning must be justified by an inference rule.

There are several standard ways to write down formal proofs.
In this book we’ll use the form which is most commonly used in computer science.

The notation P1, P2, …, Pn \(\vdash\) Q is called a sequent,
and it means that if all of the propositions P1, P2, …, Pn are known,
then the proposition Q can be inferred formally,
using the inference rules of natural deduction.

We have seen two similar notations:

With truth tables,
we had meta-logical statements with the \(\vDash\) operator,
like P \(\vDash\) Q → P .
This statement means that if P is True,
then the proposition Q → P is also True.
It says nothing about how we know that to be the case.

In contrast, the notation P \(\vdash\) Q → P , which will be used in this section,
means there is a proof of Q → P, and the proof assumes P .
Both \(\vDash\) and \(\vdash\) are used to state theorems;
the distinction is that \(\vDash\) is concerned with the ultimate truth of the propositions,
while \(\vdash\) is concerned with whether we have a proof.
We will return to the relationship between \(\vDash\) and \(\vdash\) later.

In formal logic, you can’t just make intuitive arguments,
and hope they are convincing.
Every step of reasoning you make has to be backed up by an inference rule.
Intuitively, an inference rule says,
“If you know that Statement1 and Statement2 are established (either assumed or proven),
then you may infer Statement3,”
and furthermore, the inference constitutes a proof.
Statement1 and Statement2 are called the assumptions of the inference rule,
and Statement3 is called the conclusion.
There may be any number of assumptions;
there don’t have to be just two of them.
Here’s an example of an inference rule about the ∧ operator,
written informally in English:

If you know some proposition a, and also the proposition b,
then you are allowed to infer the proposition a ∧ b.

In this example, there are two assumptions—a and b—and the conclusion is a ∧ b.
Note that we have expressed this inference using meta-variables a and b.
These meta-variables could stand for any WFF;
thus a might be P, Q → R, etc.
The variables P, Q, etc., are propositional variables,
belonging to the object language,
and their values are True or False.
We will adopt the following convention:

Meta-variables belong to the meta-language,
and are written as lower-case letters a, b, c, …
The value of a meta-variable is a WFF.
For example a might have the value P ∧ Q.

Propositional variables belong to the object language,
and are written as upper-case letters A, B, …, P, Q, R, …
The value of a propositional variable is either True or False.

1.3.13.1 Formalizing inference

Inference rules are expressed by writing down the assumptions above a horizontal line,
and writing the conclusion below the line:

\[\frac{Assumption1 \qquad Assumption2}{Conclusion3}\]

This says that if you can somehow establish the truth of Assumption1 and Assumption2,
then Conclusion3 is guaranteed (by the inference rule) to be true.
The inference about ∧ would be written formally as

\[\frac{a \qquad b}{a \land b}\]

An inference rule works in only one direction—for example,
if you have established a ∧ b, then you cannot use the rule above to infer a.
A different inference rule, which we will see later, would be needed.

The list below summarizes all the inference rules of propositional logic.
In the next several sections we will work systematically through them all.

And Introduction \(\{ \land I \}\):
\[\frac{a \qquad b}{a \land b} \{ \land I \}\]

And Elimination \(\{ \land E_L \}, \{ \land E_R \}\):
\[\frac{a \land b}{a} \{ \land E_L \}\]
\[\frac{a \land b}{b} \{ \land E_R \}\]

Imply Elimination \(\{ \rightarrow E \}\):
\[\frac{a \qquad a \rightarrow b}{b} \{ \rightarrow E \}\]

Imply Introduction \(\{ \rightarrow I \}\):
\[\frac{a \vdash b}{a \rightarrow b} \{ \rightarrow I \}\]

Or Introduction \(\{ \lor I_L \}, \{ \lor I_R \}\):
\[\frac{a}{a \lor b} \{ \lor I_L \}\]
\[\frac{b}{a \lor b} \{ \lor I_R \}\]

Or Elimination \(\{ \lor E \}\):
\[\frac{a \lor b \qquad a \vdash c \qquad b \vdash c}{c} \{ \lor E \}\]

Identity \(\{ ID \}\):
\[\frac{a}{a} \{ ID \}\]

Contradiction \(\{ CTR \}\):
\[\frac{False}{a} \{ CTR \}\]

Reductio ad Absurdum \(\{ RAA \}\):
\[\frac{\neg a \vdash False}{a} \{ RAA \}\]

Many of the rules fall into two categories.

The introduction rules have a conclusion,
into which a new logical operator has been introduced;
they serve to build up more complex expressions from simpler ones.

The elimination rules require an assumption,
that contains a logical operator which is eliminated from the conclusion;
these are used to break down complex expressions into simpler ones.

Introduction rules tell you what you need to know,
in order to introduce a logical operator.

Elimination rules tell you what you can infer,
from a proposition containing a particular operator.
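Earlier it was noted that natural deduction is well suited for automatic proof checking by a program. As a taste of how that works, here is a minimal, hypothetical Haskell sketch (not from the text): proof trees for a fragment of the rules, with a function that computes the conclusion a tree establishes, returning Nothing for an ill-formed inference step.

```haskell
-- Hypothetical encoding: object-language propositions.
data Prop
  = Var Char
  | FalseP
  | Prop :/\: Prop
  | Prop :->: Prop
  deriving (Eq, Show)

-- Proof trees for a fragment of the inference rules.
data Proof
  = Assume Prop        -- leaf: use an assumption     {ID}
  | AndI Proof Proof   -- And Introduction            {/\ I}
  | AndEL Proof        -- And Elimination Left        {/\ E_L}
  | AndER Proof        -- And Elimination Right       {/\ E_R}
  | ImpE Proof Proof   -- Imply Elimination           {-> E}

-- What does a proof tree establish?  Nothing = invalid inference step.
conclusion :: Proof -> Maybe Prop
conclusion (Assume a) = Just a
conclusion (AndI p q) = (:/\:) <$> conclusion p <*> conclusion q
conclusion (AndEL p) = case conclusion p of
  Just (a :/\: _) -> Just a
  _ -> Nothing
conclusion (AndER p) = case conclusion p of
  Just (_ :/\: b) -> Just b
  _ -> Nothing
conclusion (ImpE p q) = case (conclusion p, conclusion q) of
  (Just a, Just (a' :->: b)) | a == a' -> Just b
  _ -> Nothing

-- Theorem 35 (P, Q |- P /\ Q) as a checked proof tree:
theorem35 :: Maybe Prop
theorem35 = conclusion (AndI (Assume (Var 'P')) (Assume (Var 'Q')))
```

Note that the checker enforces the one-directional nature of the rules: `conclusion (AndEL (Assume (Var 'P')))` is Nothing, because {∧E_L} requires a conjunction above the line.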

1.3.13.2 Definitions of True, ¬, and ↔︎

Natural deduction works with a very minimal set of basic operators.
In fact, the only primitive built-in objects are
the constant False and the three operators ∧, ∨ and →.

Everything else is an abbreviation!
It’s particularly intriguing that
False is the only primitive logic value in the natural deduction system.

The constant True and the operators ¬ and ↔︎ are abbreviations defined as follows:

True = False → False
¬a = a → False
a ↔︎ b = (a → b) ∧ (b → a)

You can check all of these definitions semantically, using truth tables.
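For example, such a semantic check can be carried out in Haskell; a self-contained sketch, with `-->` as an assumed name for material implication:

```haskell
-- Assumption: --> names material implication on Bool.
(-->) :: Bool -> Bool -> Bool
p --> q = not p || q

-- True  =  False --> False
trueDef :: Bool
trueDef = False --> False

-- not a  =  a --> False
notDef :: Bool -> Bool
notDef a = a --> False

-- a <-> b  =  (a --> b) && (b --> a)
iffDef :: Bool -> Bool -> Bool
iffDef a b = (a --> b) && (b --> a)

-- The abbreviations agree with the usual meanings on all truth values.
abbreviationsOK :: Bool
abbreviationsOK =
  trueDef == True
    && and [ notDef a == not a | a <- [True, False] ]
    && and [ iffDef a b == (a == b) | a <- [True, False], b <- [True, False] ]
```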

In natural deduction, we will manipulate them only using the inference rules.

Notice that ¬False is defined to be False → False,
which happens to be the definition of True.
In other words, ¬False = True.
Going the other direction,
¬True becomes True → False when the ¬ abbreviation is expanded out,
and that becomes (False → False) → False when the True is expanded out.
Later we will see an inference rule (called Reductio ad Absurdum)
which will allow us to infer False from (False → False) → False.
The result is that ¬True = False and ¬False = True.

We will write propositions with True, ¬ and ↔︎ just as usual,
but sometimes to make a proof go through,
it will be necessary to expand out the abbreviations,
and work just with the primitives.

1.3.13.3 And Introduction \(\{ \land I \}\):

\[\frac{a \qquad b}{a \land b} \{ \land I \}\]

The And Introduction inference rule says that:
if you know that some proposition a is true,
and you also know that b is true,
then you may infer that the proposition a ∧ b is true.
As the name ‘And Introduction’ suggests,
the rule specifies what you have to know,
in order to infer a proposition with a new occurrence of ∧.

We will now work through several examples,
starting with a theorem saying that:
the conclusion P ∧ Q can be inferred from the assumptions P and Q.

Theorem 35. P, Q \(\vdash\) P ∧ Q

Proof. The theorem is proved by just one application of the {∧I} inference;
it’s the simplest possible example of how to use the {∧I} inference rule.

\[\frac{P \qquad Q}{P \land Q} \{ \land I \}\]

Notice that the theorem above involves two specific propositional variables, P and Q.
The {∧I} rule does not require any particular propositions.
It uses the meta-variables a and b, which can stand for any well-formed formula.
For example, the following theorem has the same structure as the previous one,
and is also proved with just one application of the {∧I} rule,
but this time the meta-variable a stands for R → S and b stands for ¬P .

Theorem 36. (R → S), ¬P \(\vdash\) (R → S) ∧ ¬P

Proof. \[\frac{R \rightarrow S \qquad \neg P}{(R \rightarrow S) \land \neg P} \{ \land I \}\]

Usually you need several steps to prove a theorem,
and each step requires explicit use of an inference rule.
You can build up a proof in one diagram.
When a subproof results in a conclusion that serves as an assumption for a larger proof,
then the entire subproof diagram appears above the line,
for the main inference in the large proof.
Here is an example:

Theorem 37. P, Q, R \(\vdash\) (P ∧ Q) ∧ R

Proof. The main inference here has two assumptions: P ∧ Q and R.
However, the first assumption P ∧ Q is not one of the assumptions of the entire theorem;
it is the conclusion of another {∧I} inference with assumptions P and Q.
The entire proof is written in a single diagram.
The conclusion P ∧ Q of the first inference is above the longer line
(the line belonging to the main inference).

\[\frac{\frac{P \qquad Q}{P \land Q} \{ \land I \} \qquad R}{(P \land Q) \land R} \{ \land I \}\]

Inference proofs have a natural tree structure:
assumptions of the theorem are like the leaves of a tree,
and subproofs are like the nodes (forks) of a tree.
The method we are using here for writing proofs makes this tree structure explicit.

There is an alternative format for writing logical inference proofs,
where each inference is written on a numbered line.
When the conclusion of one inference is required as an assumption to another one,
this is indicated by explicit references to the line numbers.
This format looks like a flat list of statements
(much like an assembly language program),
while the tree format we are using has a nested structure
(much like a program in a block structured language).

Logical inference has many important applications in computer science.
The tree format is normally used for computing applications,
so we will use that notation in this text,
in order to help prepare you for more advanced studies.

1.3.13.4 And Elimination \(\{ \land E_L \}, \{ \land E_R \}\):

\[\frac{a \land b}{a} \{ \land E_L \}\]
\[\frac{a \land b}{b} \{ \land E_R \}\]

Two inference rules allow the elimination of an And operation:

if a ∧ b is known to be true,
then a must be true,
and also b must be true.

The ‘And Elimination Left’ {∧EL } rule retains the left argument of a ∧ b in the result.

The ‘And Elimination Right’ {∧ER } rule retains the right argument.

Here is a simple example using And Elimination:

Theorem 38. P, Q ∧ R \(\vdash\) P ∧ Q

Proof. Two inferences are required.
First the ‘Left’ form of the And Elimination rule {∧EL } is used,
to obtain the intermediate result Q,
and the And Introduction rule is then used,
to combine this with P to obtain the result P ∧ Q.

\[\frac{P \qquad \frac{Q \land R}{Q} \{ \land E_L \} }{P \land Q} \{ \land I \}\]

The next example contains several applications of And Elimination,
including both the Left and Right forms of the rule.
As these two examples show,
the three rules {∧I}, {∧EL} and {∧ER} can be used systematically,
to deduce consequences of logical conjunctions.

Theorem 39. (P ∧ Q) ∧ R \(\vdash\) R ∧ Q

Proof.
\[\frac{\frac{(P \land Q) \land R}{R} \{ \land E_R\} \qquad \frac{\frac{(P \land Q) \land R}{P \land Q} \{ \land E_L\}}{Q} \{ \land E_R \}}{R \land Q} \{ \land I \}\]

The ∧ operator is commutative:
the proposition P ∧ Q is equivalent to Q ∧ P.
The following theorem establishes this.

Theorem 40. P ∧ Q \(\vdash\) Q ∧ P

Proof.
The idea behind this proof is to infer Q and P, in that order,
with Q to the left of P, above the main line,
so {∧I} can be used to infer Q ∧ P.
The intermediate results Q and P are obtained by And Elimination.

\[\frac{\frac{P \land Q}{Q} \{ \land E_R\} \qquad \frac{P \land Q}{P} \{ \land E_L\} }{Q \land P} \{ \land I\}\]

Inference proof trees can become quite large,
but you can always work through them systematically,
one inference at a time,
until you see how all the parts of the proof fit together.
You can check the individual inferences in any order you like,
either top-down or bottom-up or any other order you like.

Below is a theorem with a larger proof,
again using only the And Introduction and Elimination rules:

Theorem 41. For any well formed propositions a, b and c,
a ∧ (b ∧ c) \(\vdash\) (a ∧ b) ∧ c

Proof.
\[\frac{\frac{\frac{a \land (b \land c)}{a} \{\land E_L \} \qquad \frac{\frac {a \land (b \land c)}{b \land c} \{\land E_R\} }{b} \{\land E_L\}}{a \land b} \{\land I\} \qquad \frac{\frac{a \land (b \land c)}{b \land c} \{\land E_R\}}{c} \{\land E_R\} }{(a \land b) \land c} \{ \land I \}\]

1.3.13.5 Imply Elimination \(\{ \rightarrow E \}\):

\[\frac{a \qquad a \rightarrow b}{b} \{ \rightarrow E \}\]

The Imply Elimination rule {→ E} says that:
if you know a is true,
and also that a implies b,
then you can infer b.
The traditional Latin name for the rule is Modus Ponens.

The following theorem provides a simple example of the application of {→ E}.

Theorem 42. Q ∧ P, P ∧ Q → R \(\vdash\) R.

Proof.
\[\frac{\frac{\frac{Q \land P}{P}\{\land E_R\} \qquad \frac{Q \land P}{Q}\{\land E_L\}}{P \land Q}\{\land I\} \qquad P \land Q \rightarrow R}{R} \{\rightarrow E\}\]

Often we have chains of implication of the form:
a → b, b → c, etc.
The following theorem says that:
given a, and these linked implications,
then you can infer c.

Theorem 43. For all propositions a, b, and c,
a, a → b, b → c \(\vdash\) c.

Proof.
\[\frac{\frac{a \qquad a \rightarrow b}{b}\{\rightarrow E\} \qquad b \rightarrow c}{c}\{\rightarrow E\}\]

1.3.13.6 Imply Introduction \(\{ \rightarrow I \}\):

\[\frac{a \vdash b}{a \rightarrow b} \{ \rightarrow I \}\]

The Imply Introduction rule {→ I} says that,
in order to infer the logical implication a → b,
you must have a proof of b using a as an assumption.

We will first give a simple example of Imply Introduction:

Theorem 44. \(\vdash\) (P ∧ Q) → Q.

Proof.
First consider the sequent P ∧ Q \(\vdash\) Q,
which is proved by the following And Elimination inference:
\[\frac{P \land Q}{Q}\{\land E_R\}\]

Now we can use the sequent,
the theorem established by the inference,
and the {→ I} rule:
\[\frac{P \land Q \vdash Q}{P \land Q \rightarrow Q}\{\rightarrow I\}\]

It is crucially important to be careful about what we are assuming.
The reason for having the sequent notation with the \(\vdash\) operator,
is to write down the assumptions of a theorem,
just as precisely as the conclusion.

In the {∧ER} inference we assumed P ∧ Q;
without that assumption we could not have inferred Q.
This assumption appears in the sequent to the left of the \(\vdash\).
The entire sequent P ∧ Q \(\vdash\) Q does not rely on any further assumptions;
it is independently true.
Therefore we can put it above the line of the {→ I} rule ‘for free,’
without actually making any assumptions.
Since nothing at all needed to be assumed,
to support the application of the {→ I} rule,
we end up with a theorem that doesn’t require any assumptions.
The sequent that expresses the theorem,
\(\vdash\) P ∧ Q → Q,
has nothing to the left of the \(\vdash\) operator.

It is customary to write the entire proof as a single tree,
where a complete proof tree appears above the line of the {→ I} rule.
That allows us to prove \(\vdash\) P ∧ Q → Q with just one diagram:

\[\frac{\frac{P \land Q}{Q}\{\land E_R\}}{P \land Q \rightarrow Q}\{\rightarrow I\}\]

From this diagram,
it looks like there is an assumption P ∧ Q.
However, that was a temporary, local assumption,
whose only purpose was to establish P ∧ Q \(\vdash\) Q.
Once that result is obtained the assumption P ∧ Q can be thrown away.
An assumption that is made temporarily,
only in order to establish a sequent,
and which is then thrown away, is said to be discharged.
A discharged assumption does not need to appear
to the left of the \(\vdash\) of the main theorem.
In our example, the proposition P ∧ Q → Q is always true,
and it doesn’t matter whether P ∧ Q is true or false.

In big proof trees,
it is important to keep track of which assumptions have been discharged and which have not.
We will indicate the discharged assumptions,
by putting a box around them.
A more common notation is to draw a line through the discharged assumption,
but for certain propositions that leads to a highly unreadable result.
Following this convention, the proof becomes:

\[\frac{\frac{\boxed{P \land Q}}{Q}\{\land E_R\}}{P \land Q \rightarrow Q}\{\rightarrow I\}\]

Recall the proof of Theorem 43,
which used the chain of implications a → b and b → c to infer c,
given also that a is true.
A more satisfactory theorem would just focus on the chain of implications,
without relying on a actually being true.
The following theorem gives this purified chain property:
it says that if a → b and b → c, then a → c.

Theorem 45 (Implication chain rule).
For all propositions a, b and c,

a → b, b → c \(\vdash\) a → c

Proof.
Because we are proving an implication a → c,
we need to use the {→ I} rule.
There is no other way to introduce the → operator!
That rule requires a proof of c given the assumption a,
which is essentially the same proof tree used before in Theorem 43.
The assumption, a, is discharged when we apply {→ I}.
The other two assumptions, (a → b and b → c), are not discharged.
Consequently a doesn’t appear to the left of the \(\vdash\) in the theorem;
instead it appears to the left of the → in the conclusion of the theorem.

\[\frac{\frac{\frac{\boxed{a} \qquad a \rightarrow b}{b}\{\rightarrow E\} \qquad b \rightarrow c}{c}\{\rightarrow E\}}{a \rightarrow c}\{\rightarrow I\}\]

Sometimes in large proofs,
it can be confusing to see just where an assumption has been discharged.
You may have an assumption P with a box around it,
but there could be all sorts of implications P → ··· which came from somewhere else.
In such cases it’s probably clearer to build up the proof in stages,
with several separate proof tree diagrams.

A corollary of this implication chain rule is the following theorem:
if a → b, but you know that b is false,
then a must also be false.
This is an important theorem,
which is widely used to prove other theorems,
and its traditional Latin name is Modus Tollens.

Theorem 46 (Modus Tollens).
For all propositions a and b,
a → b, ¬b \(\vdash\) ¬a

Proof.
First we need to expand out the abbreviations,
using the definition that ¬a means a → False.
This results in the following sequent to be proved:
a → b, b → False \(\vdash\) a → False.
This is an instance of Theorem 45.
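A truth-table cross-check (semantic, so outside the inference system) confirms that the Modus Tollens sequent corresponds to a tautology; a self-contained sketch, with `==>` assumed to be material implication:

```haskell
-- Assumption: ==> is material implication on Bool.
infix 1 ==>
(==>) :: Bool -> Bool -> Bool
p ==> q = not p || q

-- ((a ==> b) && not b) ==> not a holds for every a and b.
modusTollensValid :: Bool
modusTollensValid =
  and [ ((a ==> b) && not b) ==> not a | a <- [True, False], b <- [True, False] ]
```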

1.3.13.7 Or Introduction \(\{ \lor I_L \}, \{ \lor I_R \}\):

\[\frac{a}{a \lor b} \{ \lor I_L \}\]
\[\frac{b}{a \lor b} \{ \lor I_R \}\]

Like all the introduction rules,
Or Introduction specifies what you have to establish,
in order to infer a proposition containing a new ∨ operator.
If the proposition a is true,
then both, a ∨ b, and b ∨ a, must also be true.
You can see this by checking the truth table definition of ∨.
Or Introduction comes in two forms, Left and Right.

Theorem 47. P ∧ Q \(\vdash\) P ∨ Q

Proof.
The proof requires the use of Or Introduction.
There are two equally valid ways to organise the proof.
One method begins by establishing P and then uses \(\{\lor I_L\}\) to put the P to the left of ∨:

\[\frac{\frac{P \land Q}{P}\{\land E_L\}}{P \lor Q}\{\lor I_L\}\]

An alternative proof first establishes Q and then uses \(\{\lor I_R\}\) to put the Q to the right of ∨:

\[\frac{\frac{P \land Q}{Q}\{\land E_R\}}{P \lor Q}\{\lor I_R\}\]

Normally, of course, you would give just one of these proofs;
there is no reason to give both.

1.3.13.8 Or Elimination \(\{ \lor E \}\):

\[\frac{a \lor b \qquad a \vdash c \qquad b \vdash c}{c} \{ \lor E \}\]

The Or Elimination rule specifies what you can conclude,
if you know that a proposition of the form a ∨ b is true.
We can’t conclude anything about either a or b,
because either of those might be false even if a ∨ b is true.
However, suppose we know a ∨ b is true,
and also suppose there is some conclusion c,
that can be inferred from a,
and can also be inferred from b,
then c must also be true.

Or Elimination is a formal version of proof by case analysis.
It amounts to the following argument:
(0) Either a or b is true.
(1) If a is true, then c holds;
(2) if b is true, then c holds.
Therefore c is true.
Here is an example:

Theorem 48. (P ∧ Q) ∨ (P ∧ R) \(\vdash\) P

Proof.
There are two proofs above the line.
The first proof assumes P ∧ Q in order to infer P.
However, that inference is all that we need;
P ∧ Q is discharged, and is not an assumption of the main theorem.
For the same reason, P ∧ R is also discharged.
The only undischarged assumption is (P ∧ Q) ∨ (P ∧ R),
which therefore must appear to the left of the \(\vdash\) in the main theorem.

\[\frac{(P \land Q) \lor (P \land R) \qquad \frac{\boxed{P \land Q}}{P}\{\land E_L\} \qquad \frac{\boxed{P \land R}}{P}\{\land E_L\}}{P}\{\lor E\}\]

Finally, here is a slightly more complex example:

Theorem 49. (a ∧ b) ∨ (a ∧ c) \(\vdash\) b ∨ c

Proof.
\[\frac{(a \land b) \lor (a \land c) \qquad \frac{\frac{\boxed{a \land b}}{b}\{\land E_R\}}{b \lor c}\{\lor I_L\} \qquad \frac{\frac{\boxed{a \land c}}{c}\{\land E_R\}}{b \lor c}\{\lor I_R\}}{b \lor c}\{\lor E\}\]

1.3.13.9 Identity \(\{ ID \}\):

\[\frac{a}{a} \{ ID \}\]

The Identity rule {ID} says, rather obviously,
that if you know a is true,
then you know a is true.

Although the Identity rule seems trivial,
it is also necessary.
Without it, for example,
there would be no way to prove the following theorem,
which we would certainly want to be true.

Theorem 50. P \(\vdash\) P

Proof.
\[\frac{P}{P}\{ID\}\]

An interesting consequence of the Identity rule is that True is true,
as the following theorem states.

Theorem 51. \(\vdash\) True

Proof.
Recall that True is an abbreviation for False → False.
We need to infer this implication using the {→ I} rule,
and that in turn requires a proof of False,
given the assumption False.
This can be done with the {ID} rule.
It could also be done with the Contradiction rule,
which we’ll study shortly.

\[\frac{\frac{\boxed{False}}{False}\{ID\}}{False \rightarrow False}\{\rightarrow I\}\]

Notice that we had to assume False in order to prove this theorem,
but fortunately that assumption was discharged when we inferred False → False.
The assumption of False was just temporary,
and it doesn’t appear as an assumption of the theorem.
That’s a relief, as it would never do if we had to assume
False is true in order to prove that True is true!

1.3.13.10 Contradiction \(\{ CTR \}\):

\[\frac{False}{a} \{ CTR \}\]

The Contradiction rule says that you can infer anything at all,
given the assumption that False is true.

In effect, this rule says that False is untrue,
and it expresses that fact,
purely through the mechanism of logical inference.
It would be disappointing if we had to describe the fundamental falseness of False,
by making a meaningless statement outside the system,
such as ‘False is wrong.’

The goal is to describe the process of logical reasoning formally,
using a small set of clearly specified inference rules.
It would also be a bad idea to try to define False as equal to ¬True.
Since True is already defined to be ¬False,
that would be a meaningless and useless circular definition.
The Contradiction rule describes the untruthfulness of False indirectly,
by saying that everything would become provable,
if False is ever assumed or inferred.

Theorem 52. P, ¬P \(\vdash\) Q

Proof.
Recall that ¬P is just an abbreviation for P → False.
That means we can use the {→ E} rule to infer False,
and once that happens we can use {CTR} to support any conclusion we feel like,
even Q, which isn’t even mentioned in the theorem’s assumptions!

\[\frac{\frac{P \qquad P \rightarrow False}{False}\{\rightarrow E\}}{Q}\{CTR\}\]

The Identity and Contradiction rules often turn up in larger proofs.
A typical example occurs in the following theorem,
which states an important property of the logical Or operator ∨.
This theorem says that if a ∨ b is true, but a is false,
then b has to be true.
It should be intuitively obvious,
but the proof is subtle and merits careful study.

Theorem 53. For all propositions a and b,
a ∨ b, ¬a \(\vdash\) b

Proof.
As usual, the ¬a abbreviation should be expanded to a → False.
Because we’re given a ∨ b,
the basic structure will be an Or Elimination,
and there will be two smaller proof trees,
corresponding to the two cases for a ∨ b.
In the first case, when a is assumed temporarily,
we obtain a contradiction with the theorem’s assumption of ¬a:
False is inferred,
from which we can infer anything else (and here we want b).
In the second case, when b is assumed temporarily,
the desired result of b is an immediate consequence of the {ID} rule.

\[\frac{a \lor b \qquad \frac{\frac{\boxed{a} \qquad a \rightarrow False}{False}\{\rightarrow E\}}{b}\{CTR\} \qquad \frac{\boxed{b}}{b}\{ID\}}{b}\{\lor E\}\]

Note that there are two undischarged assumptions, a ∨ b and a → False,
and there are two temporary assumptions that are discharged by the {∨E} rule.

1.3.13.11 Reductio ad Absurdum \(\{ RAA \}\):

\[\frac{\neg a \vdash False}{a} \{ RAA \}\]

The Reductio ad Absurdum (reduce to absurdity) rule says that:
if you can infer False from an assumption ¬a,
then a must be true.
This rule underpins the proof by contradiction strategy:
if you want to prove a, first assume the negation ¬a and infer False;
the {RAA} rule then allows you to infer a.

Theorem 54 (Double negation). ¬¬a \(\vdash\) a

Proof.
Our strategy is to use a proof by contradiction.
That is, if we can assume ¬a and infer False,
then the {RAA} rule would yield the inference a.
Because we are given ¬¬a,
the contradiction will have the following general form:

\[\frac{\neg a \qquad \neg\neg a}{False}\]

To make this go through,
we need to replace the abbreviations by their full defined values.
Recall that ¬a is an abbreviation for a → False,
so ¬¬a actually means (a → False) → False.
Once we expand out these definitions,
the inference becomes much clearer:
it is just a simple application of {→ E}:

\[\frac{\frac{\boxed{a \rightarrow False} \qquad (a \rightarrow False) \rightarrow False}{False}\{\rightarrow E\}}{a}\{RAA\}\]

Both requirements for the {RAA} rule have now been provided,
and the {RAA} gives the final result a.
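The theorem can also be checked semantically. Treating ¬a as the abbreviation a → False, a short Haskell sketch (these definitions are our own encoding) confirms that ¬¬a and a agree on both truth values:

```haskell
-- Encode implication and the ¬ abbreviation, then check that
-- ¬¬a has the same truth value as a for every value of a.
imp :: Bool -> Bool -> Bool
imp a b = not a || b

neg :: Bool -> Bool
neg a = imp a False        -- ¬a abbreviates a → False

doubleNegationHolds :: Bool
doubleNegationHolds = and [ neg (neg a) == a | a <- [False, True] ]
```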

1.3.13.12 Inferring operator truth tables

The inference rules of natural deduction serve as a formal foundation for logical reasoning.
This raises two fundamental questions:
Are the inference rules actually powerful enough?
Will they ever allow us to make mistakes?
These are profound and difficult questions.
Modern research in mathematical logic addresses such questions,
and has led to some astonishing results.
In this section we start with an easy version of the first question:
are the inference rules powerful enough,
to re-calculate the truth tables of the logical operators?

Answering this question will provide practice in using the inference rules.

A deeper point is philosophically satisfying:
the inference rules provide a complete foundation for propositional logic,
and nothing else is required.
The truth tables given earlier in the chapter
are the most intuitive place to start learning logic,
and we treated them as definitions at the beginning of the chapter,
but it isn't necessary to accept them as the fundamental definitions of the operators.
If we instead take the inference rules
as the foundation of mathematical logic,
then the truth tables are no longer definitions;
they merely summarise a lot of calculations using the rules.

To illustrate the idea,
let's use the inference rules to calculate the value of True ∧ False.
We need to prove that True ∧ False
is logically equivalent to False,
and also that it is not logically equivalent to True.
Recall that False is a primitive constant,
but True is defined as False → False.
First, here is a proof of the sequent \(\vdash\) True ∧ False → False:

\[\frac{\frac{\boxed{(False \rightarrow False) \land False}}{False}\{\land E_R\}}{((False \rightarrow False) \land False) \rightarrow False}\{\rightarrow I\}\]

Next, we prove the sequent \(\vdash\) False → ((False → False) ∧ False):

\[\frac{\frac{\boxed{False}}{(False \rightarrow False) \land False}\{CTR\}}{False \rightarrow ((False \rightarrow False) \land False)}\{\rightarrow I\}\]

Putting these results together, and reintroducing the abbreviations, we get

True ∧ False ↔ False

We have thereby calculated one of the lines of the truth table definition of ∧.
The complete truth tables, for all the logical operators,
can be inferred using similar calculations.
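The calculation can be mirrored in Haskell. Taking False as primitive and True as the abbreviation False → False (the encoding below is our own, not from the book's software), the line of the ∧ table comes out as expected:

```haskell
-- False is primitive; True is the abbreviation False → False.
imp :: Bool -> Bool -> Bool
imp a b = not a || b

trueDefined :: Bool
trueDefined = imp False False     -- the defined constant True

-- The line of the truth table just calculated:
-- True ∧ False is equivalent to False.
lineHolds :: Bool
lineHolds = (trueDefined && False) == False
```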

1.3.14 Proof checking by formal program

Informal arguments in English are full of imprecision and ambiguity,
so there is no hope of writing a computer program to determine whether one is valid.
People don’t always agree as to whether an argument holds water!
Formal proofs, however, are totally precise and unambiguous.
Formal arguments may become long and involved,
but computers are good at long and involved calculations.

Formal proofs provide confidence that a theorem is correct.
The language of propositional logic,
along with the natural deduction inference system,
form solid foundations for accurate reasoning.
However in manual proofs, everyone makes mistakes,
and even a tiny error would make a large proof completely untrustworthy.
To get the full benefit of formal logic,
we need computers to help with the proofs.

A proof checker is a computer program that
reads in a theorem and a proof,
and determines whether the proof is valid
and actually establishes the theorem.

A theorem prover is a computer program that reads in a theorem,
and attempts to generate a proof from scratch.

A theorem prover can sometimes save the user a lot of work.
However, theorem provers don’t always succeed,
since there are plenty of true theorems,
whose proofs are too difficult for them to find.
A theorem prover may stop with the message ‘I cannot prove it’,
or it may go into an infinite loop and never stop at all.

The main advantage of a proof checker is that
it can always determine whether a purported proof is valid or invalid.
Proof checkers are also suitable when you want to write a proof by hand
and be sure it's correct.
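To make the idea concrete, here is a toy sketch of a proof checker for a tiny fragment of propositional logic. The types and rules are hypothetical and far smaller than the checker in Stdm06PropositionalLogic.hs; the point is only the shape: a proof is a tree of rule applications, and checking a step means checking that its sub-proofs establish the required premises.

```haskell
-- A toy proof checker (illustrative names, not the book's software).
data Prop = Var String | Prop :/\: Prop
  deriving (Eq, Show)

data Proof
  = Assume Prop          -- cite one of the given assumptions
  | AndI Proof Proof     -- {∧I}:  from a and b, infer a ∧ b
  | AndEL Proof          -- {∧EL}: from a ∧ b, infer a
  deriving Show

-- check returns the proposition a proof establishes from the given
-- assumptions, or Nothing if any step is invalid.
check :: [Prop] -> Proof -> Maybe Prop
check as (Assume p)
  | p `elem` as = Just p
  | otherwise   = Nothing
check as (AndI p q) = (:/\:) <$> check as p <*> check as q
check as (AndEL p)  =
  case check as p of
    Just (a :/\: _) -> Just a
    _               -> Nothing
```

For example, check [Var "a", Var "b"] (AndI (Assume (Var "a")) (Assume (Var "b"))) yields Just (Var "a" :/\: Var "b"), while citing an assumption that was never given yields Nothing. A real checker handles all the inference rules and tracks discharged assumptions, but the recursive structure is the same.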

Proof tools are the subject of active current research,
but they are not limited to research:
they are also practical for real applications.
There are a number of ‘industrial-strength’ proof tools,
for various logical systems.
Proof tools have already been applied successfully,
to some very difficult and important real-world problems.

We’ll now look at a simple proof checking system implemented in Haskell.

You can also use the software to check your own proofs;
it’s nice to be sure your exercises are correct before handing them in!
Furthermore, you may find it interesting to study the implementation of the checker.

It uses some advanced features of Haskell that aren't covered in the class,
though most of it isn't too difficult to understand.
Haskell is well suited to writing proof checkers;
indeed, most industrial-strength proof tools are implemented in functional programming languages.

This is a substantial program that includes an actual proof checker:
Code-DMUC/Stdm06PropositionalLogic.hs

1.3.15 Method 3. Boolean Algebra: Equational Reasoning

1.4 Predicate logic

Code-DMUC/Stdm07PredicateLogic.hs

-- # Software Tools for Discrete Mathematics
module Stdm07PredicateLogic where

-- # Chapter 7.  Predicate Logic
forAll :: [Int] -> (Int -> Bool) -> Bool
forAll u p = all p u

exists :: [Int] -> (Int -> Bool) -> Bool
exists u p = any p u
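Since forAll and exists simply traverse a finite universe with all and any, they can be tried directly on small examples. The universes and predicates below are our own choices (the quantifier definitions are repeated so the block stands alone):

```haskell
forAll :: [Int] -> (Int -> Bool) -> Bool
forAll u p = all p u

exists :: [Int] -> (Int -> Bool) -> Bool
exists u p = any p u

-- ∀x ∈ {1..10}. x > 0      and      ∃x ∈ {1..10}. x² = 49
allPositive, hasSquareRootOf49 :: Bool
allPositive       = forAll [1..10] (\x -> x > 0)
hasSquareRootOf49 = exists [1..10] (\x -> x * x == 49)
```

Both expressions evaluate to True. Note that these quantifiers are decidable only because the universe is a finite list; quantifying over all of Int this way would not terminate on a false ∀ or an unsatisfied ∃.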