1 Logic


1.1 Inspiration

https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667.html

1.2 Reading

Discrete Mathematics Using a Computer (O’Donnell, Hall, Page) Chapters 6, 7

The Haskell Road to Logic, Maths and Programming (Doets, van Eijck) Chapters 2, 3

Discrete Mathematics with Applications - Metric Edition (Epp) Chapters 2, 3

https://www.cs.carleton.edu/faculty/dln/book/ch03_logic_2021_September_08.pdf

https://www.cs.yale.edu/homes/aspnes/classes/202/notes.pdf Chapter 2

https://runestone.academy/ns/books/published/dmoi-4/ch_logic.html
https://runestone.academy/ns/books/published/dmoi-4/sec_logic-statements.html
https://runestone.academy/ns/books/published/dmoi-4/sec_logic-implications.html
https://runestone.academy/ns/books/published/dmoi-4/sec_logic-rules.html

https://runestone.academy/ns/books/published/ads/chapter3.html
https://runestone.academy/ns/books/published/ads/s-propositions-logic-operators.html
https://runestone.academy/ns/books/published/ads/s-truth-tables.html
https://runestone.academy/ns/books/published/ads/s-equivalence-implication.html
https://runestone.academy/ns/books/published/ads/s-logic-laws.html
https://runestone.academy/ns/books/published/ads/s-math-systems.html
https://runestone.academy/ns/books/published/ads/s-propositions-over-universe.html
https://runestone.academy/ns/books/published/ads/s-induction.html
https://runestone.academy/ns/books/published/ads/s-quantifiers.html
https://runestone.academy/ns/books/published/ads/s-proof-review.html

https://runestone.academy/ns/books/published/ads/chapter_13.html
https://runestone.academy/ns/books/published/ads/s-posets-revisited.html
https://runestone.academy/ns/books/published/ads/s-lattices.html
https://runestone.academy/ns/books/published/ads/s-boolean-algebras.html
https://runestone.academy/ns/books/published/ads/s-atoms-of-a-boolean-algebra.html
https://runestone.academy/ns/books/published/ads/s-finite-boolean-algebras-ntuples.html
https://runestone.academy/ns/books/published/ads/s-boolean-expressions.html
https://runestone.academy/ns/books/published/ads/s-logic-design.html

https://runestone.academy/ns/books/published/DiscreteMathText/chapter2.html
https://runestone.academy/ns/books/published/DiscreteMathText/logical_form2-1.html
https://runestone.academy/ns/books/published/DiscreteMathText/conditional2-2.html
https://runestone.academy/ns/books/published/DiscreteMathText/arguments2-3.html

https://runestone.academy/ns/books/published/DiscreteMathText/chapter3.html
https://runestone.academy/ns/books/published/DiscreteMathText/quantifiersI3-1.html
https://runestone.academy/ns/books/published/DiscreteMathText/negquant3-2.html
https://runestone.academy/ns/books/published/DiscreteMathText/multiplequant3-3.html
https://runestone.academy/ns/books/published/DiscreteMathText/argumentsquant3-4.html

https://en.wikipedia.org/wiki/Mathematical_logic
https://en.wikipedia.org/wiki/Propositional_calculus
https://en.wikipedia.org/wiki/First-order_logic
https://en.wikipedia.org/wiki/Truth_function
https://en.wikipedia.org/wiki/Boolean_function
https://en.wikipedia.org/wiki/Truth_table
https://en.wikipedia.org/wiki/Rule_of_replacement
https://en.wikipedia.org/wiki/Rule_of_inference
https://en.wikipedia.org/wiki/List_of_rules_of_inference

1.3 Why logic?

Logic provides a powerful tool for reasoning correctly about mathematics, algorithms, and computers.
You need to understand its basic concepts in order to study many of the more advanced subjects in computing.

In software engineering,
it is good practice to specify what a system should do before starting to code it.
Logic is frequently used for software specifications.

In safety-critical applications,
it is essential to establish that a program is correct.
Conventional debugging isn’t enough.
What we want is a proof of correctness.
Formal logic is the foundation of program correctness proofs.

In information retrieval,
including Web search engines,
logical propositions are used to specify the properties that should (or should not) be present
in a piece of information in order for it to be considered relevant.

In artificial intelligence,
formal logic is used to simulate intelligent thought processes.
People don’t do their ordinary reasoning using mathematical logic,
but logic is a convenient tool for implementing certain forms of reasoning.

In digital circuit design and computer architecture,
logic is the language used to describe the signal values that are produced by components.
A common problem is that a first-draft circuit design written by an engineer is too slow,
so it has to be transformed into an equivalent circuit that is more efficient.
This process is often quite tricky, and logic provides the framework for doing it.

In database systems,
complex queries are built up from simpler components.
It’s essential to have a clear, precise way to express such queries,
so that users of the database can be sure they’re getting the right information.
Logic is the key to expressing such queries.

In compiler construction,
the typechecking phase uses logic.
It must determine whether the program being translated uses any variables or functions inconsistently.
It turns out that the method for doing this is similar to the method for performing logical inference.
Algorithms designed originally to perform calculations in mathematical logic are now embedded in modern compilers.

In programming language design,
a commonly used method for specifying the meaning of a computer program is the lambda calculus,
which is actually a formal logical system,
originally invented by a mathematician for purely theoretical research.

In computability theory,
logic is used both to specify abstract machine models and to reason about their capabilities.
There has been an extremely close interaction between mathematical logicians and theoretical computer scientists,
in developing a variety of machine models.

In scientific ontologies,
logic is used to construct knowledge bases,
and to perform reasoning upon them.

1.4 Theory

See basic table here:
https://en.wikipedia.org/wiki/Truth_table#Truth_table_for_most_commonly_used_logical_operators

True / T / 1
False / F / 0

Logic/logic-truth-table.jpg

Logical And (∧)
The logical and operator corresponds to the English conjunction ‘and’:
it is used to claim that two statements are both true.
Sometimes it is called ‘logical conjunction’, and its mathematical symbol is ∧.

Inclusive Logical Or (∨)
The logical or operator corresponds to the most common usage of the English word ‘or’.
It takes two arguments and returns True if either argument (or both) is True;
otherwise it returns False.
Other commonly used names for this operation are logical disjunction and inclusive or,
and the symbol for the operation is ∨.

Exclusive Logical Or (⊕)
In English, we often think of ‘A or B’ as meaning:
‘either A or B—one or the other, but not both’,
excluding the possibility that A is true at the same time as B.
This meaning of the word ‘or’ is called exclusive or,
because if one of the alternatives is true,
then this excludes the possibility that the other is also true.

Logical Not (¬)
The English word ‘not’ is used to assert that a statement is false.
It corresponds to the logical not (also called logical negation) operator,
whose symbol is ¬.
It takes one argument and returns the opposite truth value.

Logical Implication (→)
The logical implication operator → corresponds to conditional phrases in English.
For example, an English sentence in the form ‘If A is true, then B is true.’
would be translated into the proposition A → B.
Logical implication mirrors the if … then … construct in programming.
Logical implication says nothing about cause-and-effect relationships.
It may seem strange to you to define False → False to be True.

English sentences contain all sorts of connotations,
but logical propositions mean nothing more than what the defining truth tables say.
The truth table for → is just a definition.
It is meaningless to argue about whether the definition given above is ‘correct’;
definitions aren’t either right or wrong, they are just definitions.

The pertinent question to ask is:
Why is it convenient to define → so that False → False is true?
Logicians have been refining their definitions for hundreds of years,
and based on their experience, they feel this is the best way to define →.

However, here’s a more computer-oriented way to think about it:
consider the programming language statement if A then B.
If A turns out to be true,
then the statement B will be executed,
but if A is false,
then the machine won’t even look at B.
It makes no difference at all what B says.
The definition of → captures this indifference to B in the cases where A is false.

Logical Equivalence (↔︎)
The logical equivalence operator ↔︎ is used to claim that
two propositions have the same value: either both are true, or both are false.
The statement A ↔︎ B could also be expressed by writing (A → B) ∧ (B → A).
Sometimes ↔︎ is considered to be an abbreviation for conjunction of two ordinary implications,
rather than a fundamental logical operator.

There are two distinct mathematical ways to claim propositions A and B have the same value.
One way is to write the equation A = B,
and the other way is to claim that the proposition A ↔︎ B has the value True.
Logical equivalence is similar to but not the same as ordinary equality.
The difference between them is important, and we’ll say more about it later.

1.4.1 Haskell Boolean basics

¬x is written in Haskell as not x;

a ∧ b is written either as a && b,
or, in this codebase, also as a /\ b;

a ∨ b is written either as a || b,
or, in this codebase, also as a \/ b;

a → b, in this codebase, is written as a ==> b;

a ↔︎ b, in this codebase, is written as a <=> b.

This is the way that Booleans and Boolean functions are defined in Prelude.
I stole this directly from the Haskell source code:
Logic/DataBool.hs

-- This is copied from the actual implementation in Haskell.
-- So I hide the actual implementation from this namespace:
import qualified Prelude

-- This is all a Boolean is in Haskell!
-- It's amazingly simple.
data Bool = False | True
  deriving (Prelude.Show)

-- | Boolean "and", lazy in the second argument
-- >>> True && True
-- True

-- | Boolean "and", lazy in the second argument
-- >>> True && False
-- False

-- | Boolean "and", lazy in the second argument
-- >>> False && True
-- False

-- | Boolean "and", lazy in the second argument
-- >>> False && False
-- False
(&&) :: Bool -> Bool -> Bool
True && x = x
False && _ = False

-- | Boolean "or", lazy in the second argument
-- >>> True || True
-- True

-- | Boolean "or", lazy in the second argument
-- >>> True || False
-- True

-- | Boolean "or", lazy in the second argument
-- >>> False || True
-- True

-- | Boolean "or", lazy in the second argument
-- >>> False || False
-- False
(||) :: Bool -> Bool -> Bool
True || _ = True
False || x = x

-- | Boolean "not"
-- >>> not True
-- False

-- | Boolean "not"
-- >>> not False
-- True
not :: Bool -> Bool
not True = False
not False = True
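
These definitions really are lazy in the second argument, as the comments claim.
Here is a quick doctest-style check of that behaviour
(a sketch; Prelude.undefined is the standard always-crashing value):

-- >>> False && Prelude.undefined
-- False

The pattern False && _ = False matches without ever evaluating the second argument,
so the crash never happens;
True && Prelude.undefined, by contrast, would raise an error.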

We define some extra logic operators for the sake of this class:
Code-DMUC/Stdm06LogicOperators.hs

-- # Software Tools for Discrete Mathematics
module Stdm06LogicOperators where

-- ∨ Disjunction / or defined for symbolic convenience.
infix 3 \/

(\/) :: Bool -> Bool -> Bool
(\/) = (||)

-- ∧ Conjunction / and defined for symbolic convenience.
infix 4 /\

(/\) :: Bool -> Bool -> Bool
(/\) = (&&)

-- ↔ Equivalence / If and only if
infix 1 <=>

-- |
-- >>> True <=> True
-- True

-- |
-- >>> True <=> False
-- False

-- |
-- >>> False <=> False
-- True

-- |
-- >>> False <=> True
-- False
(<=>) :: Bool -> Bool -> Bool
(<=>) a b = a == b

-- → Implication / Therefore / if
infix 2 ==>

-- |
-- >>> False ==> False
-- True

-- |
-- >>> False ==> True
-- True

-- |
-- >>> True ==> False
-- False

-- |
-- >>> True ==> True
-- True
(==>) :: Bool -> Bool -> Bool
(==>) True False = False
(==>) _ _ = True

-- Exclusive or
infixr 2 <+>

-- |
-- >>> False <+> False
-- False

-- |
-- >>> False <+> True
-- True

-- |
-- >>> True <+> False
-- True

-- |
-- >>> True <+> True
-- False
(<+>) :: Bool -> Bool -> Bool
x <+> y = x /= y

1.4.2 Substitutions and inference

To provide a preview of where we are headed:
just as in elementary arithmetic,
substitutions can be performed on logical statements:

Natural inference:
Logic/logic-inference.png

Boolean algebra:
Logic/logic-laws.png

This will enable complex reasoning and proof-checking systems.
We will build toward a full understanding of each entry in the table above.

1.4.3 Types of formal logic

There are several different kinds of formal logic,
and for now we will consider just the simplest one, called propositional logic.
After looking at the language of propositional logic,
we will detail three different mathematical systems for reasoning formally about propositions:

  1. Truth tables,
  2. Natural deduction, and
  3. Boolean algebra.

1.4.4 1. Truth tables

Truth tables define the meanings of the logical operators;
they can be used to calculate the values of expressions,
and to prove that two propositions are logically equivalent.
As truth tables directly express the underlying meaning of propositions,
they are a semantic technique.
Truth tables are easy to understand for small problems,
but they become impossibly large for most realistic problems.

1.4.5 2. Natural deduction

Natural deduction is a formalisation of the basic principles of reasoning.
It provides a set of inference rules,
that specify exactly what new facts you are allowed to deduce from some given facts.
There is no notion of the ‘value’ of propositions;
everything in this system is encapsulated in the inference rules.
Since the rules are based on the forms of propositions,
and we don’t work with truth values,
inference is a purely syntactic approach to logic.
Recent techniques in programming language theory employ more advanced logical systems,
that are related to natural deduction.

1.4.6 3. Boolean algebra

Boolean algebra is another syntactic formalisation of logic,
using a set of equations (the laws of Boolean algebra)
to specify that certain propositions are equal to each other.
Boolean algebra is an axiomatic approach,
similar to elementary algebra and geometry.
It provides an effective set of laws for manipulating propositions,
and is an essential tool for digital circuit design.

1.4.7 Propositional logic

A proposition is just a symbolic variable,
whose value must be either True or False,
and which stands for an English statement,
that could be either true or false.

1.4.8 Well Formed Formulae (WFF)

Before spending a lot of effort arguing about a statement somebody claims to be true,
it’s prudent to make sure the statement actually makes sense,
and to check that all the participants in the debate agree about what it means.

It might be fun to discuss:
‘International trade creates jobs’,

but there’s no point wasting time on:
‘Quickly raspberries running green’.
That one isn’t either true or false—it’s just gibberish.

A big advantage of using logic rather than English,
is that we define propositional logic formally, precisely, and unambiguously.
This allows us to check that a proposition makes sense,
before expending any further work deciding whether it’s true.

Because there is no formal definition of natural languages,
it’s fundamentally impossible to do that for English.
It isn’t even possible in all cases,
to decide definitively whether a statement in a natural language is grammatical:
English has no formal grammar.
English-language ontologies do have formal definitions,
but those are a more advanced topic in logic!

A proposition that ‘makes sense’ is called a well-formed formula,
which is often abbreviated WFF (pronounced ‘woof’).
Every WFF has a well-defined meaning,
and you can work out whether it’s true or false,
given the values of the propositional variables.

For example, (A → (B ∧ (¬A))) is a well-formed formula,
so we can calculate what it means,
and it’s worth spending a little time deciding
whether there are any possible values of A and B for which it’s true.
(It’s True exactly when A is False.)
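
As a quick sanity check,
we can evaluate the formula for all four combinations of A and B,
using the ==> operator defined earlier in this chapter
(a doctest-style sketch):

-- >>> [a ==> (b && not a) | a <- [False, True], b <- [False, True]]
-- [True,True,False,False]

The result is True in exactly the two cases where A is False.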

On the other hand, what about ∨ A B ¬C?
This isn’t a WFF.
It’s just nonsense, so it has no truth value.

A term in propositional logic is sometimes called a formula,
and a term that is constructed correctly, following all the syntax rules,
is called a well-formed formula, abbreviated WFF.

We will never consider the meaning of ill-formed formulas;
these are simply considered to be meaningless.

For example, the formula (P ∧ (¬P)) is well-formed;
this means that we can go on to consider its meaning
(this WFF happens to be False).

However, the formula P ∨ → Q violates the syntax rules of the language,
so we refuse to be drawn into debates about whether it is True or False.

The situation is similar to that of programming languages:
we can talk about the behaviour of a program that is syntactically correct
(and the behaviour might be correct or incorrect),
but we never talk about the behaviour of a program with syntax errors.
The compiler would refuse to translate the program, so there is no run-time behaviour.

1.4.9 Inductive formalization of the WFF syntax

The set of well-formed formulas (WFFs) can be defined,
by saying precisely how you can construct them.

The constants False and True are WFFs.
The only examples are: False, True.

Any propositional variable is a WFF.
Examples: P, Q, R.

If a is a WFF, then (¬a) is a WFF.
Examples: (¬P), (¬Q), (¬(¬P)).

If a and b are WFFs, then (a ∧ b) is a WFF.
Examples: (P ∧ Q), ((¬P) ∧ (P ∧ Q)).

If a and b are WFFs, then (a ∨ b) is a WFF.
Examples: (P ∨ Q), ((P ∧ Q) ∨ (¬Q)).

If a and b are WFFs, then (a → b) is a WFF.
Examples: (P → Q), ((P ∧ Q) → (P ∨ Q)).

If a and b are WFFs, then (a ↔︎ b) is a WFF.
Examples: (P ↔︎ Q), ((P ∧ Q) ↔︎ (Q ∧ P)).

Any formula that cannot be constructed using these rules is not a WFF.
Examples: (P ∨ ∧Q), P → ¬.

The rules may be used repeatedly to build nested formulas.
This process is recursive!

For example, here is a demonstration using recursion,
that (P → (Q ∧ R)) is a WFF:
1. P, Q, and R are propositional variables, so they are all WFFs.
2. Because Q and R are WFFs, (Q ∧ R) is also a WFF.
3. Because P and (Q ∧ R) are both WFFs, so is (P → (Q ∧ R)).
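
Because the rules are recursive,
they transcribe directly into a Haskell algebraic data type.
The following is only an illustrative sketch
(the type and constructor names are mine, not part of the course code):

-- A sketch: WFFs as a recursive data type.
data WFF
  = FalseW | TrueW -- the constants False and True
  | Var String -- propositional variables P, Q, R, ...
  | Not WFF -- (¬a)
  | And WFF WFF -- (a ∧ b)
  | Or WFF WFF -- (a ∨ b)
  | Imp WFF WFF -- (a → b)
  | Iff WFF WFF -- (a ↔︎ b)
  deriving (Show)

-- The WFF (P → (Q ∧ R)), built by exactly the three steps above:
wffExample :: WFF
wffExample = Imp (Var "P") (And (Var "Q") (Var "R"))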

1.4.10 Precedence of Logical Operators

Well-formed formulas are fully parenthesised,
so there is no ambiguity in their interpretation.
Often, however, it’s more convenient to omit some of the parentheses,
for the sake of readability.
For example, we may prefer to write:
P → ¬Q ∧ R rather than
(P → ((¬Q) ∧ R)).

The syntax rules given below define what an expression means,
when some of the parentheses are omitted.
These conventions are analogous to those of elementary algebra,
as well as most programming languages,
where there is a precedence rule that says:
a + b × c means a + (b × c) rather than (a + b) × c.
The syntax rules for propositional logic are straightforward:

  1. The most tightly binding operator is ¬.
    For example, ¬P ∧ Q means (¬P ) ∧ Q.
    Furthermore, ¬¬P means ¬(¬P ).

  2. The second highest precedence is the ∧ operator.
    In expressions combining ∧ and ∨, the ∧ operations come first.
    For example, P ∨ Q ∧ R means P ∨ (Q ∧ R).
    If there are several ∧ operations in a sequence,
    then they are performed left to right;
    for example, P ∧ Q ∧ R ∧ S means (((P ∧ Q) ∧ R) ∧ S).
    This property is described by saying ‘∧ associates to the left.’

  3. The ∨ operator has the next level of precedence,
    and it associates to the left.
    For example, P ∧ Q ∨ R ∨ U ∧ V means (((P ∧ Q) ∨ R) ∨ (U ∧ V)).
    This example would be more readable,
    if a few of the parentheses are removed:
(P ∧ Q) ∨ R ∨ (U ∧ V).

  4. The → operator has the next lower level of precedence.
    For example, P ∧ Q → P ∨ Q means (P ∧ Q) → (P ∨ Q).
    The → operator associates to the right;
    thus P → Q → R → S means (P → (Q → (R → S))).

  5. The ↔︎ operator has the lowest level of precedence,
    and it associates to the right,
    but we recommend that you use parentheses,
    rather than relying on the associativity.

In general it’s a good idea to omit parentheses that are clearly redundant,
but to include parentheses that improve readability.
Too much reliance on these syntax conventions leads to inscrutable expressions.
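
Incidentally, the fixity declarations in Stdm06LogicOperators mirror this precedence table:
not binds most tightly (it is ordinary function application),
then /\ at infix 4, \/ at infix 3, ==> at infix 2, and <=> at infix 1.
One caveat: those operators are declared infix (non-associative),
so a chain like P /\ Q /\ R needs explicit parentheses in Haskell,
even though the mathematical convention lets you omit them.

-- True \/ True /\ False parses as True \/ (True /\ False):
-- >>> True \/ True /\ False
-- True

-- not False /\ True parses as (not False) /\ True:
-- >>> not False /\ True
-- True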

1.4.11 Object Language and Meta-Language

Propositional logic is a precisely defined language of well-formed formulas.
We need another richer language to use,
when we are reasoning about well-formed formulas.
This means we will be working simultaneously with two distinct languages,
that must be kept separate.
The WFFs of propositional logic will be called the object language,
because the objects we will be talking about are sentences in propositional logic.
The algebraic language of equations, substitutions, and justifications resembles arithmetic;
we will call it the meta-language,
because it ‘surrounds’ the object language,
and can be used to talk about propositions.

This distinction between object language and meta-language is common.
For example, there are many programming languages,
and we need both to write programs in them,
and also to make statements about them.
We have already seen all the operators in the propositional logic object language;
these are ¬, ∧, ∨, → and ↔︎.

Later we will use several operators that belong to the meta-language:
\(\vDash\) (which will be used in the section on truth tables);
\(\vdash\) (which will be used in the section on natural deduction); and
= (which will be used in the section on Boolean algebra).
After studying each of those three major systems,
we will look briefly at some of the metalogical properties of:
\(\vDash\), \(\vdash\), and = when discussing meta-logic.

1.4.12 Method 1. Truth tables

The first method of reasoning in propositional logic is the semantic method of truth tables.
Truth tables provide an easy method for reasoning about propositions
(as long as they don’t contain too many variables).
The truth table approach treats propositions as expressions to be evaluated.
All the expressions have type Bool, the constants are True and False,
and the variables A, B, … stand for unknowns that must be either True or False.

When you write out truth tables by hand,
it’s really too tedious to keep writing True and False.
We recommend abbreviating True as 1 and False as 0;
as pointed out earlier,
T and F are also common abbreviations, but are harder to read.
However, it should be stressed that 0 and 1 are just abbreviations,
for the logical constants, whose type is Bool.
Truth tables don’t contain numbers.
Only a few truth tables will appear below,
so we will forgo the abbreviations,
and stick safely with True and False.
However, when truth tables are used for reasoning about digital circuits,
it is standard practice to use 0 and 1.

We have already been using truth tables to define the logical operators,
so the format should be clear by now.
Nothing could be simpler than using a truth table to evaluate a propositional expression:
you just calculate the proposition for all possible values of the variables,
write down the results in a table, and see what happens.
Generally, it’s useful to give several of the intermediate results in the truth table,
not just the final value of the entire proposition.

For example, let’s consider the proposition ((A → B) ∧ ¬B) → ¬A and find out when it’s true.
On the left side of the table we list all the possible values of A and B,
on the right side we give the value of the full proposition,
and in the middle (separated by double vertical lines) we give the values of the subexpressions.

For example, to solve this by hand: ((A → B) ∧ ¬B) → ¬A

A       B       A → B       ¬B      (A → B) ∧ ¬B        ¬A      ((A → B) ∧ ¬B) → ¬A
0       0       1           1       1                   1       1
0       1       1           0       0                   1       1
1       0       0           1       0                   0       1
1       1       1           0       0                   0       1

Code-DMUC/logicTableExample.hs

import Stdm06LogicOperators

-- |
-- >>> aBooleanProposition True True
-- True

-- |
-- >>> aBooleanProposition True False
-- True

-- |
-- >>> aBooleanProposition False True
-- True

-- |
-- >>> aBooleanProposition False False
-- True
aBooleanProposition a b = ((a ==> b) /\ not b) ==> not a

1.4.12.1 Tautologies and Contradictions

A tautology is a proposition that is always True, regardless of the values of its variables.

A contradiction is a proposition that is always False, regardless of the values of its variables.

You can find out whether a proposition is a tautology or a contradiction (or neither),
by writing down its truth table:
If a column contains nothing but True, then the proposition is a tautology.
If there is nothing but False, then it’s a contradiction.
If there is a mixture of True and False, then the proposition is neither.

For example,
P ∨ ¬P is a tautology,
but P ∧ ¬P is a contradiction,
and the following truth table proves it:

P       ¬P      P ∨ ¬P      P ∧ ¬P
0       1       1           0
1       0       1           0

Logic/tautologyContradiction.hs

-- |
-- >>> tautology True
-- True

-- |
-- >>> tautology False
-- True
tautology p = p || not p

-- |
-- >>> contradiction True
-- False

-- |
-- >>> contradiction False
-- False
contradiction p = p && not p

1.4.12.2 Meta-language

There is a special notation for expressing statements about propositions.
As it’s used to make statements about propositions,
this notation belongs to the meta-language,
and it uses the meta-operator \(\vDash\),
which is often pronounced ‘double-turnstile’
(to distinguish it from \(\vdash\), which you’ll see later,
and which is pronounced ‘turnstile’).

The notation P1, P2, …, Pn \(\vDash\) Q means:
if all of the propositions P1, P2, …, Pn are true,
then the proposition Q is also true.

The \(\vDash\) meta-operator makes a statement about the actual meanings of the propositions;
it’s concerned with which propositions are True and which are False.
Truth tables can be used to prove statements containing \(\vDash\).
The meaning of a proposition is called its semantics.
The set of basic truth values (True and False),
along with a method for calculating the meaning of any well-formed formula,
is called a model of the logical system.

There may be any number n of propositions P1, P2, …, Pn in the list of assumptions.
If n = 1 then the statement looks like P \(\vDash\) Q.
A particularly important case is when n = 0,
so there are no assumptions at all,
and the statement becomes simply \(\vDash\) Q.
This means that Q is always true, regardless of the values of the propositional variables inside it;
in other words, Q is a tautology.
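
To make the n = 1 case concrete,
here is a sketch (the name entails2 is mine, not from the readings)
that brute-forces P \(\vDash\) Q for propositions over two variables,
in the same spirit as the validity checkers shown later in this section:

-- A sketch: p entails q when every assignment that makes p True
-- also makes q True.
entails2 :: (Bool -> Bool -> Bool) -> (Bool -> Bool -> Bool) -> Bool
entails2 p q =
  and [not (p a b) || q a b | a <- [True, False], b <- [True, False]]

For example, entails2 (\a b -> a && b) (\a b -> a || b) yields True,
confirming that P ∧ Q \(\vDash\) P ∨ Q.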

1.4.12.3 Limitations of Truth Tables

The truth table method is straightforward to use;
it just requires some brute-force calculation.
You don’t need any insight into the proposition,
which may be true (or false, or sometimes true).
This is both a strength and a weakness of the method.
It’s comforting to know that you can reliably crank out a proof, given enough time.
However, a big advantage of the other two logical systems we will study,
natural deduction and Boolean algebra,
is that they give much better insight into why a theorem is true.

The truth table method requires one line for each combination of variable values.
If there are k variables, this means there are \(2^k\) lines in the table.
For one variable, you get \(2^1 = 2\) lines (for example, the truth table for the ¬ operator).
For two variables, there are \(2^2 = 4\) lines (like the definitions of ∨, ∧, → and ↔︎).
For three variables there are \(2^3 = 8\) lines, which is about as much as anybody can handle.

Since the number of lines in the table grows exponentially in the number of variables,
the truth table method becomes unwieldy for most interesting problems.
Later, we will prove a theorem whose truth table has \(2^{129}\) lines.
That number is larger than the number of atoms in the known universe,
so you’ll be relieved to know that we will skip the truth table and use more abstract, efficient methods.

See how we might use functions to brute-force truth tables,
and why that is not an ideal solution:
Code-HRLMP/HL02TAMO.hs

{-# LANGUAGE FlexibleInstances #-}

module HL02TAMO where

-- Implication / Therefore / if
infix 1 ==>

-- |
-- >>> False ==> False
-- True

-- |
-- >>> False ==> True
-- True

-- |
-- >>> True ==> False
-- False

-- |
-- >>> True ==> True
-- True
(==>) :: Bool -> Bool -> Bool
x ==> y = (not x) || y

-- Equivalence / If and only if
infix 1 <=>

-- |
-- >>> True <=> True
-- True

-- |
-- >>> True <=> False
-- False

-- |
-- >>> False <=> False
-- True

-- |
-- >>> False <=> True
-- False
(<=>) :: Bool -> Bool -> Bool
x <=> y = x == y

-- Exclusive or
infixr 2 <+>

-- |
-- >>> False <+> False
-- False

-- |
-- >>> False <+> True
-- True

-- |
-- >>> True <+> False
-- True

-- |
-- >>> True <+> True
-- False
(<+>) :: Bool -> Bool -> Bool
x <+> y = x /= y

-- Examples:
p = True

q = False

-- |
-- >>> formula1
-- False
formula1 = (not p) && (p ==> q) <=> not (q && (not p))

-- | To prove a statement true, we can brute-force check all combinations of True and False:
-- >>> formula2 True True
-- False

-- |
-- >>> formula2 True False
-- False

-- |
-- >>> formula2 False True
-- False

-- |
-- >>> formula2 False False
-- True
formula2 p q = ((not p) && (p ==> q) <=> not (q && (not p)))

-- | This checks any boolean formula (bf) which takes one value, for example:
-- >>> valid1 excluded_middle
-- True
valid1 :: (Bool -> Bool) -> Bool
valid1 bf = (bf True) && (bf False)

excluded_middle :: Bool -> Bool
excluded_middle p = p || not p

-- | This checks any logical proposition with two values, for example:
-- >>> valid2 form1
-- True

-- |
-- >>> valid2 form2
-- False
valid2 :: (Bool -> Bool -> Bool) -> Bool
valid2 bf =
  (bf True True)
    && (bf True False)
    && (bf False True)
    && (bf False False)

form1 p q = p ==> (q ==> p)

form2 p q = (p ==> q) ==> p

-- | This checks any logical statement with three values, for example:
-- >>> valid3 (\p q r -> (p && q) ==> (q || r))
-- True
valid3 :: (Bool -> Bool -> Bool -> Bool) -> Bool
valid3 bf =
  and
    [ bf p q r | p <- [True, False], q <- [True, False], r <- [True, False]
    ]

-- | This checks any logical statement with four values, for example:
-- >>> valid4 (\p q r s -> (p && q && r) ==> (r || s))
-- True
valid4 :: (Bool -> Bool -> Bool -> Bool -> Bool) -> Bool
valid4 bf =
  and
    [ bf p q r s | p <- [True, False], q <- [True, False], r <- [True, False], s <- [True, False]
    ]

-- | Logical equivalence of boolean formulae with one input can be tested as follows:
-- >>> logEquiv1 bfBasic1a bfBasic1b
-- True
logEquiv1 :: (Bool -> Bool) -> (Bool -> Bool) -> Bool
logEquiv1 bf1 bf2 =
  (bf1 True <=> bf2 True) && (bf1 False <=> bf2 False)

bfBasic1a :: (Bool -> Bool)
bfBasic1a b = b || not b

bfBasic1b :: (Bool -> Bool)
bfBasic1b b = not b || b

-- | Logical equivalence of boolean formulae with two inputs can be tested as follows:
-- >>> logEquiv2 formula3 formula4
-- True

-- | Logical equivalence of boolean formulae with two inputs can be tested as follows:
-- >>> logEquiv2 formula3 formula5
-- False

-- | Wrapping it up with a validity check:
-- >>> valid2 formula5
-- True
logEquiv2 ::
  (Bool -> Bool -> Bool) ->
  (Bool -> Bool -> Bool) ->
  Bool
logEquiv2 bf1 bf2 =
  and
    [ (bf1 p q) <=> (bf2 p q) | p <- [True, False], q <- [True, False]
    ]

formula3 p q = p

formula4 p q = (p <+> q) <+> q

formula5 p q = p <=> ((p <+> q) <+> q)

-- | Logical equivalence of boolean formulae with three inputs can be tested as follows:
-- >>> logEquiv3 (\p q r -> p && (q || r)) (\p q r -> (p && q) || (p && r))
-- True
logEquiv3 ::
  (Bool -> Bool -> Bool -> Bool) ->
  (Bool -> Bool -> Bool -> Bool) ->
  Bool
logEquiv3 bf1 bf2 =
  and
    [ (bf1 p q r) <=> (bf2 p q r) | p <- [True, False], q <- [True, False], r <- [True, False]
    ]

-- For defining equivalence laws (Theorem 2.10).
-- Show mathy definition in book (page 45/56) next to code:

class TF p where
  valid :: p -> Bool
  lequiv :: p -> p -> Bool

instance TF Bool where
  valid = id
  lequiv f g = f == g

instance (TF p) => TF (Bool -> p) where
  valid f = valid (f True) && valid (f False)
  lequiv f g =
    (f True) `lequiv` (g True)
      && (f False) `lequiv` (g False)

-- Law of double negation
-- P ≡ ¬¬P
test1 = lequiv id (\p -> not (not p))

-- Laws of idempotence
-- P ∧ P ≡ P
-- P ∨ P ≡ P
test2a = lequiv id (\p -> p && p)

test2b = lequiv id (\p -> p || p)

-- (P ⇒ Q) ≡ ¬P ∨ Q
-- ¬(P ⇒ Q) ≡ P ∧ ¬Q
test3a = lequiv (\p q -> p ==> q) (\p q -> not p || q)

test3b = lequiv (\p q -> not (p ==> q)) (\p q -> p && not q)

-- Laws of contraposition
-- (¬P ⇒ ¬Q) ≡ (Q ⇒ P )
-- (P ⇒ ¬Q) ≡ (Q ⇒ ¬P )
-- (¬P ⇒ Q) ≡ (¬Q ⇒ P )
test4a = lequiv (\p q -> not p ==> not q) (\p q -> q ==> p)

test4b = lequiv (\p q -> p ==> not q) (\p q -> q ==> not p)

test4c = lequiv (\p q -> not p ==> q) (\p q -> not q ==> p)

-- (P ⇔ Q) ≡ ((P ⇒ Q) ∧ (Q ⇒ P )) ≡ ((P ∧ Q) ∨ (¬P ∧ ¬Q))
test5a =
  lequiv
    (\p q -> p <=> q)
    (\p q -> (p ==> q) && (q ==> p))

test5b =
  lequiv
    (\p q -> p <=> q)
    (\p q -> (p && q) || (not p && not q))

-- Laws of commutativity
-- P ∧ Q ≡ Q ∧ P
-- P ∨ Q ≡ Q ∨ P
test6a = lequiv (\p q -> p && q) (\p q -> q && p)

test6b = lequiv (\p q -> p || q) (\p q -> q || p)

-- DeMorgan laws
-- ¬(P ∧ Q) ≡ ¬P ∨ ¬Q
-- ¬(P ∨ Q) ≡ ¬P ∧ ¬Q
test7a =
  lequiv
    (\p q -> not (p && q))
    (\p q -> not p || not q)

test7b =
  lequiv
    (\p q -> not (p || q))
    (\p q -> not p && not q)

-- Laws of associativity
-- P ∧ (Q ∧ R) ≡ (P ∧ Q) ∧ R
-- P ∨ (Q ∨ R) ≡ (P ∨ Q) ∨ R
test8a =
  lequiv
    (\p q r -> p && (q && r))
    (\p q r -> (p && q) && r)

test8b =
  lequiv
    (\p q r -> p || (q || r))
    (\p q r -> (p || q) || r)

-- Distribution laws
-- P ∧ (Q ∨ R) ≡ (P ∧ Q) ∨ (P ∧ R)
-- P ∨ (Q ∧ R) ≡ (P ∨ Q) ∧ (P ∨ R)
test9a =
  lequiv
    (\p q r -> p && (q || r))
    (\p q r -> (p && q) || (p && r))

test9b =
  lequiv
    (\p q r -> p || (q && r))
    (\p q r -> (p || q) && (p || r))

-- Proof-related code (not for logic section):
square1 :: Integer -> Integer
square1 x = x ^ 2

square2 :: Integer -> Integer
square2 = \x -> x ^ 2

m1 :: Integer -> Integer -> Integer
m1 = \x -> \y -> x * y

m2 :: Integer -> Integer -> Integer
m2 = \x y -> x * y

solveQdr :: (Float, Float, Float) -> (Float, Float)
solveQdr = \(a, b, c) ->
  if a == 0
    then error "not quadratic"
    else
      let d = b ^ 2 - 4 * a * c
       in if d < 0
            then error "no real solutions"
            else
              ( (-b + sqrt d) / (2 * a), -- parentheses around (2 * a) are required:
                (-b - sqrt d) / (2 * a) -- x / 2 * a would mean (x / 2) * a
              )

every, some :: [a] -> (a -> Bool) -> Bool
every xs p = all p xs
some xs p = any p xs

Code-HRLMP/Sol02.hs

module Sol02 where

import HL01GS
import HL02TAMO

-- 2.13 Theorem 2.12 p48/59

-- ¬⊤ ≡ ⊥; ¬⊥ ≡ ⊤
tst1a = not True <=> False

tst1b = not False <=> True

-- P ⇒ ⊥ ≡ ¬P
tst2 = logEquiv1 (\p -> p ==> False) (\p -> not p)

-- P ∨ ⊤ ≡ ⊤; P ∧ ⊥ ≡ ⊥ (dominance laws)
tst3a = logEquiv1 (\p -> p || True) (const True)

tst3b = logEquiv1 (\p -> p && False) (const False)

-- P ∨ ⊥ ≡ P; P ∧ ⊤ ≡ P (identity laws)
tst4a = logEquiv1 (\p -> p || False) id

tst4b = logEquiv1 (\p -> p && True) id

-- P ∨ ¬P ≡ ⊤ (law of excluded middle)
tst5 = logEquiv1 excluded_middle (const True)

-- P ∧ ¬P ≡ ⊥ (contradiction)
tst6 = logEquiv1 (\p -> p && not p) (const False)

-- The implementation uses excluded_middle; this is defined in Chapter 2 as a name for the function
-- (\ p -> p || not p). const k is the function which gives value k for any argument.
-- Note that the implementation of tst4a and tst4b makes clear why P ∨ ⊥ ≡ P and P ∧ ⊤ ≡ P are called laws of identity.

-- 2.15 A propositional contradiction is a formula that yields false for every combination of truth values for its proposition letters. Write Haskell definitions of contradiction tests for propositional functions with one, two and three variables.
contrad1 :: (Bool -> Bool) -> Bool
contrad1 bf = not (bf True) && not (bf False)

contrad2 :: (Bool -> Bool -> Bool) -> Bool
contrad2 bf = and [not (bf p q) | p <- [True, False], q <- [True, False]]

contrad3 :: (Bool -> Bool -> Bool -> Bool) -> Bool
contrad3 bf =
  and [not (bf p q r) | p <- [True, False], q <- [True, False], r <- [True, False]]

-- 2.51 Define a function unique :: (a -> Bool) -> [a] -> Bool that gives True for unique p xs just in case there is exactly one object among xs that satisfies p.
unique :: (a -> Bool) -> [a] -> Bool
unique p xs = length (filter p xs) == 1

-- 2.52 Define a function parity :: [Bool] -> Bool that gives True for parity xs just in case an even number of the elements of xs equal True.
parity :: [Bool] -> Bool
parity [] = True
parity (x : xs) = x /= (parity xs)

-- 2.53 Define a function evenNR :: (a -> Bool) -> [a] -> Bool that gives True for evenNR p xs just in case an even number of the elements of xs have property p. (Use the parity function from the previous exercise.)
evenNR :: (a -> Bool) -> [a] -> Bool
evenNR p = parity . map p

-- Alternative solution:
evenNR' :: (a -> Bool) -> [a] -> Bool
evenNR' p xs = even (length (filter p xs))

1.4.13 Method 2. Natural Deduction: Inference Reasoning

Natural deduction is a formal logical system,
that allows you to reason directly with logical propositions using inference,
without having to substitute truth values for variables or evaluate expressions.
Natural deduction provides a solid theoretical foundation for logic,
and it helps focus attention on why logical propositions are true or false.
Natural deduction is well suited for automatic proof checking by a program,
and it has a variety of applications in computer science.

In normal English usage,
the verb ‘infer’ means to reason about some statements,
in order to reach a conclusion.
This is an informal process,
and it can run into problems due to the ambiguities of English,
but the intent is to establish that the conclusion is definitely true.

Logical inference means reasoning formally about a set of statements,
in order to decide, beyond all shadow of doubt, what is true.
In order to make the reasoning absolutely clear cut,
we need to do several things:

The set of statements we’re reasoning about must be defined.
This is called the object language,
and a typical choice would be propositional expressions.

The methods for inferring new facts from given information must be specified precisely.
These are called the inference rules.

The form of argument must be defined precisely,
so that if anybody claims to have an argument that proves a fact,
we can determine whether it actually is a valid argument.
This defines a meta-language,
in which proofs of statements in the object language can be written.
Every step in the reasoning must be justified by an inference rule.

There are several standard ways to write down formal proofs.
In this book we’ll use the form which is most commonly used in computer science.

The notation P1, P2, …, Pn \(\vdash\) Q is called a sequent,
and it means that if all of the propositions P1, P2, …, Pn are known,
then the proposition Q can be inferred formally,
using the inference rules of natural deduction.

We have seen two similar notations:

With truth tables,
we had meta-logical statements with the \(\vDash\) operator,
like P \(\vDash\) Q → P.
This statement means that if P is True,
then the proposition Q → P is also True.
It says nothing about how we know that to be the case.

In contrast, the notation P \(\vdash\) Q → P, which will be used in this section,
means there is a proof of Q → P, and the proof assumes P.
Both \(\vDash\) and \(\vdash\) are used to state theorems;
the distinction is that \(\vDash\) is concerned with the ultimate truth of the propositions,
while \(\vdash\) is concerned with whether we have a proof.
We will return to the relationship between \(\vDash\) and \(\vdash\) later.

In formal logic, you can’t just make intuitive arguments,
and hope they are convincing.
Every step of reasoning you make has to be backed up by an inference rule.
Intuitively, an inference rule says,
“If you know that Statement1 and Statement2 are established (either assumed or proven),
then you may infer Statement3,”
and furthermore, the inference constitutes a proof.
Statement1 and Statement2 are called the assumptions of the inference rule,
and Statement3 is called the conclusion.
There may be any number of assumptions;
there don’t have to be just two of them.
Here’s an example of an inference rule about the ∧ operator,
written informally in English:

If you know some proposition a, and also the proposition b,
then you are allowed to infer the proposition a ∧ b.

In this example, there are two assumptions—a and b—and the conclusion is a ∧ b.
Note that we have expressed this inference using meta-variables a and b.
These meta-variables could stand for any WFF;
thus a might be P, Q → R, etc.
The variables P, Q, etc., are propositional variables,
belonging to the object language,
and their values are True or False.
We will adopt the following convention:

Meta-Variables belong to the meta-language,
and are written as lower-case letters a, b, c, …
The value of a meta-variable is a WFF.
For example a might have the value P ∧ Q.

Propositional variables belong to the object language,
and are written as upper-case letters A, B, …, P, Q, R, …
The value of a propositional variable is either True or False.

1.4.13.1 Formalizing inference

Inference rules are expressed by writing down the assumptions above a horizontal line,
and writing the conclusion below the line:

\[\frac{Assumption1 \qquad Assumption2}{Conclusion3}\]

This says that if you can somehow establish the truth of Assumption1 and Assumption2,
then Conclusion3 is guaranteed (by the inference rule) to be true.
The inference about ∧ would be written formally as

\[\frac{a \qquad b}{a \land b}\]

An inference rule works in only one direction—for example,
if you have established a ∧ b, then you cannot use the rule above to infer a.
A different inference rule, which we will see later, would be needed.

The list below summarizes all the inference rules of propositional logic.
In the next several sections we will work systematically through them all.

And Introduction \(\{ \land I \}\):
\[\frac{a \qquad b}{a \land b} \{ \land I \}\]

And Elimination \(\{ \land E_L \}, \{ \land E_R \}\):
\[\frac{a \land b}{a} \{ \land E_L \}\]
\[\frac{a \land b}{b} \{ \land E_R \}\]

Imply Elimination \(\{ \rightarrow E \}\):
\[\frac{a \qquad a \rightarrow b}{b} \{ \rightarrow E \}\]

Imply Introduction \(\{ \rightarrow I \}\):
\[\frac{a \vdash b}{a \rightarrow b} \{ \rightarrow I \}\]

Or Introduction \(\{ \lor I_L \}, \{ \lor I_R \}\):
\[\frac{a}{a \lor b} \{ \lor I_L \}\]
\[\frac{b}{a \lor b} \{ \lor I_R \}\]

Or Elimination \(\{ \lor E \}\):
\[\frac{a \lor b \qquad a \vdash c \qquad b \vdash c}{c} \{ \lor E \}\]

Identity \(\{ ID \}\):
\[\frac{a}{a} \{ ID \}\]

Contradiction \(\{ CTR \}\):
\[\frac{False}{a} \{ CTR \}\]

Reductio ad Absurdum \(\{ RAA \}\):
\[\frac{\neg a \vdash False}{a} \{ RAA \}\]

Many of the rules fall into two categories.

The introduction rules have a conclusion,
into which a new logical operator has been introduced;
they serve to build up more complex expressions from simpler ones.

The elimination rules require an assumption,
that contains a logical operator which is eliminated from the conclusion;
these are used to break down complex expressions into simpler ones.

Introduction rules tell you what you need to know,
in order to introduce a logical operator.

Elimination rules tell you what you can infer,
from a proposition containing a particular operator.

1.4.13.2 Definitions of True, ¬, and ↔︎

Natural deduction works with a very minimal set of basic operators.
In fact, the only primitive built-in objects are
the constant False and the three operators ∧, ∨ and →.

Everything else is an abbreviation!
It’s particularly intriguing that:
False is the only primitive logic value in the natural deduction system.

The constant True and the operators ¬ and ↔︎ are abbreviations defined as follows:

True = False → False
¬a = a → False
a ↔︎ b = (a → b) ∧ (b → a)

You can check all of these definitions semantically, using truth tables.
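
For instance, here is a sketch using logEquiv1 and logEquiv2 from HL02TAMO
(the names checkTrue, checkNot, and checkIff are mine):

-- All three of these evaluate to True:
checkTrue = (False ==> False) == True -- True = False → False
checkNot = logEquiv1 not (\a -> a ==> False) -- ¬a = a → False
checkIff = logEquiv2 (<=>) (\a b -> (a ==> b) && (b ==> a)) -- a ↔︎ b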

In natural deduction, we will manipulate them only using the inference rules.

Notice that ¬False is defined to be False → False,
which happens to be the definition of True.
In other words, ¬False = True.
Going the other direction,
¬True becomes True → False when the ¬ abbreviation is expanded out,
and that becomes (False → False) → False when the True is expanded out.
Later we will see an inference rule (called Reductio ad Absurdum)
which will allow us to infer False from (False → False) → False.
The result is that ¬True = False and ¬False = True.

We will write propositions with True, ¬ and ↔︎ just as usual,
but sometimes to make a proof go through,
it will be necessary to expand out the abbreviations,
and work just with the primitives.

1.4.13.3 And Introduction \(\{ \land I \}\):

\[\frac{a \qquad b}{a \land b} \{ \land I \}\]

The And Introduction inference rule says that:
if you know that some proposition a is true,
and you also know that b is true,
then you may infer that the proposition a ∧ b is true.
As the name ‘And Introduction’ suggests,
the rule specifies what you have to know,
in order to infer a proposition with a new occurrence of ∧.

We will now work through several examples,
starting with a theorem saying that:
the conclusion P ∧ Q can be inferred from the assumptions P and Q.

Theorem 35.
P, Q \(\vdash\) P ∧ Q

Proof. The theorem is proved by just one application of the {∧I} inference;
it’s the simplest possible example of how to use the {∧I} inference rule.

\[\frac{P \qquad Q}{P \land Q} \{ \land I \}\]

Notice that the theorem above involves two specific propositional variables, P and Q.
The {∧I} rule does not require any particular propositions.
It uses the meta-variables a and b, which can stand for any well-formed formula.
For example, the following theorem has the same structure as the previous one,
and is also proved with just one application of the {∧I} rule,
but this time the meta-variable a stands for R → S and b stands for ¬P.

Theorem 36.
(R → S), ¬P \(\vdash\) (R → S) ∧ ¬P

Proof. \[\frac{(R \rightarrow S) \qquad \neg P}{(R \rightarrow S) \land \neg P} \{ \land I \}\]

Usually you need several steps to prove a theorem,
and each step requires explicit use of an inference rule.
You can build up a proof in one diagram.
When a subproof results in a conclusion that serves as an assumption for a larger proof,
then the entire subproof diagram appears above the line,
for the main inference in the large proof.
Here is an example:

Theorem 37.
P, Q, R \(\vdash\) (P ∧ Q) ∧ R

Proof. The main inference here has two assumptions: P ∧ Q and R.
However, the first assumption P ∧ Q is not one of the assumptions of the entire theorem;
it is the conclusion of another {∧I} inference with assumptions P and Q.
The entire proof is written in a single diagram.
The conclusion P ∧ Q of the first inference is above the longer line
(the line belonging to the main inference).

\[\frac{\frac{P \qquad Q}{P \land Q} \{ \land I \} \qquad R}{(P \land Q) \land R} \{ \land I \}\]

Inference proofs have a natural tree structure:
assumptions of the theorem are like the leaves of a tree,
and subproofs are like the nodes (forks) of a tree.
The method we are using here for writing proofs makes this tree structure explicit.

There is an alternative format for writing logical inference proofs,
where each inference is written on a numbered line.
When the conclusion of one inference is required as an assumption to another one,
this is indicated by explicit references to the line numbers.
This format looks like a flat list of statements
(much like an assembly language program),
while the tree format we are using has a nested structure
(much like a program in a block structured language).

Logical inference has many important applications in computer science.
The tree format is normally used for computing applications,
so we will use that notation in this text,
in order to help prepare you for more advanced studies.
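
To make the tree structure concrete,
here is an illustrative Haskell sketch
(all names are mine;
Prop is just a placeholder for whatever representation of WFFs you prefer):

-- A sketch: proof trees as a data type.
-- Assumptions are the leaves;
-- each inference rule becomes a node holding its subproofs.
type Prop = String

data Proof
  = Assume Prop -- a leaf: an assumption
  | AndI Proof Proof -- {∧I}
  | AndEL Proof -- {∧EL}
  | AndER Proof -- {∧ER}

-- Theorem 37, P, Q, R ⊢ (P ∧ Q) ∧ R, as a tree:
theorem37 :: Proof
theorem37 = AndI (AndI (Assume "P") (Assume "Q")) (Assume "R")

Constructors for the remaining rules in the table above would be added in the same way.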

1.4.13.4 And Elimination \(\{ \land E_L \}, \{ \land E_R \}\):

\[\frac{a \land b}{a} \{ \land E_L \}\]
\[\frac{a \land b}{b} \{ \land E_R \}\]

Two inference rules allow the elimination of an And operation:

if a ∧ b is known to be true,
then a must be true,
and also b must be true.

The ‘And Elimination Left’ {∧EL} rule retains the left argument of a ∧ b in the result.

The ‘And Elimination Right’ {∧ER} rule retains the right argument.

Here is a simple example using And Elimination:

Theorem 38.
P, Q ∧ R \(\vdash\) P ∧ Q

Proof. Two inferences are required.
First the ‘Left’ form of the And Elimination rule {∧EL} is used,
to obtain the intermediate result Q,
and the And Introduction rule is then used,
to combine this with P to obtain the result P ∧ Q.

\[\frac{P \qquad \frac{Q \land R}{Q} \{ \land E_L \} }{P \land Q} \{ \land I \}\]

The next example contains several applications of And Elimination,
including both the Left and Right forms of the rule.
As these two examples show,
the three rules {∧I}, {∧EL} and {∧ER} can be used systematically,
to deduce consequences of logical conjunctions.

Theorem 39.
(P ∧ Q) ∧ R \(\vdash\) R ∧ Q

Proof.
\[\frac{\frac{(P \land Q) \land R}{R} \{ \land E_R\} \qquad \frac{\frac{(P \land Q) \land R}{P \land Q} \{ \land E_L\}}{Q} \{ \land E_R \}}{R \land Q} \{ \land I \}\]

The ∧ operator is commutative:
the proposition P ∧ Q is equivalent to Q ∧ P.
The following theorem establishes this.

Theorem 40.
P ∧ Q \(\vdash\) Q ∧ P

Proof.
The idea behind this proof is to infer Q and P, in that order,
with Q to the left of P, above the main line,
so {∧I} can be used to infer Q ∧ P.
The intermediate results Q and P are obtained by And Elimination.

\[\frac{\frac{P \land Q}{Q} \{ \land E_R\} \qquad \frac{P \land Q}{P} \{ \land E_L\} }{Q \land P} \{ \land I\}\]

Inference proof trees can become quite large,
but you can always work through them systematically,
one inference at a time,
until you see how all the parts of the proof fit together.
You can check the individual inferences in any order you like,
either top-down or bottom-up.

Below is a theorem with a larger proof,
using both the And Introduction and And Elimination rules:

Theorem 41.
For any well-formed propositions a, b and c,
a ∧ (b ∧ c) \(\vdash\) (a ∧ b) ∧ c

Proof.
\[\frac{\frac{\frac{a \land (b \land c)}{a} \{\land E_L \} \qquad \frac{\frac {a \land (b \land c)}{b \land c} \{\land E_R\} }{b} \{\land E_L\}}{a \land b} \{\land I\} \qquad \frac{\frac{a \land (b \land c)}{b \land c} \{\land E_R\}}{c} \{\land E_R\} }{(a \land b) \land c} \{ \land I \}\]

1.4.13.5 Imply Elimination \(\{ \rightarrow E \}\):

\[\frac{a \qquad a \rightarrow b}{b} \{ \rightarrow E \}\]

The Imply Elimination rule {→ E} says that:
if you know a is true,
and also that a implies b,
then you can infer b.
The traditional Latin name for the rule is Modus Ponens.

The following theorem provides a simple example of the application of {→ E}.

Theorem 42.
Q ∧ P, P ∧ Q → R \(\vdash\) R.

Proof.
\[\frac{\frac{\frac{Q \land P}{P}\{\land E_R\} \qquad \frac{Q \land P}{Q}\{\land E_L\}}{P \land Q}\{\land I\} \qquad P \land Q \rightarrow R}{R} \{\rightarrow E\}\]

Often we have chains of implication of the form:
a → b, b → c, etc.
The following theorem says that:
given a, and these linked implications,
then you can infer c.

Theorem 43.
For all propositions a, b, and c,
a, a → b, b → c \(\vdash\) c.

Proof.
\[\frac{\frac{a \qquad a \rightarrow b}{b}\{\rightarrow E\} \qquad b \rightarrow c}{c}\{\rightarrow E\}\]

1.4.13.6 Imply Introduction \(\{ \rightarrow I \}\):

\[\frac{a \vdash b}{a \rightarrow b} \{ \rightarrow I \}\]

The Imply Introduction rule {→ I} says that,
in order to infer the logical implication a → b,
you must have a proof of b using a as an assumption.

We will first give a simple example of Imply Introduction:

Theorem 44.
\(\vdash\) (P ∧ Q) → Q.

Proof.
First consider the sequent P ∧ Q \(\vdash\) Q,
which is proved by the following And Elimination inference:
\[\frac{P \land Q}{Q}\{\land E_R\}\]

Now we can use the sequent,
the theorem established by the inference,
and the {→ I} rule:
\[\frac{P \land Q \vdash Q}{P \land Q \rightarrow Q}\{\rightarrow I\}\]

It is crucially important to be careful about what we are assuming.
The reason for having the sequent notation with the \(\vdash\) operator,
is to write down the assumptions of a theorem,
just as precisely as the conclusion.

In the {∧ER} inference we assumed P ∧ Q;
without that assumption we could not have inferred Q.
This assumption appears in the sequent to the left of the \(\vdash\).
The entire sequent P ∧ Q \(\vdash\) Q does not rely on any further assumptions;
it is independently true.
Therefore we can put it above the line of the {→ I} rule ‘for free,’
without actually making any assumptions.
Since nothing at all needed to be assumed,
to support the application of the {→ I} rule,
we end up with a theorem that doesn’t require any assumptions.
The sequent that expresses the theorem,
\(\vdash\) P ∧ Q → Q,
has nothing to the left of the \(\vdash\) operator.

It is customary to write the entire proof as a single tree,
where a complete proof tree appears above the line of the {→ I} rule.
That allows us to prove \(\vdash\) P ∧ Q → Q with just one diagram:

\[\frac{\frac{P \land Q}{Q}\{\land E_R\}}{P \land Q \rightarrow Q}\{\rightarrow I\}\]

From this diagram,
it looks like there is an assumption P ∧ Q.
However, that was a temporary, local assumption,
whose only purpose was to establish P ∧ Q \(\vdash\) Q.
Once that result is obtained the assumption P ∧ Q can be thrown away.
An assumption that is made temporarily,
only in order to establish a sequent,
and which is then thrown away, is said to be discharged.
A discharged assumption does not need to appear first,
to the left of the \(\vdash\) of the main theorem.
In our example, the proposition P ∧ Q → Q is always true,
and it doesn’t matter whether P ∧ Q is true or false.

In big proof trees,
keep track of which assumptions have been discharged and which have not.
We will indicate the discharged assumptions,
by putting a box around them.
A more common notation is to draw a line through the discharged assumption,
but for certain propositions that leads to a highly unreadable result.
Following this convention, the proof becomes:

\[\frac{\frac{\boxed{P \land Q}}{Q}\{\land E_R\}}{P \land Q \rightarrow Q}\{\rightarrow I\}\]

Recall the proof of Theorem 43,
which used the chain of implications a → b and b → c to infer c,
given also that a is true.
A more satisfactory theorem would just focus on the chain of implications,
without relying on a actually being true.
The following theorem gives this purified chain property:
it says that if a → b and b → c, then a → c.

Theorem 45.
(Implication chain rule).
For all propositions a, b and c,

a → b, b → c \(\vdash\) a → c

Proof.
Because we are proving an implication a → c,
we need to use the {→ I} rule.
There is no other way to introduce the → operator!
That rule requires a proof of c given the assumption a,
which is essentially the same proof tree used before in Theorem 43.
The assumption, a, is discharged when we apply {→ I}.
The other two assumptions, (a → b and b → c), are not discharged.
Consequently a doesn’t appear to the left of the \(\vdash\) in the theorem;
instead it appears to the left of the → in the conclusion of the theorem.

\[\frac{\frac{\frac{\boxed{a} \qquad a \rightarrow b}{b}\{\rightarrow E\} \qquad b \rightarrow c}{c}\{\rightarrow E\}}{a \rightarrow c}\{\rightarrow I\}\]

Sometimes in large proofs,
it can be confusing to see just where an assumption has been discharged.
You may have an assumption P with a box around it,
but there could be all sorts of implications P → ··· which came from somewhere else.
In such cases it’s probably clearer to build up the proof in stages,
with several separate proof tree diagrams.

A corollary of this implication chain rule is the following theorem:
if a → b, but you know that b is false,
then a must also be false.
This is an important theorem,
which is widely used to prove other theorems,
and its traditional Latin name is Modus Tollens.

Theorem 46.
(Modus Tollens).
For all propositions a and b,
a → b, ¬b \(\vdash\) ¬a

Proof.
First we need to expand out the abbreviations,
using the definition that ¬a means a → False.
This results in the following sequent to be proved:
a → b, b → False \(\vdash\) a → False.
This is an instance of Theorem 45.

1.4.13.7 Or Introduction \(\{ \lor I_L \}, \{ \lor I_R \}\):

\[\frac{a}{a \lor b} \{ \lor I_L \}\]
\[\frac{b}{a \lor b} \{ \lor I_R \}\]

Like all the introduction rules,
Or Introduction specifies what you have to establish,
in order to infer a proposition containing a new ∨ operator.
If the proposition a is true,
then both a ∨ b and b ∨ a must also be true.
You can see this by checking the truth table definition of ∨.
Or Introduction comes in two forms, Left and Right.
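
As a quick semantic check, this claim can be verified by brute force in Haskell over all truth values (a minimal sketch; implies and orIntroOK are illustrative names, with implication written as not a || b):

implies :: Bool -> Bool -> Bool
implies a b = not a || b

-- if a is true, then both a ∨ b and b ∨ a are true, for every b
orIntroOK :: Bool
orIntroOK =
  and [ (a `implies` (a || b)) && (a `implies` (b || a))
      | a <- [False, True], b <- [False, True] ]
-- orIntroOK evaluates to True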

Theorem 47.
P ∧ Q \(\vdash\) P ∨ Q

Proof.
The proof requires the use of Or Introduction.
There are two equally valid ways to organise the proof.
One method begins by establishing P and then uses \(\{\lor I_L\}\) to put the P to the left of ∨:

\[\frac{\frac{P \land Q}{P}\{\land E_L\}}{P \lor Q}\{\lor I_L\}\]

An alternative proof first establishes Q and then uses \(\{\lor I_R\}\) to put the Q to the right of ∨:

\[\frac{\frac{P \land Q}{Q}\{\land E_R\}}{P \lor Q}\{\lor I_R\}\]

Normally, of course, you would give just one of these proofs;
there is no reason to give both.

1.4.13.8 Or Elimination \(\{ \lor E \}\):

\[\frac{a \lor b \qquad a \vdash c \qquad b \vdash c}{c} \{ \lor E \}\]

The Or Elimination rule specifies what you can conclude,
if you know that a proposition of the form a ∨ b is true.
We can’t conclude anything about either a or b,
because either of those might be false even if a ∨ b is true.
However, suppose we know a ∨ b is true,
and also suppose there is some conclusion c
that can be inferred from a,
and can also be inferred from b;
then c must also be true.

Or Elimination is a formal version of proof by case analysis.
It amounts to the following argument:
(0) Either a or b is true.
(1) If a is true, then c holds;
(2) if b is true, then c holds.
Therefore c is true.
Here is an example:

Theorem 48.
(P ∧ Q) ∨ (P ∧ R) \(\vdash\) P

Proof.
There are two proofs above the line.
The first proof assumes P ∧ Q in order to infer P.
However, that inference is all that we need;
P ∧ Q is discharged, and is not an assumption of the main theorem.
For the same reason, P ∧ R is also discharged.
The only undischarged assumption is (P ∧ Q) ∨ (P ∧ R),
which therefore must appear to the left of the \(\vdash\) in the main theorem.

\[\frac{(P \land Q) \lor (P \land R) \qquad \frac{\boxed{P \land Q}}{P}\{\land E_L\} \qquad \frac{\boxed{P \land R}}{P}\{\land E_L\}}{P}\{\lor E\}\]

Finally, here is a slightly more complex example:

Theorem 49.
(a ∧ b) ∨ (a ∧ c) \(\vdash\) b ∨ c

Proof.
\[\frac{(a \land b) \lor (a \land c) \qquad \frac{\frac{\boxed{a \land b}}{b}\{\land E_R\}}{b \lor c}\{\lor I_L\} \qquad \frac{\frac{\boxed{a \land c}}{c}\{\land E_R\}}{b \lor c}\{\lor I_R\}}{b \lor c}\{\lor E\}\]

1.4.13.9 Identity \(\{ ID \}\):

\[\frac{a}{a} \{ ID \}\]

The Identity rule {ID} says, rather obviously,
that if you know a is true,
then you know a is true.

Although the Identity rule seems trivial,
it is also necessary.
Without it, for example,
there would be no way to prove the following theorem,
which we would certainly want to be true.

Theorem 50.
P \(\vdash\) P

Proof.
\[\frac{P}{P}\{ID\}\]

An interesting consequence of the Identity rule is that True is true,
as the following theorem states.

Theorem 51.
\(\vdash\) True

Proof.
Recall that True is an abbreviation for False → False.
We need to infer this implication using the {→ I} rule,
and that in turn requires a proof of False,
given the assumption False.
This can be done with the {ID} rule.
It could also be done with the Contradiction rule,
which we’ll study shortly.

\[\frac{\frac{\boxed{False}}{False}\{ID\}}{False \rightarrow False}\{\rightarrow I\}\]

Notice that we had to assume False in order to prove this theorem,
but fortunately that assumption was discharged when we inferred False → False.
The assumption of False was just temporary,
and it doesn’t appear as an assumption of the theorem.
That’s a relief, as it would never do if we had to assume
that False is true in order to prove that True is true!

1.4.13.10 Contradiction \(\{ CTR \}\):

\[\frac{False}{a} \{ CTR \}\]

The Contradiction rule says that you can infer anything at all,
given the assumption that False is true.

In effect, this rule says that False is untrue,
and it expresses that fact,
purely through the mechanism of logical inference.
It would be disappointing if we had to describe the fundamental falseness of False,
by making a meaningless statement outside the system,
such as ‘False is wrong.’

The goal is to describe the process of logical reasoning formally,
using a small set of clearly specified inference rules.
It would also be a bad idea to try to define False as equal to ¬True.
Since True is already defined to be ¬False,
that would be a meaningless and useless circular definition.
The Contradiction rule describes the untruthfulness of False indirectly,
by saying that everything would become provable,
if False is ever assumed or inferred.

Theorem 52.
P, ¬P \(\vdash\) Q

Proof.
Recall that ¬P is just an abbreviation for P → False.
That means we can use the {→ E} rule to infer False,
and once that happens we can use {CTR} to support any conclusion we feel like,
even Q, which isn’t even mentioned in the theorem’s assumptions!

\[\frac{\frac{P \qquad P \rightarrow False}{False}\{\rightarrow E\}}{Q}\{CTR\}\]

The Identity and Contradiction rules often turn up in larger proofs.
A typical example occurs in the following theorem,
which states an important property of the logical Or operator ∨.
This theorem says that if a ∨ b is true, but a is false,
then b has to be true.
It should be intuitively obvious,
but the proof is subtle and merits careful study.

Theorem 53.
For all propositions a and b,
a ∨ b, ¬a \(\vdash\) b

Proof.
As usual, the ¬a abbreviation should be expanded to a → False.
Because we’re given a ∨ b,
the basic structure will be an Or Elimination,
and there will be two smaller proof trees,
corresponding to the two cases for a ∨ b.
In the first case, when a is assumed temporarily,
we obtain a contradiction with the theorem’s assumption of ¬a:
False is inferred,
from which we can infer anything else (and here we want b).
In the second case, when b is assumed temporarily,
the desired result of b is an immediate consequence of the {ID} rule.

\[\frac{a \lor b \qquad \frac{\frac{\boxed{a} \qquad a \rightarrow False}{False}\{\rightarrow E\}}{b}\{CTR\} \qquad \frac{\boxed{b}}{b}\{ID\}}{b}\{\lor E\}\]

Note that there are two undischarged assumptions, a ∨ b and a → False,
and there are two temporary assumptions that are discharged by the {∨E} rule.

1.4.13.11 Reductio ad Absurdum \(\{ RAA \}\):

\[\frac{\neg a \vdash False}{a} \{ RAA \}\]

The Reductio ad Absurdum (reduction to absurdity) rule says that:
if you can infer False from an assumption ¬a,
then a must be true.
This rule underpins the proof by contradiction strategy:
if you want to prove a, first assume the negation ¬a and infer False;
the {RAA} rule then allows you to infer a.

Theorem 54.
(Double negation).
¬¬a \(\vdash\) a

Proof.
Our strategy is to use a proof by contradiction.
That is, if we can assume ¬a and infer False,
then the {RAA} rule would yield the inference a.
Because we are given ¬¬a,
the contradiction will have the following general form:

\[\frac{\neg a \qquad \neg\neg a}{False}\]

To make this go through,
we need to replace the abbreviations by their full defined values.
Recall that ¬a is an abbreviation for a → False,
so ¬¬a actually means (a → False) → False.
Once we expand out these definitions,
the inference becomes much clearer:
it is just a simple application of {→ E}:

\[\frac{\frac{\boxed{a \rightarrow False} \qquad (a \rightarrow False) \rightarrow False}{False}\{\rightarrow E\}}{a}\{RAA\}\]

Both requirements for the {RAA} rule have now been provided,
and {RAA} gives the final result a.

1.4.13.12 Inferring operator truth tables

The inference rules of natural deduction serve as a formal foundation for logical reasoning.
This raises two fundamental questions:
Are the inference rules actually powerful enough?
Will they ever allow us to make mistakes?
These are profound and difficult questions.
Modern research in mathematical logic addresses such questions,
and has led to some astonishing results.
In this section we start with an easy version of the first question:
are the inference rules powerful enough
to re-calculate the truth tables of the logical operators?

Answering this question will provide practice in using the inference rules.

A deeper point is philosophically satisfying:
the inference rules provide a complete foundation for propositional logic,
and nothing else is required.
The truth tables given earlier in the chapter,
are the most intuitive place to start learning logic,
and we treated them like definitions there,
but it isn’t necessary to accept them as the fundamental definitions of the operators.
If we take instead the inference rules,
as the foundation of mathematical logic,
then truth tables are no longer definitions;
they merely summarise a lot of calculations using the rules.

To illustrate the idea,
let’s use the inference rules to calculate the value of True ∧ False.
We need to prove that True ∧ False is logically equivalent to False,
which also shows that it is not equivalent to True.
Recall that False is a primitive constant,
but True is defined as False → False.
First, here is a proof of the sequent \(\vdash\) True ∧ False → False:

\[\frac{\frac{(False \rightarrow False) \land False}{False}\{\land E_R\}}{((False \rightarrow False) \land False) \rightarrow False}\{\rightarrow I\}\]

Next, we prove the sequent \(\vdash\) False → ((False → False) ∧ False):

\[\frac{\frac{False}{(False \rightarrow False) \land False}\{CTR\}}{False \rightarrow ((False \rightarrow False) \land False)}\{\rightarrow I\}\]

Putting these results together, and reintroducing the abbreviations, we get

True ∧ False ↔︎ False

We have thereby calculated one of the lines of the truth table definition of ∧.
The complete truth tables, for all the logical operators,
can be inferred using similar calculations.

1.4.14 Proof checking by formal program

Informal arguments in English are full of imprecision and ambiguity,
so there is no hope of writing a computer program to determine whether one is valid.
People don’t always agree as to whether an argument holds water!
Formal proofs, however, are totally precise and unambiguous.
Formal arguments may become long and involved,
but computers are good at long and involved calculations.

Formal proofs provide confidence that a theorem is correct.
The language of propositional logic,
along with the natural deduction inference system,
form solid foundations for accurate reasoning.
However in manual proofs, everyone makes mistakes,
and even a tiny error would make a large proof completely untrustworthy.
To get the full benefit of formal logic,
we need computers to help with the proofs.

A proof checker is a computer program that:
reads in a theorem and a proof,
then determines whether the proof is valid,
and whether it actually establishes the theorem.

A theorem prover is a computer program that reads in a theorem,
and attempts to generate a proof from scratch.

A theorem prover can sometimes save the user a lot of work.
However, theorem provers don’t always succeed,
since there are plenty of true theorems,
whose proofs are too difficult for them to find.
A theorem prover may stop with the message ‘I cannot prove it’,
or it may go into an infinite loop and never stop at all.

The main advantage of a proof checker is that:
it can always determine whether a purported proof is valid or invalid.
Proof checkers are also suitable when you want to write a proof by hand,
and you also want to be sure it’s correct.

Proof tools are the subject of active current research,
but they are not limited to research:
they are also practical for real applications.
There are a number of ‘industrial-strength’ proof tools,
for various logical systems.
Proof tools have already been applied successfully,
to some very difficult and important real-world problems.

We’ll now look at a simple proof checking system implemented in Haskell.

You can also use the software to check your own proofs;
it’s nice to be sure your exercises are correct before handing them in!
Furthermore, you may find it interesting to study the implementation of the checker.

It uses some advanced features of Haskell that aren’t covered in the class,
though most of it isn’t too difficult to understand.
Haskell is well suited for writing proof checkers.
Indeed, most of the industrial strength proof tools are implemented in functional programming languages.

1.4.14.1 Proof checker

The proof checker is a Haskell function named check_proof which takes two arguments:
a Theorem and a Proof.

check_proof :: Theorem -> Proof -> IO ()

The function checks to see whether the proof is valid,
and also whether it serves to establish the truth of the theorem.
You might expect the checker to return a result of type Bool,
where True means the proof is valid,
and False means the proof is incorrect.
However, it’s much better for the checker to give a detailed explanation,
if something is wrong with the proof,
and to do that it needs to perform some Input/Output.
Thus, the function returns a Haskell value of type IO () instead of a simple Bool.

1.4.14.2 Propositions

WFFs can be represented in Haskell as an algebraic data type.
The compiler checks that a formula is indeed well-formed,
and it also provides a way to express terms that appear in logical proofs.

It is common in mathematics to define structures recursively,
as WFFs were defined in the previous section.

data Prop
  = FALSE
  | TRUE
  | A | B | C | D | E | G | H | I | J | K | L | M | N | O | P | Q | R | S | U | V | W | X | Y | Z
  | Pvar String
  | And Prop Prop
  | Or Prop Prop
  | Not Prop
  | Imp Prop Prop
  deriving (Show)

In reference to our previous style:

P                       P

P ∨ Q                   Or P Q
                        P `Or` Q

P ∧ Q                   And P Q
                        P `And` Q

(P ∧ Q) ∨ (R ∧ S)       Or (And P Q) (And R S)
                        (P `And` Q) `Or` (R `And` S)

P → Q                   Imp P Q
                        P `Imp` Q

¬P                      Not P

The Boolean constants are represented by FALSE and TRUE.
Intuitively, these correspond in meaning to the familiar Boolean constants False and True.
Use False (or True) when you want to write Boolean expressions to be evaluated;
use FALSE (or TRUE) when you want to write a WFF and reason about it.
We will come back to this subtle but important distinction at the end of the section.

The traditional names used for propositional variables are upper-case letters P, Q, R …
We allow every upper-case letter to be used as a propositional variable,
except that F and T are disallowed because someone reading the code might wonder
whether these are supposed to be variables or the constants false and true.
In addition, you can make any string into a propositional variable by writing Pvar "name".

The logical expression P ∧ Q is written And P Q.

In a similar way, P ∨ Q is written Or P Q,

and P → Q is written Imp P Q,

and the negation ¬P is written Not P.

Arguments that are not simple logical variables
should be enclosed in parentheses.
For example, if the arguments to And are themselves logical expressions,
they should be surrounded by parentheses: And (And P Q) R.

1.4.14.3 Theorems

A Theorem consists of a list of propositions (the assumptions) and a proposition (the conclusion):

data Theorem = Theorem [Prop] Prop
  deriving (Eq, Show)

For example:
P, Q \(\vdash\) P ∧ Q
is coded as:

Theorem [P,Q] (P `And` Q)

1.4.14.4 Proofs

A proof is represented by another Haskell algebraic data type.
Technically, a proof is a data structure that contains a proposition,
along with a formal argument for the truth of the proposition.
There is a separate constructor for every kind of proof:

data Proof
  = Assume Prop
  | AndI (Proof, Proof) Prop
  | AndEL Proof Prop
  | AndER Proof Prop
  | OrIL Proof Prop
  | OrIR Proof Prop
  | OrE (Proof, Proof, Proof) Prop
  | ImpI Proof Prop
  | ImpE (Proof, Proof) Prop
  | ID Proof Prop
  | CTR Proof Prop
  | RAA Proof Prop
  | Use Theorem [Proof] Prop
  | UseTh (Theorem, Proof) [Proof] Prop
  | UseDB Theorem [Proof] Prop
  deriving (Eq, Show)

Assume Prop
The simplest way to establish that a proposition is true is to assume it.
No further justification is necessary,
and any proposition at all may be assumed.
Note that assumptions may be discharged by the Imply Introduction rule {→ I},
so there is a limited and well-defined scope for all assumptions.
Unfortunately however, it isn’t easy to see what the scope of an assumption is,
just by looking at it;
you need to study what it means to discharge an assumption.

AndI (Proof, Proof) Prop
The And Introduction inference rule {∧I} requires two separate proofs above the line,
as well as the claimed conclusion (which is a proposition).

AndEL Proof Prop
The And Elimination Left inference rule {∧EL} has just one proof above the line;
like all the inference rules,
it has exactly one conclusion, which is a proposition.

AndER Proof Prop
The And Elimination Right inference rule {∧ER}; the Right version of And Elimination.

OrIL Proof Prop
The Or Introduction Left rule {∨IL}.

OrIR Proof Prop
The Or Introduction Right rule {∨IR}.

OrE (Proof, Proof, Proof) Prop
The Or Elimination rule {∨E} requires three proofs above the line.
The first one is a proof whose conclusion has the form a ∨ b,
the second is a proof of some conclusion c given a,
and the third is a proof of the same conclusion c given b.
The conclusion of the rule must be the proposition c.

ImpI Proof Prop
The Imply Introduction rule {→ I}.

ImpE (Proof, Proof) Prop
The Imply Elimination rule {→ E}.

ID Proof Prop
The Identity rule {ID}.

CTR Proof Prop
The Contradiction rule {CTR}.

RAA Proof Prop
The Reductio ad Absurdum rule {RAA}.
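
To make these constructors concrete, here is one way the proof of Theorem 47 (P ∧ Q \(\vdash\) P ∨ Q) might be encoded and checked; a sketch assuming the Prop, Theorem, and Proof types above and the check_proof function are in scope (thm47 and proof47 are illustrative names):

-- Theorem 47: P ∧ Q ⊢ P ∨ Q
thm47 :: Theorem
thm47 = Theorem [P `And` Q] (P `Or` Q)

-- assume P ∧ Q, extract P with {∧EL}, then build P ∨ Q with {∨IL};
-- each constructor records the conclusion of its inference step
proof47 :: Proof
proof47 = OrIL (AndEL (Assume (P `And` Q)) P) (P `Or` Q)

-- check_proof thm47 proof47 should report that the proof is valid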

This is a substantial program that includes an actual proof checker:
Code-DMUC/Stdm06PropositionalLogic.hs

1.4.15 Method 3. Boolean Algebra: Equational Reasoning

We have already looked at two major approaches to propositional logic:

  1. the semantic approach with truth tables, and

  2. the syntactic approach with the inference rules of natural deduction.

We now look at the third major system, Boolean algebra,
which is an axiomatic approach to logic.

The earliest attempts to develop a formal logic system were based on inference.
The most famous of these was Aristotle’s Theory of Syllogisms,
which profoundly affected the entire development of logic and philosophy,
for more than two thousand years.

During the past several centuries, however,
a completely different style of mathematical reasoning appeared:
algebra.

Algebraic techniques were enormously successful for reasoning
about numbers, polynomials, and functions.
One of the most appealing benefits of algebra is that:
many problems can be expressed as an equation,
involving an unknown quantity x,
that you would like to determine;
you can then use algebraic laws,
to manipulate the equation systematically,
in order to solve for x.

A natural question is:
Can the power of algebraic techniques be applied to other areas of mathematics?

Perhaps the most famous such application is Descartes’ analytical geometry.

George Boole, a nineteenth century British mathematician,
saw that algebraic methods might also be applied,
to make formal logical reasoning easier,
and he attempted to create such a system.
A successful modern algebraic approach to logic has been developed,
and is named in his honour.

Boolean algebra is a form of equational reasoning.
There are two crucial ideas:

  1. you show that two values are the same,
    by building up chains of equalities, and

  2. you can substitute equals for equals,
    in order to add a new link to the chain.

For example, a chain of equalities relies on the fact that:
if you know a = b and also b = c,
then you can deduce formally that a = c.

The following chain of equations demonstrates a = e:

a = b
  = c
  = d
  = e

Substituting equals for equals means that:
if you know x = y, and
if you have a big expression that contains x,
then you can replace x by y,
without changing the value of the big expression.

For example, suppose you’re given that:
x = 2 + p, and
y = 5 × x + 3.
Then you replace x by 2 + p,
resulting in:
y = 5 × (2 + p) + 3.

Notice an important point in the example above:
Given the order of operations,
you must use parentheses properly,
to ensure that the value you substitute into the expression,
sticks together as one value.
In this example, we had to put parentheses around the value 2 + p,
because otherwise we would have written the incorrect equation:
y = 5 × 2 + p + 3,
which has a quite different meaning.

When we build a chain of equations using Boolean algebra,
it’s good practice to give an incremental justification,
for each step in the chain.
The justifications help a reader to understand the proof,
and they also make it easier to check that the proof is valid.

A standard way to write chains of equations is:
to start each line (except the first) with the = sign,
followed by the next expression in our chain,
followed by the justification,
which explains how we obtained it from the previous expression.

1.4.15.1 Laws of Boolean algebra

Modern Boolean algebra is based on a set of equations,
that describe the basic algebraic properties of propositions.
These equations are called laws;
a law is a proposition that is always true,
for every possible assignment of truth values to the logical variables.

The laws of Boolean Algebra are analogous to ordinary algebraic laws.

For example,
elementary algebra has commutative equations for addition and multiplication,
x + y = y + x, and
x × y = y × x.

There are analogous commutative laws for ∨ and ∧,
saying that:
x ∨ y = y ∨ x, and
x ∧ y = y ∧ x.

It can be enlightening to compare the laws of Boolean algebra,
with those of elementary algebra,
but don’t get carried away:
there are differences as well as similarities.

There is one particularly dangerous trap.
In many ways, logical And (∧) behaves like multiplication (×)
while logical Or (∨) behaves like addition (+).

These similarities tempted many people to use the + symbol for Or,
and the × symbol (or ·) for And.

George Boole carried similar analogies very far
(much too far) in his original work.

And (∧) does not behave like multiplication (×) in all respects,
and ∨ does not behave like + in all respects.
See below.

Reading too much significance into similarities,
between laws on numeric and Boolean operations,
can lead you astray.

The essence of algebra is not that:
there are fundamental addition and multiplication operators
that appear everywhere.

The essential idea is that:
we can use equations to state axioms on a set of operators,
and then use equational replacement reasoning,
to explore the properties of the resulting system.
Some algebraic systems have addition and multiplication operators,
and some algebraic systems don’t.
Boolean algebra doesn’t.

If we were mainly concerned with the foundations of algebra,
then our aim would be to take the smallest possible set of equations as axioms,
and to derive other ones as theorems.
However, we will focus on practical applications,
of Boolean algebra in computer science,
so below, we list a richer set of laws,
that are easier to use for practical calculation,
than a minimal set of axioms would be.

Operations with constants:

a ∧ False           = False               {∧ null}
a ∨ True            = True                {∨ null}
a ∧ True            = a                   {∧ identity}
a ∨ False           = a                   {∨ identity}

Basic properties of ∧ and ∨:

a                   → a ∨ b               {disjunctive implication}
a ∧ b               → a                   {conjunctive implication}
a ∧ a               = a                   {∧ idempotent}
a ∨ a               = a                   {∨ idempotent}
a ∧ b               = b ∧ a               {∧ commutative}
a ∨ b               = b ∨ a               {∨ commutative}
(a ∧ b) ∧ c         = a ∧ (b ∧ c)         {∧ associative}
(a ∨ b) ∨ c         = a ∨ (b ∨ c)         {∨ associative}

Distributive and DeMorgan’s laws:

a ∧ (b ∨ c)         = (a ∧ b) ∨ (a ∧ c)   {∧ distributes over ∨}
a ∨ (b ∧ c)         = (a ∨ b) ∧ (a ∨ c)   {∨ distributes over ∧}
¬(a ∧ b)            = ¬a ∨ ¬b             {DeMorgan's law}
¬(a ∨ b)            = ¬a ∧ ¬b             {DeMorgan's law}

Laws on negation:

¬True               = False               {negate True}
¬False              = True                {negate False}
a ∧ ¬a              = False               {∧ complement}
a ∨ ¬a              = True                {∨ complement}
¬(¬a)               = a                   {double negation}

Laws on implication:

a ∧ (a → b)         → b                   {Modus Ponens}
(a → b) ∧ ¬b        → ¬a                  {Modus Tollens}
(a ∨ b) ∧ ¬a        → b                   {disjunctive syllogism}
(a → b) ∧ (b → c)   → a → c               {implication chain}
(a → b) ∧ (c → d)   → (a ∧ c) → (b ∧ d)   {implication combination}
(a ∧ b) → c         = a → (b → c)         {Currying}
a → b               = ¬a ∨ b              {implication}
a → b               = ¬b → ¬a             {contrapositive}
(a → b) ∧ (a → ¬b)  = ¬a                  {absurdity}

a ↔ b               = (a → b) ∧ (b → a)   {equivalence}

1.4.15.1.1 Example proofs:

(False ∧ P) ∨ Q == Q

(False ∧ P) ∨ Q
  = (P ∧ False) ∨ Q     {∧ commutative}
  = False ∨ Q           {∧ null}
  = Q ∨ False           {∨ commutative}
  = Q                   {∨ identity}

P ∧ ¬(Q ∨ P) == False

P ∧ ¬(Q ∨ P)
  = P ∧ (¬Q ∧ ¬P)       {DeMorgan's law}
  = P ∧ (¬P ∧ ¬Q)       {∧ commutative}
  = (P ∧ ¬P) ∧ ¬Q       {∧ associative}
  = False ∧ ¬Q          {∧ complement}
  = ¬Q ∧ False          {∧ commutative}
  = False               {∧ null}
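
Both worked examples can be double-checked semantically by evaluating each side for all truth values; a minimal sketch using the Prelude operators (&&, ||, not), with illustrative names:

example1 p q = ((False && p) || q) == q
example2 p q = (p && not (q || p)) == False

-- and [example1 p q && example2 p q | p <- [False, True], q <- [False, True]]
-- evaluates to True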

Haskell code demonstrating proofs for each of the laws of Boolean algebra:
Code-DMUC/Stdm06BooleanAlgebra.hs

module Stdm06BooleanAlgebra where

import Stdm06LogicOperators

-- First, we prove a variety of basic laws of Boolean algebra:

-- | {/\ null}
-- >>> and [andNull a | a <- [True, False]]
-- True
andNull a = (a /\ False) == False

-- | {\/ null}
-- >>> and [orNull a | a <- [True, False]]
-- True
orNull a = (a \/ True) == True

-- | {/\ identity}
-- >>> and [andIdentity a | a <- [True, False]]
-- True
andIdentity a = (a /\ True) == a

-- | {\/ identity}
-- >>> and [orIdentity a | a <- [True, False]]
-- True
orIdentity a = (a \/ False) == a

-- | {disjunctive implication}
-- >>> and [disjunctiveImplication a b | a <- [True, False], b <- [True, False]]
-- True
disjunctiveImplication a b = a ==> (a \/ b)

-- | {conjunctive implication}
-- >>> and [conjunctiveImplication a b | a <- [True, False], b <- [True, False]]
-- True
conjunctiveImplication a b = (a /\ b) ==> a

-- | {/\ idempotent}
-- >>> and [andIdempotent a | a <- [True, False]]
-- True
andIdempotent a = (a /\ a) == a

-- | {\/ idempotent}
-- >>> and [orIdempotent a | a <- [True, False]]
-- True
orIdempotent a = (a \/ a) == a

-- | {/\ commutative}
-- >>> and [andCommutative a b | a <- [True, False], b <- [True, False]]
-- True
andCommutative a b = (a /\ b) == (b /\ a)

-- | {\/ commutative}
-- >>> and [orCommutative a b | a <- [True, False], b <- [True, False]]
-- True
orCommutative a b = (a \/ b) == (b \/ a)

-- | {/\ associative}
-- >>> and [andAssociative a b c | a <- [True, False], b <- [True, False], c <- [True, False]]
-- True
andAssociative a b c = ((a /\ b) /\ c) == (a /\ (b /\ c))

-- | {\/ associative}
-- >>> and [orAssociative a b c | a <- [True, False], b <- [True, False], c <- [True, False]]
-- True
orAssociative a b c = ((a \/ b) \/ c) == (a \/ (b \/ c))

-- | {/\ distributes over \/}
-- >>> and [andDistributesOverOr a b c | a <- [True, False], b <- [True, False], c <- [True, False]]
-- True
andDistributesOverOr a b c = (a /\ (b \/ c)) == ((a /\ b) \/ (a /\ c))

-- | {\/ distributes over /\}
-- >>> and [orDistributesOverAnd a b c | a <- [True, False], b <- [True, False], c <- [True, False]]
-- True
orDistributesOverAnd a b c = (a \/ (b /\ c)) == ((a \/ b) /\ (a \/ c))

-- | {DeMorgan’s law}
-- >>> and [deMorgansAnd a b | a <- [True, False], b <- [True, False]]
-- True
deMorgansAnd a b = (not (a /\ b)) == (not a \/ not b)

-- | {DeMorgan’s law}
-- >>> and [deMorgansOr a b | a <- [True, False], b <- [True, False]]
-- True
deMorgansOr a b = (not (a \/ b)) == (not a /\ not b)

-- | {negate True}
-- >>> negateTrue
-- True
negateTrue = (not True) == False

-- | {negate False}
-- >>> negateFalse
-- True
negateFalse = (not False) == True

-- | {/\ complement}
-- >>> and [andComplement a | a <- [True, False]]
-- True
andComplement a = (a /\ not a) == False

-- | {\/ complement}
-- >>> and [orComplement a | a <- [True, False]]
-- True
orComplement a = (a \/ not a) == True

-- | {double negation}
-- >>> and [doubleNegation a | a <- [True, False]]
-- True
doubleNegation a = (not (not a)) == a

-- | {Modus Ponens}
-- >>> and [modusPonens a b | a <- [True, False], b <- [True, False]]
-- True
modusPonens a b = (a /\ (a ==> b)) ==> b

-- | {Modus Tollens}
-- >>> and [modusTollens a b | a <- [True, False], b <- [True, False]]
-- True
modusTollens a b = ((a ==> b) /\ not b) ==> (not a)

-- | {disjunctive syllogism}
-- >>> and [disjunctiveSyllogism a b | a <- [True, False], b <- [True, False]]
-- True
disjunctiveSyllogism a b = ((a \/ b) /\ not a) ==> b

-- | {implication chain}
-- >>> and [implicationChain a b c | a <- [True, False], b <- [True, False], c <- [True, False]]
-- True
implicationChain a b c = ((a ==> b) /\ (b ==> c)) ==> (a ==> c)

-- | {implication combination}
-- >>> and [implicationCombination a b c d | a <- [True, False], b <- [True, False], c <- [True, False], d <- [True, False]]
-- True
implicationCombination a b c d = ((a ==> b) /\ (c ==> d)) ==> ((a /\ c) ==> (b /\ d))

-- | {Currying}
-- >>> and [currying a b c | a <- [True, False], b <- [True, False], c <- [True, False]]
-- True
currying a b c = ((a /\ b) ==> c) == (a ==> (b ==> c))

-- | {implication}
-- >>> and [implication a b | a <- [True, False], b <- [True, False]]
-- True
implication a b = (a ==> b) == (not a \/ b)

-- | {contrapositive}
-- >>> and [contrapositive a b | a <- [True, False], b <- [True, False]]
-- True
contrapositive a b = (a ==> b) == (not b ==> not a)

-- | {absurdity}
-- >>> and [absurdity a b | a <- [True, False], b <- [True, False]]
-- True
absurdity a b = ((a ==> b) /\ (a ==> not b)) == (not a)

-- | {equivalence}
-- >>> and [equivalence a b | a <- [True, False], b <- [True, False]]
-- True
equivalence a b = (a <=> b) == ((a ==> b) /\ (b ==> a))

1.4.16 Meta-Logic

Meta-Logic is concerned with stepping outside the language of logic,
so that we can make general statements about the properties of the logical system itself.
Meta-logic enables us to talk about logic rather than just saying things in logic.
Within the logical system, the only elements of our vocabulary are:
the propositional variables and logical operators;
we can say things like A ∧ B → A ∨ B but that’s all.
The purpose of meta-logic is to enable us to think about deeper questions.

We have covered three quite different methods for reasoning about propositions:
1) truth tables,
2) logical inference, and
3) Boolean algebra.
These methods have completely different styles:

  1. Truth tables enable us to calculate the values (or meanings) of propositions,
    given the values of their variables.
    The basic technique is calculation,
    and it results in a logical value (True or False).
    The meaning of an expression is called its semantics,
    so calculation with truth tables is a form of semantic reasoning.

  2. Inference rules enable us to prove theorems.
    The basic technique involves matching the structures of propositions,
    with the structure of the formulas in the inference rules.
    The structure of an expression is called its syntax,
    so logical inference is a form of syntactic reasoning.

  3. Boolean algebra allows the use of equational reasoning,
    to prove the equality of two expressions,
    or to calculate the values of expressions.
    It applies the power of algebra to the subject of logic.

Propositional logic needs a vocabulary for talking about truth values (∧, ∨ etc.),
and meta-logic needs a vocabulary for talking about logical reasoning itself.
There are two fundamental operator symbols in meta-logic, \(\vDash\) and \(\vdash\),
which correspond to semantic and syntactic reasoning respectively.
We have already defined these operators.
Recall that:

\(P_1, P_2, ..., P_n \vdash Q\)
means that there is a proof which infers the conclusion Q,
from the assumptions \(P_1, ..., P_n\)
using the formal inference rules of natural deduction;

\(P_1, P_2, ..., P_n \vDash Q\)
means that Q must be True if \(P_1, ..., P_n\) are all True,
but it says nothing about whether we have a proof,
or indeed even whether a proof is possible.

The \(\vDash\) and the \(\vdash\) operators describe two different notions of truth,
and it’s important to know whether they are in fact equivalent.
Is it possible to prove a theorem that is false?
Is there a true theorem for which no proof exists?

Definition 13.
A formal system is consistent,
if the following statement is true,
for all well-formed formulas a and b:

If \(a \vdash b\) then \(a \vDash b\).

In other words, the system is consistent,
if each proposition,
that is provable using the inference rules,
is actually true.

Definition 14.
A formal system is complete,
if the following statement is true,
for all well-formed formulas a and b:

If \(a \vDash b\) then \(a \vdash b\).

In other words, the system is complete,
if the inference rules are powerful enough,
to prove every proposition which is true.

It turns out that propositional logic is both consistent and complete.
This means that:
you can’t prove false theorems,
and if a theorem is true,
then it has a proof.

Theorem 56.
Propositional logic is consistent and complete.

It is interesting to note that this is a (meta-logical) theorem
that is proved mathematically;
it isn’t just an assumption.

There are many logical systems,
and some of them are much richer and more expressive than propositional logic.

Next, we will look at a more powerful logical system,
called predicate logic, which is also consistent and complete.

It turns out that any logical system,
that is powerful enough to express ordinary arithmetic,
must be either inconsistent or incomplete.
This means it’s impossible to capture all of mathematics in a safe logical system.
This result, the famous Gödel’s Theorem,
has had a profound impact on the philosophy of mathematics,
and is also relevant to theoretical computer science.

1.4.17 Logic in computer science

There are many applications of logic in computer science:

1.4.17.1 Formal correctness of software.

Many large programming projects have been devastated by ineradicable bugs.
There has been much discussion of the ‘Software Crisis’:
How can we write software that works correctly?
One approach to this problem is to use mathematics,
to specify what the software is supposed to do,
and to prove its correctness.
There are many different methods for doing this,
but ultimately they are all based on formal logical reasoning.
It doesn’t work very well to take a poorly-written program
and try to prove it correct.
Even if the program contains no bugs,
the sloppy structure will make the logical correctness proof impossibly difficult.
There has been more success in using formal logical reasoning
to help derive the software from a mathematical specification.
There are even some specification suites that allow transpiling to executable code!

1.4.17.2 The Curry-Howard Isomorphism and type systems.

Many modern programming languages,
especially functional languages like Haskell,
have powerful and expressive type systems.
We need effective methods to help deal with type systems:
to help programmers understand them,
to help compiler writers implement them,
and to help language designers to specify them.
There is a remarkable connection between the inference rules of logic,
and the typing rules of programming languages;
in fact, they are essentially the same!
This connection was observed in the 1950s by the logicians Curry and Howard,
and has ultimately resulted in great improvements in programming languages.

1.4.17.3 Linear logic and access control.

Often in computing, we are concerned with controlling the access to some resource.
One example arises in functional programming,
where array operations can be implemented more efficiently,
if the compiler can guarantee that the program observes certain constraints,
on the way the array is accessed.
There is a logical system, called linear logic,
which keeps track of where each intermediate result in an inference proof is used.
The inference rules of linear logic use every assumption exactly once,
and intermediate results must also be used exactly once.
The system is able to express certain kinds of resource utilization,
through the inference rules.
There is research on the application of linear logic to language design and compilers.

1.4.17.4 Digital hardware design.

Computers are built out of digital hardware.
These circuits are very complex, containing enormous numbers of components.
Discrete mathematics is an essential tool for designing digital circuits.
Using discrete mathematics makes it possible to prove that a circuit is correct.

Take a break from logic to cover this:
CircuitDesign.html
Then, return here:

1.5 Predicate logic

It is frequently necessary to reason logically about statements
of the form:
everything has the property p,
or something has the property p. 
One of the oldest and most famous pieces of logical reasoning,
which was known to the ancient Greeks, is an example:

“All men are mortal. Socrates is a man. Therefore Socrates is mortal.”

In general, propositional logic is not expressive enough to support such reasoning.
We could define a proposition P to mean ‘all men are mortal’,
but P is an atomic symbol with no internal structure,
so we cannot do any formal reasoning that makes use of the meaning of ‘all’.

Predicate logic, also called first order logic,
is an extension to propositional logic,
that adds two quantifiers that allow statements like the examples above to be expressed.
Everything in propositional logic is also in predicate logic:
all the definitions, inference rules, theorems, algebraic laws, etc., still hold.

1.5.1 The language of predicate logic

The formal language of predicate logic consists of propositional logic,
augmented with variables, predicates, and quantifiers.

1.5.1.1 Predicates

A predicate is a statement that an object x has a certain property.
Such statements may be either true or false.

For example, the statement x > 5 is a predicate,
and its truth depends on the value of x.

A predicate can be extended to several variables;
for example, x > y is a predicate about x and y.

The conditional expressions that control execution of programs are predicates.
The Haskell expression:
if x < 0 then -x else x
uses the predicate x < 0 to make a decision.

It is traditional to write predicates concisely in the form F(x),
where F is the predicate, and x is the variable it is applied to.

A predicate containing two variables could be written G(x, y).
A predicate is essentially a function that returns a Boolean result.

We will use Haskell notation for function applications,
which does not require parentheses:
for example, f x is the application of f to x.
For predicate logic, we will use the traditional notation with parentheses, F(x).
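
Since a predicate is essentially a Boolean-valued function, the two example predicates above can be written directly in Haskell (a sketch; the names f and g are illustrative):

-- F(x) ≡ x > 5, a predicate over one variable
f :: Int -> Bool
f x = x > 5

-- G(x, y) ≡ x > y, a predicate over two variables
g :: Int -> Int -> Bool
g x y = x > y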

Definition 15.
Any term in the form F(x), where F is a predicate name,
and x is a variable name, is a well-formed formula.
Similarly, F(x1, x2, …, xk) is a well-formed formula;
this is a predicate containing k variables.

When predicate logic is used to solve a reasoning problem,
the first step is to translate from English (or mathematics)
into the formal language of predicate logic.
This requires defining the predicates.

For example we might define:
F(x) ≡ x > 0
G(x, y) ≡ x > y

The universe of discourse is often called simply the universe, abbreviated U.
U is the set of possible values that a variable can have.
Usually the universe is specified just once,
at the beginning of a piece of logical reasoning,
but this specification cannot be omitted.

For example, consider the statement:
‘For every x there exists a y such that x = 2 * y’.
If the universe is the set of even integers,
or the set of real numbers,
then the statement is true.
However, if the universe is the set of natural numbers,
then the statement is false (let x be any odd number).
If the universe doesn’t contain numbers at all,
then the statement is not true or false;
it is meaningless.

Several notational conventions are very common in predicate logic:

The universe is called U,
and its constants are written as lower-case letters,
typically c and p
(to suggest a constant value, or a particular value).

Variables are also lower-case letters,
typically x, y, z.

Predicates are upper-case letters F, G, H, …

For example, F(x) is a valid expression in the language of predicate logic,
and its intuitive meaning is ‘the variable x has the property F’.

Generic expressions are written with a lower-case predicate.
For example f(x) could stand for any predicate, f, applied to a variable x.

1.5.1.2 Quantifiers

There are two quantifiers in predicate logic;
these are the special symbols ∀ and ∃.

The upside-down A symbol is intended to remind you of All.

The backwards E symbol is intended to remind you of Exists.

1.5.1.2.1 For all: ∀

Definition 16.
If F(x) is a well-formed formula containing the variable x,
then ∀ x. F(x) is a well-formed formula called a universal quantification.

This is a statement that everything in the universe has a certain property:
‘For all x in the universe, the predicate F(x) holds’.

An alternative reading is:
‘Every x has the property F’.

Universal quantifications are often used to state required properties.
For example, if you want to say formally that:
a computer program will give the correct output for all inputs,
then you would use ∀.

Example 1.
Let U be the set of even numbers.
Let E(x) mean x is even.
Then ∀ x. E(x) is a well-formed formula,
and its value is true.

Example 2.
Let U be the set of natural numbers.
Let E(x) mean x is even.
Then ∀ x. E(x) is a well-formed formula,
and its value is false.

1.5.1.2.2 There exists: ∃

Definition 17.
If F(x) is a well-formed formula containing the variable x,
then ∃ x. F(x) is a well-formed formula called an existential quantification.

This is a statement that something in the universe has a certain property:
‘There exists an x in the universe, for which the predicate F(x) holds’.

An alternative reading is:
‘Some x has the property F’.

Existential quantifications state properties that occur at least once.

For example,
stating that a database contains a record for a certain person
would be done with ∃.

Example 3.
Let U be the set of natural numbers.
Let F(x, y) be defined as 2 * x = y.
Then ∃ x. F(x, 6) is a well-formed formula;
it says that there is a natural number x, which gives 6 when doubled;
3 satisfies the predicate, so the formula is true.
However, ∃ x. F(x, 7) is false.

Quantified expressions can be nested.

For example, let the universe be the set of integers, and define:
F(x) = ∃ y. x < y
Thus F(x) means:
‘There is some number larger than x’.
Now we can say that every integer has this property, by stating:
∀ x. F(x)
An equivalent way to write this is:
∀ x. (∃ y. x < y)
The parentheses are not required,
since there is no ambiguity in writing:
∀ x. ∃ y. x < y
All the quantified variables must be members of the universe.

In the example above, both x and y are integers.
However, it is often useful to have several variables,
that are members of different sets.

For example, suppose we are reasoning about people who live in cities,
and want to make statements like:
‘There is at least one person living in every city’.
It is natural to define L(x, y) to mean:
‘The person x lives in the city y’,
and the expression ∀ x. ∃ y. L(y, x) then means:
‘For all cities there is somebody living in it’.
But what is the universe?

The way to handle this problem is to:
define a separate set of possible values for each variable.

For example, let C = {London, Paris, Los Angeles, München} be the set of cities,
and let P = {Smith, Jones, ··· } be the set of people.
Now we can let the universe contain all the possible variable values:
U = C ∪ P.
Quantified expressions need to restrict each variable,
to the set of relevant values,
as it is no longer intended that a variable x could be any element of U.
This is expressed by writing:
∀ x ∈ S. F(x) or
∃ x ∈ S. F(x),
which say that x must be an element of S
(and therefore also a member of the universe).
Now the statement:
‘For all cities, there exists a person from the set of people, living in it’
is written:
∀ x ∈ C. ∃ y ∈ P. L(y, x).

Universal quantification (∀) over an empty set is vacuously true,
and existential quantification over an empty set is vacuously false.
Often we require that the universe is non-empty,
so quantifications over the universe are not automatically true or false.

1.5.1.2.3 Expanding Quantified Expressions

If the universe is finite (or if the variables are restricted to a finite set),
expressions with quantifiers can be interpreted as ordinary terms in propositional logic.
Suppose U = {c1, c2, …, cn},
where the size of the universe is n. 
Then quantified expressions can be expanded as follows:

(7.1) ∀ x.F(x) = F(c1) ∧ F(c2) ∧ ··· ∧ F(cn)
(7.2) ∃ x.F(x) = F(c1) ∨ F(c2) ∨ ··· ∨ F(cn)

With a finite universe, the quantifiers are just syntactic abbreviations.
With a small universe, one can reason directly with the expanded expressions.
In many computing applications the universe is finite,
but may contain millions of elements;
in this case the quantifiers are needed to make logical reasoning practical,
although they are not needed in principle.
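
When a finite universe is represented as a Haskell list, Equations 7.1 and 7.2 can be implemented directly (a minimal sketch; forAll and exists are illustrative names, not library functions):

-- Equation 7.1: ∀ x. F(x) expands to a conjunction over the universe
forAll :: [a] -> (a -> Bool) -> Bool
forAll u f = and [f x | x <- u]

-- Equation 7.2: ∃ x. F(x) expands to a disjunction over the universe
exists :: [a] -> (a -> Bool) -> Bool
exists u f = or [f x | x <- u]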

If the variables are not restricted to a finite set,
then it is impossible, even in principle,
to expand a quantified expression.

It may be intuitively clear to write
F(c1) ∧ F(c2) ∧ F(c3) ∧ ··· ,
but this is not a well-formed formula.
Every well-formed formula has a finite size,
although there is no bound on how large a formula may be.
This means that in the presence of an infinite universe,
quantifiers make the language of predicate logic more expressive than propositional logic.

The expansion formulas, Equations 7.1 and 7.2, are useful for computing with predicates.

Example 4.
Let the universe U = {1, 2, 3},
and define the predicates even and odd as follows:

even x ≡ (x mod 2 = 0)
odd x ≡ (x mod 2 = 1)

Two quantified expressions will be expanded and evaluated,
using these definitions:

∀ x. even x → ¬(odd x)
  = (even 1 → ¬(odd 1)) ∧ (even 2 → ¬(odd 2)) ∧ (even 3 → ¬(odd 3))
  = (False → ¬True) ∧ (True → ¬False) ∧ (False → ¬True)
  = True ∧ True ∧ True
  = True
∃ x. (even x ∧ odd x)
  = (even 1 ∧ odd 1) ∨ (even 2 ∧ odd 2) ∨ (even 3 ∧ odd 3)
  = (False ∧ True) ∨ (True ∧ False) ∨ (False ∧ True)
  = False ∨ False ∨ False
  = False
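
Using the forAll and exists sketch above, Example 4 can be evaluated mechanically (implies is the same illustrative helper as before, repeated here so the sketch is self-contained):

implies :: Bool -> Bool -> Bool
implies a b = not a || b

-- ∀ x. even x → ¬(odd x), over U = {1, 2, 3}:
-- forAll [1, 2, 3] (\x -> even x `implies` not (odd x))   evaluates to True

-- ∃ x. even x ∧ odd x, over U = {1, 2, 3}:
-- exists [1, 2, 3] (\x -> even x && odd x)                evaluates to False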

Example 5.
Let S = {0, 2, 4, 6} and R = {0, 1, 2, 3}.
Then we can state that:
every element of S is twice some element of R as follows:
∀ x ∈ S. ∃ y ∈ R. x = 2 × y

This can be expanded into a quantifier-free expression in two steps.
The first step is to expand the outer quantifier:

(∃ y ∈ R. 0 = 2 × y)
∧ (∃ y ∈ R. 2 = 2 × y)
∧ (∃ y ∈ R. 4 = 2 × y)
∧ (∃ y ∈ R. 6 = 2 × y)

The second step is to expand all four of the remaining quantifiers:
((0 = 2 × 0) ∨ (0 = 2 × 1) ∨ (0 = 2 × 2) ∨ (0 = 2 × 3))
∧ ((2 = 2 × 0) ∨ (2 = 2 × 1) ∨ (2 = 2 × 2) ∨ (2 = 2 × 3))
∧ ((4 = 2 × 0) ∨ (4 = 2 × 1) ∨ (4 = 2 × 2) ∨ (4 = 2 × 3))
∧ ((6 = 2 × 0) ∨ (6 = 2 × 1) ∨ (6 = 2 × 2) ∨ (6 = 2 × 3))

Two short cuts have been taken in the notation here.

  1. Since every quantified variable is restricted to a set,
    the universe was not stated explicitly;
    however we can define U = S ∪ R.

  2. Instead of defining F(x, y) to mean x = 2 × y,
    and writing ∀ x ∈ S. ∃ y ∈ R. F(x, y),
    we simply wrote the expression x = 2 × y inside the expression.
    Both short cuts are frequently used in practice.
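
The nested quantifiers of Example 5 can be evaluated the same way, without writing out the expansion by hand (again using the illustrative forAll and exists sketched above):

-- ∀ x ∈ S. ∃ y ∈ R. x = 2 × y
-- forAll [0, 2, 4, 6] (\x -> exists [0, 1, 2, 3] (\y -> x == 2 * y))
-- evaluates to True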

1.5.1.2.4 The Scope of Variable Bindings

Quantifiers bind variables by assigning them values from a universe.
An expression without quantification,
such as x + 2, has no explicit variable binding.
If such an expression appears,
then x is assumed implicitly to be an element of the universe,
and the author should have told you explicitly somewhere what the universe is.

The extent of a variable binding is called its scope.

For example, re-inserting the parentheses illustrates binding:
the scope of x in the expression ∃ x. F(x) is the sub-expression F(x).
For ∀ x ∈ S. (∃ y ∈ R. F(x, y)),
the scope of x is ∃ y ∈ R. F(x, y),
and the scope of y is F(x, y).
This example was not ambiguous without the parentheses,
so they could be omitted.

Use parentheses to make expressions un-ambiguous, where needed:
The expression:
∀ x. p(x) ∨ q(x)
is not clear: it can be read in two different ways. It could mean either
∀ x. (p(x) ∨ q(x))
or
(∀ x. p(x)) ∨ q(x).

It is probably best to use parentheses in case of doubt,
but there is a convention that resolves unclear expressions:
the quantifier extends over the smallest subexpression possible,
unless parentheses indicate otherwise.
In other words, the scope of a variable binding is the smallest possible.

So, in the assertion given above,
the variable x in q(x) is not bound by the ∀,
so it must have been bound at some outer level
(i.e., this expression has to be embedded inside a bigger one).
Often the same quantifier is used several times in a row,
to define several variables:
∀ x. ∀ y. F(x, y)
It is common to write this in an abbreviated form,
with just one use of the ∀ operator,
followed by several variables separated by commas.
For example, the previous expression would be abbreviated as follows:
∀ x, y. F(x, y)
This abbreviation may be used for any number of variables,
and it can also be used if the variables are restricted to be in a set,
as long as they all have the same restriction.
For example, the abbreviated expression:
∀ x, y, z ∈ S. F(x, y, z)
is equivalent to the full expression
∀ x ∈ S. ∀ y ∈ S. ∀ z ∈ S. F(x, y, z).

1.5.1.2.5 Translating Between English and Logic

Sometimes it is straightforward to translate an English statement into logic.
If an English statement has no internal structure that is relevant to the reasoning,
then it can be represented by an ordinary propositional variable:

EB ≡ Elephants are big.
LF ≡ Lizards are furry.
LP ≡ Lizards are good pets.

An English statement built up with words like and, or, not, therefore, and so on,
where the meaning corresponds to the logical operators,
can be represented by a propositional expression.

¬EB ≡ Elephants are small.
EB ∧ LF ≡ Elephants are big and Lizards are furry.
LF → LP ≡ If Lizards are furry, then they make good pets.

Notice in the examples above,
that no use has been made of the internal structure of the English statements.
(The sentence ‘elephants are small’ may appear to violate this,
but it could just as easily have been written:
‘it is untrue that elephants are big’,
which corresponds exactly to ¬EB.)

When general statements are made about classes of objects,
then predicates and quantifiers are needed in order to draw conclusions.
For example, suppose we try these definitions in propositional logic,
without using predicates:

SP ≡ Small animals are good pets.
LA ≡ Lizards are animals.
LS ≡ Lizards are small.

In ordinary conversation,
it would be natural to conclude that Lizards are good pets,
but this cannot be concluded with propositional logic.
All we have are the three propositions SP, LA, and LS,
and nothing else is known,
so the only conclusions that can be drawn are uninteresting ones,
like SP ∧ LA, LS ∨ SP, and the like.
The substantive conclusion, that Lizards are good pets,
requires reasoning about the internal structure of the English statements.
The solution is to use predicates,
to give a more refined translation of the sentences:

A(x) ≡ x is an animal.
L(x) ≡ x is a Lizard.
S(x) ≡ x is small.
GP(x) ≡ x is a good pet.

Now a much richer kind of English sentence can be translated into predicate logic:

∀ x. L(x) → A(x) ≡ Lizards are animals.
∀ x. L(x) → S(x) ≡ Lizards are small.
∀ x. L(x) → S(x) ∧ A(x) ≡ Lizards are small animals.
∀ x. S(x) ∧ A(x) → GP (x) ≡ Small animals are good pets.

Logic to English:
It is straightforward to translate from formal predicate logic into English,
by turning each logical operator directly into an English word or phrase.
For example, where U is the set of all physical things:

∀ x ∈ U. S(x) ∧ A(x) → GP(x)

could be translated into English literally:

  1. For every thing,
    if that thing is small,
    and that thing is an animal,
    then that thing is a good pet.

This is graceless English,
but at least it’s comprehensible and correct.
The style can be improved:

  2. Everything which is small,
    and which is an animal,
    is a good pet.

Even better would be:

  3. Small animals make good pets.

Such stylistic improvements in the English are optional.
Improvement to the literary style should not affect the meaning,
but this is a question of proper usage of natural language,
not of formal logic.

English to logic:
It is sometimes trickier to translate from English into formal logic,
because the English usually does not map reliably
to the logical quantifiers and operators.
Sentence (1) above can be translated straightforwardly into logic,
but sentence (3) is harder.
The difficulty is not really in the logic;
it is in figuring out exactly what the English sentence says.

Often the real difficulty in translating English into logic,
is in figuring out what the English says,
or what the speaker meant to say.

For example, many people make statements like:
‘All people are not rich’.
What this statement actually says is:

∀ x. ¬R(x),

where the universe is the set of people, and
R(x) means ‘x is rich’.
What is usually meant, however, by such a statement is

¬∀ x. R(x),

that is, it is not true that all people are rich
(alternatively, not all people are rich).
The intended meaning is equivalent to

∃ x. ¬R(x).

Such problems of ambiguity or incorrect grammar in English cannot be solved mathematically,
but they do illustrate one of the benefits of mathematics:
simply translating a problem from English into formal logic,
may expose confusion or misunderstanding.

This is true of writing computer models to capture the real world,
formal proofs, and many other attempts at formalizing the informal.
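
The difference between the three readings can be made concrete over a tiny finite universe (a sketch; the people list and rich predicate are invented for illustration, and forAll and exists are the expansion functions sketched earlier):

-- a tiny universe of people, paired with whether each one is rich
people :: [(String, Bool)]
people = [("Alice", True), ("Bob", False)]

rich :: (String, Bool) -> Bool
rich = snd

literal    = forAll people (not . rich)   -- ∀ x. ¬R(x): False here
intended   = not (forAll people rich)     -- ¬∀ x. R(x): True here
equivalent = exists people (not . rich)   -- ∃ x. ¬R(x): True here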

Example 6.
Consider the translation of the following sentence into logic:
‘Some birds can fly’
Let the universe be a set that contains all animals:

Let:
B(x) mean ‘x is a bird’
F(x) mean ‘x can fly’

Then ‘Some birds can fly’ is translated as
∃ x ∈ U. (B(x) ∧ F(x))

Warning! A common pitfall is to translate ‘Some birds can fly’ as

∃ x ∈ U. (B(x) → F(x)) Wrong translation!

To see why this is wrong,
let p be a frog.
Now B(p) is false, so B(p) → F(p) is true
(remember False → False = True).

This is just saying:
‘If that frog were a bird,
then it would be able to fly’,
which is true;
it doesn’t mean the frog actually is a bird,
or that it actually can fly.
However, we have now found a value of x, namely the frog p,
for which B(x) → F(x) is true,
and that is enough to satisfy ∃ x ∈ U. (B(x) → F(x)),
even if all the birds in the universe happen to be penguins (which cannot fly).
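
The pitfall is easy to demonstrate over a small universe that contains no flying birds at all (a sketch; the universe and predicates are invented for illustration, and forAll, exists, and implies are the helpers sketched earlier):

-- a universe containing a frog and a penguin; nothing in it can fly
creatures :: [String]
creatures = ["frog", "penguin"]

isBird, canFly :: String -> Bool
isBird x = x == "penguin"
canFly _ = False

-- correct translation: ∃ x. B(x) ∧ F(x)
someBirdsFly = exists creatures (\x -> isBird x && canFly x)
-- evaluates to False, as it should

-- wrong translation: ∃ x. B(x) → F(x)
wrong = exists creatures (\x -> isBird x `implies` canFly x)
-- evaluates to True: the frog satisfies B(x) → F(x) vacuously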

1.5.2 Logical Inference with Predicates

The inference rules for propositional logic can be extended,
to handle predicate logic as well.
Four additional rules are required:
an introduction rule and an elimination rule,
for both of the quantifiers ∀ and ∃.

Summary of rules:

Universal Introduction {∀I}:
\[\frac{F(x) \qquad \{x\ arbitrary\}}{\forall x.F(x)}\{\forall I\}\]

Universal Elimination {∀E}:
\[\frac{\forall x.F(x)}{F(p)}\{\forall E\}\]

Existential Introduction {∃I}:
\[\frac{F(p)}{\exists x.F(x)}\{\exists I\}\]

Existential Elimination {∃E}:
\[\frac{\exists x.F(x) \qquad F(x) \vdash A \qquad \{x\ not\ free\ in\ A\}}{A}\{\exists E\}\]

A good way to understand the inference rules of predicate logic is:
to view them as generalisations of the corresponding rules of propositional logic.

For example,
there is a similarity between inferring F(P) ∧ F(Q) in propositional logic,
and inferring ∀ x. F(x) in predicate logic.

If the universe is finite,
then predicate logic is not necessary, in principle.
We could express ∀ x. F(x) as
F(p1) ∧ F(p2) ∧ … ∧ F(pn),
where n is the size of the universe.
If the universe is infinite, however,
then the inference rules of predicate logic allow deductions
that would be impossible using just the propositional logic rules.
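
For a finite universe this expansion can be computed directly;
here is a minimal Haskell sketch (the universe is invented for illustration):

smallUniverse :: [Int]
smallUniverse = [1, 2, 3, 4]

-- ∀ x. F(x) as the conjunction F(p1) ∧ F(p2) ∧ … ∧ F(pn)
forallAsConjunction :: (Int -> Bool) -> Bool
forallAsConjunction f = and [f p | p <- smallUniverse]

-- ∃ x. F(x) as the disjunction F(p1) ∨ F(p2) ∨ … ∨ F(pn)
existsAsDisjunction :: (Int -> Bool) -> Bool
existsAsDisjunction f = or [f p | p <- smallUniverse]

-- forallAsConjunction even = False, existsAsDisjunction even = True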

We will always assume that the universe is non-empty.
This is especially important with the rule for forall elimination,
which is invalid when the universe of discourse is empty.
Also, in the rule for exists elimination,
the conclusion of the rule
(which is also the conclusion of the sequent in the hypothesis of the rule)
must be a proposition that does not depend in any way
on the element chosen from the universe of discourse.
That is, the proposition A in the statement of the rule
must not be a predicate that depends on x.

1.5.2.1 Universal Introduction {∀I}:

\[\frac{F(x) \qquad \{x\ arbitrary\}}{\forall x.F(x)}\{\forall I\}\]

A standard proof technique,
which is used frequently in mathematics and computer science,
is to state and prove a property about an arbitrary value x,
which is an element of the universe,
and then to interpret this as a statement about all elements of the universe.
A typical example is the following simple theorem about Haskell lists,
which says that there are two equivalent ways
to attach an element x to the front of a list xs.
The definition of (++) is

(++) :: [a] -> [a] -> [a]
[] ++ ys = ys
(x:xs) ++ ys = x : (xs ++ ys)

Theorem 57.
Let x :: a and xs :: [a].
Then x : xs = [x] ++ xs.

This theorem is stating a property about arbitrary x and xs.
It really means that the property holds for all values of the variables,
and this could be stated more formally with an explicit quantifier:

Theorem 58.

∀ x :: a
∀ xs :: [a]
x : xs = [x] ++ xs

These two theorems have exactly the same meaning;
the only difference is the style in which they are expressed:
the first is a little more like English,
and the second is a little more formal.
Both styles are common.
For a theorem in the first style,
the use of arbitrary variables indicates an implicit ∀.
Now, consider the proof of this theorem;
the following proof could be used for either the formal,
or the informal statement of the theorem:

[x] ++ xs
    = (x : []) ++ xs    def. of notation
    = x : ([] ++ xs)    (++).2
    = x : xs            (++).1

Again, there is a significant point about this proof:
it consists of formal reasoning about one value x and one value xs,
but these values are arbitrary,
and the conclusion we reach at the end of the proof,
is that the theorem is true for all values of the variables.
If we can prove a theorem for an arbitrary variable,
then we infer that the theorem is true,
for all possible values of that variable.
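
As an aside, the property of Theorem 57 can also be checked mechanically
on random test data, for example with the QuickCheck library
(a sketch; such testing samples finitely many values
and is no substitute for the proof, which covers them all):

import Test.QuickCheck

-- The property of Theorem 57 as an executable predicate.
prop_consAppend :: Int -> [Int] -> Bool
prop_consAppend x xs = x : xs == [x] ++ xs

-- quickCheck prop_consAppend typically reports:
--   +++ OK, passed 100 tests.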

These ideas are expressed formally by the inference rule,
which says that if the expression a
(which may contain a variable x)
can be proved for arbitrary x,
then we may infer the proposition ∀ x. a.
Since this rule specifies what we need to know
in order to infer an expression containing ∀, its name is {∀I}.

To clarify exactly what this rule means,
we will compare two examples:
one where it can be used,
and one where it cannot.

Let the universe be the set of natural numbers N,
and let E(x) be the proposition ‘x is even’.
First, consider the following theorem:

Theorem 59.
\(\vdash\) ∀ x. E(x) → E(x) ∨ ¬E(x)

The proof uses the ∀ introduction rule,
applied to an arbitrary element p of the universe.
The important point is that
the inference does not depend on the particular value of p;
the value of p is arbitrary,
and the {∀I} rule allows us to infer
∀ x. E(x) → E(x) ∨ ¬E(x).

Proof.
\[\frac{\frac{\frac{\boxed{E(p)}}{E(p) \lor \lnot E(p)} \{\lor I_L\}}{E(p) \rightarrow E(p) \lor \lnot E(p)} \{\rightarrow I\}}{\forall x.E(x) \rightarrow E(x) \lor \lnot E(x)}\{\forall I\}\]

Now consider the following incorrect proof,
which purports to show that all natural numbers are even:

Wrong:
\[\frac{E(2)}{\forall x.E(x)}\{\forall I\}\]

The theorem E(2) is established for a particular value, 2.
However, 2 is not arbitrary:
the proof that 2 is even relies on its value,
and we could not substitute 3 without invalidating it.
Because we have not proved E(x) for an arbitrary value,
the requirements of the {∀I} inference rule are not satisfied,
and we cannot conclude ∀ x.E(x).

Arbitrary variables are used in semi-formal treatments of mathematical logic.
In a completely formal presentation of logic,
for example in proof-checking software,
the requirement that variables be arbitrary is replaced
by a purely syntactic constraint.

The problem with talking about an arbitrary variable
is that arbitrary is an adjective, like blue or green.
How can you look at a variable
and decide whether it is or is not “arbitrary”?
In short, you cannot,
because arbitrariness is a property
of the context in which the variable appears,
not of the variable itself.

The syntactic constraint on the {∀I} rule is
that the variable x is not allowed to appear “free”
(that is, unbound by a quantifier)
in any undischarged hypothesis that was used to infer F(x).
For example, here is another wrong inference:

Wrong:
\[\frac{\frac{x=2}{E(x)}}{\forall x.E(x)}\{\forall I\}\]

In this case, we have assumed x = 2,
and then inferred E(x).
This would actually require several inference steps,
or recourse to an auxiliary theorem,
but it is shown here as a single inference.
Now, we have E(x) above the line,
so it looks like we can use the {∀I} rule to infer ∀ x. E(x).
However, the proof above the line of E(x) was based on the assumption x = 2,
and x appears free in that assumption.
Furthermore, the assumption was not discharged.
The syntactic requirement for the {∀I} rule has not been satisfied,
and the inference is not valid.

The syntactic constraint is really just a more formal version
of the “arbitrary variable” requirement.
You can explore these issues in more detail in the proof checker software.

1.5.2.2 Universal Elimination {∀E}:

\[\frac{\forall x.F(x)}{F(p)}\{\forall E\}\]

The universal elimination rule says that:
if you have established ∀ x. F(x),
and p is a particular element of the universe,
then you can infer F(p).

The following theorem allows you to apply a universal implication,
in the form ∀ x. F(x) → G(x),
to a particular proposition F(p),
and its proof illustrates the {∀E} inference rule.

Theorem 60. F(p), ∀ x. F(x) → G(x) \(\vdash\) G(p)

Proof.
\[\frac{F(p) \qquad \frac{\forall x.F(x) \rightarrow G(x)}{F(p) \rightarrow G(p)}\{\forall E\}}{G(p)}\{\rightarrow E\}\]

Previously, the implication chain theorem was proved;
it says that from a → b and b → c you can infer a → c.
The {∀I} inference rule can also be used
to prove the corresponding theorem on universal implications:
from ∀ x. F(x) → G(x) and ∀ x. G(x) → H(x),
you can infer ∀ x. F(x) → H(x).

However, in order to use {∀I} we must first establish,
for an arbitrary p in the universe, that F(p) → H(p),
and the proof of that proposition requires using the {∀E} rule twice,
to prove the particular propositions
F(p) → G(p) and G(p) → H(p).

Theorem 61. ∀ x. F(x) → G(x), ∀ x. G(x) → H(x) \(\vdash\) ∀ x. F(x) → H(x)

Proof.

\[\frac{\frac{\forall x.F(x) \rightarrow G(x)}{F(p) \rightarrow G(p)}\{\forall E\} \qquad \frac{\forall x.G(x) \rightarrow H(x)}{G(p) \rightarrow H(p)}\{\forall E\}}{\frac{F(p) \rightarrow H(p)}{\forall x.F(x) \rightarrow H(x)} \{\forall I\}}\{Implication\ chain\}\]

The following theorem says that:
you can change the order in which the variables are bound in:
∀ x. ∀ y. F(x, y).
This theorem is simple but extremely important.

Theorem 62. ∀ x. ∀ y. F(x, y) \(\vdash\) ∀ y. ∀ x. F(x, y)

Proof.

\[\frac{\frac{\frac{\forall x. \forall y. F(x,y)}{\forall y. F(p,y)}\{\forall E\}}{F(p,q)}\{\forall E\}}{\frac{\forall x. F(x,q)}{\forall y. \forall x. F(x,y)}\{\forall I\}}\{\forall I\}\]
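
Over a finite universe the two sides of Theorem 62 can be compared
by direct computation (a sketch; the universe is invented for illustration):

swapUniverse :: [Int]
swapUniverse = [0, 1, 2]

nestedLHS, nestedRHS :: (Int -> Int -> Bool) -> Bool
nestedLHS f = all (\x -> all (\y -> f x y) swapUniverse) swapUniverse  -- ∀ x. ∀ y. F(x,y)
nestedRHS f = all (\y -> all (\x -> f x y) swapUniverse) swapUniverse  -- ∀ y. ∀ x. F(x,y)

-- nestedLHS (<=) == nestedRHS (<=); the two sides agree for any f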

The following theorem says that
if, for all x, a proposition P implies f(x),
then P implies ∀ x. f(x).
This allows you to pull a proposition P,
which does not use x,
out of an implication bound by ∀.

Theorem 63. ∀ x. P → f(x) \(\vdash\) P → ∀ x. f(x)

Proof.

\[\frac{\frac{\frac{\boxed{P} \qquad \frac{\forall x.P \rightarrow f(x)}{P \rightarrow f(c)}\{\forall E\}}{f(c)}\{\rightarrow E\}}{\forall x.f(x)}\{\forall I\}}{P \rightarrow \forall x.f(x)}\{\rightarrow I\}\]

(where c is an arbitrary element of the universe)

1.5.2.3 Existential Introduction {∃I}:

\[\frac{F(p)}{\exists x.F(x)}\{\exists I\}\]

The {∃I} rule says that
if F(p) has been established for a particular p,
then you can infer ∃ x. F(x).

The following theorem says that:
if F(x) holds for all elements of the universe,
then it must hold for one of them.
Recall that we require the universe of discourse to be non-empty;
otherwise this theorem would not hold.

Theorem 64. ∀ x. F(x) \(\vdash\) ∃ x. F(x)

Proof.

\[\frac{\frac{\forall x.F(x)}{F(p)}\{\forall E\}}{\exists x.F(x)}\{\exists I\}\]
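
The need for a non-empty universe is easy to see computationally:
using the Prelude's all and any as the quantifiers,
a ∀ over the empty universe holds vacuously
while the corresponding ∃ fails (a minimal sketch):

vacuousForall, emptyExists :: Bool
vacuousForall = all even ([] :: [Int])    -- True:  ∀ over the empty universe
emptyExists   = any even ([] :: [Int])    -- False: ∃ over the empty universe

If the universe could be empty, Theorem 64 would therefore fail.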

1.5.2.4 Existential Elimination {∃E}:

\[\frac{\exists x.F(x) \qquad F(x) \vdash A \qquad \{x\ not\ free\ in\ A\}}{A}\{\exists E\}\]

Recall the {∨E} inference rule of propositional logic;
this says that if you know a ∨ b,
and also that c follows from a and c follows from b,
then you can infer c.

If the universe is finite,
then ∃ x. F(x) can be expressed in the form:
F(p1) ∨ … ∨ F(pn),
where the universe is {p1, …, pn}.
We could extend the {∨E} rule so that
if we know that F(pi) holds for some i,
and that A follows from F(x) for arbitrary x,
then A can be inferred.

The existential elimination rule {∃E} captures this idea,
and it provides a much more convenient tool for reasoning
than repeated applications of {∨E}.
Its fundamental importance, however, is that
{∃E} may also be used if the universe is infinite.

This means it is more powerful than {∨E},
which can be used only for an ∨ expression
with a finite number of terms.
Recall that a proof must have a finite length.

The following theorem gives an example of {∃E}.
It says that:
if P(x) always implies Q(x),
and also that P(x) holds for some x,
then Q(x) also holds for some x.

Theorem 65.
∃ x. P(x), ∀ x. P(x) → Q(x) \(\vdash\) ∃ x. Q(x)

Proof.

\[\frac{\exists x.P(x) \qquad \frac{\frac{\boxed{P(c)} \qquad \frac{\forall x.P(x) \rightarrow Q(x)}{P(c) \rightarrow Q(c)}\{\forall E\}}{Q(c)}\{\rightarrow E\}}{\exists x.Q(x)}\{\exists I\}}{\exists x.Q(x)}\{\exists E\}\]

The following theorem says that
a ∀ directly inside an ∃
can be brought outside the ∃.

Theorem 66.
∃ x. ∀ y. F(x, y) \(\vdash\) ∀ y. ∃ x. F(x, y)

Proof.

\[\frac{\exists x. \forall y. F(x,y) \qquad \frac{\frac{\frac{\boxed{\forall y. F(p,y)}}{F(p,q)}\{\forall E\}}{\exists x. F(x,q)}\{\exists I\}}{\forall y. \exists x. F(x,y)}\{\forall I\}}{\forall y. \exists x. F(x,y)}\{\exists E\}\]
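
A finite example illustrates the theorem,
and also shows that its converse does not hold
(a sketch; the universe and predicate are invented):
taking F(x,y) to be x = y over {1, 2},
the ∀∃ form is true but the ∃∀ form is false.

pairUniverse :: [Int]
pairUniverse = [1, 2]

existsForall, forallExists :: (Int -> Int -> Bool) -> Bool
existsForall f = any (\x -> all (\y -> f x y) pairUniverse) pairUniverse  -- ∃ x. ∀ y. F(x,y)
forallExists f = all (\y -> any (\x -> f x y) pairUniverse) pairUniverse  -- ∀ y. ∃ x. F(x,y)

-- existsForall (==) = False, but forallExists (==) = True,
-- so the implication of Theorem 66 cannot be reversed.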

1.5.3 Algebraic Laws of Predicate Logic

We have covered predicate logic as a natural deduction inference system.
An alternative style of reasoning
is based on a set of algebraic laws about propositions with predicates,
listed here:

∀ x. f(x)                  →   f(c)
f(c)                       →   ∃ x. f(x)

∀ x. ¬ f(x)                =   ¬∃ x. f(x)
∃ x. ¬ f(x)                =   ¬∀ x. f(x)

Provided that x does not occur free in q:
(∀ x. f(x)) ∧ q            =   ∀ x. (f(x) ∧ q)
(∀ x. f(x)) ∨ q            =   ∀ x. (f(x) ∨ q)
(∃ x. f(x)) ∧ q            =   ∃ x. (f(x) ∧ q)
(∃ x. f(x)) ∨ q            =   ∃ x. (f(x) ∨ q)

(∀ x. f(x)) ∧ (∀ x. g(x))  =   ∀ x. (f(x) ∧ g(x))
(∀ x. f(x)) ∨ (∀ x. g(x))  →   ∀ x. (f(x) ∨ g(x))
∃ x. (f(x) ∧ g(x))         →   (∃ x. f(x)) ∧ (∃ x. g(x))
(∃ x. f(x)) ∨ (∃ x. g(x))  =   ∃ x. (f(x) ∨ g(x))

This is not the minimal possible set of laws;
some of them correspond to inference rules,
and others are provable as theorems.
The focus in this section, however,
is on practical calculations using the laws,
rather than on theoretical foundations.

The following two laws express, in algebraic form,
the {∀E} and {∃I} inference rules.
As they correspond to inference rules,
these laws are logical implications, not equations.

∀ x. f(x) → f(c)
f(c) → ∃ x. f(x)

In both of these laws, x is bound by the quantifier,
and it may be any element of the universe.
The element c is any fixed element of the universe.

Thus the first law says that
if the predicate f holds for all elements of the universe,
then it must hold for any particular element c.

The second law says that
if f holds for some particular element c,
then it must hold for at least one element of the universe.

The following theorem combines these two laws,
and is often useful in proving other theorems.
Its proof uses the line-by-line style,
which is standard when reasoning about predicate logic with algebraic laws.

Theorem 67.
∀ x. f(x) → ∃ x. f(x)

Proof.

∀ x. f(x)
  → f(c)         {∀E law}
  → ∃ x. f(x)    {∃I law}

The next two laws state how the quantifiers combine with logical negation.

The first one says that:
if f(x) is always false,
then it is never true;

the second says that:
if f(x) is ever untrue,
then it is not always true.

∀ x. ¬ f(x) = ¬∃ x. f(x)
∃ x. ¬ f(x) = ¬∀ x. f(x)
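
Both negation laws are easy to confirm over a finite universe;
they are De Morgan's laws generalised to quantifiers
(a sketch, with an invented universe):

negUniverse :: [Int]
negUniverse = [1 .. 5]

negLaw1, negLaw2 :: (Int -> Bool) -> Bool
negLaw1 f = all (not . f) negUniverse == not (any f negUniverse)  -- ∀ x. ¬f(x) = ¬∃ x. f(x)
negLaw2 f = any (not . f) negUniverse == not (all f negUniverse)  -- ∃ x. ¬f(x) = ¬∀ x. f(x)

-- negLaw1 even = True, negLaw2 even = True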

The following four laws show how
a predicate f(x) combines with a proposition q
that does not contain x.
These are useful for moving constant terms
into or out of quantified expressions.

(∀ x. f(x)) ∧ q = ∀ x. (f(x) ∧ q)
(∀ x. f(x)) ∨ q = ∀ x. (f(x) ∨ q)
(∃ x. f(x)) ∧ q = ∃ x. (f(x) ∧ q)
(∃ x. f(x)) ∨ q = ∃ x. (f(x) ∨ q)

The final group of laws concerns combinations
of quantifiers with ∧ and ∨.
Two of them are equations (or double implications),
whereas the other two are implications.
The implications can be used in only one direction,
and they must be applied at the top level,
not to subexpressions:

(∀ x. f(x)) ∧ (∀ x. g(x)) = ∀ x. (f(x) ∧ g(x))
(∀ x. f(x)) ∨ (∀ x. g(x)) → ∀ x. (f(x) ∨ g(x))
∃ x. (f(x) ∧ g(x)) → (∃ x. f(x)) ∧ (∃ x. g(x))
(∃ x. f(x)) ∨ (∃ x. g(x)) = ∃ x. (f(x) ∨ g(x))
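
A small counterexample shows why the middle two laws are implications
rather than equations (a sketch; even and odd are chosen for illustration).
Over the universe {0, 1}, every number is even or odd,
yet neither are all of them even nor all of them odd:

bitUniverse :: [Int]
bitUniverse = [0, 1]

lhsDisj, rhsDisj :: Bool
lhsDisj = all even bitUniverse || all odd bitUniverse  -- (∀ x. f(x)) ∨ (∀ x. g(x)) = False
rhsDisj = all (\x -> even x || odd x) bitUniverse      -- ∀ x. (f(x) ∨ g(x))        = True

Here rhsDisj holds while lhsDisj fails,
so the second law can be used only left to right;
a dual counterexample works for the third law.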

Example 9.
The following equation can be proved algebraically:

∀ x. (f(x) ∧ ¬g(x)) = ∀ x. f(x) ∧ ¬∃ x. g(x)

This is established through a sequence of steps.
Each step should be justified by one of the algebraic laws,
or by another equation that has already been proved.
When the purpose is actually to prove a theorem,
the justifications should be written explicitly.
Often this kind of reasoning is used informally,
like a straightforward algebraic calculation,
and the formal justifications are sometimes omitted.

∀ x. (f(x) ∧ ¬g(x))
  = ∀ x. f(x) ∧ ∀ x. ¬g(x)    {(∀ x. f(x)) ∧ (∀ x. g(x)) = ∀ x. (f(x) ∧ g(x))}
  = ∀ x. f(x) ∧ ¬∃ x. g(x)    {∀ x. ¬f(x) = ¬∃ x. f(x)}

Example 10.
The following implication says that
if f(x) sometimes implies g(x),
and f(x) is always true,
then g(x) is sometimes true.

(∃ x. (f(x) → g(x))) ∧ (∀ x. f(x)) → ∃ x. g(x)

The first step of the proof renames the local variable x to y in the ∀ expression.
This is not actually necessary,
but it may help to avoid confusion:
when the same variable plays different roles in different expressions,
and there seems to be a danger of getting them mixed up,
it is safest just to rename the local variable.
In the next step,
the ∀ expression is brought inside the ∃;
in the following step,
it becomes possible to pick a particular value for y:
namely, the x bound by the ∃.

(∃ x. (f(x) → g(x))) ∧ (∀ x. f(x))
  = (∃ x. (f(x) → g(x))) ∧ (∀ y. f(y))    {change of variable}
  = ∃ x. ((f(x) → g(x)) ∧ (∀ y. f(y)))    {(∃ x. f(x)) ∧ q = ∃ x. (f(x) ∧ q)}
  → ∃ x. ((f(x) → g(x)) ∧ f(x))           {∀ y. f(y) → f(x)}
  → ∃ x. g(x)                             {Modus Ponens}

1.5.3.0.1 Computing with Quantifiers

Code-DMUC/Stdm07PredicateLogic.hs

-- # Software Tools for Discrete Mathematics
module Stdm07PredicateLogic where

-- # Chapter 7.  Predicate Logic

-- For all: ∀
-- forAll u p is True iff the predicate p holds for
-- every element of the finite universe u.
forAll :: [Int] -> (Int -> Bool) -> Bool
forAll u p = all p u

-- There exists: ∃
-- exists u p is True iff the predicate p holds for
-- at least one element of the finite universe u.
exists :: [Int] -> (Int -> Bool) -> Bool
exists u p = any p u
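
For example, after loading the module in GHCi
(sample session, with outputs shown as comments):

-- ghci> forAll [1..10] even
-- False
-- ghci> exists [1..10] even
-- True
-- ghci> forAll [2,4,6] even
-- True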