Boolean Arithmetic

Mathematics can be highly abstract, even when it remains applicable to daily life. I want to show this with the mathematics behind logic puzzles, such as how to derive a conclusion using all of the following premisses:

  1. Babies are illogical.
  2. Nobody is despised who can manage a crocodile.
  3. Illogical persons are despised.

The example, from Terence Tao’s blog, is attributed to Lewis Carroll. By the first and third premisses, babies are despised; by the second premiss then, babies cannot manage crocodiles.

George Boole, The Laws of Thought (1854), Open Court, 1940

In the puzzle, if there were a fourth premiss, whereby babies could manage crocodiles, this would not be a contradiction; we could however conclude that there were no babies. From a fifth premiss, say “The child is the father of the man,” we might conclude that there was nobody at all; this would be contradicted by a sixth premiss, “Socrates is a man.”

If we are given an infinite list of premisses, and we cannot find a contradiction in the list, no matter how far we look, then, under certain conditions, all premisses on the list can be true together at once. This is the Compactness Theorem in logic; or let us say it is a particular compactness theorem, depending on what we mean precisely by a premiss. Such a theorem was first proved implicitly by Thoralf Skolem in 1922. Kurt Gödel proved it explicitly in 1930, though not by the name of compactness. We shall establish the simplest version of a compactness theorem, namely the Propositional Compactness Theorem. Expressed in different language, this is the simplest case of the Tychonoff Theorem (1930) in topology; however, I shall say little about topology, and I do not expect the reader to know (or to learn) anything about it. As far as I know, nobody made the connection between logic and topology until Alfred Tarski did, when addressing the International Congress of Mathematicians in 1950.

At the end, there will be an itemized summary of what we shall have done mathematically. On the way there, we shall consider mathematical symbolism as such, and its origin in the work of René Descartes. The logical aspect of our work originates with George Boole.

The infinite

I say that the mathematics to be discussed here will be connected to daily life. This may be a stretch, when we are talking about infinite lists. However, there is a simple example of an infinite list of premisses to which the Compactness Theorem does not apply:

  0. There is a counting number.
  1. It is not 1.
  2. It is not 2.
  3. It is not 3.
  4. It is not 4.

Not all of these assertions can be true at once about the same number. However, the first n of the assertions (numbered 0 to n − 1) are true about n, whatever counting number n may be. Thus not all of the assertions will meet the condition of being a premiss for a compactness theorem.

We commonly recognize the infinite, in the form of an immortal soul. Possibly we recognize this immortality, only to deny it: in this case, we say that one’s existence ends with the death of one’s body. Such a restricted choice of alternatives is perhaps part of a mathematical approach to life. It is however an example of the “fallacy of the Stock Dilemma,” in the terminology of The Presocratics (Prentice Hall, 1997, pages 17–19), by Philip Wheelwright—who observes that the fallacy inhibits our understanding of the ancient Greeks.

Two thousand years of Christian thought and controversy have conditioned most of us to framing religious questions in prescribed terms; with the usual result that a certain pair of answers is seen as representing the reasonable alternatives, one of them to be accepted and the other to be rejected, while other possible answers are ignored or dismissed as idle fancies …

Two examples of the Stock Dilemma are particularly important to recognize for anyone reared in the traditions of Christendom who would examine non-Christian religious phenomena thoughtfully and fairly. There is the problem of God, and there is the problem of survival after death …

… “Is man immortal or not?” means, to such contemporaries as may pause to consider the question, “Will my personal existence somehow continue forever, or will it cease to exist at the time of that event which we call death?” But clearly the alternatives are not exhaustive, for there is also the logically intermediate alternative of survival after death for a limited time, and also in any of a variety of ways having nothing to do with moral deserts …

The first thing to be aware of in attempting to understand ancient Greek religion, as indeed almost any religion outside the Judaeo-Christian area of influence, is the complementing importance of sky-religion and earth-religion, or, as they are commonly termed when referring to ancient Greece, Olympian or ouranian religion, and chthonic religion …

One of the ancient Greeks, Euclid, did prove a theorem about infinity, a theorem that G. H. Hardy used to illustrate the beauty of his subject in A Mathematician’s Apology (Cambridge University Press, 1940/1967). According to Hardy, what Euclid proved is “the existence of an infinity of prime numbers.” Euclid himself said, “The prime numbers are more than any given multitude of prime numbers” (Οἱ πρῶτοι ἀριθμοὶ πλείους εἰσὶ παντὸς τοῦ προτεθέντος πλήθους πρώτων ἀριθμῶν). The proof is that we can construct the endless list of true statements that starts out as follows:

  1. The number 2 is prime.
  2. The number 2 + 1, or 3, is prime.
  3. The number 2 ⋅ 3 + 1, or 7, is prime.
  4. The number 2 ⋅ 3 ⋅ 7 + 1, or 43, is prime.
  5. The number 2 ⋅ 3 ⋅ 7 ⋅ 43 + 1, or 1807, is not prime, but has a prime factor, 13.
  6. The number 2 ⋅ 3 ⋅ 7 ⋅ 43 ⋅ 13 + 1, or 23479, is not prime, but has a prime factor, 53.

At each step, we take the product of the primes that we know, and we add unity. Unity then is the remainder when, from our sum, we take away the greatest possible multiple of any of the primes that we know. Thus the sum is not a multiple of any of the primes that we already know. Therefore the sum either is prime itself or has a prime factor other than the primes that we already know. This is Euclid’s proof.
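
Euclid's construction can be acted out mechanically. Here is a minimal Python sketch of it, with names of my own choosing; at each step I take the least prime factor of the sum, which agrees with the list above.

    def least_prime_factor(n):
        """Return the least prime factor of n, for n > 1, by trial division."""
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d
            d += 1
        return n

    def euclid_primes(steps):
        """Carry out Euclid's construction: multiply the primes found so far,
        add unity, and take a prime factor (here the least) of the result."""
        primes = [2]
        for _ in range(steps - 1):
            product = 1
            for p in primes:
                product *= p
            primes.append(least_prime_factor(product + 1))
        return primes

    print(euclid_primes(6))   # [2, 3, 7, 43, 13, 53]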

We cannot know all of the primes, in the sense of having written them down. And yet there is an infinite list of statements that begins:

  1. The number 2 is prime.
  2. The numbers 2 and 3 are prime.
  3. The numbers 2, 3, and 7 are prime.
  4. The numbers 2, 3, 7, and 43 are prime.
  5. The numbers 2, 3, 7, 43, and 13 are prime.
  6. The numbers 2, 3, 7, 43, 13, and 53 are prime.

We know how to continue the list ad infinitum, and we allow ourselves to say the whole list is true at once. This is a simple instance of the Compactness Theorem.

What we need

An earlier posting of mine defined the Binary Tree as consisting of all of the finite binary sequences, namely those sequences, of each of which

  • the length is a natural number, and
  • the entries are zeroes and ones.

We can write such a sequence as

(e0, …, en−1).

Of this sequence,

  • the length is a natural number n, and
  • the entries are ek, where
    • k ranges over the natural numbers that are less than n, and
    • each ek itself is either 0 or 1.

Possibly n = 0, in which case the sequence is empty, though it is still something—an empty box—, which we can write as ( ).

A branch of the Binary Tree consists of all of the finite initial segments of an infinite sequence

(e0, e1, e2, …)

of zeros and ones. We shall now study these sequences alone, calling them infinite binary sequences, if not just binary sequences. Strictly speaking, they are binary sequences of length ω. Here ω (“omega”) is the symbol for the set {0, 1, 2, …} of all natural numbers. The Binary Tree that we are considering is the full binary tree of height ω. There are binary trees of greater height, but we shall not consider them. Nonetheless, by the end, we are going to have passed through several levels of abstraction or generalization or construction.

I intend not to require any particular knowledge of mathematics on the part of the reader, beyond some memory of polynomial equations as studied in high school. However, I have already assumed more: that the reader knows that a prime number is a number with no factors, other than itself and unity; and in this context, unity is not a number, and in particular is not prime, though otherwise we count unity as the first or least counting number.

I have also mentioned the set of all natural numbers, and I shall be mentioning other sets. A set is some things, gathered together, at least in mind, so as to be considered as composing a whole. A chess set comprises a board and 32 pieces. To describe some sets, English may use relative pronouns, where modern mathematics uses bound variables. Thus there is a set of toys that go “whirr” when they stand still; it is the set of all toys x such that x goes “whirr” when x stands still; hence the set might be written as {x ∈ T : φ(x)}, where T is the set of all toys, and φ is the predicate, “goes ‘whirr’ when standing still.”

The set comprising 0 and 1 is indifferently {0, 1} or {1, 0}. A sequence or list like (e0, …, en−1) might be called an ordered set; we shall see a way to understand it precisely as a mere set, though not the set {e0, …, en−1}.

The challenge

Even though I say that what I am going to prove in the end is only the simplest form of a theorem, the result is still, as we say, highly nontrivial. I hope at least to be able to give some sense, either of how far mathematics can go beyond high school, or else of what sophistication can be found lurking, even in the mathematics of high school.

In Mathematics: A Very Short Introduction (2002), Timothy Gowers raises the question,

What is it that causes many people to give mathematical subjects up as soon as they possibly can and remember them with dread for the rest of their lives?

I shall look at Gowers’s answer, but first I suggest that Gowers may be mistaken when he prefaces his question with the observation,

One does not often hear people saying that they have never liked biology, or English literature.

Not being a specialist in such fields, perhaps Gowers is not likely to hear people complaining about them. Still, as he must know, many persons do not read, because they do not like it. Moreover, many of us are afraid of writing, perhaps because it will reveal or belie our social class, or we think we are not artistic. In any case, horror of mathematics may arise as Gowers will suggest, and it may therefore be caused by the present article.

“Because mathematics continually builds on itself,” Gowers says,

it is important to keep up when learning it. For example, if you are not reasonably adept at multiplying two-digit numbers together, then you probably won’t have a good intuitive feel for the distributive law … Without this, you are unlikely to be comfortable with multiplying out the brackets in an expression such as (x + 2)(x + 3), and then you will not be able to understand quadratic equations properly. And if you do not understand quadratic equations, then you will not understand why the golden ratio is (1 + √5)/2.

There are many chains of this kind, but there is more to keeping up with mathematics than just maintaining technical fluency. Every so often, a new idea is introduced which is very important and markedly more sophisticated than those that have come before, and each one provides an opportunity to fall behind. An obvious example is the use of letters to stand for numbers, which many find confusing but which is fundamental to all mathematics above a certain level. Other examples are negative numbers, complex numbers, trigonometry, raising to powers, logarithms, and the beginnings of calculus. Those who are not ready to make the necessary conceptual leap when they meet one of these ideas will feel insecure about all the mathematics that builds on it.

The use of literal (lettered) expressions like ek is essential to the present article; Gowers’s other examples are not, except raising to powers. In this case, we shall use even ω and then 2^ω as exponents, and for special reasons we shall write the latter as ω2. Moreover, what we shall do is not only mathematics, but logic, and this subject horrifies even some mathematicians, such as one or two of my professors in graduate school.

We shall not really be doing much logic as such; rather, we shall establish some of the mathematical infrastructure of logic. Where we end up will be intimately connected with pure mathematics, in the form of theorems of infinitesimal calculus, such as that a continuous function on a closed and bounded subset of the real line attains a maximum and a minimum value. However, I shall not explore that connection in this article.

Two arithmetics

Just as we do with numbers in school, we can add, subtract, and multiply binary sequences: we can do arithmetic with them. There are two different meaningful and interesting arithmetics here, which can be called respectively Boolean and dyadic. The former will be our subject; but I want first to give some indication of the latter.

We can consider every finite binary sequence as a natural number, written out in base two in the reverse of the usual order. For example, starting our count with zero, when we note that the sequence (1, 0, 1, 1) is nonzero precisely in the “zeroth,” second, and third places, then we can take the sequence as an expression for the number 2⁰ + 2² + 2³, or 1 + 4 + 8: this is 13 in the usual base-ten system. Then (1, 0, 1, 1, 0) also stands for 13. We can add and multiply finite binary sequences, using algorithms like those learned at school for base-ten numerals. In particular, we must use a rule of “carrying” whereby, for example, (1) + (1) = (0, 1) (this is 1 + 1 = 2 in base ten). We can apply the same algorithms to infinite binary sequences. Thus we obtain dyadic arithmetic; but this is a subject for another article.
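
For the curious, the correspondence between finite binary sequences and numbers can be sketched in a few lines of Python, whose built-in integers serve for the base-ten side; the function names are mine.

    def seq_to_number(seq):
        """Read a finite binary sequence as a base-two numeral,
        least significant digit first."""
        return sum(bit << k for k, bit in enumerate(seq))

    def number_to_seq(n):
        """The reverse conversion; the empty sequence stands for zero."""
        seq = []
        while n:
            seq.append(n & 1)
            n >>= 1
        return seq

    print(seq_to_number((1, 0, 1, 1)))      # 13
    print(seq_to_number((1, 0, 1, 1, 0)))   # also 13
    print(number_to_seq(1 + 1))             # [0, 1], since (1) + (1) = (0, 1)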

Arithmetic of two

Henceforth we shall consider only what I am calling Boolean arithmetic of binary sequences. Here we combine sequences term by term. This means everything will follow from the arithmetic of the two-element set {0, 1}, as defined below. Addition and multiplication of sequences of these elements will be as in dyadic arithmetic, but there will be no carrying. Thus, by definition,

(e0, e1, e2, …) + (f0, f1, f2, …) = (e0 + f0, e1 + f1, e2 + f2, …),
(e0, e1, e2, …) ⋅ (f0, f1, f2, …) = (e0 ⋅ f0, e1 ⋅ f1, e2 ⋅ f2, …),

where, for individual terms, we use the rule

1 + 1 = 0,

and otherwise there will be no new rules. The standard rules of arithmetic for zeros and ones that we shall continue to use are:

  1. Addition of zero does nothing.
  2. Multiplication by one does nothing.
  3. Multiplication by zero reduces everything to zero.

In symbols,

P + 0 = P,
P ⋅ 1 = P,
P ⋅ 0 = 0.

Here P can be understood as a variable taking the value of 0 or 1; these are now the only values we need. The equations being true in either case, they are identities. The addition and multiplication tables are thus:

+   0   1
0   0   1
1   1   0

×   0   1
0   0   0
1   0   1

Because 1 + 1 = 0, this means −1 = 1, and therefore subtraction is the same as addition. Division by a nonzero number is possible, but trivial, since now the only nonzero number is one. We have then 1 ⁄ 1 = 1 and 0 ⁄ 1 = 0. Pointing this out is still worthwhile, since now the two-element set {0, 1}, with its arithmetic, is a structure that is analogous to the structure of the real numbers as studied in high school. In a word, each of these structures is a field, because it is equipped with operations of addition and multiplication that behave in certain nice ways. We shall return to this analogy later, when we consider polynomial equations.
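
In code, these two operations on {0, 1} are the bitwise “exclusive or” and “and” found in most programming languages. Here is a minimal Python sketch that reproduces the two tables above; the function names are mine.

    def add(p, q):
        """Boolean addition of 0s and 1s: "exclusive or"."""
        return p ^ q

    def mul(p, q):
        """Boolean multiplication: "and"."""
        return p & q

    for p in (0, 1):
        for q in (0, 1):
            print(p, "+", q, "=", add(p, q), "  ", p, "x", q, "=", mul(p, q))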

Even and odd

We may consider 0 as a “generic” even number; 1, a “generic” odd number. Indeed, there is an arithmetic of even and odd numbers, tabulated as follows:

±      even   odd
even   even   odd
odd    odd    even

×      even   odd
even   even   even
odd    even   odd

These have the same pattern as the tables for 0 and 1.

Propositional logic

We may consider 1 also as the value of a true proposition, and 0 as the value of a false proposition. This is why the variable above was the letter P: it stands for “proposition.” We are now following the idea of George Boole in The Laws of Thought (1854), although he interprets 0 and 1 as Nothing and Universe, for reasons indicated below. Interpreting 0 and 1 as False and True, we must understand addition as strict disjunction, and multiplication as conjunction. This means:

  • P + Q is true if exactly one of P and Q is true; otherwise false.
  • P ⋅ Q is true if both P and Q are true; otherwise false.

Boolean arithmetic of binary sequences

Having defined the arithmetic of the two-element set {0, 1}, we extend it to sequences of elements. Thus, if we allow ourselves to denote the same binary sequence (a0, a1, a2, …) either by

(ak : k ∈ ω)

or simply by a, then we define

a ∗ b = (ak ∗ bk : k ∈ ω),

whether ∗ stands for addition or multiplication or subtraction. Subtraction is still the same as addition. There is no division, except by the sequence whose every entry is 1; and this division has no effect.
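
If we represent an infinite binary sequence as a function from the natural numbers to {0, 1}, then the termwise operations can be sketched in Python as follows; the encoding is mine, chosen only for illustration.

    def boolean_sum(a, b):
        """Termwise sum, with no carrying, of two binary sequences,
        each represented as a function from the natural numbers to {0, 1}."""
        return lambda k: a(k) ^ b(k)

    def boolean_product(a, b):
        """Termwise product of two such sequences."""
        return lambda k: a(k) & b(k)

    alternating = lambda k: k % 2        # the sequence (0, 1, 0, 1, ...)
    ones = lambda k: 1                   # the sequence (1, 1, 1, 1, ...)

    s = boolean_sum(alternating, ones)   # the sequence (1, 0, 1, 0, ...)
    print([s(k) for k in range(8)])      # [1, 0, 1, 0, 1, 0, 1, 0]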

For a mnemonic, one can read the sign ∈ as the letter E, standing for element, since the expression k ∈ ω means k is an element (or member) of the set ω. However, in Volume One of Principia Mathematica (1910), Russell and Whitehead explain the sign ∈ as the first letter—namely epsilon—of the Greek ἐστί “is.” The origin of the sign seems to be “The Principles of Arithmetic” (1889) of Giuseppe Peano, although he does not explain why he chooses that particular sign.

Sets of natural numbers

Why might we want to define an arithmetic of binary sequences? The notation (ak : k ∈ ω) explicitly describes the function that takes the value ak at every argument k that belongs to ω, this having been defined as the set {0, 1, 2, …} of natural numbers. Again we may use the bare letter a to denote the function just described. Since each value ak is either 0 or 1, the function a determines a certain subset of ω, namely the set of those natural numbers k such that ak = 1. This set is a−1(1), the “inverse image of 1 under a.” By definition then,

a−1(1) = {k ∈ ω : ak = 1}.

Conversely, if A is a set of natural numbers—a subset of ω—, then we obtain a binary sequence, which we may denote by

χA,

χ being the minuscule Greek letter chi, standing for the word “characteristic.” The binary sequence χA is the characteristic function of A as a subset of ω: this means that the value of χA at an argument k from ω is 1, if k belongs to A, or k ∈ A; otherwise, if k does not belong to A, or k ∉ A, then the value of χA at k is 0. In either case, we denote the value of χA at k by χA(k).
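
The two conversions can also be sketched in Python, with the caveat that a machine can inspect only finitely many terms of a sequence; the bound below, like the names, is my own device.

    def inverse_image_of_one(a, bound):
        """The set a⁻¹(1), listed only up to the given bound, for a binary
        sequence a represented as a function."""
        return {k for k in range(bound) if a(k) == 1}

    def characteristic(A):
        """The characteristic function of a set A of natural numbers."""
        return lambda k: 1 if k in A else 0

    a = lambda k: k % 2
    A = inverse_image_of_one(a, 10)       # {1, 3, 5, 7, 9}
    chi = characteristic(A)
    print([chi(k) for k in range(10)])    # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]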

The binary sequences are thus the functions on the domain ω that take values in the set {0, 1}. This being a two-element set, we shall henceforth denote it by the symbol 2. We are thus defining the number 2 as the set {0, 1}. More generally, we can define any natural number n as the set {0, 1, 2, …, n − 1} of natural numbers that are less than it is. In case n = 0, this is the empty set, or ∅. John von Neumann offered this definition in his 1923 paper, “On the Introduction of Transfinite Numbers.” I like to use the definition, for reasons that I discuss in “On Commensurability and Symmetry,” in the Journal of Humanistic Mathematics.

We shall denote by

ω2

the set of all binary sequences. The set of all subsets of ω is

℘(ω).

Here ℘ stands for power set. A reason for the terminology is that the set ℘(A) of subsets of a finite set A has size 2ⁿ, or “2 to the power n,” if A itself has size n. The beauty of von Neumann’s definition is that it makes n itself a set having size n, though this set does not require n in its definition. If we denote by n2 the set of functions from n to 2, then n2 has size 2ⁿ, though n2 and 2ⁿ are not the same set, unless n = 0. For example,

22 = {(0, 0), (1, 0), (0, 1), (1, 1)},
2² = {0, 1, 2, 3},

where 0 = ∅, 1 = {0}, 2 = {0, 1}, and 3 = {0, 1, 2}, while each of the sequences (a, b) belonging to the set 22 is the function f given by f(0) = a and f(1) = b. We can understand such a function f to be the set {⟨0, a⟩, ⟨1, b⟩}, whose elements are not sequences, but ordered pairs. The object denoted by ⟨e, c⟩ can be defined as any of the following one- or two-element sets:

  • {{{e}, 0}, {{c}}}, in the 1914 definition of Norbert Wiener;
  • {{e, 1}, {c, 2}}, in the 1914 definition of Hausdorff; or
  • {{e, c}, {e}}, in the 1921 definition of Kuratowski.

My source here is Jean van Heijenoort, From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931 (Harvard University Press, 1967), which includes Wiener’s 1914 paper, “A simplification of the logic of relations” (as well as Peano’s paper mentioned above). The definition of an ordered pair as a set works a Copernican revolution in logic; however, we shall not have use for any particular definition. All we need to know is that ⟨a, b⟩ = ⟨x, y⟩ if and only if a = x and b = y.

The same then is true if we replace ⟨a, b⟩ and ⟨x, y⟩ with (a, b) and (x, y), where for example

(a, b) = {⟨0, a⟩, ⟨1, b⟩}.

More generally,

(a0, …, an−1) = {⟨0, a0⟩, …, ⟨n − 1, an−1⟩}.

(a0, a1, a2, …) = {⟨0, a0⟩, ⟨1, a1⟩, ⟨2, a2⟩, …}.

We have defined:

  • the function from ω2 to ℘(ω) that converts a to a−1(1);
  • the function from ℘(ω) to ω2 that converts A to χA.

Either of these two functions undoes the other: it is the inverse of the other. In symbols,

(χA)−1(1) = A,
χ(a−1(1)) = a.

Symbolism

The reader who is intent on continuing with the mathematics may skip this section; but I want to pause and consider the possibility that this mathematics might be understood better without all of the symbolism. Technical symbolism would seem to be essential to modern mathematics. However, ancient mathematics did not use it, and we may misunderstand that mathematics when we translate it into our symbolism. Even some outstanding modern mathematicians have difficulty in writing out the precise symbolism for their own work.

Language as such is not originally symbolic, if a symbol is something that has meaning by convention. You need a language in order to establish a convention. Collingwood argues the point in The Principles of Art (1938), at the head of the chapter “Language,” on page 225:

In its original or native state, language is imaginative or expressive: to call it imaginative is to describe what it is, to call it expressive is to describe what it does. It is an imaginative activity whose function is to express emotion. Intellectual language is this same thing intellectualized, or modified so as to express thought …

One way of putting [the distinction between these two functions of language] is to distinguish language proper from symbolism. A symbol (as the Greek word indicates) is something arrived at by agreement and accepted by the parties to the agreement as valid for a certain purpose. This is a fair account of how the words in an intellectualized language come by their meanings, so far as they are thoroughly intellectualized, which in fact is seldom very far; but it cannot be a true account of language as such, for the supposed agreement by which the meaning of a given word is settled implies a previous discussion out of which the agreement is arrived at; and unless language already exists and is already capable of stating the point at issue the discussion cannot arise.

In particular, language as such is not a “system for communication,” at least not if this implies that we can always figure out what we want to say, before we come up with the language to say it with. We use language for the figuring out in the first place. But this may be a minority view. According to the ninth edition of the Concise Oxford Dictionary of Current English (1995), language is

The method of human communication, either spoken or written, consisting of the use of words in an agreed way.

By such a definition, baby talk is not language. Neither, for that matter, is what I am saying now, since you and I have never agreed on how I am to say it. The Grolier International Dictionary (1981) is somewhat better, saying language is

The aspect of human behavior that involves the use of vocal sounds in meaningful patterns and, when they exist, corresponding written symbols to form, express, and communicate thoughts and feelings.

This definition still forgets sign language. Instead of “vocal sounds in meaningful patterns and, when they exist, corresponding written symbols,” it would be simpler and better to say “gestures, sounds, and marks,” thus: “Language is the aspect of human behavior that involves the use of gestures, sounds, and marks to form, express, and communicate thoughts and feelings.”

I interject the scholarly complaint that the Grolier International Dictionary omits to confess that it is the American Heritage Dictionary in a different cover, presumably for sale outside either the United States or the whole of North America or even the Americas. My copy of Wheelwright’s Presocratics (mentioned above) is another example of restricted distribution: according to a sticker inside, the book is “not for sale or distribution in the U.S.A. or Canada.” The publishers sin by not revealing that the original publication date of the book was 1966; they give only their own publication date of 1997. One may infer from the Preface that something is amiss, since Wheelwright here thanks his graduate seminar of the fall term of 1964–5. My copy of Principia Mathematica is worse, being a photographic reproduction in which the publishers have replaced with their own name (Merchant Books) the original publisher (Cambridge at the University Press) on the title page, even while keeping the original date of 1910 just below.

In classical mathematics books published by Project Gutenberg, a valiant attempt has been made to reproduce the books’ original appearance, down to typeface and layout, even through the use of LaTeX. This means the books have been retyped, or “rekeyboarded,” and therefore errors are possible. Owing to appearances, the reader may be misled into thinking that the errors are in the original. I would therefore urge an adaptation of what I understand as current practice in archeological restoration (though unfortunately not always in Turkey): make the new parts harmonize with the original, but not so as to be confused with the original. Wikisource is commendable in this regard, for providing scans of original texts, so that readers can verify or correct the newly typeset versions.

I return to Collingwood on language as such. He continues on his next page:

Symbolism or intellectualized language thus presupposes imaginative language or language proper. There must, therefore, be a corresponding relation between the theories of the two. But in the traditional theory of language these relations are reversed, with disastrous results.

One of the disastrous results is the opinion of “a (possibly mythical) schoolmaster” that, because of the illogical way they talk to them, “ ‘parents are the last people in the world who ought to be allowed to have children.’ ” Nonetheless, Wikipedia continues to describe language as a system for communication. I used to try to correct this, but another editor would prevent me. He claimed to be, like Collingwood, a teacher of philosophy at Oxford. I could not confirm this, since the fellow used a pseudonym. He did allow that what I tried to say about language might go in the article about Collingwood. Unfortunately I cannot edit Wikipedia now, since Turkey is blocking it, though I have ways of reading it.

Mathematics is not fundamentally or originally about symbolism. Ancient Greek mathematics demonstrates the point, to which we shall return. Meanwhile, symbolism may be useful, and then it may come indeed to express what we feel about our mathematics. Here is Collingwood from his page 268, near the end of the “Language” chapter (I see that David Corfield quoted the same passage, in a blog post called “Mathematical Emotion”):

A symbol is language and yet not language. A mathematical or logical or any other kind of symbol is invented to serve a purpose purely scientific; it is supposed to have no emotional expressiveness whatever. But when once a particular symbolism has been taken into use and mastered, it reacquires the emotional expressiveness of language proper. Every mathematician knows this. At the same time, the emotions which mathematicians find expressed in their symbols are not emotions in general, they are the peculiar emotions belonging to mathematical thinking.

Collingwood’s ensuing paragraph will be useful for us, since we can understand letters such as the P and Q above as the “atomic propositions” that Collingwood refers to:

The same applies to technical terms. These are invented solely to serve the purpose of a particular scientific theory; but as they begin to pass current in the scientist’s speech or writing they express to him and to those who understand him the peculiar emotions which that theory yields. Often, when invented by a man of literary ability, they are chosen from the first with an eye to expressing these emotions as directly and obviously as possible. Thus, a logician may use a term like ‘atomic propositions’ as part of his technical vocabulary. The word ‘atomic’ is a technical term, that is, a word borrowed from elsewhere and turned into a symbol by undergoing precise definition in terms of the theory … But, as we find it occurring in the logician’s discourse, it is full of emotional expressiveness. It conveys to the reader, and is meant to convey, a warning and a threat, a hope and a promise. ‘Do not try to analyse these; renounce the dream of analysing to infinity; that way delusion lies, and the ridicule of people like myself. Walk boldly, trusting in the solida simplicitas of these propositions; if you use them confidently as bricks out of which to build your logical constructions, they will never betray you.’

In the Tractatus Logico-Philosophicus (1922), Wittgenstein says history consists of “atomic facts.” In Speculum Mentis (1924), Collingwood ridicules this notion, and I think he is right. I discussed the matter in “Thales of Miletus.”

As I mentioned in “The Tree of Life,” symbolism may be used to give the feeling of understanding, without the reality. Some mathematics talks consist largely of page after page of equations, projected on a screen. I do not think one can learn much from such a talk, although some persons in the room may be impressed.

Boolean algebra

Perhaps we have not really answered the question of why we might want to define an arithmetic of binary sequences. However, we have established a certain one-to-one correspondence. We can describe it either as being between the binary sequences and the sets of natural numbers, or as being between the sets ω2 and ℘(ω). I am making a linguistic point here: we can say a set of things is X, or the things themselves are X. We can even say the set are X.

Some contemporary writers or editors do seem to have a horror of using a plural verb with a singular noun. Such persons say that a married couple “is” doing something, even though it is their something. This is seen in the headline, “55 Years After Their Honeymoon Road Trip, This Couple is Taking Their Old VW Bug on One Last Adventure.”

A binary sequence a corresponds to the set that we have called a−1(1), whose elements are natural numbers; a set A of natural numbers corresponds to the sequence χA. Through this correspondence, the Boolean arithmetic of binary sequences induces an arithmetic of sets of natural numbers: To add or multiply two sets, we convert them to sequences, add or multiply the sequences, then convert back.

By an independent definition, of two sets of natural numbers, call them A and B,

  • the product is the intersection, A ∩ B, consisting of the numbers that the two sets have in common;
  • the sum is the symmetric difference, A △ B, consisting of the numbers, each of which belongs to exactly one of the sets.

These definitions are natural, to the extent that sets themselves are natural. It does not actually matter what the elements of A and B are; they need not be numbers. Now our one-to-one correspondence between binary sequences and sets of natural numbers is an isomorphism, because it respects the arithmetics of ω2 and ℘(ω): we have the identities

(a + b)−1(1) = a−1(1) △ b−1(1),
(a ⋅ b)−1(1) = a−1(1) ∩ b−1(1),

and therefore also

χ(A △ B) = χA + χB,
χ(A ∩ B) = χA ⋅ χB.

We define on ℘(ω) also the operation ∪ of taking unions, where A ∪ B is the set of natural numbers, each of which belongs to at least one of A and B. Also the complement of A in ω is the set of natural numbers that are not in A. The complement of A is thus ω △ A. We may denote it also by ω − A; more generally,

A − B = A ∩ (ω △ B),

this being the difference of A from B. Then

A ∪ B = ω △ ((ω △ A) ∩ (ω △ B)),
A △ B = (A ∪ B) − (A ∩ B) = (A − B) ∪ (B − A).

Thus, being able to take intersections and complements is enough to let us also take differences, unions, and symmetric differences. As equipped with all of these operations, the set ℘(ω) is the kind of structure called a Boolean algebra.
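
Python’s built-in sets happen to supply all of these operations directly, so the identities just stated can be checked mechanically, at least over a finite universe standing in for ω. A sketch, with arbitrary sample sets:

    universe = set(range(16))             # a finite stand-in for ω

    A = {0, 1, 2, 3}
    B = {2, 3, 4, 5}

    print(A & B)                          # intersection: {2, 3}
    print(A ^ B)                          # symmetric difference: {0, 1, 4, 5}
    print(A | B == universe ^ ((universe ^ A) & (universe ^ B)))   # True
    print(A - B == A & (universe ^ B))                             # True
    print(A ^ B == (A | B) - (A & B) == (A - B) | (B - A))         # True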

George Boole

Boolean algebras of sets are what Boole studied, though his sets—he called them classes—consisted not of numbers, but of objects. He did not introduce the new symbol ∩ for the product or intersection of two classes; he just used juxtaposition, as in algebra. Here he is, from page 31 of The Laws of Thought:

As the combination of two literal symbols in the form xy expresses the whole of that class of objects to which the names or qualities represented by x and y are together applicable, it follows that if the two symbols have exactly the same signification, their combination expresses no more than either of the symbols taken alone would do. In such case we should therefore have

xy = x.

As y is, however, supposed to have the same meaning as x, we may replace it in the above equation by x, and we thus get

xx = x.

Now in common Algebra the combination xx is more briefly represented by x². Let us adopt the same principle of notation here; for the mode of expressing a particular succession of mental operations is a thing in itself quite as arbitrary as the mode of expressing a single idea or operation … In accordance with this notation, then, the above equation assumes the form

(2) x² = x

and is, in fact, the expression of a second general law of those symbols by which names, qualities, or descriptions, are symbolically represented.

Boole’s first general law is xy = yx. Boole pursues the analogy with algebra (pages 37–8):

Now of the symbols of Number there are but two, viz. 0 and 1, which are subject to the same formal law [namely (2) above]. We know that 0² = 0, and that 1² = 1; and the equation x² = x, considered as algebraic, has no other roots than 0 and 1. Hence, instead of determining the measure of formal agreement of the symbols of Logic with those of Number generally, it is more immediately suggested to us to compare them with symbols of quantity admitting only of the values 0 and 1. Let us conceive, then, of an Algebra in which the symbols x, y, z, &c. admit indifferently of the values 0 and 1, and of these values alone. The laws, the axioms, and the processes of such an Algebra will be identical in their whole extent with the laws, the axioms, and the processes of an Algebra of Logic. Difference of interpretation will alone divide them. Upon this principle the method of the following work is established.

Boole thus discovered the analogy between the set of all real numbers and the two-element set {0, 1} that we encapsulated above by calling each of the sets a “field.”

Polynomials

In school one learns for example that the fractions 2 ⁄ 3 and 4 ⁄ 6 are interchangeable. We saw above that certain sequences and sets were interchangeable, though not in the same way. They are interchangeable not individually, but collectively. We can talk indifferently about A ∩ B or χA ⋅ χB, but we assign no meaning to χA ⋅ B or A ∩ χB.

As Boole observes, certain polynomials are interchangeable: xy is interchangeable with xy, and in Logic, though not in Number, x² is interchangeable with x. Polynomials define functions, taking as many arguments as there are variables in the polynomials. Thus x(y + z) and xy + xz, different as polynomials, define the same function, which, at any ordered triple (a, b, c) of numbers, takes the value a(b + c), which is equal to ab + ac. That the two distinct polynomials here do define the same function is an expression of the distributivity of multiplication over addition. We have to be able to recognize the polynomials as different, before we can understand them as interchangeable; otherwise the specific meaning of the polynomials is lost.

Letters as symbols

The expressions a, b, and c are not numbers, but letters; however, they stand for numbers, by a convention that goes back to Descartes’s Géométrie of 1637. We can trace the mathematical use of letters further back to Euclid, who used them usually to indicate points in a diagram, but sometimes also line segments, angles, or whole figures. Euclid wrote with the letters that we now call capital or upper-case; there were no other letters yet. Descartes developed an arithmetic of line segments, and he designated each of these segments with a minuscule or lower-case letter.

I shall quote now from the 1991 republication by Éditions Jacques Gabay of the 1886 Hermann edition of La Géométrie de René Descartes. I was pleased to meet Monsieur Gabay by chance, in his shop in Paris, down the street from where I stayed with Ayşe for a mathematical meeting in 2015. In the Hermann edition, Descartes’s own archaic spelling is modernized. The original spelling can be seen in the facsimile provided with the translation by Smith and Latham, published in 1925 by Open Court, republished in 1954 by Dover.

Here now, from Descartes, is the origin of the modern notion of, and notation for, an algebraic equation. The bold emphasis is mine:

… souvent on n’a pas besoin de tracer ainsi ces lignes sur le papier, et il suffit de les désigner par quelques lettres, chacune par une seule. Comme pour ajouter la ligne BD à GH, je nomme l’une a et l’autre b, et écris a + b; et a − b pour soustraire b de a; et ab pour les multiplier l’une par l’autre …

Ainsi, voulant résoudre quelque problème, on doit d’abord le considérer comme déjà fait, et donner des noms à toutes les lignes qui semblent nécessaires pour le construire, aussi bien à celles qui sont inconnues qu’aux autres. Puis, sans considérer aucune différence entre ces lignes connues et inconnues, on doit parcourir la difficulté selon l’ordre qui montre le plus naturellement de tous en quelle sorte elles dépendent mutuellement les unes des autres, jusques à ce qu’on ait trouvé moyen d’exprimer une même quantité en deux façons, ce qui se nomme une équation … Ce que j’écris en cette sorte:

z = b,

ou  

z² = −az + b²,

c’est-à-dire z, que je prends pour la quantité inconnue, est égale à b; ou le carré de z est égal au carré de b moins a multiplié par z

“Desiring to solve a problem, we must consider it as already solved; without distinguishing between knowns and unknowns, [we find a way to] express the same quantity in two ways, which I call an equation.”

The Hermann edition of Descartes also replaces Descartes’s symbol for equality (it looks something like ∞) with the sign of two parallel lines that we use today. For more on this sign, see the paper mentioned above, “On Commensurability and Symmetry.”

The implicit practice of Descartes is to let letters from the beginning of the alphabet stand for known quantities; from the end, unknown. Giving unknowns (variables) an equal footing with knowns (constants) is the essence of the analytic method.

Curves and surfaces

In analytic geometry, a polynomial equation like

ax² + by² = 1

defines a curve in the so-called Cartesian plane. The curve consists of the ordered pairs (s, t) of real numbers that satisfy or solve the equation: here satisfaction or solution means that the equation as² + bt² = 1 should be true. The set of such (s, t) happens to trace out the curve called an ellipse, if both of the coefficients a and b are positive; an hyperbola, if one of these coefficients is negative. However, we are not interested in such details here.

We can rewrite any polynomial equation f = g in the form f − g = 0; the points that satisfy this are the zeros of the polynomial f − g. If this polynomial features three different variables, such as x, y, and z, then the zeros of the polynomial lie on a surface in three-dimensional space; for example, the zeros of ax² − by² − c²z lie on a so-called hyperbolic paraboloid.

In interpreting the polynomial ax² − by² − c²z as the function that assigns to the ordered triple (s, t, u) the value as² − bt² − c²u, we understand the variables x, y, and z as being ordered in just this way, alphabetically. To prevent any ambiguity, and to allow for more variables, we may write the polynomial instead as

ax0² − bx1² − c²x2.

We now adjust our understanding of arbitrary polynomials. We build them up from constants, and from variables xk, where k is from ω, by applying symbols for the arithmetical operations of addition, subtraction, and multiplication, and using parentheses as needed to show order of operations. We can use exponents as we have been doing, to abbreviate the product of more than one copy of the same variable or constant. We can interpret the variable xk as a function whose argument can be any finite sequence (a0, a1, a2, …, an−1) of numbers, as long as k < n; and then the value of the function is ak. We can also let the argument of xk be an infinite sequence (a0, a1, a2, …); the value of the function here is still ak.
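
As a sketch of this interpretation in Python, with sample values of my own for the constants, the variable xk becomes the function extracting the entry of index k:

    def x(k):
        """The variable x_k, interpreted as the function that extracts the
        entry of index k from a sequence (any sequence long enough)."""
        return lambda s: s[k]

    a, b, c = 2, 3, 5                     # sample values for the constants
    poly = lambda s: a * x(0)(s) ** 2 - b * x(1)(s) ** 2 - c ** 2 * x(2)(s)
    print(poly((1, 2, 3)))                # 2·1 − 3·4 − 25·3 = −85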

Propositional logic

We apply the same ideas to Boolean arithmetic. As before, we shall write our variables as P, Q, and R, or now more generally as Pk, where again k ∈ ω. We can interpret this variable as the function from ω2 to 2 that extracts ek from (e0, e1, e2, …). We can combine variables using symbols for the arithmetic operations that we have defined. However, it is standard now to denote multiplication in this context by ∧, and not to denote addition as such, but to use instead two new operations, denoted by ∨ and ¬, corresponding to taking unions and complements of sets, so that the new operations are given by the rules

P ∨ 0 = P,
P ∨ 1 = 1,
¬P = P △ 1.

Then in particular

P ∨ Q = ¬(¬P ∧ ¬Q).

We may refer to a combination of constants 0 and 1 and variables Pk with the symbols ∧, ∨, and ¬ as a Boolean polynomial; also, more suggestively perhaps, as a propositional formula.

Given a Boolean polynomial or propositional formula F, we may want to solve the equation F = 1, or equivalently ¬F = 0; the solutions will be the elements of ω2 where F takes the value 1, meaning the formula is true there. We may refer to such elements as models of F. Thus for example every sequence that begins as (0, …) or (1, 1, …) is a model of ¬P0 ∨ P1. All this means is that the formula is true when P0 is false or P1 is true.
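
One way to sketch these definitions in Python is to represent each formula directly by its interpretation, a function from sequences to {0, 1}; the combinator names below are mine.

    def P(k):
        """The variable P_k, interpreted as the function from binary
        sequences (functions from ω to {0, 1}) to {0, 1}."""
        return lambda e: e(k)

    def conj(F, G):                       # the operation ∧
        return lambda e: F(e) & G(e)

    def neg(F):                           # the operation ¬
        return lambda e: F(e) ^ 1

    def disj(F, G):                       # the operation ∨, as defined above
        return neg(conj(neg(F), neg(G)))

    formula = disj(neg(P(0)), P(1))       # ¬P0 ∨ P1
    zeros = lambda k: 0                   # a sequence beginning (0, ...)
    ones = lambda k: 1                    # a sequence beginning (1, 1, ...)
    print(formula(zeros), formula(ones))  # 1 1: both sequences are models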

We have now reached another level of abstraction. If we denote by

Mod(F)

the set of models of a propositional formula F, then Mod(F) is a subset of ω2 and thus an element of ℘(ω2). Moreover,

Mod(F) ∩ Mod(G) = Mod(F ∧ G),
ω2 △ Mod(F) = Mod(¬F).

Thus the set of all sets of models of propositional formulas is a Boolean sub-algebra of ℘(ω2).

Cardinality

We have shown that ω2 is in one-to-one correspondence with ℘(ω). This was shown also in “The Tree of Life,” where it was shown also that ω2 is uncountable. A set A is always strictly smaller—less in cardinality—than ℘(A): this means that the former embeds in the latter (as by the function converting x to {x}), but is never in one-to-one correspondence with the latter. This result is called Cantor’s Theorem by Zermelo in his 1908 paper, “Investigations in the Foundations of Set Theory,” which lays out most of the axioms that we now use for sets. The idea of the proof is that, if f is a function from A to ℘(A), then the element {x ∈ A : x ∉ f(x)} of ℘(A) can never be f(b) for any b in A, because if it is, then there arises the contradiction that b ∉ f(b) if and only if b ∈ f(b). With the same idea, we obtain the Russell Paradox (which Russell explained in a letter to Frege in 1902, but Zermelo apparently knew independently), that if x ranges over all sets, then the collection {x : x ∉ x} of sets cannot be a set itself.

So ω2 is strictly larger than the infinite set ω, and ℘(ω2) is larger still. However, the sub-algebra of ℘(ω2) that we have found—namely the algebra of sets of models of propositional formulas—is isomorphic to a sub-algebra of ℘(ω). For the record, though we shall not pursue the idea, we can obtain such a sub-algebra by redefining Mod(Pk) as the set of natural numbers whose remainders after division by 2ᵏ⁺¹ are less than 2ᵏ. Then for example Mod(P0) would be {0, 2, 4, …}, and Mod(P1) would be {0, 1, 4, 5, 8, 9, …}, and Mod(P2) would be {0, 1, 2, 3, 8, 9, 10, 11, 16, 17, 18, 19, …}, and so on, and Mod(¬P0 ∧ P1 ∧ ¬P2 ∧ ¬P3) would consist of the natural numbers having remainder 2⁰ + 2² + 2³, or 13, after division by 16.
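
For the record again, the redefinition can be checked mechanically; here is a Python sketch, with a helper name of my own.

    def in_mod_P(k, n):
        """Whether n belongs to the redefined Mod(P_k): the remainder of n
        after division by 2**(k + 1) is less than 2**k."""
        return n % 2 ** (k + 1) < 2 ** k

    print([n for n in range(12) if in_mod_P(0, n)])    # [0, 2, 4, 6, 8, 10]
    print([n for n in range(12) if in_mod_P(1, n)])    # [0, 1, 4, 5, 8, 9]
    print([n for n in range(64)
           if not in_mod_P(0, n) and in_mod_P(1, n)
           and not in_mod_P(2, n) and not in_mod_P(3, n)])
    # [13, 29, 45, 61]: remainder 13 after division by 16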

Compactness

Let us instead develop another level of generalization, which will give us the important theorem that will conclude this article. If H is now a set of propositional formulas, possibly infinite, then the models of H are those binary sequences, each of which is a model of each formula in H. If every finite subset of H has a model, let us say that H is consistent.

If we were given an infinite consistent set of formulas having no model, we should have no practical way of proving this; in any case, the situation can never arise. To see why, we start with an arbitrary consistent set H of formulas; we shall show that H does have a model.

For any formula F, one of H ∪ {F} and H ∪ {¬F} is consistent. This is an application of the Law of the Excluded Middle, whereby H ∪ {F} is consistent or not. Suppose it is not. Then for some finite subset H0 of H, the set H0 ∪ {F} has no model. In this case, ¬F must be true in every model of H0.

It follows then that H ∪ {¬F} is consistent, as desired. Indeed, if not, then for some finite subset H1 of H, the set H1 ∪ {¬F} has no model. However, being still a finite subset of the consistent set H of formulas, the union H0 ∪ H1 has a model a. Then ¬F is true in a, since this is a model of H0. Thus a is a model of H0 ∪ H1 ∪ {¬F}, and therefore of H1 ∪ {¬F}, contrary to our finding that it had no model. We conclude that H ∪ {¬F} is consistent, if H ∪ {F} is not.

We now recursively define a sequence (Fn : n ∈ ω) of formulas by the rule that, for each natural number k, if H ∪ {F0, …, Fk−1} is consistent, then Fk is defined to be one of Pk and ¬Pk, as long as it ensures (as it can, by what we showed) that H ∪ {F0, …, Fk} is consistent. When k = 0, then H ∪ {F0, …, Fk−1} is just H, which we have assumed consistent to get us started.

By induction then, for each natural number m, the set H ∪ {F0, …, Fm−1} of formulas is consistent. For, it is consistent when m = 0; and if it is consistent when m = k, then, by construction, it is consistent when m = k + 1.

We can conclude that the whole set H ∪ {Fn : n ∈ ω} is consistent. Indeed, its every finite subset is, for some m, a subset of H ∪ {F0, …, Fm−1}; and we have just shown this to be consistent.

In the set {Fn : n ∈ ω}, again, each formula Fk is either Pk or ¬Pk. By this alone, the whole set has a model, a unique model, namely b, where bk = 1 if Fk is Pk, and otherwise bk = 0.

We now show that b is a model of H. We need only show that every formula G in H is true in b. Let n be large enough that, for every propositional variable Pk that occurs in G, we have k < n. Being a finite subset of H ∪ {Fn : n ∈ ω}, the set

{G, F0, …, Fn−1}

must have a model c. For each k that is less than n, since Fk is now true in both b and c, but Fk itself is Pk or ¬Pk, we must have bk = ck. Now there is a key point, which some books will spell out formally as a lemma: the value of G at b or c depends only on (b0, …, bn−1) or (c0, …, cn−1). These sequences being equal to one another, and G being true in c, we conclude that G is true in b as well.

The theorem just proved is the Propositional Compactness Theorem.
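
Though the theorem concerns infinite sets of formulas, the recursion of the proof can be acted out in code when only finitely many variables are in play, since consistency can then be checked by brute force. A Python sketch, with a sample set H of my own devising:

    from itertools import product

    NVARS = 4      # a finite stand-in for the variables P0, P1, P2, ...

    def P(k):
        return lambda e: e[k]

    def neg(F):
        return lambda e: 1 - F(e)

    def has_model(formulas):
        """Brute-force test; possible only because everything is finite here."""
        return any(all(F(e) for F in formulas)
                   for e in product((0, 1), repeat=NVARS))

    H = [P(1), lambda e: e[0] | e[2]]     # a sample consistent set: P1, P0 ∨ P2

    bits = []
    for k in range(NVARS):
        committed = [P(j) if bits[j] else neg(P(j)) for j in range(k)]
        # One of the two choices keeps the set consistent, as in the proof.
        bits.append(1 if has_model(H + committed + [P(k)]) else 0)

    print(bits)                           # the model b of the proof
    print(all(F(tuple(bits)) for F in H)) # True: b is a model of H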

Application

The theorem lets us conclude something suggested earlier in concrete terms involving primes: that the set {P0 ∧ P1 ∧ … ∧ Pn : n ∈ ω} of formulas has a model. We hardly need a general theorem for this conclusion; the set obviously has a model, namely the sequence (1, 1, 1, …). The point of the theorem is that every example will be like this: we shall never be able to find some strange set of propositional formulas that has no model, despite being consistent.

If we let N denote the set {1, 2, 3, …} of counting numbers, we can write an example considered earlier as

{“n ∈ N”} ∪ {“n ≠ k” : k ∈ N}.

This says there is a counting number, called n, that is not 1 or 2 or 3 or any other counting number. The set of formulas is consistent, but has no model. However, the formulas are not propositional formulas; the way the formulas are tied together by the letter n is not possible with propositional logic. (I have written the elements of the set above with quotation marks to ensure that they are understood as individuals.)

Generalization

There is a more useful, and deeper, First-order Compactness Theorem. In first-order logic, the atomic propositions, like the atoms of modern physics, do have parts. The parts of an atom are protons, neutrons, and electrons, not atoms themselves; the parts of an atomic proposition are not propositions. Some of the parts may be variables standing for individuals, or objects; other parts may be symbols like + and × for arithmetic operations, or 0 and 1 for constants. These symbols may be uncountably numerous: the cardinality of the set of them may exceed that of ω. It is the countable case of the First-order Compactness Theorem that Skolem established implicitly in 1922, and Gödel explicitly in 1930. Anatoly Maltsev gave the general case in 1941.

There is no compactness theorem for second-order logic, where there are variables standing for sets of objects. In the last example above, the formula n ∈ N is implicitly of second order, since the meaning of N includes the Induction Axiom, whereby every set that contains 1, and contains k + 1 for its every element k, contains every counting number.

The set R of real numbers is a model of the axioms of a complete ordered field; but the Completeness Axiom—every nonempty set of real numbers with an upper bound has a least upper bound—is of second order. If we combine the axioms of a complete ordered field with the set

{0 < a} ∪ {a < 1 ⁄ n : n ∈ N},

we cannot get a model. The number a here would be a positive infinitesimal; but there is no such real number. However, if we take all of the first-order formulas that are true in R, we can get a model of these in which there are positive infinitesimals. Thus arises Abraham Robinson’s “non-standard” analysis, in which the original conception of calculus by Leibniz and Newton is rigorously justified.

The Propositional Compactness Theorem can be understood as the purely topological result that the sets of models of sets of formulas are precisely the closed subsets of ω2 in a topology that is compact. The topology has a name: the Tychonoff topology. As a topological result, the Propositional Compactness Theorem is a special case of the Tychonoff Theorem.

Logic exists for the sake of illuminating our reasoning, mathematical and otherwise; but then mathematics illuminates logic.

We defined the Boolean arithmetic of 2, then ω2, then ℘(ω); but we have used it mainly for ℘(ω2). We have obtained a topology on ω2, though we have not said exactly what it means; let me make only the suggestion that a topology lets us do a lot of what the linear ordering of the real numbers lets us do in calculus (if we know calculus). In particular, the Boolean operations on ω2 are continuous in the topology.

Recapitulation

Here I try to analyze what we have done into all of its levels of abstraction or complication. In describing the levels, I shall introduce yet more notations and definitions.

  1. We start with two things, called 0 and 1.

  2. We consider 0 and 1 together as a pair, a set with just two elements. The pair is {0, 1}, but we denote it also by 2.

  3. The set called 2 has operations of addition, subtraction, multiplication, and division that make it analogous to the set Q of rational numbers, the set R of real numbers, and the set C of complex numbers. Each of R, Q, and C is what is called a ring, because in it certain identities hold, namely those identities ensuring that addition and multiplication are associative and commutative, multiplication distributes over addition, 0 is an additive identity, 1 is a multiplicative identity, and every element has a negative. Each of the rings is moreover a field, because nonzero elements have reciprocals, and there are nonzero elements, since 1 ≠ 0. Then 2 is also a field, and as such, it may be written as F2. It has the peculiarity of admitting the identity x² = x; this makes F2 a Boolean ring. In every Boolean ring, 1 + 1 = 0, since 1 + 1 = (1 + 1)² = 1 + 1 + 1 + 1.

  4. We generalize the definition of 2 as the set {0, 1}. Thus 0 itself is the empty set (also called ∅); 1 is the set {0}, whose unique element is 0; and whenever a set n has been defined, we let n + 1 be n ∪ {n}, which is the smallest set that both includes and contains n. This means the elements of n + 1 are precisely n and its elements.

  5. We now have a definition for every element of the set {0, 1, 2, …} of natural numbers. We call this set ω.

  6. For every n in ω, we can understand a binary sequence of length n to be simply a function from n to 2.

  7. Such a function is a set {⟨0, e0⟩, …, ⟨n − 1, en−1⟩} of ordered pairs, where each ek is in 2.

  8. The ordered pair ⟨a, b⟩ is the set {{a}, {a, b}}.

  9. The binary sequences of length n compose a set, denoted by n2.

  10. The ring operations on F2 pass termwise to n2, making this a Boolean ring as well, though not a field, unless n = 1.

  11. The elements of the union of two sets are the objects, each of which belongs to one or both of the sets. A set of sets also has a union, whose elements are the objects, each of which belongs to at least one of the sets. In this sense, the set of all finite binary sequences is the union of the set {n2 : n ∈ ω}. Another way to denote this union is by ω>2.

  12. The set ω>2 is partially ordered by the rule that (e0, …, em−1) < (f0, …, fn−1) just in case m < n and (e0, …, em−1) = (f0, …, fm−1). Then the set ω>2 is a tree, because for each of its elements, the set of elements that are less than it is linearly ordered and is moreover well-ordered, in the sense that every nonempty subset has a least element. (We need make explicit the last condition only when considering infinite trees. The set ω is well-ordered by the relation ∈, though we usually just write < for this.)

  13. The set of branches—maximal linearly ordered subsets—of ω>2 is in one-to-one correspondence with the set ω2 of binary sequences of length ω.

  14. As each set n2 is a Boolean ring, so is ω2.

  15. The set ω2 is in one-to-one correspondence with the set ℘(ω) of subsets of ω.

  16. The set ℘(ω) therefore inherits the structure of a Boolean ring from ω2.

  17. The set ℘(ω) also has its own native structure as a Boolean algebra, when it is considered as being equipped with the operations of taking intersections and complements.

  18. The two structures—ring and algebra—on ℘(ω) are interdefinable in the sense that intersections in the algebra are the same as products in the ring, while x + y in the ring is (x ∪ y) − (x ∩ y) in the algebra, and the complement of x in the algebra is 1 + x in the ring. (Here 1 is just the symbol for the multiplicative identity, and in ℘(ω) this is ω.)

  19. For each k in ω, we introduce a new symbol, Pk, which can be called a Boolean variable or an atomic propositional formula.

  20. We can interpret the symbol Pk as the function from ω2 to 2 that selects from any sequence (en : n ∈ ω) the entry ek. However, we distinguish the symbol from its interpretation.

  21. We can denote by ω22 the set of all functions from ω2 to 2.

  22. As the sets n2 and ω2 are Boolean rings, so is ω22.

  23. From the variables Pk, we build up arbitrary Boolean polynomials, or propositional formulas, or just formulas, using symbols ∧ and ¬. The formal definition is recursive:

    1. Each Boolean variable is a formula.

    2. If F is a formula, so is ¬F.

    3. If F and G are formulas, so is (F ∧ G).

    For convenience in using propositional formulas to denote actual compound propositions, one may introduce various abbreviations, especially (F ∨ G) for ¬(¬F ∧ ¬G), and (F ⇒ G) for (¬F ∨ G). The premisses in the puzzle quoted at the beginning of this article might be written as follows, with B, L, D, and C standing for being a baby, being logical, being despised, and being able to manage a crocodile:

    B ⇒ ¬L,
    ¬(D ∧ C),
    ¬L ⇒ D.
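
    The recursive definition translates directly into code (a sketch in Python; the class and function names are mine):

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class P:                 # atomic propositional formula P_k
          k: int

      @dataclass(frozen=True)
      class Not:               # ¬F
          f: object

      @dataclass(frozen=True)
      class And:               # (F ∧ G)
          f: object
          g: object

      def Or(f, g):            # abbreviation: (F ∨ G) is ¬(¬F ∧ ¬G)
          return Not(And(Not(f), Not(g)))

      def Implies(f, g):       # abbreviation: (F ⇒ G) is (¬F ∨ G)
          return Or(Not(f), g)

      B, L, D, C = P(0), P(1), P(2), P(3)
      premisses = [Implies(B, Not(L)), Not(And(D, C)), Implies(Not(L), D)]
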
  24. Each propositional formula F has an interpretation as a function from ω2 to 2, when each Pk is interpreted as above, and ∧ is interpreted as multiplication, and ¬ as adding 1, in the Boolean ring ω22.

  25. The propositional formulas compose a set L.

  26. An element σ of ω2 is a model of a propositional formula F if the value at σ of the interpretation of F as a function from ω2 to 2 is 1; in traditional algebraic notation, this means F(σ) = 1.

  27. We however may use the symbol ⊨ to denote the relation that holds between an element of ω2 and a formula of which it is a model. Thus instead of F(σ) = 1 we may write

    σ ⊨ F;

    we may also say in this case that F is true in σ.
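
    Continuing the sketch begun under item 23, the interpretation of item 24 and the relation ⊨ become:

      def value(formula, sigma):           # the interpretation of F at σ
          if isinstance(formula, P):
              return sigma[formula.k]      # P_k selects the entry e_k
          if isinstance(formula, Not):
              return 1 ^ value(formula.f, sigma)   # ¬ is adding 1
          if isinstance(formula, And):             # ∧ is multiplication
              return value(formula.f, sigma) & value(formula.g, sigma)
          raise TypeError("not a formula")

      def models(sigma, formula):          # σ ⊨ F means F(σ) = 1
          return value(formula, sigma) == 1

      # a finite tuple stands in for a sequence whose later entries do not matter:
      # a despised, illogical baby that cannot manage a crocodile
      sigma = (1, 0, 1, 0)
      assert all(models(sigma, F) for F in premisses)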

  28. We can now form the set Mod(F) of models of a particular formula F; it is the subset {σ ∈ ω2 : σ ⊨ F} of ω2.

  29. We can now form the subset {Mod(F) : F ∈ L} of ℘(ω2).

  30. The set {Mod(F) : F ∈ L} is a Boolean sub-algebra of ℘(ω2), because it is closed under the operations of taking intersections and complements.

  31. A model of a set H of formulas is just a model of each of the formulas in H.

  32. As the intersection of two sets consists of the elements that they have in common, so the intersection of a set of sets consists of the elements that they have in common. In this sense, the models of a set H of formulas compose a set Mod(H), which is the intersection of the set {Mod(F) : F ∈ H}.

  33. We can now form the set {Mod(H) : H ∈ ℘(L)} of sets of models of sets of formulas.

  34. The set {Mod(H) : H ∈ ℘(L)} is not a Boolean algebra, because it does not contain the complement of each of its elements. However, it is closed under taking intersections and unions. Moreover, it contains the intersection of each of its subsets, even an infinite one; it also contains ∅, which is Mod({P0, ¬P0}); therefore, by definition, {Mod(H) : H ∈ ℘(L)} consists of the closed sets of a topology on ω2. The complement of a closed set is called open.

  35. The topology on ω2 is compact: if every finite subset of a set C of closed sets has nonempty intersection, then the whole set C has nonempty intersection. Since the closed sets are just the sets Mod(H), this is, in effect, the Propositional Compactness Theorem. (Equivalently, if the union of a set O of open sets is all of ω2, then so is the union of some finite subset of O.)

  36. The ring operations being continuous, ω2 is a topological ring.

Postscript: König’s Lemma

We may now have climbed a number of levels above the infinite binary tree ω>2. However, there is a way to derive the Propositional Compactness Theorem from a result about this tree. Suppose T is an infinite subtree of the tree: this means T is an infinite subset of ω>2 that contains every element of the latter that is below an element of the former. Then there are infinitely many elements of T above the binary sequence ( ) of length 0 at the root of ω>2. Suppose that, for some n in ω, there are infinitely many elements of T above some binary sequence (e0, …, en−1) having length n in the tree. Then either there are infinitely many elements of T above (e0, …, en−1, 0), or there are infinitely many elements of T above (e0, …, en−1, 1), or perhaps both. Thus we can choose some en such that there are infinitely many elements of T above the binary sequence (e0, …, en) having length n + 1. Therefore, by recursion, we obtain an infinite binary sequence (ek : k ∈ ω) such that each finite initial segment (e0, …, en−1) belongs to T. In other words, T has an infinite branch.
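
A sketch of this recursion in Python, under a strong assumption: we are handed an oracle saying whether infinitely many elements of T lie above a given finite sequence (such an oracle is not computable in general; the example tree below is one where it is):

    def branch(infinitely_above, n):
        """The first n entries of an infinite branch of T."""
        e = ()
        for _ in range(n):
            # at least one of the two one-step extensions keeps
            # infinitely many elements of T above it
            e += (0,) if infinitely_above(e + (0,)) else (1,)
        return e

    # example: T consists of the sequences whose zeros all precede their ones
    def infinitely_above(p):
        return list(p) == sorted(p)

    assert branch(infinitely_above, 5) == (0, 0, 0, 0, 0)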

This is a form of König’s Lemma. We used it implicitly to prove the Propositional Compactness Theorem, once we showed that, if H ∪ {F0, …, Fn−1} is consistent, where each Fk is either Pk or ¬Pk, then H ∪ {F0, …, Fn} is consistent for some Fn that is either Pn or ¬Pn.

For another proof of the Propositional Compactness Theorem, again we let H be a consistent set of formulas. There is a subtree T of ω>2 consisting of those elements e such that, for each F in H for which F(e) is defined, F(e) = 1. For F(e) to be defined, if e is (e0, …, en−1), the variables Pk occurring in F must satisfy k < n. The subtree T of ω>2 must be infinite: for each n, even though H may contain infinitely many formulas whose variables come from the list {P0, …, Pn−1}, only finitely many of them are inequivalent in the sense of having different models; by the consistency of H, those finitely many formulas have a common model (ek : k ∈ ω), and then (e0, …, en−1) belongs to T. Since there are infinitely many n, T is infinite. So it has an infinite branch, and this is a model of H.
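
This subtree can be computed level by level when H is finite (a sketch, reusing value and premisses from the code above):

    from itertools import product

    def variables(formula):                 # indices of the P_k occurring in F
        if isinstance(formula, P):
            return {formula.k}
        if isinstance(formula, Not):
            return variables(formula.f)
        return variables(formula.f) | variables(formula.g)

    def level(H, n):                        # the elements of T of length n
        return [e for e in product((0, 1), repeat=n)
                if all(value(F, e) == 1
                       for F in H if max(variables(F)) < n)]

    # with H the three premisses, every level is nonempty, so T is infinite
    assert all(level(premisses, n) for n in range(8))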
