Mathematics “has no generally accepted definition,” according to Wikipedia today. Two references are given for the assertion. I suggest that what really has no generally accepted definition is the subject of mathematics: the object of study, what mathematics is about. Mathematics itself can be defined by its method. As Wikipedia currently says also,
it has become customary to view mathematical research as establishing truth by rigorous deduction from appropriately chosen axioms and definitions.
I would put it more simply. Mathematics is the science whose findings are proved by deduction.
That is my definition. It has only the word “science” in common with a dictionary definition:
măthėmă′tĭcs n. pl. (also treated as sing.) (Pure) ⁓, abstract science of space, number, and quantity; (applied) ⁓, this applied to branches of physics, astronomy, etc.; (as pl.) use of mathematics in calculation etc.
That’s the Concise Oxford Dictionary (6th ed., 1976). The Grolier International Dictionary (1981) does not even refer to science:
math⋅e⋅mat⋅ics (măth′əmăt′ĭks) n. Abbr. math. Plural in form, used with a singular verb. The study of number, form, arrangement, and associated relationships, using rigorously defined literal, numerical, and operational symbols.
Presumably the dictionaries reflect the common view. People think mathematics is about numbers, because that is all they learn about it in school. The teaching of more than this is urged by Euphemia Lofton Haynes, who says,
Mathematics is no more the art of reckoning and computation than architecture is the art of making bricks, no more than painting is the art of mixing colors.
This is in an address to high-school teachers, probably in Washington, D.C. The typescript, which I have digitized, is undated, but Haynes’s dates are 1890–1980.
For an explanation of a pursuit, an ideal example is Collingwood’s The Principles of Art (1938). I talked about Collingwood’s procedure in “Discrete Logarithms: Mathematics and Art” (a pdf booklet linked from the post about art and mathematics that is also called “Discrete Logarithms”). In the middle of his book, Collingwood offers the definition,
By creating for ourselves an imaginary experience or activity, we express our emotions; and this is what we call art.
He immediately points out:
What this formula means, we do not yet know. We can annotate it word by word; but only to forestall misunderstandings …
In my formula for mathematics (“the science whose findings are proved by deduction”):

“Science” is used in the general sense of a system of knowledge. The Latin root scient means knowing, more precisely discerning or separating out. The same root is found in our “scission.” Curiously, the root is not found in “scissors,” whose initial letter ess was added by analogy, because the French original is the plural cisoires of cisoir, from the Latin cisorium, from the verb caedo, meaning cut and giving us words like “decide” and “homicide.”

Mathematics is not a deductive science, but the deductive science. Other sciences may use deductive arguments, but not exclusively. For example, a friend pointed out to me an article: Ada Marinescu, “Axiomatical examination of the neoclassical economic model. Logical assessment of the assumptions of neoclassical economic model,” Theoretical and Applied Economics, Volume XXIII (2016), No. 2(607), Summer, pp. 47–64. Says Marinescu,
We analyze in this paper the main assumptions of the neoclassical theory, considered as axioms, like the rationality of the economic actor, equilibrium of the markets, perfect information or methodological instrumentalism … Correspondence with reality matters less in this strictly abstract approach compared to the possibility to build a coherent and convincing system.
Correspondence with reality may matter less; I think it still matters, unless one is going to say that neoclassical economics is simply mathematics.

The “findings” of any science may be found by any means. The means of finding them may not, and usually will not, be the same as the method of proving them.

“Deduction” is universally valid reasoning. That mathematical proofs are universally valid is a “metaphysical proposition,” in the sense of Collingwood’s Essay on Metaphysics (1940). This means the proposition carries implicitly the “metaphysical rubric,” namely, “Certain persons believe.” As mathematicians, we believe our proofs to be universally valid. In the same way, Anselm proves not that God exists, but that we believe it:
Whatever may have been in Anselm’s mind when he wrote the Proslogion, his exchange of correspondence with Gaunilo shows beyond a doubt that on reflection he regarded the fool who ‘hath said in his heart, There is no God’ as a fool not because he was blind to the actual existence of un nommé Dieu, but because he did not know that the presupposition ‘God exists’ was a presupposition he himself made.
In mathematics, we believe not that our proofs are universally valid, but that they can be and ought to be, and if they are not, then they are not proofs.
Deduction can be called deductive logic. The address by Euphemia Lofton Haynes is called “Mathematics—Symbolic Logic.” The speaker discusses two “scientific movements,” originating in the 19th century. These movements were,

in mathematics, to criticize the foundations of calculus and geometry;

in logic, to overcome the inadequacy of Aristotelian logic.
Says Haynes,
Thus originating in apparently distinct domains and following separate parallel paths these two scientific movements led to the same conclusion, viz. that the basis of mathematics is the basis of logic also. Symbolic logic is Mathematics and Mathematics is Symbolic Logic.
On the page with the transcript of Haynes’s lecture, I mention that:

Haynes is said to have been the first African American woman to receive a doctorate in mathematics, in 1943 at Catholic University of America;

I learned of her address to teachers through a certain Twitter account.
That account is called Great Women of Mathematics, and it was denounced, earlier this year, apparently for not affirming the slogan “Trans women are women.” Perhaps there were additional failures to toe a line. Twitter users were called on to unfollow the account. Had I heeded the call, I might not have learned of Haynes’s address, or even of Haynes herself. Let me note by the way:

I try to follow Twitter accounts that I disagree with, at least if they offer actual arguments (either directly or through links).

I haven’t got great expectations for dialogue on Twitter with people I disagree with; however, I have hoped mathematicians would be different, because, as I say, we aim for universality in our arguments. This means we may be wrong, if somebody disagrees with us.
I had an argument with two other mathematicians over the proposed cancellation of Great Women of Mathematics. I shall not try to review the argument. I do have two related blog posts, called “Sex and Gender” and “Be Sex Binary, We Are Not,” composed respectively before and after the argument. I note now what Brian Earp says in his 2016 essay, “In praise of ambivalence—‘young’ feminism, gender identity, and free speech”:
… “no-platforming” can be a bad idea … even when the person you want to exclude is a dyed-in-the-wool ideological opponent …
The political theorist Rebecca Reilly-Cooper, herself a controversial figure in this debate, argues that there is “a creeping trend among social justice activists of an identitarian persuasion” towards what she calls ideological totalism.
This is “the attempt to determine not only what policies and actions are acceptable, but what thoughts and beliefs are, too” …
My worry is that such thought-policing, to the extent that it exists, is unlikely to achieve its aims in the long run.
Myself, I might worry, not that thought-policing will fail, but that people think it will work.
I return to my original aim, which is to define mathematics. I say that its findings are proved by deduction. We usually refer to a deductive proof as a proof, simply. However, in her address, Haynes uses the word “proof” only in the context of physical science:
Let us consider for a moment a teacher of physics or chemistry. In order to examine the validity of an hypothesis, he [sic] does not rush through the proof, i.e. the experiment, concentrating on the findings, and spending the major portion of his time trying to discover whether or not his pupil has retained the results or findings. No …
Here I have edited what seems to be a corrupt text. I have put “and spending” where the typescript has “but spends.”
I think the kind of proof that Haynes refers to is inductive, in one of the senses that I discussed in the post “On Gödel’s Incompleteness Theorem.” Haynes uses the term “incomplete induction” for such proofs, as when she says of the mathematician,
His processes and methods are similar to those of his colleagues in natural science. The importance of observation, experimentation, testing of hypotheses by the laboratory method, and incomplete induction cannot be too strongly emphasized.
I assume the qualifier “incomplete” is meant to distinguish this kind of induction from specifically mathematical induction, which, in spite of its name, is a method of deductive proof. The method applies to sets of counting numbers. If such a set contains

the successor of its every element, and

1 itself,
then the set contains every counting number. For example, we may be given the set of all n such that twice the sum of the first n numbers is the product of n with its successor. More symbolically, the set consists of those n for which
2(1 + … + n) = n(n + 1).
Since indeed

2 ⋅ 1 = 1(1 + 1) and,

if, for some k,
2(1 + … + k) = k(k + 1),
then
2(1 + … + (k + 1)) = 2(1 + … + k) + 2(k + 1)
= k(k + 1) + 2(k + 1) = (k + 1)(k + 2),
we conclude that, for all counting numbers n,
2(1 + … + n) = n(n + 1).
That’s a proof. We call it a proof by induction, but it’s a deductive proof. There’s nothing missing, nothing left to prove; the proof is “complete.” The method might then be called complete induction. It is so called in one source that I have:
The method of mathematical (or complete) induction is a very strong tool in mathematical proofs. Unfortunately, in secondary school it does not receive the attention which it deserves. Most students have a rather hazy idea concerning this important method.
That’s from page 21 of Dorofeev, Potapov, and Rozov, Elementary Mathematics: Selected Topics and Problem Solving (Moscow: Mir Publishers, 1973). During the Cold War, and in particular during the first term of Ronald Reagan as American president, my teacher Donald J. Brown had us use the Soviet text for precalculus at St Albans School in Washington, D.C.
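Incidentally, the formula just proved by induction is easy to check by machine for small values. Here is a minimal Python sketch (the function name is my own invention):

```python
# Check numerically that twice the sum 1 + ... + n equals n(n + 1).
def twice_triangular(n):
    """Twice the sum of the first n counting numbers."""
    return 2 * sum(range(1, n + 1))

for n in range(1, 101):
    assert twice_triangular(n) == n * (n + 1)
```

Such a check is no substitute for the induction, of course; it confirms only the first hundred cases.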
I assume George Yankovsky, translator of the text, knew what he was doing in using the word “complete” to describe mathematical induction. Nonetheless, looking around, I see that some people use the term “complete induction” as an alternative to “strong induction.” This is the method of proof whereby a set of counting numbers contains all of them if it contains

the successor of every number for which the set contains both that number and all less numbers, and

1 itself.
The two conditions can collapse to one: a set of counting numbers contains all of them if it contains
 every number for which the set contains all less numbers.
The condition of containing 1 then follows, since every set contains every counting number that is less than 1, there being no such numbers.
I find a webpage actually called “Complete Induction,” although it is only about the method that I just described as strong induction. The page is part of an online textbook on the foundations of mathematics, and the previous page in the text is called “Mathematical Induction.” Unfortunately the text teaches the same error that Donald Brown taught us from Spivak’s Calculus: the error that ordinary mathematical induction and “complete” or strong induction are equivalent conditions on the natural numbers. They are equivalent, in the sense that they are both true of the natural numbers; but what is meant is that you can prove either of the conditions after assuming the other. The proofs make tacit assumptions that ought to be explicit. Ordinary induction involves only 1 and the operation of succession, while strong induction involves a linear ordering. The explicit assumptions for the putative proofs of equivalence are that

different numbers have different successors;

1 is the successor of no number.
If we make a third assumption, namely
 ordinary induction is valid,
then we can indeed prove that the counting numbers are linearly ordered; however, this takes a lot of work, which is usually forgotten. One approach is that of Landau, whose Elementary Number Theory I used for my last post, “LaTeX to HTML.” In The Foundations of Analysis (1929/1966), given the three axioms above, Landau proves that there is a unique operation of addition with the property
n + (k + 1) = (n + k) + 1.
Then he can define the relation “<” by the rule
k < n ⇔ ∃x k + x = n.
Alternatively, one can prove the “Recursion Theorem,” or at least the special case that every natural number has a well-defined set of predecessors according to the rules,

1 has no predecessor,

the predecessors of n + 1 are precisely n and its predecessors.
Now one can define k < n to mean precisely that k is a predecessor of n.
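The two recursive rules for predecessors can be modelled in a few lines of Python (a toy sketch; the names are mine):

```python
def predecessors(n):
    """Predecessors of the counting number n, by the two rules:
    1 has no predecessor; those of n + 1 are n and its predecessors."""
    if n == 1:
        return set()
    return {n - 1} | predecessors(n - 1)

def less(k, n):
    # Define k < n to mean: k is a predecessor of n.
    return k in predecessors(n)
```

Here the recursion of the rules is carried directly by the recursion of the function.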
Either way, one goes on to prove that “<” is a linear ordering and even a “well-ordering,” meaning every nonempty set of numbers has a least element. This is equivalent to strong induction, at least in the compact form with the single condition that I stated.
Conversely, given a nonempty well-ordered set with no greatest element, we can call

its least element 1,

the least of the elements greater than n the successor (namely n + 1) of n.
Every limit ordinal in the sense of my post “Ordinals” is a nonempty well-ordered set with no greatest element, but only ω is isomorphic to the set of counting numbers.
There is more discussion of the logic here in Example 1.2.3 (pages 37–8) of Model Theory and the Philosophy of Mathematical Practice (Cambridge University Press, 2018), by John Baldwin, who calls the general problem my paradox (that is, “Pierce’s paradox”). I mentioned the so-called paradox in “Anthropology of Mathematics,” suggesting that we may learn some things when we are too young to question them, then teach them when we are older without going back to question them.
According to Euphemia Lofton Haynes, as in a quotation I made earlier, we mathematicians also use incomplete induction. Her example is the Pythagorean Theorem:
It was observation of the fact that the squares of certain numbers are each the sum of two other squares; the collection of these sets of numbers by the method of trial; the observation that apparently these and only these triplets are the measures of the sides of a right triangle—that is, observation, experimentation, incomplete induction—processes common to the experimental sciences—that led to the discovery of the Pythagorean Theorem.
That is how I would edit the typescript, which actually reads as follows:
It wasby observation of the fact that the squares of certain numbers are each the sum of two other squares; the collection of these sets of numbers by the method of trial; the observation that apparently these and only these triplets are the measures of the side of a triangle. That is by obser vation, experimentation, incomplete induction processes, common to the experimental sciences, led to the discovery of the Pythagorean Theorem.
The account is plausible. I’m not sure I didn’t offer a similar account of the Pythagorean Theorem, when an English teacher at St Albans tried to explain the distinction between deductive and inductive logic. I may have been in the eighth grade; at any rate, my classmates and I were too young to have seen official proofs in mathematics. We must have been taught the Pythagorean Theorem somehow, so that our English teacher could give it as an example of a general rule from which specific cases could be deduced. He may even have said that the rule itself was established deductively. In my memory at least, I responded that the rule must have been discovered inductively.
Perhaps before hypothesizing the Pythagorean Theorem itself, in all of its generality, somebody did observe that

3^{2} + 4^{2} = 5^{2}, and 5^{2} + 12^{2} = 13^{2}, and 8^{2} + 15^{2} = 17^{2},

the bases of the squares in each equation were sides of a right triangle.
How would one make such an observation? One might do it, not by measuring with a ruler, but with pictures. Diagrams, such as those illustrating this post, can make it clear that the triangles whose sides form the ordered triples (3, 4, 5), (5, 12, 13), and (8, 15, 17) are indeed right-angled. But then a single picture can also serve as a proof of the general theorem. It is conceivable that one may thus discover the theorem, without having considered particular triangles whose sides one knows the measures of.
Likewise may a picture replace our inductive proof that for all counting numbers n,
2(1 + … + n) = n(n + 1).
Here’s the picture:
An array of n rows, each row consisting of n + 1 dots, contains n(n + 1) dots in all. It can also be broken into two triangles as indicated, each triangle consisting of 1 + … + n dots.
You may say that’s not a proof. Dorofeev, Potapov, and Rozov say it’s not. The picture establishes the assertion only for a special case.
The incompleteness of [such a] proof is obvious. We establish the formula for a few values of n and then draw the conclusion that it is true for any [counting number] n. With that approach, it is possible to “prove” the following assertion: for an arbitrary integer n, n^{2} + n + 41 is prime. Indeed, for n = 1, 2, 3, 4 we have 43, 47, 53, 61—all primes. “Consequently”, the assertion is proved, though it is clear that, for example, when n = 41 the number n^{2} + n + 41 is divisible by 41.
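The failure of that “proof” is easy to exhibit by machine. A quick Python check (my own sketch, with a naive primality test):

```python
def is_prime(m):
    """Naive trial-division primality test."""
    return m >= 2 and all(m % d != 0 for d in range(2, int(m ** 0.5) + 1))

# n^2 + n + 41 is prime for n = 1, ..., 39 ...
assert all(is_prime(n * n + n + 41) for n in range(1, 40))
# ... but not for n = 40 or n = 41, both values being divisible by 41.
assert not is_prime(40 * 40 + 40 + 41)  # 1681 = 41 * 41
assert not is_prime(41 * 41 + 41 + 41)  # 1763 = 41 * 43
```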
Our picture proof that 2(1 + … + n) = n(n + 1) is based on the case when n = 5; but it should be obvious that there is nothing special about 5 here. This makes the proof different from the false proof of the primality of all numbers n^{2} + n + 41. We could also write our picture proof in algebraic form:
2(1 + … + n)
= (1 + … + n) + (1 + … + n)
= (n + … + 1) + (1 + … + n)
= (n + 1) + … + (n + 1)
= n(n + 1).
However, such a proof may be too obscure, needing too much intuition. In that case, one can always fall back on the explicit proof by induction.
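The pairing behind the algebraic proof can itself be illustrated in Python (again a sketch of my own):

```python
def pair_sums(n):
    """Pair 1, ..., n with n, ..., 1; each pair sums to n + 1."""
    forward = list(range(1, n + 1))
    return [f + b for f, b in zip(forward, reversed(forward))]

n = 5
assert pair_sums(n) == [n + 1] * n        # n pairs, each summing to n + 1
assert sum(pair_sums(n)) == n * (n + 1)   # hence 2(1 + ... + n) = n(n + 1)
```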
If I understand correctly, in the US there’s an attempt to teach arithmetic in a more intuitive way than by just applying the traditional algorithms. Thus for example to add 79 and 18, instead of first adding 9 and 8 to get 17, then adding the tens digit here to the sum of 7 and 1 to get 9, so that 79 + 18 = 97, you may do better to think
79 = 80 − 1,
18 = 20 − 2,
79 + 18 = 80 + 20 − (1 + 2) = 100 − 3 = 97.
That’s fine, but it seems to me one should have the fallback algorithm of performing the addition digit by digit, right to left, as I described first. This provides a mechanism for resolving disputes about sums, as well as for not having to think and be creative, which nobody wants to do all the time.
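The fallback algorithm can be spelled out as code. Here is one Python rendering of the right-to-left, digit-by-digit addition with carrying (a sketch, not the only way to write it):

```python
def add_digit_by_digit(a, b):
    """Add non-negative integers by the traditional algorithm:
    right to left, one decimal digit at a time, carrying as needed."""
    x, y = str(a)[::-1], str(b)[::-1]  # digit strings, least significant first
    digits, carry = [], 0
    for i in range(max(len(x), len(y))):
        dx = int(x[i]) if i < len(x) else 0
        dy = int(y[i]) if i < len(y) else 0
        carry, d = divmod(dx + dy + carry, 10)
        digits.append(str(d))
    if carry:
        digits.append(str(carry))
    return int("".join(reversed(digits)))

assert add_digit_by_digit(79, 18) == 97
```

The mechanical character of the procedure is exactly the point: it settles disputes without requiring thought or creativity.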
We have found three proofs that 2(1 + … + n) = n(n + 1), but not all of them may be accepted as proofs by everybody. I have suggested that some standard proofs found in textbooks are bogus. How then can I assert that mathematical proofs are universally valid?
A proof is not a picture or an arrangement of typographical characters, any more than a work of art is pigment in oil on canvas. The physical things are just the means we use to understand the real thing.
How we come to see the real thing may itself be obscure. The Soviet textbook has some good comments here, though they be about seeing the assertion rather than its proof:
It must be stressed that the induction method is a method of proof of specified assertions and does not serve as a derivation of these assertions. For instance, this method cannot be used to obtain the formula of the general term [of an arithmetic progression or—let us add—of the sum of its first n terms]; however, if we have found the formula in some way, say by trial and error, then the proof of it can be carried out by the induction method … In this process, of course, the method of trial and error, the mode of obtaining a formula or an assertion is not a necessary element of the proof. On the basis of some kind of reasoning or guessing we conjecture an assertion, then we can proceed to proof by induction.
This distinction between finding an assertion and proving it is one that we have seen Euphemia Lofton Haynes discuss for the natural sciences. How then is mathematics to be distinguished from, say, physics? According to Haynes,
Although a mathematical system is syllogistic and postulational in style and form, the assumption that syllogistic reasoning is the very foundation of all mathematical activity is another inherited fallacy which is the result of the reign of methodology.
I think another way to say “syllogistic and postulational” is deductive and axiomatic. Perhaps physics can be this, but the conclusions of the deductions still have to be checked, to see whether they fit the experimental data. By contrast,
the observation of the mathematician transcends that of the natural scientist in that it is not confined to observations of the physical eye …
The mathematician builds … worlds that are possible logically. Whether they are possible in our world of sense is of no concern to him.
That worlds are “possible logically” means they can be deduced from postulates. This is what distinguishes mathematics among the sciences. Mathematics is the deductive science.
Pages 22–3 of Joseph Needham, Mathematics and the Sciences of Heaven and Earth, Volume 3 of Science and Civilization in China (Cambridge University Press, 1959), concerning the Chou Pei Suan Ching or Zhoubi Suanjing
Edited February 2, 2021