
By "mathematical modality" or "mathematical necessity" I mean the sort of necessity that makes mathematical propositions true.

The modal understanding of logic is that a deduction is an inference that holds by necessity; that is, if the premises are true, then the conclusion necessarily follows. However, the literature seems to be a bit ambiguous about exactly where the boundaries of deduction lie. The examples of deduction are always examples of logic, such as modus ponens, but the modal account of deduction seems to include mathematics as well, since mathematics is true by a form of necessity. Also, mathematics is often called a deductive science, implying that mathematical reasoning is considered deduction, at least by those authors.

Also, logical necessity is commonly listed as a form of modality, and I've always assumed the term is intended to include only the forms of necessity usually associated with logic and not the broader forms of necessity associated with mathematical reasoning. By contrast, I don't think I've seen a reference to mathematical modality, although that's possibly supposed to be included in conceptual modality.

So here are my questions: Outside of works that are specifically on logic, is deduction generally understood to include or exclude mathematics? Has anyone explored the distinction between logical and mathematical (and conceptual) modality? Does anyone maintain that logical modality includes mathematical modality? Does anyone dispute that mathematical modality includes logical modality?

UPDATE:

It seems I wasn't clear enough. I know that mathematics uses standard logic; I just took that for granted. But mathematics also uses reasoning steps that are not standard logic. For example: the law of mathematical induction; inferring from A<B and B<C that A<C; inferring, from the fact that one point is in the interior of a circle and another point in the exterior, that the line between the points intersects the circle; and so on.
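The first two examples can be written out schematically; one standard rendering (a sketch, not the only formulation) is:

```latex
% Mathematical induction over the naturals, as a schema
% (one instance per property P):
\bigl( P(0) \land \forall n \, ( P(n) \to P(n+1) ) \bigr) \to \forall n \, P(n)

% Transitivity of the order relation:
\forall a \, \forall b \, \forall c \, \bigl( (a < b \land b < c) \to a < c \bigr)
```

In first-order Peano arithmetic the induction schema is an axiom (one instance per formula), while transitivity of < is provable from the axioms; neither is a theorem of pure logic alone, which is the contrast being drawn here.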

These are all propositions that have mathematical necessity as I am trying to describe it. They are, in some sense, necessarily true, but not by logical necessity. You could argue that they have metaphysical necessity, and maybe they do, but it seems to me that there is another, more restrictive form of necessity here, because everyone readily agrees on these sorts of propositions, while not everyone agrees on other sorts of claims about metaphysical necessity.

  • There are generally no "boundaries of deduction". We can always construct a system of axioms in which an arbitrary statement using an arbitrary alphabet is "true". The usefulness of mathematics does not lie in the fact that it can deduce "true statements". It's only useful because it can deduce statements that have a physical interpretation. Deduction is only one tool of many in the human tool chest that produces physical results. For instance, I don't need any math to deduce that dropping a heavy weight on my toe is a bad idea. I can learn that from experience. Commented Apr 24 at 1:33
  • @FlatterMann, I reject your formalist account of mathematics. It is obviously false since it cannot account for the practical utility of mathematics. Commented Apr 24 at 4:32
  • We use mathematics as a tool to build models. It is the models that have practical utility, because we design them with that intent. The abstract tool has been designed to facilitate creating trustworthy models. Commented Apr 24 at 13:38
  • I think we can consider ideas like cryptography using mathematical equations without having to assert that there must be a physical substrate. We all got the memo, can we move on? 'Cryptography' as a topic area is nonphysical. Equations are not made of anything. Commented Apr 24 at 18:26
  • Yep. If we weren't here, our thoughtforms wouldn't be either. Strange world. Commented Apr 24 at 22:12

7 Answers

6

Quote from the preface to The Story of Proof: Logic and the History of Mathematics (2022), by the mathematician John Stillwell:

The history of mathematics can be viewed as a history of proof, because mathematics presents the most extreme challenges to proof.

Motto of the book:

How the concept of proof has enabled the creation of mathematical knowledge.

Mathematics uses deduction (valid logical inference) in/as proofs. Thus it makes no sense to speak of deduction "including or excluding mathematics". Deduction is not some higher category instantiated by math or logic. Deduction is the specific form of argument (which is, among other things, characterized by necessary truth preservation) that is used in standard mathematical proofs. Other forms of proof are also sometimes used - e.g. purely visual proofs - but those can easily be misleading, so they need to be reducible to standard deductive proofs (even though, if not misleading, they can be more beautiful, insightful and instructive than any explicit deduction). There is no special "mathematical form of reasoning" used in deductive proofs. It's just logical, deductive reasoning (either classical or intuitionist). (Which doesn't encompass the reasoning involved in discovery or problem solving, insofar as reasoning is involved in that, but that's another issue.)

Not all philosophies of logic see logic as "a form of modality". That perspective is mostly due to model theory (going back to Tarski), where "logically valid" is interpreted/explained as "true in all possible models". Gentzen's natural deduction (see proof-theoretic semantics) is not a modal view in this sense, and neither is an approach like dialogical logic. In both these other approaches (philosophies of logic, plus specific ways to develop formal logic(s)) logical validity is based on unambiguous rules and the strict application of those rules.


If the question is what kind of special "mathematical necessity" is somehow inherent in mathematical axioms, then it becomes a lot more interesting. According to a formalist view (going back to e.g. Thomae (1840-1921), vehemently opposed by Frege), mathematics is just a formal game, like chess. The choice of axioms would then be arbitrary. (But, in fact, even for chess the selection of rules - and the content of rules - is far from arbitrary; the variation in rules that is "possible" is pretty radically constrained, even though it's not fully determined, by what can lead to "interesting", non-trivial games.) From a strictly logical standpoint, the choice of axioms is indeed arbitrary. But the actually selected axioms in mathematics are not arbitrarily selected. The question of which axioms are needed, as minimal commitments to make mathematical proofs possible in various branches of mathematics, is taken up by reverse mathematics.

The best way to get a sense of how axioms are actually established is, I believe, to look at rather new, smaller branches of mathematics, such as, for instance, the development of multi-person algorithms for fair cake cutting, or the theory of knots, or the theory of folding (origami). We really never start with any explicit axioms, but with very concrete and basically utterly simple operations. (Such as: take a piece of paper and fold it, once. It seems typical for mathematics that we do almost immediately introduce some idealization. In this case: we imagine the paper to have no thickness, so any number of folds is "possible". This, I believe, is the one and only "metaphysical" assumption we're making in this case.) It's only when we start to make general statements or try to relate these activities to other branches, or try to gauge the complexities, that axiomatization may be called for. Reverse mathematics can then also bring out common, underlying, structural similarities with other branches of math.


For an intro to origami, see the Huzita-Hatori axioms. Note how the wikipedia article very deliberately -- and appropriately, I think -- uses the words "discovered" and "rediscovered" in regard to these axioms. It's interesting to reflect on the similarities and dissimilarities with Euclidean geometry. It turns out that the origami operations can, for instance, trisect an angle - which is impossible with compass-and-straightedge.

From the point of view of philosophy of mathematics, the "forgotten" or ignored eighth axiom seems to be the most remarkable: it shows you more generally what we want from "axioms". That axiom states that there is a fold (a fold can be made) along a given line. Why would we need that? It's needed (as axiom/as elementary folding operation) for completeness:

This article reviews the so-called “axioms” of origami (paper folding), which are elementary single-fold operations to achieve incidences between points and lines in a sheet of paper. The geometry of reflections is applied, and exhaustive analysis of all possible incidences reveals a set of eight elementary operations. The set includes the previously known seven “axioms”, plus the operation of folding along a given line. This operation has been ignored in past studies because it does not create a new line. However, completeness of the set and its regular application in practical origami dictate its inclusion.

(Jorge C. Lucero, On the elementary single-fold operations of origami: reflections and incidence constraints on the plane (2017)) (I have rendered in bold what I found philosophically salient. It's an interesting paper, and anyone with either a minimum of knowledge of Euclidean geometry or a minimum of knowledge of paper folding should be able to follow it.)

What we, generally, want from a set of axioms is that each axiom should be as simple as possible; no axiom should be derivable from the set of all the others; all together they should be a sufficient basis enabling proofs of whatever we want to prove. So, simplicity, minimality/independence, and completeness.

The first seven axioms are all 'constructive' (they introduce ways in which a new fold can be constructed); the eighth is more 'foundational': it's needed to ensure that the formal system completely mirrors the practice (all results that can be achieved). It was only discovered by submitting the initial system (practice plus 7 original axioms) to a rigorous mathematical re-analysis (using a mapping to the geometry of reflections) trying to capture the limits of the whole system. (It's similar to discovering the need for 0 and the identity element in algebra: it doesn't add a "new number" so much as it completes the algebra. The nice thing is that here we can witness the gestation and birth of a 0 in a new branch of mathematics.)

4

Frege and Russell hoped to reduce all of mathematics to logic: a project called logicism. On this view, there is no distinction between logical deduction and mathematical deduction, or between logical necessity and mathematical necessity. Mathematics is just a specialised branch of logic. Russell wrote that mathematics is a subject in which we do not know what we are talking about, nor whether what we are saying is true. It has no specific subject matter because it deals only in logical relationships.

This position ran into difficulties widely considered fatal. Some of the axioms Russell relied upon were disputed, and then Gödel came along and punched a big hole in the project by showing that there are fundamental limitations to the capabilities of axiom systems. There are a few neo-logicists around today, but it is not a popular position.

In modern logic it is customary to distinguish between logic and theory. Roughly, logic furnishes the framework and the mechanics of deduction, while theories provide the content. Mathematics sits on the theory side. Mathematics is a collection of theories with specific subject matter: numbers, sets, groups, spatial relationships, etc. Mathematical theories have non-logical axioms, and it is the job of logic to demonstrate how theorems are deduced from axioms.

Consequently, there is a difference between the necessity that attaches to logic itself and the necessity, if any, that attaches to the content of theories. It is common to regard mathematical theorems as necessarily true, but only relative to their subject matter. Some theorists might describe the necessity of mathematics as a kind of conceptual or even analytic necessity. The axioms of scientific theories might also be considered necessary in a different way.

There is room to challenge this logic/theory dichotomy or to argue that it has blurred edges. The distinction is usually based on claims such as the following:

  1. Logic is topic-neutral; it has no specific subject matter.

It is doubtful whether anything really is topic-neutral. Even logics have metaphysical assumptions concerning such things as existence or realism.

  2. Logic is purely formal.

Some logicians dispute this and allow that there is material consequence. Also, mathematics is formal in nature.

  3. Logic is concerned only with the meaning of logical constants, not with subject-specific terminology.

OK, provided we have firm criteria for what a logical constant is. The □◇ operators from modal logic serve as logical constants, but this suggests that the class of logical constants is indefinitely extensible.

  4. Systems of logic feature substitution rules that guarantee their universal applicability.

This is usually the case, though there are some logics that restrict substitution. Also, we still need to know what the logical constants are so we know which terms cannot be substituted.

Some counterclaims:

  1. There are many logics and they appear to have different domains of application.

  2. Some logics may be regarded as theories. For example, the modal logics K4, S4, and S5 are formed from the basic logic K together with additional axioms. So they could be regarded as theories over K, at least for some purposes.

  3. It used to be common to regard set theory as a branch of logic.

  4. At a high level of abstraction, logic itself may be regarded as a theory of consequence. Some theorists have proposed the idea of a universal logic that describes the general properties of logics, by analogy to how universal algebra describes the general properties of algebraic structures. At a high level, the logic/theory distinction dissolves. Logic is a theory concerning the structure of consequence relations; theories are local logics with restricted consequence rules.
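The layering described in point 2 can be written out explicitly. As a sketch (using the standard schema names), each of these systems is the basic logic K plus additional axiom schemas:

```latex
% K (base system): \Box(p \to q) \to (\Box p \to \Box q)  % plus necessitation
% Schema 4:        \Box p \to \Box\Box p
% Schema T:        \Box p \to p
% Schema 5:        \Diamond p \to \Box\Diamond p
%
% K4 = K + 4, \qquad S4 = K + T + 4, \qquad S5 = K + T + 5
```

Presented this way, the stronger modal logics do look like theories over K: the base supplies the consequence machinery, and the extra schemas supply the content.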

  • I'm reminded of how SICP explained how great Lisp was, until about page 300 when they admitted you couldn't do anything with it at all unless you smuggled in the dread plague of assignment. Commented Apr 24 at 22:49
3

Most of the "informal steps" of mathematics have formal proofs available. We don't have to prove them every time; we just need to know/trust that a reliable proof exists that would work for this case... and that last is generally achieved as a side effect of specifying the assumptions we are working within and confirming they match the requirements of the proof.

Don't confuse elision of intermediate logical steps with absence of logical grounding.

Admittedly, in practice human thought relies a lot more on pattern matching than on logic. But by saying "mathematics", you have limited that to retrieving and applying patterns which can be examined and whose validity can be tested/proved logically.

  • (I have a truly elegant proof of that conclusion, but it doesn't fit in a comment box.) Commented Apr 24 at 15:25
  • The Answer box is bigger :-) Commented Apr 24 at 22:33
  • Where can I download the 300TB long formal proof for the statement that the integral of x is x^2/2 + C? Commented Apr 25 at 16:48
  • @flattermann Isn't that from the basic definition of derivatives and integration, which is in turn derived from the infinite series expansions? I think it only took us one lecture, if that. But it has been over four decades and I haven't used calc much since then, so if there is a gotcha here I'm missing it. I grant that calculus was explained partially in intuitive terms, but only partially... Commented Apr 25 at 20:48
  • @keshlam Yes, but how many symbols does it take to express that x is a real valued variable? You are forgetting that symbolic logic doesn't have a built in interpretation. It's just random looking strings of characters. So how do we know what such a string means? We don't. Instead we have to expand it so that it can not mean anything other than what we want it to mean. Every proof of a high level statement basically has to carry the weight of all exclusions of all other possible interpretations with it. That's a horrible overhead that humans don't need. We already have the right context. Commented Apr 26 at 6:43
3

The terms

"mathematical modality" or "mathematical necessity"

can be read in two ways.

As a modality alongside metaphysical, logical, physical, etc. Maybe there is a place for it between logical necessity and nomic necessity.

Contra: Russell and Wittgenstein (Tractatus 6.37), according to whom there is no "necessity" distinct from the logical one. There is only the absolute generality of logic.

See B. Russell, Principles of Mathematics (1903), §430:

Everything is in a sense a mere fact. A proposition is said to be proved when it is deduced from premisses; but the premisses, ultimately, and the rule of inference, have to be simply assumed. Thus any ultimate premiss is, in a certain sense, a mere fact. [...] The only logical meaning of necessity seems to be derived from implication [consequence]. A proposition is more or less necessary according as the class of propositions for which it is a premiss is greater or smaller. In this sense the propositions of logic have the greatest necessity, and those of geometry have a high degree of necessity.

What about the issue:

is deduction generally understood to include or exclude mathematics?

In this case we are discussing logical consequence: the relation first investigated by Aristotle:

A deduction is speech (logos) in which, certain things having been supposed, something different from those supposed results of necessity because of their being so. (Prior Analytics I.2, 24b18–20)

"something results by necessity", "it follows from", "no counterexample": many ways to express the consequence relation.

Modern logic analyzed the relation in terms of proofs and models: the proof-theoretic and model-theoretic perspectives have been considered as providing rival accounts of logical consequence.

The remarkable fact is that - in first order logic - the two perspectives carve out the same collection of intuitively valid arguments (Kreisel's squeezing argument).

From this point of view, we have no specific "mathematical consequence" concept. A theorem of arithmetic follows from the axioms of arithmetic "by logic alone", i.e. it is a logical consequence of the axioms.

Contra: Henri Poincaré. For Poincaré mathematics needs a specific form of reasoning: induction.

Induction, contra logicism, is not reducible to pure logic. Logic is analytic (devoid of content), while induction produces mathematical content.

Maybe we can investigate the old concept of Logical Form: logic is formal because logical inferences and logical laws are intuitively valid irrespective of their content.

If so, maybe a mathematical truth, like 1+1=2, is true by way of a specific "mathematical form": the symbolic manipulation laws of arithmetic.
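As an illustration of what such "symbolic manipulation laws" look like: in Peano arithmetic, writing 1 = S(0) and 2 = S(S(0)), the truth of 1+1=2 unwinds mechanically from the recursion equations for addition:

```latex
1 + 1 &= S(0) + S(0)         % by the definition of 1
      &= S\bigl(S(0) + 0\bigr) % recursion equation: a + S(b) = S(a + b)
      &= S(S(0))             % base equation: a + 0 = a
      &= 2                   % by the definition of 2
```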

Poincaré was strongly influenced by Kant, according to whom Mathematics is the Science of Forms:

[The claim that] mathematical knowledge is synthetic a priori actually has two components. One is that mathematics can claim to give a priori knowledge of (universally applicable to) objects of possible experience because it is the science of the forms of intuition (space and time, which are conditions under which all objects of experience are made known to us). The other is that the way in which mathematical knowledge is gained is through the synthesis (construction) of objects corresponding to its concepts, not by the analysis of concepts.

Here we have many interesting points: form vs content, analysis vs synthesis, and construction.

Still useful: G. E. Moore, Necessity (Mind, 1900):

it would appear there are three classes of entity which are commonly called necessary. We may call a connexion necessary, or we may call a thing necessary, or we may call a proposition necessary. And there is at least one property which may be common to all these three. All three of them may be forced upon the mind. [...] The universal certainly would seem a more likely candidate, than either of the others, for the honour of identification with the necessary. They have been ranked together by Kant as joint marks of the a priori.

But is there, perhaps, some [...] kind of universality which is common both to the propositions of Arithmetic and to the Law of Contradiction, and indeed to all propositions which have a prima facie claim to be considered necessary truths? There is, I think, a sense in which, not indeed strict universality, but a certain generality may be claimed for all of them. [...] The logical relation, by means of which I propose to define necessity, is one to which constant appeal is made in philosophical arguments; but the appeal is almost as frequently misused. It is said that one proposition is presupposed, or implied, or involved in another [...]

if we say that no proposition is necessary in itself, but that when we call it necessary we can only mean that it is connected in a certain way with other propositions, it may be asked: But what of this connexion ? [...] when we say: If you admit that, you must admit this: they are necessarily connected; we only mean: This follows from that. [...] We have, then, an answer both as to the meaning of necessary propositions; and also as to the meaning of necessary connexion between propositions. The first are necessary when they are implied in a large number of other propositions; and as to the second, it is the proposition that the truth of what is implied follows from the truth of that which implies it, that is necessary. The connexion itself is not necessary, but the truth, that if it is there, then a true conclusion may be drawn, is necessary.

3

The general issue here is reasoning: i.e., thinking about things in sensible, productive ways. Most people reason fairly well in direct, concrete situations. We can all (generally speaking) suss out how to parallel park a car, open a child-proof cap, get an assignment done for class or work, ease our way out of an uncomfortable social situation, etc. But there's a persistent social and personal usefulness in 'reasoning better' that leads us to try and develop reasoning as a skill. I mean, we could all go and reason out how to build ourselves a hut for shelter (if we had to). But if we reason better — thinking about angles and sizes and materials and such — we could certainly build a sturdier, more functional, and more aesthetically pleasing hut. And so we try to formalize our (otherwise concrete) reasoning:

  • Formalizing our thoughts about angles and distances leads us to geometry
  • Formalizing our thoughts about quantities leads us to algebra
  • Formalizing our thoughts about materials leads us to sciences
  • Formalizing our thoughts about thinking leads us to logic

Formalization merely means that we take pragmatic and concrete acts of reasoning and abstract them to generalized forms that we can import/export to other pragmatic and concrete acts of reasoning: e.g., we formalize all of the reasoning we did to build a hut, and apply that formal system to building other things (like a granary, a well, a stone circle, a pyramid…).

Every formal system shares the quality that it is formalized reasoning, meaning that (to an extent) we can take what we formalize in one context and apply it by analogy to formal reasoning in other contexts. But each formal system is still (to an extent) embedded in the practical, concrete activity that it formalized. Calculus and set theory are still embedded in the practical activity of counting; geometry is still embedded in the practical activity of measuring; logic is still embedded in the practical activity of arguing and convincing others; science is still embedded in the practical activity of building. That makes all these formal systems unique and distinct.


I should add that the idea of 'necessity' is a bit of a red herring. We only bother to formalize activities that work in a direct, pragmatic sense — there's no sense trying to formalize an activity that fails to accomplish what we want — and part of the process of formalization is to capture the aspects or attributes of the activity that (pragmatically) work. Addition and subtraction are formalizations of the practical act of joining or dividing counted groups of things and recounting everything. Modus ponens and modus tollens are abstractions of pragmatic causal reasoning. I mean, if we want eggs we shouldn't look just anywhere, but we should look for bird nests, because bird nests are made to hold eggs (modus ponens); and if we want to know how many eggs we collected, we can just take the eggs you collected and the eggs I collected and work the numbers mentally (addition), rather than laying them all out in a pile and counting again. These formalizations are only 'necessarily true' in the sense that — if they are applied in the proper context and manner — they are (as best we can make them) as sure a recipe for success as doing the concrete activity.

  • There are enormous problems with statements like "Formalizing our thoughts about angles and distances leads us to geometry.". That's exactly how they thought about it before we discovered differential geometry and all the learned people claimed that "Geometry is Euclidean geometry". It isn't. We know better today, but that makes determining "What something really is." much, much harder. Otherwise I agree. Informal methods are far more practical. More importantly, they don't suffer from the delusion to be "right every time" as logic does. Commented Apr 25 at 16:51
0

Based on the clarification in your update, it sounds like you are describing analytic truths.

The natural numbers, analytically, just are an upwardly boundless sequence of recursive applications of the successor function to a base case. Induction on the naturals inherits this analyticity: a statement's holding of the base case, and holding at each application of the successor function whenever it holds at the previous stage, is, analytically, just what it means for the statement to be true of the natural numbers.
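The two ingredients of an induction argument (the base case and the successor step) can be checked mechanically for a sample property. This is a purely illustrative sketch — `holds_by_induction` and `triangle_prop` are names invented here, and a finite check is of course not a proof over all naturals:

```python
# Illustrative only: an induction argument has two parts, a base case and
# an inductive step.  Here we verify both mechanically for finitely many n;
# a real proof by induction covers all naturals at once.

def holds_by_induction(prop, upto):
    """Check prop(0), and that prop(n) implies prop(n + 1) for n < upto."""
    base_ok = prop(0)
    step_ok = all((not prop(n)) or prop(n + 1) for n in range(upto))
    return base_ok and step_ok

# Sample property: the closed form for triangular numbers.
def triangle_prop(n):
    return sum(range(n + 1)) == n * (n + 1) // 2

print(holds_by_induction(triangle_prop, 100))  # True
```

The point of the sketch is that neither check is a rule of pure logic: what licenses the jump from "base case plus successor step" to "all naturals" is the recursive structure of the naturals themselves.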

Likewise with the transitivity of the ordering relation, since a total ordering of a collection of objects just is the situation in which the ordering relation holds between any two distinct objects in the collection. The notion is similar regarding points 'interior' and 'exterior' to a circle.

  • I don't think there are any philosophers who would say that any of the examples I gave are analytic. Commented Apr 25 at 21:08
0

Aren’t you just after platonism?

This would be where all mathematical truths come from. A point inside a circle and outside connect by intersecting the circle because there is such an abstract object.

Or some form of nominalism à la Azzouni?

They are true because we regard the sentences as true.

Or structuralism?

They are true because of their role in some structure.

Or if-thenism or as-if-ism?

They are true because they are true once we hypothesize or axiomatize a starting point.

Etc.

Each interpretation of mathematical truths explains their necessity.

  • The question is about modality, not ontology. Commented Apr 26 at 22:40
