Goldilocks, Bad Company and some Slippery Fish

Posted on February 28, 2017


No, this isn't a terrible (amazing?) fairy tale. And no, the title isn't (just) badly thought-out clickbait. The Bad Company problem, the Goldilocks problem and the Problem of Fishiness are all problems I'm writing about in my dissertation. More specifically, the overarching idea is to look at ways of solving the Bad Company problem; Fishiness and the Goldilocks problem are related issues that arise for some of the more promising mainstream solutions. But first, some background.

[I've provided some links to the SEP for those who are interested, and I play pretty fast and loose with the logic and mathematics, as the aim was just to get the general idea across.]

The Bad Company problem, or simply Bad Company, is an objection to a variety of neo-logicism first championed by Bob Hale and Crispin Wright. The basic idea is that there is a principle (HP), and a collection of principles sharing important structural features with it (abstraction principles – APs), that provide epistemic and logical foundations for classical mathematics. If HP is analytic, or is privileged in some other way such that we know that it is true, then the fact that the axioms of arithmetic can be derived from HP is supposed to ground our knowledge of arithmetical truths.

In case you were wondering, HP (Hume's Principle) says that the number belonging to a concept, F, is the same as the number belonging to another concept, G, just in case the Fs and Gs can be put into one-to-one correspondence ( \# F = \# G \leftrightarrow F \approx G ). BLV (Basic Law V, of which more below) says that the extensions of two concepts are identical just in case exactly the same things fall under both ( \epsilon F = \epsilon G \leftrightarrow F \equiv G ) – more technical details at the end for those who're interested. Numbers and extensions are considered individual objects for these purposes.
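To make the shape of HP a bit more concrete, here is a tiny toy model in Python (purely my own illustration, nothing from the neo-logicist literature): I restrict attention to a three-element domain, where one-to-one correspondence between finite concepts just comes down to sameness of size, and I crudely model the 'number of F' as F's cell in the partition that \approx induces.

    # Toy model of HP-style abstraction over a three-element domain.
    # Concepts are modelled as subsets of the domain; for finite concepts,
    # one-to-one correspondence is just sameness of size.
    from itertools import combinations

    domain = ["a", "b", "c"]

    def concepts(dom):
        """All concepts over the domain, modelled as frozensets."""
        return [frozenset(c) for r in range(len(dom) + 1)
                for c in combinations(dom, r)]

    def equinumerous(F, G):
        """The Fs and the Gs can be paired off one-to-one."""
        return len(F) == len(G)

    def number_of(F, all_concepts):
        """'The number of F', crudely modelled as F's equivalence class."""
        return frozenset(G for G in all_concepts if equinumerous(F, G))

    cs = concepts(domain)
    # HP in action: equinumerous concepts get the very same 'number' ...
    assert number_of(frozenset({"a"}), cs) == number_of(frozenset({"b"}), cs)
    # ... and non-equinumerous concepts get different ones.
    assert number_of(frozenset({"a"}), cs) != number_of(frozenset({"a", "b"}), cs)

The point is only that an abstraction principle partitions the concepts by an equivalence relation and hands each cell an object: HP does this with equinumerosity, BLV with coextensiveness, and there, as we'll see, the trouble starts.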

I'll bring in a few more details as I go, but Bad Company is the problem that certain APs lead directly to inconsistency, or are inconsistent with HP and/or with other APs. How do we separate the good from the bad; how do we stay in good company, leaving the bad behind?

For a bit of context, Frege derived HP from Basic Law V (five)*, but in a letter to Frege in 1902, Bertrand Russell pointed out that Basic Law V (BLV) leads directly to inconsistency (Russell’s Paradox). But BLV and HP are both abstraction principles; HP is in bad company. Perhaps more worryingly, there are other APs that are fine on their own, but inconsistent with HP, or with each other.
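For anyone who wants to see roughly how the contradiction arises (only a sketch, glossing over details of Frege's actual system): consider the 'Russell concept' R, which holds of an object just in case it is the extension of some concept it doesn't fall under, Rx \leftrightarrow \exists F (x = \epsilon F \wedge \neg Fx), and let r = \epsilon R. If Rr, then r is the extension of some F with \neg Fr; by BLV any such F is coextensive with R, so \neg Rr. If \neg Rr, then taking F to be R itself, r = \epsilon R gives Rr. Either way, contradiction.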

The problem with BLV is that it requires there to be more objects than there possibly could be. Thanks to Georg Cantor in the 1880s, we know that the collection of concepts over a given domain (think of them as subsets of the domain) is strictly larger, cardinality-wise, than the domain itself. One way of looking at BLV is as a principle that assigns a distinct object to each concept, but then there would have to be more objects than there are. Impossible.
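A toy count (my own illustration) might help. A domain with just three objects already carries 2^3 = 8 concepts, and in general a domain of n objects carries 2^n concepts, where 2^n > n always. Cantor's theorem is the result that this gap doesn't close when the domain is infinite, so there is no way of handing every concept F its own object \epsilon F, which is what BLV requires.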

Other principles, known as nuisance principles, essentially force the domain to be too small, finite in fact. But HP needs an infinite domain, which is what we would expect given that it can be used to derive the axioms of arithmetic: there are infinitely many natural numbers.
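For concreteness, the usual example (Wright's Nuisance Principle, if I'm remembering the formulation correctly; I'll write \nu for its abstraction operator) abstracts over concepts that differ only finitely: \nu F = \nu G \leftrightarrow the Fs and the Gs differ on at most finitely many objects. Over an infinite domain that equivalence relation has more cells than there are objects to go around, so the principle is only satisfiable in finite domains: fine on its own, but flatly incompatible with HP.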

This is the heart of what Roy Cook has called the Goldilocks problem. We need to find a principled way to allow only those APs that require the domain to be neither too large (like BLV) nor too small (like nuisance principles).

There is a large literature in which cardinality and other model-theoretic restrictions are imposed in an attempt to delineate a collection of ‘acceptable’ abstraction principles, which is where fishiness comes in. I’ll leave that for another, more mathematics-heavy post.

The upshot of all of this is that if we can find a convincing solution to Bad Company, possibly through solving the Goldilocks problem and/or the problem of fishiness, we might then be quite close to showing how we could come to know mathematical truths.

*Both directions individually, anyway.

Some more logic

As promised, here is a bit more of the technical background for those who are interested.

First, it should be noted that we're assuming second-order logic with full powerset semantics and full comprehension in the background. It is in that context that analogues of the second-order Dedekind-Peano axioms can be derived from HP. Also important in that respect is that the ‘\approx’ in HP is an abbreviation for a standard second-order definition of bijectability.
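For the record, that definition can be unpacked roughly as follows (there are several equivalent formulations): F \approx G just in case there is a relation R pairing the Fs and the Gs off one-to-one,

F \approx G \leftrightarrow \exists R [\forall x (Fx \rightarrow \exists y (Gy \wedge Rxy \wedge \forall z ((Gz \wedge Rxz) \rightarrow z = y))) \wedge \forall y (Gy \rightarrow \exists x (Fx \wedge Rxy \wedge \forall z ((Fz \wedge Rzy) \rightarrow z = x)))]

– in words: every F bears R to exactly one G, and every G has exactly one F bearing R to it.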

As for the derivation of HP from BLV, Frege had previously laid down an explicit definition of ‘number of’: \# F = \epsilon [X: X \approx F] – the number of the concept F is the extension of the concept ‘equinumerous with F’.
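Given that definition, the derivation of HP runs, very roughly, like this. If F \approx G then, since \approx is an equivalence relation, a concept is equinumerous with F just in case it is equinumerous with G, so [X: X \approx F] and [X: X \approx G] are coextensive; the right-to-left direction of BLV then gives \epsilon [X: X \approx F] = \epsilon [X: X \approx G], i.e. \# F = \# G. Conversely, if \# F = \# G, the left-to-right direction of BLV makes [X: X \approx F] and [X: X \approx G] coextensive; since F \approx F, F falls under the first, hence under the second, so F \approx G. Note that each direction of HP only calls on one direction of BLV.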

There are also first-order APs, but it is generally the higher-order ones that are (or may be) useful in the foundations of mathematics. The form of a second-order AP is: \partial F = \partial G \leftrightarrow (F \sim G), where \partial is a function from concepts to objects (syntactically, a variable-binding term-forming operator) and \sim is an equivalence relation on concepts.
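HP and BLV are then just the instances you get from particular choices of \partial and \sim: for HP, \partial is \# and \sim is \approx (one-to-one correspondence); for BLV, \partial is \epsilon and \sim is coextensiveness, \forall x (Fx \leftrightarrow Gx). Bad Company is, in effect, the observation that not every choice of equivalence relation here yields a principle we can live with.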