“To infinity and beyond: The struggle to save arithmetic”

The British magazine New Scientist recently posted online an article about Harvey Friedman and his work on Boolean relation theory. Unfortunately, the author seems to think that somehow arithmetic is in danger and Friedman needs to save it. The article made some people curious, and I ended up writing a somewhat lengthy group of posts on examples of undecidable (i.e., unprovable) statements. I think it may be worth sharing them, so here they are, “essentially” unedited.

I.

I’ll take several posts so we can talk about the “standard” examples, and then discuss what Friedman is doing, to get a glimpse of the significance of his work.

In this post, I’ll talk about the first example of undecidable sentences, due to Kurt Gödel.

In logic, we use a formal language. The point is that natural languages (such as English) are not precise enough. Many of the classical paradoxes (“This sentence is false”; the phrase “the smallest number not definable in fewer than twenty words” actually defines a number in fewer than twenty words; etc.) in a sense boil down to the fact that they are formulated in natural languages, and this allows for some ambiguity.

In logic, we use a formal language which, though cumbersome, avoids these ambiguities. You probably have seen some sort of pseudo-logical notation in math courses: \land for “and”, \to for “implies” (if, then), the quantifiers \forall (“for all”) and \exists (“there exists”), etc.

Mathematical theories are formulated in this setting. We say they are formalized. A formal theory is a collection of sentences (“formulas”) in this formal language. We call these sentences axioms, and they don’t necessarily have anything to do with the standard usage of “axiom” to mean “self-evident” or “obvious”.

We use symbols for negation, and, or, implies, if and only if, equality, parentheses, commas, for-all, there-exists, variables (x_0, x_1, x_2,\dots), and additional symbols that the specific theory we are formalizing may require. For arithmetic, we use +, \times, <, 0, 1. A formula is a sequence of these symbols satisfying certain syntactic requirements. For example, 0=0 is a formula, but 0=( is not.

The fact that addition is commutative (a+b=b+a) would be expressed by the axiom

\forall x_0 \forall x_1 ( x_0+x_1 = x_1+x_0 )

Peano arithmetic is a list of axioms that tries to capture the basic truths of arithmetic.

Formal proofs are purely syntactic transformations, according to certain rules. A proof of a sentence A from the axioms is simply a sequence p_0,\dots,p_n of valid formulas such that p_n is A, and each p_i is either an axiom, or comes from previous p_j, p_k by means of these rules. (Let me not spell these rules out; it is not difficult, but it would take me a bit to describe them precisely, and you would probably find the formalism exhausting.)
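To make the “purely syntactic” point concrete, here is a minimal sketch of a proof checker in Python. Everything in it is a toy of my own devising: formulas are just strings or tuples, and the only inference rule is modus ponens (from A and A \to B, infer B); real systems have a few more rules, but the checking is equally mechanical.

```python
def is_valid_proof(proof, axioms):
    """Check that every step is an axiom, or follows from two earlier
    steps by modus ponens: from A and ('->', A, B), infer B."""
    for i, step in enumerate(proof):
        if step in axioms:
            continue
        earlier = proof[:i]
        if any(('->', a, step) in earlier for a in earlier):
            continue
        return False  # neither an axiom nor derivable from earlier steps
    return True

axioms = {'P', ('->', 'P', 'Q')}
print(is_valid_proof(['P', ('->', 'P', 'Q'), 'Q'], axioms))  # True
print(is_valid_proof(['Q'], axioms))                         # False
```

Notice that the checker never needs to know what the formulas mean; that is exactly the sense in which proofs are syntactic objects.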

What Gödel did was to formalize the liar paradox in this setting. Rather than building a formula A that means “A is false” (actually, this turns out to be impossible, by a theorem of Alfred Tarski), Gödel found a way of building a formula A that means “A is not provable from the Peano axioms.”

This sentence A is true, but undecidable:

  1. Suppose that the Peano axioms prove A. Then “A is not provable from the Peano axioms” is false. But then A is false (because of what A means). But then the Peano axioms could not possibly prove it, since all the Peano axioms are true, and the proof rules preserve truth. (One needs to verify these claims, of course.) This is a contradiction.
  2. Suppose then that the Peano axioms prove \lnot A. Then they cannot prove A, because from A and \lnot A one gets a contradiction and, again, the Peano axioms can only prove true things. So “A is not provable from the Peano axioms” is true. But then the Peano axioms prove the false statement \lnot A. This is, again, a contradiction.

It follows that the Peano axioms cannot prove A and cannot prove \lnot A (so A is undecidable). In particular, A is true, but unprovable.

(One can generalize this so we do not need to talk about truth, and Gödel’s results apply to lists of axioms that do not need to be true, but let me avoid the technicalities that explaining this would require.)

This is somewhat unsatisfactory, in that the sentence A seems not to have anything to do with arithmetic, and so it is not likely that one would actually run into anything similar to A in the course of ordinary mathematical practice. The aim of Friedman’s work, in a sense, is to produce “natural” examples of undecidable sentences that are not obtained by this “self-referential” method.

* * *

Let me close by briefly saying how it is that Gödel achieved this. (There is a technical fixed point lemma that is crucial, but I will not discuss it.) The point is, he needed to code all of logic inside arithmetic. Part of what this entails is representing formulas, and sequences of formulas, with numbers. This is relatively easy: Assign to each symbol of the language a number. Say the number for ( is 1, for ) is 2, for x_0 is 3, for x_1 is 4, for \forall is 5, for + is 6, for = is 7. The sentence

\forall x_0 \forall x_1 (x_0+x_1 = x_1+x_0)

could then be coded by the number

2^5 \times 3^3 \times 5^5 \times 7^4 \times 11^1 \times 13^3 \times 17^6 \times 19^4 \times 23^7 \times 29^4 \times 31^6 \times 37^3 \times 41^2.

There may be more efficient codings, of course. This one is as ugly as it can get.
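Here is a small Python sketch of this coding, using the same toy symbol assignment as in the text (the dictionary and function names are mine, purely for illustration; 'A' stands for \forall). By unique factorization, the symbol sequence can always be recovered from the code, so nothing is lost.

```python
# Toy symbol codes from the text; 'A' stands for the quantifier \forall.
CODE = {'(': 1, ')': 2, 'x0': 3, 'x1': 4, 'A': 5, '+': 6, '=': 7}

def primes(n):
    """The first n primes, by trial division (fine for toy inputs)."""
    found, k = [], 2
    while len(found) < n:
        if all(k % p for p in found):
            found.append(k)
        k += 1
    return found

def godel_number(symbols):
    """Code a symbol sequence as 2^c1 * 3^c2 * 5^c3 * ..."""
    g = 1
    for p, s in zip(primes(len(symbols)), symbols):
        g *= p ** CODE[s]
    return g

sentence = ['A', 'x0', 'A', 'x1', '(',
            'x0', '+', 'x1', '=', 'x1', '+', 'x0', ')']
print(godel_number(sentence))  # the big product of prime powers above
```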

Gödel saw that by appropriately coding formulas with numbers, all logical operations could be represented by arithmetic functions. For example, if A(x) is a formula that depends on the variable x (say, A(x) is “x=0“), by A(y) we mean the result of replacing x with y (so “y=0“). The syntactic operation

A(x) \mapsto A(y)

corresponds to some function of natural numbers T(l,m,n) such that T(l,m,n)=k if and only if l codes a formula A(x), m codes x, n codes y, and k codes A(y).

Essentially by following Gödel’s coding, the formula T(l,m,n)=k is simply an assertion about certain polynomial equations having solutions. (I am hiding here the solution to Hilbert’s 10th problem, to avoid having to refer to how to code sequences.)
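Continuing with the toy coding above (this reuses CODE, primes, and godel_number from the previous sketch), here is a hedged sketch of such an operation at the level of codes: decode the number, replace the variable, re-encode. Gödel’s actual T avoids any explicit “decoding” and is defined by arithmetic alone; the sketch only illustrates that the map is completely mechanical.

```python
INV = {v: s for s, v in CODE.items()}  # invert the toy symbol table

def decode(g):
    """Recover the symbol sequence by factoring out successive primes
    (every position in a code has exponent >= 1)."""
    symbols, k = [], 2
    while g > 1:
        e = 0
        while g % k == 0:
            g //= k
            e += 1
        if e:
            symbols.append(INV[e])
        k += 1
    return symbols

def substitute(formula_code, old_var, new_var):
    """The map A(x) |-> A(y), performed entirely on codes."""
    symbols = [new_var if s == old_var else s
               for s in decode(formula_code)]
    return godel_number(symbols)

g = godel_number(['(', 'x0', '=', 'x0', ')'])
print(decode(substitute(g, 'x0', 'x1')))  # ['(', 'x1', '=', 'x1', ')']
```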

This way, statements about provability in a formal system such as Peano arithmetic reduce to statements about polynomials. As such, Peano arithmetic may or may not be able to prove these statements about polynomials. Chasing this idea (it is here that one needs the fixed point lemma), Gödel arrived at a way of making self-referential statements, such as the formula I mentioned above.

II.

In this post, I’m going to briefly talk about a very interesting set of examples that were discovered in the late 70s. These are natural “combinatorial” statements that turn out to be undecidable in Peano arithmetic but, unlike the example from my previous post, are not simply coding a logical “paradox.”

First, some of you may have heard of the halting problem. Essentially, this says that a computer cannot decide whether a given computer program will eventually stop or instead run forever. One can easily use this to give another proof of Gödel’s result that there are undecidable sentences. The example is perhaps a bit more natural than the one in my previous post, in that there is now a more mathematical reason (the halting problem) behind undecidability. On the other hand, this also requires at the end some kind of self-reference, so it is almost the same in disguise.

[In fact, it is common to present Gödel’s result in the context of the halting problem. The expression “undecidable” has a different meaning in computability theory, and it is unfortunate that this was the word, rather than “unprovable”, that the New Scientist article chose to use.]

In the mid-1960s, Paul Cohen proved one of the most significant theorems in set theory: he found that many statements in set theory are undecidable. The Zermelo-Fraenkel axioms of set theory are much more powerful than the Peano axioms, so his result is in a sense much stronger than the corresponding results for Peano arithmetic. Also, the examples that Cohen gave were purely mathematical.

Cohen showed that the so-called continuum hypothesis is independent. This is a statement about sets of real numbers. He also showed the independence of the axiom of choice. Since then, his method has proved extremely flexible, and literally hundreds of new examples of undecidable statements have been discovered thanks to it.

Cohen’s method is so significant that for it he was awarded the Fields Medal, the highest honor in mathematics. (By the way, Fields Medals are awarded every 4 years. The last batch was given at 9:30 am on Aug. 19 this year in Hyderabad, India, see here. There was live streaming of the ceremony and I even tweeted about it.)

On the other hand, some people feel that set theory is too broad, and it would be nice to have examples that are of mathematical interest but refer specifically to arithmetic or, at least, finite objects.

In the late 1970s the first batch of such examples was obtained (by Paris, Harrington, and others). Again, the statements were shown to be true but not provable in Peano arithmetic. Somewhat later, Friedman found examples that were not provable in much stronger systems, and his latest results specifically deal with set theory and the so-called large cardinals. I will get to this in a later post.

Let me give you an example which I think is easy to describe (the Paris-Harrington theorem is discussed in the New Scientist article). We all know how to write numbers in base 10. For example, 1970 means 1\times 10^3 + 9\times 10^2+7\times 10^1+0\times 10^0. Similarly, given any base n, any number can be written as a sum of powers of n so that the “coefficients” are all smaller than n. For example, 23 written in base 3 is 2\times 3^2+1\times 3+2\times 3^0.

Similarly, 23 in base 2 is 16+4+2+1 = 2^4 + 2^2 + 2^1 + 1. We could continue, and make sure that all the numbers that appear are written in base 2 as well. This gives us the super-base 2 representation of 23. In this case, it is 2^{2^2} + 2^2 +2^1 +1.

For another example, 2^{2^{2^1 +1}} +2^1 is the super-base 2 representation of 258, since 258= 256+2 =2^8 + 2^1= 2^{2^3} +2^1 = 2^{2^{2^1 +1}} +2^1.

Similarly, for any n, we can write a number in super-base n, if we write it in base n, and all the exponents are also written in base n, and all their exponents, and so on.
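Here is a short Python sketch of this representation. The function name and the convention of leaving numbers \le n unexpanded are my own choices, made to match the examples above:

```python
def super_base(m, n):
    """Render m in super-base n: base n, with every exponent itself
    recursively written in super-base n. Numbers <= n are left as-is."""
    if m <= n:
        return str(m)
    parts, e = [], 0
    while m:
        m, d = divmod(m, n)          # d = coefficient of n^e, with d < n
        if d:
            if e == 0:
                parts.append(str(d))
            else:
                coeff = f"{d}*" if d > 1 else ""
                parts.append(f"{coeff}{n}^({super_base(e, n)})")
        e += 1
    return " + ".join(reversed(parts))

print(super_base(23, 2))   # 2^(2^(2)) + 2^(2) + 2^(1) + 1
print(super_base(258, 2))  # 2^(2^(2^(1) + 1)) + 2^(1)
```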

In the 1940s, Goodstein considered the following process:

  • Begin with a number m. Write it in super-base 2.
  • Replace all the 2s with 3s and subtract 1. Write the result in super-base 3.
  • Replace all the 3s with 4s and subtract 1. Write the result in super-base 4.
  • Etc.

For example, if m=10, we get the sequence

  • 10=2^{2^1 +1} +2^1,
  • 83=3^{3^1 +1} +3^1 -1=3^{3^1+1}+2,
  • 1025=4^{4+1} +1,
  • 15625=5^{5+1},
  • 279935=5*6^6 +5*6^5 +5*6^4 +5*6^3 +5*6^2 +5*6 +5,
  • 4215754=5*7^7 +5*7^5 +5*7^4 +5*7^3 +5*7^2 +5*7+4,
  • etc.

As this example illustrates, the numbers we produce this way grow very very quickly.
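Here is a hedged Python sketch of the process (the function names are mine): bump(m, b) rewrites m from super-base b to super-base b+1, i.e., it replaces the bs with (b+1)s, and goodstein iterates the bump-and-subtract step.

```python
def bump(m, b):
    """Rewrite m from super-base b to super-base b+1: recurse on the
    exponents; coefficients (which are < b) are left unchanged."""
    total, e = 0, 0
    while m:
        m, d = divmod(m, b)
        if d:
            total += d * (b + 1) ** bump(e, b)
        e += 1
    return total

def goodstein(m, steps):
    """The first `steps` moves of the Goodstein sequence starting at m."""
    seq, b = [m], 2
    for _ in range(steps):
        if m == 0:
            break
        m = bump(m, b) - 1
        b += 1
        seq.append(m)
    return seq

print(goodstein(10, 5))  # [10, 83, 1025, 15625, 279935, 4215754]
```

If you run this on m=4 instead, the sequence begins 4, 26, 41, 60, 83, 109, \dots and looks perfectly tame; as mentioned below, it nevertheless takes about 10^{121210694} steps to reach 0.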

Goodstein proved that, nevertheless, the sequence always ends at 0.

In 1982, Kirby and Paris showed that this cannot be proved in Peano arithmetic. So, the statement “all Goodstein sequences eventually terminate” is an undecidable statement.

The reason for this is that the numbers produced by this process really grow ridiculously large. To give you an example, if m=4, the sequence takes 3 \times 2^{402653211}-2 \approx 6.895 \times 10^{121210694} steps to reach 0. The number of elementary particles in the universe is estimated well below 10^{90}.

Most of the natural combinatorial examples undecidable in Peano arithmetic that we know of are, loosely speaking, of the same kind as this one: They say that a certain (obviously computable) function is defined on all numbers. (The function in this case is the number of steps that it takes the sequence to terminate.)

In technical terms, this is a \Pi^0_2 statement. Friedman’s most recent examples are \Pi^0_1, and this is perhaps the reason why this is a big deal and New Scientist ran an article on it. (I’ll explain what this means in a later post.)

OK: the function that assigns to m the number of steps in the sequence that begins with m grows fast. So what?

Deep results in proof theory (an area of logic) associate to Peano arithmetic a collection of functions: if Peano arithmetic can prove that a function is defined on all numbers, then the function is actually bounded by one of the functions in this collection. Goodstein’s function grows so fast that it eventually dominates every function in the collection. And this means that Peano arithmetic cannot prove that the function is always defined.

[The underlying reason is that we can code proofs by certain kind of ‘trees’, and we can estimate how tall these trees are. If a function grows faster than all the trees corresponding to proofs, then it escapes what Peano arithmetic can see. The growth of these trees is related to the complexity of the induction axioms. But explaining this would probably be more technical than desired.]

* * *

Here is a sketchy idea of what these functions are, and how we compare them: These are all functions from natural numbers to natural numbers. If f,g are two such functions, we say that g bounds f if f(n) < g(n) for all but perhaps finitely many values of n.

Now, define:

  • f_0(n)=n+1.
  • f_1(n)= f_0(\dots(f_0(n))\dots) where we apply f_0 n times. So, f_1(n)=2n.
  • f_2(n)=f_1(\dots(f_1(n))\dots) where we apply f_1 n times. So, f_2(n)=n 2^n.
  • f_3(n)=f_2(\dots (f_2(n))\dots) where we apply f_2 n times.
  • Etc.
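The finite levels of this hierarchy are easy to put into code; here is a minimal sketch (only usable for tiny inputs, since the values explode immediately):

```python
def f(k, n):
    """Finite levels of the fast-growing hierarchy:
    f_0(n) = n + 1, and f_{k+1}(n) = f_k applied n times, starting at n."""
    if k == 0:
        return n + 1
    for _ in range(n):   # range(n) is fixed before n is reassigned
        n = f(k - 1, n)
    return n

print(f(1, 5))  # 10   (= 2 * 5)
print(f(2, 5))  # 160  (= 5 * 2^5)
print(f(3, 2))  # 2048
# Don't try f(3, 3): the answer has over 120 million digits.
```

The diagonal n \mapsto f(n, n) is exactly the function f_\omega introduced next.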

Now, we continue! Let’s introduce a new symbol \omega. Intuitively, \omega is an infinite number.

[Actually, \omega is the first infinite ordinal.]

  • f_\omega(n)=f_n(n).
  • f_{\omega+1}(n)=f_\omega(\dots(f_\omega(n))\dots), where we apply f_\omega n times, as before.
  • Similarly, define f_{\omega+2}, f_{\omega+3}, \dots
  • Then we set f_{\omega 2}(n)=f_{\omega+n}(n).

And so on: in this way we can define f_{\omega 3}, \dots, f_{\omega^2}, \dots, f_{\omega^\omega},\dots, f_{\omega^{\omega^\omega}}, \dots

Each of these functions dominates all the previous ones. Computers cannot deal well with functions that grow exponentially, that is, like f_2. All these functions past f_2 are way beyond the realm of what we can actually compute in a feasible amount of time.

Goodstein’s function grows faster than all of them.

III.

Somebody asked:

Mathematical languages were supposed [to] provide a construct without ambiguities and mathematicians have figured out that this is not possible and the problems that are still ambiguous are called “undecidable”. This is useful in the sense that we need to understand our limitations but it does not devalue the math that is known by the 99.9% of the population, because mathematicians are asking questions and raising issues that are beyond the uses of math that we would encounter.

Is that accurate, or am I oversimplifying the issue at hand?

I am not sure I understand the way the word ambiguous is used here. There is a subtlety going on in the background, that distinguishes mathematical expressions from non-mathematical ones.

Namely, we have the two closely related, but different notions, of truth and provability. In natural languages we do not have anything like that. As a consequence, philosophers and linguists have to develop theories of truth and they are usually found wanting.

Provability is designed so that anything provable is true. The completeness theorem, also due to Gödel, says that if something is always true, then in fact it is also provable. So, our purely syntactical notion of proof is actually quite powerful.

We define truth with respect to certain structures (models of the axioms we are considering). The definition is completely unambiguous. For any sentence and any relevant structure, either the sentence is true in the structure, or else it is false in the structure.

However, any interesting theory with infinite models actually has many models. For example, Peano arithmetic has as a model the set of natural numbers. This is the standard model. But there are many other structures that are also models of Peano arithmetic. This is an inherent weakness, if you want, of the (first-order) logic we use. Some of these models have the same properties as the standard model. Some have different properties. If a property is true in all models, then we can actually prove it. But it may be that a property is true in the standard model and false in some other model. This would be an example of an undecidable proposition.

The original proof by Kirby and Paris of the undecidability of Goodstein’s result was obtained by building a model of the Peano axioms where the theorem was false. The sketch I gave is a different one that seems to me to be easier to grasp.

When we say that a statement about numbers is true, we mean “it is true in the standard model” according to our definition of truth in structures.

If we say that a statement about numbers is provable, we mean that following the syntactical provability rules, it can be derived from the axioms in finitely many steps. If this is the case, the statement is true as well.

What Gödel showed is that these two notions are different.

He showed more. We began with the Peano axioms, but we could just as well have begun with just about any theory T. All we require is that 1) T is strong enough that we can represent certain operations inside it, and 2) the axioms of T can be identified by a computer program. For any such theory, there are undecidable statements.

Note that “undecidable” does not mean “absolutely undecidable” or any such thing. Undecidability is with respect to a certain formal theory. For example, Gödel’s sentence “I am unprovable in Peano arithmetic”, and Goodstein’s result are both provable in set theory.

The continuum hypothesis is undecidable from the Zermelo-Fraenkel set theory axioms. It is decidable by certain extensions of these axioms. There is an active program in set theory trying to clarify how to extend the Zermelo-Fraenkel axioms (or how to choose between its many possible extensions), and this extension should end up proving or refuting the continuum hypothesis.

[There is also the possibility of bi-interpretability that would sidestep having to choose between different incompatible candidate extensions, but I will not discuss this here.

There is a deeper subtler issue in the background, namely that when one discusses mathematical theories, one needs to discuss them within some mathematical framework, and one needs to distinguish between the theory and its framework (meta-theory). But I tried not to touch on this in the post.]

IV.

This post is about odds and ends, before we get to Friedman’s contributions.

Let’s recap: In the 1930s Gödel proved the incompleteness theorem: Any (computable) theory T with a decent amount of strength is necessarily incomplete, meaning that there are sentences A such that T cannot prove A and cannot prove \lnot A. Gödel’s examples are sentences that essentially say “I am not provable in T“.

With more work, Gödel proved that T cannot prove the sentence saying “T is consistent.” That is, unless T is inconsistent, of course; inconsistent theories can prove anything you want. This is a curious state of affairs. Working within the resources given by Peano arithmetic: either you can prove that Peano arithmetic is consistent, but then it actually isn’t; or else Peano arithmetic is consistent, but you cannot know it.

In the late 70s a series of sentences A such that Peano arithmetic cannot prove A and cannot prove \lnot A were discovered. They have in common that, unlike Gödel’s examples, they are not coding logical notions. Instead, they are genuine mathematical statements of the kind that a person working in combinatorics would find interesting. The example I gave is Goodstein’s theorem (an undergraduate student of mine at Caltech wrote a nice paper on this a few years ago, “The termite and the tower.“). There are others. A nice one is about a game, Hercules and the Hydra.

Another key feature that these examples have in common is that they are \Pi^0_2 sentences.

A \Pi^0_1 sentence is one of the form

\forall x_1 \forall x_2 \dots \forall x_k A(x_1,\dots,x_k)

where A is some (easy) assertion that a computer can verify.

A \Pi^0_2 sentence has the form

\forall x_1\dots \forall x_k \exists y_1 \dots \exists y_n A(x_1,\dots,x_k,y_1,\dots,y_n)

where, again, A is easily checkable by a computer.

(There are also \Pi^0_3, \dots, \Pi^1_6,\dots but we don’t need to introduce them here.)

For example, Gödel’s example “Peano arithmetic is consistent” is \Pi^0_1: For all numbers n, n is not the code of a proof of an inconsistency.
Given any n, a computer can easily decode whatever n codes, and check whether this is a proof of an inconsistency or not.

Goodstein’s theorem is \Pi^0_2: For all m there is an n such that the Goodstein sequence starting with m reaches 0 after n steps.

\Pi^0_1 sentences are different from \Pi^0_2 in a subtle way: Suppose A is \Pi^0_1 and A is undecidable from Peano arithmetic (PA), so PA does not prove A, and does not prove \lnot A. Then A is actually true.

This is because, if A is false, then \lnot A is true. But \lnot A says that there is a number n for which something easily verifiable is true. For any such specific number n, no matter how large it is, PA can prove that n has this easily verifiable property. The proof will probably be very long and boring but that’s irrelevant. Essentially, PA simply goes through the verification that a computer would do. This means that PA actually proves \lnot A. But we are assuming that this is not the case.

This is nice: At least, if we have an undecidable \Pi^0_1 sentence, we then know that it is true. (Although PA cannot prove it.)

But this is no longer true once we have an undecidable \Pi^0_2 sentence. In this case, there is no a priori way of knowing whether it is true. It depends on the particular sentence. In the examples from the 70s, the \Pi^0_2 sentences are true, but this is not so in general.

An “open problem” is a question to which we don’t know the answer. Perhaps the most important open problem in mathematics is the Riemann hypothesis. It is one of the $1,000,000 Millennium problems. Of the 7, only one has been solved so far, the Poincaré conjecture. The mathematician who solved it, Grigori Perelman, rejected the prize. It was in the news, so probably you have heard about this.

The Riemann hypothesis can be expressed as a \Pi^0_1 sentence. So, either we can find a counterexample to it, or else it is true (even if we don’t know it).

Another famous open problem is the twin primes conjecture: Are there infinitely many primes p such that p+2 is also a prime? (I don’t think that there is a monetary prize attached to this one.)

This is a \Pi^0_2 statement, so one could argue that there is a chance that the conjecture is undecidable, and that we have no way of knowing at all whether it is true or false.
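To see the \Pi^0_2 shape concretely, here is a small sketch (my own illustration, nothing from the article): the inner “\exists” of the twin-prime conjecture is a search that a computer can run, and the conjecture says precisely that this search always halts.

```python
def is_prime(k):
    """Trial-division primality test; good enough for small k."""
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

def twin_above(n):
    """Search for a witness to the inner 'exists': a prime p > n with
    p + 2 also prime. The conjecture says this halts for every n; if it
    is false, the loop runs forever once n is large enough."""
    p = n + 1
    while True:
        if is_prime(p) and is_prime(p + 2):
            return p
        p += 1

print(twin_above(100))  # 101 (101 and 103 are twin primes)
```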

The best we know at the moment is that if a strong form of the Riemann hypothesis is true, then there are infinitely many primes p such that one of p+2, p+4, and p+6 is also a prime.

V.

And now, without further delay, let’s talk a bit about Friedman’s results.

All throughout his career, Harvey has been interested in finding undecidable statements that are of intrinsic mathematical interest. Moreover, he wants these statements to be unprovable not just in Peano arithmetic but in much stronger theories.

His first success was a result on “tree embeddability”. A tree is a connected graph without cycles. Kruskal’s tree theorem essentially says that given any collection T_1,T_2,\dots of infinitely many finite trees, we can find two, T_i and T_j, say, with i<j and T_i contained in T_j.

This cannot be expressed in the language of arithmetic, since it talks about infinite sets, and Peano arithmetic can only deal with finite objects. Friedman found a version of this result that can be expressed in this language, and is undecidable: For all n, there is an m such that given any sequence T_1,\dots,T_m of finite trees, if T_k has at most k+n vertices, then (as before) we can find two, T_i and T_j, say, with i<j and T_i contained in T_j.

The difference with the previous examples is that now the result is undecidable over a significantly stronger theory. The Wikipedia article has links to references with details. This result was a big deal.

His next success, in the late 90s, was a result about trees that is true but cannot be proved, not just in Peano arithmetic, but in fact even in Zermelo-Fraenkel set theory. Zermelo-Fraenkel set theory is a very powerful theory; one can formalize just about all of mathematics in it, and it is way more powerful than Peano arithmetic. This means that this result about trees is in a precise sense much, much harder.

Here is part of the abstract to the relevant paper:

We introduce insertion domains that support the placement of new, higher, vertices into finite trees. We prove that every nonincreasing insertion domain has an element with simple structural properties in the style of classical Ramsey theory. This result is proved using standard large cardinal axioms that go well beyond the usual axioms for mathematics. We also establish that this result cannot be proved without these large cardinal axioms.

This was incredible. I remember the news of this result, when I was starting as a graduate student at Berkeley, and the way the announcement was received. I was fortunate enough to attend a series of talks that Harvey gave there, at the end of which he sketched the proof that this result required the use of large cardinals.

Large cardinals are a big part of the reason why I became a set theorist. The basic idea in set theory is that we can make sense of the notion of different infinities. Large cardinals are infinities so large that we can use them to produce models of set theory. If you have a model of a theory, the theory is certainly consistent. Since set theory cannot prove its own consistency (by Gödel’s result), it follows that we cannot prove that these cardinals exist. Nevertheless, they have been extensively studied, and a remarkable fact has emerged, in that any strong statement that we have considered turns out to be in some sense “equivalent” to the existence of some large cardinal. So they give us a way to calibrate and compare the strength of mathematical statements.

These results of Friedman from the late 90s are \Pi^0_2 statements (like the results from the 70s), and reach modest large cardinals.

Now, the really big deal:

From around October last year on, Harvey has obtained undecidable statements that are combinatorial in nature, of complexity \Pi^0_1, and that reach very large cardinals, some of them almost at the very top of the hierarchy.

(So, in a sense, he has succeeded in his program on all counts. But he still does not feel completely satisfied, of course.)

Moreover, these examples don’t seem to be isolated, but rather a natural outcome of his work developing what he calls Boolean relation theory, which should be a natural framework for this type of result. A draft of his book on this topic can be downloaded from his site.

I know quite a few logicians have expressed serious interest in the results and techniques of the book, so on top of their `intrinsic’ significance, I expect they will prove very useful in the near future.

The article in the New Scientist on Harvey’s work is a bit silly although well-intentioned. Of course, it is not that arithmetic is in trouble and Harvey is attempting to save it. But Harvey’s work is genuinely significant, and it is a rare occasion when the general public has the chance to hear about some of the most recent results in some technical field, so the publicity is more than welcome.

Within the limitations of my own understanding, I hope to have given some framework for the results that the article wants to discuss, and to have given a small hint of the kind of statements that one actually deals with in practice. I’ll be more than happy to try to clarify whatever I can.


7 Responses to “To infinity and beyond: The struggle to save arithmetic”

  1. Thanks for this excellent compendium of undecidable (unprovable) results!

    I guess we’ll have to agree to disagree on whether my article is silly. All I’ll say is that if you want to get technical mathematics into the broader press, you won’t get very far without some sort of dramatic narrative! (Actually, the print edition was entitled ‘something doesn’t add up’, while the online version became ‘the struggle to save arithmetic’ – these were editors’ decisions not mine.)

    Well, of course the important thing is not my article, but Harvey Friedman’s new work, and I think it is very exciting. It is already spawning further developments – for example, I recently encountered some striking work in progress by Bovykin and De Smet (here [pdf]) which translates unprovable statements at various levels, right up to Mahlo cardinals, into quantified Diophantine sentences.

  2. Hehe. Thanks for the comment! I think it is very nice and unusual to see technical results of this kind discussed for a broader audience, so thank you for that as well. I have been following the work of Bovykin with close attention; it was this that prompted me to try and study Harvey’s book.

  3. Todd Eisworth says:

    So what does the existence of an (ostensibly true) \Pi_1^0 sentence with large cardinal strength say about the consistency of the corresponding large cardinals? How far can this be pushed to an argument about the “truth” of large cardinals?

  4. Hi Todd. I am honestly not sure. Friedman says in the introduction of his book that “The issue of why we believe, or why we should believe, that the relevant axiom systems used in this book are consistent – or even that they prove only true arithmetic sentences – is an important one, and must lie beyond the scope of this book.”

    I guess I am not convinced that arguments pro-large cardinals based on their arithmetic (or even \Pi^0_1 and therefore “real” even for finitists) consequences can be seriously pushed to convincing conclusions. By this I mean, most likely these arguments carry a lot of weight if one is not a set theorist. But as a set theorist, I don’t think this approach can say anything meaningful about the length of the universe, and this is relevant to the issue of truth rather than mere consistency.

    The inner model program seems a much more convincing avenue of research in this direction. Steel has a nice series of (three?) posts on the FOM list on what we in the trade call “the speech”, that touches on these issues.

  5. thomasthethinkengine says:

    I wrote a piece on logic recently, and a reader recommended to me the New Scientist article, which is hidden behind a paywall. I ended up here.

    I found much of the above tantalising but frustratingly unintelligible.

    Anyway, here’s my piece. If you want to read and comment in a way that helps me understand the above, I’d appreciate it!

    http://thomasthethinkengine.wordpress.com/2010/08/23/and-thats-when-i-realised-that-almost-everything-makes-no-sense/

  6. I just referenced and quoted from your above article on The Roman Empire forum in this post: http://bit.ly/bJijLV.

    The post was a reply to another member who apparently read the New Scientist article, or an abstract, and made this claim on the forum, “Unfortunately, some mathematicians have noticed the rules ‘don’t add up’ for some reason, and although it isn’t proven, there appears to be a quality to finite values that we gloss over in our desire for a simple decimal set of rules. There’s been an article in the science press on this point just lately.”

    His post was the last in a series sparked by a discussion of the use of abacuses by the Romans and other ancient peoples as described in the paper http://bit.ly/af6J9t.

  7. Dan says:

    Hey, isn’t it necessary to know that the large cardinals or whatever are independent of ZFC? For all we know Friedman’s statements have a stupid counterexample and can be disproved in PA or something. Unless I misunderstood.
