(Wherein I quote from Twitter.)

This is my excuse to put this page to use. It all started, more or less, around here:

Moreno, Javier (bluelephant). “Es muy bueno este artículo de Hamkins sobre algo así como sociología de la teoría de conjuntos: http://arxiv.org/abs/1203.4026” 2 April 2012, 3:39 p.m. Tweet.

Villaveces, Andrés (gavbenyos). “@bluelephant lo siento bastante crudo – me parece que Joel simplemente pone en versión impresa cierto consenso, pero falta argumentar” 2 April 2012, 4:32 p.m. Tweet.

(Loosely translated: “This is a very good article by Hamkins on something akin to sociology of set theory” “I find it weak — I think that Joel is simply putting in writing a known consensus, but needs to argue for it more”) Long story short, I feel Andrés is right, but I thought I should elaborate my view, at least somewhat. Originally I considered writing a blog entry, but it quickly became apparent it would grow longer than I can afford time-wise. So, I used twitter instead.

(But there is a serious caveat, namely, it seems that the paper is intended for a general mathematical and philosophical audience, so the omission of technical issues is most likely intentional. Javier even remarked as much.)

What follows is the series of tweets I posted. It is not a transcription; to ease reading, I have added a couple of links, reformatted the posts rather than continuing with the MLA-suggested approach, very lightly edited the most obvious typos, and added a couple of phrases where I felt more clarity was needed. [**Edit, April 22:** I also removed a line comparing a statement with one of Woodin’s, as the proof of the underlying claim has been withdrawn.]

I started at 10:44 p.m., with a warning: “(Technical pseudo-philosophical thoughts for a few posts.)”

Earlier today there was a little discussion around here on Hamkins’s recent essay on CH, “Is the dream solution of the continuum hypothesis attainable?” http://arxiv.org/abs/1203.4026

What Joel calls “the dream solution” is a two step procedure:

- Find an obviously true new set-theoretic principle.
- Show that the principle decides the Continuum Hypothesis.

He then proceeds to argue that such a solution is unattainable.

“Our extensive experience […] prevents us from looking upon any statement settling CH as being obviously true.”

I could only find one convincing argument in the paper, the most familiar one: By very mild forcing, we can change the truth value of CH. The point here is that, even if we admit large cardinals, we expect them to satisfy the Levy-Solovay theorem. (“Even if”, because Joel doesn’t seem to admit large cardinals as true.)
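For readers who have not seen it, the Levy-Solovay theorem admits the following standard formulation (stated here for measurability; analogous statements hold for essentially all traditional large cardinal notions):

```latex
% Levy-Solovay, standard formulation for measurable cardinals:
\textbf{Theorem (L\'evy--Solovay).} Suppose $\kappa$ is a measurable
cardinal and $\mathbb{P}$ is a forcing notion with $|\mathbb{P}|<\kappa$.
Then $\kappa$ remains measurable in any forcing extension by $\mathbb{P}$
(and, conversely, forcing of size less than $\kappa$ cannot create
measurability).
% Since the truth value of CH can be changed by forcing of size well below
% the first measurable cardinal, such large cardinals cannot decide CH.
```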

Is Levy-Solovay, plus the belief that we understand the large cardinal hierarchy well enough, all there is to the Continuum problem?

I think there is strong mathematical evidence for admitting large cardinals as part of the set theoretic universe. See

http://mathoverflow.net/questions/44095/arguments-against-large-cardinals/44185#44185

The main reason why we favor large cardinals is interpretative power. A more sophisticated presentation of Joel’s argument would have to show that large cardinals should force us to have bi-interpretability between (suitable) theories where CH holds and where it fails. That is what Woodin’s “Ultimate L” theory would really accomplish: If

- We can isolate true K without anti-large cardinal restrictions, and
- Prove the Omega-conjecture,

then, and only then, would we be able to use Levy-Solovay as a convincing argument. Prior to these, or analogous developments, its use would be premature. (So, if one were to advocate an unattainability situation, a more serious mathematical discussion than in the paper seems needed.)

Woodin had previously advanced a theory that suggested a way of settling CH, by refuting it. This was based on Omega-completeness. Unfortunately, the key point of the argument was a “simple definability” result, based on what we now know was a mistake. (Roughly, Woodin thought that what we now call hod mice could not accommodate large cardinals too high up in the hierarchy. This was refuted by Sargsyan.)

The bi-interpretability position is actually rather radical. We abandon the pursuit of “the right theory” beyond its large cardinal core. If we didn’t, then one could make a strong argument for not-CH based on

- The naturalness of reflection principles. (Which I would go so far as to call the “true” part of forcing axioms.)
- Mathematical experience. (Which goes against some of the other arguments in Joel’s paper.)

In effect, there is no perfect symmetry between CH universes and not-CH ones. CH universes are good for anti-classification results. Not-CH universes (by which I really mean universes with rich reflection principles) are good for structure theorems. This has been argued well by Todorcevic and others, and weakens a naive defense of CH that an analyst could make; I briefly argued this here. And, if there is a way of rescuing some of Woodin’s “simple definability” approach, then I think we will be making progress towards strong set-theoretic evidence against CH.

(The point of tweeting was to keep this from growing out of control and never-ending…)

As I was posting, Andrés Villaveces made comments along the way, which prompted some responses:

- Yes, I agree; Joel’s position with respect to large cardinals seems very strange to me. It goes beyond not advocating for them, even. I find set theory to be different from “model theory of ZFC”, which is the closest I can come to a coherent description of part of his position. I do not think that all models share the same status, even if they are ill-founded or not even omega-models. And definitely large cardinals are part of the set-theoretic landscape. (So I disagree with Shelah, it seems. So I may very well be wrong; what do I know?)
- Of course, the assertion that reflection principles are the true part of forcing axioms is meant to be provocative. The way I see it, the justified extensions of ZFC come in two stages. In the first one, we get large cardinals. In the second, we definitely need to go beyond first order, and the closest we currently get to true maximality principles is forcing axioms. But I do not see how to justify that forcing axioms are *true*, in whatever sense one uses that word. On the other hand, they provide us with a rich and coherent theory, which usually traces back to reflection principles. And reflection can itself be naturally justified by arguments very similar to those we first advance for large cardinals. (The argument of course needs elaboration, but this is how I usually begin. I know I’ve written on this before, but couldn’t quite locate where at the moment; I will add a link if I find something.)

There is one last point I feel I need to make: Levy-Solovay depends rather explicitly on certain large cardinal behavior. Woodin has suggested some large cardinal schemata (at the level of rank-to-rank embeddings and beyond) that are *fragile* under forcing. The schemata are far from reasonably understood at the moment, and they may end up being inconsistent. But that is not the point; the point is that they suggest that Levy-Solovay may not be the end of the story, and that large cardinals, properly understood, may after all end up settling CH.

So: Does CH have a truth value, or can we expect a “dream solution”? I do not know. But a more serious argument is needed in any case.

And with that, I stopped: “(Apologies for the length.)” at 11:46 p.m.

Let’s see if I understand what you wrote:

There is a point in Hamkins’ paper where he claims the following (“using” Levy-Solovay):

What you’re saying is that this is not entirely true? (What does “large cardinal schema” mean? I imagine some set of axioms that not only says that this or that type of large cardinal exists (and maybe how many), but also specifies certain non-trivial interactions between them.) So the idea is that there are (in the works) “large cardinal schemata” (settling CH) that would be hard to keep alive after the kind of forcing that would turn a universe satisfying them together with one status of CH into a universe still satisfying them but with the opposite status of CH?

A bit more on set theory vs “model theory of sets”. Still not quite a full picture, and sadly rather poorly argued on my part:

https://plus.google.com/103404025783539237119/posts/T5E1EhjRBG3

Hi Javier.

The examples I had in mind come from work on very strong large cardinals. The formulation is technical, but people may find it interesting:

Kunen proved that if $j:V\to M$ is a nontrivial elementary embedding with critical point $\kappa$, and $\lambda$ is the first fixed point of $j$ past $\kappa$, then $V_{\lambda+1}\not\subseteq M$.

In the example, one is looking at embeddings $j:L(V_{\lambda+1})\to L(V_{\lambda+1})$ with critical point below $\lambda$, so this is right at the boundary of inconsistency. Suppose also that $\lambda$ is sufficiently large (a limit of supercompact cardinals).

In some sense, Woodin has shown that features of the theory of determinacy have nice analogues here, and he proposes some “axioms” that would explore how far the analogy goes. Anyway, one of these is:

This is a large cardinal statement. If such a beast is at all consistent, then it is generically fragile: adding a Cohen real makes the axiom false. In fact, any forcing that adds a new $\omega$-sequence of ordinals makes the axiom false.

So here we have a large cardinal statement to which the Levy-Solovay paradigm does not apply. For all we know, this axiom settles the continuum hypothesis.
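To display the contrast with the Levy-Solovay behavior schematically (here $A$ stands for the fragile axiom just discussed):

```latex
% Levy-Solovay behavior: small forcing preserves the large cardinal
% (shown for a measurable $\kappa$):
|\mathbb{P}| < \kappa \;\Longrightarrow\;
  \Vdash_{\mathbb{P}} \text{``$\kappa$ is measurable''}.
% Generic fragility: any forcing adding a new omega-sequence of ordinals
% (in particular, adding a single Cohen real) destroys the axiom:
\mathbb{P} \text{ adds a new } \omega\text{-sequence of ordinals}
  \;\Longrightarrow\; \Vdash_{\mathbb{P}} \neg A.
```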

Ok. One may say that such an “axiom” is far from natural. But we are not saying that this specific axiom solves CH. The point is that it suggests that Levy-Solovay may be an artifact of what Guy calls the strong law of small numbers,

http://www.jstor.org/stable/2322249

[Of course, one would then have to argue for the naturalness of the large cardinals that would end up being involved, etc. But this would certainly undermine the position that Levy-Solovay indicates CH has no “true” truth value.]

Funnily, I’ve been finding G. H. Hardy to be a good source (not of examples; of analogies, perhaps?) on this topic.

[Hardy is known for his untamed views on “pure mathematics” as the true kind. But we all know that some of his results turned out to have important applications. For example, his observation that (Mendelian) inheritance allows species to maintain genetic diversity. Otherwise, after a few generations, we would just have this:

http://en.wikipedia.org/wiki/Home_(The_X-Files)
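(As an aside, Hardy’s observation, the Hardy–Weinberg principle, is easy to check numerically: under random mating, allele frequencies are stable from one generation to the next. A minimal sketch; the starting frequency 0.3 is an arbitrary illustrative value, not anything from the discussion.)

```python
def next_generation(p):
    """Given the frequency p of allele A (so q = 1 - p for allele a),
    return the frequency of A among offspring under random mating."""
    q = 1.0 - p
    AA, Aa = p * p, 2 * p * q  # Hardy-Weinberg genotype proportions
    # Each AA individual contributes only A alleles; each Aa contributes
    # A half of the time.
    return AA + 0.5 * Aa

p = 0.3
for _ in range(10):
    p = next_generation(p)
print(abs(p - 0.3) < 1e-12)  # → True: the frequency of A is unchanged
```

Algebraically this is just $p^2 + pq = p(p+q) = p$, which is the whole point: diversity is not washed out by inheritance.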

I bring him up because of the following:]

The prime number theorem was proved using complex analysis at the end of the 19th century. In fact, it is essentially equivalent to the assertion that the zeta function does not vanish on the line $\mathrm{Re}(s)=1$. Hardy wrote:

Of course, we now have an elementary proof:

http://www.math.columbia.edu/~goldfeld/ErdosSelbergDispute.pdf
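For reference, the equivalence mentioned above, in symbols:

```latex
% The prime number theorem and its equivalent zero-free statement:
\pi(x) \sim \frac{x}{\log x} \quad (x \to \infty)
\qquad\Longleftrightarrow\qquad
\zeta(1+it) \neq 0 \ \text{ for every real } t.
```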

But the paragraph that contains this quote actually redeems Hardy, when he says:

Lovely blog, Andres. I am going to try to stay away from these arguments, but I will try to say something as succinctly as possible.

In my modest opinion, the goal of set theory isn’t solving CH or any one particular question. Set theory is a foundational subject, and its beauty is in the fact that set theorists have put out, and continue to put out, simple basic principles that provide the framework needed to anticipate any mathematical object whatsoever and, when these objects turn up, to study them completely. This has been the case with ZFC; its success in functional analysis and other areas of math is just striking. In recent years this is actually becoming the norm for more complicated frameworks as well: look at the use of an inaccessible cardinal in the proof of Fermat’s Last Theorem, or at Farah’s work on the Calkin algebra. Of course, there are many other examples.

ZFC, with all of its strange consequences, like the Banach-Tarski paradox, has been accepted as the fundamental set of axioms because of the kind of foundations it provides, not because it turns Banach-Tarski, or any one statement for that matter, into a true statement. As set theorists, we should take pride in our deep foundational intuition that has anticipated all that structure. Who would have thought that thinking about iterating partial orderings that add reals in a way that doesn’t collapse omega_1 was going to lead to “all automorphisms of the Calkin algebra are inner”?

The lesson is that there is an enormous and deep combinatorial structure out there, hidden by large cardinals, and unravelling it via our set-theoretic tools (i.e., forcing and inner model theory) eventually leads to striking results in other areas of math as well. Unravelling that structure should be our primary goal, which, I would say, is of utmost intellectual value, while settling CH one way or another has far less intellectual value, at least to me.

This enormous and endless study has led to many natural frameworks (and I am intentionally refraining from saying “theories”), and it looks like we have developed competing points of view. But are they really competing?

They are not! A true competition would arise if we had two natural axiomatic systems T and S that solve many problems, but with S not interpretable in T, nor vice versa. Well, do we really have such a situation? No. A true PFA person cannot avoid analyzing frameworks that say V is a hod mouse, and vice versa. This is because there is a natural interpretation of V=Hod Mouse in the PFA universe and, similarly, the other way around.
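One standard way of making the interpretability talk precise (the notation here is ad hoc, just for illustration):

```latex
% T is interpretable in S if every model of S uniformly defines,
% via formulas in the language of S, a model of T:
S \trianglerighteq T \;:\Longleftrightarrow\;
  \text{every } M \models S \text{ uniformly defines some } N \models T.
% Mutual interpretability: each theory can simulate the other.
S \sim_{\mathrm{int}} T \;:\Longleftrightarrow\;
  S \trianglerighteq T \ \text{ and } \ T \trianglerighteq S.
```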

So PFA isn’t just some isolated shiny city, as Hamkins seems to refer to these natural frameworks; it is a very shiny city that is inherent in any other shiny city that we have ever encountered. The conjecture is that this will always be true, no matter what else we unravel from large cardinals. I see Woodin’s axiom, or rather the underlying technical theory of analyzing the hod of models of determinacy, as our primary way of interpreting one natural framework into another. So far it has been astonishingly successful, but it hasn’t gone all the way.

We can never remove subjectivity from anything. There will still be people who, even if starting from their favorite natural axiom X naturally leads to any other natural axiom Y, will defend their view that X is somehow better. But I cannot read this in any other way than saying English is the best language. I am allowed to be subjective, and I do think that English is the best language, but perhaps we should realize that this isn’t really a scientific view of languages.

The beauty of theories about infinity is that there are probably infinitely many natural environments that settle many interesting questions in other areas, but one only looks better than the other because we haven’t discovered the bridge between them yet. They become the same as soon as we do.

So from a purely foundational, or rather set-theoretic, point of view, it makes no difference which framework you choose, as they are all the same: one is interpreted into the other. We can, and probably will, make a choice and, say, make the norm that the international language of mathematics is PFA (English), and then we will have a hod mice nationalist claiming it should be V=hod mouse (French). But it would only be a matter of taste, not a matter of scientific debate. Nor would it mean that the PFA people won, nor that CH is true, but just that we have written tons of books in many other areas of math, and the authors of these books realized that, for what they would like to do, PFA was a better choice than its natural interpretation in V=Hod Mouse, just like for commercial reasons it is better to write a calculus book in English than in French.

So I would say that Hamkins’ point that any theory is just as good as any other is correct, but not for the reasons he advocates: rather, it is because, from a foundational point of view, one is interpretable in the other, and not because they form completely disjoint multiverses.

Hi Grigor. Many thanks! Very nice contribution. (Plus: you have expressed very coherently what I wanted to say.)

Thank you for giving my dream solution article some thought; let me just jot a few thoughts here in response. As you guessed, Andres, it was indeed intended for the largely philosophical audience of the London Birkbeck meeting last summer. My longer paper The set-theoretic multiverse, from which it was adapted, is somewhat more mathematical.

My target in these articles has been the strong Platonist position, what I think of as the classic California view, that there is an absolute set-theoretic background with definitive answers to questions such as CH and the existence of large cardinals and so on. Surely we must look upon the bi-interpretability position as an enormous retreat from that view, since it gives up on the central claim that set-theoretic assertions have an absolute truth value. My view is that the bi-interpretability position is at bottom a multiverse position, and for my opponents to adopt it means that I’ve essentially won the argument I have undertaken.

Your position here, Andres, is a little blurry, since you agreed with Andres Villaveces’s suggestion that my article expresses a known consensus, but also advocate explicitly for the California position in your commitment that large cardinals exist and there is no projective well-ordering of the reals (my view: both of these are easily destroyed by class forcing; it is not just about Levy-Solovay); and you also express agreement with Grigor’s presentation of the bi-interpretability position. But aren’t these three positions contradictory? I think you hold the California view closest, and of course the debate is more interesting if you do, since in this case there would be a genuine disagreement.

The point of the multiverse position is not to replace the subject of set theory with another subject, what you call the model theory of set theory, but rather is to recognize that the fundamental nature of set theory is already model-theoretic. What we are doing is building diverse models of set theory from one another, usually with large cardinals and forcing and definability and so on. This is what the subject of set theory is about, rather than about a hopelessly vague task of divining the truths of a mysterious but somehow absolute V, whose basic features nevertheless remain unknown, and even whose existence is in question.

I take issue with two specific remarks. First, Platonists often mischaracterize formalism as requiring one somehow to treat all theories as equal, but this is a beginner’s error; I know of no formalists like that, and they are just as able to hold preferences amongst their theories as you are. So I take exception to Grigor’s assertion that I hold “any theory is just as good as any other”, since I don’t hold that view, and I state the opposite view explicitly in my papers. A preference for large cardinals (which I do hold) is something like a vector in the multiverse, pointing us toward the more interesting universes, while recognizing that no single universe tells the whole story; there will always be a better one. And isn’t this the upward extendible nature of the large cardinal hierarchy anyway?

Second, on David Roberts’s page, you say “Only Harvey Friedman has seen usefulness in studying non-standard models of set theory”, but this is just false. There is a huge literature on this topic, with excellent work by Ali Enayat and many others. One of the main lessons is the fundamental similarity of a large part of the theory for models of PA versus models of ZFC, with the relatively few exceptions to this lesson being highly interesting. My view is that an important part of the future of set theory lies with this kind of work. (And look for another paper of my own very soon! I think you’ll find it surprising.)

Hi Joel! Many thanks for the thoughtful response. I wish I had more time, but here are some quick comments. (By the way, there were some formatting issues. Let me know if I broke something by mistake.)

First, I agree Ali’s work is excellent. I think of his results (on definable ordinals and such) as *model theory of set theory*, which is why I excluded them. But yes, there are more than a few exceptions; Harvey’s work is the one that I can think of most immediately, but it is certainly not the only one. That was a dumb mistake on my part. [And of course, then there is the issue of whether it makes sense to talk of model theory of set theory as a different thing from set theory proper, and most likely my views here are not quite carefully thought out. I do hope to expand on this at some (future) point. Hopefully I’ll find the time.]

About the “beginner’s error”, I guess I’m guilty again. (But I’m glad I was mistaken here.)

As for my own position:

1. I find a few technical issues with proper class forcing as opposed to set forcing, so I separate them in my mind. But I do not feel at the moment I can articulate all the issues precisely enough to make a strong case fully justifying this separation.

2. My agreement with Andres was on Levy-Solovay being the standard argument towards a “no truth value for CH” position.

3. I do think large cardinals are part of basic set theory. I do not think of this as platonism. Rather, I think the resulting theory has more (mathematical) merit to be called set theory than ZFC does. The mental picture where we favor a theory if it has more interpretative power than another favors large cardinals, I think. As for whether I think something or other is “really true”, and what that could possibly mean: I usually use that phrasing for convenience, but it is not a philosophical view. I do not think I fully understand platonism, and what I think it is, is certainly too naive to be right.

4. So, when I do set theory, I think of large cardinals as being there. If I need a universe without them, I can always interpret that as working with rank initial segments or thin inner models. I like the bi-interpretability position, and agree it is a tremendously radical departure from a position advocating for a unique true theory. Whether at the end of the day it will be the favored position, I think it is too early to tell. Aesthetically, I like my set-theoretic universe to have strong reflection principles. But I do not think of these yet as part of what I call “set theory”; again, I think it is too early to tell.

(5. So, I hope this shows my position is not so contradictory after all.)

(And, sadly, I have to run. Again, many thanks.)

The new paper I mentioned is now available at http://jdh.hamkins.org/every-model-embeds-into-own-constructible-universe/. The main theorem is that every countable model of set theory $\langle M,\in^M\rangle$, including every well-founded model, is isomorphic to a submodel of its own constructible universe $\langle L^M,\in^M\rangle$. Another way to say this is that there is an embedding $j:M\to L^M$ that is elementary for quantifier-free assertions, so that $x\in^M y\iff j(x)\in^M j(y)$. It follows from the proof that the countable models of set theory are linearly pre-ordered by embeddability, and indeed, pre-well-ordered by embeddability.

Hi Joel, Thanks for the update. Really nice (and yes, surprising) result!

This is to preserve some points that Andres made, and linked to in comment #2 above.

Andres Caicedo wrote on Apr 4, 2012 (edited):

Hi, +John Baez. You are right that group theory is largely the model theory of the axioms of groups. But the situation here is different. A poor analogy: an analyst may be studying the reals, and it may prove convenient to consider from time to time ultrapowers of the real line and whatnot. (Tao has shown that I am not telling an entirely fictional story. And of course Tao was not the first, or the only, analyst to do this.) But I do not think an analyst would ever describe what they do as model theory of the axioms of analysis. You may argue that this is indeed a poor analogy, as the axioms are not first order. But then I would replace “analysis” with “number theory”. Certainly number theory is not the study of models of PA. They are different beasts. Of course, related. But definitely different.

The thing is, in set theory we do study models of ZFC. But we are not doing model theory of ZFC. An example: the Keisler-Morley theorem is model theory of ZFC. The result says that every countable model of ZFC has an elementary end-extension; that is, if A is a countable model of set theory, then there is an elementary extension of A, call it B, with the property that if b is in B, a is in A, and B thinks that “b is an element of a”, then b is in fact in A, and A also thinks “b is an element of a”. This is a nice result, with an elegant proof. It is about arbitrary (countable) models of ZFC, but not about set theory proper.
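In symbols, the end-extension property just described says that sets of A gain no new elements in B:

```latex
% B end-extends A:
\forall a \in A\ \forall b \in B\
  \bigl(\, B \models b \in a \;\Longrightarrow\; b \in A \,\bigr).
% Keisler--Morley: every countable model of ZF has a proper elementary
% end-extension.
```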

There are many nice purely combinatorial results in set theory that do not mention models at all, but I do not mean that a result is model theoretic if it mentions models. “If all projective sets of reals are Lebesgue measurable, then the aleph_1 of the universe of sets is an inaccessible cardinal in the constructible universe” is a result in set theory, and it mentions models. It is not about arbitrary models, but rather about the way that certain canonical objects interact in the set theoretic universe V.

Now, the real point is this: We do not study models for their own sake. We want to gain an understanding of what happens in V. Gödel taught us that something is provable (in first order) iff it is true in all models. But we are interested in something other than first-order proofs. For example, the models we want to consider are not arbitrary, but omega-models. They are correct about the theory of the natural numbers and, in particular, about assertions of consistency of theories. Harvey Friedman has seen usefulness in studying non-standard models of set theory. On the other hand, this reliance on non-standard models is a drawback of some of his results. For if a sufficiently simple combinatorial statement is equivalent to the 1-consistency of “ZFC + there is a Mahlo”, or whatever, we automatically know that it is true, even if it is not provable in a first-order theory (which is a rather artificial concept, after all).
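(The completeness theorem just invoked, in symbols: provability from a first-order theory coincides with truth in all of its models,

```latex
T \vdash \varphi \quad\Longleftrightarrow\quad
\text{for every model } M,\ \ M \models T \text{ implies } M \models \varphi,
```

while, as discussed, the models relevant to set theory form a much thinner class.)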

In fact, we are interested in a much more restricted class of models than the omega-models. We want our models to be well-founded (so they are correct about Sigma^1_1 assertions). These models we automatically identify with their transitive collapses. And then we further require that they contain all the ordinals (so they are correct about well-foundedness, about Sigma^1_2 assertions, etc). And even this is not enough, so we distinguish “thin” models, such as the constructible universe L, from other models that better resemble the universe. L is a model of ZFC. But “ZFC” is not the same as “set theory”.

I find this is an important distinction to make. I’ll be more assertive here than one perhaps should be; think of the rest of this paragraph as a sort of caricature that hopefully conveys the point: We know that there are no projective well-orderings of the reals, for example. Joel will immediately protest at this, and say that, yes, there are or, at least, yes, there may be, say if V=L. But these are different assertions! Mine is that in the set theoretic universe, where we have large cardinals and all the richness there is, no such well-orderings exist. Joel’s is that there is some weak theory for which there are some models with some property. And not distinguishing between these two assertions, in my opinion, paints a distorted picture of the set theoretic enterprise.

As I said in the linked post, I think that the bi-interpretability position, where we do not choose between two theories if they have the same expressive power, is a truly radical one. But this is not the same as favoring a view where any universe of sets is just as good as any other. That unbounded desire for generality ends up being limiting and distracting.

[At some point I should expand on this. And sure, my view espouses large cardinals as given. But years of experience have shown us that this is not just a fashion, but the way to go.]

[*Edit, May 14, 2012*: I removed one word from the original, in view of remarks of Joel. There are actually a few infelicities in this text, and perhaps I should thoroughly revise it at some point.-A.C.]

Hi Andres, I know this post is somewhat old, but I was wondering if you could elaborate ever-so-slightly on Woodin’s “simple definability” argument?

Hi Everett, sorry for the delay. I’ll try to add some details over this coming week (Thanksgiving break).

[…] on what mathematicians and others expect from proofs. (A previous exchange on a different topic is here. Twitter produces surprisingly nice results sometimes. What follows is a bit meandering, but […]