So, the downward Löwenheim–Skolem theorem implies that any first-order theory (in a countable language) that has an infinite model also has a countable model.
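To fix the statement I mean (downward Löwenheim–Skolem for a countable language), here it is in LaTeX:

```latex
% Downward Löwenheim–Skolem, in the form relevant here: for a first-order
% theory T in a countable language,
\[
  \exists\,\mathcal{M} \models T \text{ with } |\mathcal{M}| \ge \aleph_0
  \quad\Longrightarrow\quad
  \exists\,\mathcal{N} \models T \text{ with } |\mathcal{N}| = \aleph_0 .
\]
% Applied to T = ZFC: if ZFC is consistent, it has a countable model,
% even though that model believes it contains uncountable sets.
```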

This is, for some, considered a paradox (Skolem's paradox). I understand the reasons why this is not, in general, a paradox - notably, the difference between the notion of countability "outside" of our theory and within it.

However, where I'm having trouble from an epistemological perspective is when it comes to large cardinal axioms and their implications. Any proof (even in the original papers) I look at seems to either refer to the von Neumann universe (which is a concept that can either be interpreted within ZFC - and then has a countable model - or taken as a "naive" concept) or operate purely in ZFC and thus face Löwenheim–Skolem on the meta-level that we base our discussion on.

The trouble I have is how I can interpret large cardinal axioms in a meaningful way, then. If I rely on ZFC (plus some large cardinal axiom) as a basis for the abstract discussion, then I seem to be circular and end up with countable set-theoretic models no matter which large cardinal axiom I choose. But if I don't, then what sort of universe am I even thinking in? If I don't have to be specific, then what reason is there to think in anything but the full von Neumann universe V from the get-go?

Now, I know that various large cardinal axioms presumably imply a lot of results not provable from ZFC alone. What I don't understand is why their provable relative consistency has any epistemological value if we necessarily have to base ourselves in V before we can even reasonably discuss them - when full V already contains them on the meta-level.

I'm not really 100% sure the question, as phrased, makes total sense. Maybe my issue is better explained by stating that all of the results from large cardinals seem futile for anyone not already "believing" in the existence of an underlying von Neumann universe. And if we do, then why do we care about large cardinal axioms?

## Set-theoretic brain fart

### Re: Set-theoretic brain fart

"And if we do, then why do we care about large cardinal axioms?"

I think I have a bit of an answer for that. Aside from any philosophical or foundational considerations, large cardinals give working mathematicians "enough" sets for them to do their work.

The example I have in mind is something I read about Wiles's proof of Fermat's Last Theorem. Wiles works in the context of modern algebraic number theory, which assumes the existence of Grothendieck universes.

http://en.wikipedia.org/wiki/Grothendieck_universe

Basically (in my naive understanding) algebraic number theory depends on a lot of category theory. But you want to make sure that the categories you use are small, in the sense that they're sets and not proper classes. A Grothendieck universe is a set that's large enough to model the categories you need.
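To pin down what I mean (this is the standard textbook definition, not anything specific to Wiles's work): a Grothendieck universe is a set U closed under the basic set-forming operations, roughly:

```latex
% A Grothendieck universe is a transitive set U closed under the
% usual set-forming operations:
\begin{align*}
  x \in u \in U &\implies x \in U
    && \text{(transitivity)} \\
  u, v \in U &\implies \{u, v\} \in U
    && \text{(pairing)} \\
  u \in U &\implies \mathcal{P}(u) \in U
    && \text{(power set)} \\
  I \in U,\ \{u_i\}_{i \in I} \subseteq U
    &\implies \textstyle\bigcup_{i \in I} u_i \in U
    && \text{(unions of small families)}
\end{align*}
% An uncountable such U is itself a model of ZFC, which is why its
% existence cannot be proved in ZFC (Gödel's second incompleteness theorem).
```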

The existence of Grothendieck universes is equivalent to the existence of inaccessible cardinals, so it's independent of ZFC. But Wiles used that machinery to prove FLT. Does that mean that Wiles's proof requires an inaccessible cardinal?

Well no. It turns out that if you ask an expert, they'll tell you that nobody doubts that if they needed to, they could produce a version of Wiles's proof in ZFC. But nobody cares enough to have actually carried this out.

To me what this all means is that mathematicians do their work and use whatever foundations they need. If tomorrow ZFC were discovered to be inconsistent, nobody would care. Mathematicians would keep doing their work and in another hundred years the foundations experts would have patched everything up again.

After all, the lack of epsilon-delta theory didn't keep Newton from inventing calculus and using it to work out the theory of gravity.

So the answer to why anyone would care about an inaccessible cardinal is simply that if you need a lot of sets to carry out a proof, an inaccessible cardinal will give you the sets you need. You could make do without all those extra sets, but that would make your proof harder. You use whatever's logically consistent and helpful to your purposes.

I'm sure I mis-stated some technical things, but the point is that nobody much cares about epistemology! Even if Löwenheim–Skolem casts doubt on the "true meaning," whatever that is, of an inaccessible cardinal - if an inaccessible cardinal can save Wiles a few hundred extra pages of work, then why not?

This came up on Mathoverflow a while back. This page has several really interesting references that will make you stop and think about the relationship of foundations to what working mathematicians really do.

http://mathoverflow.net/questions/35746 ... less-proof

### Re: Set-theoretic brain fart

I'm not sure I really understood your question. If this doesn't help, please say so.

First of all, it is important to note that the von Neumann universe (or V) is a class model of set theory, therefore Löwenheim-Skolem doesn't apply to it, either on the base level or on the meta level. Of course, assuming consistency of ZFC, Gödel's Completeness theorem guarantees the existence of a (countable) set model, but there is no a priori reason to expect this model to satisfy any putative large cardinal properties you might believe hold in the universe.

Secondly, there is the issue of what is "true" in the universe. Since you mention epistemological knowledge of the set-theoretical universe, I'm going to assume you are leaning toward the Platonistic side of the Magical Dial of Mathematical Philosophy EXTREME!!!, so I'll formulate my answer in that way. The issue with independent statements (like large cardinal axioms) is that each person either believes that the statement is true in the (Platonistic) Universe or that it is false in the Universe. But, since the statement is independent of the axioms of set theory, different people might have differing beliefs about what should be the same Universe. I suppose adding axioms can then be seen as a means of describing one person's view of the Universe to someone else.

I think the simpler example of group theory is useful here. A Group is a "universe" of group theory just like the Universe is a "universe" of set theory. We can view the axiom of abelianess (or nilpotency class 17 or whatever) as analogous to a large cardinal axiom. If I then ask different people if the Group is abelian, I will get different answers. I suppose adding the abelianess axiom to group theory wouldn't mean much to the people who believe the Group is abelian, but it will mean new knowledge to the people who thought the Group to be non-abelian.

### Re: Set-theoretic brain fart

Well, obviously, not every countable ordinal has a proof of its countability in ZFC - there are uncountably many countable ordinals, but only countably many finite proofs! I am not even vaguely qualified to say this, but declaring the existence of a large cardinal that ZFC otherwise cannot prove to exist does not seem all that different from declaring the countability of an ordinal that ZFC cannot otherwise prove countable.
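To spell out the cardinal arithmetic behind that counting claim:

```latex
% Proofs are finite strings over a countable alphabet, hence
\[
  |\{\text{ZFC-proofs}\}|
    \;\le\; \sum_{n < \omega} \aleph_0^{\,n}
    \;=\; \aleph_0 ,
  \qquad\text{while}\qquad
  |\{\alpha : \alpha \text{ a countable ordinal}\}| \;=\; \aleph_1 ,
\]
% so only countably many countable ordinals can each have their own
% ZFC-proof of countability.
```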

That would certainly resolve it, if it were true.

### Re: Set-theoretic brain fart

In my amateur opinion, I don't think the "large" in "large cardinal" means what you think it means. I mean, you already know that size is relative, so in some sense it can't. Besides, in any model of ZFC, the cardinals already go all the way up to the top. I don't see that adding a large cardinal axiom necessarily puts extra ones even higher - it just makes unverifiable claims about the ones we can't reach, because what "large" really means is "non-constructible". It's a very human way of looking at things - we can't reach it, so it must be big. But all you have to do is reject the continuum hypothesis and there are cardinals that ZFC can't imagine right near the bottom.

### Re: Set-theoretic brain fart

Token wrote:In my amateur opinion, I don't think the "large" in "large cardinal" means what you think it means. I mean, you already know that size is relative, so in some sense it can't. Besides, in any model of ZFC, the cardinals already go all the way up to the top.

I don't believe that's true. The smallest kind of large cardinal, the inaccessible cardinals, cannot be proved to exist in ZFC (assuming ZFC is consistent); they require an extra axiom to bring them into existence. So do the various other large cardinals - they all require extra axioms.

http://en.wikipedia.org/wiki/Inaccessible_cardinal

This relates directly to the references I gave earlier regarding the use of Grothendieck universes in Wiles's proof of FLT.

### Re: Set-theoretic brain fart

I find that I didn't phrase my problem clearly enough. While I actually am a realist in the sense mentioned, that's not what I was aiming at - in fact, from that perspective, the issue is only half as unclear and reduces to not quite understanding the formalism involved.

My issue is that in pretty much anything I read about large cardinals, it appears to me as if there's fairly free jumping between results within the model examined (i.e., a model of ZFC plus some large cardinal axiom) and properties of the underlying universe in which our language is interpreted (e.g., which sets the variables range over). It seems that I'm missing a key point there.

For example, look at the Wikipedia page on forcing. What happens in forcing is that we extend our language by names drawn from a set we want to use. That "flips" from inside set theory to the "outer universe" we're using to even define our language. By allowing rather arbitrary sets that are no longer "within" our actual theory to enter our reasoning, aren't we already putting in more than we reasonably should? Now - and this is where my issue really lies - we can apparently prove that the result descends to the original model we consider. Yet to even prove this is possible, we need to construct a filter outside of our theory (one that is by necessity quite "large"), and thus it matters which "outer" set theory, and which model of it, we have. This apparently circular logic is what I'm having trouble wrapping my head around.

I guess I could perhaps have phrased this a lot more simply: "I don't get forcing. Can someone explain?"

I'd also take reading recommendations that focus on being clear about the issue.

### Re: Set-theoretic brain fart

fishfry wrote:Token wrote:In my amateur opinion, I don't think the "large" in "large cardinal" means what you think it means. I mean, you already know that size is relative, so in some sense it can't. Besides, in any model of ZFC, the cardinals already go all the way up to the top.

I don't believe that's true. The smallest kind of large cardinal, the inaccessible cardinals, cannot be proved to exist in ZFC (assuming ZFC is consistent); they require an extra axiom to bring them into existence. So do the various other large cardinals - they all require extra axioms.

Hey, you need an axiom just to bring infinity into existence ...

### Re: Set-theoretic brain fart

Desiato wrote:For example, look at the Wikipedia page on forcing. What happens in forcing is that we extend our language by names drawn from a set we want to use. That "flips" from inside set theory to the "outer universe" we're using to even define our language. By allowing rather arbitrary sets that are no longer "within" our actual theory to enter our reasoning, aren't we already putting in more than we reasonably should? Now - and this is where my issue really lies - we can apparently prove that the result descends to the original model we consider. Yet to even prove this is possible, we need to construct a filter outside of our theory (one that is by necessity quite "large"), and thus it matters which "outer" set theory, and which model of it, we have. This apparently circular logic is what I'm having trouble wrapping my head around.

I guess I could perhaps have phrased this a lot more simply: "I don't get forcing. Can someone explain?"

I'd also take reading recommendations that focus on being clear about the issue.

I'm not exactly sure forcing is completely relevant to your original post (as I understood it, which I probably didn't, but still), but since you focused on this I'll try to explain.

First of all, if you're feeling a bit scared of working on the divide between a (set) model and the whole universe, there is nothing stopping you from considering one model inside another one. You can then pretend the meta-discussion is going on in the larger of the two models. Of course not all models will be the same, but what we need for our meta-level is just basic set theory on which all models should agree.

There are various justifications for the method of forcing and which one you prefer depends on the machinery you're willing to use. I think the most easily understood version is the purely semantic one, which you mention. You take a model, find a generic filter (which in most cases will not be in the model) and pass to the extension. The extended model will still satisfy ZFC if the original model did, but will have various other properties which we are interested in. Of course, the hangup is in the existence of the generic filter. The thing is, if we assume our original model is countable (the assumption of the existence of such a model is slightly stronger than just the consistency of ZFC), we can prove there is a generic filter for this model in the universe, so we can construct the generic extension as a set within the universe.
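If it helps, here is a toy Python sketch of that last step - the Rasiowa–Sikorski-style construction of a filter meeting countably many dense sets. The choice of poset (Cohen-style finite conditions) and of dense sets is purely illustrative, my own simplification rather than anything from a textbook proof:

```python
# Toy Rasiowa-Sikorski construction. Conditions are finite partial
# functions n -> {0, 1} (Cohen-style), represented as dicts; a condition
# q extends p when q contains p. The set D_n = {p : n in dom(p)} is
# dense: any condition extends to one deciding position n. Given
# countably many dense sets, we build a descending chain meeting each
# in turn; the filter this chain generates is "generic" for them.

def extend_to_meet(p, dense_pred):
    """Return an extension of condition p satisfying dense_pred."""
    q = dict(p)
    n = 0
    while not dense_pred(q):
        if n not in q:
            q[n] = 0  # any value works; density guarantees success
        n += 1
    return q

def generic_chain(num_dense_sets):
    """Chain p_0 <= p_1 <= ... meeting D_0, ..., D_{k-1} in turn."""
    chain, p = [], {}  # start from the trivial (empty) condition
    for n in range(num_dense_sets):
        p = extend_to_meet(p, lambda q, n=n: n in q)
        chain.append(dict(p))
    return chain

chain = generic_chain(5)
# The union of the chain decides the first five bits of a "generic" real.
generic_bits = {k: v for cond in chain for k, v in cond.items()}
```

The real lemma does exactly this, but enumerating *all* the (countably many) dense sets that lie in a countable ground model - and that enumeration exists only outside the model, which is precisely the inside/outside flip under discussion.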

Some people find this method distasteful for more or less your reasons, i.e. we shouldn't need to even talk about things going on outside of our model. Because of this, there are also syntactic versions of forcing (one of these was actually Cohen's original version). There are a few of these, working with reflection theorems or Boolean-valued models etc., but the main idea is that forcing (that is, the forcing relation) completely determines what will be true in the putative generic extension (which is never really considered in this version) and, importantly, the forcing relation is definable (so can be "computed") within our original model. The point is that, while we, living inside the ground model, might not be able to comprehend what elements of some generic extension would look like (because we find it hard to believe there can even be an extension), we are still by some miracle able to reason about these elements and their properties and figure out what would be true in the extension. The logical content is that, if some statement is provable in the theory of our model, the negation of this statement can't hold in any generic extension (because forcing is sound for classical logic). If we can then find a poset which creates a generic extension in which the negation actually holds, we have proved that the statement isn't provable in our original theory, just by arguing about some posets.
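The "miracle" is the forcing theorem; schematically (with M the ground model and G the generic filter, and glossing over how names are interpreted in M[G]):

```latex
% The forcing theorem, schematically: truth in the extension is
% controlled by the forcing relation, which is definable in M.
\[
  M[G] \models \varphi
  \quad\Longleftrightarrow\quad
  \exists\, p \in G \;\; p \Vdash \varphi .
\]
% Independence proofs then run: if ZFC proved phi, phi would hold in
% every generic extension; so exhibiting a poset and a condition p
% with p forcing (not phi) shows ZFC does not prove phi.
```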

I hope I haven't confused you further. The philosophy of forcing is tricky to grasp, if you want to be very careful about everything. As for reading, most books on set theory will discuss forcing, but some are more technical than others. Kunen's Set Theory discusses the different views of forcing briefly, so does Smullyan & Fitting's Set Theory and the Continuum Problem (I personally don't like the second one, but people say they find it quite readable). Joel Hamkins has some notes on the philosophy of set theory and he talks about forcing at the end, but you might find it a bit terse.

### Re: Set-theoretic brain fart

Desiato wrote:I guess I could perhaps have phrased this a lot more simply: "I don't get forcing. Can someone explain?"

That's a MUCH clearer question!

You want Tim Chow's Forcing for Dummies. It's pretty good.

http://www-math.mit.edu/~tchow/mathstuff/forcingdum
