## 414/514 The theorems of Riemann and Sierpiński on rearrangement of series

November 16, 2014

I.

Perhaps the first significant observation in the theory of infinite series is that there are convergent series whose terms can be rearranged to form a new series that converges to a different value.

A well known example is provided by the alternating harmonic series,

$\displaystyle 1-\frac12 +\frac13-\frac14+\frac15-\frac16+\frac17-\dots$

and its rearrangement

$\displaystyle 1-\frac12-\frac14+\frac13-\frac16-\frac18+\frac15-\dots$

According to

Henry Parker Manning. Irrational numbers and their representation by sequences and series. John Wiley & Sons, 1906,

Laurent evaluated the latter by inserting parentheses (see pages 97, 98):

$\displaystyle \left(1-\frac12\right)-\frac14+\left(\frac13-\frac16\right)-\frac18+\left(\frac15-\frac1{10}\right)-\dots$ $\displaystyle=\frac12\left(1-\frac12+\frac13-\frac14+\dots\right)$

A similar argument is possible with the rearrangement

$\displaystyle 1+\frac13-\frac12+\frac15+\frac17-\frac14+\dots$,

which can be rewritten as

$\displaystyle 1+0+\frac13-\frac12+\frac15+0+\frac17+\dots$ $\displaystyle =\left(1-\frac12+\frac13-\frac14+\frac15-\frac16+\frac17-\dots\right)$ $\displaystyle +\left(0+\frac12+0-\frac14+0+\frac16+0-\dots\right)$ $\displaystyle =\frac32\left(1-\frac12+\frac13-\frac14+\dots\right).$
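These two evaluations are easy to check numerically. Here is a quick sketch (in Python; the block groupings match the parenthesized forms above, and the numerical targets are $\ln 2$, $\frac12\ln 2$, and $\frac32\ln 2$):

```python
from math import log

def alt_harmonic(n):
    # partial sum of 1 - 1/2 + 1/3 - 1/4 + ...  (converges to log 2)
    return sum((-1)**k / (k + 1) for k in range(n))

def rearranged_half(n):
    # n blocks of 1/(2k-1) - 1/(4k-2) - 1/(4k):
    # the rearrangement 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ...
    return sum(1/(2*k - 1) - 1/(4*k - 2) - 1/(4*k) for k in range(1, n + 1))

def rearranged_three_halves(n):
    # n blocks of 1/(4k-3) + 1/(4k-1) - 1/(2k):
    # the rearrangement 1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + ...
    return sum(1/(4*k - 3) + 1/(4*k - 1) - 1/(2*k) for k in range(1, n + 1))

print(alt_harmonic(10**6))            # close to log(2)   ~ 0.6931
print(rearranged_half(10**5))         # close to log(2)/2 ~ 0.3466
print(rearranged_three_halves(10**5)) # close to 3*log(2)/2 ~ 1.0397
```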

The first person to realize that rearranging the terms of a series may change its sum was Dirichlet in 1827, while working on the convergence of Fourier series. (The date is mentioned by Riemann in his Habilitationsschrift, see also page 94 of Ivor Grattan-Guinness. The Development of the Foundations of Mathematical Analysis from Euler to Riemann. MIT, 1970.)

Ten years later, he published

G. Lejeune Dirichlet. Beweis des Satzes, dass jede unbegrenzte arithmetische Progression, deren erstes Glied und Differenz ganze Zahlen ohne gemeinschaftlichen Factor sind, unendlich viele Primzahlen enthält. Abhandlungen der Königlich Preussischen Akademie der Wissenschaften von 1837, 45-81,

where he shows that this behavior is exclusive to conditionally convergent series:

Theorem (Dirichlet). If a series converges absolutely, all its rearrangements converge to the same value.

Proof. Let $u_0,u_1,\dots$ be the original sequence and $u_{\pi(0)},u_{\pi(1)},\dots$ a rearrangement. Denote by $U_0,U_1,\dots$ and $V_0,V_1,\dots$ their respective partial sums. Fix $\epsilon>0$. Since $\sum_i|u_i|$ converges, we can choose $n$ so that $\sum_{i>n}|u_i|<\epsilon/2$. If $m$ is large enough, then for all $i\le n$ there is some $j\le m$ with $\pi(j)=i$. Also, there is a $k$ such that for all $j\le m$ there is an $i\le k$ with $\pi(j)=i$. The terms $u_0,\dots,u_n$ then appear in both $U_m$ and $V_m$, and cancel in the difference, so

$|U_m-V_m|\le\sum_{i=n+1}^m|u_i|+\sum_{i=n+1}^k|u_i|<\epsilon.$

Since $\epsilon>0$ was arbitrary, both series converge to the same value. $\Box$
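By way of contrast with the rearrangements above, Dirichlet's theorem can be illustrated numerically: applying the same "two positive terms, then one negative" reordering to an absolutely convergent series does not move the sum. A small sketch (in Python, using $\sum_{n\ge1}(-1)^{n+1}/n^2=\pi^2/12$; the block grouping is my own choice for the illustration):

```python
from math import pi

def alternating_squares(n):
    # partial sum of 1 - 1/4 + 1/9 - 1/16 + ... = pi^2/12
    return sum((-1)**(k + 1) / k**2 for k in range(1, n + 1))

def rearranged_squares(n):
    # same terms in the order +,+,-:
    # 1 + 1/9 - 1/4 + 1/25 + 1/49 - 1/16 + ...
    return sum(1/(4*k - 3)**2 + 1/(4*k - 1)**2 - 1/(2*k)**2
               for k in range(1, n + 1))

print(alternating_squares(10**5))  # close to pi^2/12 ~ 0.8225
print(rearranged_squares(10**5))   # same limit, unlike the harmonic case
```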

## 314 – Foundations of Analysis – Syllabus

January 20, 2014

Math 314: Foundations of Analysis.

Andrés E. Caicedo.
Contact Information: See here.
Time: TTh 12:00 – 1:15 pm.
Place: Mathematics Building, Room 139.
Office Hours: Th, 1:30 – 3:00 pm, or by appointment. (If you need an appointment, email me a few times/dates that may work for you, and I’ll get back to you).

Textbook: Stephen Abbott. Understanding Analysis. Springer-Verlag, Undergraduate Texts in Mathematics, 2001; 257 pp. ISBN-10: 0387950605. ISBN-13: 978-0387950600.

Here is the publisher’s page. Additional information is available from the author’s page. Review (MR1807438 (2001m:26001)) by Robert Gardner Bartle at MathSciNet. Review by Jeffrey Nunemacher at the American Mathematical Monthly, Vol. 118, No. 2 (February 2011), pp. 186-189.

I will mention additional references, and provide handouts of additional material, as needed.

Contents: The department’s course description reads:

The real number system, completeness and compactness, sequences, continuity, foundations of the calculus.

I strongly suggest you read the material ahead of our meetings, and work on it frequently. You may find some of the topics challenging. If so, there is some excellent advice by Faulkner (from an interview at The Paris Review) worth keeping in mind.

Personally, I find the topics we will study beautiful, and I hope you enjoy learning them as much as I did.

Please bookmark this post. I update it frequently with detailed week-to-week descriptions.

Detailed day to day description and homework assignments. All problems are from Abbott’s book unless otherwise explicitly specified:

• January 21 – 30. Chapter 1. The real numbers. Irrationality. Completeness. Countable and uncountable sets.
• January 21. Functions. Mathematical induction and the well-ordering principle.
• January 23. Sets, logic, quantifiers. Completeness.
• January 28. Completeness. Countable and uncountable sets. I recommend you read Errol Morris‘s essay on Hippasus of Metapontum, the apparent discoverer of the irrationality of $\sqrt2$.
• January 30. Comparing infinities. Counting the rationals. I recommend the following two papers on this topic: 1 and 2. Office hours this week will be on Friday, 11:45-1:15.

Homework set 1 (Due February 4). Exercises 1.2.1, 1.2.2, 1.2.7, 1.2.8, 1.2.10; 1.3.2–1.3.9; 1.4.2–1.4.7, 1.4.11–1.4.13; 1.5.3, 1.5.4, 1.5.9. See below for the required format.

• February 4 – 20. Chapter 2. Sequences and series. Limits. Cauchy sequences. Infinite series. Riemann‘s rearrangement theorem.
• February 4. Rearrangements of infinite series, limits of sequences. Homework 1 is due today.
• February 6. Limit theorems.
• February 11. Limit theorems continued. Infinite series.
• February 13. Monotone convergence. The BolzanoWeierstrass theorem.
• February 18. The Bolzano-Weierstrass theorem continued. Absolute and conditional convergence. Cauchy sequences.
• February 20. Riemann’s rearrangement’s theorem, and extensions (see here and here). The interesting paper by Marion Scheepers mentioned on the second of those links can be found here.
• Additional topics: Products of series. Double series.

Homework set 2 (Due February 25). Exercises 2.2.1, 2.2.2, 2.2.5, 2.2.7, 2.3.2, 2.3.3, 2.3.6, 2.3.7, 2.3.9, 2.3.11, 2.4.2, 2.4.4, 2.4.5, 2.5.3, 2.5.4, 2.6.1, 2.6.3, 2.6.5, 2.7.1, 2.7.4, 2.7.6, 2.7.9, 2.7.11. See below for the required format.

• February 25 – March 6. Chapter 3. Basic topological notions: Open sets. Closed, compact, and perfect sets. The Cantor set. Connectedness. The Baire category theorem.
• February 25. The Cantor set. Open and closed sets.
• February 27. Open and closed sets, continued. Extra credit problem: Find a set of reals such that we can obtain $14$ different sets by applying to it (any combination of) the operations of complementation and closure. Kuratowski showed that $14$ is the largest number that can be obtained this way; you are welcome to also try to show that. (See here.)
• March 4. Open covers, compact sets. Perfect sets. Connectedness.
• March 6. The Baire category theorem.
• Additional topics: The study of closed sets of reals naturally leads to the Cantor-Bendixson derivative, and the Cantor-Baire stationary principle (See here for Ivar Otto Bendixson). A nice reference is Alekos Kechris‘s book, Classical descriptive set theory. For the Baire category theorem and basic applications, I recommend the beginning of John Oxtoby‘s short book, Measure and category. See also the nice paper Subsum Sets: Intervals, Cantor Sets, and Cantorvals by Zbigniew Nitecki, downloadable at the arXiv.
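The Kuratowski closure-complement bound in the extra credit problem above can be explored computationally. Here is a sketch (in Python) on a small Alexandrov space, where the closure of a set is its downward closure in a finite partial order; the particular six-point poset is an illustrative choice of mine, not from the course:

```python
from itertools import combinations

points = range(6)
# a hypothetical partial order, given by below[x] = {y : y <= x}
below = {0: {0}, 1: {0, 1}, 2: {0, 2}, 3: {0, 1, 2, 3}, 4: {4}, 5: {4, 5}}

def closure(A):
    # Alexandrov closure: the downward closure of A
    return frozenset(y for x in A for y in below[x])

def complement(A):
    return frozenset(points) - A

def orbit(A):
    # all sets reachable from A by closures and complements
    seen, stack = {A}, [A]
    while stack:
        B = stack.pop()
        for C in (closure(B), complement(B)):
            if C not in seen:
                seen.add(C)
                stack.append(C)
    return seen

# Kuratowski's theorem: no starting set yields more than 14 distinct sets
max_orbit = max(len(orbit(frozenset(A)))
                for r in range(len(points) + 1)
                for A in combinations(points, r))
print(max_orbit)
```

The enumeration confirms the $14$ bound on this space; whether a given space actually attains $14$ depends on the topology, and the classical examples live in $\mathbb R$.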

Homework set 3 (Due March 11). Exercises 3.2.1, 3.2.3, 3.2.7, 3.2.9, 3.2.11, 3.2.12, 3.2.14, 3.3.2, 3.3.4–3.3.7, 3.3.9, 3.3.10, 3.4.2, 3.4.4, 3.4.5, 3.4.7–3.4.10, 3.5.4–3.5.6.

• March 11 – March 20. Chapter 4. Limits and continuity: “Continuous” limits. Continuity of functions. The interaction of continuity and compactness.  The intermediate value theorem.
• March 11. The concept of function. Dirichlet‘s and Thomae‘s examples. Definition of limit and basic properties.
• March 13. Properties of limits (continued). Definition of continuity and basic properties.
• March 18. Applications of continuity: The intermediate value property. Banach‘s fixed point theorem.
• March 20.  Continuity and compactness. Uniform continuity. Sets of discontinuity of functions.
• Additional topics: The history of the concept of function is very interesting. The intermediate value property also has a curious history. Apparently, for a while it was expected that it sufficed to characterize continuity. Bolzano’s original paper is fairly accessible. A particularly interesting continuous function is the Cantor function, also called the devil’s staircase. The topic of fixed points (Exercise 4.5.7) leads to a beautiful theorem of Sharkovski, on the possible periods of periodic points of continuous functions (See here for Oleksandr Mykolaiovych Sharkovsky).

Homework set 4 (Due April 1st). Exercises 4.2.1, 4.2.4, 4.2.6, 4.2.7, 4.3.1, 4.3.3, 4.3.4, 4.3.6, 4.3.8–4.3.10, 4.3.12, 4.4.1, 4.4.4, 4.4.6, 4.4.9, 4.4.10, 4.4.13, 4.5.2, 4.5.4, 4.5.7.

• April 1 – April 10. Chapter 5. Derivatives: What is a derivative? Differentiability and continuity. Darboux theorem. The mean value theorem. Nowhere differentiable functions.
• April 1. Sets of discontinuity of functions. Definition of derivative, basic properties. Baire class 1 functions.
• April 3. Darboux theorem (the intermediate value property).
• April 8. Rolle‘s theorem. The mean value theorem. L’Hôpital’s rule (see here for Guillaume de l’Hôpital).
• April 10. Continuous nowhere differentiable functions. Weierstrass function. Proper understanding of this topic requires the notion of uniform convergence, which we will discuss in Chapter 6.
• Supplemental reading: This is a very useful exercise to review the notions of continuity and uniform continuity. For more on the Baire classes of functions, I recommend Kechris’s book on Classical descriptive set theory. The problem of characterizing which functions are derivatives has led to a significant amount of research; these two notes (by Andrew Bruckner, and by Bruckner and J. L. Leonard) discuss some details: 1, 2. On continuous nowhere differentiable functions, the thesis linked to above (by Johan Thim) is a useful resource. Sections 1, 2, 4 of this “quiz” (by Louis A. Talman) complement well the discussion of similar topics in the book. For the history of the mean value theorem, see these slides by Ádám Besenyei.

Homework set 5 (Due April 15). Exercises 5.2.1–5.2.5, 5.3.2, 5.3.3, 5.3.5, 5.3.7.

• April 15 – April 24. Chapter six: Sequences and series of functions. Pointwise vs. uniform convergence. Uniform convergence, continuity, and differentiability. Power series, Taylor series, $C^\infty$ vs. real analytic.
• April 15. Pointwise and uniform convergence of sequences of functions. The uniform limit of a sequence of continuous functions is continuous.
• April 17. Section 6.3: Let $(f_n)_{n=1}^\infty$ be a sequence of differentiable functions defined on a closed interval, that converges pointwise and such that their derivatives converge uniformly. Then the pointwise limit is indeed uniform, the resulting function is differentiable, and its derivative is the limit of the $f_n'$.
• April 22. Series of functions. Weierstrass $M$-test. Power series.
• April 24. Power series (continued). Taylor series. Real analytic functions.
• Supplemental reading: On the topic of analytic vs $C^\infty$ functions, see these two essays by Dave L. Renfro: 1, 2. The result of section 6.3 is false if we ask that the sequence of functions $f_n$ converges uniformly while their derivatives converge pointwise. Darji in fact proved that we can have the limit of the $f_n$ be a differentiable function whose derivative disagrees everywhere with the limit of the derivatives. See here. On Formal power series and applications in combinatorics, I recommend the nice paper by Ivan Niven on this topic. For more on real analytic functions, see the first two chapters of the book A primer of real analytic functions, by Steven Krantz and Harold Parks.

Homework set 6 (Due April 29). Exercises 6.2.1, 6.2.5, 6.2.8, 6.2.13, 6.2.15, 6.2.16, 6.3.1, 6.3.4, 6.4.1, 6.4.3–6.4.6, 6.5.1, 6.5.2, 6.6.1, 6.6.6.

• April 29 – May 8. Chapter seven: The Riemann integral. Darboux’s characterization. Basic properties. The fundamental theorem of calculus. Lebesgue‘s criterion.
• April 29. Darboux’s approach to the Riemann integral in terms of upper and lower sums. Continuous functions are integrable.
• May 1. Basic properties of the integral, integrable discontinuous functions. A theorem on uniform convergence ensuring that the integral of a limit is the limit of the integrals.
• May 6. The fundamental theorem of calculus. Sets of measure zero.
• May 8. Lebesgue’s characterization of Riemann integrable functions.
• Supplemental reading: For the interesting history of the early development of the Riemann integral, I suggest the first two chapters of Lebesgue’s theory of integration, by Thomas Hawkins.

Homework set 7 (Due May 13 at 10:30). Exercises 7.2.2, 7.2.5, 7.2.6, 7.3.1, 7.3.3, 7.3.6, 7.4.2, 7.4.4, 7.4.6, 7.5.1, 7.5.4, 7.5.10.

Group project due May 15 at 10:30.

Grading: Based on homework. There will also be a group project, which will count as much as two homework sets. I expect there will be no exams, but if we see the need, you will be informed reasonably in advance.

There is bi-weekly homework, due Tuesdays at the beginning of lecture; you are welcome to turn in your homework early, but I will not accept homework past Tuesdays at 12:05 pm, or grant extensions. The homework covers some routine and some more challenging exercises related to the topics covered in the past two weeks (roughly, one homework set per chapter). It is a good idea to work daily on the homework problems corresponding to the material covered that day.

You are encouraged to work in groups and to ask for help. However, the work you turn in should be written on your own. Give credit as appropriate: Make sure to list all books, websites, and people you collaborated with or consulted while working on the homework. If relevant, indicate what software packages you used, and include any programs you may have written, or additional data.

Your homework must follow the format developed by the mathematics department at Harvey Mudd College. You will find that format at this link. If you do not use this style, unfortunately your homework will be graded as 0. In particular, please make sure that what you turn in is not your scratch work but the final product. Include partial attempts whenever you do not have a full solution.

I may ask you to meet with me to discuss details of your homework sets, and I suggest that before you turn in your work, you make a copy of it, so you can consult it if needed.

I post links to supplementary material on Google+. Circle me and let me know if you are interested, and I’ll add you to my Analysis circle.

## Set theory seminar – Marion Scheepers: Coding strategies (IV)

October 12, 2010

For the third talk (and a link to the second one), see here. The fourth talk took place on October 12.

We want to show the following version of Theorem 2:

Theorem. Suppose $\kappa$ is a singular strong limit cardinal of uncountable cofinality. Then the following are equivalent:

1. For each ideal $J$ on $\kappa$, player II has a winning coding strategy in $RG(J)$.
2. $2^\kappa<\kappa^{+\omega}$.

Since $2^\kappa$ has uncountable cofinality, option 2 above is equivalent to saying that the instance of ${\sf wSCH}$ corresponding to $\kappa$ holds.

Before we begin the proof, we need to single out some elementary consequences in cardinal arithmetic of the assumptions on $\kappa$. First of all, since $\kappa$ is singular strong limit, then for any cardinal $\lambda<\kappa$, we have that

$\kappa^\lambda=\left\{\begin{array}{cl}\kappa&\mbox{\ if }\lambda<{\rm cf}(\kappa),\\ 2^\kappa&\mbox{\ otherwise.}\end{array}\right.$

Also, since the cofinality of $\kappa$ is uncountable, we have Hausdorff’s result that if $n<\omega$, then $(\kappa^{+n})^{\aleph_0}=\kappa^{+n}$. I have addressed both these computations in my lecture notes for Topics in Set Theory, see here and here.
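As a sanity check, Hausdorff’s computation can be unwound by induction on $n$, starting from the base case $\kappa^{\aleph_0}=\kappa$ (the case $\lambda=\aleph_0<{\rm cf}(\kappa)$ of the displayed formula):

$(\kappa^{+(n+1)})^{\aleph_0}=\kappa^{+(n+1)}\cdot(\kappa^{+n})^{\aleph_0}=\kappa^{+(n+1)}\cdot\kappa^{+n}=\kappa^{+(n+1)},$

where the first equality is Hausdorff’s recursion formula $(\mu^+)^{\aleph_0}=\mu^+\cdot\mu^{\aleph_0}$, and the second is the inductive hypothesis.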

Proof. $(2.\Rightarrow 1.)$ We use Theorem 1. If option 1. fails, then there is an ideal $J$ on $\kappa$ with ${\rm cf}(\left< J\right>,{\subset})>|J|$.

Note that ${\rm cf}(\left< J\right>,{\subset})\le({\rm cf}(J,{\subset}))^{\aleph_0}$, and $\kappa\le|J|$. Moreover, if $\lambda<\kappa$, then $2^\lambda<{\rm cf}(J,{\subset})$ since, otherwise,

$({\rm cf}(J,{\subset}))^{\aleph_0}\le 2^{\lambda\aleph_0}=2^\lambda<\kappa$.

So ${\rm cf}(J,{\subset})\ge\kappa$ and then, by Hausdorff, in fact ${\rm cf}(J,{\subset})\ge \kappa^{+\omega}$, and option 2. fails.

$(1.\Rightarrow 2.)$ Suppose option 2. fails and let $\lambda=\kappa^{+\omega}$, so $\kappa<\lambda<2^\kappa$ and ${\rm cf}(\lambda)=\omega$. We use $\lambda$ to build an ideal $J$ on $\kappa$ with ${\rm cf}(\left< J\right>,{\subset})>|J|$.

For this, we use that there is a large almost disjoint family of functions from ${\rm cf}(\kappa)$ into $\kappa$. Specifically:

Lemma. If $\kappa$ is singular strong limit, there is a family ${\mathcal F}\subseteq{}^{{\rm cf}(\kappa)}\kappa$ with ${}|{\mathcal F}|=2^\kappa$ and such that for all distinct $f,g\in{\mathcal F}$, we have that ${}|\{\alpha<{\rm cf}(\kappa)\mid f(\alpha)=g(\alpha)\}|<{\rm cf}(\kappa)$.

In my notes, I have a proof of a general version of this result, due to Galvin and Hajnal, see Lemma 12 here; essentially, we list all functions $f:{\rm cf}(\kappa)\to\kappa$, and then replace them with (appropriate codes for) the branches they determine through the tree $\kappa^{{\rm cf}(\kappa)}$. Distinct branches eventually diverge, and this translates into the corresponding functions being almost disjoint.

Pick a family ${\mathcal F}$ as in the lemma, and let ${\mathcal G}$ be a subfamily of size $\lambda$. Let $S=\bigcup{\mathcal G}\subseteq{\rm cf}(\kappa)\times\kappa$. We proceed to show that $|S|=\kappa$ and use ${\mathcal G}$ to define an ideal $J$ on $S$ as required.

First, obviously $|S|\le\kappa$. Also, ${}|S|\ge\kappa$: otherwise, since $\kappa$ is strong limit, ${}|{\mathcal P}(S)|=2^{|S|}<\kappa$, contradicting that ${\mathcal G}\subseteq{\mathcal P}(S)$ has size $\lambda>\kappa$.

Now define

$J=\{X\subseteq S\mid\exists {\mathcal H}\subseteq{\mathcal G}\,(|{\mathcal H}|<\omega,\bigcup{\mathcal H}\supseteq X)\}.$

Clearly, $J$ is an ideal. We claim that $|J|=\lambda$. First, each singleton $\{f\}$ with $f\in{\mathcal G}$ is in $J$, so ${}|J|\ge\lambda$. Define $\Phi:[{\mathcal G}]^{<\aleph_0}\to J$ by $\Phi({\mathcal H})=\bigcup{\mathcal H}$. Since the functions in ${\mathcal G}$ are almost disjoint, it follows that $\Phi$ is 1-1. Let $G$ be the image of $\Phi$. By construction, $G$ is cofinal in $J$. But then

${}|J|\le|{\mathcal G}|2^{{\rm cf}(\kappa)}=\lambda 2^{{\rm cf}(\kappa)}=\lambda$,

where the inequality follows from noticing that any $X\in J$ has size at most ${\rm cf}(\kappa)$. It follows that $|J|=\lambda$, as claimed.

Finally, we argue that ${\rm cf}(\left< J\right>,{\subset})>\lambda$, which completes the proof. For this, consider a cofinal ${\mathcal A}\subseteq\left< J\right>$, and a map $f:{\mathcal A}\to[{\mathcal G}]^{\le\aleph_0}$ such that for all $A\in{\mathcal A}$, we have $A\subseteq\bigcup f(A)$.

Since ${\mathcal A}$ is cofinal in $\left< J\right>$, it follows that $f[{\mathcal A}]$ is cofinal in ${}[{\mathcal G}]^{\le\aleph_0}$. But this gives the result, because

${}|{\mathcal A}|\ge{\rm cf}([{\mathcal G}]^{\le \aleph_0},{\subset})={\rm cf}([\lambda]^{\le \aleph_0},{\subset})>\lambda$,

and we are done. $\Box$

## Set theory seminar – Marion Scheepers: Coding strategies (III)

September 28, 2010

For the second talk (and a link to the first one), see here. The third talk took place on September 28.

In the second case, we fix an $X\in J$ with ${}|J(X)|<|J|$. We can clearly assume that $S$ is infinite, and it easily follows that ${}|{\mathcal P}(X)|=|J|$. This is because any $Y\in J$ can be coded by the pair $(Y\cap X,X\cup(Y\setminus X))$, and there are only ${}|J(X)|$ many possible values for the second coordinate.

In particular, $X$ is infinite, and we can fix a partition $X=\bigsqcup_n X _n$ of $X$ into countably many pieces, each of size ${}|X|$. Recall that we are assuming that $\text{cf}(\left< J\right>,{\subset})\le|J|$ and have fixed a set $H$ cofinal in $\left< J\right>$ of smallest possible size. We have also fixed a perfect information winning strategy $\Psi$ for II, and an $f:\left< J\right>\to H$ with $A\subseteq f(A)$ for all $A$.

For each $n$, fix a surjection $f_n:{\mathcal P}(X_n)\setminus\{\emptyset,X_n\}\to{}^{<\omega}H$.

We define $F:J\times\left< J\right>\to J$ as follows:

1. Given $O\in \left< J\right>$, let

$A=\Psi(\left< f(O)\right>)\setminus X$,

and

$B\in{\mathcal P}(X_0)\setminus\{\emptyset, X_0\}$ such that $f_0(B)=\left< f(O)\right>$,

and set $F(\emptyset,O)=A\cup B$.

2. Suppose now that $(T,O)\in J\times\left< J\right>$, that $T\ne\emptyset$, and that there is an $n$ such that $\bigcup_{j<n}X_j\subseteq T$, $T\cap X_n\ne\emptyset,X_n$, and $T\cap\bigcup_{k>n}X_k=\emptyset$. Let

$B\in{\mathcal P}(X_{n+1})\setminus\{\emptyset,X_{n+1}\}$ be such that $f_{n+1}(B)=f_n(T\cap X_n){}^\frown\left< f(O)\right>$,

and

$A=\Psi(f_{n+1}(B))\setminus X$

and set $F(T,O)=A\cup\bigcup_{j\le n}X_j\cup B$.

3. Define $F(T,O)=\emptyset$ in other cases.

A straightforward induction shows that $F$ is winning. The point is that in a run of the game where player II follows $F$:

• Player II’s moves code the part lying outside of $X$ of player II’s moves in a run ${\mathcal A}$ of the game following $\Psi$ in which I plays sets covering the sets in the original run. For this, note that at any inning there is a unique index $n$ such that player II’s move covers $\bigcup_{j<n}X_j$, is disjoint from $\bigcup_{j>n}X_j$, and meets $X_n$ in a set that is neither empty nor all of $X_n$; this $n$ codes the inning of the game, and the piece of player II’s move in $X_n$ codes the history of the run ${\mathcal A}$ played so far.
• $X$ is eventually covered completely, so in particular the parts inside $X$ of player II’s responses in the run ${\mathcal A}$ are covered as well.

This completes the proof of Theorem 1. $\Box$

By way of illustration, consider the case where $J$ is the ideal of finite subsets of some set $S$. Then whether II has a winning coding strategy turns into the question of when it is that $\text{cf}({\mathcal P}_{\aleph_1}(S))\le|S|$. This certainly holds if ${}|S|={\mathfrak c}$ or if ${}|S|<\aleph_\omega$. However, it fails if ${}|S|=\aleph_\omega$.

This example illustrates how player II really obtains an additional advantage when playing in $WMG(J)$ rather than just in $RG(J)$. To see that this is the case, consider the same $J$ as above with ${}|S|=\aleph_\omega$. This is an instance of the countable-finite game. We claim that II has a winning coding strategy in this case. To see this, consider a partition of $S$ into countably many sets $S_n$ with ${}|S_n|=\aleph_n$. For each $n$, pick a winning coding strategy $\sigma_n$ for the countable-finite game on $S_n$, and define a strategy in $WMG(J)$ so that for each $n$ it simulates a run of the game $WMG(J\cap{\mathcal P}(S_n))$ with II following $\sigma_n$, as follows: In inning $n$, II plays on $S_i$ for $i\le n$; player I’s moves in the “$i$-th board” are the intersections with $S_i$ of I’s moves in $WMG(J)$, with I’s first move on the $i$-th board regarded as occurring at inning $i$. (II can keep track of $n$ in several ways; for example, by noticing that the proof of Theorem 1 produces coding strategies that never play the empty set.)

Note that this strategy is not winning in $RG(J)$, the difference being that there is no guarantee that (for any $i$) the first $i$ moves of I in the $(i+1)$-st board are going to be covered by II’s responses. On the other hand, the strategy is winning in $WMG(J)$, since, no matter how late one starts to play on the $i$-th board, player I’s first move covers I’s prior moves there (and so, II having a winning coding strategy for the game that starts with this move, will also cover those prior moves).

The first place where this argument cannot be continued is when $|S|=\aleph_{\omega_1}$. However, ${\sf GCH}$ suffices to see that player II has a winning coding strategy in $RG(J)$ in this case, and so we can continue. This illustrates the corollary stated in the first talk, that ${\sf GCH}$ suffices to guarantee that II always has a winning coding strategy in $WMG(J)$.

The natural question is therefore how much one can weaken the ${\sf GCH}$ assumption, and trying to address it leads to Theorem 2, which will be the subject of the next (and last) talk.

## Set theory seminar – Marion Scheepers: Coding strategies (II)

September 27, 2010

For the first talk, see here. The second talk took place on September 21.

We want to prove $(2.\Rightarrow1.)$ of Theorem 1, that if ${\rm cf}(\left< J\right>,\subset)\le|J|$, then II has a winning coding strategy in $RG(J)$.

The argument makes essential use of the following:

Coding Lemma. Let $({\mathbb P},<)$ be a poset such that for all $p\in{\mathbb P}$,

${}|\{q\in{\mathbb P}\mid q>p\}|=|{\mathbb P}|.$

Suppose that ${}|H|\le|{\mathbb P}|$. Then there is a map $\Phi:{\mathbb P}\to{}^{<\omega}H$ such that

$\forall p\in{\mathbb P}\,\forall\sigma\in{}^{<\omega}H\,\exists q\in{\mathbb P}\,(q>p\mbox{ and }\Phi(q)=\sigma).$

Proof. Note that ${\mathbb P}$ is infinite. We may then identify it with some infinite cardinal $\kappa$. It suffices to show that for any partial ordering $\prec$ on $\kappa$ as in the hypothesis, there is a map $\Phi:\kappa\to\kappa$ such that for any $\alpha,\beta$, there is a $\gamma$ with $\alpha\prec\gamma$ such that $\Phi(\gamma)=\beta$.

Well-order $\kappa\times\kappa$ in type $\kappa$, and call $R$ this ordering. We define $\Phi$ by transfinite recursion through $R$. Given $(\alpha,\beta)$, let $A$ be the set of its $R$-predecessors,

$A=\{(\mu,\rho)\mid(\mu,\rho) R(\alpha,\beta)\}$.

Our inductive assumption is that for any pair $(\mu,\rho)\in A$, we have chosen some $\tau$ with $\mu\prec\tau$, and defined $\Phi(\tau)=\rho$.  Let us denote by $D_A$ the domain of the partial function we have defined so far. Note that ${}|D_A|<\kappa$. Since $\{\gamma\mid\alpha\prec\gamma\}$ has size $\kappa$, it must meet $\kappa\setminus D_A$. Take $\mu$ to be least in this intersection, and set $\Phi(\mu)=\beta$, thus completing the stage $(\alpha,\beta)$ of this recursion.

At the end, the resulting map can be extended to a map $\Phi$ with domain $\kappa$ in an arbitrary way, and this function clearly is as required. $\Box$
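The recursion in this proof is easy to simulate in a finite setting. Here is a sketch (in Python) for the poset $(\mathbb N,<)$, where every element has as many successors as the whole poset, as the lemma requires; the targets $0,\dots,K-1$ stand in for the finite sequences in ${}^{<\omega}H$, and the particular bounds are arbitrary choices for the illustration:

```python
K = 5         # number of target values (standing in for sequences in H)
STAGES = 60   # how many stages p of the recursion we process

phi = {}      # the partial map Phi built by the greedy recursion
for p in range(STAGES):
    for sigma in range(K):
        # stage (p, sigma): pick the least q > p not yet in the domain,
        # and define phi(q) = sigma, exactly as in the proof
        q = p + 1
        while q in phi:
            q += 1
        phi[q] = sigma

# by construction, every processed p sees every target above it
ok = all(any(q > p and v == s for q, v in phi.items())
         for p in range(STAGES) for s in range(K))
print(ok)
```

The point mirrored here is that the "least unused successor" always exists because each $p$ has unboundedly many elements above it, so the recursion never gets stuck.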

Back to the proof of $(2.\Rightarrow1.)$. Fix a perfect information winning strategy $\Psi$ for II in $RG(J)$, and a set $H$ cofinal in $\left< J\right>$ of least possible size. Pick an $f:\left< J \right>\to H$ such that for all $A\in \left< J\right>$ we have $A\subseteq f(A)$.

Given $X\in J$, let $J(X)=\{Y\in J\mid X\subseteq Y\}$. Now we consider two cases, depending on whether for some $X$ we have ${}|J(X)|<|J|$ or not.

Suppose first that $|J(X)|=|J|$ for all $X$. Then the Coding Lemma applies with $(J,\subset)$ in the role of ${\mathbb P}$, and $H$ as chosen. Let $\Phi$ be as in the lemma.

We define $F:J\times\left< J\right>\to J$ as follows:

1. Given $O\in\left< J\right>$, let $Y\supseteq\Psi(\left< f(O)\right>)$ be such that $\Phi(Y)=\left< f(O)\right>$, and set $F(\emptyset,O)=Y$.
2. Given $(T,O)\in J\times\left< J\right>$ with $T\ne\emptyset$, let $Y\supseteq \Psi(\Phi(T){}^\frown\left< f(O)\right>)$ be such that $\Phi(Y)=\Phi(T){}^\frown\left< f(O)\right>$, and set $F(T,O)=Y$.

Clearly, $F$ is winning: In any run of the game with II following $F$, player II’s moves cover their responses following $\Psi$, and we are done since $\Psi$ is winning.

The second case, when there is some $X\in J$ with $|J(X)|<|J|$, will be dealt with in the next talk.

## Set theory seminar – Marion Scheepers: Coding strategies (I)

September 25, 2010

This semester, the seminar started with a series of talks by Marion. The first talk happened on September 14.

We consider two games relative to a (proper) ideal $J\subset{\mathcal P}(S)$ for some set $S$. The ideal $J$ is not assumed to be $\sigma$-complete; we denote by $\left< J\right>$ its $\sigma$-closure, i.e., the collection of countable unions of elements of $J$. Note that $\left< J\right>$ is a $\sigma$-ideal iff $\left< J\right>$ is an ideal iff $S\notin\left< J\right>$.

The two games we concentrate on are the Random Game on $J$, $RG(J)$, and the Weakly Monotonic game on $J$, $WMG(J)$.

In both games, players I and II alternate for $\omega$ many innings, with I moving first, moving as follows:

$\begin{array}{cccccc} I&O_0\in\left< J\right>&&O_1\in\left< J\right>&&\cdots\\ II&&T_0\in J&&T_1\in J \end{array}$

In $RG(J)$ we do not require that the $O_i$ relate to one another in any particular manner (thus “random”), while in $WMG(J)$ we require that $O_0\subseteq O_1\subseteq\dots$ (thus “weakly”, since we allow equality to occur).

In both games, player II wins iff $\bigcup_n T_n\supseteq\bigcup_n O_n$. Obviously, II has a (perfect information) winning strategy, even with $=$ in place of the weaker $\supseteq$.

However, we are interested in an apparently very restrictive kind of strategy, and so we will give some leeway to player II by allowing its moves to over-spill if needed. The strategies for II that we want to consider are called coding strategies. In these strategies, II only has access to player I’s latest move, and to its own most recent move. So, if $F$ is a coding strategy, and II follows it in a run of the game, then we have that for every $n$,

$T_n=F(T_{n-1},O_n)$,

with $T_{-1}=\emptyset$.

The underlying goal is to understand under which circumstances player II has a winning coding strategy in $WMG(J)$. Obviously, this is the case if II has a winning coding strategy in $RG(J)$.

Theorem 1. For an ideal $J\subset{\mathcal P}(S)$, the following are equivalent:

1. II has a winning coding strategy in $RG(J)$.
2. ${\rm cf}(\left< J\right>,{\subset})\le|J|$.

Corollary. ${\sf GCH}$ implies that for any ideal $J\subset{\mathcal P}(S)$, II has a winning coding strategy in $WMG(J)$.

We can reformulate our goal as asking how much one can weaken ${\sf GCH}$ in the corollary.

Let’s denote by ${\sf wSCH}$ (the weak singular cardinals hypothesis) the statement that if $\kappa$ is a singular strong limit cardinal of uncountable cofinality, then there is no cardinal $\lambda$ of countable cofinality with $\kappa<\lambda<2^\kappa$.

By work of Gitik and Mitchell, we know that the negation of ${\sf wSCH}$ is equiconsistent with the existence of a $\kappa$ of Mitchell order $o(\kappa)=\kappa^{+\omega}+\omega_1$.

Theorem 2. The following are equivalent:

1. ${\sf wSCH}$.
2. For each ideal $J$ on a singular strong limit $\kappa$ of uncountable cofinality, II has a winning coding strategy in $RG(J)$.

We now begin the proof of Theorem 1.

$(1.\Rightarrow2.)$ Suppose II has a winning coding strategy $F$ in $RG(J)$. We want to show that ${\rm cf}(\left< J\right>,{\subset})\le|J|$. For this, we will define a map $f:J\to\left< J\right>$ with $\subset$-cofinal range, as follows: Given $X\in J$, let $T_0=X$ and $T_{n+1}=F(T_n,\emptyset)$ for all $n$. Now set

$f(X)=\bigcup_n T_n$.

To see that $f$ is cofinal, given $O\in\left< J\right>$, let $X=F(\emptyset,O)$, so that the $T_n$ are II’s responses using $F$ in a run of the game where player I first plays $O$ and then plays $\emptyset$ in all its subsequent moves. Since $F$ is winning, we must have $f(X)\supseteq O$.