Set theory seminar – Marion Scheepers: Coding strategies (I)

September 25, 2010

This semester, the seminar started with a series of talks by Marion. The first talk happened on September 14.

We consider two games relative to a (proper) ideal J\subset{\mathcal P}(S) for some set S. The ideal J is not assumed to be \sigma-complete; we denote by \left< J\right> its \sigma-closure, i.e., the collection of countable unions of elements of J. Note that \left< J\right> is a \sigma-ideal iff \left< J\right> is an ideal iff S\notin\left< J\right>.

The two games we concentrate on are the Random Game on J, RG(J), and the Weakly Monotonic game on J, WMG(J).

In both games, players I and II alternate for \omega many innings, with I moving first, as follows:

\begin{array}{cccccc} I&O_0\in\left< J\right>&&O_1\in\left< J\right>&&\cdots\\ II&&T_0\in J&&T_1\in J \end{array}

In RG(J) we do not require that the O_i relate to one another in any particular manner (thus “random”), while in WMG(J) we require that O_0\subseteq O_1\subseteq\dots (thus “weakly” monotonic, since we allow equality to occur).

In both games, player II wins iff \bigcup_n T_n\supseteq\bigcup_n O_n. Obviously, II has a winning perfect information strategy, one that in fact secures = rather than the weaker \supseteq.

However, we are interested in an apparently very restrictive kind of strategy, and so we will give player II some leeway by allowing its moves to over-spill if needed. The strategies for II we want to consider we call coding strategies. Under such a strategy, II only has access to player I’s latest move and to its own most recent move. So, if F is a coding strategy, and II follows it in a run of the game, then we have that for every n,

T_n=F(T_{n-1},O_n),

with T_{-1}=\emptyset.
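To make the bookkeeping concrete, here is a toy simulation (entirely hypothetical: S is finite, so \left< J\right>=J and the game trivializes) of a run in which II follows a coding strategy, seeing only I's latest move and its own previous move:

```python
# Toy run of RG(J) with II following a coding strategy.
# Made-up finite example: S = {0,...,9}, J = all subsets of {0,...,8},
# a proper ideal; since S is finite, <J> = J here.

S = set(range(10))
point = 9                       # J = sets missing this point

def in_J(X):
    return point not in X

def F(T_prev, O):
    """Coding strategy for II: accumulate I's latest move on top of
    II's own last move.  Legal in this toy setting because every
    element of <J> is itself in J."""
    return T_prev | O

# Player I's moves (each in <J> = J):
Os = [{0, 1}, {2, 3}, {1, 4}, {5}]

T = set()                       # T_{-1} = emptyset
Ts = []
for O in Os:
    T = F(T, O)                 # T_n = F(T_{n-1}, O_n)
    assert in_J(T)
    Ts.append(T)

# II wins: the union of its moves covers the union of I's moves.
covered = set().union(*Ts)
target = set().union(*Os)
print(covered >= target)        # True
```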

The underlying goal is to understand under which circumstances player II has a winning coding strategy in WMG(J). Obviously, this is the case if II has a winning coding strategy in RG(J).

Theorem 1. For an ideal J\subset{\mathcal P}(S), the following are equivalent:

  1. II has a winning coding strategy in RG(J).
  2. {\rm cf}(\left< J\right>,{\subset})\le|J|.

Corollary. {\sf GCH} implies that for any ideal J\subset{\mathcal P}(S), II has a winning coding strategy in WMG(J).

We can reformulate our goal as asking how much one can weaken {\sf GCH} in the corollary.

Let’s denote by {\sf wSCH}, the weak singular cardinals hypothesis, the statement that if \kappa is a singular strong limit cardinal of uncountable cofinality, then there is no cardinal \lambda of countable cofinality with \kappa<\lambda<2^\kappa.

By work of Gitik and Mitchell, we know that the negation of {\sf wSCH} is equiconsistent with the existence of a \kappa of Mitchell order o(\kappa)=\kappa^{+\omega}+\omega_1.

Theorem 2. The following are equivalent:

  1. {\sf wSCH}.
  2. For each ideal J on a singular strong limit cardinal \kappa of uncountable cofinality, II has a winning coding strategy in RG(J).

We now begin the proof of Theorem 1.

(1.\Rightarrow2.) Suppose II has a winning coding strategy F in RG(J). We want to show that {\rm cf}(\left< J\right>,{\subset})\le|J|. For this, we will define a map f:J\to\left< J\right> with \subset-cofinal range, as follows: Given X\in J, let T_0=X and T_{n+1}=F(T_n,\emptyset) for all n. Now set

f(X)=\bigcup_n T_n.

To see that f is cofinal, given O\in\left< J\right>, let X=F(\emptyset,O), so that the T_n are II’s responses using F in a run of the game where player I first plays O and then plays \emptyset in all its subsequent moves. Since F is winning, we must have f(X)\supseteq O.
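As a finite sanity check of this construction (with a made-up toy strategy, not one produced by the theorem), one can compute f directly:

```python
# Toy check of the map f from the proof.  Hypothetical setup:
# S = {0,...,9}, J = subsets of {0,...,8}, and the coding strategy
# F(T_prev, O) = T_prev | O, which wins in this finite setting.

def F(T_prev, O):
    return T_prev | O

def f(X, rounds=5):
    """f(X) = union of II's responses in the run where T_0 = X and
    T_{n+1} = F(T_n, emptyset); finitely many rounds suffice here."""
    T, union = X, set(X)
    for _ in range(rounds):
        T = F(T, set())
        union |= T
    return union

# Cofinality: given O in <J>, let X = F(emptyset, O); then f(X) covers O.
for O in [{0, 1, 2}, {3}, {4, 5, 6, 7}]:
    X = F(set(), O)
    print(f(X) >= O)            # True each time
```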


305 -7. Extension fields revisited

April 3, 2009


1. Greatest common divisors.


Let’s conclude the discussion from last lecture.

If {\mathbb F} is a field and p(x),q(x)\in{\mathbb F}[x] are nonzero, then we can find polynomials \alpha(x),\beta(x)\in{\mathbb F}[x] such that \alpha p+\beta q is a gcd of p and q.

To see this, consider {\mathcal A}=\{{\rm deg}(a(x)): 0\ne a(x)\in{\mathbb F}[x] and, for some polynomials \alpha,\beta\in{\mathbb F}[x], we have a=\alpha p+\beta q\}.

We see that {\mathcal A}\ne\emptyset, because both p and q are nonzero linear combinations of p and q, so their degrees are in {\mathcal A}. Each element of {\mathcal A} is a natural number, because {\rm deg}(a)=-\infty only for a=0. By the well-ordering principle, there is a least element of {\mathcal A}.

Let n be this least degree, and let g=\alpha p+\beta q have degree n.

First, if s\in{\mathbb F}[x] and s|p,q, then s|\alpha p+\beta q, so s|g.

Second, by the division algorithm, we can write p=gm+r for some polynomials m,r\in{\mathbb F}[x] with {\rm deg}(r)<{\rm deg}(g). Then r=p-gm=(1-\alpha m)p+(-\beta m)q is a linear combination of p,q. Since {\rm deg}(r)<{\rm deg}(g), and n={\rm deg}(g) is the smallest number in {\mathcal A}, it follows that {\rm deg}(r)=-\infty, i.e., r=0. This is to say that p=gm, so g|p. Similarly, g|q.

It follows that g is a greatest common divisor of p,q.

Since any other greatest common divisor of p,q is ug for some unit u, it follows that any gcd of p and q is a linear combination of p and q.

Notice that this argument is very similar to the proof of the same result for {\mathbb Z}.
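The argument above establishes existence by minimizing degrees, but, just as over {\mathbb Z}, a gcd and the coefficients \alpha,\beta can actually be computed by the extended Euclidean algorithm. A minimal sketch over {\mathbb Q}, with polynomials represented as lists of coefficients, lowest degree first (a convention of this sketch, not of the notes; [] stands for the zero polynomial):

```python
from fractions import Fraction as Fr

def trim(p):
    while p and p[-1] == 0:
        p = p[:-1]
    return p

def add(p, q):
    n = max(len(p), len(q))
    return trim([(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
                 for i in range(n)])

def neg(p):
    return [-c for c in p]

def mul(p, q):
    if not p or not q:
        return []
    r = [Fr(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return trim(r)

def divmod_poly(p, g):
    """Division algorithm: p = g*m + r with deg r < deg g (g nonzero)."""
    m, r = [], p[:]
    while r and len(r) >= len(g):
        c = r[-1] / g[-1]                       # cancel the leading term
        t = [Fr(0)] * (len(r) - len(g)) + [c]
        m = add(m, t)
        r = add(r, neg(mul(t, g)))
    return m, r

def ext_gcd(p, q):
    """Return (g, alpha, beta) with alpha*p + beta*q = g, g a gcd of p, q."""
    r0, r1 = p, q
    a0, a1 = [Fr(1)], []
    b0, b1 = [], [Fr(1)]
    while r1:
        m, r = divmod_poly(r0, r1)
        r0, r1 = r1, r
        a0, a1 = a1, add(a0, neg(mul(m, a1)))
        b0, b1 = b1, add(b0, neg(mul(m, b1)))
    return r0, a0, b0

# gcd(x^2 - 1, x - 1) is (an associate of) x - 1:
p = [Fr(-1), Fr(0), Fr(1)]                      # x^2 - 1
q = [Fr(-1), Fr(1)]                             # x - 1
g, alpha, beta = ext_gcd(p, q)
print(g == [Fr(-1), Fr(1)])                     # True
print(add(mul(alpha, p), mul(beta, q)) == g)    # True
```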


305 -Rings, ideals, homomorphisms (2)

March 16, 2009

Let’s begin by verifying:

Theorem 1 If R,S are rings and h:R\rightarrow S is a homomorphism, then h^{-1}(0)=\{a\in R: h(a)=0\} is an ideal of R.

Proof: Clearly 0\in h^{-1}(0), so this set is nonempty. If a,b\in h^{-1}(0), then h(a-b)=h(a)+h(-b)=h(a)-h(b)=0, so a-b\in h^{-1}(0). Finally, if a\in h^{-1}(0) and b\in R, then h(ab)=h(a)h(b)=0h(b)=0 and h(ba)=h(b)h(a)=h(b)0=0, so both ab and ba are in h^{-1}(0). \Box
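For a quick concrete instance (our own example, not from the notes): the map h:{\mathbb Z}\rightarrow{\mathbb Z}_6 given by reduction mod 6 is a homomorphism, and its kernel is the ideal 6{\mathbb Z}. We spot-check the closure properties from the proof on a sample:

```python
# Hypothetical example: h : Z -> Z_6, h(a) = a mod 6.
h = lambda a: a % 6

sample = range(-30, 30)
kernel = [a for a in sample if h(a) == 0]

print(all(a % 6 == 0 for a in kernel))                     # kernel = 6Z here
print(all(h(a - b) == 0 for a in kernel for b in kernel))  # closed under -
print(all(h(a * b) == 0 for a in kernel for b in sample))  # absorbs products
```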

In a sense, this is the only source of examples of ideals. This is shown by means of an abstract construction.

Theorem 2 If I is an ideal of a ring R then there is a ring S and a homomorphism h:R\rightarrow S such that I=h^{-1}(0).

Proof: The proof resembles what we did to define the rings {\mathbb Z}_n: Begin by defining a relation \sim on R by (a\sim b iff a-b\in I). Check that \sim is an equivalence relation. We can then define S=R/{\sim}=\{[x]:x\in R\}, where [x]=[x]_\sim is the equivalence class of x, i.e., [x]=\{y:x\sim y\}.

We turn S into a ring by defining [x]+[y]=[x+y], [x]\times[y]=[xy], and 0=[0]. Check that these operations are well defined. Then check that (S,+,\times,0) satisfies the axioms of rings.

Finally, let h:R\rightarrow S be the quotient map, h(x)=[x]. Check that this is a homomorphism and that h^{-1}(0)=I. \Box

Definition 3 An isomorphism is a bijective homomorphism. If h:R\rightarrow R is an isomorphism, we say that it is an automorphism.


Proposition 4 Suppose that {\mathbb F} is a field and I is an ideal of {\mathbb F}. Then either I=\{0\} or I={\mathbb F}. \Box


It will be very important for us to understand the automorphisms of the field extensions {\mathbb F}^{p(x)}, where p(x)\in{\mathbb F}[x]. For this, we will need some tools of linear algebra, so it will be useful to review Chapter 12 and Appendix C of the book.


305 -6. Rings, ideals, homomorphisms

March 13, 2009

It will be important to understand the subfields of a given field; this is a key step in figuring out whether a field {\mathbb Q}^{p(x)} is an extension by radicals or not. We need some “machinery” before we can develop this understanding.


Definition 1 A ring is a set R together with two binary operations +,\times on R such that:

  1. + is commutative.
  2. There is an additive identity 0.
  3. Any a has an additive inverse -a.
  4. + is associative.
  5. \times is associative.
  6. \times distributes over +, both on the right and on the left.