496 – Regressive functions on pairs

August 31, 2009

[Updated September 4, 2009.]

(For background, see my paper Regressive functions on pairs, available on my papers page.)

Here, {{\mathbb N}=\{0,1,\dots\}.} For {X\subseteq{\mathbb N},} we denote by {X^{[2]}} the set of unordered pairs of elements of {X.} We will use interval notation, with the understanding that our variables range over natural numbers, so, for example, {[3,6)=\{3,4,5\}.}

Suppose that {0\notin X.} A function {f:X^{[2]}\rightarrow{\mathbb N}} is regressive iff {f(t)<\min(t)} for all {t\in X^{[2]}.} We will usually write {f(a,b)} for {f(\{a,b\})} with the understanding that {a<b.}

A subset {H\subseteq X} is min-homogeneous for {f} iff whenever {a<b<c} are in {H,} then {f(a,b)=f(a,c).}

Given {0<n\in{\mathbb N},} denote by {g(n)} the smallest integer {N} such that whenever {f:[n,N]^{[2]}\rightarrow{\mathbb N}} is regressive, there is a min-homogeneous set {H\subseteq[n,N]} of size at least {4.}

We want to bound the function {g(n)} as precisely as possible.

Here are some exact values and bounds:

  • {g(1)=5.}
  • {g(2)=15.}
  • {g(3)=37.}
  • {g(n)\le 2^{n-1}(n+7)-3.} 
    (In the paper, I prove the weaker bound {g(n)\le 2^n n+12\cdot 2^{n-3}+1} for {n\ge3.})
  • {g(n+1)\ge 2g(n)+3.}
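The smallest of these values can be checked by exhaustive search: since {f(a,b)<a,} there are only finitely many regressive colourings of a given interval. A minimal Python sketch (the function names are mine):

```python
from itertools import combinations, product

def is_min_homogeneous(H, f):
    """H is min-homogeneous for f iff f(a,b) == f(a,c) whenever
    a < b < c are all in H."""
    return all(f[(a, b)] == f[(a, c)]
               for a, b, c in combinations(sorted(H), 3))

def bad_colouring(n, N, size=4):
    """Search all regressive f on [n,N]^{[2]} for one admitting no
    min-homogeneous subset of the given size; return such an f as a
    dict on pairs, or None if every regressive colouring admits one."""
    X = range(n, N + 1)
    pairs = list(combinations(X, 2))      # (a, b) with a < b
    subsets = list(combinations(X, size))
    # Regressive means f(a,b) < min(a,b) = a, so the value on the
    # pair (a, b) ranges over {0, ..., a-1}.
    for values in product(*(range(a) for a, b in pairs)):
        f = dict(zip(pairs, values))
        if not any(is_min_homogeneous(H, f) for H in subsets):
            return f
    return None

# g(1) = 5: some regressive colouring of [1,4]^{[2]} avoids
# min-homogeneous sets of size 4, but none of [1,5]^{[2]} does.
print(bad_colouring(1, 4) is not None)  # True
print(bad_colouring(1, 5) is None)      # True
```

For {n=1} the search space is tiny (288 colourings on {[1,5]}), but it grows far too quickly for this to reach {g(2)} or {g(3)}.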

I will update the list above if additional results are found.

Typeset using LaTeX2WP.


502 – Formal systems (2)

August 31, 2009

Here is a different (more direct) presentation of the argument for Fact 2; the algorithm in the previous proof can be extracted from here as well:

Proof: We proceed by induction on the length of the proof to show that for all {n,} whenever a string has a proof from {\Sigma} of length at most {n,} then it has a proof of the required form.

This is clear if {n=1.} Suppose {\tau} has a proof {s} of length {n+1} and the result holds for all proofs of length at most {n.} If {\tau} is an axiom, it has a proof of length {1.} If in {s,} {\tau} is the result of applying the extension rule to some {\sigma,} then {\sigma} has (by the induction hypothesis) a proof {t} of the required form, and {t{}^\frown(\tau)} is a proof of {\tau} of the required form.

Finally, suppose that in {s,} {\tau} is the result of applying compression to {\tau0} and {\tau1.} By the induction hypothesis, {\tau0} and {\tau1} have proofs {s_0} and {s_1,} respectively, of the required form. If in {s_0} the extension rule is used, then it must in particular be used at the last step, so {\tau} appears in {s_0.} Restricting {s_0} to its initial segment that ends in {\tau} gives us a proof of {\tau} of the required form. Similarly with {s_1.}

We may then assume that extension is used in neither {s_0} nor {s_1.} So, for {i\in2,} {s_i=t_i{}^\frown r_i,} where {t_i} consists of applications of the axioms rule, and {r_i} consists of applications of compression. Then {t_0{}^\frown t_1{}^\frown r_0{}^\frown r_1{}^\frown(\tau)} is a proof of {\tau} of the required form. \Box
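The recursion in this proof can be written out directly as the normalization algorithm it describes. The encoding below is hypothetical (the post does not fix a representation of proofs): each step records the string derived, the rule used, and the steps it was inferred from; extension appends one symbol to its premise, and compression derives {\tau} from {\tau0} and {\tau1.}

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Step:
    string: str
    rule: str             # "axiom", "extension", or "compression"
    premises: tuple = ()  # the earlier Steps this one is inferred from

def normalize(step):
    """Return a proof (list of Steps) of step.string in the required
    form: axiom steps, then compression steps, then (at most a tail
    of) extension steps -- the transformation from the induction."""
    if step.rule == "axiom":
        return [step]
    if step.rule == "extension":
        t = normalize(step.premises[0])
        return t + [Step(step.string, "extension", (t[-1],))]
    # Compression: step.string = tau, the premises prove tau0 and tau1.
    s0, s1 = (normalize(p) for p in step.premises)
    for s in (s0, s1):
        if s[-1].rule == "extension":
            # In a normalized proof, extension occurs only at the end,
            # and the final extension step's premise is tau itself:
            # truncating there leaves a proof of tau of the right form.
            return s[:-1]
    # Neither subproof uses extension: axioms of both, then
    # compressions of both, then the final compression step.
    axioms = [x for x in s0 + s1 if x.rule == "axiom"]
    comps = [x for x in s0 + s1 if x.rule == "compression"]
    return axioms + comps + [Step(step.string, "compression",
                                  (s0[-1], s1[-1]))]
```

For instance, compressing axioms "10" and "11" to "1" and then extending back to "10" normalizes to the sequence axiom, axiom, compression, extension, while compressing a premise that was itself obtained by extension triggers the truncation branch.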
