It is common in algebraic settings to build new structures by taking quotients of old ones. This occurs in topology (when building quotient spaces), in abstract algebra (when building field extensions), and in the homomorphism theorems. Here we explore quotients of vector spaces.

First we briefly consider an example from differential equations.

Let $V=C^1(\mathbb R)$ be the space consisting of all continuously differentiable functions $f\colon\mathbb R\to\mathbb R$, and let $W=C(\mathbb R)$ be the space of all continuous functions $g\colon\mathbb R\to\mathbb R$. Let $T\colon V\to W$ be the linear transformation given by $T(f)=f'+f$.

Show that $\operatorname{null}(T)$ is a real vector space of dimension 1, by showing that $T(f)=0$ iff $e^x f(x)$ is a constant. This means, of course, that $\operatorname{null}(T)=\operatorname{span}(e^{-x})$.

Show that $e^x\in\operatorname{ran}(T)$, by finding a particular solution to the equation $T(f)=e^x$. One way of doing this is by looking for such a function of the form $f(x)=Ce^x$ for some constant $C$. Find the form of an arbitrary function $f$ such that $T(f)=e^x$, by noting that if $T(f_1)=T(f_2)$, then $f_1-f_2\in\operatorname{null}(T)$.

More generally, show that $T$ is surjective, by finding, for any $g\in W$, the explicit form of the solutions $f$ to the equation $T(f)=g$. It may help you solve this equation if you first multiply both sides by $e^x$.
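
If the transformation here is, as the integrating-factor hint suggests, $T(f)=f'+f$, the resulting solution formula can be sanity-checked symbolically. This is an illustrative aside, not part of the assignment; the test function $g(x)=\cos x$ is an arbitrary choice:

```python
import sympy as sp

x, t = sp.symbols('x t')
g = sp.cos(x)  # any continuous g will do; cos is an arbitrary test choice

# Multiply f' + f = g by e^x to get (e^x f)' = e^x g, then integrate from 0 to x.
f_particular = sp.exp(-x) * sp.integrate(sp.exp(t) * g.subs(x, t), (t, 0, x))

# Check that applying T(f) = f' + f really gives back g.
residual = sp.simplify(sp.diff(f_particular, x) + f_particular - g)
print(residual)  # 0
```

The general solution is then this particular solution plus the one-dimensional null space, i.e., plus $Ce^{-x}$.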

For another example, denote by $M_{3\times 3}(\mathbb R)$ the space of all $3\times 3$ matrices with real entries, and define a linear map $T\colon M_{3\times 3}(\mathbb R)\to\mathbb R^3$. Show explicitly that $\operatorname{null}(T)$ has dimension 6 and that $T$ is surjective.
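
The specific formula for $T$ did not survive transcription, but the dimension count can be illustrated with a hypothetical stand-in, say $T(A)=Ae_1$ (the first column of $A$): by rank–nullity, a surjective map from a 9-dimensional space onto $\mathbb R^3$ has nullity $9-3=6$.

```python
import numpy as np

# Hypothetical stand-in for the map in the exercise: T(A) = A @ e1,
# i.e., T sends a 3x3 matrix to its first column.
def T(A):
    return A @ np.array([1.0, 0.0, 0.0])

# Represent T as a 3x9 matrix acting on flattened 3x3 matrices,
# one column per basis matrix E_j of M_{3x3}(R).
E = np.eye(9)
M = np.column_stack([T(E[:, j].reshape(3, 3)) for j in range(9)])

rank = np.linalg.matrix_rank(M)
print(rank, 9 - rank)  # 3 6  -> T is onto R^3 and null(T) has dimension 6
```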

Now we abstract certain features of these examples to a general setting:

Suppose $\mathbb F$ is a field and $T\colon V\to W$ is a linear transformation between two $\mathbb F$-vector spaces $V$ and $W$. It is not necessary to assume that $V$ or $W$ are finite dimensional.

Let $w\in\operatorname{ran}(T)$ and let $v$ be any preimage of $w$, i.e., $T(v)=w$. Show that the set of all preimages of $w$ is precisely $v+\operatorname{null}(T)=\{v+u : u\in\operatorname{null}(T)\}$.
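
In coordinates this is easy to see experimentally. Here $T$ is a hypothetical $2\times 3$ matrix chosen only for illustration; the point is that adding anything in the null space to one preimage of $w$ yields another preimage:

```python
import numpy as np

# A hypothetical concrete T : R^3 -> R^2, written as a matrix.
T = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
w = np.array([2.0, 3.0])

v = np.linalg.lstsq(T, w, rcond=None)[0]   # one particular preimage of w
u = np.array([1.0, 1.0, -1.0])             # a vector spanning null(T)

assert np.allclose(T @ u, 0)               # u is in null(T)
assert np.allclose(T @ v, w)               # v is a preimage of w
assert np.allclose(T @ (v + 5.0 * u), w)   # so is v + (anything in null(T))
```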

Define a relation $\sim$ in $V$ by setting $x\sim y$ iff $x-y\in\operatorname{null}(T)$. Show that $\sim$ is an equivalence relation. Denote by $[x]$ the equivalence class of the vector $x$.
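
For a finite sanity check, the three properties of an equivalence relation can be verified exhaustively in a small hypothetical example over the field $\mathbb F_2$:

```python
from itertools import product

# V = F_2^3 with a hypothetical linear map T(x1,x2,x3) = (x1 + x2, x3) into F_2^2.
def T(x):
    return ((x[0] + x[1]) % 2, x[2])

V = list(product((0, 1), repeat=3))

def sim(x, y):
    # x ~ y iff x - y lies in null(T) (arithmetic mod 2)
    d = tuple((a - b) % 2 for a, b in zip(x, y))
    return T(d) == (0, 0)

assert all(sim(x, x) for x in V)                              # reflexive
assert all(sim(y, x) for x in V for y in V if sim(x, y))      # symmetric
assert all(sim(x, z) for x in V for y in V for z in V
           if sim(x, y) and sim(y, z))                        # transitive
```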

Let $V/{\sim}$ be the quotient of $V$ by $\sim$, i.e., the collection of equivalence classes of the relation $\sim$. We want to give $V/{\sim}$ the structure of an $\mathbb F$-vector space. In order to do this, we define $[x]+[y]=[x+y]$ and $\alpha[x]=[\alpha x]$ for all $x,y\in V$ and $\alpha\in\mathbb F$. Show that this is well-defined and satisfies the axioms of an $\mathbb F$-vector space. What is the usual name we give to the 0 vector of this space?
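
The well-definedness requirement can be checked by brute force in a small hypothetical example over $\mathbb F_2$: for every pair of classes, every choice of representatives must produce the same sum class.

```python
from itertools import product

# V = F_2^3 with a hypothetical linear map T(x1,x2,x3) = (x1 + x2, x3).
def T(x):
    return ((x[0] + x[1]) % 2, x[2])

V = list(product((0, 1), repeat=3))
cls = lambda x: frozenset(y for y in V if T(y) == T(x))   # the class [x]
add = lambda x, y: tuple((a + b) % 2 for a, b in zip(x, y))

# Well-defined: the class of x' + y' is the same for every choice of
# representatives x' in [x] and y' in [y].
for x in V:
    for y in V:
        results = {cls(add(xp, yp)) for xp in cls(x) for yp in cls(y)}
        assert len(results) == 1
print("sum on V/null(T) is well-defined")
```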

It is standard to denote $V/{\sim}$ by $V/\operatorname{null}(T)$. Define two functions $\pi\colon V\to V/\operatorname{null}(T)$ and $\hat T\colon V/\operatorname{null}(T)\to\operatorname{ran}(T)$ as follows: $\pi$ is given by $\pi(x)=[x]$. Also, $\hat T$ is given by $\hat T([x])=T(x)$. Show that $\hat T$ is well-defined and that both $\pi$ and $\hat T$ are linear.

Show that $\pi$ is a surjection, and that $\hat T$ is an isomorphism between $V/\operatorname{null}(T)$ and $\operatorname{ran}(T)$. In particular, any surjective image of a vector space $V$ by a linear map can be identified with a quotient of $V$.
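
In a finite hypothetical example one can watch the isomorphism happen: the classes in $V/\operatorname{null}(T)$ match up one-to-one with the elements of $\operatorname{ran}(T)$.

```python
from itertools import product

# Same hypothetical finite example: V = F_2^3, T(x1,x2,x3) = (x1 + x2, x3).
def T(x):
    return ((x[0] + x[1]) % 2, x[2])

V = list(product((0, 1), repeat=3))
classes = {frozenset(y for y in V if T(y) == T(x)) for x in V}  # V / null(T)
ran_T = {T(x) for x in V}

# T-hat sends the class [x] to T(x); any representative gives the same value.
T_hat = {c: T(next(iter(c))) for c in classes}

assert len(classes) == len(ran_T)        # same number of classes as image points
assert set(T_hat.values()) == ran_T      # T-hat is onto ran(T), hence a bijection
print(len(classes), "classes,", len(ran_T), "elements in ran(T)")
```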


This entry was posted on Friday, March 5th, 2010 at 12:32 pm and is filed under 403/503: Linear Algebra II.



In the second example, where T maps the real 3×3 matrices to R^3, you ask us to show explicitly that the dimension of null(T) is 6. By explicitly, do you mean that you actually want us to find a basis for null(T), or would it suffice to reference an appropriate theorem?

Thanks,

Nick

I would prefer an actual basis; I have been seeing some difficulties carrying out explicit computations in the previous homework sets.

When we define an equivalence relation, what is required to show that it is an equivalence relation? The book doesn’t discuss them, so from Google I found three properties: a~a; if a~b then b~a; and if a~b and b~c then a~c. Is this all that’s required to show it is one?

Thanks, Amy

Yes, that’s all that is needed. To be precise: You need to show that $x\sim x$ holds for all vectors $x$. (This is called the reflexive property.) Similarly, you need to show that, for any vectors $x,y,z$, if it is the case that $x\sim y$, then also $y\sim x$ (symmetry), and that if it is the case that $x\sim y$ and $y\sim z$, then also $x\sim z$ (transitivity).

I had a question referring to what I have called Show #7: “Show that this is well-defined and satisfies the axioms of an $\mathbb F$-vector space.”

I’m not sure what you mean by “show that this is well-defined”. What exactly do I need to show? I tried to look this up, and I found things that said well-defined meant the mapping was one-to-one; is this what we are talking about here?

Thanks,

Summer

Hi Summer,

We are defining a map on a quotient. Whenever we do this, we typically run into the following problem:

I am defining $[x]+[y]=[x+y]$. However, if $x\sim x'$ and $y\sim y'$, then $[x]=[x']$ and $[y]=[y']$, right? This is potentially a problem, because we are also defining $[x']+[y']=[x'+y']$. But we had already said that $[x']+[y']=[x]+[y]=[x+y]$. So, there is the risk that the definition of sum we gave makes no sense: Which one is it: $[x+y]$ or $[x'+y']$?

The answer ought to be that both expressions are the same, i.e., that $[x+y]=[x'+y']$: no matter what elements of $[x]$ and $[y]$ are chosen, when we add them, we always land in the same equivalence class. This is usually expressed by saying that the definition of sum that we gave is independent of the specific representatives $x$ and $y$ that we choose in order to compute $x+y$, from which we then obtain $[x+y]$.

Then you have to argue that, similarly, if $x\sim x'$, then $\alpha x\sim\alpha x'$ for any scalar $\alpha$, so the definition of scalar multiplication we are giving is also independent of the specific representatives of the equivalence classes that we choose in order to compute it.

(The point is that when somebody hands you two equivalence classes $[x]$ and $[y]$ and asks you to add them, you are not given $x$ and $y$, only their classes. You have to pick an element of the first class, and one of the second, and then you add these two elements, and form the class of the result that you obtain, and that’s your answer. Of course, you want your answer not to change if you pick different elements of the first and second classes to begin with.)
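
A sketch of this representative-independence check in the most familiar quotient of all, the integers mod 5 (a hypothetical aside, not part of the exercise):

```python
# In Z / 5Z, the class of a sum does not depend on which representatives we add.
n = 5
cls = lambda a: a % n  # label each class by its least nonnegative representative

for a in range(n):
    for b in range(n):
        # pick wildly different representatives of [a] and of [b]
        for k, m in ((0, 0), (3, -2), (-7, 11)):
            assert cls((a + k * n) + (b + m * n)) == cls(a + b)
print("addition on Z/5Z is independent of representatives")
```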