It is common in algebraic settings to build new structures by taking quotients of old ones. This occurs in topology (building quotient spaces), in abstract algebra when building field extensions, and in the homomorphism theorems. Here we explore quotients in vector spaces.

First we briefly consider an example from differential equations.

Let $V=C^1(\mathbb R)$ be the space consisting of all continuously differentiable functions $f:\mathbb R\to\mathbb R$, and let $W=C(\mathbb R)$ be the space of all continuous functions $g:\mathbb R\to\mathbb R$. Let $T:V\to W$ be the linear transformation $T(f)=f'+f$.

Show that $\operatorname{null}(T)$ is a real vector space of dimension 1, by showing that $T(f)=0$ iff $e^xf(x)$ is a constant. This means, of course, that $\operatorname{null}(T)=\operatorname{span}(e^{-x})$.

Show that $e^x\in\operatorname{ran}(T)$, by finding a particular solution to the equation $T(f)=e^x$. One way of doing this is by looking for such a function of the form $ce^x$ for some constant $c$. Find the form of an arbitrary function $f$ such that $T(f)=e^x$, by noting that if $T(f_1)=T(f_2)=e^x$, then $f_1-f_2\in\operatorname{null}(T)$.

More generally, show that $T$ is surjective, by finding for any $g\in W$ the explicit form of the solutions $f$ to the equation $T(f)=g$. It may help you solve this equation if you first multiply both sides by $e^x$.
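The formulas in this example did not survive transcription; reading the transformation as $T(f)=f'+f$ (the reading consistent with the integrating-factor hint), the general solution of $T(f)=g$ can be checked symbolically. This is a sketch under that assumption, with $g=\sin x$ as my sample right-hand side:

```python
import sympy as sp

x, c = sp.symbols('x c')

# Assumed reading of the example: T(f) = f' + f.  Multiplying T(f) = g
# by e^x gives (e^x f)' = e^x g, so f = e^{-x} (int e^x g dx + c).
g = sp.sin(x)  # a sample right-hand side
f = sp.exp(-x) * (sp.integrate(sp.exp(x) * g, x) + c)

# Check that T(f) = g for every value of the constant c:
residual = sp.simplify(sp.diff(f, x) + f - g)
print(residual)  # 0
```

The constant $c$ contributes the homogeneous part $ce^{-x}$, which is exactly the one-dimensional null space described above.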

For another example, denote by $M$ the space of all $3\times 3$ matrices with real entries, and define a linear map $T:M\to\mathbb R^3$. Show explicitly that $\operatorname{null}(T)$ has dimension 6 and that $T$ is surjective.
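The specific definition of $T$ in this example was lost, so the following is an illustration only, using a hypothetical stand-in map $T(A)=Ae_1$ (the first column of $A$), which does have a 6-dimensional null space. The dimension count can be read off from the rank of the $3\times 9$ matrix of $T$ with respect to the standard bases:

```python
import numpy as np

# Hypothetical stand-in for the lost definition: T(A) = A e_1,
# i.e. T sends a 3x3 matrix to its first column.
def T(A):
    return A @ np.array([1.0, 0.0, 0.0])

# Matrix of T with respect to the standard bases of M (9-dim) and R^3:
E = np.eye(9)
M = np.column_stack([T(E[i].reshape(3, 3)) for i in range(9)])  # 3 x 9

rank = np.linalg.matrix_rank(M)
print(rank, 9 - rank)  # 3 6 : rank 3 means T is onto, nullity 9 - 3 = 6
```

Whatever the intended map was, the same rank computation settles both claims at once, since surjectivity is rank 3 and the nullity is $9-\operatorname{rank}$.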

Now we abstract certain features of these examples to a general setting:

Suppose $F$ is a field and $T:V\to W$ is a linear transformation between two $F$-vector spaces $V$ and $W$. It is not necessary to assume that $V$ or $W$ are finite-dimensional.

Let $w\in\operatorname{ran}(T)$, and let $v$ be any preimage, i.e., $T(v)=w$. Show that the set of all preimages of $w$ is precisely $v+\operatorname{null}(T)=\{v+u : u\in\operatorname{null}(T)\}$.
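A finite-dimensional sanity check of this description of the preimage set (the matrix and vectors below are my own example data): for $T(v)=Av$, every vector of the form $v_0+u$ with $u\in\operatorname{null}(T)$ is again a preimage of $w$.

```python
import numpy as np

# T : R^3 -> R^2 given by T(v) = A v, with one-dimensional null space.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
u = np.array([1.0, -1.0, 1.0])    # spans null(T): A @ u = 0
v0 = np.array([2.0, 3.0, 1.0])    # one chosen preimage
w = A @ v0                        # the target vector

# Every shift of v0 by an element of null(T) is also a preimage of w:
for t in (-2.0, 0.5, 7.0):
    assert np.allclose(A @ (v0 + t * u), w)
print("all shifted vectors map to w")
```

The converse inclusion is the content of the exercise: any two preimages differ by an element of $\operatorname{null}(T)$.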

Define a relation $\sim$ in $V$ by setting $v_1\sim v_2$ iff $v_1-v_2\in\operatorname{null}(T)$. Show that $\sim$ is an equivalence relation. Denote by $[v]$ the equivalence class of the vector $v$.

Let $V/{\sim}$ be the quotient of $V$ by $\sim$, i.e., the collection of equivalence classes of the relation $\sim$. We want to give $V/{\sim}$ the structure of an $F$-vector space. In order to do this, we define $[v]+[w]=[v+w]$ and $\alpha[v]=[\alpha v]$ for all $v,w\in V$ and $\alpha\in F$. Show that this is well-defined and satisfies the axioms of an $F$-vector space. What is the usual name we give to the 0 vector of this space?

It is standard to denote $V/{\sim}$ by $V/\operatorname{null}(T)$. Define two functions $\pi:V\to V/\operatorname{null}(T)$ and $\hat T:V/\operatorname{null}(T)\to W$ as follows: $\pi$ is given by $\pi(v)=[v]$. Also, $\hat T$ is given by $\hat T([v])=T(v)$. Show that $\hat T$ is well-defined and that both $\pi$ and $\hat T$ are linear.

Show that $\pi$ is a surjection, and that $\hat T$ is an isomorphism between $V/\operatorname{null}(T)$ and $\operatorname{ran}(T)$. In particular, any surjective image of a vector space $V$ by a linear map can be identified with a quotient of $V$.
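In finite dimensions, this isomorphism amounts to the rank-nullity count $\dim V-\dim\operatorname{null}(T)=\dim\operatorname{ran}(T)$. A quick numerical check with a random example matrix of my own choosing:

```python
import numpy as np

# For T(v) = Av, dim(V/null(T)) = dim V - dim null(T) must equal
# rank(A) = dim ran(T).  Random 4x7 example, so dim V = 7.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6)) @ rng.standard_normal((6, 7))

rank = np.linalg.matrix_rank(A)       # dim ran(T)
nullity = A.shape[1] - rank           # dim null(T)
print(rank, nullity)                  # they sum to dim V = 7
assert rank + nullity == A.shape[1]
```

No such computation is available in the infinite-dimensional examples above, which is precisely why the quotient construction is the right general formulation.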


This entry was posted on Friday, March 5th, 2010 at 12:32 pm and is filed under 403/503: Linear Algebra II. You can follow any responses to this entry through the RSS 2.0 feed.



In the second example, where T maps the real 3×3 matrices to R^3, you ask us to show explicitly that the dimension of null(T) is 6. By explicitly, do you mean that you actually want us to find a basis for null(T), or would it suffice to reference an appropriate theorem?

Thanks,

Nick

I would prefer an actual basis; I’ve been seeing some difficulties carrying out explicit computations in the previous homework sets.

When we define an equivalence relation, what is required to show that it is an equivalence relation? The book doesn’t discuss them, so from Google I found three properties: a~a; if a~b then b~a; and if a~b and b~c then a~c. Is this all that’s required to show it is one?

Thanks, Amy

Yes, that’s all that is needed. To be precise: You need to show that $v\sim v$ holds for all vectors $v$. (This is called the reflexive property.) Similarly, you need to show that, for any vectors $u,v,w$, if it is the case that $u\sim v$, then also $v\sim u$ (symmetry), and that if it is the case that $u\sim v$ and $v\sim w$, then also $u\sim w$ (transitivity).

I had a question referring to what I have called Show #7: “Show that this is well-defined and satisfies the axioms of an $F$-vector space.”

I’m not sure what you mean by “show that this is well-defined”. What exactly do I need to show? I tried to look this up, and I found things that said well-defined meant the mapping was one-to-one; is this what we are talking about here?

Thanks,

Summer

Hi Summer,

We are defining a map on a quotient. Whenever we do this, we typically run into the following problem:

I am defining $[v]+[w]=[v+w]$. However, if $v\sim v'$ and $w\sim w'$, then $[v]=[v']$ and $[w]=[w']$, right? This is potentially a problem, because we are also defining $[v']+[w']=[v'+w']$. But we had already said that $[v]+[w]=[v+w]$. So, there is the risk that the definition of sum we gave makes no sense: Which one is it: $[v+w]$ or $[v'+w']$?

The answer ought to be that both expressions are the same, i.e., that $[v+w]=[v'+w']$: no matter what elements of $[v]$ and $[w]$ are chosen, when we add them, we always land in the same equivalence class. This is usually expressed by saying that the definition of sum that we gave is independent of the specific representatives $v$ and $w$ that we choose in order to compute $v+w$, from which we then obtain $[v+w]$.

Then you have to argue that, similarly, if $v\sim v'$ then $\alpha v\sim\alpha v'$ for any scalar $\alpha$, so the definition of scalar multiplication we are giving is also independent of the specific representatives of the equivalence classes that we choose in order to compute it.

(The point is that when somebody hands you two equivalence classes $[v]$ and $[w]$ and asks you to add them, you are not given $v$ and $w$, only their classes. You have to pick an element of the first class, and one of the second, and then you add these two elements, and form the class of the result that you obtain, and that’s your answer. Of course, you want your answer not to change if you pick different elements of the first and second classes to begin with.)
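A tiny numeric illustration of this representative-independence (the vectors below are my own example data): with $\operatorname{null}(T)$ a line in $\mathbb R^3$, replacing $v$ and $w$ by other representatives of their classes changes the sum only by an element of $\operatorname{null}(T)$, so the class of the sum is unchanged.

```python
import numpy as np

# Example data: null(T) = span(u) inside R^3.
u = np.array([1.0, -1.0, 1.0])   # spans null(T)
v = np.array([2.0, 0.0, 5.0])
w = np.array([-1.0, 4.0, 3.0])
v2 = v + 2.0 * u                 # another representative of [v]
w2 = w - 3.0 * u                 # another representative of [w]

# The two candidate sums differ by an element of null(T):
diff = (v2 + w2) - (v + w)
print(diff)                      # equals -u, a member of span(u)
assert np.allclose(diff, -u)
```

Since the difference lies in $\operatorname{null}(T)$, the two sums are equivalent, i.e., $[v+w]=[v_2+w_2]$, which is exactly what well-definedness of the sum requires.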