It is common in algebraic settings to build new structures by taking quotients of old ones. This occurs in topology (building quotient spaces), in abstract algebra when building field extensions, and in the homomorphism theorems. Here we explore quotients in vector spaces.

First we briefly consider an example from differential equations.

Let $V=C^1(\mathbb R)$ be the space consisting of all continuously differentiable functions $f:\mathbb R\to\mathbb R$, and let $W=C(\mathbb R)$ be the space of all continuous functions $g:\mathbb R\to\mathbb R$. Let $T:V\to W$ be the linear transformation $T(f)=f'+f$.

Show that $\mathrm{null}(T)$ is a real vector space of dimension 1, by showing that $T(f)=0$ iff $f$ is a constant multiple of $e^{-x}$. This means, of course, that $\mathrm{null}(T)=\{ce^{-x}:c\in\mathbb R\}$.

Show that $e^x\in\mathrm{ran}(T)$ by finding a particular solution to the equation $T(f)=e^x$. One way of doing this is by looking for such a function of the form $f(x)=ce^x$ for some constant $c$. Find the form of an arbitrary function $f$ such that $T(f)=e^x$ by noting that if $T(f_1)=T(f_2)=e^x$, then $T(f_1-f_2)=0$.

More generally, show that $T$ is surjective, by finding for any $g\in C(\mathbb R)$ the explicit form of the solutions to the equation $T(f)=g$. It may help you solve this equation if you first multiply both sides by $e^x$.
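
Reading the transformation in this example as $T(f)=f'+f$ (an assumption consistent with the integrating-factor hint), the recipe can be checked numerically: multiplying $f'+f=g$ by $e^x$ gives $(e^xf)'=e^xg$, so $f(x)=e^{-x}\left(C+\int_0^x e^t g(t)\,dt\right)$. A minimal Python sketch, with $g(t)=\cos t$ chosen just for illustration:

```python
import math

# Sanity check of the integrating-factor recipe for f' + f = g
# (the choice T(f) = f' + f and g(t) = cos t are illustrative assumptions).

def g(t):
    return math.cos(t)

def integral(func, a, b, n=2000):
    # Composite trapezoidal rule on [a, b].
    h = (b - a) / n
    s = 0.5 * (func(a) + func(b))
    for i in range(1, n):
        s += func(a + i * h)
    return s * h

def f(x, C=2.0):
    # General solution produced by the integrating factor e^x:
    # f(x) = e^{-x} (C + \int_0^x e^t g(t) dt).
    return math.exp(-x) * (C + integral(lambda t: math.exp(t) * g(t), 0.0, x))

# Verify f'(x) + f(x) ≈ g(x) at a few points, using central differences.
for x in (0.5, 1.0, 2.0):
    h = 1e-5
    fprime = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(fprime + f(x) - g(x)) < 1e-3
```

Note that the constant $C$ plays exactly the role of the one-dimensional null space: changing $C$ moves the solution by a multiple of $e^{-x}$.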

For another example, denote by $M_3(\mathbb R)$ the space of all $3\times 3$ matrices with real entries. Define a map $T:M_3(\mathbb R)\to\mathbb R^3$ by letting $T(A)=Ae_1$ be the first column of $A$. Show explicitly that $\mathrm{null}(T)$ has dimension 6 and that $T$ is surjective.
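
As a sanity check on the dimension count, such a map can be written as a $3\times 9$ matrix in the basis of matrix units and its rank computed directly. The sketch below assumes the map sends $A$ to its first column, $T(A)=Ae_1$ (an assumption; any map with a 6-dimensional null space behaves the same way for the count):

```python
# Compute rank and nullity of T : M_3(R) -> R^3, T(A) = A e_1,
# via Gaussian elimination on the 3x9 matrix of T. (The map is an assumed
# reconstruction of the exercise's map.)

def rank(rows):
    m = [row[:] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if abs(m[i][c]) > 1e-9), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        pv = m[r][c]
        m[r] = [x / pv for x in m[r]]
        for i in range(len(m)):
            if i != r:
                factor = m[i][c]
                m[i] = [x - factor * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

# Matrix of T in the basis of matrix units E_{jk}, flattened row by row:
# T(E_{j1}) is the j-th standard basis vector; T(E_{j2}) = T(E_{j3}) = 0.
T = [[0.0] * 9 for _ in range(3)]
for j in range(3):
    T[j][3 * j] = 1.0

r = rank(T)
nullity = 9 - r
print(r, nullity)  # → 3 6: rank 3 means T is surjective, and dim null(T) = 6
```

The rank-nullity theorem then confirms the count: $9 = \operatorname{rank}(T) + \dim\mathrm{null}(T) = 3 + 6$.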

Now we abstract certain features of these examples to a general setting:

Suppose $\mathbb F$ is a field and $T:V\to W$ is a linear transformation between two $\mathbb F$-vector spaces $V$ and $W$. It is not necessary to assume that $V$ or $W$ are finite dimensional.

Let $w\in\mathrm{ran}(T)$ and let $v$ be any preimage, i.e., $T(v)=w$. Show that the set of all preimages of $w$ is precisely $v+\mathrm{null}(T)=\{v+u : u\in\mathrm{null}(T)\}$.

Define a relation $\sim$ in $V$ by setting $v_1\sim v_2$ iff $v_1-v_2\in\mathrm{null}(T)$. Show that $\sim$ is an equivalence relation. Denote by $[v]$ the equivalence class of the vector $v$.

Let $V/{\sim}$ be the quotient of $V$ by $\sim$, i.e., the collection of equivalence classes of the relation $\sim$. We want to give $V/{\sim}$ the structure of an $\mathbb F$-vector space. In order to do this, we define $[v]+[w]=[v+w]$ and $\alpha[v]=[\alpha v]$ for all $v,w\in V$ and $\alpha\in\mathbb F$. Show that this is well-defined and satisfies the axioms of an $\mathbb F$-vector space. What is the usual name we give to the 0 vector of this space?
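
To make the quotient operations concrete, here is a toy model (hypothetical, not part of the problem set): take $V=\mathbb R^2$ and $T(x,y)=x$, so $\mathrm{null}(T)$ is the $y$-axis and the equivalence classes are the vertical lines. Each class can be tagged by its point on the $x$-axis:

```python
# Toy model of the quotient construction: V = R^2, T(x, y) = x (assumed
# example). Classes are vertical lines; canonical representative (x, 0).

def equivalent(v, w):
    # v ~ w  iff  v - w lies in null(T), i.e. same first coordinate.
    return v[0] - w[0] == 0

def cls(v):
    # Canonical representative of [v].
    return (v[0], 0)

def add(v, w):
    # [v] + [w] := [v + w], computed from any chosen representatives.
    return cls((v[0] + w[0], v[1] + w[1]))

def scale(a, v):
    # a[v] := [a v].
    return cls((a * v[0], a * v[1]))

# Well-definedness: different representatives of the same classes
# produce the same class for the sum and the scalar multiple.
v1, v2 = (1, 5), (1, -3)   # v1 ~ v2
w1, w2 = (2, 0), (2, 7)    # w1 ~ w2
assert equivalent(v1, v2) and equivalent(w1, w2)
assert add(v1, w1) == add(v2, w2) == (3, 0)
assert scale(4, v1) == scale(4, v2) == (4, 0)
```

The assertions check precisely the representative-independence that "well-defined" asks for.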

It is standard to denote $V/{\sim}$ by $V/\mathrm{null}(T)$. Define two functions $\pi:V\to V/\mathrm{null}(T)$ and $\hat T:V/\mathrm{null}(T)\to W$ as follows: $\pi$ is given by $\pi(v)=[v]$. Also, $\hat T$ is given by $\hat T([v])=T(v)$. Show that $\hat T$ is well-defined and that both $\pi$ and $\hat T$ are linear.

Show that $\pi$ is a surjection, and that $\hat T$ is an isomorphism between $V/\mathrm{null}(T)$ and $\mathrm{ran}(T)$. In particular, any surjective image of a vector space $V$ by a linear map can be identified with a quotient of $V$.
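
The two maps can also be illustrated on a toy model (hypothetical: $V=\mathbb R^2$, $T(x,y)=x$, each class represented by its point on the $x$-axis):

```python
# Toy model of the quotient map pi and the induced map T_hat
# (V = R^2 and T(x, y) = x are assumed for illustration only).

def T(v):
    return v[0]

def cls(v):
    # Canonical representative of [v]: the vertical line through v,
    # tagged by its point on the x-axis.
    return (v[0], 0)

def pi(v):
    # Quotient map V -> V/null(T), v |-> [v].
    return cls(v)

def T_hat(c):
    # Induced map on classes: T_hat([v]) = T(v); this is independent of
    # the representative because equivalent vectors share a first coordinate.
    return T(c)

# pi is onto: every class (x, 0) is the image of the vector (x, 0) itself.
assert pi((3, 0)) == (3, 0)
# T_hat is well-defined: equivalent vectors give the same value.
assert T_hat(pi((3, 7))) == T_hat(pi((3, -1))) == 3
```

Here $\hat T$ is a bijection from the classes onto $\mathrm{ran}(T)=\mathbb R$, matching the isomorphism the exercise asks for.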

This entry was posted on Friday, March 5th, 2010 at 12:32 pm and is filed under 403/503: Linear Algebra II.


In the second example, where T maps the real 3×3 matrices to R^3, you ask us to show explicitly that the dimension of null(T) is 6. By explicitly, do you mean that you actually want us to find a basis for null(T), or would it suffice to reference an appropriate theorem?

Thanks,

Nick

I would prefer an actual basis; I’ve seen some difficulties carrying out explicit computations in the previous homework sets.

When we define an equivalence relation, what is required to show that it is an equivalence relation? The book doesn’t discuss them, so from Google I found three properties: a~a; if a~b then b~a; and if a~b and b~c then a~c. Is this all that’s required to show it is one?

Thanks, Amy

Yes, that’s all that is needed. To be precise: You need to show that $u\sim u$ holds for all vectors $u$. (This is called the reflexive property.) Similarly, you need to show that, for any vectors $u,v,w$, if it is the case that $u\sim v$, then also $v\sim u$ (symmetry), and that if it is the case that $u\sim v$ and $v\sim w$, then also $u\sim w$ (transitivity).

I had a question referring to what I have called Show #7: “Show that this is well-defined and satisfies the axioms of an $\mathbb F$-vector space.”

I’m not sure what you mean by “show that this is well-defined.” What exactly do I need to show? I tried to look this up, and I found things that said well-defined meant the mapping was one-to-one; is this what we are talking about here?

Thanks,

Summer

Hi Summer,

We are defining a map on a quotient. Whenever we do this, we typically run into the following problem:

I am defining $[v]+[w]=[v+w]$. However, if $v\sim v'$ and $w\sim w'$, then $[v]=[v']$ and $[w]=[w']$, right? This is potentially a problem, because we are defining $[v']+[w']=[v'+w']$. But we had already said that $[v']+[w']=[v]+[w]=[v+w]$. So, there is the risk that the definition of sum we gave makes no sense: Which one is it: $[v+w]$ or $[v'+w']$?

The answer ought to be that both expressions are the same, i.e., that no matter what elements of $[v]$ and $[w]$ are chosen, when we add them, we always land in the same equivalence class. This is usually expressed by saying that the definition of sum that we gave is independent of the specific representatives $v$ and $w$ that we choose in order to compute $v+w$, from which we then obtain $[v+w]$.

Then you have to argue that, similarly, if $[v]=[v']$, then $[\alpha v]=[\alpha v']$ for any $\alpha\in\mathbb F$, so the definition of scalar multiplication we are giving is also independent of the specific representatives of the equivalence classes that we choose in order to compute it.

(The point is that when somebody hands you two equivalence classes $[v]$ and $[w]$ and asks you to add them, you are not given $v$ and $w$, only their classes. You have to pick an element of the first class, and one of the second, and then you add these two elements and form the class of the result that you obtain, and that’s your answer. Of course, you want your answer not to change if you pick different elements of the first and second classes to begin with.)
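
To see how a definition on classes can actually fail, here is a small hypothetical illustration (not from the problem set): on $V=\mathbb R^2$ with $T(x,y)=x$, an operation defined through the second coordinate of the representatives gives different answers for different representatives of the same class, so it is not well-defined:

```python
# Hypothetical counterexample: V = R^2, T(x, y) = x, classes = vertical
# lines, tagged by their x-coordinate. Defining an operation through the
# second coordinate of the representatives is NOT well-defined.

def cls(v):
    # Canonical representative of [v].
    return (v[0], 0)

def bad_op(v, w):
    # Attempted definition: [v] * [w] := [(v_y * w_y, 0)].
    # This uses data that varies within a class.
    return cls((v[1] * w[1], 0))

v1, v2 = (1, 2), (1, 5)   # same class: [v1] = [v2]
w = (3, 1)
assert cls(v1) == cls(v2)
# The "answer" depends on the representative chosen: (2, 0) vs (5, 0).
assert bad_op(v1, w) != bad_op(v2, w)
```

By contrast, $[v]+[w]=[v+w]$ passes this test precisely because $\mathrm{null}(T)$ is closed under addition.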