
Dual Spaces 
Given any two vector spaces, $V$ and $W$, over a field $F$, we have defined $\operatorname{Hom}(V, W)$ to be the set of all vector space homomorphisms of $V$ into $W$. As yet $\operatorname{Hom}(V, W)$ is merely a set with no structure imposed on it. We shall now proceed to introduce operations in it which will turn it into a vector space over $F$. Actually we have already indicated how to do so in the descriptions of some of the problems in the earlier sections. However, we propose to treat the matter more formally here.
Let $S$ and $T$ be any two elements of $\operatorname{Hom}(V, W)$; this means that these are both vector space homomorphisms of $V$ into $W$. Recalling the definition of such a homomorphism, we must have $(v_1 + v_2)S = v_1 S + v_2 S$ and $(\alpha v)S = \alpha(vS)$ for all $v_1, v_2, v \in V$ and all $\alpha \in F$. The same conditions also hold for $T$.

We first want to introduce an addition for these elements $S$ and $T$ in $\operatorname{Hom}(V, W)$. What is more natural than to define $S + T$ by declaring $v(S + T) = vS + vT$ for all $v \in V$? We must, of course, verify that $S + T$ is in $\operatorname{Hom}(V, W)$. By the very definition of $S + T$, if $v_1, v_2 \in V$ then $(v_1 + v_2)(S + T) = (v_1 + v_2)S + (v_1 + v_2)T$; since $(v_1 + v_2)S = v_1 S + v_2 S$ and $(v_1 + v_2)T = v_1 T + v_2 T$, and since addition in $W$ is commutative, we get $(v_1 + v_2)(S + T) = (v_1 S + v_1 T) + (v_2 S + v_2 T)$. Once again invoking the definition of $S + T$, the right-hand side of this relation becomes $v_1(S + T) + v_2(S + T)$; we have shown that $(v_1 + v_2)(S + T) = v_1(S + T) + v_2(S + T)$. A similar computation shows that $(\alpha v)(S + T) = \alpha(v(S + T))$. Consequently $S + T$ is in $\operatorname{Hom}(V, W)$. Let $0$ be that homomorphism of $V$ into $W$ which sends every element of $V$ onto the zero element of $W$; for $S \in \operatorname{Hom}(V, W)$ let $-S$ be defined by $v(-S) = -(vS)$. It is immediate that $\operatorname{Hom}(V, W)$ is an abelian group under the addition defined above.

Having succeeded in introducing the structure of an abelian group on $\operatorname{Hom}(V, W)$, we now turn our attention to defining $\lambda S$ for $\lambda \in F$ and $S \in \operatorname{Hom}(V, W)$, our ultimate goal being that of making $\operatorname{Hom}(V, W)$ into a vector space over $F$. For $\lambda \in F$ and $S \in \operatorname{Hom}(V, W)$ we define $\lambda S$ by $v(\lambda S) = \lambda(vS)$ for all $v \in V$. We leave it to the reader to show that $\lambda S$ is in $\operatorname{Hom}(V, W)$ and that under the operations we have defined, $\operatorname{Hom}(V, W)$ is a vector space over $F$. However, we have no assurance that $\operatorname{Hom}(V, W)$ has any elements other than the zero homomorphism. Be that as it may, we have proved

LEMMA 4.3.1: $\operatorname{Hom}(V, W)$ is a vector space over $F$ under the operations described above.

A result such as that of Lemma 4.3.1 really gives us very little information; rather, it confirms for us that the definitions we have made are reasonable. We would prefer some results about $\operatorname{Hom}(V, W)$ that have more of a bite to them. Such a result is provided us in

THEOREM 4.3.1: If $V$ and $W$ are of dimensions $m$ and $n$, respectively, over $F$, then $\operatorname{Hom}(V, W)$ is of dimension $mn$ over $F$.

An immediate consequence of Theorem 4.3.1 is that whenever $V \neq (0)$ and $W \neq (0)$ are finite-dimensional vector spaces, then $\operatorname{Hom}(V, W)$ does not just consist of the element $0$, for its dimension over $F$ is $mn \geq 1$.
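The dimension count in Theorem 4.3.1 can be checked concretely once bases are fixed. The sketch below is our own illustration, not part of the text: it takes $F$ to be the rationals, represents an element of $\operatorname{Hom}(V, W)$ by the $m \times n$ matrix of coordinates of the images of the basis vectors, and exhibits the $mn$ "matrix units" that form a basis of $\operatorname{Hom}(V, W)$. The helper name `matrix_unit` is ours.

```python
# Sketch (our illustration, with F = Q): once bases are fixed, a
# homomorphism from an m-dimensional V to an n-dimensional W is an
# m x n matrix, and the mn matrix units E_ij (1 in position (i, j),
# 0 elsewhere) form a basis of Hom(V, W).
from fractions import Fraction

m, n = 2, 3  # dim V = 2, dim W = 3; an illustrative choice

def matrix_unit(i, j):
    """The homomorphism sending the i-th basis vector of V to the
    j-th basis vector of W and the other basis vectors to 0."""
    return [[Fraction(1) if (r, c) == (i, j) else Fraction(0)
             for c in range(n)] for r in range(m)]

units = [matrix_unit(i, j) for i in range(m) for j in range(n)]
assert len(units) == m * n  # mn elements, as Theorem 4.3.1 predicts

# Any A in Hom(V, W) is the combination sum_ij A[i][j] * E_ij, and the
# coefficients are forced, so the E_ij span and are independent.
A = [[Fraction(5), Fraction(-1), Fraction(2)],
     [Fraction(0), Fraction(7), Fraction(3)]]
recombined = [[sum(A[i][j] * matrix_unit(i, j)[r][c]
                   for i in range(m) for j in range(n))
               for c in range(n)] for r in range(m)]
assert recombined == A
print(len(units))  # 6 = mn
```

The choice of matrix units is exactly the basis-dependence discussed below: a different pair of bases for $V$ and $W$ yields a different basis of $\operatorname{Hom}(V, W)$.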
Some special cases of Theorem 4.3.1 are themselves of great interest, and we list these as corollaries.

COROLLARY 1: If $\dim_F V = m$, then $\dim_F \operatorname{Hom}(V, V) = m^2$.

COROLLARY 2: If $\dim_F V = m$, then $\dim_F \operatorname{Hom}(V, F) = m$.

Corollary 2 has the interesting consequence that if $V$ is finite-dimensional over $F$ it is isomorphic to $\operatorname{Hom}(V, F)$, for, by the corollary, they are of the same dimension over $F$, whence by Corollary 4 to Lemma 4.2.4 they must be isomorphic. This isomorphism has many shortcomings! Let us explain. It depends heavily on the finite-dimensionality of $V$, for if $V$ is not finite-dimensional no such isomorphism exists. There is no nice, formal construction of this isomorphism which holds universally for all vector spaces. It depends strongly on the specialities of the finite-dimensional situation. In a few pages we shall, however, show that a ``nice'' isomorphism does exist for any vector space $V$ into $\operatorname{Hom}(\operatorname{Hom}(V, F), F)$.

DEFINITION: If $V$ is a vector space, then its dual space is $\operatorname{Hom}(V, F)$.

We shall use the notation $\hat{V}$ for the dual space of $V$. An element of $\hat{V}$ will be called a linear functional on $V$ into $F$.

If $V$ is not finite-dimensional then $\hat{V}$ is usually too large and wild to be of interest. For such vector spaces we often have other additional structures, such as a topology, imposed, and then, as the dual space, one does not generally take all of our $\hat{V}$ but rather a properly restricted subspace. If $V$ is finite-dimensional, its dual space $\hat{V}$ is always defined, as we did it, as all of $\operatorname{Hom}(V, F)$.

In the proof of Theorem 4.3.1 we constructed a basis of $\operatorname{Hom}(V, W)$ using a particular basis of $V$ and one of $W$. The construction depended crucially on the particular bases we had chosen for $V$ and $W$, respectively. Had we chosen other bases we would have ended up with a different basis of $\operatorname{Hom}(V, W)$. As a general principle, it is preferable to give proofs, whenever possible, which are basis-free. Such proofs are usually referred to as invariant ones.
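Corollary 2 can be seen explicitly in coordinates. The sketch below is an illustration under our own assumptions ($F$ the rationals, $V = F^{(3)}$, helper name `functional` ours): every linear functional is of the form $x \mapsto c_1 x_1 + c_2 x_2 + c_3 x_3$, and it is recovered from its values on a basis, so the three coordinate functionals span $\operatorname{Hom}(V, F)$.

```python
# Sketch of Corollary 2 over F = Q with V = Q^3 (our example): the dual
# space consists of the coefficient forms x |-> c1*x1 + c2*x2 + c3*x3,
# and dim Hom(V, F) = 3 = dim V.
from fractions import Fraction

def functional(coeffs):
    """The linear functional x |-> sum_i coeffs[i] * x[i]."""
    return lambda x: sum(c * xi for c, xi in zip(coeffs, x))

e = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]      # standard basis of Q^3
coords = [functional(row) for row in e]    # the coordinate functionals

f = functional((Fraction(2), Fraction(-1), Fraction(5)))
v = (Fraction(3), Fraction(4), Fraction(1))

# f is recovered from its values on the basis: f = sum_i f(e_i) * coord_i,
# so the coordinate functionals span the dual space.
g_v = sum(f(ei) * coords[i](v) for i, ei in enumerate(e))
assert g_v == f(v) == Fraction(7)
print(f(v))  # 7
```

Note that identifying $v$ with "the functional whose coefficients are the coordinates of $v$" uses the chosen basis, which is precisely the non-invariance complained about above.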
An invariant proof or construction has the advantage, other than the mere aesthetic one, over a proof or construction using a basis, in that one does not have to worry how finely everything depends on a particular choice of bases.

The elements of $\hat{V}$ are functions defined on $V$ and having their values in $F$. In keeping with the functional notation, we shall usually write elements of $\hat{V}$ as $f$, $g$, etc., and denote the value of $f$ on $v \in V$ as $f(v)$ (rather than as $vf$).

Let $V$ be a finite-dimensional vector space over $F$ and let $v_1, \ldots, v_n$ be a basis of $V$; let $\hat{v}_i$ be the element of $\hat{V}$ defined by $\hat{v}_i(v_j) = 0$ for $j \neq i$, and $\hat{v}_i(v_i) = 1$. In fact the $\hat{v}_i$ are nothing but the $T_{i1}$ introduced in the proof of Theorem 4.3.1, for here $W = F$ is one-dimensional over $F$. Thus we know that $\hat{v}_1, \ldots, \hat{v}_n$ form a basis of $\hat{V}$. We call this basis the {\it dual basis} of $v_1, \ldots, v_n$. If $v \neq 0 \in V$, by Lemma 4.2.5 we can find a basis of the form $v_1 = v, v_2, \ldots, v_n$, and so there is an element in $\hat{V}$, namely $\hat{v}_1$, such that $\hat{v}_1(v_1) = \hat{v}_1(v) = 1 \neq 0$. We have proved

LEMMA 4.3.2: If $V$ is finite-dimensional and $v \neq 0 \in V$, then there is an element $f \in \hat{V}$ such that $f(v) \neq 0$.

In fact, Lemma 4.3.2 is true if $V$ is infinite-dimensional, but as we have no need for the result, and since its proof would involve logical questions that are not relevant at this time, we omit the proof.

Let $v_0 \in V$, where $V$ is any vector space over $F$. As $f$ varies over $\hat{V}$, and $v_0$ is kept fixed, $f(v_0)$ defines a functional on $\hat{V}$ into $F$; note that we are merely interchanging the role of function and variable. Let us denote this function by $T_{v_0}$; in other words $T_{v_0}(f) = f(v_0)$ for any $f \in \hat{V}$. What can we say about $T_{v_0}$? To begin with, $T_{v_0}(f + g) = (f + g)(v_0) = f(v_0) + g(v_0) = T_{v_0}(f) + T_{v_0}(g)$; furthermore, $T_{v_0}(\lambda f) = (\lambda f)(v_0) = \lambda f(v_0) = \lambda T_{v_0}(f)$. Thus $T_{v_0}$ is in the dual space of $\hat{V}$! We write this space as $\hat{\hat{V}}$ and refer to it as the second dual of $V$.

Given any element $v \in V$ we can associate with it an element $T_v$ in $\hat{\hat{V}}$. Define the mapping $\psi: V \to \hat{\hat{V}}$ by $\psi(v) = T_v$ for every $v \in V$. Is $\psi$ a homomorphism of $V$ into $\hat{\hat{V}}$? Indeed it is! For, $T_{v+w}(f) = f(v + w) = f(v) + f(w) = T_v(f) + T_w(f) = (T_v + T_w)(f)$, and so $T_{v+w} = T_v + T_w$, that is, $\psi(v + w) = \psi(v) + \psi(w)$. Similarly $\psi(\lambda v) = \lambda \psi(v)$. Thus $\psi$ defines a homomorphism of $V$ into $\hat{\hat{V}}$. The construction of $\psi$ used no basis or special properties of $V$; it is an example of an invariant construction.

When is $\psi$ an isomorphism? To answer this we must know when $\psi(v) = 0$, or equivalently, when $T_v = 0$. But if $T_v = 0$, then $T_v(f) = f(v) = 0$ for all $f \in \hat{V}$.
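The dual basis and the second-dual element $T_v$ can be checked numerically before we finish the argument. The sketch below is our own example (with $F$ the rationals and $V = F^{(2)}$; the names `hat` and `rows` are ours): the dual basis of a non-standard basis is read off from the rows of the inverse of the matrix whose columns are the basis vectors, so that $\hat{v}_i(v_j)$ is $1$ when $i = j$ and $0$ otherwise.

```python
# Numerical sketch of the dual basis (our example; F = Q, V = Q^2).
# The dual basis functionals of a basis v1, v2 are the rows of the
# inverse of the matrix whose columns are v1 and v2.
from fractions import Fraction as Fr

v1, v2 = (Fr(2), Fr(1)), (Fr(1), Fr(1))   # a non-standard basis of Q^2

# Invert the 2x2 matrix [v1 | v2] by the adjugate formula.
a, c = v1
b, d = v2
det = a * d - b * c
assert det != 0                            # v1, v2 really are a basis
rows = [(d / det, -b / det), (-c / det, a / det)]

def hat(i):
    """The dual basis functional corresponding to row i."""
    r = rows[i]
    return lambda x: r[0] * x[0] + r[1] * x[1]

for i in range(2):
    for j, vj in enumerate((v1, v2)):
        assert hat(i)(vj) == (1 if i == j else 0)  # defining relations

# The second-dual element of the text: T_v(f) = f(v), additive in f.
v = (Fr(3), Fr(5))
T_v = lambda f: f(v)
f, g = hat(0), hat(1)
assert T_v(lambda x: f(x) + g(x)) == T_v(f) + T_v(g)
print(hat(0)(v1), hat(1)(v2))  # 1 1
```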
However, as we pointed out, without proof, for a general vector space, given $v \neq 0$ there is an $f \in \hat{V}$ with $f(v) \neq 0$. We actually proved this when $V$ is finite-dimensional. Thus for $V$ finite-dimensional (and, in fact, for arbitrary $V$) $\psi$ is an isomorphism. However, when $V$ is finite-dimensional $\psi$ is an isomorphism onto $\hat{\hat{V}}$; when $V$ is infinite-dimensional $\psi$ is not onto.

If $V$ is finite-dimensional, by the second corollary to Theorem 4.3.1, $V$ and $\hat{V}$ are of the same dimension; similarly, $\hat{V}$ and $\hat{\hat{V}}$ are of the same dimension; since $\psi$ is an isomorphism of $V$ into $\hat{\hat{V}}$, the equality of the dimensions forces $\psi$ to be onto. We have proved

LEMMA 4.3.3: If $V$ is finite-dimensional, then $\psi$ is an isomorphism of $V$ onto $\hat{\hat{V}}$.

We henceforth identify $V$ and $\hat{\hat{V}}$, keeping in mind that this identification is being carried out by the isomorphism $\psi$.

DEFINITION: If $W$ is a subspace of $V$, then the annihilator of $W$ is $A(W) = \{f \in \hat{V} \mid f(w) = 0 \text{ for all } w \in W\}$.

We leave as an exercise to the reader the verification of the fact that $A(W)$ is a subspace of $\hat{V}$. Clearly if $U \subset W$, then $A(U) \supset A(W)$.

Let $W$ be a subspace of $V$, where $V$ is finite-dimensional. If $f \in \hat{V}$, let $\bar{f}$ be the restriction of $f$ to $W$; thus $\bar{f}$ is defined on $W$ by $\bar{f}(w) = f(w)$ for every $w \in W$. Since $f \in \hat{V}$, clearly $\bar{f} \in \hat{W}$. Consider the mapping $T: \hat{V} \to \hat{W}$ defined by $fT = \bar{f}$ for $f \in \hat{V}$. It is immediate that $(f + g)T = fT + gT$ and that $(\lambda f)T = \lambda(fT)$. Thus $T$ is a homomorphism of $\hat{V}$ into $\hat{W}$. What is the kernel of $T$? If $f$ is in the kernel of $T$, then the restriction of $f$ to $W$ must be $0$; that is, $f(w) = 0$ for all $w \in W$. Also, conversely, if $f(w) = 0$ for all $w \in W$, then $f$ is in the kernel of $T$. Therefore the kernel of $T$ is exactly $A(W)$.

We now claim that the mapping $T$ is onto $\hat{W}$. What we must show is that given any element $h \in \hat{W}$, then $h$ is the restriction of some $f \in \hat{V}$, that is, $h = \bar{f}$. By Lemma 4.2.5, if $w_1, \ldots, w_m$ is a basis of $W$, then it can be expanded to a basis of $V$ of the form $w_1, \ldots, w_m, v_1, \ldots, v_r$, where $m + r = \dim V$. Let $W_1$ be the subspace of $V$ spanned by $v_1, \ldots, v_r$. Thus $V = W \oplus W_1$. If $h \in \hat{W}$, define $f \in \hat{V}$ by: let $v \in V$ be written as $v = w + w_1$, where $w \in W$ and $w_1 \in W_1$; then $f(v) = h(w)$. It is easy to see that $f$ is in $\hat{V}$ and that $\bar{f} = h$. Thus $h = fT$ and so $T$ maps $\hat{V}$ onto $\hat{W}$.

Since the kernel of $T$ is $A(W)$, by Theorem 4.1.1, $\hat{W}$ is isomorphic to $\hat{V}/A(W)$. In particular they have the same dimension. Let $n = \dim V$ and $m = \dim W$. By Corollary 2 to Theorem 4.3.1, $\dim \hat{W} = m$ and $\dim \hat{V} = n$. However, by Lemma 4.2.6, $\dim \hat{V}/A(W) = n - \dim A(W)$, and so $m = n - \dim A(W)$. Transposing, $\dim A(W) = n - m = \dim V - \dim W$.
We have proved

THEOREM 4.3.2: If $V$ is finite-dimensional and $W$ is a subspace of $V$, then $\hat{W}$ is isomorphic to $\hat{V}/A(W)$, and $\dim A(W) = \dim V - \dim W$.
COROLLARY: $A(A(W)) = W$.

Theorem 4.3.2 has application to the study of systems of linear homogeneous equations. Consider the system of $m$ equations in $n$ unknowns
\[
\begin{aligned}
a_{11}x_1 + \cdots + a_{1n}x_n &= 0,\\
a_{21}x_1 + \cdots + a_{2n}x_n &= 0,\\
&\;\;\vdots\\
a_{m1}x_1 + \cdots + a_{mn}x_n &= 0,
\end{aligned}
\]
where the $a_{ij}$ are in $F$. We ask for the number of linearly independent solutions $(x_1, \ldots, x_n)$ there are in $F^{(n)}$ to this system.

In $F^{(n)}$ let $U$ be the subspace generated by the $m$ vectors $(a_{11}, \ldots, a_{1n}), (a_{21}, \ldots, a_{2n}), \ldots, (a_{m1}, \ldots, a_{mn})$, and suppose that $U$ is of dimension $r$. In that case we say the system of equations is of {\it rank} $r$.

Let $u_1 = (1, 0, \ldots, 0)$, $u_2 = (0, 1, 0, \ldots, 0)$, $\ldots$, $u_n = (0, 0, \ldots, 0, 1)$ be used as a basis of $F^{(n)}$ and let $\hat{u}_1, \ldots, \hat{u}_n$ be its dual basis in the dual space of $F^{(n)}$. Any $f$ in that dual space is of the form $f = x_1\hat{u}_1 + \cdots + x_n\hat{u}_n$, where the $x_i \in F$. When is $f \in A(U)$? In that case, since $(a_{11}, \ldots, a_{1n}) \in U$,
\[
0 = f(a_{11}u_1 + \cdots + a_{1n}u_n) = (x_1\hat{u}_1 + \cdots + x_n\hat{u}_n)(a_{11}u_1 + \cdots + a_{1n}u_n) = a_{11}x_1 + \cdots + a_{1n}x_n,
\]
since $\hat{u}_i(u_j) = 0$ for $i \neq j$ and $\hat{u}_i(u_i) = 1$. Similarly the other equations of the system are satisfied. Conversely, every solution $(x_1, \ldots, x_n)$ of the system of homogeneous equations yields an element, $x_1\hat{u}_1 + \cdots + x_n\hat{u}_n$, in $A(U)$. Thereby we see that the number of linearly independent solutions of the system of equations is the dimension of $A(U)$, which, by Theorem 4.3.2, is $n - r$. We have proved the following

THEOREM 4.3.3: If the system of homogeneous linear equations $a_{i1}x_1 + \cdots + a_{in}x_n = 0$, $i = 1, 2, \ldots, m$, where $a_{ij} \in F$, is of rank $r$, then there are $n - r$ linearly independent solutions in $F^{(n)}$.

COROLLARY: If $n > m$, that is, if the number of unknowns exceeds the number of equations, then there is a solution $(x_1, \ldots, x_n)$ where not all of $x_1, \ldots, x_n$ are $0$; for, since $r \leq m < n$, we have $n - r \geq 1$.
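Theorem 4.3.3 can be illustrated computationally. The sketch below is our own example over the rationals (the helper `rank` is ours): it row-reduces the coefficient matrix of $m = 2$ homogeneous equations in $n = 4$ unknowns to find the rank $r$; by the theorem, the system then has $n - r$ linearly independent solutions, which is the dimension of $A(U)$.

```python
# Hedged sketch of Theorem 4.3.3 over F = Q (example system ours):
# compute the rank r of the coefficient matrix by Gaussian elimination
# with exact rationals; the theorem gives n - r independent solutions.
from fractions import Fraction

def rank(matrix):
    """Rank via Gauss-Jordan elimination over exact rationals."""
    m = [[Fraction(x) for x in row] for row in matrix]
    rows, cols, r = len(m), len(m[0]), 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column
        m[r], m[pivot] = m[pivot], m[r]   # move pivot row up
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(rows):
            if i != r and m[i][c] != 0:
                m[i] = [x - m[i][c] * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

# 2 equations in 4 unknowns; the coefficient rows span a subspace U of
# Q^(4), here of dimension r = 2 since the rows are independent.
A = [[1, 2, 0, 1],
     [0, 1, 1, 3]]
n, r = 4, rank(A)
assert r == 2
print(n - r)  # 2 = dim A(U): two linearly independent solutions
```

Since $n = 4 > m = 2$, the corollary guarantees in advance that a nonzero solution exists.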