Linear Maps - II

We will now discuss certain additional topics related to linear maps that are useful in the study of continuum mechanics.

Important invariants of linear maps

The fact that the set of all volume forms on a finite dimensional inner product space is itself a vector space of dimension one permits a basis-independent means to define a variety of useful functions on linear maps. Two such functions, the determinant and the trace of a linear map, are discussed now. These functions are called invariants of linear maps since their definitions are independent of the choice of a basis.

Determinant

Given a linear map $T : V \to V$, where $V$ is a finite dimensional inner product space of dimension $n$, the determinant of $T$, written $\det T$, is defined as follows: for any $v_1, \ldots, v_n \in V$,

$$\omega(T v_1, \ldots, T v_n) = (\det T)\, \omega(v_1, \ldots, v_n).$$

Here, $\omega$ is a chosen volume form on $V$. To see that this definition of the determinant is well-defined, note that it is possible to define a volume form $\omega_T$ on $V$, given $\omega$ and $T$, as follows: for any $v_1, \ldots, v_n \in V$,

$$\omega_T(v_1, \ldots, v_n) = \omega(T v_1, \ldots, T v_n).$$

The fact that the set of all volume forms is a vector space of dimension one implies that $\omega_T$ is a scalar multiple of $\omega$. This scalar multiple is, in fact, defined as the determinant of $T$.

Example

To see how this definition is related to the determinant of a matrix encountered in elementary linear algebra, it is helpful to work in $\mathbb{R}^3$. The determinant of a linear map $T : \mathbb{R}^3 \to \mathbb{R}^3$ is computed as follows: choosing $\{e_1, e_2, e_3\}$ to be the standard basis of $\mathbb{R}^3$, and $\omega$ to be the standard volume form on $\mathbb{R}^3$,

$$\det T = \omega(T e_1, T e_2, T e_3) = \omega(T_{i1} e_i, T_{j2} e_j, T_{k3} e_k) = T_{i1} T_{j2} T_{k3}\, \omega(e_i, e_j, e_k) = \epsilon_{ijk}\, T_{i1} T_{j2} T_{k3}.$$

Note that the final expression is the familiar expression for the determinant of the matrix $[T_{ij}]$. It is a good exercise to expand this sum and check that it indeed reduces to the familiar expression for the determinant.
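For readers who wish to experiment, the component formula above is easy to verify numerically. The following minimal sketch (assuming NumPy is available; the helpers `levi_civita` and `det3` are ours, purely illustrative) evaluates $\epsilon_{ijk}\, T_{i1} T_{j2} T_{k3}$ for a random matrix and compares it against `np.linalg.det`:

```python
import itertools
import numpy as np

def levi_civita(i, j, k):
    # Permutation symbol for indices 0, 1, 2: +1/-1 for even/odd
    # permutations, 0 when any two indices coincide.
    return (j - i) * (k - j) * (k - i) // 2

def det3(T):
    # det T = epsilon_ijk T_i1 T_j2 T_k3 (indices here run over 0, 1, 2).
    return sum(levi_civita(i, j, k) * T[i, 0] * T[j, 1] * T[k, 2]
               for i, j, k in itertools.product(range(3), repeat=3))

T = np.random.rand(3, 3)
assert np.isclose(det3(T), np.linalg.det(T))
```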

Example

To see the advantage in the abstract and basis-independent definition of the determinant provided here, consider the determinant of the product of two linear maps $S, T : V \to V$. Given any $v_1, \ldots, v_n \in V$,

$$\omega(S T v_1, \ldots, S T v_n) = (\det S)\, \omega(T v_1, \ldots, T v_n) = (\det S)(\det T)\, \omega(v_1, \ldots, v_n).$$

Notice how this proof of the fact that $\det(ST) = (\det S)(\det T)$ is significantly simpler than the proof in terms of the elementary definition of the determinant in terms of the components of a linear map with respect to a suitable choice of bases.
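As a quick numerical illustration (a minimal sketch, again assuming NumPy), the product rule can be checked on random matrices:

```python
import numpy as np

S, T = np.random.rand(3, 3), np.random.rand(3, 3)
# det(ST) = (det S)(det T): the volume scaling of a composition is the
# product of the individual volume scalings.
assert np.isclose(np.linalg.det(S @ T),
                  np.linalg.det(S) * np.linalg.det(T))
```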

Remark

It is left as a simple exercise to verify, along the same lines as the previous example, that for any $S, T : V \to V$, in general $\det(S + T) \neq \det S + \det T$. For instance, with $S = T = I$ on a three dimensional space, $\det(S + T) = 8$, whereas $\det S + \det T = 2$. The determinant is thus not a linear map.

Example

Let $a, b$ be any two vectors in an inner product space $V$ of dimension $n \geq 2$. Then the determinant of the linear map $a \otimes b$ is always $0$. To see this, note that for any $v_1, \ldots, v_n \in V$,

$$(\det(a \otimes b))\, \omega(v_1, \ldots, v_n) = \omega((b \cdot v_1)\, a, \ldots, (b \cdot v_n)\, a) = (b \cdot v_1) \cdots (b \cdot v_n)\, \omega(a, \ldots, a) = 0.$$

The last step follows from the skew-symmetry of the volume form $\omega$. We have thus demonstrated that $\det(a \otimes b) = 0$.
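In matrix terms, $a \otimes b$ is the rank-one matrix built by `np.outer`, and its determinant indeed vanishes, as the following sketch (assuming NumPy) confirms:

```python
import numpy as np

a, b = np.random.rand(3), np.random.rand(3)
# a (x) b maps every vector into the span of a, so it cannot preserve
# any volume: its determinant is zero (equivalently, it has rank one).
assert np.isclose(np.linalg.det(np.outer(a, b)), 0.0)
```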

Example

Suppose that $T$ is a linear map on an $n$-dimensional inner product space $V$. Then, for any real number $\alpha$,

$$\det(\alpha T) = \alpha^n \det T.$$

This is easily proved as follows: for any $v_1, \ldots, v_n \in V$,

$$\omega(\alpha T v_1, \ldots, \alpha T v_n) = \alpha^n\, \omega(T v_1, \ldots, T v_n) = \alpha^n (\det T)\, \omega(v_1, \ldots, v_n).$$
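The scaling rule is likewise easy to check numerically (a minimal sketch assuming NumPy):

```python
import numpy as np

n = 3
T, alpha = np.random.rand(n, n), 1.7
# Scaling each of the n arguments of the volume form by alpha multiplies
# the volume by alpha^n.
assert np.isclose(np.linalg.det(alpha * T),
                  alpha**n * np.linalg.det(T))
```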

Trace

The trace of the linear map $T : V \to V$, written $\operatorname{tr} T$, is defined as follows: for any $v_1, \ldots, v_n \in V$,

$$(\operatorname{tr} T)\, \omega(v_1, \ldots, v_n) = \omega(T v_1, v_2, \ldots, v_n) + \omega(v_1, T v_2, \ldots, v_n) + \cdots + \omega(v_1, v_2, \ldots, T v_n).$$

Despite the cumbersome form of the definition of the trace of a linear map adopted here, it will prove to be very convenient later on.

Example

To understand this definition better, consider the special case of a linear map $T : \mathbb{R}^3 \to \mathbb{R}^3$. In this case, with $\{e_1, e_2, e_3\}$ denoting the standard basis of $\mathbb{R}^3$, it easily follows from the definition that

$$(\operatorname{tr} T)\, \omega(e_i, e_j, e_k) = \omega(T e_i, e_j, e_k) + \omega(e_i, T e_j, e_k) + \omega(e_i, e_j, T e_k) = (T_{ii} + T_{jj} + T_{kk})\, \omega(e_i, e_j, e_k).$$

The last expression follows from the fact that $\omega(e_i, e_j, e_k)$ is non-zero only when $i, j, k$ are distinct. Note also that there is no summation over the repeated indices in the final expression. Choosing $(i, j, k) = (1, 2, 3)$, the familiar expression for the trace of $T$ is recovered:

$$\operatorname{tr} T = T_{11} + T_{22} + T_{33} = T_{ii}.$$

Example

Unlike the determinant, the trace operation is a linear map. To see this, note that given any two linear maps $S, T$ on an $n$-dimensional inner product space $V$ and real numbers $\alpha, \beta$,

$$(\operatorname{tr}(\alpha S + \beta T))\, \omega(v_1, \ldots, v_n) = \sum_{i=1}^{n} \omega(v_1, \ldots, (\alpha S + \beta T) v_i, \ldots, v_n) = \alpha \sum_{i=1}^{n} \omega(v_1, \ldots, S v_i, \ldots, v_n) + \beta \sum_{i=1}^{n} \omega(v_1, \ldots, T v_i, \ldots, v_n) = (\alpha \operatorname{tr} S + \beta \operatorname{tr} T)\, \omega(v_1, \ldots, v_n),$$

where $v_1, \ldots, v_n$ are any set of vectors in $V$.
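Linearity of the trace is immediate to verify numerically (a minimal sketch assuming NumPy):

```python
import numpy as np

S, T = np.random.rand(3, 3), np.random.rand(3, 3)
alpha, beta = 2.0, -0.5
# tr(alpha S + beta T) = alpha tr S + beta tr T.
assert np.isclose(np.trace(alpha * S + beta * T),
                  alpha * np.trace(S) + beta * np.trace(T))
```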

Example

Many identities involving the trace operation are conveniently proved by working with representations with respect to an orthonormal basis of the vector space under consideration. For instance, suppose that $V$ is an $n$-dimensional inner product space, and let $a, b \in V$ be any two vectors. Let us prove that

$$\operatorname{tr}(a \otimes b) = a \cdot b.$$

Let $\{e_1, \ldots, e_n\}$ be an orthonormal basis of $V$. Then, with respect to this basis, the trace of the linear map $a \otimes b$ can be written as

$$\operatorname{tr}(a \otimes b) = (a \otimes b)_{ii} = a_i b_i = a \cdot b.$$

Note that despite the fact that we have used a special basis to prove this fact, the equation $\operatorname{tr}(a \otimes b) = a \cdot b$ remains true in general. This is because the final equation has no explicit dependence on the basis vectors. This is a useful trick in proving many identities that we will often exploit later.
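A one-line numerical check of this identity (a sketch assuming NumPy):

```python
import numpy as np

a, b = np.random.rand(3), np.random.rand(3)
# tr(a (x) b) = a . b; np.outer builds the matrix of the dyad a (x) b.
assert np.isclose(np.trace(np.outer(a, b)), np.dot(a, b))
```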

Example

As another example of the foregoing trick in proving identities involving traces, let us now show that given any two linear maps $S, T$ on a finite dimensional inner product space $V$,

$$\operatorname{tr}(ST) = \operatorname{tr}(TS).$$

If $\{e_1, \ldots, e_n\}$ is an orthonormal basis of $V$, then

$$\operatorname{tr}(ST) = (ST)_{ii} = S_{ij} T_{ji} = T_{ji} S_{ij} = (TS)_{jj} = \operatorname{tr}(TS).$$

The identity is thus proved.
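Again, a quick numerical confirmation (a sketch assuming NumPy):

```python
import numpy as np

S, T = np.random.rand(3, 3), np.random.rand(3, 3)
# The trace is invariant under cyclic permutation of a product.
assert np.isclose(np.trace(S @ T), np.trace(T @ S))
```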

It is possible to introduce an inner product on the space $\mathcal{L}(V, V)$ of all linear maps from $V$ into $V$ as follows. Given $S, T \in \mathcal{L}(V, V)$, the inner product $\langle S, T \rangle$ is defined as follows:

$$\langle S, T \rangle = \operatorname{tr}(S T^{\mathsf{T}}).$$

It is left as an easy exercise to verify that this is indeed an inner product on $\mathcal{L}(V, V)$. Note also that, with respect to an orthonormal basis, $\langle S, T \rangle = S_{ij} T_{ij}$.
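The component form $\langle S, T \rangle = S_{ij} T_{ij}$ makes this inner product easy to evaluate numerically, as the following sketch (assuming NumPy) illustrates:

```python
import numpy as np

S, T = np.random.rand(3, 3), np.random.rand(3, 3)
# <S, T> = tr(S T^t) = S_ij T_ij (summed over both indices).
inner = np.trace(S @ T.T)
assert np.isclose(inner, np.sum(S * T))
```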

Example

As an illustration of the inner product just introduced, let us prove a very useful way of rewriting the expression for the trace of a linear map $T$ on a finite dimensional inner product space $V$. If $I$ denotes the identity map, then

$$\operatorname{tr} T = \langle T, I \rangle.$$

To see this, choose an orthonormal basis $\{e_1, \ldots, e_n\}$ of $V$. With respect to this basis,

$$\langle T, I \rangle = T_{ij} \delta_{ij} = T_{ii} = \operatorname{tr} T.$$

It is a good exercise to prove this using a general basis of $V$.
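A direct numerical check of this rewriting (a sketch assuming NumPy):

```python
import numpy as np

T = np.random.rand(3, 3)
I = np.eye(3)
# tr T = <T, I> = tr(T I^t).
assert np.isclose(np.trace(T), np.trace(T @ I.T))
```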

Example

Given linear maps $R, S, T$ on a finite dimensional inner product space $V$, we have the following identities:

$$\langle RS, T \rangle = \langle S, R^{\mathsf{T}} T \rangle, \qquad \langle RS, T \rangle = \langle R, T S^{\mathsf{T}} \rangle.$$

To see why, note that

$$\langle RS, T \rangle = \operatorname{tr}(R S T^{\mathsf{T}}) = \operatorname{tr}(S T^{\mathsf{T}} R) = \operatorname{tr}\big(S (R^{\mathsf{T}} T)^{\mathsf{T}}\big) = \langle S, R^{\mathsf{T}} T \rangle.$$

In proving this, we have used the fact that the trace of the transpose of a linear map is just the trace of that linear map - a fact that is easily checked. The other identity is similarly proved by noting that

$$\langle RS, T \rangle = \operatorname{tr}(R S T^{\mathsf{T}}) = \operatorname{tr}\big(R (T S^{\mathsf{T}})^{\mathsf{T}}\big) = \langle R, T S^{\mathsf{T}} \rangle.$$
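Both identities can be verified on random matrices (a minimal sketch assuming NumPy; `inner` is our illustrative helper for $\langle \cdot, \cdot \rangle$):

```python
import numpy as np

R, S, T = (np.random.rand(3, 3) for _ in range(3))
inner = lambda A, B: np.trace(A @ B.T)  # <A, B> = tr(A B^t)
# <RS, T> = <S, R^t T> and <RS, T> = <R, T S^t>.
assert np.isclose(inner(R @ S, T), inner(S, R.T @ T))
assert np.isclose(inner(R @ S, T), inner(R, T @ S.T))
```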

Special groups of linear maps

To conclude the current introductory discussion on tensor algebra, a few important classes of linear maps are considered now. Throughout this discussion, $V$ stands for an inner product space of dimension $n$, and attention is focused on certain special subsets of $\mathcal{L}(V, V)$.

Groups

It is useful at this juncture to introduce the notion of a group. A set $G$ is said to be a group if there exists a map $* : G \times G \to G$ that satisfies the following properties:

  1. Associativity of the group operation: for any $g_1, g_2, g_3 \in G$, $(g_1 * g_2) * g_3 = g_1 * (g_2 * g_3)$,

  2. Existence of a group identity: there exists $e \in G$, called the group identity, such that for any $g \in G$, $g * e = e * g = g$,

  3. Existence of inverses: for every $g \in G$, there exists $g^{-1} \in G$, called the inverse of $g$, such that $g * g^{-1} = g^{-1} * g = e$.

If it is further the case that $g_1 * g_2 = g_2 * g_1$ for every $g_1, g_2 \in G$, then $G$ is said to be a commutative group, or an Abelian group. A subset $H$ of $G$ is said to be a sub-group of $G$ if $H$ is a group by itself, with respect to the same group operation.

As a simple example, note that given any vector space $V$, $V$ itself forms a commutative group with vector addition as the group operation: for any $u, v \in V$, $u * v = u + v$. In this case the zero vector $0$ is the group identity, and for any $v \in V$, $-v$ is the inverse of $v$.

As another example more relevant to the current discussion, consider the set $GL(n)$ consisting of all invertible $n \times n$ matrices, and define the binary map $*$ as follows: given $A, B \in GL(n)$, $A * B = AB$. Here, $AB$ denotes the familiar matrix multiplication of the matrices $A$ and $B$. It is easy to check that $GL(n)$ is a group with respect to this binary operation. Note that $GL(n)$ is not a commutative group when $n \geq 2$.

Additive groups of linear maps

A linear map $S \in \mathcal{L}(V, V)$ is said to be symmetric if $S^{\mathsf{T}} = S$, and is said to be skew-symmetric if $S^{\mathsf{T}} = -S$. To see the connection with the elementary definitions of symmetry and skew-symmetry, let $\{e_1, \ldots, e_n\}$ be an orthonormal basis of $V$. If $S$ is symmetric, it follows that

$$S_{ij} = e_i \cdot S e_j = S^{\mathsf{T}} e_i \cdot e_j = S e_i \cdot e_j = S_{ji}.$$

It follows from a similar argument that if $S$ is skew-symmetric, then $S_{ij} = -S_{ji}$.

The set of all symmetric linear maps on $V$ is called the symmetric linear group on $V$, and written $\operatorname{Sym}(V)$:

$$\operatorname{Sym}(V) = \{ S \in \mathcal{L}(V, V) : S^{\mathsf{T}} = S \}.$$

The skew-symmetric linear group on $V$, written $\operatorname{Skw}(V)$, is similarly defined as

$$\operatorname{Skw}(V) = \{ S \in \mathcal{L}(V, V) : S^{\mathsf{T}} = -S \}.$$

The group operation for both $\operatorname{Sym}(V)$ and $\operatorname{Skw}(V)$ is the addition of linear maps. Note that both $\operatorname{Sym}(V)$ and $\operatorname{Skw}(V)$ are sub-groups of $\mathcal{L}(V, V)$, the group operation being the vector addition in $\mathcal{L}(V, V)$.

Remark

There is an important relationship between skew-symmetric linear maps and the cross product, in the context of the three dimensional Euclidean space $\mathbb{R}^3$. Given any $w \in \mathbb{R}^3$, associate with it the linear map $W$, defined as follows: for any $x \in \mathbb{R}^3$,

$$W x = w \times x.$$

It is straightforward to check that $W$ is indeed a linear map. To verify that it is skew-symmetric, note that for any $x, y \in \mathbb{R}^3$,

$$x \cdot W y = x \cdot (w \times y) = -(w \times x) \cdot y = -(W x) \cdot y.$$

Since this is true for any $x, y \in \mathbb{R}^3$, it follows that $W^{\mathsf{T}} = -W$. The vector $w$ is called the axial vector of the skew-symmetric linear map $W$. In terms of the standard basis $\{e_1, e_2, e_3\}$ of $\mathbb{R}^3$, it can be verified using a simple calculation that, for any $x \in \mathbb{R}^3$,

$$W x = (w_2 x_3 - w_3 x_2)\, e_1 + (w_3 x_1 - w_1 x_3)\, e_2 + (w_1 x_2 - w_2 x_1)\, e_3.$$

Using matrix notation, the foregoing equations can be written as follows:

$$\begin{bmatrix} (Wx)_1 \\ (Wx)_2 \\ (Wx)_3 \end{bmatrix} = \begin{bmatrix} 0 & -w_3 & w_2 \\ w_3 & 0 & -w_1 \\ -w_2 & w_1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}.$$

Note that the matrix representation of the skew-symmetric linear map $W$ is also skew-symmetric, as expected.
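The matrix form of the axial-vector correspondence is easy to verify numerically. The following sketch (assuming NumPy; the helper `axial_matrix` is ours, purely illustrative) builds $W$ from $w$ and checks it against `np.cross`:

```python
import numpy as np

def axial_matrix(w):
    # Matrix of the skew-symmetric map W with axial vector w: W x = w x x.
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

w, x = np.random.rand(3), np.random.rand(3)
W = axial_matrix(w)
assert np.allclose(W @ x, np.cross(w, x))  # W x equals the cross product w x x
assert np.allclose(W.T, -W)                # W is skew-symmetric
```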

A linear map $S \in \mathcal{L}(V, V)$ is said to be positive definite if it is true that for any $v \in V$, $v \cdot S v \geq 0$, and $v \cdot S v = 0$ iff $v = 0$. An especially important subset of $\operatorname{Sym}(V)$ is $\operatorname{Psym}(V)$, the set of all symmetric and positive definite linear maps on $V$. Note that $\operatorname{Psym}(V)$ is closed under the addition in $\mathcal{L}(V, V)$, although it is not a sub-group of $\operatorname{Sym}(V)$, since the zero map is not positive definite. This set will turn out to be very useful in the study of continuum mechanics.

Multiplicative groups of linear maps

The general linear group on $V$, written $GL(V)$, is the set of all linear maps on $V$ with non-zero determinant:

$$GL(V) = \{ T \in \mathcal{L}(V, V) : \det T \neq 0 \}.$$

The group operation, in this case, is the product of linear maps: given $S, T \in GL(V)$, $S * T = ST$. It is easy to check that $GL(V)$ is indeed a group: indeed, given any $S, T \in GL(V)$, note that $\det(ST) = (\det S)(\det T) \neq 0$. Note that if $T \in GL(V)$, then $T$ is invertible: this means that there exists a linear map $T^{-1} \in \mathcal{L}(V, V)$, called the inverse of $T$, such that

$$T T^{-1} = T^{-1} T = I,$$

where $I$ is the identity map in $\mathcal{L}(V, V)$.

A few important sub-groups of $GL(V)$ are discussed next. The set of all linear maps with determinant $1$ is called the special linear group on $V$, and written $SL(V)$:

$$SL(V) = \{ T \in GL(V) : \det T = 1 \}.$$

A linear map $Q \in GL(V)$ is said to be orthogonal if the inverse of $Q$ is the transpose of $Q$. The set of all orthogonal linear maps, written $O(V)$, is called the orthogonal group of $V$:

$$O(V) = \{ Q \in GL(V) : Q Q^{\mathsf{T}} = Q^{\mathsf{T}} Q = I \}.$$

The determinant of an orthogonal linear map is $\pm 1$. To see this, note that

$$1 = \det I = \det(Q Q^{\mathsf{T}}) = (\det Q)(\det Q^{\mathsf{T}}) = (\det Q)^2.$$

In deriving this result, use has been made of the fact that $\det T^{\mathsf{T}} = \det T$ for any $T \in \mathcal{L}(V, V)$. The set of all orthogonal maps with determinant equal to $+1$ is called the special orthogonal group on $V$, written $SO(V)$:

$$SO(V) = \{ Q \in O(V) : \det Q = 1 \}.$$

It is straightforward to check that $SO(V) = O(V) \cap SL(V)$.
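As a concrete illustration (a minimal sketch assuming NumPy), a rotation about $e_3$ is orthogonal with determinant $+1$, while a reflection is orthogonal with determinant $-1$:

```python
import numpy as np

t = 0.7  # rotation angle about e3
Q = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])
assert np.allclose(Q.T @ Q, np.eye(3))     # Q is orthogonal
assert np.isclose(np.linalg.det(Q), 1.0)   # so Q lies in SO(3)

R = np.diag([1.0, 1.0, -1.0])              # reflection across the e1-e2 plane
assert np.allclose(R.T @ R, np.eye(3))     # orthogonal, but
assert np.isclose(np.linalg.det(R), -1.0)  # not in SO(3)
```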

Remark

In the special case when $V = \mathbb{R}^3$, it is customary to denote the various groups discussed above as $GL(3)$, $SL(3)$, $O(3)$, and $SO(3)$.

Eigenvalues and Eigenvectors of linear maps

To conclude the discussion of the algebraic preliminaries, a simplified introduction to the important ideas of eigenvalues and eigenvectors of a linear map is now presented. Throughout this section, it is assumed that $V$ is a finite dimensional inner product space of dimension $n$.

Definition

Given a linear map $T : V \to V$, a vector $v \in V$ is called an eigenvector of $T$ with respect to the eigenvalue $\lambda$ if

$$T v = \lambda v.$$

Note that the zero vector trivially satisfies this equation. It is implicitly assumed that this trivial eigenvector is excluded from the discussion.

Remark

It is noted that the eigenvalues can, in general, be complex, and the vector space is then typically taken to be a complex vector space. Since only a special class of linear maps which admit only real eigenvalues is considered in this section, the more general theory is not developed here.

The eigenvectors and eigenvalues of $T$ are readily computed by noting that the equation $T v = \lambda v$ can be written as $(T - \lambda I) v = 0$, where $I$ is the identity map on $V$. The condition that this equation admits non-trivial solutions immediately yields the following condition:

$$\det(T - \lambda I) = 0.$$

This is a polynomial equation of order $n$ in $\lambda$ that can be solved to obtain the eigenvalues $\lambda_1, \ldots, \lambda_n$ of $T$. Using these eigenvalues in the equations $(T - \lambda_i I) v_i = 0$, where $i = 1, \ldots, n$, and solving them results in the eigenvectors $v_i$ of $T$.
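In practice, eigenvalues and eigenvectors are computed numerically rather than by expanding the characteristic polynomial; the following sketch (assuming NumPy) does so and checks the defining equation:

```python
import numpy as np

T = np.random.rand(3, 3)
# np.linalg.eig returns the eigenvalues and the eigenvectors (as columns);
# for a general real matrix these may be complex.
lam, v = np.linalg.eig(T)
for i in range(3):
    # (T - lambda_i I) v_i = 0, i.e. T v_i = lambda_i v_i.
    assert np.allclose(T @ v[:, i], lam[i] * v[:, i])
```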

Symmetric and positive definite linear maps

Of special interest here are linear maps that are both symmetric and positive definite; thus, the discussion that follows focuses on linear maps $S \in \operatorname{Psym}(V)$. Given any $S \in \operatorname{Psym}(V)$, note that if $v_1, v_2$ are eigenvectors of $S$ corresponding to the eigenvalues $\lambda_1, \lambda_2$, respectively, it follows that

$$\lambda_1 (v_1 \cdot v_2) = S v_1 \cdot v_2 = v_1 \cdot S v_2 = \lambda_2 (v_1 \cdot v_2).$$

Thus, $(\lambda_1 - \lambda_2)(v_1 \cdot v_2) = 0$. This immediately shows that if $\lambda_1 \neq \lambda_2$, then $v_1 \cdot v_2 = 0$. In words, this expresses the fact that the eigenvectors of a symmetric and positive definite map corresponding to distinct eigenvalues are mutually orthogonal. Further, it follows from the positive definiteness of $S$ that

$$\lambda_i = \frac{v_i \cdot S v_i}{v_i \cdot v_i} > 0.$$

Thus, the eigenvalues of a symmetric and positive definite linear map are real and positive.

If $v_1, \ldots, v_n$ are unit eigenvectors of $S \in \operatorname{Psym}(V)$ corresponding to the eigenvalues $\lambda_1, \ldots, \lambda_n$, it can be shown that the eigenvectors of $S$ constitute an orthonormal basis of $V$. Further, it can be checked using direct substitution that $S$ admits the representation

$$S = \sum_{i=1}^{n} \lambda_i\, v_i \otimes v_i.$$

This equation, which expresses the linear map $S$ in terms of the basis of $V$ formed by the eigenvectors of $S$, is called the spectral representation of $S$.
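The spectral representation can be verified directly on a randomly generated symmetric positive definite matrix (a minimal sketch assuming NumPy):

```python
import numpy as np

A = np.random.rand(3, 3)
S = A @ A.T + 3.0 * np.eye(3)   # a symmetric positive definite matrix
lam, v = np.linalg.eigh(S)      # eigh handles symmetric matrices; the
                                # columns of v are orthonormal eigenvectors
# Spectral representation: S = sum_i lambda_i v_i (x) v_i.
S_rebuilt = sum(lam[i] * np.outer(v[:, i], v[:, i]) for i in range(3))
assert np.allclose(S, S_rebuilt)
```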

In the special case of symmetric and positive definite linear maps, the fact that the eigenvalues are positive can be used to define various functions of linear maps. Specifically, if $f$ is a function that takes a positive real number and returns a real number, it can be extended to $\operatorname{Psym}(V)$ as follows: for any $S \in \operatorname{Psym}(V)$, define

$$f(S) = \sum_{i=1}^{n} f(\lambda_i)\, v_i \otimes v_i.$$

As important examples of such functions, the logarithm of a symmetric and positive definite linear map is defined as

$$\log S = \sum_{i=1}^{n} (\log \lambda_i)\, v_i \otimes v_i.$$

Similarly, the square root of $S$ is defined as

$$\sqrt{S} = \sum_{i=1}^{n} \sqrt{\lambda_i}\, v_i \otimes v_i.$$

It can be checked with a simple calculation that $\sqrt{S}\,\sqrt{S} = S$, as expected.
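The following sketch (assuming NumPy; the helper `spd_function` is ours, purely illustrative) implements this eigenvalue-wise extension and checks that the square root and logarithm behave as expected:

```python
import numpy as np

def spd_function(S, f):
    # f(S) = sum_i f(lambda_i) v_i (x) v_i, for symmetric S
    # (positive definite whenever f requires positive arguments).
    lam, v = np.linalg.eigh(S)
    return (v * f(lam)) @ v.T   # equals sum_i f(lam_i) outer(v_i, v_i)

A = np.random.rand(3, 3)
S = A @ A.T + 3.0 * np.eye(3)   # symmetric positive definite
sqrt_S = spd_function(S, np.sqrt)
assert np.allclose(sqrt_S @ sqrt_S, S)              # sqrt(S) sqrt(S) = S
log_S = spd_function(S, np.log)
assert np.allclose(spd_function(log_S, np.exp), S)  # exp(log S) = S
```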

Invariants of a linear map

Given a linear map $T : V \to V$, the equation $\det(T - \lambda I) = 0$ is called the characteristic equation of $T$. When expanded, the characteristic equation takes the form

$$(-\lambda)^n + I_1 (-\lambda)^{n-1} + \cdots + I_{n-1} (-\lambda) + I_n = 0,$$

where the constants $I_1, \ldots, I_n$ are called the invariants of the linear map $T$. The reason for calling them invariants is that their values do not depend on the choice of any basis for $V$, and are hence invariant with respect to the choice of basis.

The Cayley-Hamilton theorem states that the linear map $T$ also satisfies the characteristic equation:

$$T^n - I_1 T^{n-1} + I_2 T^{n-2} - \cdots + (-1)^n I_n I = 0.$$

Here $T^k$ is to be understood as $T T \cdots T$ ($k$ factors).

It is of interest to consider the case when $V$ is a three dimensional vector space. In this case, the invariants of an invertible linear map $T$ can be shown to be as follows:

$$I_1 = \operatorname{tr} T, \qquad I_2 = \frac{1}{2}\left[ (\operatorname{tr} T)^2 - \operatorname{tr}(T^2) \right] = (\det T)\operatorname{tr}(T^{-1}), \qquad I_3 = \det T,$$

where the second expression for $I_2$ uses the invertibility of $T$. An elegant means to prove this is by applying the definition of the determinant to evaluate the determinant in the characteristic equation of $T$: for any $v_1, v_2, v_3 \in V$,

$$\det(T - \lambda I)\, \omega(v_1, v_2, v_3) = \omega\big((T - \lambda I) v_1, (T - \lambda I) v_2, (T - \lambda I) v_3\big),$$

and then collecting the coefficients of the powers of $\lambda$.
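Both the expanded characteristic polynomial and the Cayley-Hamilton theorem in three dimensions can be checked numerically (a minimal sketch assuming NumPy):

```python
import numpy as np

T = np.random.rand(3, 3)
I1 = np.trace(T)
I2 = 0.5 * (np.trace(T)**2 - np.trace(T @ T))
I3 = np.linalg.det(T)

# Characteristic polynomial: det(T - lam I) = -lam^3 + I1 lam^2 - I2 lam + I3.
lam = 0.37
assert np.isclose(np.linalg.det(T - lam * np.eye(3)),
                  -lam**3 + I1 * lam**2 - I2 * lam + I3)

# Cayley-Hamilton: T^3 - I1 T^2 + I2 T - I3 I = 0.
assert np.allclose(T @ T @ T - I1 * (T @ T) + I2 * T - I3 * np.eye(3),
                   np.zeros((3, 3)))
```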