Linear form

In mathematics, a linear form (also known as a linear functional,[1] a one-form, or a covector) is a linear map[nb 1] from a vector space to its field of scalars (often, the real numbers or the complex numbers).

If V is a vector space over a field k, the set of all linear functionals from V to k is itself a vector space over k with addition and scalar multiplication defined pointwise. This space is called the dual space of V, or sometimes the algebraic dual space, when a topological dual space is also considered. It is often denoted Hom(V, k),[2] or, when the field k is understood, $V^{\vee}$;[3] other notations are also used, such as $V'$,[4][5] $V^{\#}$ or $V^{*}$.[2] When vectors are represented by column vectors (as is common when a basis is fixed), then linear functionals are represented as row vectors, and their values on specific vectors are given by matrix products (with the row vector on the left).

Examples

The constant zero function, mapping every vector to zero, is trivially a linear functional. Every other linear functional (such as the ones below) is surjective (that is, its range is all of k).

  • Indexing into a vector: The second element of a three-vector is given by the one-form $[0, 1, 0].$ That is, the second element of $[x, y, z]$ is
     $[0, 1, 0] \cdot [x, y, z]^{\mathsf T} = y.$
  • Mean: The mean element of an $n$-vector is given by the one-form $\left[1/n, 1/n, \ldots, 1/n\right].$ That is,
     $\operatorname{mean}(x) = \left[1/n, 1/n, \ldots, 1/n\right] \cdot \left[x_1, \ldots, x_n\right]^{\mathsf T} = \frac{x_1 + \cdots + x_n}{n}.$
  • Sampling: Sampling with a kernel can be considered a one-form, where the one-form is the kernel shifted to the appropriate location.
  • Net present value of a net cash flow $R(t)$ is given by the one-form $w(t) = (1 + i)^{-t},$ where $i$ is the discount rate (see the numerical sketch after this list). That is,
     $\mathrm{NPV}(R(t)) = \langle w, R \rangle = \int_{t=0}^{\infty} \frac{R(t)}{(1 + i)^{t}}\, dt.$

Linear functionals in $\mathbb{R}^n$

Suppose that vectors in the real coordinate space $\mathbb{R}^n$ are represented as column vectors

$\mathbf{x} = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}.$

For each row vector $\mathbf{a} = \begin{bmatrix} a_1 & \cdots & a_n \end{bmatrix}$ there is a linear functional $f_{\mathbf{a}}$ defined by

$f_{\mathbf{a}}(\mathbf{x}) = a_1 x_1 + \cdots + a_n x_n,$
and each linear functional can be expressed in this form.

This can be interpreted as either the matrix product or the dot product of the row vector $\mathbf{a}$ and the column vector $\mathbf{x}$:

$f_{\mathbf{a}}(\mathbf{x}) = \mathbf{a} \cdot \mathbf{x} = \begin{bmatrix} a_1 & \cdots & a_n \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}.$
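The claim that every linear functional on $\mathbb{R}^n$ arises this way can be made concrete: evaluating the functional on the standard basis vectors recovers the representing row vector. A minimal Python sketch (the functional `f` below is an arbitrary example, not taken from the article):

```python
import numpy as np

def f(x):
    """An example linear functional on R^3: f(x) = 2*x1 - x2 + 5*x3."""
    return 2.0 * x[0] - 1.0 * x[1] + 5.0 * x[2]

# Recover the representing row vector a by evaluating f on the standard basis.
basis = np.eye(3)
a = np.array([f(e) for e in basis])            # a == [2, -1, 5]

# Then f(x) equals the matrix/dot product a . x for every x.
x = np.array([1.0, 4.0, -2.0])
assert np.isclose(f(x), a @ x)
```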

Trace of a square matrix

The trace $\operatorname{tr}(A)$ of a square matrix $A$ is the sum of all elements on its main diagonal. Matrices can be multiplied by scalars and two matrices of the same dimension can be added together; these operations make a vector space from the set of all $n \times n$ matrices. The trace is a linear functional on this space because $\operatorname{tr}(sA) = s \operatorname{tr}(A)$ and $\operatorname{tr}(A + B) = \operatorname{tr}(A) + \operatorname{tr}(B)$ for all scalars $s$ and all $n \times n$ matrices $A$ and $B.$
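A quick numerical check of the two linearity identities for the trace (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
s = 3.7

# tr(sA) = s tr(A)   and   tr(A + B) = tr(A) + tr(B)
assert np.isclose(np.trace(s * A), s * np.trace(A))
assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))
```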

(Definite) Integration

Linear functionals first appeared in functional analysis, the study of vector spaces of functions. A typical example of a linear functional is integration: the linear transformation defined by the Riemann integral

$I(f) = \int_a^b f(x)\, dx$

is a linear functional from the vector space $C[a, b]$ of continuous functions on the interval $[a, b]$ to the real numbers. The linearity of $I$ follows from the standard facts about the integral:

$I(f + g) = \int_a^b \left[f(x) + g(x)\right] dx = \int_a^b f(x)\, dx + \int_a^b g(x)\, dx = I(f) + I(g)$
$I(\alpha f) = \int_a^b \alpha f(x)\, dx = \alpha \int_a^b f(x)\, dx = \alpha I(f).$
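The same two identities can be verified numerically by replacing the Riemann integral with a simple quadrature rule. The sketch below (an approximation of $I$, not the exact functional) uses the composite trapezoidal rule on $[0, \pi]$:

```python
import numpy as np

a, b = 0.0, np.pi
xs = np.linspace(a, b, 1001)

def I(f):
    """Approximate the Riemann integral of f over [a, b] by the trapezoidal rule."""
    y = f(xs)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(xs)))

f, g, alpha = np.sin, np.cos, 2.5

# I(f + g) = I(f) + I(g)   and   I(alpha * f) = alpha * I(f)
assert np.isclose(I(lambda x: f(x) + g(x)), I(f) + I(g))
assert np.isclose(I(lambda x: alpha * f(x)), alpha * I(f))
```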

Evaluation

Let $P_n$ denote the vector space of real-valued polynomial functions of degree $\leq n$ defined on an interval $[a, b].$ If $x \in [a, b],$ then let $\operatorname{ev}_x : P_n \to \mathbb{R}$ be the evaluation functional

$\operatorname{ev}_x f = f(x).$

The mapping $f \mapsto f(x)$ is linear since

$(f + g)(x) = f(x) + g(x) \quad \text{and} \quad (\alpha f)(x) = \alpha f(x).$

If $x_0, \ldots, x_n$ are $n + 1$ distinct points in $[a, b],$ then the evaluation functionals $\operatorname{ev}_{x_i},$ $i = 0, 1, \ldots, n,$ form a basis of the dual space of $P_n$ (Lax (1996) proves this last fact using Lagrange interpolation).
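The connection with Lagrange interpolation can be seen concretely: the Lagrange basis polynomial $\ell_j$ equals 1 at $x_j$ and 0 at the other nodes, so $\operatorname{ev}_{x_i}(\ell_j) = \delta_{ij}$, i.e. the evaluation functionals form the dual basis of the Lagrange basis. A short illustrative Python sketch (the nodes below are an arbitrary choice):

```python
import numpy as np

nodes = np.array([-1.0, 0.0, 0.5, 2.0])        # n + 1 = 4 distinct points
n = len(nodes) - 1

def lagrange_basis(j, x):
    """Evaluate the j-th Lagrange basis polynomial for the given nodes at x."""
    others = np.delete(nodes, j)
    return np.prod((x - others) / (nodes[j] - others))

# ev_{x_i}(l_j) = delta_{ij}: the evaluation matrix of the Lagrange basis is the identity.
E = np.array([[lagrange_basis(j, xi) for j in range(n + 1)] for xi in nodes])
assert np.allclose(E, np.eye(n + 1))

# Hence a polynomial of degree <= n is determined by its values at the nodes.
p = np.polynomial.Polynomial([2.0, -1.0, 0.0, 3.0])   # p(x) = 2 - x + 3x^3
values = p(nodes)
reconstructed = lambda x: sum(values[j] * lagrange_basis(j, x) for j in range(n + 1))
assert np.isclose(reconstructed(1.3), p(1.3))
```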

Non-example

A function $f$ having the equation of a line $f(x) = a + rx$ with $a \neq 0$ (for example, $f(x) = 1 + 2x$) is not a linear functional on $\mathbb{R},$ since it is not linear.[nb 2] It is, however, affine-linear.

Visualization

Geometric interpretation of a 1-form α as a stack of hyperplanes of constant value, each corresponding to those vectors that α maps to a given scalar value shown next to it along with the "sense" of increase. The zero plane is through the origin.

In finite dimensions, a linear functional can be visualized in terms of its level sets, the sets of vectors which map to a given value. In three dimensions, the level sets of a linear functional are a family of mutually parallel planes; in higher dimensions, they are parallel hyperplanes. This method of visualizing linear functionals is sometimes introduced in general relativity texts, such as Gravitation by Misner, Thorne & Wheeler (1973).

Applications

Application to quadrature

If $x_0, \ldots, x_n$ are $n + 1$ distinct points in $[a, b],$ then the linear functionals $\operatorname{ev}_{x_i}$ defined above form a basis of the dual space of $P_n,$ the space of polynomials of degree $\leq n.$ The integration functional $I$ is also a linear functional on $P_n,$ and so can be expressed as a linear combination of these basis elements. In symbols, there are coefficients $a_0, a_1, \ldots, a_n$ for which

$I(f) = a_0 f(x_0) + a_1 f(x_1) + \cdots + a_n f(x_n)$

for all $f \in P_n.$ This forms the foundation of the theory of numerical quadrature.[6]
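A minimal sketch of how such coefficients can be computed in practice (an illustration, not a prescribed method; the interval and nodes below are arbitrary): requiring $I(f) = \sum_i a_i f(x_i)$ to hold for the monomials $1, x, \ldots, x^n$ gives a linear system with a Vandermonde matrix, and the resulting weights integrate every polynomial of degree $\leq n$ exactly.

```python
import numpy as np

a_lim, b_lim = 0.0, 1.0
nodes = np.array([0.0, 0.25, 0.6, 1.0])        # n + 1 = 4 distinct points in [a, b]
n = len(nodes) - 1

# Exact integrals of the monomials x^k over [a, b] (the moments).
moments = np.array([(b_lim**(k + 1) - a_lim**(k + 1)) / (k + 1) for k in range(n + 1)])

# Solve V^T a = moments, where V[i, k] = nodes[i]**k, for the quadrature weights a_i.
V = np.vander(nodes, n + 1, increasing=True)
weights = np.linalg.solve(V.T, moments)

# The rule reproduces the integral of every polynomial of degree <= n.
p = np.polynomial.Polynomial([1.0, -2.0, 0.5, 4.0])   # 1 - 2x + 0.5x^2 + 4x^3
exact = p.integ()(b_lim) - p.integ()(a_lim)
assert np.isclose(weights @ p(nodes), exact)
```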

In quantum mechanics

Linear functionals are particularly important in quantum mechanics. Quantum mechanical systems are represented by Hilbert spaces, which are anti-isomorphic to their own dual spaces. A state of a quantum mechanical system can be identified with a linear functional. For more information see bra–ket notation.

Distributions

In the theory of generalized functions, certain kinds of generalized functions called distributions can be realized as linear functionals on spaces of test functions.

Dual vectors and bilinear forms

Linear functionals (1-forms) α, β and their sum σ and vectors u, v, w, in 3d Euclidean space. The number of (1-form) hyperplanes intersected by a vector equals the inner product.[7]

Every non-degenerate bilinear form on a finite-dimensional vector space $V$ induces an isomorphism $V \to V^* : v \mapsto v^*$ such that

$v^*(w) := \langle v, w \rangle \quad \text{for all } w \in V,$

where the bilinear form on $V$ is denoted $\langle \cdot, \cdot \rangle$ (for instance, in Euclidean space, $\langle v, w \rangle = v \cdot w$ is the dot product of $v$ and $w$).

The inverse isomorphism is $V^* \to V : v^* \mapsto v,$ where $v$ is the unique element of $V$ such that

$\langle v, w \rangle = v^*(w)$

for all $w \in V.$

The above defined vector $v^* \in V^*$ is said to be the dual vector of $v \in V.$

In an infinite dimensional Hilbert space, analogous results hold by the Riesz representation theorem. There is a mapping $V \to V^*$ from $V$ into its continuous dual space $V^*.$

Relationship to bases

Basis of the dual space

Let the vector space $V$ have a basis $\mathbf{e}_1, \mathbf{e}_2, \dots, \mathbf{e}_n,$ not necessarily orthogonal. Then the dual space $V^*$ has a basis $\tilde{\omega}^1, \tilde{\omega}^2, \dots, \tilde{\omega}^n$ called the dual basis defined by the special property that

$\tilde{\omega}^i(\mathbf{e}_j) = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j. \end{cases}$

Or, more succinctly,

$\tilde{\omega}^i(\mathbf{e}_j) = \delta_{ij},$

where δ is the Kronecker delta. Here the superscripts of the basis functionals are not exponents but are instead contravariant indices.

A linear functional $\tilde{u}$ belonging to the dual space $V^*$ can be expressed as a linear combination of basis functionals, with coefficients ("components") $u_i,$

$\tilde{u} = \sum_{i=1}^{n} u_i \, \tilde{\omega}^i.$

Then, applying the functional $\tilde{u}$ to a basis vector $\mathbf{e}_j$ yields

$\tilde{u}(\mathbf{e}_j) = \sum_{i=1}^{n} \left(u_i \, \tilde{\omega}^i\right) \mathbf{e}_j = \sum_i u_i \left[\tilde{\omega}^i(\mathbf{e}_j)\right]$

due to linearity of scalar multiples of functionals and pointwise linearity of sums of functionals. Then

$\tilde{u}(\mathbf{e}_j) = \sum_i u_i \left[\tilde{\omega}^i(\mathbf{e}_j)\right] = \sum_i u_i \, \delta_{ij} = u_j.$

So each component of a linear functional can be extracted by applying the functional to the corresponding basis vector.
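Concretely, if the basis vectors are stored as the columns of an invertible matrix $E$, the dual basis functionals are the rows of $E^{-1}$, since $E^{-1}E$ has entries $\delta_{ij}$. The following Python sketch illustrates this and the component-extraction rule $u_j = \tilde{u}(\mathbf{e}_j)$ (the basis chosen is arbitrary):

```python
import numpy as np

# A non-orthogonal basis of R^3, stored as the columns of E.
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# The dual basis functionals are the rows of E^{-1}:
# row i of E_inv applied to column j of E gives the Kronecker delta.
E_inv = np.linalg.inv(E)
assert np.allclose(E_inv @ E, np.eye(3))

# Components of a functional u~ are obtained by applying it to the basis vectors,
# and u~ is recovered as the combination of dual-basis rows with those components.
u_row = np.array([2.0, -1.0, 3.0])                           # u~ as a row vector on R^3
components = np.array([u_row @ E[:, j] for j in range(3)])   # u_j = u~(e_j)
assert np.allclose(components @ E_inv, u_row)
```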

The dual basis and inner product

When the space $V$ carries an inner product, then it is possible to write explicitly a formula for the dual basis of a given basis. Let $V$ have (not necessarily orthogonal) basis $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3.$ In three dimensions ($n = 3$), the dual basis can be written explicitly

$\tilde{\omega}^i(\mathbf{v}) = \frac{1}{2} \left\langle \frac{\sum_{j=1}^{3} \sum_{k=1}^{3} \varepsilon^{ijk} \, (\mathbf{e}_j \times \mathbf{e}_k)}{\mathbf{e}_1 \cdot (\mathbf{e}_2 \times \mathbf{e}_3)}, \mathbf{v} \right\rangle$

for $i = 1, 2, 3,$ where $\varepsilon$ is the Levi-Civita symbol and $\langle \cdot, \cdot \rangle$ the inner product (or dot product) on $V.$

In higher dimensions, this generalizes as follows:

$\tilde{\omega}^i(\mathbf{v}) = \left\langle \frac{\sum_{1 \le i_2 < i_3 < \cdots < i_n \le n} \varepsilon^{i i_2 \dots i_n} \, \star\!\left(\mathbf{e}_{i_2} \wedge \cdots \wedge \mathbf{e}_{i_n}\right)}{\star\!\left(\mathbf{e}_1 \wedge \cdots \wedge \mathbf{e}_n\right)}, \mathbf{v} \right\rangle,$

where $\star$ is the Hodge star operator.
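For $n = 3$ the double sum over the Levi-Civita symbol produces each cross product twice, so the formula reduces to the familiar reciprocal-basis expressions $\tilde{\omega}^1 = (\mathbf{e}_2 \times \mathbf{e}_3)/(\mathbf{e}_1 \cdot \mathbf{e}_2 \times \mathbf{e}_3)$ and its cyclic permutations. A numerical sketch of that reduced form (illustrative only, for an arbitrary basis):

```python
import numpy as np

# A non-orthogonal basis of R^3.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([1.0, 1.0, 0.0])
e3 = np.array([0.0, 1.0, 2.0])

vol = np.dot(e1, np.cross(e2, e3))      # scalar triple product e1 . (e2 x e3)

# Dual basis vectors: pairing <w^i, .> with e_j gives the Kronecker delta.
w1 = np.cross(e2, e3) / vol
w2 = np.cross(e3, e1) / vol
w3 = np.cross(e1, e2) / vol

G = np.array([[np.dot(w, e) for e in (e1, e2, e3)] for w in (w1, w2, w3)])
assert np.allclose(G, np.eye(3))
```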

Over a ring

Modules over a ring are generalizations of vector spaces, which remove the restriction that coefficients belong to a field. Given a module $M$ over a ring $R,$ a linear form on $M$ is a linear map from $M$ to $R,$ where the latter is considered as a module over itself. The space of linear forms is denoted $\operatorname{Hom}_R(M, R),$ whether $R$ is a field or not. It is a right $R$-module if $M$ is a left $R$-module.

The existence of "enough" linear forms on a module is equivalent to projectivity.[8]

Dual Basis Lemma — An $R$-module $M$ is projective if and only if there exists a subset $A \subseteq M$ and linear forms $\{f_a : a \in A\}$ such that, for every $x \in M,$ only finitely many $f_a(x)$ are nonzero, and

$x = \sum_{a \in A} f_a(x)\, a.$

Change of field

Suppose that $X$ is a vector space over $\mathbb{C}.$ Restricting scalar multiplication to $\mathbb{R}$ gives rise to a real vector space[9] $X_{\mathbb{R}}$ called the realification of $X.$ Any vector space $X$ over $\mathbb{C}$ is also a vector space over $\mathbb{R},$ endowed with a complex structure; that is, there exists a real vector subspace $X_{\mathbb{R}}$ such that we can (formally) write $X = X_{\mathbb{R}} \oplus X_{\mathbb{R}} i$ as $\mathbb{R}$-vector spaces.

Real versus complex linear functionals

Every linear functional on $X$ is complex-valued while every linear functional on $X_{\mathbb{R}}$ is real-valued. If $\dim X \neq 0$ then a linear functional on either one of $X$ or $X_{\mathbb{R}}$ is non-trivial (meaning not identically $0$) if and only if it is surjective (because if $\varphi(x) \neq 0$ then for any scalar $s,$ $\varphi\left((s / \varphi(x))\, x\right) = s$), where the image of a linear functional on $X$ is $\mathbb{C}$ while the image of a linear functional on $X_{\mathbb{R}}$ is $\mathbb{R}.$ Consequently, the only function on $X$ that is both a linear functional on $X$ and a linear functional on $X_{\mathbb{R}}$ is the trivial functional; in other words, $X^{\#} \cap X_{\mathbb{R}}^{\#} = \{0\},$ where $\cdot^{\#}$ denotes the space's algebraic dual space. However, every $\mathbb{C}$-linear functional on $X$ is an $\mathbb{R}$-linear operator (meaning that it is additive and homogeneous over $\mathbb{R}$), but unless it is identically $0,$ it is not an $\mathbb{R}$-linear functional on $X$ because its range (which is $\mathbb{C}$) is 2-dimensional over $\mathbb{R}.$ Conversely, a non-zero $\mathbb{R}$-linear functional has range too small to be a $\mathbb{C}$-linear functional as well.

Real and imaginary parts

If $\varphi \in X^{\#}$ then denote its real part by $\varphi_{\mathbb{R}} := \operatorname{Re} \varphi$ and its imaginary part by $\varphi_i := \operatorname{Im} \varphi.$ Then $\varphi_{\mathbb{R}} : X \to \mathbb{R}$ and $\varphi_i : X \to \mathbb{R}$ are linear functionals on $X_{\mathbb{R}}$ and $\varphi = \varphi_{\mathbb{R}} + i \varphi_i.$ The fact that $z = \operatorname{Re} z - i \operatorname{Re}(iz) = \operatorname{Im}(iz) + i \operatorname{Im} z$ for all $z \in \mathbb{C}$ implies that for all $x \in X,$[9]

$\varphi(x) = \varphi_{\mathbb{R}}(x) - i \varphi_{\mathbb{R}}(ix) = \varphi_i(ix) + i \varphi_i(x)$

and consequently, that $\varphi_i(x) = -\varphi_{\mathbb{R}}(ix)$ and $\varphi_{\mathbb{R}}(x) = \varphi_i(ix).$[10]
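These identities are easy to verify numerically for a concrete $\mathbb{C}$-linear functional on $\mathbb{C}^4$ (a sketch; the functional `phi` below is an arbitrary example, not taken from the article):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal(4) + 1j * rng.standard_normal(4)

def phi(x):
    """A C-linear functional on C^4: phi(x) = sum_k a_k x_k."""
    return a @ x

phi_R = lambda x: phi(x).real           # real part: an R-linear functional on X_R
phi_i = lambda x: phi(x).imag           # imaginary part

x = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# phi(x) = phi_R(x) - i*phi_R(ix) = phi_i(ix) + i*phi_i(x)
assert np.isclose(phi(x), phi_R(x) - 1j * phi_R(1j * x))
assert np.isclose(phi(x), phi_i(1j * x) + 1j * phi_i(x))
# and consequently phi_i(x) = -phi_R(ix) and phi_R(x) = phi_i(ix).
assert np.isclose(phi_i(x), -phi_R(1j * x))
assert np.isclose(phi_R(x), phi_i(1j * x))
```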

The assignment $\varphi \mapsto \varphi_{\mathbb{R}}$ defines a bijective[10] $\mathbb{R}$-linear operator $X^{\#} \to X_{\mathbb{R}}^{\#}$ whose inverse is the map $L_{\bullet} : X_{\mathbb{R}}^{\#} \to X^{\#}$ defined by the assignment $g \mapsto L_g$ that sends $g : X_{\mathbb{R}} \to \mathbb{R}$ to the linear functional $L_g : X \to \mathbb{C}$ defined by

$L_g(x) := g(x) - i g(ix) \quad \text{for all } x \in X.$

The real part of $L_g$ is $g$ and the bijection $L_{\bullet} : X_{\mathbb{R}}^{\#} \to X^{\#}$ is an $\mathbb{R}$-linear operator, meaning that $L_{g+h} = L_g + L_h$ and $L_{rg} = r L_g$ for all $r \in \mathbb{R}$ and $g, h \in X_{\mathbb{R}}^{\#}.$[10] Similarly for the imaginary part, the assignment $\varphi \mapsto \varphi_i$ induces an $\mathbb{R}$-linear bijection $X^{\#} \to X_{\mathbb{R}}^{\#}$ whose inverse is the map $X_{\mathbb{R}}^{\#} \to X^{\#}$ defined by sending $I \in X_{\mathbb{R}}^{\#}$ to the linear functional on $X$ defined by $x \mapsto I(ix) + i I(x).$

This relationship was discovered by Henry Löwig in 1934 (although it is usually credited to F. Murray),[11] and can be generalized to arbitrary finite extensions of a field in the natural way. It has many important consequences, some of which will now be described.

Properties and relationships

Suppose $\varphi : X \to \mathbb{C}$ is a linear functional on $X$ with real part $\varphi_{\mathbb{R}} := \operatorname{Re} \varphi$ and imaginary part $\varphi_i := \operatorname{Im} \varphi.$

Then $\varphi = 0$ if and only if $\varphi_{\mathbb{R}} = 0,$ if and only if $\varphi_i = 0.$

Assume that $X$ is a topological vector space. Then $\varphi$ is continuous if and only if its real part $\varphi_{\mathbb{R}}$ is continuous, if and only if $\varphi$'s imaginary part $\varphi_i$ is continuous. That is, either all three of $\varphi,$ $\varphi_{\mathbb{R}},$ and $\varphi_i$ are continuous or none are continuous. This remains true if the word "continuous" is replaced with the word "bounded". In particular, $\varphi \in X'$ if and only if $\varphi_{\mathbb{R}} \in X_{\mathbb{R}}',$ where the prime denotes the space's continuous dual space.[9]

Let $B \subseteq X.$ If $u B \subseteq B$ for all scalars $u \in \mathbb{C}$ of unit length (meaning $|u| = 1$) then[proof 1][12]

$\sup_{b \in B} |\varphi(b)| = \sup_{b \in B} \left|\varphi_{\mathbb{R}}(b)\right|.$

Similarly, if $\varphi_i := \operatorname{Im} \varphi$ denotes the imaginary part of $\varphi$ then $i B \subseteq B$ implies

$\sup_{b \in B} \left|\varphi_{\mathbb{R}}(b)\right| = \sup_{b \in B} \left|\varphi_i(b)\right|.$

If $X$ is a normed space with norm $\|\cdot\|$ and if $B = \{x \in X : \|x\| \leq 1\}$ is the closed unit ball then the supremums above are the operator norms (defined in the usual way) of $\varphi,$ $\varphi_{\mathbb{R}},$ and $\varphi_i,$ so that[12]

$\|\varphi\| = \left\|\varphi_{\mathbb{R}}\right\| = \left\|\varphi_i\right\|.$
This conclusion extends to the analogous statement for polars of balanced sets in general topological vector spaces.
  • If $H$ is a complex Hilbert space with a (complex) inner product $\langle \cdot, \cdot \rangle$ that is antilinear in its first coordinate (and linear in the second) then $H_{\mathbb{R}}$ becomes a real Hilbert space when endowed with the real part of $\langle \cdot, \cdot \rangle.$ Explicitly, this real inner product on $H_{\mathbb{R}}$ is defined by $\langle x, y \rangle_{\mathbb{R}} := \operatorname{Re} \langle x, y \rangle$ for all $x, y \in H$ and it induces the same norm on $H$ as $\langle \cdot, \cdot \rangle$ because $\sqrt{\langle x, x \rangle_{\mathbb{R}}} = \sqrt{\langle x, x \rangle}$ for all vectors $x.$ Applying the Riesz representation theorem to $\varphi \in H'$ (resp. to $\varphi_{\mathbb{R}} \in H_{\mathbb{R}}'$) guarantees the existence of a unique vector $f_{\varphi} \in H$ (resp. $f_{\varphi_{\mathbb{R}}} \in H_{\mathbb{R}}$) such that $\varphi(x) = \left\langle f_{\varphi}, x \right\rangle$ (resp. $\varphi_{\mathbb{R}}(x) = \left\langle f_{\varphi_{\mathbb{R}}}, x \right\rangle_{\mathbb{R}}$) for all vectors $x.$ The theorem also guarantees that $\left\|f_{\varphi}\right\| = \|\varphi\|_{H'}$ and $\left\|f_{\varphi_{\mathbb{R}}}\right\| = \left\|\varphi_{\mathbb{R}}\right\|_{H_{\mathbb{R}}'}.$ It is readily verified that $f_{\varphi} = f_{\varphi_{\mathbb{R}}}.$ Now $\left\|f_{\varphi}\right\| = \left\|f_{\varphi_{\mathbb{R}}}\right\|$ and the previous equalities imply that $\|\varphi\|_{H'} = \left\|\varphi_{\mathbb{R}}\right\|_{H_{\mathbb{R}}'},$ which is the same conclusion that was reached above.

In infinite dimensions

Below, all vector spaces are over either the real numbers $\mathbb{R}$ or the complex numbers $\mathbb{C}.$

If   is a topological vector space, the space of continuous linear functionals — the continuous dual — is often simply called the dual space. If   is a Banach space, then so is its (continuous) dual. To distinguish the ordinary dual space from the continuous dual space, the former is sometimes called the algebraic dual space. In finite dimensions, every linear functional is continuous, so the continuous dual is the same as the algebraic dual, but in infinite dimensions the continuous dual is a proper subspace of the algebraic dual.

A linear functional $f$ on a (not necessarily locally convex) topological vector space $X$ is continuous if and only if there exists a continuous seminorm $p$ on $X$ such that $|f| \leq p.$[13]

Characterizing closed subspaces

Continuous linear functionals have nice properties for analysis: a linear functional is continuous if and only if its kernel is closed,[14] and a non-trivial continuous linear functional is an open map, even if the (topological) vector space is not complete.[15]

Hyperplanes and maximal subspaces

A vector subspace $M$ of $X$ is called maximal if $M \subsetneq X$ (meaning $M \subseteq X$ and $M \neq X$) and there does not exist a vector subspace $N$ of $X$ such that $M \subsetneq N \subsetneq X.$ A vector subspace $M$ of $X$ is maximal if and only if it is the kernel of some non-trivial linear functional on $X$ (that is, $M = \ker f$ for some linear functional $f$ on $X$ that is not identically 0). An affine hyperplane in $X$ is a translate of a maximal vector subspace. By linearity, a subset $H$ of $X$ is an affine hyperplane if and only if there exists some non-trivial linear functional $f$ on $X$ such that $H = f^{-1}(1) = \{x \in X : f(x) = 1\}.$[11] If $f$ is a linear functional and $s \neq 0$ is a scalar then $f^{-1}(s) = s \left[f^{-1}(1)\right] = \left(\tfrac{1}{s} f\right)^{-1}(1).$ This equality can be used to relate different level sets of $f.$ Moreover, if $f \neq 0$ then the kernel of $f$ can be reconstructed from the affine hyperplane $H := f^{-1}(1)$ by $\ker f = H - H.$

Relationships between multiple linear functionals

Any two linear functionals with the same kernel are proportional (i.e. scalar multiples of each other). This fact can be generalized to the following theorem.

Theorem[16][17] — If $f, g_1, \ldots, g_n$ are linear functionals on $X,$ then the following are equivalent:

  1. $f$ can be written as a linear combination of $g_1, \ldots, g_n$; that is, there exist scalars $s_1, \ldots, s_n$ such that $f = s_1 g_1 + \cdots + s_n g_n$;
  2. $\bigcap_{i=1}^{n} \ker g_i \subseteq \ker f$;
  3. there exists a real number $r \geq 0$ such that $|f(x)| \leq r \max\{|g_1(x)|, \ldots, |g_n(x)|\}$ for all $x \in X.$

If $f$ is a non-trivial linear functional on $X$ with kernel $N,$ $x \in X$ satisfies $f(x) = 1,$ and $U$ is a balanced subset of $X,$ then $N \cap (x + U) = \varnothing$ if and only if $|f(u)| < 1$ for all $u \in U.$[15]

Hahn–Banach theorem

Any (algebraic) linear functional on a vector subspace can be extended to the whole space; for example, the evaluation functionals described above can be extended to the vector space of polynomials on all of $\mathbb{R}.$ However, this extension cannot always be done while keeping the linear functional continuous. The Hahn–Banach family of theorems gives conditions under which this extension can be done. For example,

Hahn–Banach dominated extension theorem[18] (Rudin 1991, Th. 3.2) — If $p : X \to \mathbb{R}$ is a sublinear function, and $f : M \to \mathbb{R}$ is a linear functional on a linear subspace $M \subseteq X$ which is dominated by $p$ on $M,$ then there exists a linear extension $F : X \to \mathbb{R}$ of $f$ to the whole space $X$ that is dominated by $p,$ i.e., there exists a linear functional $F$ such that

$F(m) = f(m)$

for all $m \in M,$ and

$F(x) \leq p(x)$

for all $x \in X.$

Equicontinuity of families of linear functionals

Let $X$ be a topological vector space (TVS) with continuous dual space $X'.$

For any subset $H$ of $X',$ the following are equivalent:[19]

  1. H is equicontinuous;
  2. H is contained in the polar of some neighborhood of the origin in X;
  3. the (pre)polar of H is a neighborhood of the origin in X.

If $H$ is an equicontinuous subset of $X',$ then the following sets are also equicontinuous: the weak-* closure, the balanced hull, the convex hull, and the convex balanced hull.[19] Moreover, Alaoglu's theorem implies that the weak-* closure of an equicontinuous subset of $X'$ is weak-* compact (and thus that every equicontinuous subset is weak-* relatively compact).[20][19]

Notes

Footnotes

  1. ^ In some texts the roles are reversed and vectors are defined as linear maps from covectors to scalars.
  2. ^ For instance, $f(1 + 1) = a + 2r \neq 2a + 2r = f(1) + f(1).$

Proofs

  1. ^ It is true if $B = \varnothing$ so assume otherwise. Since $\left|\operatorname{Re} z\right| \leq |z|$ for all scalars $z,$ it follows that $\sup_{b \in B} \left|\varphi_{\mathbb{R}}(b)\right| \leq \sup_{b \in B} |\varphi(b)|.$ If $b \in B$ then let $r_b \geq 0$ and $u_b$ be such that $\left|u_b\right| = 1$ and $\varphi(b) = r_b u_b,$ where if $r_b = 0$ then take $u_b := 1.$ Then $|\varphi(b)| = r_b = \varphi\left(\tfrac{1}{u_b} b\right)$ and because $\varphi\left(\tfrac{1}{u_b} b\right) = r_b$ is a real number, $\varphi_{\mathbb{R}}\left(\tfrac{1}{u_b} b\right) = \varphi\left(\tfrac{1}{u_b} b\right).$ By assumption $\tfrac{1}{u_b} b \in B,$ so $|\varphi(b)| = r_b \leq \sup_{b \in B} \left|\varphi_{\mathbb{R}}(b)\right|.$ Since $b \in B$ was arbitrary, it follows that $\sup_{b \in B} |\varphi(b)| \leq \sup_{b \in B} \left|\varphi_{\mathbb{R}}(b)\right|.$ $\blacksquare$

References

  1. ^ Axler (2015) p. 101, §3.92
  2. ^ a b Tu (2011) p. 19, §3.1
  3. ^ Katznelson & Katznelson (2008) p. 37, §2.1.3
  4. ^ Axler (2015) p. 101, §3.94
  5. ^ Halmos (1974) p. 20, §13
  6. ^ Lax 1996
  7. ^ Misner, Thorne & Wheeler (1973) p. 57
  8. ^ Clark, Pete L. Commutative Algebra (PDF). Unpublished. Lemma 3.12.
  9. ^ a b c Rudin 1991, pp. 57.
  10. ^ a b c Narici & Beckenstein 2011, pp. 9–11.
  11. ^ a b Narici & Beckenstein 2011, pp. 10–11.
  12. ^ a b Narici & Beckenstein 2011, pp. 126–128.
  13. ^ Narici & Beckenstein 2011, p. 126.
  14. ^ Rudin 1991, Theorem 1.18
  15. ^ a b Narici & Beckenstein 2011, p. 128.
  16. ^ Rudin 1991, pp. 63–64.
  17. ^ Narici & Beckenstein 2011, pp. 1–18.
  18. ^ Narici & Beckenstein 2011, pp. 177–220.
  19. ^ a b c Narici & Beckenstein 2011, pp. 225–273.
  20. ^ Schaefer & Wolff 1999, Corollary 4.3.

Bibliography