P.1.4. Theorems on the linear dependence and linear independence of vectors.

Theorem 1. (On the linear independence of orthogonal vectors). Let x₁, x₂, …, xₙ be a system of nonzero pairwise orthogonal vectors of a Euclidean space. Then this system of vectors is linearly independent.

Proof. Form a linear combination ∑λᵢxᵢ = 0 and consider its scalar product with xⱼ: (xⱼ, ∑λᵢxᵢ) = λⱼ‖xⱼ‖² = 0; but ‖xⱼ‖² ≠ 0 ⇒ λⱼ = 0 for every j.
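As a quick numerical illustration of the theorem (numpy assumed; the three vectors are arbitrary sample data), pairwise orthogonal nonzero vectors always have full rank:

```python
import numpy as np

# Three nonzero, pairwise orthogonal vectors in R^3 (arbitrary sample data).
x1 = np.array([1.0, 1.0, 0.0])
x2 = np.array([1.0, -1.0, 0.0])
x3 = np.array([0.0, 0.0, 2.0])
X = np.column_stack([x1, x2, x3])

# Pairwise orthogonality: the Gram matrix X.T @ X is diagonal.
print(X.T @ X)

# Linear independence: the rank equals the number of vectors.
print(np.linalg.matrix_rank(X))  # 3
```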

Definition 1. A system of vectors e₁, e₂, … for which (eᵢ, eⱼ) = δᵢⱼ (the Kronecker symbol) is called orthonormal (an ONS).

Definition 2. For an arbitrary element x of an arbitrary infinite-dimensional Euclidean space and an arbitrary orthonormal system of elements e₁, e₂, …, the Fourier series of the element x with respect to this system is the formally composed infinite sum (series) ∑λᵢeᵢ, in which the real numbers λᵢ = (x, eᵢ) are called the Fourier coefficients of the element x with respect to the system.

A comment. (Naturally, the question arises about the convergence of this series. To study this issue, we fix an arbitrary number n and find out what distinguishes the nth partial sum of the Fourier series from any other linear combination of the first n elements of the orthonormal system.)

Theorem 2. For any fixed number n, among all sums of the form c₁e₁ + … + cₙeₙ, the n-th partial sum λ₁e₁ + … + λₙeₙ of the Fourier series of the element x has the smallest deviation from the element x in the norm of the given Euclidean space.

Proof. Taking into account the orthonormality of the system and the definition of the Fourier coefficients λᵢ = (x, eᵢ), we can write

‖x − (c₁e₁ + … + cₙeₙ)‖² = (x, x) − 2(c₁λ₁ + … + cₙλₙ) + (c₁² + … + cₙ²) = ‖x‖² + (c₁ − λ₁)² + … + (cₙ − λₙ)² − (λ₁² + … + λₙ²).

The minimum of this expression is achieved at cᵢ = λᵢ, since in this case the non-negative sum (c₁ − λ₁)² + … + (cₙ − λₙ)² vanishes, and the remaining terms do not depend on the cᵢ.
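A finite-dimensional sketch of this minimization property (numpy assumed; dimensions and data are arbitrary): take an orthonormal system in Rᵐ as the columns of a Q factor, and compare the deviation for the Fourier coefficients against any perturbed choice of coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

m, n = 8, 3
# Columns of Q form an orthonormal system e_1, ..., e_n in R^m.
Q, _ = np.linalg.qr(rng.standard_normal((m, n)))
x = rng.standard_normal(m)

# Fourier coefficients lambda_i = (x, e_i).
lam = Q.T @ x

# Deviation of the n-th partial Fourier sum from x ...
fourier_error = np.linalg.norm(x - Q @ lam)
# ... versus the deviation for any other coefficients c_i.
other_error = np.linalg.norm(x - Q @ (lam + rng.standard_normal(n)))

print(fourier_error <= other_error)  # True: the Fourier sum deviates least
```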

Example. Consider the trigonometric system

1/√(2π), cos x/√π, sin x/√π, cos 2x/√π, sin 2x/√π, …

in the space of all Riemann-integrable functions f(x) on the segment [−π, π] with the scalar product (f, g) = ∫₋π^π f(x)g(x) dx. It is easy to check that this is an ONS, and then the Fourier series of the function f(x) has the form ∑λᵢeᵢ, where λᵢ = (f, eᵢ).

A comment. (The trigonometric Fourier series is usually written in the form

f(x) ~ a₀/2 + ∑ₖ₌₁^∞ (aₖ cos kx + bₖ sin kx).

Then aₖ = (1/π)∫₋π^π f(x) cos kx dx and bₖ = (1/π)∫₋π^π f(x) sin kx dx.)
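A small sketch of computing the trigonometric coefficients aₖ, bₖ numerically (numpy assumed; the test function f(x) = x² and the midpoint quadrature are arbitrary choices for illustration):

```python
import numpy as np

N = 100000
h = 2 * np.pi / N
xs = -np.pi + (np.arange(N) + 0.5) * h   # midpoint grid on [-pi, pi]

f = lambda x: x**2                       # arbitrary test function

def a(k):
    return h * np.sum(f(xs) * np.cos(k * xs)) / np.pi

def b(k):
    return h * np.sum(f(xs) * np.sin(k * xs)) / np.pi

# For f(x) = x^2 the exact values are a_0 = 2*pi^2/3, a_k = 4*(-1)^k/k^2, b_k = 0.
print(a(0), a(1), a(2), b(1))
```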

An arbitrary ONS in an infinite-dimensional Euclidean space, without additional assumptions, is generally speaking not a basis of this space. On an intuitive level, without giving strict definitions, we will describe the essence of the matter. In an arbitrary infinite-dimensional Euclidean space E, consider an ONS e₁, e₂, …, where (eᵢ, eⱼ) = δᵢⱼ is the Kronecker symbol. Let M be a subspace of the Euclidean space, and M⊥ the subspace orthogonal to M, such that E = M + M⊥. The projection of the vector x ∈ E onto the subspace M is the vector x̂ ∈ M, where x̂ = ∑αₖeₖ.


We will look for those values of the expansion coefficients αₖ for which the squared residual h² = ‖x − x̂‖² is minimal:

h² = ‖x − x̂‖² = (x − ∑αₖeₖ, x − ∑αₖeₖ) = (x, x) − 2∑αₖ(x, eₖ) + (∑αₖeₖ, ∑αₖeₖ) = ‖x‖² − 2∑αₖ(x, eₖ) + ∑αₖ² + ∑(x, eₖ)² − ∑(x, eₖ)² = ‖x‖² + ∑(αₖ − (x, eₖ))² − ∑(x, eₖ)².

It is clear that this expression takes its minimum value at αₖ = (x, eₖ). Then h²min = ‖x‖² − ∑αₖ² ≥ 0. From here we obtain Bessel's inequality ∑αₖ² ≤ ‖x‖². If the minimal residual equals zero for every x, the orthonormal system of vectors (ONS) is called a complete orthonormal system in the sense of Steklov (PONS). From here we can obtain the Steklov-Parseval equality ∑αₖ² = ‖x‖², the “Pythagorean theorem” for infinite-dimensional Euclidean spaces that are complete in the sense of Steklov. Now it would be necessary to prove that, for any vector of the space to be uniquely represented as a Fourier series converging to it, it is necessary and sufficient that the Steklov-Parseval equality hold. Does the system of vectors eₖ then form an ONB? Consider the partial sums Sₙ = α₁e₁ + … + αₙeₙ of the Fourier series. Then ‖x − Sₙ‖² = ‖x‖² − (α₁² + … + αₙ²) → 0 as n → ∞, like the tail of a convergent series. Thus, the system of vectors is a PONS and forms an ONB.
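A sketch checking Bessel's inequality numerically for the trigonometric ONS (numpy assumed; the function e^x, the grid and the number of terms are arbitrary choices):

```python
import numpy as np

N = 100000
h = 2 * np.pi / N
xs = -np.pi + (np.arange(N) + 0.5) * h        # midpoint grid on [-pi, pi]

f = np.exp(xs)                                # arbitrary test function e^x

def inner(g1, g2):
    return h * np.sum(g1 * g2)                # scalar product on [-pi, pi]

# The trigonometric ONS: 1/sqrt(2*pi), cos(kx)/sqrt(pi), sin(kx)/sqrt(pi).
system = [np.full_like(xs, 1 / np.sqrt(2 * np.pi))]
for k in range(1, 30):
    system.append(np.cos(k * xs) / np.sqrt(np.pi))
    system.append(np.sin(k * xs) / np.sqrt(np.pi))

alphas = np.array([inner(f, e) for e in system])

print(np.sum(alphas**2) <= inner(f, f))  # True: Bessel's inequality
```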

Example. The trigonometric system

1/√(2π), cos x/√π, sin x/√π, cos 2x/√π, sin 2x/√π, …

in the space of all Riemann-integrable functions f(x) on the segment [−π, π] is a PONS and forms an ONB.

Definition 1. A system of vectors is called linearly dependent if one of the vectors of the system can be represented as a linear combination of the remaining vectors of the system, and linearly independent otherwise.

Definition 1´. A system of vectors is called linearly dependent if there are numbers c₁, c₂, …, c_k, not all equal to zero, such that the linear combination of the vectors with these coefficients is equal to the zero vector: c₁ā₁ + c₂ā₂ + … + c_kā_k = 0̄; otherwise the system is called linearly independent.

Let us show that these definitions are equivalent.

Let Definition 1 be satisfied, i.e. one of the vectors of the system, say ā_k, is equal to a linear combination of the others:

ā_k = c₁ā₁ + c₂ā₂ + … + c_{k−1}ā_{k−1}.

Then c₁ā₁ + c₂ā₂ + … + c_{k−1}ā_{k−1} + (−1)ā_k = 0̄: a linear combination of the vectors of the system is equal to the zero vector, and not all coefficients of this combination are equal to zero (the coefficient of ā_k is −1), i.e. Definition 1´ is satisfied.

Let Definition 1´ hold: a linear combination of the vectors of the system is equal to the zero vector, c₁ā₁ + c₂ā₂ + … + c_kā_k = 0̄, and not all coefficients of the combination are equal to zero; let, for example, the coefficient of the vector ā_k be c_k ≠ 0. Then

ā_k = (−c₁/c_k)ā₁ + (−c₂/c_k)ā₂ + … + (−c_{k−1}/c_k)ā_{k−1}.

We have represented one of the vectors of the system as a linear combination of the others, i.e. Definition 1 is satisfied.
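For example (a concrete illustration): the system ā₁ = (1, 2), ā₂ = (2, 4) is linearly dependent in the sense of both definitions, since ā₂ = 2ā₁ (Definition 1) and 2ā₁ + (−1)ā₂ = 0̄ (Definition 1´).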

Definition 2. A unit (coordinate) vector is an n-dimensional vector whose i-th coordinate is equal to one while the remaining coordinates are zero:

ē₁ = (1, 0, 0, …, 0),

ē₂ = (0, 1, 0, …, 0),

…,

ēₙ = (0, 0, 0, …, 1).

Theorem 1. The distinct unit vectors of n-dimensional space are linearly independent.

Proof. Suppose a linear combination of these vectors with coefficients not all equal to zero is the zero vector:

c₁ē₁ + c₂ē₂ + … + cₙēₙ = 0̄.

Since c₁ē₁ + c₂ē₂ + … + cₙēₙ = (c₁, c₂, …, cₙ), it follows from this equality that all the coefficients are equal to zero. We have obtained a contradiction.

Each vector ā = (a₁, a₂, …, aₙ) of n-dimensional space can be represented as a linear combination of the unit vectors, with coefficients equal to the coordinates of the vector: ā = a₁ē₁ + a₂ē₂ + … + aₙēₙ.
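For example, in three-dimensional space ā = (5, −2, 3) = 5ē₁ + (−2)ē₂ + 3ē₃.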

Theorem 2. If a system of vectors contains a zero vector, then it is linearly dependent.

Proof. Let a system of vectors ā₁, ā₂, …, ā_k be given, and let one of its vectors be zero, for example ā₁ = 0̄. Then one can form a linear combination of the vectors of this system that is equal to the zero vector, with not all coefficients zero: 1·ā₁ + 0·ā₂ + … + 0·ā_k = 0̄.

Therefore, the system is linearly dependent.

Theorem 3. If some subsystem of a system of vectors is linearly dependent, then the entire system is linearly dependent.

Proof. Let a system of vectors ā₁, ā₂, …, ā_k be given, and let its subsystem ā₁, ā₂, …, ā_r (r < k) be linearly dependent, i.e. there are numbers c₁, c₂, …, c_r, not all equal to zero, such that c₁ā₁ + c₂ā₂ + … + c_rā_r = 0̄. Then

c₁ā₁ + … + c_rā_r + 0·ā_{r+1} + … + 0·ā_k = 0̄.

It turned out that a linear combination of the vectors of the entire system is equal to the zero vector, and not all coefficients of this combination are equal to zero. Consequently, the system of vectors is linearly dependent.

Consequence. If a system of vectors is linearly independent, then any of its subsystems is also linearly independent.

Proof.

Let's assume the opposite, i.e. some subsystem is linearly dependent. It follows from the theorem that the entire system is linearly dependent. We have arrived at a contradiction.

Theorem 4 (Steinitz's theorem). If each of the vectors b̄₁, b̄₂, …, b̄_m is a linear combination of the vectors ā₁, ā₂, …, āₙ and m > n, then the system of vectors b̄₁, b̄₂, …, b̄_m is linearly dependent.

Consequence. In any system of n-dimensional vectors there cannot be more than n linearly independent ones.

Proof. Every n-dimensional vector is expressed as a linear combination of n unit vectors. Therefore, if the system contains m vectors and m>n, then, according to the theorem, this system is linearly dependent.
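A numerical sketch of this corollary (numpy assumed; the four vectors are random sample data): any m = 4 vectors of 3-dimensional space have rank at most 3 and are therefore linearly dependent.

```python
import numpy as np

rng = np.random.default_rng(1)

# m = 4 random vectors in R^3, taken as the rows of A (m > n = 3).
A = rng.standard_normal((4, 3))

# The rank cannot exceed n = 3 < m, so the system is linearly dependent.
print(np.linalg.matrix_rank(A) < A.shape[0])  # True
```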

3.3. Linear independence of vectors. Basis.

A linear combination of a system of vectors x̄₁, x̄₂, …, x̄ₙ is a vector of the form

a₁x̄₁ + a₂x̄₂ + … + aₙx̄ₙ,

where a₁, a₂, …, aₙ are arbitrary numbers.

If all aᵢ = 0, then the linear combination is called trivial. In this case, obviously, a₁x̄₁ + a₂x̄₂ + … + aₙx̄ₙ = 0̄.

Definition 5.

If for a system of vectors x̄₁, x̄₂, …, x̄ₙ there exists a non-trivial linear combination (at least one aᵢ ≠ 0) equal to the zero vector:

a₁x̄₁ + a₂x̄₂ + … + aₙx̄ₙ = 0̄, (1)

then the system of vectors is called linearly dependent.

If equality (1) is possible only in the case when all aᵢ = 0, then the system of vectors is called linearly independent.

Theorem 2 (Conditions of linear dependence). A system of vectors is linearly dependent if and only if (1) at least one of its vectors can be represented as a linear combination of the others.

Definition 6. A basis in space is an ordered triple of non-coplanar vectors ē₁, ē₂, ē₃.

From Theorem 3 it follows that if a basis ē₁, ē₂, ē₃ is given in space, then by adding an arbitrary vector x̄ to it we obtain a linearly dependent system of vectors. In accordance with Theorem 2 (1), one of them (it can be shown that it is the vector x̄) can be represented as a linear combination of the others:

x̄ = α₁ē₁ + α₂ē₂ + α₃ē₃.

Definition 7.

The numbers α₁, α₂, α₃ in the expansion x̄ = α₁ē₁ + α₂ē₂ + α₃ē₃ are called the coordinates of the vector x̄ in the basis ē₁, ē₂, ē₃

(denoted x̄ = (α₁, α₂, α₃)).

If the vectors are considered on the plane, then the basis is an ordered pair of non-collinear vectors ē₁, ē₂,

and the coordinates of a vector in this basis are a pair of numbers: x̄ = α₁ē₁ + α₂ē₂, i.e. x̄ = (α₁, α₂).

Note 3. It can be shown that, for a given basis, the coordinates of a vector are determined uniquely. From this, in particular, it follows that if vectors are equal, then their corresponding coordinates are equal, and vice versa.

Thus, if a basis is given in a space, then each vector of the space corresponds to an ordered triple of numbers (coordinates of the vector in this basis) and vice versa: each triple of numbers corresponds to a vector.

On the plane, a similar correspondence is established between vectors and pairs of numbers.

Theorem 4 (Linear operations through vector coordinates).

If in some basis x̄ = (x₁, x₂, x₃) and ȳ = (y₁, y₂, y₃),

and a is an arbitrary number, then in this basis

a·x̄ = (a·x₁, a·x₂, a·x₃), x̄ + ȳ = (x₁ + y₁, x₂ + y₂, x₃ + y₃).

In other words:

when a vector is multiplied by a number, its coordinates are multiplied by that number;

when vectors are added, their corresponding coordinates are added.
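Theorem 4 rendered directly in code (the coordinates are arbitrary sample numbers):

```python
import numpy as np

x = np.array([1.0, -2.0, 4.0])   # coordinates of x in some basis
y = np.array([3.0, 0.0, -1.0])   # coordinates of y in the same basis
a = 2.5

print(a * x)    # coordinates are multiplied by the number: [ 2.5 -5.  10. ]
print(x + y)    # corresponding coordinates are added:      [ 4. -2.  3.]
```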

Example 1. In some basis the vectors ā, b̄, c̄ and d̄ have given coordinates. Show that the vectors ā, b̄, c̄ form a basis and find the coordinates of the vector d̄ in this basis.

The vectors ā, b̄, c̄ form a basis if they are non-coplanar and therefore (in accordance with Theorem 3(2)) linearly independent.

By Definition 5 this means that the equality

x·ā + y·b̄ + z·c̄ = 0̄

is possible only if x = y = z = 0.
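The numerical data of Example 1 is not reproduced above, so the sketch below uses hypothetical coordinates for ā, b̄, c̄, d̄ purely to illustrate the method: the vectors form a basis exactly when the determinant of their coordinate matrix is non-zero, and the coordinates of d̄ in that basis solve a linear system.

```python
import numpy as np

# Hypothetical coordinates (assumed for illustration only).
a = np.array([1.0, 2.0, 1.0])
b = np.array([2.0, -1.0, 0.0])
c = np.array([0.0, 1.0, 3.0])
d = np.array([3.0, 5.0, 8.0])

M = np.column_stack([a, b, c])

# Non-zero determinant <=> a, b, c are non-coplanar, i.e. form a basis.
print(np.linalg.det(M))          # -13.0, so a, b, c form a basis

# Coordinates (x, y, z) of d in the basis a, b, c: solve M @ [x, y, z] = d.
print(np.linalg.solve(M, d))
```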


The concepts of linear dependence and independence of a system of vectors are very important when studying vector algebra, since the concepts of dimension and basis of space are based on them. In this article we will give definitions, consider the properties of linear dependence and independence, obtain an algorithm for studying a system of vectors for linear dependence, and analyze in detail the solutions of examples.


Definition of linear dependence and linear independence of a system of vectors.

Let's consider a set of p n-dimensional vectors and denote them a₁, a₂, …, a_p. Let's form a linear combination of these vectors with arbitrary numbers (real or complex) α₁, α₂, …, α_p: α₁a₁ + α₂a₂ + … + α_p·a_p. Based on the definition of operations on n-dimensional vectors, as well as the properties of the operations of adding vectors and multiplying a vector by a number, it can be argued that the written linear combination represents some n-dimensional vector b, that is, b = α₁a₁ + α₂a₂ + … + α_p·a_p.

This is how we approached the definition of the linear dependence of a system of vectors.

Definition.

If a linear combination can represent the zero vector while among the numbers α₁, α₂, …, α_p there is at least one non-zero, then the system of vectors a₁, a₂, …, a_p is called linearly dependent.

Definition.

If a linear combination is the zero vector only when all the numbers α₁, α₂, …, α_p are equal to zero, then the system of vectors a₁, a₂, …, a_p is called linearly independent.

Properties of linear dependence and independence.

Based on these definitions, we formulate and prove properties of linear dependence and linear independence of a system of vectors.

    If several vectors are added to a linearly dependent system of vectors, the resulting system will be linearly dependent.

    Proof.

    Since the system of vectors a₁, a₂, …, a_p is linearly dependent, the equality α₁a₁ + α₂a₂ + … + α_p·a_p = 0 is possible with at least one non-zero number among α₁, α₂, …, α_p. Let α₁ ≠ 0.

    Let's add s more vectors a_{p+1}, a_{p+2}, …, a_{p+s} to the original system of vectors, and we obtain the system a₁, …, a_p, a_{p+1}, …, a_{p+s}. Since α₁a₁ + … + α_p·a_p = 0 and α₁ ≠ 0, the linear combination of the vectors of this system of the form

    α₁a₁ + … + α_p·a_p + 0·a_{p+1} + … + 0·a_{p+s}

    represents the zero vector, and among its coefficients there is a non-zero one. Consequently, the resulting system of vectors is linearly dependent.

    If several vectors are excluded from a linearly independent system of vectors, then the resulting system will be linearly independent.

    Proof.

    Let us assume that the resulting system is linearly dependent. By adding all the discarded vectors to this system of vectors, we obtain the original system of vectors. By condition, it is linearly independent, but due to the previous property of linear dependence, it must be linearly dependent. We have arrived at a contradiction, therefore our assumption is incorrect.

    If a system of vectors has at least one zero vector, then such a system is linearly dependent.

    Proof.

    Let the vector a_k of this system of vectors be zero. Let us assume that the original system of vectors is linearly independent. Then the vector equality α₁a₁ + α₂a₂ + … + α_p·a_p = 0 is possible only when α₁ = α₂ = … = α_p = 0. However, if we take α_k different from zero and all the other coefficients zero, the equality will still be true, since α_k·a_k = α_k·0 = 0. Consequently, our assumption is incorrect, and the original system of vectors is linearly dependent.

    If a system of vectors is linearly dependent, then at least one of its vectors is linearly expressed in terms of the others. If a system of vectors is linearly independent, then none of the vectors can be expressed in terms of the others.

    Proof.

    First, let's prove the first statement.

    Let the system of vectors a₁, a₂, …, a_p be linearly dependent; then there is at least one non-zero number α_k and the equality α₁a₁ + α₂a₂ + … + α_p·a_p = 0 is true. This equality can be resolved with respect to a_k, since α_k ≠ 0; in this case we have

    a_k = (−α₁/α_k)·a₁ + … + (−α_{k−1}/α_k)·a_{k−1} + (−α_{k+1}/α_k)·a_{k+1} + … + (−α_p/α_k)·a_p.

    Consequently, the vector a_k is linearly expressed through the remaining vectors of the system, which is what needed to be proved.

    Now let's prove the second statement.

    Since the system of vectors a₁, a₂, …, a_p is linearly independent, the equality α₁a₁ + α₂a₂ + … + α_p·a_p = 0 is possible only for α₁ = α₂ = … = α_p = 0.

    Let us assume that some vector of the system is expressed linearly in terms of the others. Let this vector be a_k, so a_k = β₁a₁ + … + β_{k−1}a_{k−1} + β_{k+1}a_{k+1} + … + β_p·a_p. This equality can be rewritten as β₁a₁ + … + (−1)·a_k + … + β_p·a_p = 0; on its left side there is a linear combination of the vectors of the system, and the coefficient in front of the vector a_k is −1, i.e. different from zero, which indicates a linear dependence of the original system of vectors. So we came to a contradiction, which means the property is proven.

An important statement follows from the last two properties:
if a system of vectors contains the vectors a and λ·a, where λ is an arbitrary number, then it is linearly dependent.

Study of a system of vectors for linear dependence.

Let's pose a problem: we need to establish a linear dependence or linear independence of a system of vectors.

The logical question is: “how to solve it?”

Something useful from a practical point of view can be learned from the definitions and properties of linear dependence and independence of a system of vectors discussed above. These definitions and properties allow us to establish the linear dependence of a system of vectors in the following cases: when the system contains the zero vector, when it contains proportional vectors a and λ·a, or when one of its vectors is visibly a linear combination of the others.

What to do in other cases, which are the majority?

Let's figure this out.

Let us recall the formulation of the theorem on the rank of a matrix, which we presented in the article.

Theorem.

Let r be the rank of a matrix A of order p by n, r ≤ min(p, n). Let M be a basis minor of the matrix A. All rows (all columns) of the matrix A that do not participate in the formation of the basis minor M are linearly expressed through the rows (columns) of the matrix generating the basis minor M.

Now let us explain the connection between the theorem on the rank of a matrix and the study of a system of vectors for linear dependence.

Let's compose a matrix A whose rows are the vectors of the system under study: the i-th row of A consists of the coordinates of the vector aᵢ.

What would linear independence of a system of vectors mean?

From the fourth property of linear independence of a system of vectors, we know that none of the vectors of the system can be expressed in terms of the others. In other words, no row of matrix A will be linearly expressed in terms of other rows, therefore, linear independence of the system of vectors will be equivalent to the condition Rank(A)=p.

What will the linear dependence of the system of vectors mean?

Everything is very simple: at least one row of the matrix A will be linearly expressed in terms of the others; therefore, linear dependence of the system of vectors will be equivalent to the condition Rank(A) < p.

So, the problem of studying a system of vectors for linear dependence is reduced to the problem of finding the rank of a matrix composed of vectors of this system.

It should be noted that for p > n the system of vectors is always linearly dependent, since Rank(A) ≤ n < p.

Comment: when compiling matrix A, the vectors of the system can be taken not as rows, but as columns.

Algorithm for studying a system of vectors for linear dependence.

The algorithm is now clear: compose the matrix A whose rows are the vectors of the system, compute Rank(A) and compare it with the number of vectors p; if Rank(A) = p, the system is linearly independent, and if Rank(A) < p, it is linearly dependent. Let's look at the algorithm using examples, starting with the sketch below.
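A minimal sketch of this algorithm in Python (numpy assumed; the vectors are placeholder data):

```python
import numpy as np

def is_linearly_dependent(vectors):
    """True if the system of n-dimensional vectors is linearly dependent.

    The vectors form the rows of a matrix A; the system is dependent
    exactly when Rank(A) < p, the number of vectors.
    """
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) < A.shape[0]

print(is_linearly_dependent([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # True
print(is_linearly_dependent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # False
```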

Examples of studying a system of vectors for linear dependence.

Example.

A system of vectors is given. Examine it for linear dependence.

Solution.

Since the vector c is zero, the original system of vectors is linearly dependent due to the third property.

Answer:

The vector system is linearly dependent.

Example.

Examine a system of vectors for linear dependence.

Solution.

It is not difficult to notice that the coordinates of the vector c are equal to the corresponding coordinates of one of the other vectors of the system multiplied by 3; the vectors are proportional. Therefore, the original system of vectors is linearly dependent.

Let L be a linear space over the field R, and let a₁, a₂, …, aₙ (*) be a finite system of vectors from L. The vector b = α₁·a₁ + α₂·a₂ + … + αₙ·aₙ (16) is called a linear combination of the vectors (*), or we say that the vector b is linearly expressed through the system of vectors (*).

Definition 14. The system of vectors (*) is called linearly dependent if there exists a non-zero set of coefficients α₁, α₂, …, αₙ such that α₁·a₁ + α₂·a₂ + … + αₙ·aₙ = 0. If α₁·a₁ + α₂·a₂ + … + αₙ·aₙ = 0 ⇔ α₁ = α₂ = … = αₙ = 0, then the system (*) is called linearly independent.

Properties of linear dependence and independence.

1°. If a system of vectors contains the zero vector, then it is linearly dependent.

Indeed, if in the system (*) the vector a₁ = 0, then 1·0 + 0·a₂ + … + 0·aₙ = 0.

2°. If a system of vectors contains two proportional vectors, then it is linearly dependent.

Let a₁ = λ·a₂. Then 1·a₁ − λ·a₂ + 0·a₃ + … + 0·aₙ = 0.

3°. A finite system of vectors (*) for n ≥ 2 is linearly dependent if and only if at least one of its vectors is a linear combination of the remaining vectors of this system.

⇒ Let (*) be linearly dependent. Then there is a non-zero set of coefficients α₁, α₂, …, αₙ for which α₁·a₁ + α₂·a₂ + … + αₙ·aₙ = 0. Without loss of generality, we can assume that α₁ ≠ 0. Then a₁ = (−α₂/α₁)·a₂ + … + (−αₙ/α₁)·aₙ. So, the vector a₁ is a linear combination of the remaining vectors.

⇐ Let one of the vectors of (*) be a linear combination of the others. We can assume that it is the first vector, i.e. a₁ = β₂·a₂ + … + βₙ·aₙ. Hence (−1)·a₁ + β₂·a₂ + … + βₙ·aₙ = 0, i.e. (*) is linearly dependent.
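A computational sketch of property 3° (numpy assumed; the vectors are arbitrary sample data): a non-zero coefficient set α with α₁·a₁ + … + αₙ·aₙ = 0 can be read off from the null space of the coordinate matrix, and any vector with a non-zero coefficient is then expressed through the rest.

```python
import numpy as np

# A dependent system: a3 = a1 + 2*a2 (arbitrary sample data).
a1 = np.array([1.0, 0.0, 1.0])
a2 = np.array([0.0, 1.0, 1.0])
a3 = a1 + 2 * a2

A = np.column_stack([a1, a2, a3])   # columns are the vectors

# The right singular vector for the zero singular value spans the null space.
_, sigma, Vt = np.linalg.svd(A)
alpha = Vt[-1]                       # A @ alpha ~ 0, alpha != 0
print(sigma[-1])                     # ~ 0: the system is dependent
print(A @ alpha)                     # ~ zero vector

# Resolve the relation with respect to a1 (alpha[0] is non-zero here).
print((-alpha[1] / alpha[0]) * a2 + (-alpha[2] / alpha[0]) * a3)  # equals a1
```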

Comment. Using the last property, we can define the linear dependence and independence of an infinite system of vectors.

Definition 15. A system of vectors a₁, a₂, …, aₙ, … (**) is called linearly dependent if at least one of its vectors is a linear combination of some finite number of the other vectors. Otherwise, the system (**) is called linearly independent.

4°. A finite system of vectors is linearly independent if and only if none of its vectors can be linearly expressed in terms of its remaining vectors.

5°. If a system of vectors is linearly independent, then any of its subsystems is also linearly independent.

6°. If some subsystem of a given system of vectors is linearly dependent, then the entire system is also linearly dependent.

Let two systems of vectors be given: a₁, a₂, …, aₙ, … (16) and b₁, b₂, …, b_s, … (17). If each vector of system (16) can be represented as a linear combination of a finite number of vectors of system (17), then system (16) is said to be linearly expressed through system (17).

Definition 16. Two systems of vectors are called equivalent if each of them is linearly expressed through the other.

Theorem 9 (basic linear dependence theorem).

Let a₁, a₂, …, a_N and b₁, b₂, …, b_S be two finite systems of vectors from L. If the first system is linearly independent and is linearly expressed through the second, then N ≤ S.

Proof. Let's pretend that N > S. According to the conditions of the theorem, each vector of the first system is linearly expressed through the second:

a_i = c_{i1}b₁ + c_{i2}b₂ + … + c_{iS}b_S, i = 1, 2, …, N.

Consider the equality

x₁a₁ + x₂a₂ + … + x_N·a_N = 0. (18)

Since the system a₁, a₂, …, a_N is linearly independent, equality (18) holds ⇔ x₁ = x₂ = … = x_N = 0. Let us substitute here the expressions of the vectors a_i:

x₁(c_{11}b₁ + … + c_{1S}b_S) + … + x_N(c_{N1}b₁ + … + c_{NS}b_S) = 0. (19)

Hence

(c_{11}x₁ + … + c_{N1}x_N)b₁ + … + (c_{1S}x₁ + … + c_{NS}x_N)b_S = 0. (20)

Conditions (18), (19) and (20) are obviously equivalent. But (18) is satisfied only when x₁ = x₂ = … = x_N = 0. Let's find when equality (20) is true. If all its coefficients are zero, then it is obviously true. Equating them to zero, we obtain the system

c_{11}x₁ + … + c_{N1}x_N = 0, …, c_{1S}x₁ + … + c_{NS}x_N = 0. (21)

Since this homogeneous system has the zero solution, it is consistent. And since the number of unknowns N is greater than the number of equations S, the system has infinitely many solutions. Therefore, it has a non-zero solution x₁⁰, x₂⁰, …, x_N⁰. For these values equality (18) holds although not all coefficients are zero, which contradicts the fact that the system of vectors a₁, a₂, …, a_N is linearly independent. So our assumption is wrong. Hence, N ≤ S.

Consequence. If two equivalent systems of vectors are finite and linearly independent, then they contain the same number of vectors.

Definition 17. A system of vectors is called a maximal linearly independent system of vectors of a linear space L if it is linearly independent, but when any vector from L not included in this system is added to it, it becomes linearly dependent.

Theorem 10. Any two finite maximal linearly independent systems of vectors from L contain the same number of vectors.

The proof follows from the fact that any two maximal linearly independent systems of vectors are equivalent.

It is easy to prove that any linearly independent system of vectors of the space L can be extended to a maximal linearly independent system of vectors of this space.

Examples:

1. In the set of all collinear geometric vectors, any system consisting of one nonzero vector is maximally linearly independent.

2. In the set of all coplanar geometric vectors, any two non-collinear vectors constitute a maximal linearly independent system.

3. In the set of all possible geometric vectors of three-dimensional Euclidean space, any system of three non-coplanar vectors is maximally linearly independent.

4. In the set of all polynomials of degree at most n with real (complex) coefficients, the system of polynomials 1, x, x², …, xⁿ is maximal linearly independent.

5. In the set of all polynomials with real (complex) coefficients, examples of a maximal linearly independent system are

a) 1, x, x², …, xⁿ, …;

b) 1, (1 − x), (1 − x)², …, (1 − x)ⁿ, …

6. The set of matrices of dimension m×n is a linear space (check it). An example of a maximal linearly independent system in this space is the system of matrix units E₁₁, E₁₂, …, E_mn, where E_ij is the matrix with a 1 in position (i, j) and zeros elsewhere.
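A sketch checking example 5b in low degree (numpy assumed): representing 1, (1 − x), (1 − x)², (1 − x)³ by their coefficient vectors in the basis 1, x, x², x³ reduces linear independence to a rank computation.

```python
import numpy as np
from math import comb

# Coefficient rows of (1 - x)^k, k = 0..3, in the basis 1, x, x^2, x^3.
A = np.zeros((4, 4))
for k in range(4):
    for j in range(k + 1):
        A[k, j] = comb(k, j) * (-1) ** j   # binomial expansion of (1 - x)^k

print(A)
print(np.linalg.matrix_rank(A))  # 4: the four polynomials are independent
```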

Let a system of vectors c₁, c₂, …, c_s (*) be given. A subsystem of vectors of (*) is called a maximal linearly independent subsystem of the system (*) if it is linearly independent, but when any other vector of this system is added to it, it becomes linearly dependent. If the system (*) is finite, then any of its maximal linearly independent subsystems contains the same number of vectors. (Prove it yourself.) The number of vectors in a maximal linearly independent subsystem of the system (*) is called the rank of this system. Obviously, equivalent systems of vectors have the same rank.
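Finally, a sketch of extracting a maximal linearly independent subsystem, and with it the rank of the system (numpy assumed; the vectors are arbitrary sample data): greedily keep every vector that increases the rank accumulated so far.

```python
import numpy as np

def maximal_independent_subsystem(vectors):
    """Greedily select a maximal linearly independent subsystem."""
    chosen, rank = [], 0
    for v in vectors:
        if np.linalg.matrix_rank(np.array(chosen + [v], dtype=float)) > rank:
            chosen.append(v)
            rank += 1
    return chosen, rank

system = [[1, 2, 3], [2, 4, 6], [1, 0, 0], [0, 1, 0]]
subsystem, r = maximal_independent_subsystem(system)
print(subsystem)  # [[1, 2, 3], [1, 0, 0], [0, 1, 0]]
print(r)          # the rank of the system: 3
```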