Determinant of the product of two matrices. Determinants of square matrices. Systems of linear equations

Lecture 6

4.6 Determinant of the product of two square matrices.

The product of two square matrices of order n is always defined. Here the following theorem is of great importance.

Theorem. The determinant of a product of matrices is equal to the product of the determinants of the factor matrices:

    det(AB) = det A · det B.

Proof. Let

    A = (aij) and B = (bij) be square matrices of order n,  C = AB,  det A = Δ1,  det B = Δ2.

Compose the auxiliary determinant of order 2n

        | A    O |
    Δ = |        |
        | −E   B |

whose upper left block is A, upper right block is the zero matrix, lower left block is −E, and lower right block is B. By the corollary of Laplace's theorem we have

    Δ = Δ1 · Δ2.

So Δ = det A · det B; we will now show that Δ = det C. To do this, we transform the determinant as follows. First the first n columns, multiplied respectively by b11, b21, …, bn1, are added to the (n+1)-th column. Then the first n columns, multiplied respectively by b12, b22, …, bn2, are added to the (n+2)-th column, and so on. At the last step the first n columns, multiplied respectively by b1n, b2n, …, bnn, are added to the 2n-th column. As a result we obtain the determinant

        | A    C |
    Δ = |        |
        | −E   O |

since the element in row i and column n+j of the upper right block equals ai1 b1j + ai2 b2j + … + ain bnj = cij, while the lower right block becomes the zero matrix.

Expanding the resulting determinant using Laplace's theorem along the last n columns, we find

    Δ = (−1)^((1+…+n)+((n+1)+…+2n)) · det(−E) · det C = (−1)^n · (−1)^n · det C = det C.

So we have proved the equalities Δ = Δ1 · Δ2 and Δ = det C, from which it follows that det(AB) = det C = det A · det B.
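As an illustration (not part of the original lecture), the theorem is easy to check numerically. The sketch below assumes NumPy is available; the two 3×3 matrices are arbitrary examples chosen only for the check.

```python
import numpy as np

# Two arbitrary 3x3 matrices chosen only for illustration.
A = np.array([[2., 1., 0.],
              [0., 3., 1.],
              [1., 0., 2.]])
B = np.array([[1., 2., 1.],
              [0., 1., 4.],
              [2., 0., 1.]])

det_AB = np.linalg.det(A @ B)                      # determinant of the product
det_A_det_B = np.linalg.det(A) * np.linalg.det(B)  # product of the determinants

print(det_AB, det_A_det_B)                         # the two values agree up to rounding
assert np.isclose(det_AB, det_A_det_B)
```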

4.7 Inverse matrix

Definition 1. Let a square matrix A of order n be given. A square matrix A⁻¹ of the same order is called the inverse of the matrix A if A·A⁻¹ = A⁻¹·A = E, where E is the identity matrix of order n.

Statement. If there is a matrix inverse to the matrix A, then such a matrix is unique.

Proof. Suppose that A⁻¹ is not the only matrix inverse to the matrix A, and take another inverse matrix B. Then the conditions

    A·A⁻¹ = A⁻¹·A = E  and  A·B = B·A = E

are satisfied. Consider the product B·A·A⁻¹. For it we have the equalities

    B·A·A⁻¹ = (B·A)·A⁻¹ = E·A⁻¹ = A⁻¹  and  B·A·A⁻¹ = B·(A·A⁻¹) = B·E = B,

from which it follows that B = A⁻¹. Thus the uniqueness of the inverse matrix is proved.

When proving the theorem on the existence of an inverse matrix, we will need the concept of the adjoint matrix.

Definition 2. The matrix

    C = ( A11  A21  …  An1 )
        ( A12  A22  …  An2 )
        ( …    …       …   )
        ( A1n  A2n  …  Ann ),

whose elements are the algebraic complements of the elements of the matrix A, is called the adjoint matrix of the matrix A.

Note that in order to construct the adjoint matrix C, the elements of the matrix A must be replaced by their algebraic complements, and the resulting matrix must then be transposed.

Definition 3. A square matrix A is called non-degenerate (nonsingular) if det A ≠ 0.

Theorem. For the matrix A to have an inverse matrix A⁻¹, it is necessary and sufficient that the matrix A be non-degenerate. In this case the matrix A⁻¹ is determined by the formula

    A⁻¹ = (1/det A) · C,    (1)

where C is the adjoint matrix built from the algebraic complements Aij of the elements of the matrix A.

Proof. Necessity. Let the matrix A have an inverse matrix A⁻¹. Then the conditions A·A⁻¹ = A⁻¹·A = E are satisfied, which imply det A · det A⁻¹ = det E = 1. From the last equality we get that the determinants det A ≠ 0 and det A⁻¹ ≠ 0, and that they are related by the relation det A⁻¹ = 1/det A. The matrices A and A⁻¹ are non-degenerate, since their determinants are non-zero.

Sufficiency. Now let the matrix A be non-degenerate. Let us prove that the matrix A has an inverse matrix A⁻¹ and that it is determined by formula (1). For this, consider the product

    A · C

of the matrix A and the matrix C adjoint to it.

By the rule of matrix multiplication, the element of the product A·C standing in row i and column j has the form ai1 Aj1 + ai2 Aj2 + … + ain Ajn. Since the sum of the products of the elements of the i-th row by the algebraic complements of the corresponding elements of the j-th row is zero for i ≠ j and equals the determinant det A for i = j, we get

    A · C = det A · E,

where E is the identity matrix of order n. The equality C · A = det A · E is established in the same way. Thus,

    A · (C/det A) = (C/det A) · A = E,

which means that A⁻¹ = C/det A, and this matrix is the inverse of the matrix A. Therefore the nonsingular matrix A has an inverse matrix, which is determined by formula (1).

Corollary 1. The determinants of the matrices A and A⁻¹ are related by the relation det A⁻¹ = 1/det A.

Corollary 2. The main property of the matrix C adjoint to the matrix A is expressed by the equalities

    A · C = C · A = det A · E.

Corollary 3. The determinant of a non-degenerate matrix A and the determinant of the matrix C adjoint to it are bound by the equality

    det C = (det A)^(n−1).

Corollary 3 follows from the equality A·C = det A · E and the property of determinants according to which, when a matrix of order n is multiplied by a number, its determinant is multiplied by the n-th power of this number. In this case

    det A · det C = det(det A · E) = (det A)^n,

whence it follows that det C = (det A)^(n−1).

Example. Find the matrix inverse to the matrix A.

Solution. The determinant of the matrix A is different from zero; therefore the matrix A has an inverse. To find it, we first calculate the algebraic complements Aij of all the elements of A, and then, using formula (1), write out the inverse matrix A⁻¹ = (1/det A) · C.
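The following sketch shows formula (1) in code. It is an illustration only (NumPy assumed, hypothetical 3×3 matrix), not the lecture's own worked example.

```python
import numpy as np

def inverse_via_adjoint(A):
    """Inverse by formula (1): A^(-1) = (1/det A) * C, where C is the adjoint matrix."""
    n = A.shape[0]
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("matrix is degenerate, no inverse exists")
    cof = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)  # delete row i and column j
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)     # algebraic complement A_ij
    C = cof.T                                                      # adjoint matrix: cofactors, transposed
    return C / det_A

A = np.array([[2., 1., 1.],
              [1., 3., 2.],
              [1., 0., 0.]])                 # hypothetical non-degenerate matrix
A_inv = inverse_via_adjoint(A)
print(np.allclose(A @ A_inv, np.eye(3)))     # True: A * A^(-1) = E
```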

4.8. Elementary transformations over matrices. Gauss algorithm.

Definition 1. By elementary transformations of an m × n matrix A we understand the following operations.

    Multiplication of any row (column) of the matrix by any non-zero number.

    Addition to any i-th row of the matrix of any of its j-th rows (j ≠ i), multiplied by an arbitrary number.

    Addition to any i-th column of the matrix of any of its j-th columns (j ≠ i), multiplied by an arbitrary number.

    Permutation of rows (columns) of the matrix.

Definition 2. We will call the matrices A and B equivalent if one of them can be transformed into the other by elementary transformations. We will write A ~ B.

Matrix equivalence has the following properties: A ~ A; if A ~ B, then B ~ A; if A ~ B and B ~ C, then A ~ C.

Definition 3. A matrix A is called a step (echelon) matrix if it has the following properties:

1) if the i-th row is zero, i.e. consists only of zeros, then the (i+1)-th row is also zero;

2) if the first non-zero elements of the i-th and (i+1)-th rows are located in the columns with numbers k and l, then k < l.

Example. The matrices

and

are step matrices, and the matrix

is not a step matrix.

Let us show how, using elementary transformations, we can reduce a matrix A to step form.

Gauss algorithm. Consider an m × n matrix A = (aij). Without loss of generality we may assume that a11 ≠ 0. (If the matrix A has at least one non-zero element, then by interchanging the rows and then the columns we can ensure that this element falls at the intersection of the first row and the first column.) Add to the second row of the matrix A the first row multiplied by −a21/a11, to the third row the first row multiplied by −a31/a11, and so on.

As a result we get

    A ~ A1,

where the first column of A1 has the form (a11, 0, …, 0)ᵀ.

The elements of the last m − 1 rows of A1 are defined by the formulas

    a′ij = aij − (ai1/a11)·a1j,   i = 2, …, m,   j = 2, …, n.

Consider the matrix A′ formed by the last m − 1 rows and the last n − 1 columns of A1.

If all elements of the matrix A′ are equal to zero, then A1 is a step matrix equivalent to A. If among the elements of the matrix A′ at least one is different from zero, then we can assume without loss of generality that its leading element a′22 ≠ 0 (this can be achieved by rearranging the rows and columns of A′). In this case, transforming the matrix A′ in the same way as the matrix A, we get

    A1 ~ A2,

and, accordingly,

    A ~ A2,

where in A2 the first two columns already have step form. Here the elements of the remaining rows are given by

    a″ij = a′ij − (a′i2/a′22)·a′2j,   i = 3, …, m,   j = 3, …, n.

The matrix A has m rows, and to reduce it to step form in the indicated way takes no more than m steps. The process terminates at the k-th step if and only if all elements of the remaining lower right submatrix are equal to zero. In this case

    A ~ Ak,

and Ak is a step matrix whose first k rows are non-zero.
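A sketch of the algorithm in code is given below (an illustration under stated assumptions, not the lecture's own listing; NumPy assumed). Instead of permuting columns it swaps rows and picks the largest available pivot, which still yields a step (row echelon) matrix.

```python
import numpy as np

def to_step_form(A, eps=1e-12):
    """Reduce a copy of A to step (row echelon) form by elementary row transformations."""
    M = np.array(A, dtype=float)
    m, n = M.shape
    row = 0
    for col in range(n):
        if row >= m:
            break
        pivot = max(range(row, m), key=lambda r: abs(M[r, col]))  # row with the largest entry
        if abs(M[pivot, col]) < eps:
            continue                          # the column is zero below the current row
        M[[row, pivot]] = M[[pivot, row]]     # row interchange
        for r in range(row + 1, m):
            M[r] -= (M[r, col] / M[row, col]) * M[row]  # eliminate the entries below the pivot
        row += 1
    return M

A = [[0., 2., 4., 1.],
     [1., 1., 1., 1.],
     [2., 4., 6., 3.]]
print(to_step_form(A))
```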

4.9. Finding the inverse matrix using elementary transformations.

For a matrix of large order it is convenient to find the inverse matrix using elementary transformations. This method is as follows. Write out the composite matrix (A | E) and, following the scheme of the Gauss method, perform elementary transformations on the rows of this matrix (i.e. simultaneously on the matrix A and on the matrix E). As a result the matrix A is transformed into the identity matrix, and the matrix E into the matrix A⁻¹.

Example. Find the matrix inverse to the matrix A.

Solution. We write the composite matrix (A | E) and transform it using elementary row transformations in accordance with the Gauss method. From these transformations we obtain the inverse matrix A⁻¹ in place of E.
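Below is a sketch of the (A | E) scheme in code (NumPy assumed; the matrix is a hypothetical example). The row operations are exactly those of the Gauss method, continued until the left half becomes E.

```python
import numpy as np

def inverse_by_row_reduction(A):
    """Transform (A | E) by row operations until A becomes E; the right half is then A^(-1)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])            # composite matrix (A | E)
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r, col]))
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is degenerate, no inverse exists")
        M[[col, pivot]] = M[[pivot, col]]    # bring a non-zero pivot into place
        M[col] /= M[col, col]                # make the pivot equal to 1
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]   # clear the rest of the column
    return M[:, n:]                          # the right half is the inverse matrix

A = [[2., 5., 7.],
     [6., 3., 4.],
     [5., -2., -3.]]                         # hypothetical example matrix
print(np.allclose(np.array(A) @ inverse_by_row_reduction(A), np.eye(3)))   # True
```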

4.10 Matrix rank.

Definition. An integer r is called the rank of the matrix A if A has a minor of order r different from zero, while all minors of order higher than r are equal to zero. The rank of a matrix is denoted by the symbol r(A).

The rank of a matrix can be calculated by the method of bordering minors.


Example. Calculate the rank of a matrix using the bordering-minor method.

Solution.

The above method is not always convenient, because it involves the calculation of a large number of determinants.

Statement. The rank of a matrix does not change under elementary transformations of its rows and columns.

The stated statement indicates a second way to calculate the rank of a matrix, called the method of elementary transformations. To find the rank of a matrix, one must bring it to step form using the Gauss method and then select the maximal non-zero minor (its order equals the number of non-zero rows). Let us explain this with an example.

Example. Using elementary transformations, calculate the rank of a matrix

.

Solution. Let's perform a chain of elementary transformations in accordance with the Gauss method. As a result, we obtain a chain of equivalent matrices.
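As a sketch of this second method (NumPy assumed, hypothetical matrix), the rank can be read off as the number of non-zero rows of the step form; the elimination loop repeats the one sketched in section 4.8.

```python
import numpy as np

def rank_by_elimination(A, eps=1e-10):
    """Rank = number of non-zero rows after reduction to step form."""
    M = np.array(A, dtype=float)
    m, n = M.shape
    row = 0
    for col in range(n):
        if row >= m:
            break
        pivot = max(range(row, m), key=lambda r: abs(M[r, col]))
        if abs(M[pivot, col]) < eps:
            continue
        M[[row, pivot]] = M[[pivot, row]]
        for r in range(row + 1, m):
            M[r] -= (M[r, col] / M[row, col]) * M[row]
        row += 1
    return row                               # pivot rows found = non-zero rows of the step form

A = [[1., 2., 3.],
     [2., 4., 6.],
     [1., 0., 1.]]
print(rank_by_elimination(A))                # 2: the second row is twice the first
print(np.linalg.matrix_rank(np.array(A)))    # cross-check with NumPy
```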


Definition. The product of two matrices A and B is the matrix C whose element located at the intersection of the i-th row and the j-th column equals the sum of the products of the elements of the i-th row of the matrix A by the corresponding (in order) elements of the j-th column of the matrix B.

This definition implies the formula for the element of the matrix C:

    cij = ai1 b1j + ai2 b2j + … + ain bnj.

The product of the matrix A by the matrix B is denoted AB.

Example 1. Find the product of two matrices A and B.

Solution. It is convenient to arrange the computation as a scheme (Fig. 2) in which the elements of the i-th row of the matrix A are multiplied by the elements of the j-th column of the matrix B and the products are added together; this sum is the element of the matrix C standing in the i-th row and the j-th column.

Computing all such sums, we obtain the elements of the product and can write down the product of the two matrices.

The product AB of two matrices makes sense only when the number of columns of the matrix A equals the number of rows of the matrix B.

There is another important feature of the product of matrices concerning the numbers of rows and columns:

in the product AB the number of rows is equal to the number of rows of the matrix A, and the number of columns is equal to the number of columns of the matrix B.
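The definition and the two dimension rules translate directly into code. The sketch below (pure Python, illustrative only) checks the compatibility condition and fills C element by element.

```python
def mat_mul(A, B):
    """Product C = AB from the definition: c[i][j] = sum over k of a[i][k] * b[k][j]."""
    rows_A, cols_A = len(A), len(A[0])
    rows_B, cols_B = len(B), len(B[0])
    if cols_A != rows_B:
        raise ValueError("AB is undefined: columns of A must equal rows of B")
    # C has as many rows as A and as many columns as B
    C = [[0] * cols_B for _ in range(rows_A)]
    for i in range(rows_A):
        for j in range(cols_B):
            C[i][j] = sum(A[i][k] * B[k][j] for k in range(cols_A))
    return C

A = [[1, 2, 3],
     [4, 5, 6]]           # 2 x 3
B = [[7, 8],
     [9, 10],
     [11, 12]]            # 3 x 2
print(mat_mul(A, B))      # [[58, 64], [139, 154]] - a 2 x 2 matrix
```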

Example 2. Find the number of rows and columns of the matrix C, which is the product of two matrices A and B of the following dimensions:

a) 2 X 10 and 10 X 5;

b) 10 X 2 and 2 X 5;

Example 3. Find the product of the matrices A and B if:

Solution. The number of rows in the matrix A is 2 and the number of columns in the matrix B is 2. Therefore the dimension of the matrix C = AB is 2 X 2.

Calculate the elements of the matrix C = AB.

We obtain the product of the matrices.


Example 5. Find the product of the matrices A and B if:

Solution. The number of rows in the matrix A is 2 and the number of columns in the matrix B is 1. Therefore the dimension of the matrix C = AB is 2 X 1.

Calculate the elements of the matrix C = AB.

The product of the matrices is written as a column matrix.


Example 6. Find the product of the matrices A and B if:

Solution. The number of rows in the matrix A is 3 and the number of columns in the matrix B is 3. Therefore the dimension of the matrix C = AB is 3 X 3.

Calculate the elements of the matrix C = AB.

We obtain the product of the matrices.


Example 7. Find the product of the matrices A and B if:

Solution. The number of rows in the matrix A is 1 and the number of columns in the matrix B is 1. Consequently, the dimension of the matrix C = AB is 1 X 1.

Calculate the element of the matrix C = AB.

The product of the matrices is a matrix consisting of one element.


The software implementation of the product of two matrices in C++ is discussed in the corresponding article in the "Computers and Programming" block.

Matrix exponentiation

Raising a matrix to a power is defined as multiplying the matrix by itself. Since the product of matrices exists only when the number of columns of the first matrix is the same as the number of rows of the second, only square matrices can be raised to a power. The n-th power of a matrix is obtained by multiplying the matrix by itself n times:

    Aⁿ = A · A · … · A  (n factors).
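A small sketch of repeated multiplication (pure Python, illustrative; the 2×2 matrix is a hypothetical example):

```python
def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)] for i in range(n)]

def mat_pow(A, n):
    """A^n for a square matrix A and an integer n >= 1, by multiplying A by itself n - 1 times."""
    result = A
    for _ in range(n - 1):
        result = mat_mul(result, A)
    return result

A = [[1, 1],
     [0, 1]]
print(mat_pow(A, 2))   # [[1, 2], [0, 1]]
print(mat_pow(A, 3))   # [[1, 3], [0, 1]]
```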

Example 8. Given a matrix A, find A² and A³.

Find these powers yourself.

Example 9. Given a matrix A.

Find the product of the given matrix and its transpose, and the product of the transposed matrix and the given matrix.

Properties of the product of two matrices

Property 1. The product of any matrix A and the identity matrix E of the corresponding order both on the right and on the left coincides with the matrix A, i.e. AE = EA = A.

In other words, the role of the identity matrix in matrix multiplication is the same as the role of units in the multiplication of numbers.
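Property 1 is easy to verify numerically. The sketch below (NumPy assumed, hypothetical 2×3 matrix) uses an identity matrix of order 3 on the right and of order 2 on the left, just as in the example that follows.

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])        # a hypothetical 2 x 3 matrix

E3 = np.eye(3)                      # identity matrix of the third order
E2 = np.eye(2)                      # identity matrix of the second order

print(np.allclose(A @ E3, A))       # True: A * E = A
print(np.allclose(E2 @ A, A))       # True: E * A = A
```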

Example 10. Make sure that Property 1 is true by finding the products of the matrix A by the identity matrix on the right and on the left.

Solution. Since the matrix A contains three columns, we need to find the product AE, where E is the identity matrix of the third order. Let us find the elements of the product C = AE.

It turns out that AE = A.

Now let us find the product EA, where E is the identity matrix of the second order, since the matrix A contains two rows. Let us find the elements of the product C = EA.

Theorem. Let A and B be two square matrices of order n. Then the determinant of their product is equal to the product of the determinants, i.e.

    |AB| = |A| · |B|.

Proof. Let A = (aij)n×n, B = (bij)n×n. Consider the determinant d2n of order 2n whose upper left block is A, upper right block is the zero matrix, lower left block is −E and lower right block is B:

          | A    O |
    d2n = |        |
          | −E   B |

By Laplace's theorem,

    d2n = |A| · |B| · (−1)^((1+…+n)+(1+…+n)) = |A| · |B|.

If we show that the determinant d2n is equal to the determinant of the matrix C = AB, then the theorem will be proved.

In d2n we perform the following transformations: to row 1 we add row (n+1) multiplied by a11, row (n+2) multiplied by a12, and so on, up to row 2n multiplied by a1n. In the resulting determinant the first n elements of the first row become zero, and the other n elements become:

    a11 b11 + a12 b21 + … + a1n bn1 = c11;

    a11 b12 + a12 b22 + … + a1n bn2 = c12;

    …

    a11 b1n + a12 b2n + … + a1n bnn = c1n.

Similarly we obtain zeros in rows 2, …, n of the determinant d2n, and the last n elements in each of these rows become the corresponding elements of the matrix C. As a result the determinant d2n is transformed into the equal determinant with blocks O and C in the first n rows and −E and B in the last n rows; expanding it by the first n rows gives

    d2n = |C| · |−E| · (−1)^((1+…+n)+((n+1)+…+2n)) = |C| · (−1)^n · (−1)^n = |AB|.

Consequence. The determinant of the product of a finite number of square matrices is equal to the product of their determinants.

Proof. By induction: |A1 … Ai Ai+1| = |A1 … Ai| · |Ai+1| = … = |A1| · … · |Ai+1|. This chain of equalities is true by the theorem.

Inverse matrix.

Let A = (aij)n×n be a square matrix over the field P.

Definition 1. A matrix A is called degenerate if its determinant is equal to 0, and non-degenerate otherwise.

Definition 2. Let A ∈ Pn. A matrix B ∈ Pn is called inverse to A if AB = BA = E.

Theorem (criterion of matrix invertibility). A matrix A is invertible if and only if it is non-degenerate.

Proof. Let A have an inverse matrix. Then AA⁻¹ = E and, applying the theorem on the multiplication of determinants, we obtain |A| · |A⁻¹| = |E|, i.e. |A| · |A⁻¹| = 1. Therefore |A| ≠ 0.

Conversely, let |A| ≠ 0. We must show that there exists a matrix B such that AB = BA = E. As B we take the matrix B = (1/|A|) · (Aji), i.e. the entry of B in row i and column j equals Aji/|A|, where Aij is the algebraic complement of the element aij. Then the result of the multiplication is the identity matrix (it suffices to use Corollaries 1 and 2 of Laplace's theorem, § 6), i.e. AB = E. Similarly it is shown that BA = E.

Example. For the matrix A, find the inverse matrix or prove that it does not exist.

Solution. det A = −3 ≠ 0, so the inverse matrix exists. Now we compute the algebraic complements:

A11 = −3   A21 = 0    A31 = 6

A12 = 0    A22 = 0    A32 = −3

A13 = 1    A23 = −1   A33 = −1

So the inverse matrix has the form

    B = (1/det A) · C = (−1/3) · ( −3   0    6 )
                                 (  0   0   −3 )
                                 (  1  −1   −1 ).

Algorithm for finding the inverse matrix for matrix A.

1. Calculate det A.

2. If it is equal to 0, then the inverse matrix does not exist. If det A ≠ 0, we compute the algebraic complements.

3. We place the algebraic complements in the appropriate (transposed) positions.

4. Divide all elements of the resulting matrix by det A.

Exercise 1. Find out whether the inverse matrix is unique.

Exercise 2. Let the elements of the matrix A be integers. Will the elements of the inverse matrix also be integers?

Systems of linear equations.

Definition 1. An equation of the form a1 x1 + … + an xn = b, where a1, …, an, b are numbers and x1, …, xn are unknowns, is called a linear equation in n unknowns.

A set of s linear equations in n unknowns is called a system of s linear equations in n unknowns, i.e.

    a11 x1 + a12 x2 + … + a1n xn = b1,
    a21 x1 + a22 x2 + … + a2n xn = b2,
    ……………
    as1 x1 + as2 x2 + … + asn xn = bs.        (1)

The matrix A = (aij), composed of the coefficients of the unknowns of system (1), is called the matrix of system (1).

If we add the column of free terms to the matrix A, we obtain the augmented (extended) matrix of system (1).

X = (x1, …, xn)ᵀ is the column of unknowns, B = (b1, …, bs)ᵀ is the column of free terms.

In matrix form the system has the form AX = B (2).

A solution of system (1) is an ordered set of n numbers (α1, …, αn) such that the substitution x1 = α1, x2 = α2, …, xn = αn turns every equation of (1) into a numerical identity.

Definition 2. System (1) is called consistent if it has solutions, and inconsistent otherwise.

Definition 3. Two systems are said to be equivalent if their solution sets are the same.

There is a universal way to solve system (1): the Gauss method (the method of successive elimination of unknowns), see p. 15.

Let us consider in more detail the case when s = n. There is a Cramer method for solving such systems.

Let d = det A, and let dj denote the determinant obtained from d by replacing its j-th column with the column of free terms.

Theorem (Cramer's rule). If the determinant of the system d ≠ 0, then the system has a unique solution, obtained from the formulas

    x1 = d1/d, …, xn = dn/d.

Proof. The idea of the proof is to rewrite system (1) in the form of a matrix equation. Put

    A = (aij)n×n,   X = (x1, …, xn)ᵀ,   B = (b1, …, bn)ᵀ,

and consider the equation AX = B (2) with the unknown column matrix X. Since A, X, B are matrices of dimensions n × n, n × 1, n × 1 respectively, the product of the rectangular matrices A and X is defined and has the same dimensions as the matrix B. Thus equation (2) makes sense.

The connection between system (1) and equation (2) is that (α1, …, αn) is a solution of the system if and only if the column (α1, …, αn)ᵀ is a solution of equation (2). Indeed, this statement means that the equality A·(α1, …, αn)ᵀ = B holds; and this last equality, as an equality of matrices, is equivalent to the system of numerical equalities expressing that (α1, …, αn) is a solution of system (1).

Thus the solution of system (1) reduces to the solution of the matrix equation (2). Since the determinant d of the matrix A is non-zero, A has an inverse matrix A⁻¹. Then AX = B implies A⁻¹(AX) = A⁻¹B, hence (A⁻¹A)X = A⁻¹B, hence EX = A⁻¹B, hence X = A⁻¹B (3). Therefore, if equation (2) has a solution, it is given by formula (3). On the other hand, A(A⁻¹B) = (AA⁻¹)B = EB = B, so X = A⁻¹B is the unique solution of equation (2).

Because

    A⁻¹ = (1/d) · (Aji),

where Aij is the algebraic complement of the element aij in the determinant d, we obtain

    xj = (1/d) · (b1 A1j + b2 A2j + … + bn Anj),        (4)

for j = 1, …, n. In equality (4) the expression in parentheses is the expansion along the elements of the j-th column of the determinant dj, which is obtained from the determinant d after replacing its j-th column by the column of free terms. Therefore xj = dj/d.

Consequence. If a homogeneous system of n linear equations in n unknowns has a non-zero solution, then the determinant of this system is equal to zero.
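A sketch of Cramer's rule in code (NumPy assumed; the 3×3 system is a hypothetical example): dj is built by replacing the j-th column of A with the column of free terms.

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_j = d_j / d, valid when d = det A != 0."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("d = 0: Cramer's rule does not apply")
    x = np.empty(A.shape[0])
    for j in range(A.shape[0]):
        Aj = A.copy()
        Aj[:, j] = b                  # replace the j-th column with the free terms
        x[j] = np.linalg.det(Aj) / d  # x_j = d_j / d
    return x

A = [[2., 1., -1.],
     [1., 3., 2.],
     [1., -1., 1.]]
b = [3., 12., 1.]
x = cramer_solve(A, b)
print(x, np.allclose(np.array(A) @ x, b))   # the solution and a residual check
```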


The determinant is a number that characterizes a square matrix A and is closely related to the solution of systems of linear equations. The determinant of a matrix A is denoted by det A or |A|. Any square matrix A of order n is assigned, according to a certain rule, a number called the determinant (of the n-th order) of this matrix. Consider determinants of the second and third orders.

Let the matrix

    A = ( a11  a12 )
        ( a21  a22 ),

then its second-order determinant is calculated by the formula

    det A = a11 a22 − a12 a21.

Example. Calculate the determinant of the matrix A.

Answer: −10.

The third-order determinant is calculated by the formula

    det A = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 − a13 a22 a31 − a11 a23 a32 − a12 a21 a33.

Example. Calculate the determinant of the matrix B.

Answer: 83.

The calculation of an n-th order determinant is based on the properties of the determinant and on the following Laplace theorem: the determinant is equal to the sum of the products of the elements of any row (column) of the matrix by their algebraic complements:

    det A = ai1 Ai1 + ai2 Ai2 + … + ain Ain   (expansion along the i-th row).

The algebraic complement of the element aij equals

    Aij = (−1)^(i+j) · Mij,

where Mij is the minor of that element, obtained by deleting the i-th row and the j-th column from the determinant.

The minor Mij of the element aij of the matrix A of order n is the determinant of order (n−1) obtained from the matrix A by deleting the i-th row and the j-th column.
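The Laplace theorem and the definitions of minor and algebraic complement can be written directly as a recursive computation. The sketch below (pure Python, expansion along the first row) is meant only as an illustration, not as an efficient method.

```python
def minor(A, i, j):
    """Minor M_ij: the matrix A with row i and column j deleted."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]

def det(A):
    """Determinant by Laplace expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        cofactor = (-1) ** j * det(minor(A, 0, j))   # algebraic complement of the element in row 1, column j+1
        total += A[0][j] * cofactor
    return total

A = [[1, 2, 3],
     [0, 4, 5],
     [1, 0, 6]]
print(det(A))   # 22
```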

Example. Find algebraic complements of all elements of matrix A:

.

Answer: .

Example. Calculate the determinant of the triangular matrix:

Answer: -15.

Properties of determinants:

1. If any row (column) of the matrix consists of only zeros, then its determinant is 0.

2. If all the elements of any row (column) of the matrix are multiplied by a number, then its determinant will be multiplied by this number.

3. When transposing a matrix, its determinant will not change.

4. When two rows (columns) of a matrix are interchanged, its determinant changes sign to the opposite.

5. If a square matrix contains two identical rows (columns), then its determinant is 0.

6. If the elements of two rows (columns) of a matrix are proportional, then its determinant is 0.

7. The sum of the products of the elements of any row (column) of a matrix by the algebraic complements of the elements of another row (column) of this matrix is 0.

8. The determinant of a matrix will not change if to the elements of any row (column) we add the corresponding elements of another row (column), previously multiplied by the same number.

9. The sum of the products of arbitrary numbers by the algebraic complements of the elements of any row (column) is equal to the determinant of the matrix obtained from the given one by replacing the elements of this row (column) with these numbers.

10. The determinant of the product of two square matrices is equal to the product of their determinants.
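Several of these properties can be checked numerically. The sketch below (NumPy assumed, random 4×4 matrices) verifies properties 3, 4 and 10.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((4, 4))
B = rng.random((4, 4))

# Property 3: transposition does not change the determinant.
print(np.isclose(np.linalg.det(A), np.linalg.det(A.T)))

# Property 4: interchanging two rows changes the sign of the determinant.
A_swapped = A[[1, 0, 2, 3], :]
print(np.isclose(np.linalg.det(A_swapped), -np.linalg.det(A)))

# Property 10: det(AB) = det(A) * det(B).
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))
```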

Inverse matrix.

Definition. A matrix A⁻¹ is called the inverse of a square matrix A if multiplying this matrix by the given one, both on the right and on the left, yields the identity matrix:

    A · A⁻¹ = A⁻¹ · A = E.

It follows from the definition that only a square matrix has an inverse; in this case the inverse matrix is square and of the same order. If the determinant of a matrix is non-zero, then such a square matrix is called non-degenerate (nonsingular).

The first algorithm for calculating the inverse matrix:

1. Find the determinant of the original matrix. If the determinant is non-zero, then the original matrix is ​​nonsingular and the inverse matrix exists.

2. Find the matrix transposed to A.

3. We find the algebraic complements of the elements of the transposed matrix and compose the adjoint matrix from them.

4. Calculate the inverse matrix by the formula A⁻¹ = (1/det A) · Ã, where Ã is the adjoint matrix obtained in step 3.

5. We check the correctness of the calculation of the inverse matrix using its definition: A⁻¹ · A = A · A⁻¹ = E.

Example.

.

Answer: .

The second algorithm for calculating the inverse matrix:

The inverse matrix can be calculated based on the following elementary transformations on the rows of the matrix:

Swapping two rows;

Multiplying a matrix row by any non-zero number;

Adding to one row of a matrix another row, multiplied by any non-zero number.

In order to calculate the inverse of the matrix A, one composes the matrix (A | E), then by elementary row transformations brings the matrix A to the form of the identity matrix E; in place of the identity matrix we then obtain the matrix A⁻¹.

Example. Calculate the inverse matrix for matrix A:

.

We compose a matrix B of the form:

.

The element a11 = 1, and the first row containing this element will be called the guiding (pivot) row. Let us carry out elementary transformations as a result of which the first column is transformed into a column with a unit in the first row and zeros below it. To do this, add to the second and third rows the first row multiplied respectively by 1 and −2. As a result of these transformations we get:

.

Finally we get

.

whence we obtain the inverse matrix A⁻¹.

Matrix rank. The rank of a matrix A is the highest order of non-zero minors of this matrix. The rank of matrix A is denoted by rang(A) or r(A).

It follows from the definition that: a) the rank of a matrix does not exceed the smaller of its dimensions, i.e. r(A) ≤ min(m, n); b) r(A) = 0 if and only if all elements of the matrix A are equal to zero; c) for a square matrix of order n, r(A) = n if and only if the matrix A is non-degenerate.

Example: calculate the ranks of matrices:

.

Answer: r(A)=1. Answer: r(A)=2.

We call the following matrix transformations elementary:

1) Discarding a zero row (column).

2) Multiplication of all elements of a row (column) of a matrix by a non-zero number.

3) Changing the order of rows (columns) of the matrix.

4) Adding to each element of one row (column) the corresponding elements of another row (column), multiplied by any number.

5) Matrix transposition.

The rank of a matrix does not change under elementary matrix transformations.

Examples: Calculate matrix , where

; ;

Answer: .

Example: Calculate the matrix , where

; ; ; E is the identity matrix.

Answer: .

Example: Calculate matrix determinant

.

Answer: 160.

Example: Determine if matrix A has an inverse, and if so, calculate it:

.

Answer: .

Example: Find the rank of a matrix

.

Answer: 2.

2.4.2. Systems of linear equations.

The system of m linear equations in n variables has the form

    a11 x1 + a12 x2 + … + a1n xn = b1,
    a21 x1 + a22 x2 + … + a2n xn = b2,
    ……………
    am1 x1 + am2 x2 + … + amn xn = bm,

where aij and bi are arbitrary numbers called, respectively, the coefficients of the variables and the free terms of the equations. A solution of the system of equations is a set of n numbers (x1, x2, …, xn) such that, when they are substituted, each equation of the system turns into a true equality.

A system of equations is called consistent if it has at least one solution, and inconsistent if it has no solutions. A joint system of equations is called definite if it has a unique solution, and indefinite if it has more than one solution.

Cramer's theorem. Let Δ be the determinant of the matrix A composed of the coefficients of the variables, and Δj the determinant of the matrix obtained from the matrix A by replacing the j-th column of this matrix with the column of free terms. Then, if Δ ≠ 0, the system has a unique solution determined by the formulas xj = Δj/Δ (j = 1, 2, …, n). These equalities are called Cramer's formulas.

Example. Solve systems of equations using Cramer's formulas:

Answers: (4, 2, 1). (1, 2, 3) (1, -2, 0)

The Gauss method (the method of successive elimination of variables) consists in reducing the system of equations, by means of elementary transformations, to an equivalent system of step (or triangular) form, from which all the variables are then found successively, starting from the last one; a sketch of this procedure is given below.
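The sketch below (NumPy assumed, partial pivoting, a hypothetical 3×3 system with a unique solution) illustrates the two stages of the method: forward elimination to triangular form and back substitution starting from the last variable. It is a compact illustration rather than a full treatment of inconsistent or indefinite systems.

```python
import numpy as np

def gauss_solve(A, b):
    """Forward elimination to triangular form, then back substitution."""
    M = np.hstack([np.array(A, dtype=float), np.array(b, dtype=float).reshape(-1, 1)])
    n = M.shape[0]
    # forward elimination
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r, col]))
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("no unique solution: zero pivot encountered")
        M[[col, pivot]] = M[[pivot, col]]
        for r in range(col + 1, n):
            M[r] -= (M[r, col] / M[col, col]) * M[col]
    # back substitution, starting from the last variable
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (M[i, -1] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x

A = [[1., 1., 1.],
     [2., -1., 1.],
     [1., 2., -1.]]
b = [6., 3., 2.]
print(gauss_solve(A, b))   # [1. 2. 3.]
```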

Example: Solve systems of equations using the Gaussian method.

Answers: (1, 1, 1). (1, -1, 2, 0). (1, 1, 1).

For consistent systems of linear equations, the following statements are true:

· if the rank of the matrix of a consistent system is equal to the number of variables, i.e. r = n, then the system of equations has a unique solution;

· if the rank of the matrix of a consistent system is less than the number of variables, i.e. r < n, then the system is indefinite and has infinitely many solutions.

2.4.3. Technology for performing operations on matrices in the EXCEL environment.

Let's consider some aspects of working with the Excel spreadsheet processor, which allow us to simplify the calculations necessary to solve optimization problems. A spreadsheet processor is a software product designed to automate the processing of data in a tabular form.

Working with formulas. In spreadsheet programs, formulas are used to perform many different calculations. Using Excel, you can quickly create a formula. A formula has three main parts:

the equal sign;

the values or cell references;

the operators.

Using functions in formulas. To make it easier to enter formulas, you can use Excel functions. Functions are formulas built into Excel. To activate a particular function, use the Insert, Function commands. In the Function Wizard window that appears, a list of function categories is shown on the left. After a category is selected, a list of the functions themselves appears on the right. A function is chosen by clicking its name.

When performing operations on matrices, solving systems of linear equations, solving optimization problems, you can use the following Excel functions:

MMULT - matrix multiplication;

TRANSPOSE - matrix transposition;

MDETERM - calculation of the determinant of a matrix;

MINVERSE - calculation of the inverse matrix.

The button is on the toolbar. Functions for performing operations with matrices are in the category Mathematical.

Matrix multiplication with the MMULT function. The MMULT function returns the product of two matrices (the matrices are stored in array 1 and array 2). The result is an array with the same number of rows as array 1 and the same number of columns as array 2.

Example. Find the product of two matrices A and B in Excel (see Figure 2.9):

; .

Enter matrices A in cells A2:C3 and B in cells E2:F4.

Select the range of cells for the multiplication result - H2:I3 (2 rows by 2 columns).

Enter the formula for matrix multiplication =MMULT(A2:C3, E2:F4).

Press CTRL+SHIFT+ENTER.

Inverse matrix calculation using the MINVERSE function.

The MINVERSE function returns the inverse of the matrix stored in an array. Syntax: MINVERSE(array). Fig. 2.10 shows the solution of the example in the Excel environment.

Example. Find the matrix inverse to the given one:

.

Figure 2.9. Initial data for matrix multiplication.