Method for finding the inverse matrix

1. Find the determinant Δ = det A of the original matrix. If Δ = 0, the matrix is singular (degenerate) and there is no inverse matrix. If Δ ≠ 0, the matrix is non-singular and the inverse matrix exists.

2. Find the transposed matrix A^T.

3. Find the algebraic complements of the elements of the transposed matrix and compose the adjoint matrix Ã from them.

4. We compose the inverse matrix using the formula A⁻¹ = (1/Δ)·Ã, i.e. we divide each element of the adjoint matrix by the determinant.

5. We check the correctness of the calculation of the inverse matrix, based on its definition: A·A⁻¹ = A⁻¹·A = E. (A short computational sketch of these steps is given below.)
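For readers who want to automate these five steps, here is a minimal sketch in plain Python (no libraries); the function names determinant, minor, transpose and inverse are illustrative choices of ours, not part of any particular package.

```python
# A minimal sketch of the five-step adjoint (cofactor) method; assumes a
# small square matrix given as a list of lists.

def minor(m, i, j):
    """Submatrix of m with row i and column j deleted."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(m) if k != i]

def determinant(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * determinant(minor(m, 0, j))
               for j in range(len(m)))

def transpose(m):
    return [list(col) for col in zip(*m)]

def inverse(m):
    det = determinant(m)                      # step 1: the determinant
    if det == 0:
        raise ValueError("singular matrix: no inverse exists")
    mt = transpose(m)                         # step 2: the transposed matrix
    adj = [[(-1) ** (i + j) * determinant(minor(mt, i, j))   # step 3: complements
            for j in range(len(mt))] for i in range(len(mt))]
    return [[c / det for c in row] for row in adj]           # step 4: divide by det

A = [[2, 1], [7, 4]]                          # det A = 1
A_inv = inverse(A)
print(A_inv)                                  # [[4.0, -1.0], [-7.0, 2.0]]

# step 5: check that A * A_inv = E
n = len(A)
product = [[sum(A[i][k] * A_inv[k][j] for k in range(n)) for j in range(n)]
           for i in range(n)]
print(product)                                # [[1.0, 0.0], [0.0, 1.0]]
```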

Example. Find the inverse of the given matrix.

Solution.

1) We compute the determinant of the matrix; it is nonzero, so the inverse matrix exists.

2) Find the algebraic complements of the matrix elements and compose the adjoint matrix from them:

3) Calculate the inverse matrix by dividing the adjoint matrix by the determinant.

4) Check:

№4. Matrix rank. Linear independence of matrix rows

For solving and studying a number of mathematical and applied problems, the concept of matrix rank is important.

In a matrix A of size m×n, by deleting rows and columns one can isolate square submatrices of order k, where k ≤ min(m; n). The determinants of such submatrices are called minors of order k of the matrix A.

For example, from such a matrix one can obtain submatrices of the 1st, 2nd and 3rd order.

Definition. The rank of a matrix is the highest order of the nonzero minors of that matrix. Notation: r(A) or rank(A).

From the definition it follows:

1) The rank of a matrix does not exceed the smaller of its dimensions, i.e. r(A) ≤ min(m; n).

2) r(A) = 0 if and only if all elements of the matrix are equal to zero, i.e. A = 0.

3) For a square matrix of the n-th order, r(A) = n if and only if the matrix is non-singular.

Since direct enumeration of all possible minors of a matrix, starting with those of the highest order, is laborious, one uses elementary matrix transformations, which preserve the rank of the matrix.

Elementary matrix transformations:

1) Discarding the zero row (column).

2) Multiplying all elements of a row (column) by a nonzero number.

3) Changing the order of rows (columns) of the matrix.

4) Adding to each element of one row (column) the corresponding elements of another row (column), multiplied by any number.

5) Matrix transposition.

Definition. A matrix B obtained from a matrix A by elementary transformations is called equivalent to A; this is denoted A ~ B.

Theorem. The rank of the matrix does not change during elementary matrix transformations.

Using elementary transformations, any matrix can be reduced to so-called echelon (step) form, from which its rank is easy to determine.

A matrix is called an echelon (step) matrix if each of its nonzero rows begins with more zeros than the row above it, and all zero rows (if any) stand last.

Obviously, the rank of an echelon matrix is equal to the number of its nonzero rows, since it contains a nonzero minor whose order equals that number.

Example. Determine the rank of a matrix using elementary transformations.

After reduction to echelon form, the rank of the matrix is equal to the number of nonzero rows.
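A rank found by hand can be checked numerically. The sketch below (Python with numpy and sympy; the 3×4 matrix is our own illustration) prints both the computed rank and the reduced echelon form.

```python
# Checking a hand computation of the rank.
import numpy as np
from sympy import Matrix

A = [[1, 2, 3, 4],
     [2, 4, 6, 8],    # twice the first row, so it does not raise the rank
     [0, 1, 1, 1]]

print(np.linalg.matrix_rank(np.array(A)))  # 2
print(Matrix(A).rref()[0])                 # two nonzero rows, so the rank is 2
```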

№5. Linear independence of matrix rows

Consider a matrix A of size m×n.

Let us denote the rows of the matrix as follows: e1 = (a11, a12, ..., a1n), e2 = (a21, a22, ..., a2n), ..., em = (am1, am2, ..., amn).

Two rows are called equal if their corresponding elements are equal: ek = es if akj = asj for j = 1, 2, ..., n.

Let us introduce the operations of multiplying a row by a number and adding rows as element-by-element operations: λek = (λak1, λak2, ..., λakn), ek + es = (ak1 + as1, ak2 + as2, ..., akn + asn).

Definition. A row e is called a linear combination of the rows e1, e2, ..., es of a matrix if it is equal to the sum of the products of these rows by arbitrary real numbers: e = λ1·e1 + λ2·e2 + ... + λs·es.

Definition. The rows e1, e2, ..., em of a matrix are called linearly dependent if there exist numbers λ1, λ2, ..., λm, not all equal to zero, such that the linear combination of the rows equals the zero row:

λ1·e1 + λ2·e2 + ... + λm·em = 0, where 0 = (0, 0, ..., 0). (1.1)

Linear dependence of the rows of a matrix means that at least one row of the matrix is a linear combination of the others.

Definition. If the linear combination of rows (1.1) equals the zero row only when all the coefficients λi are zero, then the rows are called linearly independent.

Matrix rank theorem. The rank of a matrix is equal to the maximum number of its linearly independent rows (columns), through which all its other rows (columns) can be linearly expressed.
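A small illustration of the theorem: the rows of a matrix are linearly independent exactly when its rank equals the number of rows. The Python sketch below (numpy; the sample matrices are our own) checks this.

```python
# Rows are linearly independent exactly when rank equals the number of rows.
import numpy as np

dependent = np.array([[1, 2, 3],
                      [2, 4, 6]])      # second row = 2 * first row
independent = np.array([[1, 0, 1],
                        [0, 1, 1]])

for rows in (dependent, independent):
    rank = np.linalg.matrix_rank(rows)
    print(rank == rows.shape[0])       # False, then True
```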

The theorem plays a fundamental role in matrix analysis, in particular, in the study of systems of linear equations.

№6. Solving a system of m linear equations with n unknowns

Systems of linear equations are widely used in economics.

A system of m linear equations with n variables has the form:

a11·x1 + a12·x2 + ... + a1n·xn = b1,
a21·x1 + a22·x2 + ... + a2n·xn = b2,
. . . . . . . . . . . . . . . . . . .
am1·x1 + am2·x2 + ... + amn·xn = bm,

where a ij and b i (i = 1, 2, ..., m; j = 1, 2, ..., n) are arbitrary numbers, called respectively the coefficients of the variables and the free terms of the equations.

In brief notation: a i1·x1 + a i2·x2 + ... + a in·xn = b i (i = 1, 2, ..., m).

Definition. A solution of the system is a set of values x1 = k1, x2 = k2, ..., xn = kn upon substitution of which every equation of the system turns into a true equality.

1) A system of equations is called consistent if it has at least one solution, and inconsistent if it has no solutions.

2) A consistent system of equations is called determinate if it has a unique solution, and indeterminate if it has more than one solution.

3) Two systems of equations are called equivalent if they have the same set of solutions.

Methods for finding the inverse matrix. Consider a square matrix A = (a ij) of order n.

Let us denote Δ = det A.

The square matrix A is called non-singular (non-degenerate) if its determinant is nonzero, and singular (degenerate) if Δ = 0.

A square matrix B is called the inverse of a square matrix A of the same order if their product is A·B = B·A = E, where E is the identity matrix of the same order as A and B.

Theorem . In order for matrix A to have an inverse matrix, it is necessary and sufficient that its determinant be different from zero.

The inverse of matrix A is denoted A⁻¹, so that B = A⁻¹, and it is calculated by the formula

A⁻¹ = (1/Δ) ·
( A 11  A 21  ...  A n1 )
( A 12  A 22  ...  A n2 )
( ...   ...   ...  ...  )
( A 1n  A 2n  ...  A nn ),    (1)

where A ij are the algebraic complements of the elements a ij of matrix A.

Calculating A⁻¹ by formula (1) for matrices of high order is very labor-intensive, so in practice it is convenient to find A⁻¹ by the method of elementary transformations (ET). Any non-singular matrix A can be reduced to the identity matrix E by elementary transformations of its columns only (or of its rows only). If the transformations performed on matrix A are applied in the same order to the identity matrix E, the result is the inverse matrix. It is convenient to perform the ETs on A and E simultaneously, writing both matrices side by side, separated by a line. Note once again that when seeking the canonical form of a matrix one may use transformations of both rows and columns, but when seeking the inverse matrix one must use only rows or only columns throughout the process.
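The paragraph above allows either a column or a row version. Below is a minimal sketch of the row version in Python with numpy (the function name and the sample matrix are our own illustration): the block matrix (A | E) is reduced by elementary row operations until the left half becomes the identity, and the right half is then A⁻¹.

```python
# Sketch of the elementary-transformation (row) method for the inverse matrix.
import numpy as np

def inverse_by_row_ops(A):
    A = np.array(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])                 # the block matrix (A | E)
    for i in range(n):
        pivot = np.argmax(np.abs(M[i:, i])) + i   # a row with a nonzero entry in column i
        if np.isclose(M[pivot, i], 0.0):
            raise ValueError("the matrix is singular, no inverse exists")
        M[[i, pivot]] = M[[pivot, i]]             # swap rows
        M[i] /= M[i, i]                           # scale the pivot row
        for k in range(n):
            if k != i:
                M[k] -= M[k, i] * M[i]            # eliminate the other entries in column i
    return M[:, n:]                               # the right half is A^(-1)

A = [[2, 5, 7],
     [6, 3, 4],
     [5, -2, -3]]
A_inv = inverse_by_row_ops(A)
print(np.allclose(A_inv @ np.array(A), np.eye(3)))   # True
```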

Example 1. For the given matrix, find A⁻¹.

Solution. First we find the determinant of matrix A; it is nonzero, so the inverse matrix exists and can be found by formula (1), where A ij (i, j = 1, 2, 3) are the algebraic complements of the elements a ij of the original matrix.

From this we obtain A⁻¹.

Example 2. Using the method of elementary transformations, find A⁻¹ for the matrix A.

Solution. We append to the original matrix on the right an identity matrix of the same order. Using elementary transformations of the columns, we reduce the left “half” to the identity matrix, performing exactly the same transformations on the right matrix at the same time.
To do this, we swap the first and second columns. To the third column we add the first, and to the second we add the first multiplied by -2. From the first column we subtract the doubled second, and from the third we subtract the second multiplied by 6. We add the third column to the first and to the second. Finally, we multiply the last column by -1. The square matrix obtained to the right of the vertical bar is the inverse of the given matrix A. So, we obtain A⁻¹.

Matrix A⁻¹ is called the inverse matrix with respect to matrix A if A·A⁻¹ = E, where E is the identity matrix of the n-th order. An inverse matrix can exist only for square matrices.


Algorithm for finding the inverse matrix

  1. Finding the transposed matrix A^T.
  2. Calculating the algebraic complements: replace each element of the transposed matrix with its algebraic complement.
  3. Composing the inverse matrix from the algebraic complements: divide each element of the resulting matrix by the determinant of the original matrix. The resulting matrix is the inverse of the original matrix.
The next algorithm for finding the inverse matrix is similar to the previous one, except for some steps: first the algebraic complements are calculated, and then the adjoint (allied) matrix C is determined (a short code sketch follows after the list below).
  1. Determine whether the matrix is square. If not, then there is no inverse matrix for it.
  2. Calculate the determinant of the matrix A. If it is not equal to zero, we continue the solution; otherwise the inverse matrix does not exist.
  3. Calculate the algebraic complements.
  4. Fill in the adjoint (allied, mutual) matrix C.
  5. Compose the inverse matrix from the algebraic complements: divide each element of the adjoint matrix C by the determinant of the original matrix. The resulting matrix is the inverse of the original matrix.
  6. Check the result: multiply the original and the resulting matrices. The result should be the identity matrix.
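These adjoint-matrix steps can also be delegated to a computer algebra system. The sketch below uses sympy (the 3×3 matrix is our own example with determinant 1); adjugate() is sympy's name for the transposed matrix of algebraic complements, i.e. the adjoint matrix C.

```python
# The adjoint-matrix algorithm via sympy (exact arithmetic).
from sympy import Matrix, eye

A = Matrix([[1, 2, 3],
            [0, 1, 4],
            [5, 6, 0]])

d = A.det()                 # step 2: the determinant (here it equals 1)
C = A.adjugate()            # steps 3-4: the adjoint matrix C
A_inv = C / d               # step 5: divide by the determinant
print(A_inv)
print(A * A_inv == eye(3))  # step 6: the check, prints True
```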

Example No. 1. Let us write the matrix in the form:

A =
( -1   2  -2 )
(  2  -1   5 )
(  3  -2   4 )

Its determinant is det A = 10. Algebraic complements of the transposed matrix (four of the nine are shown):
∆12 = -(2·4 - (-2)·(-2)) = -4, ∆21 = -(2·4 - 5·3) = 7, ∆23 = -(-1·5 - (-2)·2) = 1, ∆32 = -(-1·(-2) - 2·3) = 4.
Dividing all the complements by the determinant, we obtain

A⁻¹ =
(  0.6  -0.4   0.8 )
(  0.7   0.2   0.1 )
( -0.1   0.4  -0.3 )
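A quick numerical check of Example No. 1 (step 6 of the algorithm above): multiplying A by the computed inverse should give the identity matrix. The sketch below uses numpy.

```python
# The check for Example No. 1.
import numpy as np

A = np.array([[-1,  2, -2],
              [ 2, -1,  5],
              [ 3, -2,  4]])
A_inv = np.array([[ 0.6, -0.4,  0.8],
                  [ 0.7,  0.2,  0.1],
                  [-0.1,  0.4, -0.3]])

print(np.allclose(A @ A_inv, np.eye(3)))   # True
```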

Another algorithm for finding the inverse matrix

Let us present another scheme for finding the inverse matrix.
  1. Find the determinant of a given square matrix A.
  2. We find the algebraic complements of all elements of the matrix A.
  3. We write the algebraic complements of the row elements into columns (transposition).
  4. We divide each element of the resulting matrix by the determinant of the matrix A.
As we can see, the transposition can be applied either at the beginning, to the original matrix, or at the end, to the resulting matrix of algebraic complements.

A special case: The inverse of the identity matrix E is the identity matrix E.

An inverse matrix is a matrix A⁻¹ whose product with the given initial matrix A yields the identity matrix E:

A·A⁻¹ = A⁻¹·A = E.

Inverse matrix method.

The inverse matrix method is one of the most common matrix methods; it is used for solving systems of linear algebraic equations (SLAEs) in cases where the number of unknowns equals the number of equations.

Let there be a system of n linear equations with n unknowns.

Such a system can be written as the matrix equation A·X = B,

where A = (a ij) is the matrix of the system, X = (x1, x2, ..., xn)^T is the column of unknowns, and B = (b1, b2, ..., bn)^T is the column of free terms.

From this matrix equation we express X by multiplying both sides of the equation on the left by A⁻¹:

A⁻¹ · A · X = A⁻¹ · B

Since A⁻¹ · A = E, we have E · X = A⁻¹ · B, or X = A⁻¹ · B.

The next step is therefore to determine the inverse matrix A⁻¹ and multiply it by the column of free terms B.

The inverse of matrix A exists only when det A ≠ 0. Therefore, when solving an SLAE by the inverse matrix method, the first step is to find det A. If det A ≠ 0, the system has exactly one solution, which can be obtained by the inverse matrix method; if det A = 0, the system cannot be solved by this method.
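A short sketch of the method in Python with numpy (the 2×2 system is our own example): first check det A, then compute X = A⁻¹·B.

```python
# Inverse matrix method for the system
#   2x + y  = 5
#   7x + 4y = 18
import numpy as np

A = np.array([[2.0, 1.0],
              [7.0, 4.0]])
B = np.array([5.0, 18.0])

if np.isclose(np.linalg.det(A), 0.0):
    print("det A = 0: the inverse matrix method is not applicable")
else:
    X = np.linalg.inv(A) @ B    # X = A^(-1) * B
    print(X)                    # [2. 1.], i.e. x = 2, y = 1
```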

Computing the inverse matrix.

The sequence of steps for computing the inverse matrix:

  1. We compute the determinant of the matrix A. If the determinant is nonzero, we continue finding the inverse; if it equals zero, the inverse matrix does not exist.
  2. Finding the transposed matrix A^T.
  3. We find the algebraic complements, after which we replace each element of the transposed matrix with its algebraic complement.
  4. We assemble the inverse matrix from the algebraic complements: we divide all the elements of the resulting matrix by the determinant of the initially given matrix. The final matrix is the required inverse of the original one.

The algorithm below for computing the inverse matrix is essentially the same as the one above; the difference is only in a few steps: first we determine the algebraic complements, and after that we compose the adjoint (allied) matrix C.

  1. Determine whether the given matrix is square. If the answer is negative, it becomes clear that it cannot have an inverse matrix.
  2. Calculate the determinant of the given matrix. If it is nonzero, we continue the solution; otherwise the inverse matrix does not exist.
  3. We calculate the algebraic complements.
  4. We compose the adjoint (allied, mutual) matrix C.
  5. We compose the inverse matrix from the algebraic complements: we divide all elements of the adjoint matrix C by the determinant of the initial matrix. The final matrix will be the required inverse of the given one.
  6. We check the work done: multiply the initial and the resulting matrices; the result should be the identity matrix.

In practice, the inverse is conveniently found using the appended matrix (A|E).

Theorem: If we append the identity matrix of the same order to a square matrix on the right and, using elementary row transformations, transform the initial matrix on the left into the identity matrix, then the matrix obtained on the right will be the inverse of the initial one.

An example of finding an inverse matrix.

Exercise. For the given matrix, find the inverse using the appended matrix (A|E).

Solution. We append to the given matrix A on the right the identity matrix of the 2nd order:

From the 1st row we subtract the 2nd:

From the 2nd row we subtract the 1st multiplied by 2:

For any non-singular matrix A there is a unique matrix A⁻¹ such that

A·A⁻¹ = A⁻¹·A = E,

where E is the identity matrix of the same order as A. The matrix A⁻¹ is called the inverse of matrix A.

Recall that in the identity matrix the main diagonal is filled with ones and all other positions are filled with zeros. An example of an identity matrix:

Finding the inverse matrix using the adjoint matrix method

The inverse matrix is defined by the formula:

A⁻¹ = (1/det A) · (A ij)^T,

where A ij are the algebraic complements of the elements a ij.

That is, to calculate the inverse matrix you need to compute the determinant of the matrix, then find the algebraic complements of all its elements and compose a new matrix from them, then transpose that matrix, and finally divide each element of the resulting matrix by the determinant of the original one.

Let's look at a few examples.

Find A⁻¹ for the matrix

A =
( 2  1 )
( 4  3 )

Solution. Let us find A⁻¹ using the adjoint matrix method. We have det A = 2·3 - 1·4 = 2. Since the algebraic complements of the elements of a 2nd-order matrix are simply the elements of the matrix itself, taken with the appropriate sign, we get

A 11 = 3, A 12 = -4, A 21 = -1, A 22 = 2. We form the matrix of complements A*:

A* =
(  3  -4 )
( -1   2 )

We transpose the matrix A*:

(  3  -1 )
( -4   2 )

We find the inverse matrix using the formula A⁻¹ = (1/det A)·(A*)^T and get:

A⁻¹ =
(  1.5  -0.5 )
( -2      1  )

Using the adjoint matrix method, find A⁻¹ if

Solution. First of all, we calculate the determinant of this matrix to verify that the inverse matrix exists. We have

Here we added to the elements of the second row the elements of the third row multiplied beforehand by (-1), and then expanded the determinant along the second row. Since the determinant of this matrix is nonzero, its inverse matrix exists. To construct the adjoint matrix, we find the algebraic complements of the elements of this matrix. We have

According to the formula, we compose the adjoint matrix A*.

We transpose the matrix A*:

Then, according to formula (1), we obtain the inverse matrix.

Finding the inverse matrix using the method of elementary transformations

In addition to the method of finding the inverse matrix that follows from formula (1) (the adjoint matrix method), there is a method called the method of elementary transformations.

Elementary matrix transformations

The following transformations are called elementary matrix transformations:

1) rearrangement of rows (columns);

2) multiplying a row (column) by a number other than zero;

3) adding to the elements of a row (column) the corresponding elements of another row (column), previously multiplied by a certain number.

To find the matrix A⁻¹, we construct a rectangular matrix B = (A|E) of size n×2n by appending to matrix A on the right the identity matrix E, separated by a dividing line:

Let's look at an example.

Using the method of elementary transformations, find A⁻¹ if

Solution. We form matrix B:

Let us denote the rows of matrix B by α1, α2, α3. Let us perform the following transformations on the rows of matrix B.