Matrices

Richard Bronson, ... John T. Saccoman, in Linear Algebra (Third Edition), 2014

Important Terms

augmented matrix

block diagonal matrix

coefficient matrix

cofactor

column matrix

component

consistent equations

derived set

determinant

diagonal element

diagonal matrix

dimension

directed line segment

element

elementary matrix

elementary row operations

equivalent directed line segments

expansion by cofactor

Gaussian elimination

homogeneous equations

identity matrix

inconsistent equations

inverse

invertible matrix

linear equation

lower triangular matrix

LU decomposition

main diagonal

mathematical induction

matrix

nonhomogeneous equations

nonsingular matrix

n-tuple

order

partitioned matrix

pivot

pivotal condensation

power of a matrix

row matrix

row-reduced form

scalar

singular matrix

skew-symmetric matrix

square matrix

submatrix

symmetric matrix

transpose

trivial solution

upper triangular matrix

zero matrix

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123914200000019

A

Fred E. Szabo PhD, in The Linear Algebra Survival Guide, 2015

Augmented Matrix

The augmented matrix of a linear system is the matrix obtained by adjoining the vector of constants of the system to the matrix of the coefficients of the variables of the system.

Illustration

The augmented matrix of a linear system in three variables and two equations

system = {3x + 5y - z == 1, x - 2y + 4z == 5};

MatrixForm[augA = {{3, 5, −1, 1}, {1, −2, 4, 5}}]

$$\begin{pmatrix} 3 & 5 & -1 & 1 \\ 1 & -2 & 4 & 5 \end{pmatrix}$$

Combining the coefficient matrix and a constant vector using ArrayFlatten

MatrixForm[A = {{3, 5, −1}, {1, −2, 4}}]

$$\begin{pmatrix} 3 & 5 & -1 \\ 1 & -2 & 4 \end{pmatrix}$$

MatrixForm[v = {{1}, {5}}]

$$\begin{pmatrix} 1 \\ 5 \end{pmatrix}$$

MatrixForm[augA = ArrayFlatten[{{A, v}}]]

$$\begin{pmatrix} 3 & 5 & -1 & 1 \\ 1 & -2 & 4 & 5 \end{pmatrix}$$

Combining the coefficient matrix and a constant vector using CoefficientArrays and ArrayFlatten

system = {3x + 5y - z == 1, x - 2y + 4z == 5};

A = Normal[CoefficientArrays[system, {x, y, z}]][[2]];

v = {{1}, {5}};

MatrixForm[augA = ArrayFlatten[{{A, v}}]]

$$\begin{pmatrix} 3 & 5 & -1 & 1 \\ 1 & -2 & 4 & 5 \end{pmatrix}$$

Combining the coefficient matrix and a constant vector using Join

system = {3x + 5y - z == 1, x - 2y + 4z == 5};

A = {{3, 5, −1}, {1, −2, 4}};

v = {{1}, {5}};

MatrixForm[Join[A, v, 2]]

$$\begin{pmatrix} 3 & 5 & -1 & 1 \\ 1 & -2 & 4 & 5 \end{pmatrix}$$
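Outside Mathematica, the same construction is a one-line horizontal concatenation; a minimal Python/NumPy sketch with the values from the illustration above (NumPy is an assumption, not part of the book's toolkit):

```python
import numpy as np

# Coefficient matrix and constant vector from the illustration above
A = np.array([[3, 5, -1],
              [1, -2, 4]])
v = np.array([[1],
              [5]])

# Horizontal concatenation plays the role of ArrayFlatten/Join[..., 2]
aug_A = np.hstack([A, v])
print(aug_A)
# [[ 3  5 -1  1]
#  [ 1 -2  4  5]]
```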

The ClassroomUtilities add-on package for Mathematica contains two elegant converters for alternating between linear systems and augmented matrices.

Using the ClassroomUtilities

Needs["ClassroomUtilities`"]

eqns = {3x + y == 4, x - y == 1}; vars = {x, y};

MatrixForm[CreateAugmentedMatrix[eqns, vars]]

$$\begin{pmatrix} 3 & 1 & 4 \\ 1 & -1 & 1 \end{pmatrix}$$

Clear[x, y]

A = {{3, 1, −4}, {1, −1, −1}}; vars = {x, y};

CreateEquations[A, vars]

{3 x + y == −4, x − y == −1}

Manipulation

Linear systems and their augmented matrices

We use Manipulate and MatrixForm to explore the connection between linear systems and their augmented matrices. If we let a = 3, b = 2, and c = −5, for example, the manipulation displays the resulting linear system and its augmented matrix.
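The Manipulate source itself is not reproduced in this excerpt; the following Python sketch only imitates the idea, with a hypothetical parameterized first equation (the actual family of systems used in the book's Manipulate is not shown):

```python
import numpy as np

def augmented(a, b, c):
    # Hypothetical parameterized system {a x + b y == c, x - 2y == 4};
    # an assumption for illustration, not the book's own Manipulate code.
    return np.array([[a, b, c],
                     [1, -2, 4]])

# Re-evaluating with new parameters plays the role of moving the sliders;
# with a = 3, b = 2, c = -5, as in the text:
print(augmented(3, 2, -5))
```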

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780124095205500084

Systems of Linear Equations

Stephen Andrilli, David Hecker, in Elementary Linear Algebra (Fourth Edition), 2010

Equivalent Systems and Row Equivalence of Matrices

The first two definitions below involve related concepts. The connection between them will be shown in Theorem 2.3.

Definition

Two systems of m linear equations in n variables are equivalent if and only if they have exactly the same solution set.

For example, the systems

$$\begin{cases} 2x - y = 1 \\ 3x + y = 9 \end{cases} \qquad\text{and}\qquad \begin{cases} x + 4y = 14 \\ 5x - 2y = 4 \end{cases}$$

are equivalent, because the solution set of both is exactly {(2,3)}.

Definition

An (augmented) matrix D is row equivalent to a matrix C if and only if D is obtained from C by a finite number of row operations of types (I), (II), and (III).

For example, given any matrix, either Gaussian elimination or the Gauss-Jordan row reduction method produces a matrix that is row equivalent to the original.

Now, if D is row equivalent to C, then C is also row equivalent to D. The reason is that each row operation is reversible; that is, the effect of any row operation can be undone by performing another row operation. These reverse, or inverse, row operations are shown in Table 2.1. Notice that a row operation of type (I) is reversed by using the reciprocal 1/c, and an operation of type (II) is reversed by using the additive inverse −c. (Do you see why?)

Table 2.1. Row operations and their inverses

Type of Operation | Operation | Reverse Operation
(I) | $\langle i \rangle \leftarrow c\langle i \rangle$ | $\langle i \rangle \leftarrow \frac{1}{c}\langle i \rangle$
(II) | $\langle j \rangle \leftarrow c\langle i \rangle + \langle j \rangle$ | $\langle j \rangle \leftarrow -c\langle i \rangle + \langle j \rangle$
(III) | $\langle i \rangle \leftrightarrow \langle j \rangle$ | $\langle i \rangle \leftrightarrow \langle j \rangle$
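The reversibility recorded in Table 2.1 is easy to check numerically; a minimal Python sketch (the matrix and the constant c are arbitrary choices, not the book's):

```python
import numpy as np

def type2(M, c, i, j):
    """Type (II) row operation <j> <- c<i> + <j> (rows indexed from 0)."""
    M = M.astype(float).copy()
    M[j] += c * M[i]
    return M

C = np.array([[2.0, -1.0, 1.0],
              [3.0,  1.0, 9.0]])

D = type2(C, -1.5, 0, 1)        # apply <2> <- -1.5<1> + <2>
back = type2(D, 1.5, 0, 1)      # the inverse uses the additive inverse of c
print(np.allclose(back, C))     # True: the operation has been undone
```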

Thus, if D is obtained from C by the sequence

$$C \xrightarrow{R_1} A_1 \xrightarrow{R_2} A_2 \xrightarrow{R_3} \cdots \xrightarrow{R_n} A_n \xrightarrow{R_{n+1}} D,$$

then C can be obtained from D using the reverse operations in reverse order:

$$D \xrightarrow{R_{n+1}^{-1}} A_n \xrightarrow{R_n^{-1}} A_{n-1} \xrightarrow{R_{n-1}^{-1}} \cdots \xrightarrow{R_2^{-1}} A_1 \xrightarrow{R_1^{-1}} C$$

($R_i^{-1}$ represents the inverse operation of $R_i$, as indicated in Table 2.1.) These comments provide a sketch for the proof of the following theorem. You are asked to fill in the details of the proof in Exercise 13(a).

Theorem 2.2

If a matrix D is row equivalent to a matrix C, then C is row equivalent to D.

The next theorem asserts that if two augmented matrices are obtained from each other using only row operations, then their corresponding systems have the same solution set. This result guarantees that the Gaussian elimination and Gauss-Jordan methods provided in Sections 2.1 and 2.2 are correct because the only steps allowed in those procedures were row operations. Therefore, a final augmented matrix produced by either method represents a system equivalent to the original — that is, a system with precisely the same solution set.

Theorem 2.3

Let AX = B be a system of linear equations. If [C | D] is row equivalent to [A | B], then the system CX = D is equivalent to AX = B.

Proof

(Abridged) Let $S_A$ represent the complete solution set of the system AX = B, and let $S_C$ be the solution set of CX = D. Our goal is to prove that if [C | D] is row equivalent to [A | B], then $S_A = S_C$. It will be enough to show that [C | D] row equivalent to [A | B] implies $S_A \subseteq S_C$. This fact, together with Theorem 2.2, implies the reverse inclusion, $S_C \subseteq S_A$ (why?).

Also, it is enough to assume that [C | D] = R([A | B]) for a single row operation R because an induction argument extends the result to the case where any (finite) number of row operations are required to produce [C | D] from [A | B]. Therefore, we need only consider the effect of each type of row operation in turn. We present the proof for a type (II) operation and leave the proofs for the other types as Exercise 13(b).

Type (II) Operation: Suppose that the original system has the form

$$\begin{cases} a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n = b_1 \\ a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \cdots + a_{2n}x_n = b_2 \\ \qquad\vdots \\ a_{m1}x_1 + a_{m2}x_2 + a_{m3}x_3 + \cdots + a_{mn}x_n = b_m \end{cases}$$

and that the row operation used is $\langle j \rangle \leftarrow q\langle i \rangle + \langle j \rangle$ (where i ≠ j). When this row operation is applied to the corresponding augmented matrix, all rows except the jth row remain unchanged. The new jth equation then has the form

$$(qa_{i1} + a_{j1})x_1 + (qa_{i2} + a_{j2})x_2 + \cdots + (qa_{in} + a_{jn})x_n = qb_i + b_j.$$

We must show that any solution $(s_1, s_2, \ldots, s_n)$ of the original system is a solution of the new one. Now, since $(s_1, s_2, \ldots, s_n)$ is a solution of both the ith and jth equations in the original system, we have

$$a_{i1}s_1 + a_{i2}s_2 + \cdots + a_{in}s_n = b_i \qquad\text{and}\qquad a_{j1}s_1 + a_{j2}s_2 + \cdots + a_{jn}s_n = b_j.$$

Multiplying the first equation by q and then adding equations yields

$$(qa_{i1} + a_{j1})s_1 + (qa_{i2} + a_{j2})s_2 + \cdots + (qa_{in} + a_{jn})s_n = qb_i + b_j.$$

Hence, $(s_1, s_2, \ldots, s_n)$ is also a solution of the new jth equation. And $(s_1, s_2, \ldots, s_n)$ is certainly a solution of every other equation in the new system as well, since none of those have changed.
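The Type (II) argument can be checked on a concrete system; a minimal Python sketch, reusing the system {2x − y = 1, 3x + y = 9} from earlier in this section (the value of q is an arbitrary choice):

```python
import numpy as np

# Augmented matrix of {2x - y = 1, 3x + y = 9}; its solution is (2, 3)
aug = np.array([[2.0, -1.0, 1.0],
                [3.0,  1.0, 9.0]])
s = np.array([2.0, 3.0])

# Type (II) operation <2> <- q<1> + <2>, with q = -4 chosen arbitrarily
q = -4.0
new = aug.copy()
new[1] += q * new[0]

# (2, 3) still satisfies every equation of the transformed system
print(np.allclose(new[:, :2] @ s, new[:, 2]))   # True
```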

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123747518000184

Systems of Linear Equations

Stephen Andrilli, David Hecker, in Elementary Linear Algebra (Fifth Edition), 2016

Equivalent Systems and Row Equivalence of Matrices

The first two definitions below involve related concepts. The connection between them will be shown in Theorem 2.5.

Definition

Two systems of m linear equations in n variables are equivalent if and only if they have exactly the same solution set.

For example, the systems

$$\begin{cases} 2x - y = 1 \\ 3x + y = 9 \end{cases} \qquad\text{and}\qquad \begin{cases} x + 4y = 14 \\ 5x - 2y = 4 \end{cases}$$

are equivalent, because the solution set of both is exactly {(2,3)}.

Definition

An (augmented) matrix C is row equivalent to a matrix D if and only if D is obtained from C by a finite number of row operations of Types (I), (II), and (III).

For example, given any matrix, either Gaussian Elimination or the Gauss-Jordan Method produces a matrix that is row equivalent to the original.

Now, if C is row equivalent to D, then D is also row equivalent to C. The reason is that each row operation is reversible; that is, the effect of any row operation can be undone by performing another row operation. These reverse, or inverse, row operations are shown in Table 2.1. Notice that a row operation of Type (I) is reversed by using the reciprocal 1/c, and an operation of Type (II) is reversed by using the additive inverse −c. (Do you see why?)

Table 2.1. Row operations and their inverses

Type of Operation | Operation | Reverse Operation
(I) | $\langle i \rangle \leftarrow c\langle i \rangle$ | $\langle i \rangle \leftarrow \frac{1}{c}\langle i \rangle$
(II) | $\langle j \rangle \leftarrow c\langle i \rangle + \langle j \rangle$ | $\langle j \rangle \leftarrow -c\langle i \rangle + \langle j \rangle$
(III) | $\langle i \rangle \leftrightarrow \langle j \rangle$ | $\langle i \rangle \leftrightarrow \langle j \rangle$

Thus, if D is obtained from C by the sequence

$$C \xrightarrow{R_1} A_1 \xrightarrow{R_2} A_2 \xrightarrow{R_3} \cdots \xrightarrow{R_n} A_n \xrightarrow{R_{n+1}} D,$$

then C can be obtained from D using the reverse operations in reverse order:

$$D \xrightarrow{R_{n+1}^{-1}} A_n \xrightarrow{R_n^{-1}} A_{n-1} \xrightarrow{R_{n-1}^{-1}} \cdots \xrightarrow{R_2^{-1}} A_1 \xrightarrow{R_1^{-1}} C.$$

($R_i^{-1}$ represents the inverse operation of $R_i$, as indicated in Table 2.1.) These comments provide a sketch for the proof of part (1) of the following theorem. You are asked to fill in the details for part (1) in Exercise 13(a), and to prove part (2) in Exercise 13(b).

Theorem 2.4

Let C, D, and E be matrices of the same size.

(1)

If C is row equivalent to D, then D is row equivalent to C.

(2)

If C is row equivalent to D, and D is row equivalent to E, then C is row equivalent to E.
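Both parts of Theorem 2.4 can be illustrated numerically; a minimal Python sketch (the matrices and the particular operations are arbitrary choices, not the book's):

```python
import numpy as np

def swap(M, i, j):
    """Type (III): <i> <-> <j>."""
    M = M.copy()
    M[[i, j]] = M[[j, i]]
    return M

def scale(M, c, i):
    """Type (I): <i> <- c<i>."""
    M = M.astype(float).copy()
    M[i] *= c
    return M

C = np.array([[0.0, 1.0],
              [2.0, 4.0]])
D = swap(C, 0, 1)            # C is row equivalent to D
E = scale(D, 0.5, 0)         # D is row equivalent to E
# Part (2): chaining the two sequences shows C is row equivalent to E.
# Part (1): applying the inverse operations in reverse order recovers C.
back = swap(scale(E, 2.0, 0), 0, 1)
print(np.allclose(back, C))  # True
```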

The next theorem asserts that if two augmented matrices are obtained from each other using only row operations, then their corresponding systems have the same solution set. This result guarantees that Gaussian Elimination and the Gauss-Jordan Method (as given in Sections 2.1 and 2.2 ) are valid because the only steps allowed in those procedures were the three familiar row operations. Therefore, a final augmented matrix produced by either method represents a system equivalent to the original — that is, a system with precisely the same solution set.

Theorem 2.5

Let AX = B be a system of linear equations. If [C | D] is row equivalent to [A | B], then the system CX = D is equivalent to AX = B.

Proof

(Abridged) Let $S_A$ represent the complete solution set of the system AX = B, and let $S_C$ be the solution set of CX = D. Our goal is to prove that if [C | D] is row equivalent to [A | B], then $S_A = S_C$. It will be enough to show that [C | D] row equivalent to [A | B] implies $S_A \subseteq S_C$. This fact, together with Theorem 2.4, implies the reverse inclusion, $S_C \subseteq S_A$ (why?).

Also, it is enough to assume that [C | D] = R([A | B]) for a single row operation R because an induction argument extends the result to the case where any (finite) number of row operations are required to produce [C | D] from [A | B]. Therefore, we need only consider the effect of each type of row operation in turn. We present the proof for a Type (II) operation and leave the proofs for the other types as Exercise 13(c).

Type (II) Operation: Suppose that the original system has the form

$$\begin{cases} a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n = b_1 \\ a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \cdots + a_{2n}x_n = b_2 \\ \qquad\vdots \\ a_{m1}x_1 + a_{m2}x_2 + a_{m3}x_3 + \cdots + a_{mn}x_n = b_m \end{cases}$$

and that the row operation used is $\langle j \rangle \leftarrow q\langle i \rangle + \langle j \rangle$ (where i ≠ j). When this row operation is applied to the corresponding augmented matrix, all rows except the jth row remain unchanged. The new jth equation then has the form

$$(qa_{i1} + a_{j1})x_1 + (qa_{i2} + a_{j2})x_2 + \cdots + (qa_{in} + a_{jn})x_n = qb_i + b_j.$$

We must show that any solution $(s_1, s_2, \ldots, s_n)$ of the original system is a solution of the new one. Now, since $(s_1, s_2, \ldots, s_n)$ is a solution of both the ith and jth equations in the original system, we have

$$a_{i1}s_1 + a_{i2}s_2 + \cdots + a_{in}s_n = b_i \qquad\text{and}\qquad a_{j1}s_1 + a_{j2}s_2 + \cdots + a_{jn}s_n = b_j.$$

Multiplying the first equation by q and then adding equations yields

$$(qa_{i1} + a_{j1})s_1 + (qa_{i2} + a_{j2})s_2 + \cdots + (qa_{in} + a_{jn})s_n = qb_i + b_j.$$

Hence, $(s_1, s_2, \ldots, s_n)$ is also a solution of the new jth equation. And $(s_1, s_2, \ldots, s_n)$ is certainly a solution of every other equation in the new system as well, since none of those have changed.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128008539000025

The Solution of Simultaneous Algebraic Equations with More than Two Unknowns

Robert G. Mortimer, in Mathematics for Physical Chemistry (Fourth Edition), 2013

14.4 Gauss–Jordan Elimination

This procedure is very similar to the Gauss–Jordan method for finding the inverse of a matrix, described in Chapter 13. If the set of equations is written in the vector form

$$AX = C,$$

we write an augmented matrix consisting of the A matrix and the C column vector written side by side. For a set of four equations, the augmented matrix is

$$\left(\begin{array}{cccc|c} a_{11} & a_{12} & a_{13} & a_{14} & c_1 \\ a_{21} & a_{22} & a_{23} & a_{24} & c_2 \\ a_{31} & a_{32} & a_{33} & a_{34} & c_3 \\ a_{41} & a_{42} & a_{43} & a_{44} & c_4 \end{array}\right). \tag{14.16}$$

Row operations are carried out on this augmented matrix: a row can be multiplied by a constant, and one row can be subtracted from or added to another row. These operations will not change the roots of the set of equations, since such operations are equivalent to multiplying all terms of one equation by a constant or to taking the sum or difference of two equations. In Gauss–Jordan elimination, our aim is to transform the left part of the augmented matrix into the identity matrix, which will transform the right column into the four roots, since the set of equations will then be

$$EX = C' \tag{14.17}$$

and the solutions are contained in the column vector $C'$. The row operations are carried out exactly as in Section 13.4, except that there is only one column in the right part of the augmented matrix.
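A minimal Python sketch of the procedure just described, reducing the left part of [A | C] to the identity so that the right column becomes the roots (the 2 × 2 test system and the use of partial pivoting are assumptions for illustration; the book works the reduction by hand):

```python
import numpy as np

def gauss_jordan(A, c):
    """Reduce the augmented matrix [A | c] until the left block is the
    identity; the right column then holds the roots. A minimal sketch
    with partial pivoting and no handling of singular systems."""
    aug = np.hstack([A.astype(float), c.reshape(-1, 1).astype(float)])
    n = len(c)
    for k in range(n):
        p = k + np.argmax(np.abs(aug[k:, k]))   # choose a pivot row
        aug[[k, p]] = aug[[p, k]]
        aug[k] /= aug[k, k]                     # scale the pivot to 1
        for r in range(n):
            if r != k:
                aug[r] -= aug[r, k] * aug[k]    # clear the rest of column k
    return aug[:, -1]

A = np.array([[2.0, 1.0], [1.0, 3.0]])
c = np.array([3.0, 5.0])
print(gauss_jordan(A, c))   # [0.8 1.4]
```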

Exercise 14.7

Use Gauss–Jordan elimination to solve the set of simultaneous equations in the previous example. The same row operations that were used in Example 13.10 will be required.

There is a similar procedure known as Gauss elimination, in which row operations are carried out until the left part of the augmented matrix is in upper triangular form. The bottom row of the augmented matrix then provides the root for one variable. This is substituted into the equation represented by the next-to-bottom row, which is solved to give the root for the second variable. The two values are substituted into the next equation up, and so on.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780124158092000148

Linear Equations

William Ford, in Numerical Linear Algebra with Applications, 2015

Gaussian Elimination

To execute Gaussian elimination, create the augmented matrix and perform row operations that reduce the coefficient matrix to upper-triangular form. The solution to the upper-triangular system is the same as the solution to the original linear system. Solve the upper-triangular system by back substitution, as long as the element at position (n, n) is not zero. The unknown $x_n$ is immediately available using the last row of the augmented matrix. Using $x_n$ in the equation represented by row n − 1, we find $x_{n-1}$, and so forth, until determining $x_1$. If position (n, n) is zero, then the entire last row of the coefficient matrix is zero, and there is either no solution or infinitely many solutions.
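A minimal Python sketch of the procedure described above (partial pivoting and the test system are additions for illustration, not the book's code):

```python
import numpy as np

def gauss_solve(A, b):
    """Gaussian elimination on [A | b] followed by back substitution.
    Raises if the (n, n) position ends up zero, in which case there is
    either no solution or infinitely many."""
    aug = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for k in range(n - 1):                       # forward elimination
        p = k + np.argmax(np.abs(aug[k:, k]))    # partial pivoting
        aug[[k, p]] = aug[[p, k]]
        for r in range(k + 1, n):
            aug[r] -= (aug[r, k] / aug[k, k]) * aug[k]
    if aug[n - 1, n - 1] == 0:
        raise ValueError("zero at position (n, n): no unique solution")
    x = np.zeros(n)
    for r in range(n - 1, -1, -1):               # back substitution
        x[r] = (aug[r, -1] - aug[r, r + 1:n] @ x[r + 1:]) / aug[r, r]
    return x

A = np.array([[1.0, 2.0], [2.0, 2.0]])
b = np.array([2.0, 6.0])
print(gauss_solve(A, b))    # [ 4. -1.]
```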

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123944351000028

Simultaneous linear equations

Richard Bronson, Gabriel B. Costa, in Matrix Methods (Fourth Edition), 2021

Problems 2.3

In Problems 1 through 5, construct augmented matrices for the given systems of equations:

1.

$$x + 2y = 3, \quad 3x + y = 1.$$

2.

$$x + 2y - z = 1, \quad 2x - 3y + 2z = 4.$$

3.

$$a + 2b = 5, \quad -3a + b = 13, \quad 4a + 3b = 0.$$

4.

$$2r + 4s = 2, \quad 3r + 2s + t = 8, \quad 5r - 3s + 7t = 15.$$

5.

$$2r + 3s - 4t = 12, \quad 3r - 2s = -1, \quad 8r - s - 4t = 10.$$

6.

$$[A \mid b] = \left[\begin{array}{cc|c} 1 & 2 & 5 \\ 0 & 1 & 7 \end{array}\right]; \quad \text{variables: } x \text{ and } y.$$

7.

$$[A \mid b] = \left[\begin{array}{ccc|c} 1 & 2 & 3 & 10 \\ 0 & 1 & 5 & 3 \\ 0 & 0 & 1 & 4 \end{array}\right]; \quad \text{variables: } x, y, \text{ and } z.$$

8.

$$[A \mid b] = \left[\begin{array}{ccc|c} 1 & 3 & 12 & 40 \\ 0 & 1 & 6 & 200 \\ 0 & 0 & 1 & 25 \end{array}\right]; \quad \text{variables: } r, s, \text{ and } t.$$

9.

$$[A \mid b] = \left[\begin{array}{ccc|c} 1 & 3 & 0 & 8 \\ 0 & 1 & 4 & 2 \\ 0 & 0 & 0 & 0 \end{array}\right]; \quad \text{variables: } x, y, \text{ and } z.$$

10.

$$[A \mid b] = \left[\begin{array}{ccc|c} 1 & 7 & 2 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]; \quad \text{variables: } a, b, \text{ and } c.$$

11.

$$[A \mid b] = \left[\begin{array}{ccc|c} 1 & 1 & 0 & 1 \\ 0 & 1 & 2 & 2 \\ 0 & 0 & 0 & 1 \end{array}\right]; \quad \text{variables: } u, v, \text{ and } w.$$

12.

Solve the system of equations defined in Problem 6.

13.

Solve the system of equations defined in Problem 7.

14.

Solve the system of equations defined in Problem 8.

15.

Solve the system of equations defined in Problem 9.

16.

Solve the system of equations defined in Problem 10.

17.

Solve the system of equations defined in Problem 11.

In Problems 18 through 24, use elementary row operations to transform the given matrices into row-reduced form:

18.

$$\begin{bmatrix} 1 & 2 & 5 \\ 3 & 7 & 8 \end{bmatrix}.$$

19.

$$\begin{bmatrix} 4 & 24 & 20 \\ 2 & 11 & 8 \end{bmatrix}.$$

20.

$$\begin{bmatrix} 0 & 1 & 6 \\ 2 & 7 & 5 \end{bmatrix}.$$

21.

$$\begin{bmatrix} 1 & 2 & 3 & 4 \\ 1 & 1 & 2 & 3 \\ 2 & 3 & 0 & 0 \end{bmatrix}.$$

22.

$$\begin{bmatrix} 0 & 1 & 2 & 4 \\ 1 & 3 & 2 & 1 \\ 2 & 3 & 1 & 2 \end{bmatrix}.$$

23.

$$\begin{bmatrix} 1 & 3 & 2 & 0 \\ 1 & 4 & 3 & 1 \\ 2 & 0 & 1 & 3 \\ 2 & 1 & 4 & 2 \end{bmatrix}.$$

24.

$$\begin{bmatrix} 2 & 3 & 4 & 6 & 0 & 10 \\ 5 & 8 & 15 & 1 & 3 & 40 \\ 3 & 3 & 5 & 4 & 4 & 20 \end{bmatrix}.$$

25.

Solve Problem 1.

26.

Solve Problem 2.

27.

Solve Problem 3.

28.

Solve Problem 4.

29.

Solve Problem 5.

30.

Use Gaussian elimination to solve Problem 1 of Section 2.2.

31.

Use Gaussian elimination to solve Problem 2 of Section 2.2.

32.

Use Gaussian elimination to solve Problem 3 of Section 2.2.

33.

Use Gaussian elimination to solve Problem 4 of Section 2.2.

34.

Use Gaussian elimination to solve Problem 5 of Section 2.2.

35.

Determine a production schedule that satisfies the requirements of the manufacturer described in Problem 12 of Section 2.1.

36.

Determine a production schedule that satisfies the requirements of the manufacturer described in Problem 13 of Section 2.1.

37.

Determine a production schedule that satisfies the requirements of the manufacturer described in Problem 14 of Section 2.1.

38.

Determine feed blends that satisfy the nutritional requirements of the pet store described in Problem 15 of Section 2.1.

39.

Determine the bonus for the company described in Problem 16 of Section 2.1.

40.

Determine the number of barrels of gasoline that the producer described in Problem 17 of Section 2.1 must manufacture to break even.

41.

Determine the annual incomes of each sector of the Leontief closed model described in Problem 18 of Section 2.1.

42.

Determine the wages of each person in the Leontief closed model described in Problem 19 of Section 2.1.

43.

Determine the total sales revenue for each country of the Leontief closed model described in Problem 20 of Section 2.1.

44.

Determine the production quotas for each sector of the economy described in Problem 22 of Section 2.1.

45.

An elementary matrix is a square matrix E having the property that the product EA is the result of applying a single elementary row operation on the matrix A. Form a matrix H from the 4 × 4 identity matrix I by interchanging any two rows of I, and then compute the product HA for any 4 × 4 matrix A of your choosing. Is H an elementary matrix? How would one construct elementary matrices corresponding to operation (E1)?

46.

Form a matrix G from the 4 × 4 identity matrix I by multiplying any one row of I by the number 5 and then compute the product GA for any 4 × 4 matrix A of your choosing. Is G an elementary matrix? How would one construct elementary matrices corresponding to operation (E2)?

47.

Form a matrix F from the 4 × 4 identity matrix I by adding to one row of I five times another row of I. Use any two rows of your choosing. Compute the product FA for any 4 × 4 matrix A of your choosing. Is F an elementary matrix? How would one construct elementary matrices corresponding to operation (E3)?
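For readers who want to experiment with Problems 45 through 47 numerically, here is a minimal Python sketch; the rows chosen for H, G, and F and the test matrix A are arbitrary:

```python
import numpy as np

I = np.eye(4)

H = I[[1, 0, 2, 3]]            # interchange rows 1 and 2 of I
G = I.copy(); G[2] *= 5        # multiply row 3 of I by 5
F = I.copy(); F[3] += 5 * I[1] # add 5 times row 2 of I to row 4

A = np.arange(16.0).reshape(4, 4)    # any 4 x 4 matrix

# Each product applies the corresponding row operation directly to A
print(np.allclose(H @ A, A[[1, 0, 2, 3]]))       # True
print(np.allclose((G @ A)[2], 5 * A[2]))         # True
print(np.allclose((F @ A)[3], A[3] + 5 * A[1]))  # True
```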

48.

A solution procedure uniquely suited to matrix equations of the form x = Ax + d is iteration. A trial solution $x^{(0)}$ is proposed, and then progressively better estimates $x^{(1)}, x^{(2)}, x^{(3)}, \ldots$ for the solution are obtained iteratively from the formula

$$x^{(i+1)} = Ax^{(i)} + d.$$
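A minimal Python sketch of this iteration; the matrix A here is an arbitrary small example chosen so the iterates converge (for instance, when the spectral radius of A is below 1), not data from the book's problems:

```python
import numpy as np

def iterate(A, d, x0, steps):
    """Fixed-point iteration x(i+1) = A x(i) + d."""
    x = x0.astype(float)
    for _ in range(steps):
        x = A @ x + d
    return x

A = np.array([[0.0, 0.5],
              [0.25, 0.0]])              # illustrative contraction
d = np.array([1.0, 2.0])
print(iterate(A, d, np.zeros(2), 50))    # approximates the solution of x = Ax + d
```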

49.

Use the iteration method described in the previous problem to solve the system defined in Problem 23 of Section 2.1. In particular, find the first two iterations by hand calculations, and then use a computer to complete the iteration process.

50.

Use the iteration method described in Problem 48 to solve the system defined in Problem 24 of Section 2.1. In particular, find the first two iterations by hand calculations, and then use a computer to complete the iteration process.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128184196000022

Matrix Representation of Linear Algebraic Equations

Stormy Attaway, in Matlab (Second Edition), 2012

Gauss elimination

The Gauss elimination method consists of:

creating the augmented matrix [A|b]

applying EROs to this augmented matrix to get an upper triangular form (this is called forward elimination)

back substitution to solve

For example, for a 2 × 2 system, the augmented matrix would be:

$$\left[\begin{array}{cc|c} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \end{array}\right]$$

Then, elementary row operations are applied to get the augmented matrix into an upper triangular form (i.e., the square part of the matrix on the left is in upper triangular form):

$$\left[\begin{array}{cc|c} a'_{11} & a'_{12} & b'_1 \\ 0 & a'_{22} & b'_2 \end{array}\right]$$

So, the goal is simply to replace $a_{21}$ with 0. Here, the primes indicate that the values (may) have been changed.

Putting this back into the equation form yields

$$\begin{bmatrix} a'_{11} & a'_{12} \\ 0 & a'_{22} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} b'_1 \\ b'_2 \end{bmatrix}$$

Performing this matrix multiplication for each row results in:

$$a'_{11}x_1 + a'_{12}x_2 = b'_1$$

$$a'_{22}x_2 = b'_2$$

So, the solution is

$$x_2 = b'_2 / a'_{22}$$

$$x_1 = (b'_1 - a'_{12}x_2) / a'_{11}$$

Similarly, for a 3 × 3 system, the augmented matrix is reduced to upper triangular form:

$$\left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & b_1 \\ a_{21} & a_{22} & a_{23} & b_2 \\ a_{31} & a_{32} & a_{33} & b_3 \end{array}\right] \to \left[\begin{array}{ccc|c} a'_{11} & a'_{12} & a'_{13} & b'_1 \\ 0 & a'_{22} & a'_{23} & b'_2 \\ 0 & 0 & a'_{33} & b'_3 \end{array}\right]$$

(This will be done systematically by first getting a 0 in the $a_{21}$ position, then $a_{31}$, and finally $a_{32}$.) Then, the solution will be:

$$x_3 = b'_3 / a'_{33}$$

$$x_2 = (b'_2 - a'_{23}x_3) / a'_{22}$$

$$x_1 = (b'_1 - a'_{13}x_3 - a'_{12}x_2) / a'_{11}$$

Note that we find the last unknown, $x_3$, first, then the second unknown, and then the first unknown. This is why it is called back substitution.

As an example, consider the following 2 × 2 system of equations:

$$x_1 + 2x_2 = 2$$

$$2x_1 + 2x_2 = 6$$

As a matrix equation Ax = b, this is:

$$\begin{bmatrix} 1 & 2 \\ 2 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 2 \\ 6 \end{bmatrix}$$

The first step is to augment the coefficient matrix A with b to get an augmented matrix [A|b]:

$$\left[\begin{array}{cc|c} 1 & 2 & 2 \\ 2 & 2 & 6 \end{array}\right]$$

For forward elimination, we want to get a 0 in the $a_{21}$ position. To accomplish this, we can modify the second row in the matrix by subtracting from it 2 times the first row.

The way we would write this ERO follows:

$$\left[\begin{array}{cc|c} 1 & 2 & 2 \\ 2 & 2 & 6 \end{array}\right] \xrightarrow{\,r_2 - 2r_1 \to r_2\,} \left[\begin{array}{cc|c} 1 & 2 & 2 \\ 0 & -2 & 2 \end{array}\right]$$

Now, putting it back in matrix equation form:

$$\begin{bmatrix} 1 & 2 \\ 0 & -2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 2 \\ 2 \end{bmatrix}$$

says that the second equation is now $-2x_2 = 2$, so $x_2 = -1$. Plugging into the first equation,

$$x_1 + 2(-1) = 2, \text{ so } x_1 = 4$$

This is back substitution.
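The same elimination step and back substitution can be replayed numerically; a minimal sketch of this worked example (in Python rather than the book's MATLAB):

```python
import numpy as np

aug = np.array([[1.0, 2.0, 2.0],
                [2.0, 2.0, 6.0]])       # [A | b] from the example

aug[1] -= 2 * aug[0]                    # the ERO r2 - 2r1 -> r2
x2 = aug[1, 2] / aug[1, 1]              # -2*x2 = 2   =>  x2 = -1
x1 = (aug[0, 2] - aug[0, 1] * x2) / aug[0, 0]   # x1 + 2*x2 = 2  =>  x1 = 4
print(x1, x2)                           # 4.0 -1.0
```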

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123850812000120

Computational Error and Complexity in Science and Engineering

V. Lakshmikantham, S.K. Sen, in Mathematics in Science and Engineering, 2005

6.2.1 Evolutionary approach for error estimate in exact computation

Consider the m × (n + 1) augmented matrix $D = [d_{ij}] = [A, b]$ of the system Ax = b. Let the error introduced in each element of D be 0.05%, and let D′ be the m × (n + 1) matrix each of whose elements $d'_{ij}$ is the ordered pair $[d_{ij} - 0.0005\,d_{ij},\ d_{ij} + 0.0005\,d_{ij}]$. The evolutionary procedure is as follows.

S.1 Compute error-free the solution $x_c$ of the system represented by D.

S.2 Generate m × (n + 1) uniformly distributed random (pseudo-random) numbers, one in each interval $[d_{ij} - 0.0005\,d_{ij},\ d_{ij} + 0.0005\,d_{ij}]$. Call the resulting augmented matrix $D_r$.

S.3 Compute error-free the solution $x_r$ of the linear system represented by $D_r$.

S.4 Obtain the relative error (in the solution vector $x_r$) $e_k = \|x_c - x_r\| / \|x_c\|$.

S.5 Repeat S.2 through S.4 for k = 1 to s (= 100, say) times.

S.6 Obtain the largest $e_k$; this gives an estimate of the relative error bound for the error-free computation.
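A minimal Python sketch of S.1 through S.6, emulating error-free computation with exact rational arithmetic (the 2 × 2 test system, the use of Python's Fraction type, and s = 100 are all assumptions for illustration):

```python
import random
from fractions import Fraction
import numpy as np

def solve_exact(D):
    """Exact (error-free) solution of the square system represented by the
    augmented matrix D = [A, b], via Gauss-Jordan in rational arithmetic."""
    M = [[Fraction(x) for x in row] for row in D]
    n = len(M)
    for k in range(n):
        p = next(r for r in range(k, n) if M[r][k] != 0)   # nonzero pivot
        M[k], M[p] = M[p], M[k]
        M[k] = [v / M[k][k] for v in M[k]]
        for r in range(n):
            if r != k:
                M[r] = [a - M[r][k] * b for a, b in zip(M[r], M[k])]
    return np.array([float(row[-1]) for row in M])

D = [[2, 1, 3], [1, 3, 5]]              # [A, b] for a 2 x 2 test system
xc = solve_exact(D)                     # S.1
errors = []
for k in range(100):                    # S.5 with s = 100
    Dr = [[x * (1 + random.uniform(-0.0005, 0.0005)) for x in row]
          for row in D]                 # S.2: perturb within 0.05%
    xr = solve_exact(Dr)                # S.3
    errors.append(np.linalg.norm(xc - xr) / np.linalg.norm(xc))   # S.4
print(max(errors))                      # S.6: estimated relative error bound
```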

This probabilistic approach runs in polynomial time, $O(smn^2)$, where s is independent of m and n. We may compute the mean and standard deviation of the s errors $e_k$. These computations will reveal the degree of sensitivity (ill-conditioning) of the system. By varying the value of s and computing the corresponding standard deviation, we would get a relationship between the standard deviation and s. This relationship will give an estimate of the confidence that we can place on the error-free solution.

Confidence estimate. For different values of s (the number of random trials), say 100, 150, and 200, we compute the standard deviations of the errors. To be 100% confident is impossible. However, the difference between successive standard deviations for increasing s should usually be a monotonically decreasing (more strictly, nonincreasing) function of s. Once this happens, we compute the confidence estimate as ((1 − the difference of the last two successive standard deviations)/(last standard deviation)) × 100.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/S0076539205800578

Gaussian Elimination and the LU Decomposition

William Ford, in Numerical Linear Algebra with Applications, 2015

In Chapter 2, we presented the process of solving a nonsingular linear system Ax = b using Gaussian elimination. We formed the augmented matrix [A | b] and applied the elementary row operations

1.

Multiplying a row by a scalar,

2.

Subtracting a multiple of one row from another,

3.

Exchanging two rows

to reduce A to upper-triangular form. Following this step, back substitution computed the solution. In many applications where linear systems appear, one needs to solve Ax = b for many different vectors b. For instance, suppose a truss must be analyzed under several different loads. The matrix remains the same, but the right-hand side changes with each new load. Most of the work in Gaussian elimination is applying row operations to arrive at the upper-triangular matrix. If we need to solve several different systems with the same A, then we would like to avoid repeating the steps of Gaussian elimination on A for every different b. This can be accomplished by the LU decomposition, which in effect records the steps of Gaussian elimination.
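A minimal Python sketch of this reuse pattern, factoring once and then solving for several right-hand sides (SciPy's lu_factor/lu_solve stand in for the LU decomposition developed in the chapter; the matrix and vectors are arbitrary):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])

# Factor once: lu_factor records the elimination steps (with pivoting)
lu, piv = lu_factor(A)

# Reuse the factorization for as many right-hand sides as needed
for b in (np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([7.0, 9.0])):
    print(lu_solve((lu, piv), b))
```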

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123944351000119