Page 746 - Elementary_Linear_Algebra_with_Applications_Anton__9_edition
Comparison of Methods for Solving Linear Systems
In practical applications it is common to encounter linear systems with thousands of equations in thousands of unknowns. Thus we shall be interested in Table 1 for large values of n. It is a fact about polynomials that for large values of the variable, a polynomial can be approximated well by its term of highest degree; that is, if

    p(n) = a_s n^s + a_(s-1) n^(s-1) + ... + a_1 n + a_0,   with a_s != 0

then p(n) ≈ a_s n^s for large n (Exercise 12). Thus, for large values of n, the operation counts in Table 1 can be approximated as shown in Table 2.
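The leading-term approximation is easy to check numerically. The sketch below uses a cubic operation-count polynomial (the multiplication count commonly quoted for Gaussian elimination, used here purely for illustration) and shows that the ratio p(n) / (leading term) approaches 1 as n grows:

```python
# Numerical check of the leading-term approximation p(n) ~ a_s * n^s:
# for a cubic operation-count polynomial, the ratio of p(n) to its
# highest-degree term tends to 1 as n grows.

def p(n):
    # example cubic count: n^3/3 + n^2 - n/3 (illustrative)
    return n**3 / 3 + n**2 - n / 3

def leading(n):
    # highest-degree term only
    return n**3 / 3

for n in [10, 100, 1000, 10000]:
    print(n, p(n) / leading(n))   # ratio shrinks toward 1
```

The lower-degree terms contribute a correction on the order of 1/n, which is why they can be safely dropped for systems with thousands of unknowns.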
Table 2  Approximate Operation Counts for an Invertible Matrix A for Large n

Method                                              Number of Additions    Number of Multiplications
Solve Ax = b by Gauss-Jordan elimination            ≈ n^3/3                ≈ n^3/3
Solve Ax = b by Gaussian elimination                ≈ n^3/3                ≈ n^3/3
Find A^(-1) by reducing [A | I] to [I | A^(-1)]     ≈ n^3                  ≈ n^3
Solve Ax = b as x = A^(-1) b                        ≈ n^3                  ≈ n^3
Find det(A) by row reduction                        ≈ n^3/3                ≈ n^3/3
Solve Ax = b by Cramer's rule                       ≈ n^4/3                ≈ n^4/3
It follows from Table 2 that for large n, the best of these methods for solving Ax = b are Gaussian elimination and Gauss-Jordan elimination. The method of multiplying by A^(-1) is much worse than these (it requires three times as many operations), and the poorest of the four methods is Cramer's rule.
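To make the comparison concrete, the sketch below evaluates the large-n approximations from Table 2 at n = 1000 (the leading terms only, so the figures are approximate):

```python
# Approximate multiplication counts from Table 2, evaluated at n = 1000,
# to show how the four solution methods compare in practice.

n = 1000
methods = {
    "Gaussian elimination":     n**3 / 3,   # ~ n^3/3
    "Gauss-Jordan elimination": n**3 / 3,   # ~ n^3/3
    "x = A^(-1) b":             float(n**3),  # ~ n^3 (3x worse)
    "Cramer's rule":            n**4 / 3,   # ~ n^4/3 (n times worse)
}
for name, count in methods.items():
    print(f"{name:26s} ~ {count:.2e} multiplications")
```

The output makes the text's ranking visible: inversion costs three times as much as elimination, and Cramer's rule costs roughly n times as much.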
Remark
We observed in the remark following Table 1 that if Gauss-Jordan elimination is performed by introducing zeros above and below the leading 1's as soon as they are obtained, then the operation count is

    n^3/2 + n^2/2 multiplications   and   n^3/2 - n/2 additions

Thus, for large n, this procedure requires approximately n^3/2 multiplications, which is 50% greater than the approximately n^3/3 multiplications required by the text method. Similarly for additions.
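The n^3/2 figure can be verified by instrumenting the procedure itself. The sketch below runs Gauss-Jordan elimination with zeros introduced above and below each leading 1 as soon as it is produced, counting multiplications and divisions as it goes; it assumes the matrix is invertible and that no row interchanges are needed (random matrices with a boosted diagonal make this safe in practice):

```python
# Instrumented Gauss-Jordan elimination on an augmented system [A | b],
# introducing zeros above AND below each leading 1 immediately, to
# verify the ~n^3/2 multiplication count quoted in the remark.
# Counting sketch only, not production code (no pivoting).

import random

def gauss_jordan_mult_count(n):
    # random augmented matrix [A | b]; +1 on the diagonal keeps pivots
    # away from zero so no row interchanges are required
    M = [[random.random() + (1.0 if i == j else 0.0) for j in range(n + 1)]
         for i in range(n)]
    mults = 0
    for k in range(n):
        piv = M[k][k]
        for j in range(k + 1, n + 1):       # normalize the pivot row
            M[k][j] /= piv
            mults += 1
        M[k][k] = 1.0
        for i in range(n):                  # eliminate above and below
            if i == k:
                continue
            f = M[i][k]
            for j in range(k + 1, n + 1):
                M[i][j] -= f * M[k][j]
                mults += 1
            M[i][k] = 0.0
    return mults

for n in [10, 50, 100]:
    print(n, gauss_jordan_mult_count(n), n**3 / 2)
```

The count works out to exactly n^2(n + 1)/2 = n^3/2 + n^2/2 multiplications, matching the remark's leading term n^3/2.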
It is reasonable to ask whether it is possible to devise other methods for solving linear systems that might require significantly fewer than the n^3/3 additions and n^3/3 multiplications needed in Gaussian elimination and Gauss-Jordan elimination. The answer is a qualified "yes." In recent years, methods have been devised that require Cn^q multiplications, where q is slightly larger than 2.3. However, these methods have little practical value because the programming is complicated, the constant C is very large, and the number of

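The starting point for all such sub-cubic methods is Strassen's observation that two 2x2 blocks can be multiplied with 7 multiplications instead of 8, which applied recursively gives roughly n^2.807 multiplications (the Cn^q methods mentioned above refine this idea further). The sketch below is a minimal pure-Python version, assuming the matrix size is a power of 2; it is meant to illustrate the recursion, not to be efficient:

```python
# Minimal sketch of Strassen's algorithm: 7 recursive block products
# instead of 8 give an O(n^log2(7)) ~ O(n^2.807) multiplication method.
# Assumes square matrices (lists of lists) whose size is a power of 2.

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def msub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # split both matrices into four h x h quadrants
    A11 = [r[:h] for r in A[:h]]; A12 = [r[h:] for r in A[:h]]
    A21 = [r[:h] for r in A[h:]]; A22 = [r[h:] for r in A[h:]]
    B11 = [r[:h] for r in B[:h]]; B12 = [r[h:] for r in B[:h]]
    B21 = [r[:h] for r in B[h:]]; B22 = [r[h:] for r in B[h:]]
    # the seven Strassen products
    M1 = strassen(madd(A11, A22), madd(B11, B22))
    M2 = strassen(madd(A21, A22), B11)
    M3 = strassen(A11, msub(B12, B22))
    M4 = strassen(A22, msub(B21, B11))
    M5 = strassen(madd(A11, A12), B22)
    M6 = strassen(msub(A21, A11), madd(B11, B12))
    M7 = strassen(msub(A12, A22), madd(B21, B22))
    # reassemble the quadrants of the product
    C11 = madd(msub(madd(M1, M4), M5), M7)
    C12 = madd(M3, M5)
    C21 = madd(M2, M4)
    C22 = madd(msub(madd(M1, M3), M2), M6)
    return ([C11[i] + C12[i] for i in range(h)] +
            [C21[i] + C22[i] for i in range(h)])
```

Even so, the extra additions and the recursion overhead mean classical elimination wins for matrices of ordinary size, which is the practical qualification made in the text.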
