you would approach Exercise 32 above if they needed to use up a pound of Type I tea to make room on the shelf for a new canister.

34. If you were to try to make 100 mL of a 60% acid solution using stock solutions at 20% and 40%, respectively, what would the triangular form of the resulting system look like? Explain.

8.1.2 Answers

1. Consistent independent; solution (6, −1/2)
2. Consistent independent; solution (−7/3, −3)
3. Consistent independent; solution (−16/7, −62/7)
4. Consistent independent; solution (49/12, −25/18)
5. Consistent dependent; solution (t, (3/2)t + 3) for all real numbers t
6. Consistent dependent; solution (6 − 4t, t) for all real numbers t
7. Inconsistent; no solution
8. Inconsistent; no solution

Because triangular form is not unique, we give only one possible answer to that part of the question. Yours may be different and still be correct.

9. Consistent independent; solution (−2, 7)
10. Consistent independent; solution (1, 2, 0)
11. Consistent dependent; solution (−t + 5, −3t + 15, t) for all real numbers t
12. Inconsistent; no solution
13. Triangular form: x + y + z = −17, y − 3z = 0, 0 = 0. Consistent dependent; solution (−4t − 17, 3t, t) for all real numbers t
14. Triangular form: x − 2y + 3z = 7, y − (11/5)z = −16/5, z = 1. Consistent independent; solution (2, −1, 1)
15. Consistent independent; solution (1, 3, −2)
16. Inconsistent; no solution
17. Consistent independent; solution (1, 3, −2)
18. Consistent independent; solution (−3, 1/2, 1)
19. Consistent independent; solution (1/3, 2/3, 1)
20. Consistent dependent; solution ((19/13)t + 51/13, −(11/13)t + 4/13, t) for all real numbers t
21. Inconsistent; no solution
22. Consistent independent; solution (4, −3, 1)
23. Consistent dependent; solution (−2t − 35/4, −t − 11/2, t) for all real numbers t
24. Triangular form: x1 + (2/3)x2 − (16/3)x3 − x4 = 25/3, x2 + 4x3 − 3x4 = 2, 0 = 0, 0 = 0. Consistent dependent; solution (8s − t + 7, −4s + 3t + 2, s, t) for all real numbers s and t
25. Triangular form: x1 − x3 = −2, x2 − (1/2)x4 = 0, x3 − (1/2)x4 = 1, x4 = 4. Consistent independent; solution (1, 2, 3, 4)
26. Triangular form: x1 − x2 − 5x3 + 3x4 = −1, x2 + 5x3 − 3x4 = 1/2, 0 = 1, 0 = 0. Inconsistent; no solution
27. If x is the free variable then the solution is (t, 3t, −t + 5) and if y is the free variable then the solution is ((1/3)t, t, −(1/3)t + 5).
28. 13 chose the basic buffet and 14 chose the deluxe buffet.
29. Mavis needs 20 pounds of $3 per pound coffee and 30 pounds of $8 per pound coffee.

30. Skippy needs to invest $6000 in the 3% account and $4000 in the 8% account.

31. 22.5 gallons of the 10% solution and 52.5 gallons of pure water.

32. 4/3 − (1/2)t pounds of Type I, 2/3 − (1/2)t pounds of Type II and t pounds of Type III, where 0 ≤ t ≤ 4/3.

8.2 Systems of Linear Equations: Augmented Matrices

In Section 8.1 we introduced Gaussian Elimination as a means of transforming a system of linear equations into triangular form with the ultimate goal of producing an equivalent system of linear equations which is easier to solve. If we take a step back and study the process, we see that all of our moves are determined entirely by the coefficients of the variables involved, and not the variables themselves. Much the same thing happened when we studied long division in Section 3.2. Just as we developed synthetic division to streamline that process, in this section we introduce a similar bookkeeping device to help us solve systems of linear equations. To that end, we define a matrix as a rectangular array of real numbers. We typically enclose matrices with square brackets, '[' and ']', and we size matrices by the number of rows and columns they have. For example, the size (sometimes called the dimension) of

[ 3   0  −1 ]
[ 2  −5  10 ]

is 2 × 3 because it has 2 rows and 3 columns. The individual numbers in a matrix are called its entries and are usually labeled with double subscripts: the first tells which row the entry is in and the second tells which column it is in. The rows are numbered from top to bottom and the columns are numbered from left to right. Matrices themselves are usually denoted by uppercase letters (A, B, C, etc.) while their entries are usually denoted by the corresponding lowercase letter. So, for instance, if we have

A = [ 3   0  −1 ]
    [ 2  −5  10 ]

then a11 = 3, a12 = 0, a13 = −1, a21 = 2, a22 = −5, and a23 = 10. We shall explore matrices as mathematical objects with their own algebra in Section 8.3 and introduce them here solely as a bookkeeping device.
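For readers who like to experiment alongside the text, the size and entry conventions above are easy to mirror in software. The short sketch below is not part of the text's development; it is our own illustration in Python with the NumPy library (an assumption on our part, nothing the text requires). Note that NumPy indexes rows and columns starting at 0 rather than 1.

```python
import numpy as np

# The 2 x 3 matrix from the discussion above.
A = np.array([[3, 0, -1],
              [2, -5, 10]])

print(A.shape)   # (2, 3): 2 rows and 3 columns
print(A[0, 0])   # a11 = 3  (NumPy uses 0-based indices, so a11 lives at A[0, 0])
print(A[1, 2])   # a23 = 10
```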
Consider the system of linear equations from number 2 in Example 8.1.2:

2x + 3y − z = 1   (E1)
10x − z = 2   (E2)
4x − 9y + 2z = 5   (E3)

We encode this system into a matrix by assigning each equation to a corresponding row. Within that row, each variable and the constant gets its own column, and to separate the variables on the left hand side of the equation from the constants on the right hand side, we use a vertical bar, |. Note that in E2, since y is not present, we record its coefficient as 0. The matrix associated with this system is

            x    y    z     c
(E1) →  [   2    3   −1  |  1 ]
(E2) →  [  10    0   −1  |  2 ]
(E3) →  [   4   −9    2  |  5 ]

This matrix is called an augmented matrix because the column containing the constants is appended to the matrix containing the coefficients.(1) To solve this system, we can use the same kind of operations on the rows of the matrix that we performed on the equations of the system. More specifically, we have the following analog of Theorem 8.1 below.

Theorem 8.2. Row Operations: Given an augmented matrix for a system of linear equations, the following row operations produce an augmented matrix which corresponds to an equivalent system of linear equations.

- Interchange any two rows.
- Replace a row with a nonzero multiple of itself.(a)
- Replace a row with itself plus a nonzero multiple of another row.(b)

(a) That is, the row obtained by multiplying each entry in the row by the same nonzero number.
(b) Where we add entries in corresponding columns.

(1) We shall study the coefficient and constant matrices separately in Section 8.3.
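As a quick companion to Theorem 8.2, the sketch below carries out one row operation of each type on the augmented matrix above. It is only an illustration in Python with NumPy (our choice, not the text's); the fractions that appear later in this section are a reminder that exact arithmetic by hand, or a computer algebra system, is often preferable to floating point.

```python
import numpy as np

# Augmented matrix for 2x + 3y - z = 1, 10x - z = 2, 4x - 9y + 2z = 5
M = np.array([[ 2.0,  3.0, -1.0,  1.0],
              [10.0,  0.0, -1.0,  2.0],
              [ 4.0, -9.0,  2.0,  5.0]])

# 1. Interchange any two rows (here R1 and R3).
M[[0, 2]] = M[[2, 0]]

# 2. Replace a row with a nonzero multiple of itself (here R1 <- (1/4)R1,
#    which gives the new first row a leading 1).
M[0] = 0.25 * M[0]

# 3. Replace a row with itself plus a nonzero multiple of another row
#    (here R2 <- -10R1 + R2, which zeros out the x-coefficient in R2).
M[1] = -10 * M[0] + M[1]

print(M)
```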
As a demonstration of the moves in Theorem 8.2, we revisit some of the steps that were used in solving the systems of linear equations in Example 8.1.2 of Section 8.1. The reader is encouraged to perform the indicated operations on the rows of the augmented matrix to see that the machinations are identical to what is done to the coefficients of the variables in the equations. We first see a demonstration of switching two rows using the first step of part 1 in Example 8.1.2.

3x − y + z = 3   (E1)
2x − 4y + 3z = 16   (E2)
x − y + z = 5   (E3)

with augmented matrix

[ 3  −1   1 |  3 ]
[ 2  −4   3 | 16 ]
[ 1  −1   1 |  5 ]

Switching E1 and E3 corresponds to switching R1 and R3:

x − y + z = 5   (E1)
2x − 4y + 3z = 16   (E2)
3x − y + z = 3   (E3)

[ 1  −1   1 |  5 ]
[ 2  −4   3 | 16 ]
[ 3  −1   1 |  3 ]

Next, we have a demonstration of replacing a row with a nonzero multiple of itself using the first step of part 3 in Example 8.1.2.

3x1 + x2 + x4 = 6   (E1)
2x1 + x2 − x3 = 4   (E2)
x2 − 3x3 − 2x4 = 0   (E3)

with augmented matrix

[ 3   1   0   1 | 6 ]
[ 2   1  −1   0 | 4 ]
[ 0   1  −3  −2 | 0 ]

Replacing E1 with (1/3)E1 corresponds to replacing R1 with (1/3)R1:

x1 + (1/3)x2 + (1/3)x4 = 2   (E1)
2x1 + x2 − x3 = 4   (E2)
x2 − 3x3 − 2x4 = 0   (E3)

[ 1  1/3   0  1/3 | 2 ]
[ 2    1  −1    0 | 4 ]
[ 0    1  −3   −2 | 0 ]

Finally, we have an example of replacing a row with itself plus a multiple of another row using the second step from part 2 in Example 8.1.2.
x + (3/2)y − (1/2)z = 1/2   (E1)
10x − z = 2   (E2)
4x − 9y + 2z = 5   (E3)

with augmented matrix

[  1  3/2  −1/2 | 1/2 ]
[ 10    0    −1 |   2 ]
[  4   −9     2 |   5 ]

Replacing E2 with −10E1 + E2 and E3 with −4E1 + E3 corresponds to replacing R2 with −10R1 + R2 and R3 with −4R1 + R3:

x + (3/2)y − (1/2)z = 1/2   (E1)
−15y + 4z = −3   (E2)
−15y + 4z = 3   (E3)

[ 1  3/2  −1/2 | 1/2 ]
[ 0  −15     4 |  −3 ]
[ 0  −15     4 |   3 ]

The matrix equivalent of 'triangular form' is row echelon form. The reader is encouraged to refer to Definition 8.3 for comparison. Note that the analog of 'leading variable' of an equation is 'leading entry' of a row. Specifically, the first nonzero entry (if it exists) in a row is called the leading entry of that row.

Definition 8.4. A matrix is said to be in row echelon form provided all of the following conditions hold:

1. The first nonzero entry in each row is 1.
2. The leading 1 of a given row must be to the right of the leading 1 of the row above it.
3. Any row of all zeros cannot be placed above a row with nonzero entries.
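Definition 8.4 is mechanical enough to translate directly into code, which can be a handy way to test your understanding of the three conditions. The function below is our own illustrative sketch in Python with NumPy (nothing in the text prescribes it), and it follows this text's convention that a leading entry must equal 1.

```python
import numpy as np

def is_row_echelon(M, tol=1e-12):
    """Check the three conditions of Definition 8.4 for the matrix M."""
    last_lead = -1          # column index of the leading 1 in the previous row
    seen_zero_row = False   # have we already passed a row of all zeros?
    for row in np.asarray(M, dtype=float):
        nonzero = np.nonzero(np.abs(row) > tol)[0]
        if nonzero.size == 0:          # a row of all zeros
            seen_zero_row = True
            continue
        if seen_zero_row:              # condition 3: zero rows go at the bottom
            return False
        lead = nonzero[0]
        if abs(row[lead] - 1) > tol:   # condition 1: the leading entry is 1
            return False
        if lead <= last_lead:          # condition 2: leading 1 moves to the right
            return False
        last_lead = lead
    return True

print(is_row_echelon(np.eye(3)))                                              # True
print(is_row_echelon([[1, 1.5, -0.5, 0.5], [0, -15, 4, -3], [0, -15, 4, 3]])) # False
```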
To solve a system of linear equations using an augmented matrix, we encode the system into an augmented matrix and apply Gaussian Elimination to the rows to get the matrix into row echelon form. We then decode the matrix and back substitute. The next example illustrates this nicely.

Example 8.2.1. Use an augmented matrix to transform the following system of linear equations into triangular form. Solve the system.

3x − y + z = 8
x + 2y − z = 4
2x + 3y − 4z = 10

Solution. We first encode the system into an augmented matrix.

[ 3  −1   1 |  8 ]
[ 1   2  −1 |  4 ]
[ 2   3  −4 | 10 ]

Thinking back to Gaussian Elimination at an equations level, our first order of business is to get x in E1 with a coefficient of 1. At the matrix level, this means getting a leading 1 in R1. This is in accordance with the first criterion in Definition 8.4. To that end, we interchange R1 and R2.

[ 1   2  −1 |  4 ]
[ 3  −1   1 |  8 ]
[ 2   3  −4 | 10 ]

Our next step is to eliminate the x's from E2 and E3. From a matrix standpoint, this means we need 0's below the leading 1 in R1. This guarantees the leading 1 in R2 will be to the right of the leading 1 in R1 in accordance with the second requirement of Definition 8.4. Replacing R2 with −3R1 + R2 and R3 with −2R1 + R3 gives

[ 1   2  −1 |  4 ]
[ 0  −7   4 | −4 ]
[ 0  −1  −2 |  2 ]

Now we repeat the above process for the variable y, which means we need to get the leading entry in R2 to be 1. Replacing R2 with −(1/7)R2 gives

[ 1   2    −1 |   4 ]
[ 0   1  −4/7 | 4/7 ]
[ 0  −1    −2 |   2 ]

To guarantee the leading 1 in R3 is to the right of the leading 1 in R2, we get a 0 in the second column of R3. Replacing R3 with R2 + R3 gives

[ 1   2     −1 |    4 ]
[ 0   1   −4/7 |  4/7 ]
[ 0   0  −18/7 | 18/7 ]

Finally, we get the leading entry in R3 to be 1 by replacing R3 with −(7/18)R3.

[ 1   2    −1 |   4 ]
[ 0   1  −4/7 | 4/7 ]
[ 0   0     1 |  −1 ]

Decoding from the matrix gives a system in triangular form

x + 2y − z = 4
y − (4/7)z = 4/7
z = −1

We get z = −1, y = (4/7)(−1) + 4/7 = 0 and x = −2y + z + 4 = −2(0) + (−1) + 4 = 3 for a final answer of (3, 0, −1). We leave it to the reader to check.
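If you would like to double check the arithmetic in Example 8.2.1, one hedged option (ours, not the text's) is to hand the original system to a linear algebra routine and compare. The snippet below uses NumPy's np.linalg.solve, which performs its own elimination internally, to confirm the solution (3, 0, −1).

```python
import numpy as np

# Coefficient matrix and constant column for Example 8.2.1
A = np.array([[3.0, -1.0,  1.0],
              [1.0,  2.0, -1.0],
              [2.0,  3.0, -4.0]])
b = np.array([8.0, 4.0, 10.0])

x = np.linalg.solve(A, b)
print(x)                      # approximately [ 3.  0. -1.]
print(np.allclose(A @ x, b))  # True: the solution checks out
```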
As part of Gaussian Elimination, we used row operations to obtain 0's beneath each leading 1 to put the matrix into row echelon form. If we also require that 0's are the only numbers above a leading 1, we have what is known as the reduced row echelon form of the matrix.

Definition 8.5. A matrix is said to be in reduced row echelon form provided both of the following conditions hold:

1. The matrix is in row echelon form.
2. The leading 1s are the only nonzero entries in their respective columns.

Of what significance is the reduced row echelon form of a matrix? To illustrate, let's take the row echelon form from Example 8.2.1,

[ 1   2    −1 |   4 ]
[ 0   1  −4/7 | 4/7 ]
[ 0   0     1 |  −1 ]

and perform the necessary steps to put it into reduced row echelon form. We start by using the leading 1 in R3 to zero out the numbers in the rows above it, replacing R1 with R3 + R1 and R2 with (4/7)R3 + R2:

[ 1   2   0 |  3 ]
[ 0   1   0 |  0 ]
[ 0   0   1 | −1 ]

Finally, we take care of the 2 in R1 above the leading 1 in R2 by replacing R1 with −2R2 + R1:

[ 1   0   0 |  3 ]
[ 0   1   0 |  0 ]
[ 0   0   1 | −1 ]

To our surprise and delight, when we decode this matrix, we obtain the solution instantly without having to deal with any back-substitution at all:

x = 3
y = 0
z = −1

Note that in the previous discussion, we could have started with R2 and used it to get a zero above its leading 1 and then done the same for the leading 1 in R3. By starting with R3, however, we get more zeros first, and the more zeros there are, the faster the remaining calculations will be.(2) It is also worth noting that while a matrix has several(3) row echelon forms, it has only one reduced row echelon form. The process by which we have put a matrix into reduced row echelon form is called Gauss-Jordan Elimination.

(2) Carl also finds starting with R3 to be more symmetric, in a purely poetic way.
(3) Infinite, in fact.
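Since Gauss-Jordan Elimination is an algorithm, it is natural to write it as a short program. The function below is a bare-bones sketch of the idea in Python with NumPy, using partial pivoting for numerical safety; it is our own illustration and is not meant as production code or as the text's prescribed procedure.

```python
import numpy as np

def gauss_jordan(M, tol=1e-12):
    """Return the reduced row echelon form of the matrix M (floating point)."""
    M = np.array(M, dtype=float)
    rows, cols = M.shape
    r = 0                                        # index of the next pivot row
    for c in range(cols):
        if r == rows:
            break
        pivot = r + np.argmax(np.abs(M[r:, c]))  # largest entry in column c
        if abs(M[pivot, c]) < tol:
            continue                             # no pivot in this column
        M[[r, pivot]] = M[[pivot, r]]            # interchange two rows
        M[r] = M[r] / M[r, c]                    # scale to get a leading 1
        for i in range(rows):
            if i != r:                           # zero out the rest of the column
                M[i] = M[i] - M[i, c] * M[r]
        r += 1
    return M

# The augmented matrix from Example 8.2.1 reduces to [I | (3, 0, -1)].
aug = [[3, -1, 1, 8], [1, 2, -1, 4], [2, 3, -4, 10]]
print(gauss_jordan(aug))
```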
Example 8.2.2. Solve the following system using an augmented matrix. Use Gauss-Jordan Elimination to put the augmented matrix into reduced row echelon form.

x2 − 3x1 + x4 = 2
2x1 + 4x3 = 5
4x2 − x4 = 3

Solution. We first encode the system into a matrix. (Pay attention to the subscripts!)

[ −3   1   0   1 | 2 ]
[  2   0   4   0 | 5 ]
[  0   4   0  −1 | 3 ]

Next, we get a leading 1 in the first column of R1 by replacing R1 with −(1/3)R1. We then eliminate the nonzero entry below our leading 1 by replacing R2 with −2R1 + R2, get a leading 1 in R2 by replacing R2 with (3/2)R2, zero out the entry below the leading 1 in R2 by replacing R3 with −4R2 + R3, and, finally, get a leading 1 in R3 by replacing R3 with −(1/24)R3. The matrix is now in row echelon form.

[ 1  −1/3   0  −1/3 |  −2/3 ]
[ 0     1   6     1 |  19/2 ]
[ 0     0   1  5/24 | 35/24 ]

To get the reduced row echelon form, we start with the last leading 1 we produced and work to get 0's above it, replacing R2 with −6R3 + R2. Lastly, we get a 0 above the leading 1 of R2 by replacing R1 with (1/3)R2 + R1.

[ 1   0   0  −5/12 | −5/12 ]
[ 0   1   0   −1/4 |   3/4 ]
[ 0   0   1   5/24 | 35/24 ]

At last, we decode to get

x1 − (5/12)x4 = −5/12
x2 − (1/4)x4 = 3/4
x3 + (5/24)x4 = 35/24

We have that x4 is free and we assign it the parameter t. We obtain x3 = −(5/24)t + 35/24, x2 = (1/4)t + 3/4 and x1 = (5/12)t − 5/12. Our solution is ((5/12)t − 5/12, (1/4)t + 3/4, −(5/24)t + 35/24, t) for −∞ < t < ∞, and we leave it to the reader to check.
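Because Example 8.2.2 involves a free variable, a floating point solver is less convenient than exact row reduction. One way to check the reduced row echelon form above, assuming you have the SymPy library available (our assumption, not the text's), is shown below.

```python
from sympy import Matrix

# Augmented matrix for Example 8.2.2:
# -3x1 + x2 + x4 = 2, 2x1 + 4x3 = 5, 4x2 - x4 = 3
aug = Matrix([[-3, 1, 0,  1, 2],
              [ 2, 0, 4,  0, 5],
              [ 0, 4, 0, -1, 3]])

rref_matrix, pivot_columns = aug.rref()
print(rref_matrix)     # rows decode to x1 - (5/12)x4 = -5/12, etc.
print(pivot_columns)   # (0, 1, 2): x1, x2, x3 are pivot variables, x4 is free
```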
Like all good algorithms, putting a matrix in row echelon or reduced row echelon form can easily be programmed into a calculator, and, doubtless, your graphing calculator has such a feature. We use this in our next example.

Example 8.2.3. Find the quadratic function passing through the points (−1, 3), (2, 4), (5, −2).

Solution. According to Definition 2.5, a quadratic function has the form f(x) = ax^2 + bx + c where a ≠ 0. Our goal is to find a, b and c so that the three given points are on the graph of f. If (−1, 3) is on the graph of f, then f(−1) = 3, or a(−1)^2 + b(−1) + c = 3, which reduces to a − b + c = 3, an honest-to-goodness linear equation with the variables a, b and c. Since the point (2, 4) is also on the graph of f, then f(2) = 4, which gives us the equation 4a + 2b + c = 4. Lastly, the point (5, −2) being on the graph of f gives us 25a + 5b + c = −2. Putting these together, we obtain a system of three linear equations. Encoding this into an augmented matrix produces

a − b + c = 3
4a + 2b + c = 4
25a + 5b + c = −2

[  1  −1   1 |  3 ]
[  4   2   1 |  4 ]
[ 25   5   1 | −2 ]

Using a calculator,(4) we find a = −7/18, b = 13/18 and c = 37/9. Hence, the one and only quadratic which fits the bill is f(x) = −(7/18)x^2 + (13/18)x + 37/9. To verify this analytically, we see that f(−1) = 3, f(2) = 4, and f(5) = −2. We can use the calculator to check our solution as well by plotting the three data points and the function f.

[Graph: f(x) = −(7/18)x^2 + (13/18)x + 37/9 with the points (−1, 3), (2, 4) and (5, −2).]

(4) We've tortured you enough already with fractions in this exposition!

8.2.1 Exercises

In Exercises 1 - 6, state whether the
given matrix is in reduced row echelon form, row echelon form only or in neither of those forms. 1 0 3 0 1 3 1.   4. 5. 3 −1 2 −4 1 −1   1 3 3 16. 6   In Exercises 7 - 12, the following matrices are in reduced row echelon form. Determine the solution of the corresponding system of linear equations or state that the system is inconsistent. 1 0 −2 7 0 1 7.   10. 11. 1 0 0 −3 20 0 1 0 0 0 1 19   1 0 0 0 0 −8 1 0 0 1 7 4 −. 124 0 0 9 −3 20 0     In Exercises 13 - 26, solve the following systems of linear equations using the techniques discussed in this section. Compare and contrast these techniques with those you used to solve the systems in the Exercises in Section 8.1. 13. −5x + y = 17 x + y = 5          15. 17. 19. 4x − y + z = 5 2y + 6z = 30 x + z = 5 3x − 2y + z = −5 x + 3y − z = 12 0 x + y + 2z = x − y + z = −4 −3x + 2y + 4z = −5 x − 5y + 2z = −18             14. 16. 18. 20. x + y + z = 3 2x − y + z = 0 −3x + 5y + 7z = 7 x − 2y + 3z = 7 −3x + y + 2z = −5 3 2x + 2y + z = 2x − y + z = −1 1 4 4x + 3y + 5z = 5y + 3z = 2x − 4y + z = −7 x − 2y + 2z = −2 3 −x + 4y − 2z = 8.2 Systems
of Linear Equations: Augmented Matrices 575          21. 23. 25. 2x − y + z = 1 2x + 2y − z = 1 3x + 6y + 4z = 9 x + y + z = 4 2x − 4y − z = −1 x − y = 2 2x − 3y + z = −1 4x − 4y + 4z = −13 6x − 5y + 7z = −25 22. 24. 26.          x − 3y − 4z = 3 3x + 4y − z = 13 2x − 19y − 19z = 2 x − y + z = 8 3x + 3y − 9z = −6 7x − 2y + 5z = 39 x1 − x3 = −2 2x2 − x4 = 0 x1 − 2x2 + x3 = 0 −x3 + x4 = 1 27. It’s time for another meal at our local buffet. This time, 22 diners (5 of whom were children) feasted for $162.25, before taxes. If the kids buffet is $4.50, the basic buffet is $7.50, and the deluxe buffet (with crab legs) is $9.25, find out how many diners chose the deluxe buffet. 28. Carl wants to make a party mix consisting of almonds (which cost $7 per pound), cashews (which cost $5 per pound), and peanuts (which cost $2 per pound.) If he wants to make a 10 pound mix with a budget of $35, what are the possible combinations almonds, cashews, and peanuts? (You may find it helpful to review Example 8.1.3 in Section 8.1.) 29. Find the quadratic function passing through the points (−2, 1), (1, 4), (3, −2) 30. At 9 PM, the temperature was 60◦F; at midnight, the temperature was 50
◦F; and at 6 AM, the temperature was 70◦F. Use the technique in Example 8.2.3 to fit a quadratic function to these data with the temperature, T, measured in degrees Fahrenheit, as the dependent variable, and the number of hours after 9 PM, t, measured in hours, as the independent variable. What was the coldest temperature of the night? When did it occur? 31. The price for admission into the Stitz-Zeager Sasquatch Museum and Research Station is $15 for adults and $8 for kids 13 years old and younger. When the Zahlenreich family visits the museum their bill is $38 and when the Nullsatz family visits their bill is $39. One day both families went together and took an adult babysitter along to watch the kids and the total admission charge was $92. Later that summer, the adults from both families went without the kids and the bill was $45. Is that enough information to determine how many adults and children are in each family? If not, state whether the resulting system is inconsistent or consistent dependent. In the latter case, give at least two plausible solutions. 32. Use the technique in Example 8.2.3 to find the line between the points (−3, 4) and (6, 1). How does your answer compare to the slope-intercept form of the line in Equation 2.3? 33. With the help of your classmates, find at least two different row echelon forms for the matrix 1 2 3 4 12 8 576 Systems of Equations and Matrices 8.2.2 Answers 1. Reduced row echelon form 2. Neither 3. Row echelon form only 4. Reduced row echelon form 5. Reduced row echelon form 6. Row echelon form only 7. (−2, 7) 9. (−3t + 4, −6t − 6, 2, t) for all real numbers t 8. (−3, 20, 19) 10. Inconsistent 11. (8s − t + 7, −4s + 3t + 2, s, t) for all real numbers s and t 12. (−9t − 3, 4t + 20, t) for all real numbers t 13. (−2, 7) 15. (−t + 5, −3t + 15, t) for all real
numbers t

14. (1, 2, 0)
16. (2, −1, 1)
17. (1, 3, −2)
18. Inconsistent
19. (1, 3, −2)
20. (−3, 1/2, 1)
21. (1/3, 2/3, 1)
22. ((19/13)t + 51/13, −(11/13)t + 4/13, t) for all real numbers t
23. Inconsistent
24. (4, −3, 1)
25. (−2t − 35/4, −t − 11/2, t) for all real numbers t
26. (1, 2, 3, 4)
27. This time, 7 diners chose the deluxe buffet.
28. If t represents the amount (in pounds) of peanuts, then we need 1.5t − 7.5 pounds of almonds and 17.5 − 2.5t pounds of cashews. Since we can't have a negative amount of nuts, 5 ≤ t ≤ 7.
29. f(x) = −(4/5)x^2 + (1/5)x + 23/5
30. T(t) = (20/27)t^2 − (50/9)t + 60. Lowest temperature of the evening: 595/12 ≈ 49.58°F at 12:45 AM.
31. Let x1 and x2 be the numbers of adults and children, respectively, in the Zahlenreich family and let x3 and x4 be the numbers of adults and children, respectively, in the Nullsatz family. The system of equations determined by the given information is

15x1 + 8x2 = 38
15x3 + 8x4 = 39
15x1 + 8x2 + 15x3 + 8x4 = 77
15x1 + 15x3 = 45

We subtracted the cost of the babysitter in E3 so the constant is 77, not 92. This system is consistent dependent and its solution is ((8/15)t + 2/5, −t + 4, −(8/15)t + 13/5, t). Our variables represent numbers of adults and children so they must be whole numbers. Running through the values t = 0, 1, 2, 3, 4 yields only one solution where all four variables are whole numbers; t = 3 gives us (2, 1, 1, 3). Thus there are 2 adults and 1 child in the Zahlenreichs and 1 adult and 3 kids in the Nullsatzs.
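Answers 29 and 30 can be reproduced numerically in the same spirit as Example 8.2.3. The sketch below (our own Python/NumPy illustration, not part of the answer key) solves the 3 × 3 systems for the two quadratics and locates the vertex of T.

```python
import numpy as np

def fit_quadratic(points):
    """Solve for (a, b, c) in y = a*x**2 + b*x + c through three points."""
    xs, ys = zip(*points)
    V = np.array([[x**2, x, 1] for x in xs], dtype=float)  # Vandermonde-style matrix
    return np.linalg.solve(V, np.array(ys, dtype=float))

# Exercise 29: points (-2, 1), (1, 4), (3, -2)  ->  about (-0.8, 0.2, 4.6)
print(fit_quadratic([(-2, 1), (1, 4), (3, -2)]))

# Exercise 30: t hours after 9 PM, temperatures 60, 50, 70 at t = 0, 3, 9
a, b, c = fit_quadratic([(0, 60), (3, 50), (9, 70)])
t_min = -b / (2 * a)
print(t_min, a * t_min**2 + b * t_min + c)  # about 3.75 hours (12:45 AM), 49.58 degrees
```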
8.3 Matrix Arithmetic

In Section 8.2, we used a special class of matrices, the augmented matrices, to assist us in solving systems of linear equations. In this section, we study matrices as mathematical objects of their own accord, temporarily divorced from systems of linear equations. To do so conveniently requires some more notation. When we write A = [aij]m×n, we mean A is an m by n matrix(1) and aij is the entry found in the ith row and jth column. Schematically, we have

         j counts columns from left to right →
       [ a11  a12  · · ·  a1n ]
A  =   [ a21  a22  · · ·  a2n ]     i counts rows from top to bottom ↓
       [  :    :           :  ]
       [ am1  am2  · · ·  amn ]

With this new notation we can define what it means for two matrices to be equal.

Definition 8.6. Matrix Equality: Two matrices are said to be equal if they are the same size and their corresponding entries are equal. More specifically, if A = [aij]m×n and B = [bij]p×r, we write A = B provided

1. m = p and n = r
2. aij = bij for all 1 ≤ i ≤ m and all 1 ≤ j ≤ n.

Essentially, two matrices are equal if they are the same size and they have the same numbers in the same spots.(2) For example, the two 2 × 3 matrices below are, despite appearances, equal.

[  0   −2    9 ]     [ ln(1)      ∛(−8)     e^(2 ln(3)) ]
[ 25  117   −3 ]  =  [ 125^(2/3)  3^2 · 13  log(0.001)  ]

Now that we have an agreed upon understanding of what it means for two matrices to equal each other, we may begin defining arithmetic operations on matrices. Our first operation is addition.

Definition 8.7. Matrix Addition: Given two matrices of the same size, the matrix obtained by adding the corresponding entries of the two matrices is called the sum of the two matrices. More specifically, if A = [aij]m×n and B = [bij]m×n, we define

A + B = [aij]m×n + [bij]m×n = [aij + bij]m×n

(1) Recall that this means A has m rows and n columns.
(2) Critics may well ask: Why not leave it at that? Why the need for all the notation in Definition 8.6? It is the authors' attempt to expose you to the wonderful world of mathematical precision.
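If you want to see Definition 8.7 (and the commutative property discussed just below) in action numerically, the following short sketch in Python with NumPy does entrywise addition; it is purely illustrative and not something the text relies on.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, -1],
              [0,  2]])

print(A + B)                         # [[6, 1], [3, 6]]: entrywise sums
print(np.array_equal(A + B, B + A))  # True, as the commutative property promises
```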
As an example, consider the sum below.

[ 2   3 ]   [ −1   4 ]   [ 2 + (−1)     3 + 4     ]   [  1   7 ]
[ 4  −1 ] + [ −5  −3 ] = [ 4 + (−5)  (−1) + (−3)  ] = [ −1  −4 ]
[ 0  −7 ]   [  8   1 ]   [ 0 + 8     (−7) + 1     ]   [  8  −6 ]

It is worth the reader's time to think what would have happened had we reversed the order of the summands above. As we would expect, we arrive at the same answer. In general, A + B = B + A for matrices A and B, provided they are the same size so that the sum is defined in the first place. This is the commutative property of matrix addition. To see why this is true in general, we appeal to the definition of matrix addition. Given A = [aij]m×n and B = [bij]m×n,

A + B = [aij]m×n + [bij]m×n = [aij + bij]m×n = [bij + aij]m×n = [bij]m×n + [aij]m×n = B + A

where the second equality is the definition of A + B, the third equality holds by the commutative law of real number addition, and the fourth equality is the definition of B + A. In other words, matrix addition is commutative because real number addition is. A similar
argument shows the associative property of matrix addition also holds, inherited in turn from the associative law of real number addition. Specifically, for matrices A, B, and C of the same size, (A + B) + C = A + (B + C). In other words, when adding more than two matrices, it doesn’t matter how they are grouped. This means that we can write A + B + C without parentheses and there is no ambiguity as to what this means.3 These properties and more are summarized in the following theorem. Theorem 8.3. Properties of Matrix Addition Commutative Property: For all m × n matrices, A + B = B + A Associative Property: For all m × n matrices, (A + B) + C = A + (B + C) Identity Property: If 0m×n is the m × n matrix whose entries are all 0, then 0m×n is called the m × n additive identity and for all m × n matrices A A + 0m×n = 0m×n + A = A Inverse Property: For every given m × n matrix A, there is a unique matrix denoted −A called the additive inverse of A such that A + (−A) = (−A) + A = 0m×n The identity property is easily verified by resorting to the definition of matrix addition; just as the number 0 is the additive identity for real numbers, the matrix comprised of all 0’s does the same job for matrices. To establish the inverse property, given a matrix A = [aij]m×n, we are looking for a matrix B = [bij]m×n so that A + B = 0m×n. By the definition of matrix addition, we must have that aij + bij = 0 for all i and j. Solving, we get bij = −aij. Hence, given a matrix A, its additive inverse, which we call −A, does exist and is unique and, moreover, is given by the formula: −A = [−aij]m×n. The long and short of this is: to get the additive inverse of a matrix, 3A technical detail which is sadly lost on most readers. 580 Systems of Equations and Matrices take additive inverses of
each of its entries. With the concept of additive inverse well in hand, we may now discuss what is meant by subtracting matrices. You may remember from arithmetic that a − b = a + (−b); that is, subtraction is defined as ‘adding the opposite (inverse).’ We extend this concept to matrices. For two matrices A and B of the same size, we define A − B = A + (−B). At the level of entries, this amounts to A − B = A + (−B) = [aij]m×n + [−bij]m×n = [aij + (−bij)]m×n = [aij − bij]m×n Thus to subtract two matrices of equal size, we subtract their corresponding entries. Surprised? Our next task is to define what it means to multiply a matrix by a real number. Thinking back to arithmetic, you may recall that multiplication, at least by a natural number, can be thought of as ‘rapid addition.’ For example. We know from algebra4 that 3x = x + x + x, so it seems natural that given a matrix A, we define 3A = A + A + A. If A = [aij]m×n, we have 3A = A + A + A = [aij]m×n + [aij]m×n + [aij]m×n = [aij + aij + aij]m×n = [3aij]m×n In other words, multiplying the matrix in this fashion by 3 is the same as multiplying each entry by 3. This leads us to the following definition. Definition 8.8. Scalara Multiplication: We define the product of a real number and a matrix to be the matrix obtained by multiplying each of its entries by said real number. More specifically, if k is a real number and A = [aij]m×n, we define kA = k [aij]m×n = [kaij]m×n aThe word ‘scalar’ here refers to real numbers. ‘Scalar multiplication’ in this context means we are multiplying a matrix by a
real number (a scalar). One may well wonder why the word ‘scalar’ is used for ‘real number.’ It has everything to do with ‘scaling’ factors.5 A point P (x, y) in the plane can be represented by its position matrix, P : (x, y) ↔ P = x y Suppose we take the point (−2, 1) and multiply its position matrix by 3. We have 3P = 3 −2 1 = 3(−2) 3(1) = −6 3 which corresponds to the point (−6, 3). We can imagine taking (−2, 1) to (−6, 3) in this fashion as a dilation by a factor of 3 in both the horizontal and vertical directions. Doing this to all points (x, y) in the plane, therefore, has the effect of magnifying (scaling) the plane by a factor of 3. 4The Distributive Property, in particular. 5See Section 1.7. 8.3 Matrix Arithmetic 581 As did matrix addition, scalar multiplication inherits many properties from real number arithmetic. Below we summarize these properties. Theorem 8.4. Properties of Scalar Multiplication Associative Property: For every m × n matrix A and scalars k and r, (kr)A = k(rA). Identity Property: For all m × n matrices A, 1A = A. Additive Inverse Property: For all m × n matrices A, −A = (−1)A. Distributive Property of Scalar Multiplication over Scalar Addition: For every m × n matrix A and scalars k and r, (k + r)A = kA + rA Distributive Property of Scalar Multiplication over Matrix Addition: For all m × n matrices A and B scalars k, k(A + B) = kA + kB Zero Product Property: If A is an m × n matrix and k is a scalar, then kA = 0m×n if and only if k = 0 or A = 0m×n As with the other results in this section, Theorem 8.4 can be proved using the definitions of scalar multiplication and matrix addition. For example, to prove that k(A + B) = kA + kB for a scalar k and m
× n matrices A and B, we start by adding A and B, then multiplying by k and seeing how that compares with the sum of kA and kB. k(A + B) = k [aij]m×n + [bij]m×n = k [aij + bij]m×n = [k (aij + bij)]m×n = [kaij + kbij]m×n As for kA + kB, we have kA + kB = k [aij]m×n + k [bij]m×n = [kaij]m×n + [kbij]m×n = [kaij + kbij]m×n which establishes the property. The remaining properties are left to the reader. The properties in Theorems 8.3 and 8.4 establish an algebraic system that lets us treat matrices and scalars more or less as we would real numbers and variables, as the next example illustrates. Example 8.3.1. Solve for the matrix A: 3A − 2 −1 5 3 using the definitions and properties of matrix arithmetic. + 5A = −4 2 6 −2 + 1 3 9 12 −3 39 582 Solution. Systems of Equations and Matrices 3A − 3A + − 3A + (−1) 2 −1 3 5 2 −1 5 3 2 −1 3 5 + 5A = + 5A + 5A = = = 2 −1 3 5 2 −1 5 3 (−1)(−1) (−1)(5) −2 3A + 1 −3 −5 + (−1)(5A) + (−1)(5A) = + ((−1)(5))A = + (−5)A = 3A + (−1) 3A + (−1) 3A + (−1)(2) (−1)(3) 3A + (−5)A + −2 1 −3 −5 (3 + (−5))A + −2 1 −3 −5 + − −2 1 −3 −5 = = (−2)A + 02×2 = (−2)A = (−2)A = −4 2 6 −2 −4 2 6 −2 −4 2 6 −2 1 3 + + + 9 12 −3 39 1 (9) 1 3 3 (−3) 1 1 3 3 4 3 −1 13
(12) (39) −1 6 5 11 −1 6 5 11 −1 6 5 11 −1 6 5 11 −1 6 5 11 −1 6 5 11 + − −2 1 −3 −5 −1 6 5 11 −1 − (−2) − −2 1 −3 −5 6 − 1 5 − (−3) 11 − (−5) − 1 2 ((−2)A) = − 1 2 − 1 2 (−2) A = 1 5 8 16 1 5 8 16 (1) − 1 (5) 2 (16) (8 1A = A = − 1 2 − 5 2 −4 − 16 2 − 1 2 − 5 2 −4 −8 The reader is encouraged to check our answer in the original equation. 8.3 Matrix Arithmetic 583 While the solution to the previous example is written in excruciating detail, in practice many of the steps above are omitted. We have spelled out each step in this example to encourage the reader to justify each step using the definitions and properties we have established thus far for matrix arithmetic. The reader is encouraged to solve the equation in Example 8.3.1 as they would any other linear equation, for example: 3a − (2 + 5a) = −4 + 1 We now turn our attention to matrix multiplication - that is, multiplying a matrix by another matrix. Based on the ‘no surprises’ trend so far in the section, you may expect that in order to multiply two matrices, they must be of the same size and you find the product by multiplying the corresponding entries. While this kind of product is used in other areas of mathematics,6 we define matrix multiplication to serve us in solving systems of linear equations. To that end, we begin by defining the product of a row and a column. We motivate the general definition with an example. Consider the two matrices A and B below. 3 (9). A = 2 −10 0 −8 1 8 −5 9 0 −2 −12   Let R1 denote the first row of A and C1 denote the first column of B. To find the ‘product’ of R1 with C1, denoted R1 · C1, we first find the product of the first
entry in R1 and the first entry in C1. Next, we add to that the product of the second entry in R1 and the second entry in C1. Finally, we take that sum and we add to that the product of the last entry in R1 and the last entry in C1. Using entry notation, R1·C1 = a11b11 +a12b21 +a13b31 = (2)(3)+(0)(4)+(−1)(5) = 6+0+(−5) = 1. We can visualize this schematically as follows 2 −10 0 −8 1 8 −5 9 0 −2 −12   −−−−−−−−−→ 0 −1 2 a11b11 (2)(3) 3 4 5      −−−−−−−−−→ 0 −1 2 + + a12b21 (0)(4) 3 4 5      −−−−−−−−−→ 0 −1 2 + + a13b31 (−1)(5) 3 4 5      To find R2 · C3 where R2 denotes the second row of A and C3 denotes the third column of B, we proceed similarly. We start with finding the product of the first entry of R2 with the first entry in C3 then add to it the product of the second entry in R2 with the second entry in C3, and so forth. Using entry notation, we have R2·C3 = a21b13+a22b23+a23b33 = (−10)(2)+(3)(−5)+(5)(−2) = −45. Schematically, 2 −10 0 −8 1 8 −5 9 0 −2 −12   6See this article on the Hadamard Product. 584 Systems of Equations and Matrices −−−−−−−−−→ −10 3 5 2 −5 −2      a21b13 = (−10)(2) = −20 + −−−−
−−−−−→ −10 5 3 2 −5 −2      −−−−−−−−−→ −10 3 5 2 −5 −2      a22b23 = (3)(−5) = −15 a23b33 = (5)(−2) = −10 + Generalizing this process, we have the following definition. Definition 8.9. Product of a Row and a Column: Suppose A = [aij]m×n and B = [bij]n×r. Let Ri denote the ith row of A and let Cj denote the jth column of B. The product of Ri and Cj, denoted Ri · Cj is the real number defined by Ri · Cj = ai1b1j + ai2b2j +... ainbnj Note that in order to multiply a row by a column, the number of entries in the row must match the number of entries in the column. We are now in the position to define matrix multiplication. Definition 8.10. Matrix Multiplication: Suppose A = [aij]m×n and B = [bij]n×r. Let Ri denote the ith row of A and let Cj denote the jth column of B. The product of A and B, denoted AB, is the matrix defined by that is AB = [Ri · Cj]m×r      AB = R1 · C1 R1 · C2 R2 · C1 R2 · C2... R1 · Cr... R2 · Cr......... Rm · C1 Rm · C2... Rm · Cr      There are a number of subtleties in Definition 8.10 which warrant closer inspection. First and foremost, Definition 8.10 tells us that the ij-entry of a matrix product AB is the ith row of A times the jth column of B. In order for this to be de�
��ned, the number of entries in the rows of A must match the number of entries in the columns of B. This means that the number of columns of A must match7 the number of rows of B. In other words, to multiply A times B, the second dimension of A must match the first dimension of B, which is why in Definition 8.10, Am×n is being multiplied by a matrix Bn×r. Furthermore, the product matrix AB has as many rows as A and as many columns of B. As a result, when multiplying a matrix Am×n by a matrix Bn×r, the result is the matrix ABm×r. Returning to our example matrices below, we see that A is a 2 × 3 matrix and B is a 3 × 4 matrix. This means that the product matrix AB is defined and will be a 2 × 4 matrix. A = 2 −10 0 −8 1 8 −5 9 0 −2 −12   7The reader is encouraged to think this through carefully. 8.3 Matrix Arithmetic 585 Using Ri to denote the ith row of A and Cj to denote the jth column of B, we form AB according to Definition 8.10. AB = R1 · C1 R1 · C2 R1 · C3 R1 · C4 R2 · C1 R2 · C2 R2 · C3 R2 · C4 = 1 7 2 14 −45 6 −4 47 Note that the product BA is not defined, since B is a 3 × 4 matrix while A is a 2 × 3 matrix; B has more columns than A has rows, and so it is not possible to multiply a row of B by a column of A. Even when the dimensions of A and B are compatible such that AB and BA are both defined, the product AB and BA aren’t necessarily equal.8 In other words, AB may not equal BA. Although there is no commutative property of matrix multiplication in general, several other real number properties are inherited by matrix multiplication, as illustrated in our next theorem. Theorem 8.5. Properties of Matrix Multiplication Let A, B and C be matrices such that all of the matrix products below are defined and let k be a real number. Associative Property of
Matrix Multiplication: (AB)C = A(BC) Associative Property with Scalar Multiplication: k(AB) = (kA)B = A(kB) Identity Property: For a natural number k, the k × k identity matrix, denoted Ik, is defined by Ik = [dij]k×k where dij = 1, if i = j 0, otherwise For all m × n matrices, ImA = AIn = A. Distributive Property of Matrix Multiplication over Matrix Addition: A(B ± C) = AB ± AC and (A ± B)C = AC ± BC The one property in Theorem 8.5 which begs further investigation is, without doubt, the multiplicative identity. The entries in a matrix where i = j comprise what is called the main diagonal of the matrix. The identity matrix has 1’s along its main diagonal and 0’s everywhere else. A few examples of the matrix Ik mentioned in Theorem 8.5 are given below. The reader is encouraged to see how they match the definition of the identity matrix presented there   I2 I3 [1] I1         I4 8And may not even have the same dimensions. For example, if A is a 2 × 3 matrix and B is a 3 × 2 matrix, then AB is defined and is a 2 × 2 matrix while BA is also defined... but is a 3 × 3 matrix! 586 Systems of Equations and Matrices The identity matrix is an example of what is called a square matrix as it has the same number of rows as columns. Note that to in order to verify that the identity matrix acts as a multiplicative identity, some care must be taken depending on the order of the multiplication. For example, take the matrix 2 × 3 matrix A from earlier A = 2 −10 0 −1 5 3 In order for the product IkA to be defined, k = 2; similarly, for AIk to be defined, k = 3. We leave it to the reader to show I2A = A and AI3 = A. In other words, and 1 0 0 1 2 −10 0 −1 5 3 = 2 −10 0 −
1 5 3 2 −10 0 −10 0 −1 5 3 While the proofs of the properties in Theorem 8.5 are computational in nature, the notation becomes quite involved very quickly, so they are left to a course in Linear Algebra. The following example provides some practice with matrix multiplication and its properties. As usual, some valuable lessons are to be learned. Example 8.3.2. 1. Find AB for A = −23 −1 46 17 2 −34 and B =   −3 2 1 5 −4 3   2. Find C2 − 5C + 10I2 for C = 1 −2 4 3 3. Suppose M is a 4 × 4 matrix. Use Theorem 8.5 to expand (M − 2I4) (M + 3I4). Solution. 1. We have AB = −23 −1 46 17 2 −34   −3 2 1 5 −. Just as x2 means x times itself, C2 denotes the matrix C times itself. We get 8.3 Matrix Arithmetic 587 C2 − 5C + 10I2 = = = = 1 −2 4 3 1 −2 4 3 2 − 5 1 −2 4 3 1 −2 4 3 + + 10 1 0 0 1 −5 10 −15 −20 10 0 0 10 + + 5 10 −15 −10 −5 −10 10 15 0 0 0 0 3. We expand (M − 2I4) (M + 3I4) with the same pedantic zeal we showed in Example 8.3.1. The reader is encouraged to determine which property of matrix arithmetic is used as we proceed from one step to the next. (M − 2I4) (M + 3I4) = (M − 2I4) M + (M − 2I4) (3I4) = M M − (2I4) M + M (3I4) − (2I4) (3I4) = M 2 − 2 (I4M ) + 3 (M I4) − 2 (I4 (3I4)) = M 2 − 2M + 3M − 2 (3 (I4I4)) = M 2 + M − 6I4 Example 8.3.2 illustrates some interesting features of matrix multiplication. First note that in part 1, neither A nor B is the zero matrix, yet
the product AB is the zero matrix. Hence, the the zero product property enjoyed by real numbers and scalar multiplication does not hold for matrix multiplication. Parts 2 and 3 introduce us to polynomials involving matrices. The reader is encouraged to step back and compare our expansion of the matrix product (M − 2I4) (M + 3I4) in part 3 with the product (x − 2)(x + 3) from real number algebra. The exercises explore this kind of parallel further. As we mentioned earlier, a point P (x, y) in the xy-plane can be represented as a 2 × 1 position matrix. We now show that matrix multiplication can be used to rotate these points, and hence graphs of equations. Example 8.3.3. Let. Plot P (2, −2), Q(4, 0), S(0, 3), and T (−3, −3) in the plane as well as the points RP, RQ, RS, and RT. Plot the lines y = x and y = −x as guides. What does R appear to be doing to these points? 2. If a point P is on the hyperbola x2 − y2 = 4, show that the point RP is on the curve y = 2 x. 588 Systems of Equations and Matrices Solution. For P (2, −2), the position matrix is P = 2 −2, and RP = = √ 2 √ √ √ We have that R takes (2, −2) to (2 2), (0, 3) √ 2 2, 3 is moved to 2). Plotting these in the coordinate plane along with the lines y = x and y = −x, we see that the matrix R is rotating these points counterclockwise by 45◦. 2, 0). Similarly, we find (4, 0) is moved to (2 √, and (−3, −3) is moved to (0, −3 − 3 2 RQ RP RS −4 −3 −2 −1 −1 1 2 3 Q x T −2 −3 −4 P RT For a generic point P (x, y) on the hyperbola x2 − y2 = 4, we have RP = = √ which means R takes (x, y) to y = 2 x, we replace x with √, √ 2 2 y and y with √
√ +. To show that this point is on the curve √ 2 2 y and simplify. 8.3 Matrix Arithmetic 589 √ √ = 2 2 x − y2 x2 x2 − y2 Since (x, y) is on the hyperbola x2 − y2 = 4, we know that this last equation is true. Since all of our steps are reversible, this last equation is equivalent to our original equation, which establishes the point is, indeed, on the graph of y = 2 x is a hyperbola, and it is none other than the hyperbola x2 − y2 = 4 rotated counterclockwise by 45◦.9 Below we have the graph of x2 − y2 = 4 (solid line) and y = 2 x. This means the graph of y = 2 x (dashed line) for comparison3 −1 −1 −2 −3 When we started this section, we mentioned that we would temporarily consider matrices as their own entities, but that the algebra developed here would ultimately allow us to solve systems of linear equations. To that end, consider the system    3x − y + z = 8 x + 2y − z = 4 2x + 3y − 4z = 10 In Section 8.2, we encoded this system into the augmented matrix   3 −1 1 2 8 1 2 −1 4 3 −4 10   9See Section 7.5 for more details. 590 Systems of Equations and Matrices Recall that the entries to the left of the vertical line come from the coefficients of the variables in the system, while those on the right comprise the associated constants. For that reason, we may form the coefficient matrix A, the unknowns matrix X and the constant matrix B as below   A = 3 −1 1 2 1 2 −1 3 − 10 x y z We now consider the matrix equation AX = B. AX = B   3 −1 3 −4 x y z 3x − y + z x + 2y − z 2x + 3y − 4z   =   =         8 4 10 8 4 10
  We see that finding a solution (x, y, z) to the original system corresponds to finding a solution X for the matrix equation AX = B. If we think about solving the real number equation ax = b, we would simply ‘divide’ both sides by a. Is it possible to ‘divide’ both sides of the matrix equation AX = B by the matrix A? This is the central topic of Section 8.4. 8.3 Matrix Arithmetic 8.3.1 Exercises For each pair of matrices A and B in Exercises 1 - 7, find the following, if defined 3A A − 2B −B AB 1. A = 3. A = 2 −3 1 4 −1 3 5 2, B = 5 −3 1 4 A2 BA 2. A = −1 5 −3 6, B = 2 10 1 −7 41 3 −5 11 7 −9 591     5. A = 7. A =  , B = 1 2 3 7 8 9 6. A =   −3 1 −2 4 5 −6  , B = −5 1 8 2 −3 3 −7 5 1 −2 1 −1   , B =  1 2 1 17 33 19 10 19 11   In Exercises 8 - 21, use the matrices 3 2 −5 C = 10 − 11 13 9 0 −5 to compute the following or state that the indicated operation is undefined. 8. 7B − 4A 11. E + D 14. A − 4I2 9. AB 12. ED 15. A2 − B2 10. BA 13. CD + 2I2A 16. (A + B)(A − B) 17. A2 − 5A − 2I2 18. E2 + 5E − 36I3 19. EDC 20. CDE 22. Let A = a b c d e f E1 = 21. ABCEDI2 0 1 1 0 E2 = 5 0 0 1 E3 = 1 −2 1 0 Compute E1A, E2A and E3A. What e�
�ect did each of the Ei matrices have on the rows of A? Create E4 so that its effect on A is to multiply the bottom row by −6. How would you extend this idea to matrices with more than two rows? 592 Systems of Equations and Matrices In Exercises 23 - 29, consider the following scenario. In the small village of Pedimaxus in the country of Sasquatchia, all 150 residents get one of the two local newspapers. Market research has shown that in any given week, 90% of those who subscribe to the Pedimaxus Tribune want to keep getting it, but 10% want to switch to the Sasquatchia Picayune. Of those who receive the Picayune, 80% want to continue with it and 20% want switch to the Tribune. We can express this situation using matrices. Specifically, let X be the ‘state matrix’ given by X = T P where T is the number of people who get the Tribune and P is the number of people who get the Picayune in a given week. Let Q be the ‘transition matrix’ given by Q = 0.90 0.20 0.10 0.80 such that QX will be the state matrix for the next week. 23. Let’s assume that when Pedimaxus was founded, all 150 residents got the Tribune. (Let’s call this Week 0.) This would mean X = 150 0 Since 10% of that 150 want to switch to the Picayune, we should have that for Week 1, 135 people get the Tribune and 15 people get the Picayune. Show that QX in this situation is indeed QX = 135 15 24. Assuming that the percentages stay the same, we can get to the subscription numbers for Week 2 by computing Q2X. How many people get each paper in Week 2? 25. Explain why the transition matrix does what we want it to do. 26. If the conditions do not change from week to week, then Q remains the same and we have what’s known as a Stochastic Process10 because Week n’s numbers are found by computing QnX. Choose a few values of n and, with the help of your classmates and calculator, find out how many people get each paper for that week. You should start to see a pattern as n →
∞. 27. If you didn’t see the pattern, we’ll help you out. Let Xs = 100 50. Show that QXs = Xs This is called the steady state because the number of people who get each paper didn’t change for the next week. Show that QnX → Xs as n → ∞. 10More specifically, we have a Markov Chain, which is a special type of stochastic process. 8.3 Matrix Arithmetic 593 28. Now let Show that Qn → S as n → ∞. 29. Show that SY = Xs for any matrix Y of the form Y = y 150 − y This means that no matter how the distribution starts in Pedimaxus, if Q is applied often enough, we always end up with 100 people getting the Tribune and 50 people getting the Picayune. 30. Let z = a + bi and w = c + di be arbitrary complex numbers. Associate z and w with the matrices Z = a b −b a and W = c d −d c Show that complex number addition, subtraction and multiplication are mirrored by the associated matrix arithmetic. That is, show that Z + W, Z − W and ZW produce matrices which can be associated with the complex numbers z + w, z − w and zw, respectively. 31. Let A = 1 2 3 4 and B = 0 −3 2 −5 Compare (A + B)2 to A2 + 2AB + B2. Discuss with your classmates what constraints must be placed on two arbitrary matrices A and B so that both (A + B)2 and A2 + 2AB + B2 exist. When will (A + B)2 = A2 + 2AB + B2? In general, what is the correct formula for (A + B)2? In Exercises 32 - 36, consider the following definitions. A square matrix is said to be an upper triangular matrix if all of its entries below the main diagonal are zero and it is said to be a lower triangular matrix if all of its entries above the main diagonal are zero. For example9 0 −5 from Exercises 8 - 21 above is an upper triangular matrix whereas F = 1 0 3 0 is a lower triangular matrix. questions with your classmates. (Zeros are allowed on the main diagonal.) Discuss the following 594 Systems
of Equations and Matrices 32. Give an example of a matrix which is neither upper triangular nor lower triangular. 33. Is the product of two n × n upper triangular matrices always upper triangular? 34. Is the product of two n × n lower triangular matrices always lower triangular? 35. Given the matrix A = 1 2 3 4 write A as LU where L is a lower triangular matrix and U is an upper triangular matrix? 36. Are there any matrices which are simultaneously upper and lower triangular? 8.3 Matrix Arithmetic 8.3.2 Answers 1. For A = 2 −3 4 1 and B = 5 −2 8 4 595 3A = 6 −9 12 3 −B = −5 2 −4 −8 A2 = 1 −18 13 6 A − 2B = −8 1 −7 −12 AB = −2 −28 30 21 BA = 8 −23 20 16 2. For A = −1 5 −3 6 and B = 2 10 1 −7 3A = −3 15 −9 18 −B = −2 −10 7 −1 A2 = −14 25 −15 21 A − 2B = −5 −15 4 11 AB = −37 −5 −48 −24 BA = −32 70 4 −29 3. For A = −1 3 5 2 and B = 7 0 8 −3 1 4 3A = −3 9 15 6 −B = −7 0 −8 3 −1 −4 A2 = 16 3 5 19 A − 2B is not defined AB = −16 3 4 29 2 48 BA is not defined 4. For A = 2 4 6 8 and B = −1 3 −5 11 7 −9 3A = 6 12 18 24 −B = 1 −3 5 9 −11 −7 A2 = 28 40 60 88 A − 2B is not defined AB = 26 −30 34 50 −54 58 BA is not defined 596 5. For A =   3A = 7 8 9     and B = 1 2 3   21 24 27 A2 is not defined AB =   7 14 21 8 16 24 9 18 27   6. For A =   −3 1
−2 4 5 −6   and B = −5 1 8 3A =   3 −6 −9 12 15 −18   A2 is not defined AB is not defined Systems of Equations and Matrices −B = −1 −2 −3 A − 2B is not defined BA = [50] −B = 5 −1 −8 A − 2B is not defined BA = 32 −34 7. For A =   2 −3 3 −7 5 1 −2 1 −1    and B =  1 1 2 17 33 19 10 19 11   3A = A2 =     6 −9 9 −21 15 3 −6 3 −3   −40 −4 23 −10 −4 11 15 21 −36   AB = 8. 7B − 4A =     4 −29 −47 −2 −B =   −1 −2 −1 −17 −33 −19 −10 −19 −11   A − 2B =   0 −7 3 −31 −65 −40 −27 −37 −23   BA =    . AB = −10 1 −20 −1 8.3 Matrix Arithmetic 597 10. BA = 12. ED = −9 −12 1 −2    67 11 3 − 178 3 −72 −30 −40    14. A − 4I2 = −3 2 3 0 16. (A + B)(A − B) = −7 3 46 2 11. E + D is undefined 13. CD + 2I2A = 238 3 −126 361 863 5 15 15. A2 − B2 = 17. A2 − 5A − 2I2 = −8 16 3 25 0 0 0 0 18. E2 + 5E − 36I3 =   −30 20 −15 0
−36 0 −36 0 0   19. EDC =    3449 15 − 407 15 − 101 99 6 − 9548 3 −648 −324 −35 −360    20. CDE is undefined 21. ABCEDI2 = − 90749 15 − 156601 15 − 28867 5 − 47033 5 d e f c a b E1 interchanged R1 and R2 of A. 22. E1A = E2A = E3A = d 5a 5b 5c f a − 2d b − 2e c − 2f f e e d E4 = 1 0 0 −6 E2 multiplied R1 of A by 5. E3 replaced R1 in A with R1 − 2R2. 598 Systems of Equations and Matrices 8.4 Systems of Linear Equations: Matrix Inverses We concluded Section 8.3 by showing how we can rewrite a system of linear equations as the matrix equation AX = B where A and B are known matrices and the solution matrix X of the equation corresponds to the solution of the system. In this section, we develop the method for solving such an equation. To that end, consider the system 2x − 3y = 16 3x + 4y = 7 To write this as a matrix equation, we follow the procedure outlined on page 590. We find the coefficient matrix A, the unknowns matrix X and constant matrix B to be A = 2 −3 4 3 X = x y B = 16 7 In order to motivate how we solve a matrix equation like AX = B, we revisit solving a similar equation involving real numbers. Consider the equation 3x = 5. To solve, we simply divide both sides by 3 and obtain x = 5 3. How can we go about defining an analogous process for matrices? To answer this question, we solve 3x = 5 again, but this time, we pay attention to the properties of real numbers being used at each step. Recall that dividing by 3 is the same as multiplying by 1 3 = 3−1, the so-called multiplicative inverse 1 of 3. 3x = 5 3−1(3x) = 3−1(5) Multiply by the (multiplicative) inverse of 3 Associative property of multiplication Inverse
property Multiplicative Identity 3−1 · 3 x = 3−1(5) 1 · x = 3−1(5) x = 3−1(5) If we wish to check our answer, we substitute x = 3−1(5) into the original equation 3x 3 3−1(5) 3 · 3−1 (5 Associative property of multiplication? = 5 = 5 Multiplicative Identity Inverse property Thinking back to Theorem 8.5, we know that matrix multiplication enjoys both an associative property and a multiplicative identity. What’s missing from the mix is a multiplicative inverse for the coefficient matrix A. Assuming we can find such a beast, we can mimic our solution (and check) to 3x = 5 as follows 1Every nonzero real number a has a multiplicative inverse, denoted a−1, such that a−1 · a = a · a−1 = 1. 8.4 Systems of Linear Equations: Matrix Inverses 599 Solving AX = B Checking our answer AX = B A−1(AX) = A−1B A−1A X = A−1B I2X = A−1B X = A−1B AX A A−1B AA−1 B I2B The matrix A−1 is read ‘A-inverse’ and we will define it formally later in the section. At this stage, we have no idea if such a matrix A−1 exists, but that won’t deter us from trying to find it.2 We want A−1 to satisfy two equations, A−1A = I2 and AA−1 = I2, making A−1 necessarily a 2 × 2 matrix.3 Hence, we assume A−1 has the form A−1 = x1 x2 x3 x4 for real numbers x1, x2, x3 and x4. For reasons which will become clear later, we focus our attention on the equation AA−1 = I2. We have AA−1 = I2 x1 x2 2 −3 x3 x4 4 3 2x1 − 3x3 2x2 − 3x4 3x1 + 4x3 3x2 + 4x4 = = 1 0 0 1 1 0 0 1 This gives rise to two more systems of equations 2x1 − 3
x3 = 1 3x1 + 4x3 = 0 2x2 − 3x4 = 0 3x2 + 4x4 = 1 At this point, it may seem absurd to continue with this venture. After all, the intent was to solve one system of equations, and in doing so, we have produced two more to solve. Remember, the objective of this discussion is to develop a general method which, when used in the correct scenarios, allows us to do far more than just solve a system of equations. If we set about to solve these systems using augmented matrices using the techniques in Section 8.2, we see that not only do both systems have the same coefficient matrix, this coefficient matrix is none other than the matrix A itself. (We will come back to this observation in a moment.) 2Much like Carl’s quest to find Sasquatch. 3Since matrix multiplication isn’t necessarily commutative, at this stage, these are two different equations. 600 Systems of Equations and Matrices 2x1 − 3x3 = 1 3x1 + 4x3 = 0 2x2 − 3x4 = 0 3x2 + 4x4 = 1 Encode into a matrix −−−−−−−−−−−−−→ Encode into a matrix −−−−−−−−−−−−−→ 3 2 −3 1 4 0 2 −3 0 4 1 3 To solve these two systems, we use Gauss-Jordan Elimination to put the augmented matrices into reduced row echelon form. (We leave the details to the reader.) For the first system, we get 2 −3 1 4 0 3 Gauss Jordan Elimination −−−−−−−−−−−−−−−−→ 4 1 0 17 0 1 − 3 17 which gives x1 = 4 17. To solve the second system, we use the exact same row operations, in the same order, to put its augmented matrix into reduced row echelon form (Think about why that works.) and we obtain 17 and x3 = − 3 2 −3 0 4 1 3 Gauss Jordan Elimination −−−−−−−−−−−−−−−−→ 1 0 0 1 3 17 2 17 which means x2 = 3 17 and x4 = 2 17. Hence,
A−1 = x1 x2 x3 x4 = 4 17 − 3 17 3 17 2 17 We can check to see that A−1 behaves as it should by computing AA−1 As an added bonus, AA−1 = 2 −3 4 3 4 17 − 3 17 3 17 2 17 1 0 0 1 = = I2 A−1A = 4 17 − 3 17 3 17 2 17 2 −3 4 3 = 1 0 0 1 = I2 We can now return to the problem at hand. From our discussion at the beginning of the section on page 599, we know X = A−1B = 4 17 − 3 17 3 17 2 17 16 7 = 5 −2 so that our final solution to the system is (x, y) = (5, −2). As we mentioned, the point of this exercise was not just to solve the system of linear equations, but to develop a general method for finding A−1. We now take a step back and analyze the foregoing discussion in a more general context. In solving for A−1, we used two augmented matrices, both of which contained the same entries as A 8.4 Systems of Linear Equations: Matrix Inverses 601 3 2 −3 1 4 0 2 − We also note that the reduced row echelon forms of these augmented matrices can be written as 4 1 0 17 0 1 − 3 17 3 17 2 17 1 0 0 1 I2 I2 = = x1 x3 x2 x4 where we have identified the entries to the left of the vertical bar as the identity I2 and the entries to the right of the vertical bar as the solutions to our systems. The long and short of the solution process can be summarized as A 1 0 A 0 1 Gauss Jordan Elimination −−−−−−−−−−−−−−−−→ Gauss Jordan Elimination −−−−−−−−−−−−−−−−→ I2 I2 x1 x3 x2 x4 Since the row operations for both processes are the same, all of the arithmetic on the left hand side of the vertical bar is identical in both problems. The only difference between the two processes is what happens to the constants to the right of the vertical bar. As long as we keep these separated into columns, we can combine our efforts into one
As we mentioned, the point of this exercise was not just to solve the system of linear equations, but to develop a general method for finding A−1. We now take a step back and analyze the foregoing discussion in a more general context. In solving for A−1, we used two augmented matrices, both of which contained the same entries as A

    [ 2  −3 | 1 ]        [ 2  −3 | 0 ]
    [ 3   4 | 0 ]        [ 3   4 | 1 ]

We also note that the reduced row echelon forms of these augmented matrices can be written as

    [ 1  0 |  4/17 ]  =  [ I2 | x1 ]        [ 1  0 | 3/17 ]  =  [ I2 | x2 ]
    [ 0  1 | −3/17 ]     [    | x3 ]        [ 0  1 | 2/17 ]     [    | x4 ]

where we have identified the entries to the left of the vertical bar as the identity I2 and the entries to the right of the vertical bar as the solutions to our systems. The long and short of the solution process can be summarized as

    [ A | 1 ]    Gauss-Jordan Elimination    [ I2 | x1 ]
    [   | 0 ]    ---------------------->     [    | x3 ]

    [ A | 0 ]    Gauss-Jordan Elimination    [ I2 | x2 ]
    [   | 1 ]    ---------------------->     [    | x4 ]

Since the row operations for both processes are the same, all of the arithmetic on the left hand side of the vertical bar is identical in both problems. The only difference between the two processes is what happens to the constants to the right of the vertical bar. As long as we keep these separated into columns, we can combine our efforts into one ‘super-sized’ augmented matrix and describe the above process as

    [ A | 1  0 ]    Gauss-Jordan Elimination    [ I2 | x1  x2 ]
    [   | 0  1 ]    ---------------------->     [    | x3  x4 ]

We have the identity matrix I2 appearing as the right hand side of the first super-sized augmented matrix and the left hand side of the second super-sized augmented matrix. To our surprise and delight, the elements on the right hand side of the second super-sized augmented matrix are none other than those which comprise A−1. Hence, we have

    [ A | I2 ]    Gauss-Jordan Elimination    [ I2 | A−1 ]
                  ---------------------->

In other words, the process of finding A−1 for a matrix A can be viewed as performing a series of row operations which transform A into the identity matrix of the same dimension. We can view this process as follows. In trying to find A−1, we are trying to ‘undo’ multiplication by the matrix A. The identity matrix in the super-sized augmented matrix [A|I] keeps a running memory of all of the moves required to ‘undo’ A. This results in exactly what we want, A−1. We are now ready to formalize and generalize the foregoing discussion. We begin with the formal definition of an invertible matrix.

Definition 8.11. An n × n matrix A is said to be invertible if there exists a matrix A−1, read ‘A inverse’, such that A−1A = AA−1 = In.

Note that, as a consequence of our definition, invertible matrices are square, and as such, the conditions in Definition 8.11 force the matrix A−1 to be the same dimensions as A, that is, n × n. Since not all matrices are square, not all matrices are invertible. However, just because a matrix is square doesn’t guarantee it is invertible. (See the exercises.)
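As a concrete illustration of that last remark, here is a small sketch in Python with NumPy (our choice of tool, not the text’s): the matrix below is square, but its second row is twice its first, and the attempt to invert it fails.

    import numpy as np

    S = np.array([[1.0, 2.0],
                  [2.0, 4.0]])          # square, but the rows are proportional

    try:
        np.linalg.inv(S)
    except np.linalg.LinAlgError as err:
        print("not invertible:", err)   # NumPy reports a singular matrix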
Our first result summarizes some of the important characteristics of invertible matrices and their inverses.

Theorem 8.6. Suppose A is an n × n matrix.

1. If A is invertible then A−1 is unique.

2. A is invertible if and only if AX = B has a unique solution for every n × r matrix B.

The proofs of the properties in Theorem 8.6 rely on a healthy mix of definition and matrix arithmetic. To establish the first property, we assume that A is invertible and suppose the matrices B and C act as inverses for A. That is, BA = AB = In and CA = AC = In. We need to show that B and C are, in fact, the same matrix. To see this, we note that B = InB = (CA)B = C(AB) = CIn = C. Hence, any two matrices that act like A−1 are, in fact, the same matrix.4 To prove the second property of Theorem 8.6, we note that if A is invertible then the discussion on page 599 shows the solution to AX = B to be X = A−1B, and since A−1 is unique, so is A−1B. Conversely, if AX = B has a unique solution for every n × r matrix B, then, in particular, there is a unique solution X0 to the equation AX = In. The solution matrix X0 is our candidate for A−1. We have AX0 = In by definition, but we need to also show X0A = In. To that end, we note that A (X0A) = (AX0) A = InA = A. In other words, the matrix X0A is a solution to the equation AX = A. Clearly, X = In is also a solution to the equation AX = A, and since we are assuming every such equation has a unique solution, we must have X0A = In. Hence, we have X0A = AX0 = In, so that X0 = A−1 and A is invertible. The foregoing discussion justifies our quest to find A−1 using our super-sized augmented matrix approach

    [ A | In ]    Gauss-Jordan Elimination    [ In | A−1 ]
                  ---------------------->

We are, in essence, trying to find the unique solution to the equation AX = In using row operations.
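The ‘super-sized’ procedure itself is easy to mimic with a computer algebra system. The sketch below, using Python’s SymPy library (an assumption on our part, not a tool the text develops), row-reduces [A | I2] for the matrix A from our discussion and reads A−1 off of the right-hand block.

    from sympy import Matrix, eye

    A = Matrix([[2, -3],
                [3,  4]])
    super_sized = Matrix.hstack(A, eye(2))   # the super-sized augmented matrix [A | I2]

    rref_form, pivots = super_sized.rref()   # Gauss-Jordan elimination in exact arithmetic
    A_inv = rref_form[:, 2:]                 # right-hand block: [[4/17, 3/17], [-3/17, 2/17]]

    print(A_inv)
    print(A * A_inv)                         # the 2 x 2 identity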
What does all of this mean for a system of linear equations? Theorem 8.6 tells us that if we write the system in the form AX = B, then if the coefficient matrix A is invertible, there is only one solution to the system − that is, if A is invertible, the system is consistent and independent.5 We also know that the process by which we find A−1 is determined completely by A, and not by the constants in B. This answers the question as to why we would bother doing row operations on a super-sized augmented matrix to find A−1 instead of an ordinary augmented matrix to solve a system; by finding A−1 we have done all of the row operations we ever need to do, once and for all, since we can quickly solve any equation AX = B using one multiplication, A−1B.

4If this proof sounds familiar, it should. See the discussion following Theorem 5.2 on page 380.
5It can be shown that a matrix is invertible if and only if, when it serves as a coefficient matrix for a system of equations, the system is always consistent independent. It amounts to the second property in Theorem 8.6 where the matrices B are restricted to being n × 1 matrices. We note that, owing to how matrix multiplication is defined, being able to find unique solutions to AX = B for n × 1 matrices B gives you the same statement about solving such equations for n × r matrices − since we can find a unique solution to them one column at a time.

Example 8.4.1. Let A = [ 3   1  2 ]
                        [ 0  −1  5 ]
                        [ 2   1  4 ]

1. Use row operations to find A−1. Check your answer by finding A−1A and AA−1.

2. Use A−1 to solve the following systems of equations

    (a) 3x + y + 2z = 26        (b) 3x + y + 2z = 4        (c) 3x + y + 2z = 1
            −y + 5z = 39                −y + 5z = 2                −y + 5z = 0
        2x + y + 4z = 117           2x + y + 4z = 5            2x + y + 4z = 0
Solution.

1. We begin with a super-sized augmented matrix and proceed with Gauss-Jordan elimination.

    [ 3   1  2 | 1  0  0 ]
    [ 0  −1  5 | 0  1  0 ]
    [ 2   1  4 | 0  0  1 ]

    Replace R1 with (1/3)R1:

    [ 1  1/3  2/3 | 1/3  0  0 ]
    [ 0  −1    5  |  0   1  0 ]
    [ 2   1    4  |  0   0  1 ]

    Replace R3 with −2R1 + R3:

    [ 1  1/3  2/3 |  1/3  0  0 ]
    [ 0  −1    5  |   0   1  0 ]
    [ 0  1/3  8/3 | −2/3  0  1 ]

    Replace R2 with (−1)R2:

    [ 1  1/3  2/3 |  1/3   0  0 ]
    [ 0   1   −5  |   0   −1  0 ]
    [ 0  1/3  8/3 | −2/3   0  1 ]

    Replace R3 with −(1/3)R2 + R3:

    [ 1  1/3  2/3  |  1/3   0   0 ]
    [ 0   1   −5   |   0   −1   0 ]
    [ 0   0  13/3  | −2/3  1/3  1 ]

    Replace R3 with (3/13)R3:

    [ 1  1/3  2/3 |  1/3    0     0   ]
    [ 0   1   −5  |   0    −1     0   ]
    [ 0   0    1  | −2/13  1/13  3/13 ]

    Replace R1 with −(2/3)R3 + R1 and R2 with 5R3 + R2:

    [ 1  1/3  0 |  17/39  −2/39  −2/13 ]
    [ 0   1   0 | −10/13  −8/13  15/13 ]
    [ 0   0   1 |  −2/13   1/13   3/13 ]

    Replace R1 with −(1/3)R2 + R1:

    [ 1  0  0 |   9/13   2/13  −7/13 ]
    [ 0  1  0 | −10/13  −8/13  15/13 ]
    [ 0  0  1 |  −2/13   1/13   3/13 ]

We find

    A−1 = [   9/13   2/13  −7/13 ]
          [ −10/13  −8/13  15/13 ]
          [  −2/13   1/13   3/13 ]

To check our answer, we compute

    A−1A = [   9/13   2/13  −7/13 ] [ 3   1  2 ]
           [ −10/13  −8/13  15/13 ] [ 0  −1  5 ]  =  I3
           [  −2/13   1/13   3/13 ] [ 2   1  4 ]

and

    AA−1 = [ 3   1  2 ] [   9/13   2/13  −7/13 ]
           [ 0  −1  5 ] [ −10/13  −8/13  15/13 ]  =  I3
           [ 2   1  4 ] [  −2/13   1/13   3/13 ]

2. Each of the systems in this part has A as its coefficient matrix. The only difference between the systems is the constants which is the matrix B in the associated matrix equation AX = B. We solve each of them using the formula X = A−1B.

    (a) X = A−1B = [   9/13   2/13  −7/13 ] [  26 ]     [ −39 ]
                   [ −10/13  −8/13  15/13 ] [  39 ]  =  [  91 ]
                   [  −2/13   1/13   3/13 ] [ 117 ]     [  26 ]

        Our solution is (−39, 91, 26).

    (b) X = A−1B = [   9/13   2/13  −7/13 ] [ 4 ]     [  5/13 ]
                   [ −10/13  −8/13  15/13 ] [ 2 ]  =  [ 19/13 ]
                   [  −2/13   1/13   3/13 ] [ 5 ]     [  9/13 ]

        We get (5/13, 19/13, 9/13).

    (c) X = A−1B = [   9/13   2/13  −7/13 ] [ 1 ]     [   9/13 ]
                   [ −10/13  −8/13  15/13 ] [ 0 ]  =  [ −10/13 ]
                   [  −2/13   1/13   3/13 ] [ 0 ]     [  −2/13 ]

        We find (9/13, −10/13, −2/13).6

In Example 8.4.1, we see that finding one inverse matrix can enable us to solve an entire family of systems of linear equations. There are many examples of where this comes in handy ‘in the wild’, and we chose our example for this section from the field of electronics. We also take this opportunity to introduce the student to how we can compute inverse matrices using the calculator.

6Note that the solution is the first column of the A−1. The reader is encouraged to meditate on this ‘coincidence’.
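For readers working without a graphing calculator, the same computation can be sketched with a computer algebra system; the snippet below (SymPy again, our own choice of tool, not the text’s) reproduces the exact fractions found in Example 8.4.1.

    from sympy import Matrix

    A = Matrix([[3,  1, 2],
                [0, -1, 5],
                [2,  1, 4]])

    A_inv = A.inv()              # exact inverse; every entry has denominator 13
    print(A_inv)

    B = Matrix([26, 39, 117])    # constants from system (a)
    print(A_inv * B)             # Matrix([[-39], [91], [26]])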
Example 8.4.2. Consider the circuit diagram below.7 We have two batteries with source voltages VB1 and VB2, measured in volts V, along with six resistors with resistances R1 through R6, measured in kiloohms, kΩ. Using Ohm’s Law and Kirchhoff’s Voltage Law, we can relate the voltage supplied to the circuit by the two batteries to the voltage drops across the six resistors in order to find the four ‘mesh’ currents: i1, i2, i3 and i4, measured in milliamps, mA. If we think of electrons flowing through the circuit, we can think of the voltage sources as providing the ‘push’ which makes the electrons move, the resistors as obstacles for the electrons to overcome, and the mesh current as a net rate of flow of electrons around the indicated loops.

[Circuit diagram: voltage sources VB1 and VB2, resistors R1 through R6, and mesh currents i1, i2, i3 and i4 around the four loops.]

The system of linear equations associated with this circuit is

    (R1 + R3) i1 − R3i2 − R1i4 = VB1
    −R3i1 + (R2 + R3 + R4) i2 − R4i3 − R2i4 = 0
    −R4i2 + (R4 + R6) i3 − R6i4 = −VB2
    −R1i1 − R2i2 − R6i3 + (R1 + R2 + R5 + R6) i4 = 0

1. Assuming the resistances are all 1kΩ, find the mesh currents if the battery voltages are

    (a) VB1 = 10V and VB2 = 5V
    (b) VB1 = 10V and VB2 = 0V
    (c) VB1 = 0V and VB2 = 10V
    (d) VB1 = 10V and VB2 = 10V

2. Assuming VB1 = 10V and VB2 = 5V, find the possible combinations of resistances which would yield the mesh currents you found in 1(a).

7The authors wish to thank Don Anthan of Lakeland Community College for the design of this example.

Solution.

1. Substituting the resistance values into our system of equations, we get

    2i1 − i2 − i4 = VB1
    −i1 + 3i2 − i3 − i4 = 0
    −i2 + 2i3 − i4 = −VB2
    −i1 − i2 − i3 + 4i4 = 0

This corresponds to the matrix equation AX = B where

    A = [  2  −1   0  −1 ]      X = [ i1 ]      B = [  VB1 ]
        [ −1   3  −1  −1 ]          [ i2 ]          [   0  ]
        [  0  −1   2  −1 ]          [ i3 ]          [ −VB2 ]
        [ −1  −1  −1   4 ]          [ i4 ]          [   0  ]

When we input the matrix A into the calculator, we find [calculator screenshot] from which we have

    A−1 = [ 1.625  1.25   1.125  1 ]
          [ 1.25   1.5    1.25   1 ]
          [ 1.125  1.25   1.625  1 ]
          [ 1      1      1      1 ]

To solve the four systems given to us, we find X = A−1
B where the value of B is determined by the given values of VB1 and VB2 1 (a) B =         10 0 −5 0, 1 (b) B =         10 0 0 0, 1 (c10 0, 1 (d) B =         10 0 10 0 (a) For VB1 = 10V and VB2 = 5V, the calculator gives i1 = 10.625 mA, i2 = 6.25 mA, i3 = 3.125 mA, and i4 = 5 mA. We include a calculator screenshot below for this part (and this part only!) for reference. 8.4 Systems of Linear Equations: Matrix Inverses 607 (b) By keeping VB1 = 10V and setting VB2 = 0V, we are removing the effect of the second battery. We get i1 = 16.25 mA, i2 = 12.5 mA, i3 = 11.25 mA, and i4 = 10 mA. (c) Part (c) is a symmetric situation to part (b) in so much as we are zeroing out VB1 and making VB2 = 10. We find i1 = −11.25 mA, i2 = −12.5 mA, i3 = −16.25 mA, and i4 = −10 mA, where the negatives indicate that the current is flowing in the opposite direction as is indicated on the diagram. The reader is encouraged to study the symmetry here, and if need be, hold up a mirror to the diagram to literally ‘see’ what is happening. (d) For VB1 = 10V and VB2 = 10V, we get i1 = 5 mA, i2 = 0 mA, i3 = −5 mA, and i4 = 0 mA. The mesh currents i2 and i4 being zero is a consequence of both batteries ‘pushing’ in equal but opposite directions, causing the net �
�ow of electrons in these two regions to cancel out. 2. We now turn the tables and are given VB1 = 10V, VB2 = 5V, i1 = 10.625 mA, i2 = 6.25 mA, i3 = 3.125 mA and i4 = 5 mA and our unknowns are the resistance values. Rewriting our system of equations, we get    1.25R2 − 4.375R3 + 3.125R4 = 5.625R1 + 4.375R3 = 10 0 −3.125R4 − 1.875R6 = −5 0 −5.625R1 − 1.25R2 + 5R5 + 1.875R6 = The coefficient matrix for this system is 4 × 6 (4 equations with 6 unknowns) and is therefore not invertible. We do know, however, this system is consistent, since setting all the resistance values equal to 1 corresponds to our situation in problem 1a. This means we have an underdetermined consistent system which is necessarily dependent. To solve this system, we encode it into an augmented matrix 0 4.375 1.25 −4.375     5.25 0 0 0 −5.625 −1.25 0 3.125 0 −3.125 0 0 0 0 0 10 0 0 0 −1.875 −5 0 1.875 5     and use the calculator to write in reduced row echelon form 608 Systems of Equations and Matrices     1 0 0 0 0.7 0 1 −3..7 0 0 −1.5 −4 0.6 1.6 0 1 1 0     Decoding this system from the matrix, we get    R1 + 0.7R3 = 1.7 R2 − 3.5R3 − 1.5R6 = −4 R4 + 0.6R6 = 1.6 1 R5 = We can solve for R1, R2, R4
and R5 leaving R3 and R6 as free variables. Labeling R3 = s and R6 = t, we have R1 = −0.7s + 1.7, R2 = 3.5s + 1.5t − 4, R4 = −0.6t + 1.6 and R5 = 1. Since resistance values are always positive, we need to restrict our values of s and t. We know R3 = s > 0 and when we combine that with R1 = −0.7s + 1.7 > 0, we get 0 < s < 16 7. Similarly, R6 = t > 0 and with R4 = −0.6t + 1.6 > 0, we find 0 < t < 8 In order visualize the inequality R2 = 3.5s + 1.5t − 4 > 0, we graph the 3. line 3.5s + 1.5t − 4 = 0 on the st-plane and shade accordingly.8 Imposing the additional conditions 0 < s < 16 3, we find our values of s and t restricted to the region depicted on the right. Using the roster method, the values of s and t are pulled from the region (s, t) : 0 < s < 16 3, 3.5s + 1.5t − 4 > 0. The reader is encouraged to check that the solution presented in 1(a), namely all resistance values equal to 1, corresponds to a pair (s, t) in the region. 7 and = 16 7 −2 −1 1 2 4 t −2 −1 1 2 4 t −1 −1 The region where 3.5s + 1.5t − 4 > 0 The region for our parameters s and t. t = 8 3 8See Section 2.4 for a review of this procedure. 8.4 Systems of Linear Equations: Matrix Inverses 609 8.4.1 Exercises In Exercises 1 - 8, find the inverse of the matrix or state that the matrix is not invertible. 1. A = 3. C = 1 2 3 4 6 15 14 35 2. B = 4. D = 12 −7 −5 3 2 −1 16 −9     5. E = 7. G = 0 3 2 −1 4 3 2 −5 �
��  3 2 3 11 4 19 −3 1 2 3   6. F = 83 4 −3 6 2 1 2 −2 0 0 0 −3 8 16 4 − In Exercises 9 - 11, use one matrix inverse to solve the following systems of linear equations. 3x + 7y = 26 5x + 12y = 39 9. 10. 3x + 7y = 0 5x + 12y = −1 11. 3x + 7y = −7 5 5x + 12y = In Exercises 12 - 14, use the inverse of E from Exercise 5 above to solve the following systems of linear equations.    12. 3x + 4z = 1 2x − y + 3z = 0 −3x + 2y − 5z = 0    13. 3x + 4z = 0 2x − y + 3z = 1 −3x + 2y − 5z = 0    14. 3x + 4z = 0 2x − y + 3z = 0 −3x + 2y − 5z = 1 15. This exercise is a continuation of Example 8.3.3 in Section 8.3 and gives another application of matrix inverses. Recall that given the position matrix P for a point in the plane, the matrix RP corresponds to a point rotated 45◦ counterclockwise from P where a) Find R−1. (b) If RP rotates a point counterclockwise 45◦, what should R−1P do? Check your answer by finding R−1P for various points on the coordinate axes and the lines y = ±x. (c) Find R−1P where P corresponds to a generic point P (x, y). Verify that this takes points on the curve y = 2 x to points on the curve x2 − y2 = 4. 610 Systems of Equations and Matrices 16. A Sasquatch’s diet consists of three primary foods: Ippizuti Fish, Misty Mushrooms, and Sun Berries. Each serving of Ippizuti Fish is 500 calories, contains 40 grams of protein, and has no Vitamin X. Each serving of Misty Mushrooms is 50 calories, contains 1 gram of protein, and 5 milligrams of Vitamin X. Finally,
each serving of Sun Berries is 80 calories, contains no protein, but has 15 milligrams of Vitamin X.9 (a) If an adult male Sasquatch requires 3200 calories, 130 grams of protein, and 275 milligrams of Vitamin X daily, use a matrix inverse to find how many servings each of Ippizuti Fish, Misty Mushrooms, and Sun Berries he needs to eat each day. (b) An adult female Sasquatch requires 3100 calories, 120 grams of protein, and 300 milligrams of Vitamin X daily. Use the matrix inverse you found in part (a) to find how many servings each of Ippizuti Fish, Misty Mushrooms, and Sun Berries she needs to eat each day. (c) An adolescent Sasquatch requires 5000 calories, 400 grams of protein daily, but no Vitamin X daily.10 Use the matrix inverse you found in part (a) to find how many servings each of Ippizuti Fish, Misty Mushrooms, and Sun Berries she needs to eat each day. 17. Matrices can be used in cryptography. Suppose we wish to encode the message ‘BIGFOOT LIVES’. We start by assigning a number to each letter of the alphabet, say A = 1, B = 2 and so on. We reserve 0 to act as a space. Hence, our message ‘BIGFOOT LIVES’ corresponds to the string of numbers ‘2, 9, 7, 6, 15, 15, 20, 0, 12, 9, 22, 5, 19.’ To encode this message, we use an invertible matrix. Any invertible matrix will do, but for this exercise, we choose   A = 2 −3 3 −7 5 1 −2 1 −1   Since A is 3 × 3 matrix, we encode our message string into a matrix M with 3 rows. To do this, we take the first three numbers, 2 9 7, and make them our first column, the next three numbers, 6 15 15, and make them our second column, and so on. We put 0’s to round out the matrix.   M = 6 20 2 9 15 7 15 12 9 19 0 0 0 22 5   To encode the message
, we find the product AM AM =   2 −3 3 −7 5 1 −2 1 −1     6 20 2 9 15 7 15 12 9 19 0 0 0 22 5    =  42 3 12 1 100 −23 38 57 39 36 −12 −42 −152 −46 −133   9Misty Mushrooms and Sun Berries are the only known fictional sources of Vitamin X. 10Vitamin X is needed to sustain Sasquatch longevity only. 8.4 Systems of Linear Equations: Matrix Inverses 611 So our coded message is ‘12, 1, −12, 42, 3, −42, 100, 36, −152, −23, 39, −46, 38, 57, −133.’ To decode this message, we start with this string of numbers, construct a message matrix as we did earlier (we should get the matrix AM again) and then multiply by A−1. (a) Find A−1. (b) Use A−1 to decode the message and check this method actually works. (c) Decode the message ‘14, 37, −76, 128, 21, −151, 31, 65, −140’ (d) Choose another invertible matrix and encode and decode your own messages. 18. Using the matrices A from Exercise 1, B from Exercise 2 and D from Exercise 4, show AB = D and D−1 = B−1A−1. That is, show that (AB)−1 = B−1A−1. 19. Let M and N be invertible n × n matrices. Show that (M N )−1 = N −1M −1 and compare your work to Exercise 31 in Section 5.2. 612 Systems of Equations and Matrices 8.4.2 Answers 1. A−1 = −2 3 1 2 − 1 2 3. C is not invertible 5. E−1 =   −1 8 4 1 −3 −1 1 −6 −3   2. B−1 = 3 7 5 12 4. D−1 = 9 2 − 1 2 8 −1 6. F −   
16 3 0 2 − 35 −90 − 1 2 0 1 5 0 −7 −36      0 7 2 0 1 7. G is not invertible 8. H −1 = The coefficient matrix is B−1 from Exercise 2 above so the inverse we need is (B−1)−1 = B. 12 −7 −5 3 26 39 9. = 39 −13 10. 11. 12 −7 −5 3 0 −1 12 −7 −5 3 −7 5 = = 7 −3 So x = 39 and y = −13. So x = 7 and y = −3. −119 50 So x = −119 and y = 50. The coefficient matrix is E =   3 0 2 −1 4 3 2 −5   from Exercise 5, so E−1 =   −1 8 4 1 −3 −1 1 −6 −3   12. 13. 14.       −1 8 4 1 −3 −1 1 −6 −3 −1 8 4 1 −3 −1 1 −6 −3 −1 8 4 1 −3 −1 1 −6 −1 1 1 8 −3 −6 4 −1 −3   So x = −1, y = 1 and z = 1.   So x = 8, y = −3 and z = −6.   So x = 4, y = −1 and z = −3. 8.4 Systems of Linear Equations: Matrix Inverses 613 16. (a) The adult male Sasquatch needs: 3 servings of Ippizuti Fish, 10 servings of Misty Mush- rooms, and 15 servings of Sun Berries daily. (b) The adult female Sasquatch needs: 3 servings of Ippizuti Fish and 20 servings of Sun Berries daily. (No Misty Mushrooms are needed!) (c) The adolescent Sasquatch requires 10 servings of Ippizuti Fish daily. (No Misty Mush- rooms or Sun Berries are needed!) 17. (a) A−1 =     1
2 1 17 33 19 10 19 11     (b) 1 1 2 17 33 19 10 19 11   42 3 12 1 100 −23 38 57 39 36 −12 −42 −152 −46 −133    =  6 20 2 9 15 7 15 12 0 22 5 9 19 0 0   (c) ‘LOGS RULE’ 614 Systems of Equations and Matrices 8.5 Determinants and Cramer’s Rule 8.5.1 Definition and Properties of the Determinant In this section we assign to each square matrix A a real number, called the determinant of A, which will eventually lead us to yet another technique for solving consistent independent systems of linear equations. The determinant is defined recursively, that is, we define it for 1 × 1 matrices and give a rule by which we can reduce determinants of n × n matrices to a sum of determinants of (n − 1) × (n − 1) matrices.1 This means we will be able to evaluate the determinant of a 2 × 2 matrix as a sum of the determinants of 1 × 1 matrices; the determinant of a 3 × 3 matrix as a sum of the determinants of 2 × 2 matrices, and so forth. To explain how we will take an n × n matrix and distill from it an (n − 1) × (n − 1), we use the following notation. Definition 8.12. Given an n × n matrix A where n > 1, the matrix Aij is the (n − 1) × (n − 1) matrix formed by deleting the ith row of A and the jth column of A. For example, using the matrix A below, we find the matrix A23 by deleting the second row and third column of A1 5 1 4 2   Delete R2 and C3 −−−−−−−−−−−→ A23 = 3 1 2 1 We are now in the position to define the determinant of a matrix. Definition 8.13. Given an n × n matrix A the determinant of A, denoted det(A), is de�
ned as follows:

If n = 1, then A = [a11] and det(A) = det ([a11]) = a11.

If n > 1, then A = [aij]n×n and

    det(A) = det ([aij]n×n) = a11 det (A11) − a12 det (A12) + − ... + (−1)1+n a1n det (A1n)

There are two commonly used notations for the determinant of a matrix A: ‘det(A)’ and ‘|A|’. We have chosen to use the notation det(A) as opposed to |A| because we find that the latter is often confused with absolute value, especially in the context of a 1 × 1 matrix. In the expansion a11 det (A11) − a12 det (A12) + − ... + (−1)1+n a1n det (A1n), the notation ‘+ − ... + (−1)1+n a1n’ means that the signs alternate and the final sign is dictated by the sign of the quantity (−1)1+n. Since the entries a11, a12 and so forth up through a1n comprise the first row of A, we say we are finding the determinant of A by ‘expanding along the first row’. Later in the section, we will develop a formula for det(A) which allows us to find it by expanding along any row.

Applying Definition 8.13 to the matrix A = [ 4  −3 ]
                                            [ 2   1 ]
we get

    det(A) = det [ 4  −3 ]
                 [ 2   1 ]
           = 4 det (A11) − (−3) det (A12)
           = 4 det([1]) + 3 det([2])
           = 4(1) + 3(2)
           = 10

1We will talk more about the term ‘recursively’ in Section 9.1.

For a generic 2 × 2 matrix A = [ a  b ]
                               [ c  d ]
we get

    det(A) = det [ a  b ]
                 [ c  d ]
           = a det (A11) − b det (A12)
           = a det ([d]) − b det ([c])
           = ad − bc

This formula
is worth remembering Equation 8.1. For a 2 × 2 matrix, det a b c d = ad − bc Applying Definition 8.13 to the 3 × 3 matrix A =   3 1 0 −1 1 2 2 5 4   we obtain det(A) = det     3 1 0 − det (A11) − 1 det (A12) + 2 det (A13) = 3 det −1 5 1 4 − det 0 5 2 4 + 2 det 0 −1 1 2 = 3((−1)(4) − (5)(1)) − ((0)(4) − (5)(2)) + 2((0)(1) − (−1)(2)) = 3(−9) − (−10) + 2(2) = −13 To evaluate the determinant of a 4 × 4 matrix, we would have to evaluate the determinants of four 3 × 3 matrices, each of which involves the finding the determinants of three 2 × 2 matrices. As you can see, our method of evaluating determinants quickly gets out of hand and many of you may be reaching for the calculator. There is some mathematical machinery which can assist us in calculating determinants and we present that here. Before we state the theorem, we need some more terminology. 616 Systems of Equations and Matrices Definition 8.14. Let A be an n × n matrix and Aij be defined as in Definition 8.12. The ij minor of A, denoted Mij is defined by Mij = det (Aij). The ij cofactor of A, denoted Cij is defined by Cij = (−1)i+jMij = (−1)i+j det (Aij). We note that in Definition 8.13, the sum a11 det (A11) − a12 det (A12) + −... + (−1)1+na1n det (A1n) can be rewritten as a11(−1)1+1 det (A11) + a12(−1)1+2 det (A12) +... + a1n(−1)1+n det
(A1n) which, in the language of cofactors is a11C11 + a12C12 +... + a1nC1n We are now ready to state our main theorem concerning determinants. Theorem 8.7. Properties of the Determinant: Let A = [aij]n×n. We may find the determinant by expanding along any row. That is, for any 1 ≤ k ≤ n, det(A) = ak1Ck1 + ak2Ck2 +... + aknCkn If A is the matrix obtained from A by: – interchanging any two rows, then det(A) = − det(A). – replacing a row with a nonzero multiple (say c) of itself, then det(A) = c det(A) – replacing a row with itself plus a multiple of another row, then det(A) = det(A) If A has two identical rows, or a row consisting of all 0’s, then det(A) = 0. If A is upper or lower triangular,a then det(A) is the product of the entries on the main diagonal.b If B is an n × n matrix, then det(AB) = det(A) det(B). det (An) = det(A)n for all natural numbers n. A is invertible if and only if det(A) = 0. In this case, det A−1 = 1 det(A). aSee Exercise 8.3.1 in 8.3. bSee page 585 in Section 8.3. Unfortunately, while we can easily demonstrate the results in Theorem 8.7, the proofs of most of these properties are beyond the scope of this text. We could prove these properties for generic 2 × 2 8.5 Determinants and Cramer’s Rule 617 or even 3 × 3 matrices by brute force computation, but this manner of proof belies the elegance and symmetry of the determinant. We will prove what few properties we can after we have developed some more tools such as the Principle of Mathematical Induction in Section 9.3.2 For the moment, let us demonstrate some of the properties listed in Theorem 8.7 on the matrix A below. (Others will be discussed in the Exercises.) A =   3 1 0 −1 1
2   2 5 4 We found det(A) = −13 by expanding along the first row. To take advantage of the 0 in the second row, we use Theorem 8.7 to find det(A) = −13 by expanding along that row.   det   1 3 0 −1 1 2 2 5 4     = 0C21 + (−1)C22 + 5C23 = (−1)(−1)2+2 det (A22) + 5(−1)2+3 det (A23) = − det 3 2 2 4 − 5 det 3 1 2 1 = −((3)(4) − (2)(2)) − 5((3)(1) − (2)(1)) = −8 − 5 = −13 In general, the sign of (−1)i+j in front of the minor in the expansion of the determinant follows an alternating pattern. Below is the pattern for 2 × 2, 3 × 3 and 4 × 4 matrices, and it extends naturally to higher dimensions. + − − +   + − + − + − + − +       + − + − − + − + + − + − − + − +     The reader is cautioned, however, against reading too much into these sign patterns. In the example above, we expanded the 3 × 3 matrix A by its second row and the term which corresponds to the second entry ended up being negative even though the sign attached to the minor is (+). These signs represent only the signs of the (−1)i+j in the formula; the sign of the corresponding entry as well as the minor itself determine the ultimate sign of the term in the expansion of the determinant. To illustrate some of the other properties in Theorem 8.7, we use row operations to transform our 3 × 3 matrix A into an upper triangular matrix, keeping track of the row operations, and labeling 2For a very elegant treatment, take a course in Linear Algebra. There, you will most likely see the treatment of determinants logically reversed than what is presented here. Specifically, the determinant is defined as a function
which takes a square matrix to a real number and satisfies some of the properties in Theorem 8.7. From that function, a formula for the determinant is developed. 618 Systems of Equations and Matrices each successive matrix.3   3 1 0 −1 1 2 A   2 5 4 Replace R3 −−−−−−−−−−→ with − 2 3 R1 + R3   3 1 0 − Replace R3 with −−−−−−−−−−→ 1 3 R2 + R3   3 1 0 −1 0 0 C   2 5 13 3 Theorem 8.7 guarantees us that det(A) = det(B) = det(C) since we are replacing a row with itself plus a multiple of another row moving from one matrix to the next. Furthermore, since C is upper triangular, det(C) is the product of the entries on the main diagonal, in this case det(C) = (3)(−1) 13 = −13. This demonstrates the utility of using row operations to assist in 3 calculating determinants. This also sheds some light on the connection between a determinant and invertibility. Recall from Section 8.4 that in order to find A−1, we attempt to transform A to In using row operations A In Gauss Jordan Elimination −−−−−−−−−−−−−−−−→ In A−1 As we apply our allowable row operations on A to put it into reduced row echelon form, the determinant of the intermediate matrices can vary from the determinant of A by at most a nonzero multiple. This means that if det(A) = 0, then the determinant of A’s reduced row echelon form must also be nonzero, which, according to Definition 8.4 means that all the main diagonal entries on A’s reduced row echelon form must be 1. That is, A’s reduced row echelon form is In, and A is invertible. Conversely, if A is invertible, then A can be transformed into In using row operations. Since det (In) = 1 = 0, our same logic implies det(A) = 0. Basically, we have established
that the determinant determines whether or not the matrix A is invertible.4 It is worth noting that when we first introduced the notion of a matrix inverse, it was in the context of solving a linear matrix equation. In effect, we were trying to ‘divide’ both sides of the matrix equation AX = B by the matrix A. Just like we cannot divide a real number by 0, Theorem 8.7 tells us we cannot ‘divide’ by a matrix whose determinant is 0. We also know that if the coefficient matrix of a system of linear equations is invertible, then system is consistent and independent. It follows, then, that if the determinant of said coefficient is not zero, the system is consistent and independent. 8.5.2 Cramer’s Rule and Matrix Adjoints In this section, we introduce a theorem which enables us to solve a system of linear equations by means of determinants only. As usual, the theorem is stated in full generality, using numbered unknowns x1, x2, etc., instead of the more familiar letters x, y, z, etc. The proof of the general case is best left to a course in Linear Algebra. 3Essentially, we follow the Gauss Jordan algorithm but we don’t care about getting leading 1’s. 4In Section 8.5.2, we learn determinants (specifically cofactors) are deeply connected with the inverse of a matrix. 8.5 Determinants and Cramer’s Rule 619 Theorem 8.8. Cramer’s Rule: Suppose AX = B is the matrix form of a system of n linear equations in n unknowns where A is the coefficient matrix, X is the unknowns matrix, and B is the constant matrix. If det(A) = 0, then the corresponding system is consistent and independent and the solution for unknowns x1, x2,... xn is given by: xj = det (Aj) det(A), where Aj is the matrix A whose jth column has been replaced by the constants in B. In words, Cramer’s Rule tells us we can solve for each unknown, one at a time, by finding the ratio of the determinant of
Aj to that of the determinant of the coefficient matrix. The matrix Aj is found by replacing the column in the coefficient matrix which holds the coefficients of xj with the constants of the system. The following example fleshes out this method. Example 8.5.1. Use Cramer’s Rule to solve for the indicated unknowns. 1. Solve 2x1 − 3x2 = 4 5x1 + x2 = −2 for x1 and x2 2. Solve    2x − 3y + z = −1 1 0 x − y + z = 3x − 4z = for z. Solution. 1. Writing this system in matrix form, we find A = 2 −3 1 5 X = x1 x2 B = 4 −2 To find the matrix A1, we remove the column of the coefficient matrix A which holds the coefficients of x1 and replace it with the corresponding entries in B. Likewise, we replace the column of A which corresponds to the coefficients of x2 with the constants to form the matrix A2. This yields A1 = 4 −3 1 −2 A2 = 2 4 5 −2 Computing determinants, we get det(A) = 17, det (A1) = −2 and det (A2) = −24, so that x1 = det (A1) det(A) = − 2 17 x2 = det (A2) det(A) = − 24 17 The reader can check that the solution to the system is − 2 17, − 24 17. 620 Systems of Equations and Matrices 2. To use Cramer’s Rule to find z, we identify x3 as z. We have   A = 2 −3 1 −1 3 1 1 0 −1 1 0    A3 = Az =  2 −3 −1 1 −1 1 0 0 3   Expanding both det(A) and det (Az) along the third rows (to take advantage of the 0’s) gives z = det (Az) det(A) = −12 −10 = 6 5 The reader is
encouraged to solve this system for x and y similarly and check the answer. Our last application of determinants is to develop an alternative method for finding the inverse of a matrix.5 Let us consider the 3 × 3 matrix A which we so extensively studied in Section 8.5.1 A =   3 1 0 −1 1 2   2 5 4 We found through a variety of methods that det(A) = −13. To our surprise and delight, its inverse below has a remarkable number of 13’s in the denominators of its entries. This is no coincidence.   2   A−1 = 13 − 7 9 13 − 10 − 2 13 Recall that to find A−1, we are essentially solving the matrix equation AX = I3, where X = [xij]3×3 is a 3 × 3 matrix. Because of how matrix multiplication is defined, the first column of I3 is the product of A with the first column of X, the second column of I3 is the product of A with the second column of X and the third column of I3 is the product of A with the third column of X. In other words, we are solving three equations6 13 − 8 13 15 13 3 13 13 1 13    A  x11 x21 x31    =    1 0 0  A  x12 x22 x32    =    0 1 0  A  x13 x23 x33    =    0 0 1 We can solve each of these systems using Cramer’s Rule. Focusing on the first system, we have A1 =   1 1 0 −1 1 0 2 5 4    A2 =  3 1 2 0 0 5 2 0 4    A3 =  3 1 0 −1 1 2   1 0 0 5We are developing
a method in the forthcoming discussion. As with the discussion in Section 8.4 when we developed the first algorithm to find matrix inverses, we ask that you indulge us. 6The reader is encouraged to stop and think this through. 8.5 Determinants and Cramer’s Rule 621 If we expand det (A1) along the first row, we get det (A1) = det = det −1 5 1 4 −1 5 1 4 − det 0 5 0 4 + 2 det 0 −1 1 0 Amazingly, this is none other than the C11 cofactor of A. The reader is invited to check this, as well as the claims that det (A2) = C12 and det (A3) = C13.7 (To see this, though it seems unnatural to do so, expand along the first row.) Cramer’s Rule tells us x11 = det (A1) det(A) = C11 det(A), x21 = det (A2) det(A) = C12 det(A), x31 = det (A3) det(A) = C13 det(A) So the first column of the inverse matrix X is:   =   x11 x21 x31                   C11 det(A) C12 det(A) C13 det(A) = 1 det(A)     C11 C12 C13 Notice the reversal of the subscripts going from the unknown to the corresponding cofactor of A. This trend continues and we get   =   x12 x22 x32 1 det(A)     C21 C22 C23   =   x13 x23 x33 1 det(A)     C31 C32 C33 Putting all of these together, we have obtained a new
and surprising formula for A−1, namely A−1 = 1 det(A)   C11 C21 C31 C12 C22 C32 C13 C23 C33   To see that this does indeed yield A−1, we find all of the cofactors of A C11 = −9, C21 = −2, C31 = C12 = 10, C22 = C13 = 7 8, C32 = −15 2, C23 = −1, C33 = −3 And, as promised, 7In a solid Linear Algebra course you will learn that the properties in Theorem 8.7 hold equally well if the word ‘row’ is replaced by the word ‘column’. We’re not going to get into column operations in this text, but they do make some of what we’re trying to say easier to follow. 622 Systems of Equations and Matrices A−1 = 1 det(A)   C11 C21 C31 C12 C22 C32 C13 C23 C33   = −   1 13 −9 −2 7 8 −15 10 2 −1 −3   =    2 13 − 7 9 13 − 10 − 2 13 13 − 8 13 1 13    13 15 13 3 13 To generalize this to invertible n × n matrices, we need another definition and a theorem. Our definition gives a special name to the cofactor matrix, and the theorem tells us how to use it along with det(A) to find the inverse of a matrix. Definition 8.15. Let A be an n × n matrix, and Cij denote the ij cofactor of A. The adjoint of A, denoted adj(A) is the matrix whose ij-entry is the ji cofactor of A, Cji. That is adj(A) =      C11 C21 C12 C22...... C1n C2n... Cn1... Cn2...... Cnn  �
��    This new notation greatly shortens the statement of the formula for the inverse of a matrix. Theorem 8.9. Let A be an invertible n × n matrix. Then A−1 = 1 det(A) adj(A) For 2 × 2 matrices, Theorem 8.9 reduces to a fairly simple formula. Equation 8.2. For an invertible 2 × 2 matrix, a b c d −1 = 1 ad − bc d −b a −c The proof of Theorem 8.9 is, like so many of the results in this section, best left to a course in Linear Algebra. In such a course, not only do you gain some more sophisticated proof techniques, you also gain a larger perspective. The authors assure you that persistence pays off. If you stick around a few semesters and take a course in Linear Algebra, you’ll see just how pretty all things matrix really are - in spite of the tedious notation and sea of subscripts. Within the scope of this text, we will prove a few results involving determinants in Section 9.3 once we have the Principle of Mathematical Induction well in hand. Until then, make sure you have a handle on the mechanics of matrices and the theory will come eventually. 8.5 Determinants and Cramer’s Rule 623 8.5.3 Exercises In Exercises 1 - 8, compute the determinant of the given matrix. (Some of these matrices appeared in Exercises 1 - 8 in Section 8.4.) 1. B = 12 −7 −5 3 3. Q = x x2 1 2x     5. F = 73 4 −3 6 2 i −1 k j 5 0 9 −4 −2   2. C = 6 15 14 35           4. L = 6. G = 8. H = 1 x3 3 x4     ln(x) x3 1 − 3 ln(x) x4   2 3 3 11 4 19 − 1 2 3 1 2 −2 0 0 0 −3 0 8 7
16 0 4 1 −5 1     In Exercises 9 - 14, use Cramer’s Rule to solve the system of linear equations. 3x + 7y = 26 5x + 12y = 39 9. 11. 13.    x + y = 8000 0.03x + 0.05y = 250 x + y + z = 3 2x − y + z = 0 −3x + 5y + 7z = 7 In Exercises 15 - 16, use Cramer’s Rule to solve for x4. 15.    x1 − x3 = −2 2x2 − x4 = 0 x1 − 2x2 + x3 = 0 −x3 + x4 = 1 16. 10. 2x − 4y = 5 10x + 13y = −6 12 6x + 7y = 3 14.       3x + y − 2z = 10 4x − y + z = 5 x − 3y − 4z = −1 4x1 + x2 = x2 − 3x3 = 10x1 + x3 + x4 = 4 1 0 −x2 + x3 = −3 624 Systems of Equations and Matrices In Exercises 17 - 18, find the inverse of the given matrix using their determinants and adjoints. 17. B = 12 −7 −5 3 183 4 −3 6 2 19. Carl’s Sasquatch Attack! Game Card Collection is a mixture of common and rare cards. Each common card is worth $0.25 while each rare card is worth $0.75. If his entire 117 card collection is worth $48.75, how many of each kind of card does he own? 20. How much of a 5 gallon 40% salt solution should be replaced with pure water to obtain 5 gallons of a 15% solution? 21. How much of a 10 liter 30% acid solution must be replaced with pure acid to obtain 10 liters of a 50% solution? 22. Daniel’s Exotic Animal Rescue houses snakes, tarantulas and scorpions. When
asked how many animals of each kind he boards, Daniel answered: ‘We board 49 total animals, and I am responsible for each of their 272 legs and 28 tails.’ How many of each animal does the Rescue board? (Recall: tarantulas have 8 legs and no tails, scorpions have 8 legs and one tail, and snakes have no legs and one tail.) 23. This exercise is a continuation of Exercise 16 in Section 8.4. Just because a system is consistent independent doesn’t mean it will admit a solution that makes sense in an applied setting. Using the nutrient values given for Ippizuti Fish, Misty Mushrooms, and Sun Berries, use Cramer’s Rule to determine the number of servings of Ippizuti Fish needed to meet the needs of a daily diet which requires 2500 calories, 1000 grams of protein, and 400 milligrams of Vitamin X. Now use Cramer’s Rule to find the number of servings of Misty Mushrooms required. Does a solution to this diet problem exist? 11 −7 −3 15 9 6 1 −5 9 6 −7 11 24. Let R =, and a) Show that det(RS) = det(R) det(S) (b) Show that det(T ) = − det(R) (c) Show that det(U ) = −3 det(S) 25. For M, N, and P below, show that det(M ) = 0, det(N ) = 0 and det(P ) = 02 −4 −6 9 8 7   8.5 Determinants and Cramer’s Rule 625 26. Let A be an arbitrary invertible 3 × 3 matrix. (a) Show that det(I3) = 1. (See footnote8 below.) (b) Using the facts that AA−1 = I3 and det(AA−1) = det(A) det(A−1), show that det(A−1) = 1 det(A) The purpose of Exercises 27 - 30 is to introduce you to the eigenvalues and eigenvectors of a matrix.9 We begin with an example using a 2 × 2 matrix and then guide you through some exercises using a 3 × 3 matrix. Consider the matrix C = 6 15 14 35 from Exercise 2. We know that det(C) = 0 which means that CX =
02×2 does not have a unique solution. So there is a nonzero matrix Y with CY = 02×2. In fact, every matrix of the form Y = − 5 2 t t is a solution to CX = 02×2, so there are infinitely many matrices such that CX = 02×2. But consider the matrix X41 = 3 7 It is NOT a solution to CX = 02×2, but rather, 3 7 6 15 14 35 CX41 = = 123 287 = 41 3 7 In fact, if Z is of the form Z = 3 7 t t then CZ = 6 15 14 35 3 7 t t = 123 7 t 41t = 41 3 7 t t = 41Z for all t. The big question is “How did we know to use 41?” We need a number λ such that CX = λX has nonzero solutions. We have demonstrated that λ = 0 and λ = 41 both worked. Are there others? If we look at the matrix equation more closely, what 8If you think about it for just a moment, you’ll see that det(In) = 1 for any natural number n. The formal proof of this fact requires the Principle of Mathematical Induction (Section 9.3) so we’ll stick with n = 3 for the time being. 9This material is usually given its own chapter in a Linear Algebra book so clearly we’re not able to tell you everything you need to know about eigenvalues and eigenvectors. They are a nice application of determinants, though, so we’re going to give you enough background so that you can start playing around with them. 626 Systems of Equations and Matrices we really wanted was a nonzero solution to (C − λI2)X = 02×2 which we know exists if and only if the determinant of C − λI2 is zero.10 So we computed det(C − λI2) = det 6 − λ 15 14 35 − λ = (6 − λ)(35 − λ) − 14 · 15 = λ2 − 41λ This is called the characteristic polynomial of the matrix C and it has two zeros: λ = 0 and λ = 41. That’s how we knew to use 41 in our work above. The fact that �
� = 0 showed up as one of the zeros of the characteristic polynomial just means that C itself had determinant zero which we already knew. Those two numbers are called the eigenvalues of C. The corresponding matrix solutions to CX = λX are called the eigenvectors of C and the ‘vector’ portion of the name will make more sense after you’ve studied vectors. Now it’s your turn. In the following exercises, you’ll be using the matrix G from Exercise 6 11 4 19 27. Show that the characteristic polynomial of G is p(λ) = −λ(λ − 1)(λ − 22). That is, compute det (G − λI3). 28. Let G0 = G. Find the parametric description of the solution to the system of linear equations given by GX = 03×3. 29. Let G1 = G − I3. Find the parametric description of the solution to the system of linear equations given by G1X = 03×3. Show that any solution to G1X = 03×3 also has the property that GX = 1X. 30. Let G22 = G − 22I3. Find the parametric description of the solution to the system of linear equations given by G22X = 03×3. Show that any solution to G22X = 03×3 also has the property that GX = 22X. 10Think about this. 8.5 Determinants and Cramer’s Rule 627 8.5.4 Answers 1. det(B) = 1 3. det(Q) = x2 5. det(F ) = −12 2. det(C) = 0 4. det(L) = 1 x7 6. det(G) = 0 7. det(V ) = 20i + 43j + 4k 8. det(H) = −2 9. x = 39, y = −13 11. x = 7500, y = 500 13. x = 1, y = 2, z = 0 15. x4 = 4 17. B−1 = 3 7 5 12 18. F − 10. x = 41 66, y = − 31 33 12. x = 76 47, y = − 45 47 14. x = 121 60, y = 131 60, z = − 53 60 16. x4 = −1  �
��  19. Carl owns 78 common cards and 39 rare cards. 20. 3.125 gallons. 21. 20 7 ≈ 2.85 liters. 22. The rescue houses 15 snakes, 21 tarantulas and 13 scorpions. 23. Using Cramer’s Rule, we find we need 53 servings of Ippizuti Fish to satisfy the dietary requirements. The number of servings of Misty Mushrooms required, however, is −1120. Since it’s impossible to have a negative number of servings, there is no solution to the applied problem, despite there being a solution to the mathematical problem. A cautionary tale about using Cramer’s Rule: just because you are guaranteed a mathematical answer for each variable doesn’t mean the solution will make sense in the ‘real’ world. 628 Systems of Equations and Matrices 8.6 Partial Fraction Decomposition This section uses systems of linear equations to rewrite rational functions in a form more palatable to Calculus students. In College Algebra, the function f (x) = x2 − x − 6 x4 + x2 (1) is written in the best form possible to construct a sign diagram and to find zeros and asymptotes, but certain applications in Calculus require us to rewrite f (x) as f (x) = x + 7 x2 + 1 − 1 x − 6 x2 (2) If we are given the form of f (x) in (2), it is a matter of Intermediate Algebra to determine a common denominator to obtain the form of f (x) given in (1). The focus of this section is to develop a method by which we start with f (x) in the form of (1) and ‘resolve it into partial fractions’ to obtain the form in (2). Essentially, we need to reverse the least common denominator process. Starting with the form of f (x) in (1), we begin by factoring the denominator x2 − x − 6 x4 + x2 = x2 − x − 6 x2 (x2 + 1) We now think about which individual denominators could contribute to obtain x2 x2 + 1 as the least common denominator. Certainly x2 and x2 + 1, but are there any other factors? Since x2 + 1 is an irreducible quadratic1 there
are no factors of it that have real coefficients which can contribute to the denominator. The factor x2, however, is not irreducible, since we can think of it as x2 = xx = (x − 0)(x − 0), a so-called ‘repeated’ linear factor.2 This means it’s possible that a term with a denominator of just x contributed to the expression as well. What about something like x x2 + 1? This, too, could contribute, but we would then wish to break down that denominator into x and x2 + 1, so we leave out a term of that form. At this stage, we have guessed x2 − x − 6 x4 + x2 = x2 − x − 6 x2 (x2 + 1) =? x +? x2 +? x2 + 1 Our next task is to determine what form the unknown numerators take. It stands to reason that since the expression x2−x−6 is ‘proper’ in the sense that the degree of the numerator is less than x4+x2 the degree of the denominator, we are safe to make the ansatz that all of the partial fraction resolvents are also. This means that the numerator of the fraction with x as its denominator is just a constant and the numerators on the terms involving the denominators x2 and x2 + 1 are at most linear polynomials. That is, we guess that there are real numbers A, B, C, D and E so that x2 − x − 6 x4 + x2 = x2 − x − 6 x2 (x2 + 1) = A x + Bx + C x2 + Dx + E x2 + 1 1Recall this means it has no real zeros; see Section 3.4. 2Recall this means x = 0 is a zero of multiplicity 2. 8.6 Partial Fraction Decomposition 629 However, if we look more closely at the term Bx+C term B Hence, we drop it and, after re-labeling, we find ourselves with our new guess: x2. The x which means it contributes nothing new to our expansion. x has the same form as the term A, we see that Bx+C x2 = Bx x2 + C x2 = B x
+ C x2 x2 − x − 6 x4 + x2 = x2 − x − 6 x2 (x2 + 1) = A x + B x2 + Cx + D x2 + 1 Our next task is to determine the values of our unknowns. Clearing denominators gives x2 − x − 6 = Ax x2 + 1 + B x2 + 1 + (Cx + D)x2 Gathering the like powers of x we have x2 − x − 6 = (A + C)x3 + (B + D)x2 + Ax + B In order for this to hold for all values of x in the domain of f, we equate the coefficients of corresponding powers of x on each side of the equation3 and obtain the system of linear equations    (E1) A + C = (E2) B + D = (E3) (E4) 0 From equating coefficients of x3 1 From equating coefficients of x2 A = −1 From equating coefficients of x B = −6 From equating the constant terms To solve this system of equations, we could use any of the methods presented in Sections 8.1 through 8.5, but none of these methods are as efficient as the good old-fashioned substitution you learned in Intermediate Algebra. From E3, we have A = −1 and we substitute this into E1 to get C = 1. Similarly, since E4 gives us B = −6, we have from E2 that D = 7. We get x2 − x − 6 x4 + x2 = x2 − x − 6 x2 (x2 + 1) = − 1 x − 6 x2 + x + 7 x2 + 1 which matches the formula given in (2). As we have seen in this opening example, resolving a rational function into partial fractions takes two steps: first, we need to determine the form of the decomposition, and then we need to determine the unknown coefficients which appear in said form. Theorem 3.16 guarantees that any polynomial with real coefficients can be factored over the real numbers as a product of linear factors and irreducible quad
ratic factors. Once we have this factorization of the denominator of a rational function, the next theorem tells us the form the decomposition takes. The reader is encouraged to review the Factor Theorem (Theorem 3.6) and its connection to the role of multiplicity to fully appreciate the statement of the following theorem. 3We will justify this shortly. 630 Systems of Equations and Matrices Theorem 8.10. Suppose R(x) = than the degree of D(x) and N (x) and D(x) have no common factors.a is a rational function where the degree of N (x) less N (x) D(x) If α is a real zero of D of multiplicity m which corresponds to the linear factor ax + b, the partial fraction decomposition includes A1 ax + b + A2 (ax + b)2 +... + Am (ax + b)m for real numbers A1, A2,... Am. If α is a non-real zero of D of multiplicity m which corresponds to the irreducible quadratic ax2 + bx + c, the partial fraction decomposition includes B1x + C1 ax2 + bx + c + B2x + C2 (ax2 + bx + c)2 +... + Bmx + Cm (ax2 + bx + c)m for real numbers B1, B2,... Bm and C1, C2,... Cm. aIn other words, R(x) is a proper rational function which has been fully reduced. The proof of Theorem 8.10 is best left to a course in Abstract Algebra. Notice that the theorem provides for the general case, so we need to use subscripts, A1, A2, etc., to denote different unknown coefficients as opposed to the usual convention of A, B, etc.. The stress on multiplicities is to help us correctly group factors in the denominator. For example, consider the rational function 3x − 1 (x2 − 1) (2 − x − x2) Factoring the denominator to find the zeros, we get (x + 1)(x − 1)(1 − x)(2 + x). We find x = −1 and x = −2 are zeros of multiplicity
one but that x = 1 is a zero of multiplicity two due to the two different factors (x − 1) and (1 − x). One way to handle this is to note that (1 − x) = −(x − 1) so 3x − 1 (x + 1)(x − 1)(1 − x)(2 + x) = 3x − 1 −(x − 1)2(x + 1)(x + 2) = 1 − 3x (x − 1)2(x + 1)(x + 2) from which we proceed with the partial fraction decomposition 1 − 3x (x − 1)2(x + 1)(x + 2) = A x − 1 + B (x − 1) Turning our attention to non-real zeros, we note that the tool of choice to determine the irreducibility of a quadratic ax2 + bx + c is the discriminant, b2 − 4ac. If b2 − 4ac < 0, the quadratic admits a pair of non-real complex conjugate zeros. Even though one irreducible quadratic gives two distinct non-real zeros, we list the terms with denominators involving a given irreducible quadratic only once to avoid duplication in the form of the decomposition. The trick, of course, is factoring the 8.6 Partial Fraction Decomposition 631 denominator or otherwise finding the zeros and their multiplicities in order to apply Theorem 8.10. We recommend that the reader review the techniques set forth in Sections 3.3 and 3.4. Next, we state a theorem that if two polynomials are equal, the corresponding coefficients of the like powers of x are equal. This is the principal by which we shall determine the unknown coefficients in our partial fraction decomposition. Theorem 8.11. Suppose anxn + an−1xn−1 + · · · + a2x2 + a1x + a0 = bmxm + mm−1xm−1 + · · · + b2x2 + b1x + b0 for all x in an open interval I. Then n = m and ai = bi for all i = 1... n. Believe it or not, the proof of Theorem 8.11 is a
consequence of Theorem 3.14. Define p(x) to be the difference of the left hand side of the equation in Theorem 8.11 and the right hand side. Then p(x) = 0 for all x in the open interval I. If p(x) were a nonzero polynomial of degree k, then, by Theorem 3.14, p could have at most k zeros in I, and k is a finite number. Since p(x) = 0 for all the x in I, p has infinitely many zeros, and hence, p is the zero polynomial. This means there can be no nonzero terms in p(x) and the theorem follows. Arguably, the best way to make sense of either of the two preceding theorems is to work some examples. Example 8.6.1. Resolve the following rational functions into partial fractions. 1. R(x) = x + 5 2x2 − x − 1 4. R(x) = 4x3 x2 − 2 Solution. 2. R(x) = 5. R(x) = 3 x3 − 2x2 + x x3 + 5x − 1 x4 + 6x2 + 9 3. R(x) = 6. R(x) = 3 x3 − x2 + x 8x2 x4 + 16 1. We begin by factoring the denominator to find 2x2 − x − 1 = (2x + 1)(x − 1). We get x = − 1 2 and x = 1 are both zeros of multiplicity one and thus we know x + 5 2x2 − x − 1 = x + 5 (2x + 1)(x − 1) = A 2x + 1 + B x − 1 Clearing denominators, we get x+5 = A(x−1)+B(2x+1) so that x+5 = (A+2B)x+B −A. Equating coefficients, we get the system A + 2B = 1 −A + B = 5 This system is readily handled using the Addition Method from Section 8.1, and after adding both equations, we get 3B = 6 so B = 2. Using back substitution, we find A = −3.
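Those with a computer algebra system handy can also confirm decompositions like this one mechanically; the following one-line check uses SymPy’s apart function (our own choice of tool, not part of the text’s development).

    from sympy import symbols, apart

    x = symbols('x')
    print(apart((x + 5) / (2*x**2 - x - 1)))   # equals 2/(x - 1) - 3/(2*x + 1), matching A = -3 and B = 2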
Our answer is easily checked by getting a common denominator and adding the fractions. x + 5 2x2 − 2x + 1 632 Systems of Equations and Matrices 2. Factoring the denominator gives x3 − 2x2 + x = x x2 − 2x + 1 = x(x − 1)2 which gives x = 0 as a zero of multiplicity one and x = 1 as a zero of multiplicity two. We have 3 x3 − 2x2 + x = 3 x(x − 1)x − 1)2 Clearing denominators, we get 3 = A(x − 1)2 + Bx(x − 1) + Cx, which, after gathering up the like terms becomes 3 = (A + B)x2 + (−2A − B + C)x + A. Our system is    A + B = 0 −2A − B + C = 0 A = 3 Substituting A = 3 into A + B = 0 gives B = −3, and substituting both for A and B in −2A − B + C = 0 gives C = 3. Our final answer is 3 x3 − 2x2 + x − 1)2 3. The denominator factors as x x2 − x + 1. We see immediately that x = 0 is a zero of multiplicity one, but the zeros of x2 − x + 1 aren’t as easy to discern. The quadratic doesn’t factor easily, so we check the discriminant and find it to be (−1)2 − 4(1)(1) = −3 < 0. We find its zeros are not real so it is an irreducible quadratic. The form of the partial fraction decomposition is then 3 x3 − x2 + x = 3 x (x2 − x + 1) = A x + Bx + C x2 − x + 1 Proceeding as usual, we clear denominators and get 3 = A x2 − x + 1 + (Bx + C)x or 3 = (A + B)x2 + (−A + C)x + A. We get    A + B = 0 −A + C = 0 A = 3 From A = 3 and A + B = 0, we get
B = −3. From −A + C = 0, we get C = A = 3. We get 3 x3 − x2 + x = 3 x + 3 − 3x x2 − x + 1 4. Since 4x3 x2−2 isn’t proper, we use long division and we get a quotient of 4x with a remainder of 8x. That is, 4x3 x2−2 so we now work on resolving 8x x2−2 into partial fractions. The quadratic x2 −2, though it doesn’t factor nicely, is, nevertheless, reducible. Solving x2 −2 = 0 x2−2 = 4x + 8x 8.6 Partial Fraction Decomposition 633 √ gives us x = ± enables us to now factor x2 − 2 = x − 2, and each of these zeros must be of multiplicity one since Theorem 3.14 2. Hence, 2 x + √ √ 8x x2 − 2 = x − √ √ 8x or 8x = (A + B)x + (A − B) √ 2. Clearing fractions, we get 8x = A x + We get the system √ A + B = 8 2 = 0 (A − B) √ From (A − B) Hence, A = B = 4 and we get 2 = 0, we get A = B, which, when substituted into A + B = 8 gives B = 4. 4x3 x2 − 2 = 4x + 8x x2 − 2 = 4x +. At first glance, the denominator D(x) = x4 + 6x2 + 9 appears irreducible. However, D(x) has three terms, and the exponent on the first term is exactly twice that of the second. Rewriting + 6x2 + 9, we see it is a quadratic in disguise and factor D(x) = x2 + 32 D(x) = x22. Since x2 + 3 clearly has no real zeros, it is irreducible and the form of the decomposition is x3 + 5x − 1 x4 + 6x2 + 9 x3 + 5x − 1 (x2 + 3)2 = When we clear denominators, we find x3 + 5
x − 1 = (Ax + B) x2 + 3 + Cx + D which yields x3 + 5x − 1 = Ax3 + Bx2 + (3A + C)x + 3B + D. Our system is Cx + D (x2 + 3)2 Ax + B x2 + 3 + =    A = 1 B = 0 5 3A + C = 3B + D = −1 We have A = 1 and B = 0 from which we get C = 2 and D = −1. Our final answer is x3 + 5x − 1 x4 + 6x2 + 9 = x x2 + 3 + 2x − 1 (x2 + 3)2 6. Once again, the difficulty in our last example is factoring the denominator. In an attempt to get a quadratic in disguise, we write x4 + 16 = x22 + 42 = x22 + 8x2 + 42 − 8x2 = x2 + 42 − 8x2 634 Systems of Equations and Matrices and obtain a difference of two squares: x2 + 42 and 8x2 = 2x √ 22. Hence, x4 + 16 = x2 + 4 − 2x √ 2 x2 + 4 + 2x √ 2 = x2 − 2x √ 2 + 4 x2 + 2x √ 2 + 4 The discrimant of both of these quadratics works out to be −8 < 0, which means they are irreducible. We leave it to the reader to verify that, despite having the same discriminant, these quadratics have different zeros. The partial fraction decomposition takes the form 8x2 x4 + 16 = √ x2 − 2x √ 8x2 2 + 4 x2 + 2x √ 2 + 4 = 2 + 4 + (Cx + D) x2 − 2x x2 − 2x Ax + B √ √ 2 + 4 or + Cx + D √ x2 + 2x 2 + 4 2 + 4 We get 8x2 = (Ax + B) x2 + 2x 8x2 = (A + C)x3 + (2A √ 2 + B
8.6.1 Exercises

In Exercises 1 - 6, find only the form needed to begin the process of partial fraction decomposition. Do not create the system of linear equations or attempt to find the actual decomposition.

1. $\dfrac{7}{(x-3)(x+5)}$

2. $\dfrac{5x+4}{x(x-2)(2-x)}$

3. $\dfrac{m}{(7x-6)\left(x^2+9\right)}$

4. $\dfrac{ax^2+bx+c}{x^3(5x+9)\left(3x^2+7x+9\right)}$

5. $\dfrac{\text{A polynomial of degree} < 9}{(x+4)^5\left(x^2+1\right)^2}$

6. $\dfrac{\text{A polynomial of degree} < 7}{x(4x-1)^2\left(x^2+5\right)\left(9x^2+16\right)}$

In Exercises 7 - 18, find the partial fraction decomposition of the following rational expressions.

7. $\dfrac{2x}{x^2-1}$

8. $\dfrac{-7x+43}{3x^2+19x-14}$

9. $\dfrac{11x^2-5x-10}{5x^3-5x^2}$

10. $\dfrac{-2x^2+20x-68}{x^3+4x^2+4x+16}$

11. $\dfrac{-x^2+15}{4x^4+40x^2+36}$

12. $\dfrac{-21x^2+x-16}{3x^3+4x^2-3x+2}$

13. $\dfrac{5x^4-34x^3+70x^2-33x-19}{(x-3)^2}$

14. $\dfrac{x^6+5x^5+16x^4+80x^3-2x^2+6x-43}{x^3+5x^2+16x+80}$

15. $\dfrac{-7x^2-76x-208}{x^3+18x^2+108x+216}$

16. $\dfrac{-10x^4+x^3-19x^2+x-10}{x^5+2x^3+x}$

17. $\dfrac{4x^3-9x^2+12x+12}{x^4-4x^3+8x^2-16x+16}$

18. $\dfrac{2x^2+3x+14}{\left(x^2+2x+9\right)\left(x^2+x+5\right)}$
19. As we stated at the beginning of this section, the technique of resolving a rational function into partial fractions is a skill needed for Calculus. However, we hope to have shown you that it is worth doing if, for no other reason, it reinforces a hefty amount of algebra. One of the common algebraic errors the authors find students make is something along the lines of
\[ \frac{8}{x^2 - 9} = \frac{8}{x^2} - \frac{8}{9} \]
Think about why if the above were true, this section would have no need to exist.

8.6.2 Answers

1. $\dfrac{A}{x-3} + \dfrac{B}{x+5}$

2. $\dfrac{A}{x} + \dfrac{B}{x-2} + \dfrac{C}{(x-2)^2}$

3. $\dfrac{A}{7x-6} + \dfrac{Bx+C}{x^2+9}$

4. $\dfrac{A}{x} + \dfrac{B}{x^2} + \dfrac{C}{x^3} + \dfrac{D}{5x+9} + \dfrac{Ex+F}{3x^2+7x+9}$

5. $\dfrac{A}{x+4} + \dfrac{B}{(x+4)^2} + \dfrac{C}{(x+4)^3} + \dfrac{D}{(x+4)^4} + \dfrac{E}{(x+4)^5} + \dfrac{Fx+G}{x^2+1} + \dfrac{Hx+I}{\left(x^2+1\right)^2}$

6. $\dfrac{A}{x} + \dfrac{B}{4x-1} + \dfrac{C}{(4x-1)^2} + \dfrac{Dx+E}{x^2+5} + \dfrac{Fx+G}{9x^2+16}$

7. $\dfrac{2x}{x^2-1} = \dfrac{1}{x-1} + \dfrac{1}{x+1}$

8. $\dfrac{-7x+43}{3x^2+19x-14} = \dfrac{5}{3x-2} - \dfrac{4}{x+7}$

9. $\dfrac{11x^2-5x-10}{5x^3-5x^2} = \dfrac{3}{x} + \dfrac{2}{x^2} - \dfrac{4}{5(x-1)}$

10. $\dfrac{-2x^2+20x-68}{x^3+4x^2+4x+16} = -\dfrac{9}{x+4} + \dfrac{7x-8}{x^2+4}$

11. $\dfrac{-x^2+15}{4x^4+40x^2+36} = \dfrac{1}{2\left(x^2+1\right)} - \dfrac{3}{4\left(x^2+9\right)}$

12. $\dfrac{-21x^2+x-16}{3x^3+4x^2-3x+2} = -\dfrac{6}{x+2} - \dfrac{3x+5}{3x^2-2x+1}$

13. $\dfrac{5x^4-34x^3+70x^2-33x-19}{(x-3)^2} = 5x^2-4x+1 + \dfrac{9}{x-3} - \dfrac{1}{(x-3)^2}$

14. $\dfrac{x^6+5x^5+16x^4+80x^3-2x^2+6x-43}{x^3+5x^2+16x+80} = x^3 + \dfrac{x+1}{x^2+16} - \dfrac{3}{x+5}$

15. $\dfrac{-7x^2-76x-208}{x^3+18x^2+108x+216} = -\dfrac{7}{x+6} + \dfrac{8}{(x+6)^2} - \dfrac{4}{(x+6)^3}$

16. $\dfrac{-10x^4+x^3-19x^2+x-10}{x^5+2x^3+x} = -\dfrac{10}{x} + \dfrac{1}{x^2+1} + \dfrac{x}{\left(x^2+1\right)^2}$

17. $\dfrac{4x^3-9x^2+12x+12}{x^4-4x^3+8x^2-16x+16} = \dfrac{1}{x-2} + \dfrac{4}{(x-2)^2} + \dfrac{3x+1}{x^2+4}$

18. $\dfrac{2x^2+3x+14}{\left(x^2+2x+9\right)\left(x^2+x+5\right)} = \dfrac{1}{x^2+2x+9} + \dfrac{1}{x^2+x+5}$
8.7 Systems of Non-Linear Equations and Inequalities

In this section, we study systems of non-linear equations and inequalities. Unlike the systems of linear equations for which we have developed several algorithmic solution techniques, there is no general algorithm to solve systems of non-linear equations. Moreover,
all of the usual hazards of non-linear equations like extraneous solutions and unusual function domains are once again present. Along with the tried and true techniques of substitution and elimination, we shall often need equal parts tenacity and ingenuity to see a problem through to the end. You may find it necessary to review topics throughout the text which pertain to solving equations involving the various functions we have studied thus far. To get the section rolling we begin with a fairly routine example.

Example 8.7.1. Solve the following systems of equations. Verify your answers algebraically and graphically.

1. $\left\{ \begin{array}{rcl} x^2 + y^2 & = & 4 \\ 4x^2 + 9y^2 & = & 36 \end{array} \right.$

2. $\left\{ \begin{array}{rcl} x^2 + y^2 & = & 4 \\ 4x^2 - 9y^2 & = & 36 \end{array} \right.$

3. $\left\{ \begin{array}{rcl} x^2 + y^2 & = & 4 \\ y - 2x & = & 0 \end{array} \right.$

4. $\left\{ \begin{array}{rcl} x^2 + y^2 & = & 4 \\ y - x^2 & = & 0 \end{array} \right.$

Solution.

1. Since both equations contain $x^2$ and $y^2$ only, we can eliminate one of the variables as we did in Section 8.1.
\[ \left\{ \begin{array}{rcl} (E1)\; x^2 + y^2 & = & 4 \\ (E2)\; 4x^2 + 9y^2 & = & 36 \end{array} \right. \xrightarrow{\text{Replace } E2 \text{ with } -4E1 + E2} \left\{ \begin{array}{rcl} (E1)\; x^2 + y^2 & = & 4 \\ (E2)\; 5y^2 & = & 20 \end{array} \right. \]
From $5y^2 = 20$, we get $y^2 = 4$ or $y = \pm 2$. To find the associated $x$ values, we substitute each value of $y$ into one of the equations to find the resulting value of $x$. Choosing $x^2 + y^2 = 4$, we find that for both $y = -2$ and $y = 2$, we get $x = 0$. Our solution is thus $\{(0,2), (0,-2)\}$. To check this algebraically, we need to show that both points satisfy both of the original equations. We leave it to the reader to verify this. To check our answer graphically, we sketch both equations and look for their points of intersection. The graph of $x^2 + y^2 = 4$ is a circle centered at $(0,0)$ with a radius of 2, whereas the graph of $4x^2 + 9y^2 = 36$, when written in the standard form $\frac{x^2}{9} + \frac{y^2}{4} = 1$, is easily recognized as an ellipse centered at $(0,0)$ with a major axis along the $x$-axis of length
6 and a minor axis along the $y$-axis of length 4. We see from the graph that the two curves intersect at their $y$-intercepts only, $(0, \pm 2)$.

2. We proceed as before to eliminate one of the variables
\[ \left\{ \begin{array}{rcl} (E1)\; x^2 + y^2 & = & 4 \\ (E2)\; 4x^2 - 9y^2 & = & 36 \end{array} \right. \xrightarrow{\text{Replace } E2 \text{ with } -4E1 + E2} \left\{ \begin{array}{rcl} (E1)\; x^2 + y^2 & = & 4 \\ (E2)\; -13y^2 & = & 20 \end{array} \right. \]
Since the equation $-13y^2 = 20$ admits no real solution, the system is inconsistent. To verify this graphically, we note that $x^2 + y^2 = 4$ is the same circle as before, but when writing the second equation in standard form, $\frac{x^2}{9} - \frac{y^2}{4} = 1$, we find a hyperbola centered at $(0,0)$ opening to the left and right with a transverse axis of length 6 and a conjugate axis of length 4. We see that the circle and the hyperbola have no points in common.

(Graphs for $\left\{x^2+y^2 = 4,\; 4x^2+9y^2 = 36\right\}$ and for $\left\{x^2+y^2 = 4,\; 4x^2-9y^2 = 36\right\}$.)

3. Since there are no like terms among the two equations, elimination won't do us any good. We turn to substitution and from the equation $y - 2x = 0$, we get $y = 2x$. Substituting this into $x^2 + y^2 = 4$ gives $x^2 + (2x)^2 = 4$. Solving, we find $5x^2 = 4$ or $x = \pm\frac{2\sqrt{5}}{5}$. Returning to the equation we used for the substitution, $y = 2x$, we find $y = \frac{4\sqrt{5}}{5}$ when $x = \frac{2\sqrt{5}}{5}$, so one solution is $\left(\frac{2\sqrt{5}}{5}, \frac{4\sqrt{5}}{5}\right)$. Similarly, we find the other solution to be $\left(-\frac{2\sqrt{5}}{5}, -\frac{4\sqrt{5}}{5}\right)$. We leave it to the reader to show that both points satisfy both equations, so that our final answer is $\left\{\left(\frac{2\sqrt{5}}{5}, \frac{4\sqrt{5}}{5}\right), \left(-\frac{2\sqrt{5}}{5}, -\frac{4\sqrt{5}}{5}\right)\right\}$. The graph of $x^2 + y^2 = 4$ is our circle from before and the graph of $y - 2x = 0$ is
a line through the origin with slope 2. Though we cannot verify the numerical values of the points of intersection from our sketch, we do see that we have two solutions: one in Quadrant I and one in Quadrant III, as required.

4. While it may be tempting to solve $y - x^2 = 0$ as $y = x^2$ and substitute, we note that this system is set up for elimination.¹
\[ \left\{ \begin{array}{rcl} (E1)\; x^2 + y^2 & = & 4 \\ (E2)\; y - x^2 & = & 0 \end{array} \right. \xrightarrow{\text{Replace } E2 \text{ with } E1 + E2} \left\{ \begin{array}{rcl} (E1)\; x^2 + y^2 & = & 4 \\ (E2)\; y^2 + y & = & 4 \end{array} \right. \]
From $y^2 + y = 4$ we get $y^2 + y - 4 = 0$, which gives $y = \frac{-1 \pm \sqrt{17}}{2}$. Due to the complicated nature of these answers, it is worth our time to make a quick sketch of both equations to head off any extraneous solutions we may encounter. We see that the circle $x^2 + y^2 = 4$ intersects the parabola $y = x^2$ exactly twice, and both of these points have a positive $y$ value. Of the two solutions for $y$, only $y = \frac{-1+\sqrt{17}}{2}$ is positive, so to get our solution, we substitute this into $y - x^2 = 0$ and solve for $x$. We get $x = \pm\sqrt{\frac{-1+\sqrt{17}}{2}} = \pm\frac{\sqrt{-2+2\sqrt{17}}}{2}$. Our solution is $\left\{\left(\frac{\sqrt{-2+2\sqrt{17}}}{2}, \frac{-1+\sqrt{17}}{2}\right), \left(-\frac{\sqrt{-2+2\sqrt{17}}}{2}, \frac{-1+\sqrt{17}}{2}\right)\right\}$, which we leave to the reader to verify.

¹We encourage the reader to solve the system using substitution to see that you get the same solution.

(Graphs for $\left\{x^2+y^2 = 4,\; y - 2x = 0\right\}$ and for $\left\{x^2+y^2 = 4,\; y - x^2 = 0\right\}$.)
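As an aside that is not part of the original text, the four systems of Example 8.7.1 can also be handed to a computer algebra system. The sketch below uses SymPy's `solve` with real-valued symbols; it should reproduce the solution sets found above and return an empty list for the inconsistent system in number 2.

```python
# Illustrative check of Example 8.7.1 with SymPy (not part of the original text).
from sympy import symbols, solve

x, y = symbols('x y', real=True)

print(solve([x**2 + y**2 - 4, 4*x**2 + 9*y**2 - 36], [x, y]))  # (0, -2) and (0, 2)
print(solve([x**2 + y**2 - 4, 4*x**2 - 9*y**2 - 36], [x, y]))  # [] -- the system is inconsistent
print(solve([x**2 + y**2 - 4, y - 2*x], [x, y]))               # the Quadrant I and Quadrant III points
print(solve([x**2 + y**2 - 4, y - x**2], [x, y]))              # both points have y = (-1 + sqrt(17))/2
```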
A couple of remarks about Example 8.7.1 are in order. First note that, unlike systems of linear equations, it is possible for a system of non-linear equations to have more than one solution without having infinitely many solutions. In fact, while we characterize systems of non-linear equations as being 'consistent' or 'inconsistent,' we generally don't use the labels 'dependent' or 'independent'. Secondly, as we saw with number 4, sometimes making a quick sketch of the problem situation can save a lot of time and effort. While in general the curves in a system of non-linear equations may not be easily visualized, it sometimes pays to take advantage when they are. Our next example provides some considerable review of many of the topics introduced in this text.

Example 8.7.2. Solve the following systems of equations. Verify your answers algebraically and graphically, as appropriate.

1. $\left\{ \begin{array}{rcl} x^2 + 2xy - 16 & = & 0 \\ y^2 + 2xy - 16 & = & 0 \end{array} \right.$

2. $\left\{ \begin{array}{rcl} y + 4e^{2x} & = & 1 \\ y^2 + 2e^{x} & = & 1 \end{array} \right.$

3. $\left\{ \begin{array}{rcl} z(x-2) & = & x \\ yz & = & y \\ (x-2)^2 + y^2 & = & 1 \end{array} \right.$

Solution.

1. At first glance, it doesn't appear as though elimination will do us any good since it's clear that we cannot completely eliminate one of the variables. The alternative, solving one of the equations for one variable and substituting it into the other, is full of unpleasantness. Returning to elimination, we note that it is possible to eliminate the troublesome $xy$ term, and the constant term as well, by elimination and doing so we get a more tractable relationship between $x$ and $y$
\[ \left\{ \begin{array}{rcl} (E1)\; x^2 + 2xy - 16 & = & 0 \\ (E2)\; y^2 + 2xy - 16 & = & 0 \end{array} \right. \xrightarrow{\text{Replace } E2 \text{ with } -E1 + E2} \left\{ \begin{array}{rcl} (E1)\; x^2 + 2xy - 16 & = & 0 \\ (E2)\; y^2 - x^2 & = & 0 \end{array} \right. \]
We get $y^2 - x^2 = 0$ or $y = \pm x$. Substituting $y = x$ into E1 we get $x^2 + 2x^2 - 16 = 0$ so that $x^2 = \frac{16}{3}$ or $x = \pm\frac{4\sqrt{3}}{3}$. On the other hand, when we substitute $y = -x$ into E1, we get $x^2 - 2x^2 - 16 = 0$ or $x^2 = -16$, which gives no real solutions. Substituting each of $x = \pm\frac{4\sqrt{3}}{3}$ into the substitution equation $y = x$ yields the solution $\left\{\left(\frac{4\sqrt{3}}{3}, \frac{4\sqrt{3}}{3}\right), \left(-\frac{4\sqrt{3}}{3}, -\frac{4\sqrt{3}}{3}\right)\right\}$.
We leave it to the reader to show that both points satisfy both equations and now turn to verifying our solution graphically. We begin by solving $x^2 + 2xy - 16 = 0$ for $y$ to obtain $y = \frac{16 - x^2}{2x}$. This function is easily graphed using the techniques of Section 4.2. Solving the second equation, $y^2 + 2xy - 16 = 0$, for $y$, however, is more complicated. We use the quadratic formula to obtain $y = -x \pm \sqrt{x^2 + 16}$, which would require the use of Calculus or a calculator to graph. Believe it or not, we don't need either because the equation $y^2 + 2xy - 16 = 0$ can be obtained from the equation $x^2 + 2xy - 16 = 0$ by interchanging $y$ and $x$. Thinking back to Section 5.2, this means we can obtain the graph of $y^2 + 2xy - 16 = 0$ by reflecting the graph of $x^2 + 2xy - 16 = 0$ across the line $y = x$. Doing so confirms that the two graphs intersect twice: once in Quadrant I, and once in Quadrant III, as required.

(The graphs of $x^2 + 2xy - 16 = 0$ and $y^2 + 2xy - 16 = 0$.)

2. Unlike the previous problem, there seems to be no avoiding substitution and a bit of algebraic unpleasantness. Solving $y + 4e^{2x} = 1$ for $y$, we get $y = 1 - 4e^{2x}$, which, when substituted into the second equation, yields $\left(1 - 4e^{2x}\right)^2 + 2e^{x} = 1$. After expanding and gathering like terms, we get $16e^{4x} - 8e^{2x} + 2e^{x} = 0$. Factoring gives us $2e^{x}\left(8e^{3x} - 4e^{x} + 1\right) = 0$, and since $2e^{x} \neq 0$ for any real $x$, we are left with solving $8e^{3x} - 4e^{x} + 1 = 0$. We have three terms, and even though this is not a 'quadratic in disguise', we can benefit from the substitution $u = e^{x}$. The equation becomes $8u^3 - 4u + 1 = 0$. Using the techniques set forth in Section 3.3, we find $u = \frac{1}{2}$
is a zero and use synthetic division to factor the left hand side as $\left(u - \frac{1}{2}\right)\left(8u^2 + 4u - 2\right)$. We use the quadratic formula to solve $8u^2 + 4u - 2 = 0$ and find $u = \frac{-1 \pm \sqrt{5}}{4}$. Since $u = e^{x}$, we now must solve $e^{x} = \frac{1}{2}$ and $e^{x} = \frac{-1 \pm \sqrt{5}}{4}$. From $e^{x} = \frac{1}{2}$, we get $x = \ln\left(\frac{1}{2}\right) = -\ln(2)$. As for $e^{x} = \frac{-1 \pm \sqrt{5}}{4}$, we first note that $\frac{-1 - \sqrt{5}}{4} < 0$, so $e^{x} = \frac{-1-\sqrt{5}}{4}$ has no real solutions. We are left with $e^{x} = \frac{-1+\sqrt{5}}{4}$, so that $x = \ln\left(\frac{-1+\sqrt{5}}{4}\right)$. We now return to $y = 1 - 4e^{2x}$ to find the accompanying $y$ values for each of our solutions for $x$. For $x = -\ln(2)$, we get
\[ y = 1 - 4e^{2x} = 1 - 4e^{-2\ln(2)} = 1 - 4e^{\ln\left(\frac{1}{4}\right)} = 1 - 4\left(\tfrac{1}{4}\right) = 0 \]
For $x = \ln\left(\frac{-1+\sqrt{5}}{4}\right)$, we have
\[ y = 1 - 4e^{2x} = 1 - 4e^{2\ln\left(\frac{-1+\sqrt{5}}{4}\right)} = 1 - 4\left(\frac{-1+\sqrt{5}}{4}\right)^2 = 1 - 4\left(\frac{3-\sqrt{5}}{8}\right) = \frac{-1+\sqrt{5}}{2} \]
We get two solutions, $\left(-\ln(2), 0\right)$ and $\left(\ln\left(\frac{-1+\sqrt{5}}{4}\right), \frac{-1+\sqrt{5}}{2}\right)$. It is a good review of the properties of logarithms to verify both solutions, so we leave that to the reader. We are able to sketch $y = 1 - 4e^{2x}$ using transformations, but the second equation is more difficult and we resort to the calculator. We note that to graph $y^2 + 2e^{x} = 1$, we need to graph both the positive and negative roots, $y = \pm\sqrt{1 - 2e^{x}}$. After some careful zooming,² we get our graphs.

(The graphs of $y = 1 - 4e^{2x}$ and $y = \pm\sqrt{1 - 2e^{x}}$.)

²The calculator has trouble confirming the solution $(-\ln(2), 0)$ due to its issues in graphing square root functions. If we mentally connect the two branches of the thicker curve, we see the intersection.

3. Our last system involves three variables and gives some insight on how to keep such systems organized. Labeling the equations as before, we have
\[ \left\{ \begin{array}{rcl} (E1)\; z(x-2) & = & x \\ (E2)\; yz & = & y \\ (E3)\; (x-2)^2 + y^2 & = & 1 \end{array} \right. \]
The easiest equation to start with appears to be E2. While it may be tempting to divide both sides of E2 by $y$, we caution against this practice because it presupposes $y \neq 0$. Instead, we take E2 and rewrite it as $yz - y = 0$ so $y(z - 1) = 0$. From this, we get two cases: $y = 0$ or $z = 1$. We take each case in turn.

Case 1: $y = 0$. Substituting $y = 0$ into E1 and E3, we get
\[ \left\{ \begin{array}{rcl} (E1)\; z(x-2) & = & x \\ (E3)\; (x-2)^2 & = & 1 \end{array} \right. \]
Solving E3 for $x$ gives $x = 1$ or $x = 3$. Substituting these values into E1 gives $z = -1$ when $x = 1$ and $z = 3$ when $x = 3$. We obtain two solutions, $(1, 0, -1)$ and $(3, 0, 3)$.

Case 2: $z = 1$. Substituting $z = 1$ into E1 and E3 gives us
\[ \left\{ \begin{array}{rcl} (E1)\; (1)(x-2) & = & x \\ (E3)\; (1-2)^2 + y^2 & = & 1 \end{array} \right. \]
Equation E1 gives us $x - 2 = x$ or $-2 = 0$, which is a contradiction. This means we have no solution to the system in this case, even though E3 is solvable and gives $y = 0$. Hence, our final answer is $\{(1, 0, -1), (3, 0, 3)\}$. These points are easy enough to check algebraically in our three original equations, so that is left to the reader. As for verifying these solutions graphically, they require plotting surfaces in three dimensions and looking for intersection points. While this is beyond the scope of this book, we provide a snapshot of the graphs of our three equations near one of the solution points, $(1, 0, -1)$.
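Since the solutions to number 2 involve logarithms and radicals, a quick numerical check is reassuring. The sketch below is our own illustration, not part of the text; it substitutes both solutions back into the original equations and prints the residuals, which should vanish up to round-off.

```python
# Numerical check of the solutions to number 2 of Example 8.7.2 (illustrative sketch).
from math import exp, log, sqrt

def residuals(x, y):
    """Left-hand side minus right-hand side for both equations of the system."""
    return (y + 4*exp(2*x) - 1, y**2 + 2*exp(x) - 1)

solutions = [(-log(2), 0.0),
             (log((-1 + sqrt(5)) / 4), (-1 + sqrt(5)) / 2)]

for x, y in solutions:
    r1, r2 = residuals(x, y)
    print(f"x = {x:+.6f}, y = {y:+.6f}, residuals = ({r1:.2e}, {r2:.2e})")
# Both residual pairs should be zero up to floating point round-off.
```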
Example 8.7.2 showcases some of the ingenuity and tenacity mentioned at the beginning of the section. Sometimes you just have to look at a system the right way to find the most efficient method to solve it. Sometimes you just have to try something.

We close this section discussing how non-linear inequalities can be used to describe regions in the plane which we first introduced in Section 2.4. Before we embark on some examples, a little motivation is in order. Suppose we wish to solve $x^2 < 4 - y^2$. If we mimic the algorithms for solving non-linear inequalities in one variable, we would gather all of the terms on one side and leave a $0$ on the other to obtain $x^2 + y^2 - 4 < 0$. Then we would find the zeros of the left hand side, that is, where is $x^2 + y^2 - 4 = 0$, or $x^2 + y^2 = 4$. Instead of obtaining a few numbers which divide the real number line into intervals, we get an equation of a curve, in this case, a circle, which divides the plane into two regions - the 'inside' and 'outside' of the circle - with the circle itself as the boundary between the two. Just like we used test values to determine whether or not an interval belongs to the solution of the inequality, we use test points in each of the regions to see which of these belong to our solution set.³ We choose $(0,0)$ to represent the region inside the circle and $(0,3)$ to represent the points outside of the circle. When we substitute $(0,0)$ into $x^2 + y^2 - 4 < 0$, we get $-4 < 0$, which is true. This means $(0,0)$ and all the other points inside the circle are part of the solution. On the other hand, when we substitute $(0,3)$ into the same inequality, we get $5 < 0$, which is false. This means $(0,3)$ along with all other points outside the circle are not part of the solution. What about points on the circle itself? Choosing a point on the circle, say $(0,2)$, we get $0 < 0$, which means the circle itself does not satisfy the inequality.⁴ As a result, we leave the circle dashed in the final diagram.

(The solution to $x^2 < 4 - y^2$.)

³The theory behind why all this works is, surprisingly, the same theory which guarantees that sign diagrams work the way they do - continuity and the Intermediate Value Theorem - but in this case, applied to functions of more than one variable.

⁴Another way to see this is that points on the circle satisfy $x^2 + y^2 - 4 = 0$, so they do not satisfy $x^2 + y^2 - 4 < 0$.
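The test-point bookkeeping above is mechanical enough to automate. The following small sketch is an illustration of ours, not part of the text; it evaluates the left hand side $x^2 + y^2 - 4$ at the three test points used above.

```python
# Test-point check for x**2 < 4 - y**2, rewritten as x**2 + y**2 - 4 < 0 (illustrative sketch).
def satisfies(x, y):
    """Return True when (x, y) satisfies x**2 + y**2 - 4 < 0."""
    return x**2 + y**2 - 4 < 0

for point in [(0, 0), (0, 3), (0, 2)]:
    print(point, satisfies(*point))
# (0, 0) -> True (inside the circle); (0, 3) -> False (outside); (0, 2) -> False (on the boundary)
```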
We put this technique to good use in the following example.

Example 8.7.3. Sketch the solution to the following nonlinear inequalities in the plane.

1. $y^2 - 4 \leq x < y + 2$

2. $\left\{ \begin{array}{rcl} x^2 + y^2 & \geq & 4 \\ x^2 - 2x + y^2 - 2y & \leq & 0 \end{array} \right.$

Solution.

1. The inequality $y^2 - 4 \leq x < y + 2$ is a compound inequality. It translates as $y^2 - 4 \leq x$ and $x < y + 2$. As usual, we solve each inequality and take the set theoretic intersection to determine the region which satisfies both inequalities. To solve $y^2 - 4 \leq x$, we write $y^2 - x - 4 \leq 0$. The curve $y^2 - x - 4 = 0$ describes a parabola since exactly one of the variables is squared. Rewriting this in standard form, we get $y^2 = x + 4$ and we see that the vertex is $(-4, 0)$ and the parabola opens to the right. Using the test points $(-5, 0)$ and $(0, 0)$, we find that the solution to the inequality includes the region to the right of, or 'inside', the parabola. The points on the parabola itself are also part of the solution, since the vertex $(-4, 0)$ satisfies the inequality. We now turn our attention to $x < y + 2$. Proceeding as before, we write $x - y - 2 < 0$ and focus our attention on $x - y - 2 = 0$, which is the line $y = x - 2$. Using the test points $(0, 0)$ and $(0, -4)$, we find points in the region above the line $y = x - 2$ satisfy the inequality. The points on the line $y = x - 2$ do not satisfy the inequality, since the $y$-intercept $(0, -2)$ does not. We see that these two regions do overlap, and to make the graph more precise, we seek the intersection of these two
curves. That is, we need to solve the system of nonlinear equations
\[ \left\{ \begin{array}{rcl} (E1)\; y^2 & = & x + 4 \\ (E2)\; y & = & x - 2 \end{array} \right. \]
Solving E1 for $x$, we get $x = y^2 - 4$. Substituting this into E2 gives $y = y^2 - 4 - 2$, or $y^2 - y - 6 = 0$. We find $y = -2$ and $y = 3$ and, since $x = y^2 - 4$, we get that the graphs intersect at $(0, -2)$ and $(5, 3)$. Putting all of this together, we get our final answer below.

(Graphs of the regions $y^2 - 4 \leq x$, $x < y + 2$, and their intersection $y^2 - 4 \leq x < y + 2$.)

2. To solve this system of inequalities, we need to find all of the points $(x, y)$ which satisfy both inequalities. To do this, we solve each inequality separately and take the set theoretic intersection of the solution sets. We begin with the inequality $x^2 + y^2 \geq 4$, which we rewrite as $x^2 + y^2 - 4 \geq 0$. The points which satisfy $x^2 + y^2 - 4 = 0$ form our friendly circle $x^2 + y^2 = 4$. Using test points $(0, 0)$ and $(0, 3)$ we find that our solution comprises the region outside the circle. As far as the circle itself, the point $(0, 2)$ satisfies the inequality, so the circle itself is part of the solution set. Moving to the inequality $x^2 - 2x + y^2 - 2y \leq 0$, we start with $x^2 - 2x + y^2 - 2y = 0$. Completing the squares, we obtain $(x-1)^2 + (y-1)^2 = 2$, which is a circle centered at $(1, 1)$ with a radius of $\sqrt{2}$. Choosing $(1, 1)$ to represent the inside of the circle, $(1, 3)$ as a point outside of the circle and $(0, 0)$ as a point on the circle, we find that the solution to the inequality is the inside of the circle, including the circle itself. Our final answer, then, consists of the points on or outside of the circle $x^2 + y^2 = 4$
which lie on or inside the circle $(x-1)^2 + (y-1)^2 = 2$. To produce the most accurate graph, we need to find where these circles intersect. To that end, we solve the system
\[ \left\{ \begin{array}{rcl} (E1)\; x^2 + y^2 & = & 4 \\ (E2)\; x^2 - 2x + y^2 - 2y & = & 0 \end{array} \right. \]
We can eliminate both the $x^2$ and $y^2$ by replacing E2 with $-E1 + E2$. Doing so produces $-2x - 2y = -4$. Solving this for $y$, we get $y = 2 - x$. Substituting this into E1 gives $x^2 + (2-x)^2 = 4$, which simplifies to $x^2 + 4 - 4x + x^2 = 4$, or $2x^2 - 4x = 0$. Factoring yields $2x(x - 2) = 0$, which gives $x = 0$ or $x = 2$. Substituting these values into $y = 2 - x$ gives the points $(0, 2)$ and $(2, 0)$. The intermediate graphs and final solution are below.

(Graphs of $x^2 + y^2 \geq 4$, of $x^2 - 2x + y^2 - 2y \leq 0$, and of the solution to the system.)
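For readers who like to double-check this kind of region work numerically, the sketch below (not part of the original text) finds the corner points of the region in number 2 with SymPy and tests a few points against both inequalities.

```python
# Illustrative sketch for Example 8.7.3, number 2 (not part of the original text).
from sympy import symbols, solve

x, y = symbols('x y', real=True)

# Corner points of the region: the circles intersect at (0, 2) and (2, 0).
print(solve([x**2 + y**2 - 4, x**2 - 2*x + y**2 - 2*y], [x, y]))

def in_region(px, py):
    """True when (px, py) satisfies both inequalities of the system."""
    return px**2 + py**2 >= 4 and px**2 - 2*px + py**2 - 2*py <= 0

print(in_region(2, 0), in_region(0, 2), in_region(1, 1), in_region(0, 3))
# True, True, False (inside the first circle), False (outside the second circle)
```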
8.7.1 Exercises

In Exercises 1 - 6, solve the given system of nonlinear equations. Sketch the graph of both equations on the same set of axes to verify the solution set.

1. $\left\{ \begin{array}{rcl} x^2 - y & = & 4 \\ x^2 + y^2 & = & 4 \end{array} \right.$

2. $\left\{ \begin{array}{rcl} x^2 + y^2 & = & 4 \\ x^2 - y & = & 5 \end{array} \right.$

3. $\left\{ \begin{array}{rcl} x^2 + y^2 & = & 16 \\ 16x^2 + 4y^2 & = & 64 \end{array} \right.$

4. $\left\{ \begin{array}{rcl} x^2 + y^2 & = & 16 \\ 9x^2 - 16y^2 & = & 144 \end{array} \right.$

5. $\left\{ \begin{array}{rcl} x^2 + y^2 & = & 16 \\ \frac{y^2}{9} - \frac{x^2}{16} & = & 1 \end{array} \right.$

6. $\left\{ \begin{array}{rcl} x^2 + y^2 & = & 16 \\ x - y & = & 2 \end{array} \right.$

In Exercises 7 - 15, solve the given system of nonlinear equations. Use a graph to help you avoid any potential extraneous solutions.

7. $\left\{ \begin{array}{rcl} x^2 - y^2 & = & 1 \\ x^2 + 4y^2 & = & 4 \end{array} \right.$

8. $\left\{ \begin{array}{rcl} \sqrt{x+1} - y & = & 0 \\ x^2 + 4y^2 & = & 4 \end{array} \right.$

9. $\left\{ \begin{array}{rcl} x + 2y^2 & = & 2 \\ x^2 + 4y^2 & = & 4 \end{array} \right.$

10. $\left\{ \begin{array}{rcl} (x-2)^2 + y^2 & = & 1 \\ x^2 + 4y^2 & = & 4 \end{array} \right.$

11. $\left\{ \begin{array}{rcl} x^2 + y^2 & = & 25 \\ y - x & = & 1 \end{array} \right.$

12. $\left\{ \begin{array}{rcl} x^2 + y^2 & = & 25 \\ x^2 + (y-3)^2 & = & 10 \end{array} \right.$

13. $\left\{ \begin{array}{rcl} y & = & x^3 + 8 \\ y & = & 10x - x^2 \end{array} \right.$

14. $\left\{ \begin{array}{rcl} x^2 - xy & = & 8 \\ y^2 - xy & = & 8 \end{array} \right.$

15. $\left\{ \begin{array}{rcl} x^2 + y^2 & = & 25 \\ 4x^2 - 9y & = & 0 \\ 3y^2 - 16x & = & 0 \end{array} \right.$

16. A certain bacteria culture follows the Law of Uninhibited Growth, Equation 6.4. After 10 minutes, there are 10,000 bacteria. Five minutes later, there are 14,000 bacteria. How many bacteria were present initially? How long before there are 50,000 bacteria?

Consider the system of nonlinear equations below
\[ \left\{ \begin{array}{rcl} \frac{4}{x} + \frac{3}{y} & = & 1 \\[4pt] \frac{3}{x} + \frac{2}{y} & = & -1 \end{array} \right. \]
If we let $u = \frac{1}{x}$ and $v = \frac{1}{y}$ then the system becomes
\[ \left\{ \begin{array}{rcl} 4u + 3v & = & 1 \\ 3u + 2v & = & -1 \end{array} \right. \]
This associated system of linear equations can then be solved using any of the techniques presented earlier in the chapter to find that $u = -5$ and $v = 7$. Thus $x = \frac{1}{u} = -\frac{1}{5}$ and $y = \frac{1}{v} = \frac{1}{7}$. We say that the original system is linear in form because its equations are not linear but a few substitutions reveal a structure that we can treat like a system of linear equations. Each system in Exercises 17 - 19 is linear in form. Make the appropriate substitutions and solve for $x$ and $y$. (A short computational sketch of this idea appears after this exercise set.)

17. $\left\{ \begin{array}{rcl} 4x^3 + 3\sqrt{y} & = & 1 \\ 3x^3 + 2\sqrt{y} & = & -1 \end{array} \right.$

18. $\left\{ \begin{array}{rcl} 4e^{x} + 3e^{-y} & = & 1 \\ 3e^{x} + 2e^{-y} & = & -1 \end{array} \right.$

19. $\left\{ \begin{array}{rcl} 4\ln(x) + 3y^2 & = & 1 \\ 3\ln(x) + 2y^2 & = & -1 \end{array} \right.$

20. Solve the following system
\[ \left\{ \begin{array}{rcl} x^2 + \sqrt{y} + \log_{2}(z) & = & 6 \\ 3x^2 - 2\sqrt{y} + 2\log_{2}(z) & = & 5 \\ -5x^2 + 3\sqrt{y} + 4\log_{2}(z) & = & 13 \end{array} \right. \]
In Exercises 21 - 26, sketch the solution to each system of nonlinear inequalities in the plane.

21. $\left\{ \begin{array}{rcl} x^2 - y^2 & \leq & 1 \\ x^2 + 4y^2 & \geq & 4 \end{array} \right.$

22. $\left\{ \begin{array}{rcl} x^2 + y^2 & < & 25 \\ x^2 + (y-3)^2 & \geq & 10 \end{array} \right.$

23. $\left\{ \begin{array}{rcl} (x-2)^2 + y^2 & < & 1 \\ x^2 + 4y^2 & < & 4 \end{array} \right.$

24. $\left\{ \begin{array}{rcl} y & > & 10x - x^2 \\ y & < & x^3 + 8 \end{array} \right.$

25. $\left\{ \begin{array}{rcl} x + 2y^2 & > & 2 \\ x^2 + 4y^2 & \leq & 4 \end{array} \right.$

26. $\left\{ \begin{array}{rcl} x^2 + y^2 & \geq & 25 \\ y - x & \leq & 1 \end{array} \right.$

27. Systems of nonlinear equations show up in third semester Calculus in the midst of some really cool problems. The system below came from a problem in which we were asked to find the dimensions of a rectangular box with a volume of 1000 cubic inches that has minimal surface area. The variables $x$, $y$ and $z$ are the dimensions of the box and $\lambda$ is called a Lagrange multiplier. With the help of your classmates, solve the system.⁵
\[ \left\{ \begin{array}{rcl} 2y + 2z & = & \lambda yz \\ 2x + 2z & = & \lambda xz \\ 2y + 2x & = & \lambda xy \\ xyz & = & 1000 \end{array} \right. \]

28. According to Theorem 3.16 in Section 3.4, the polynomial $p(x) = x^4 + 4$ can be factored into the product of linear and irreducible quadratic factors. In this exercise, we present a method for obtaining that factorization.

(a) Show that $p$ has no real zeros.

(b) Because $p$ has no real zeros, its factorization must be of the form $\left(x^2 + ax + b\right)\left(x^2 + cx + d\right)$ where each factor is an irreducible quadratic. Expand this quantity and gather like terms together.

(c) Create and solve the system of nonlinear equations which results from equating the coefficients of the expansion found above with those of $x^4 + 4$. You should get four equations in the four unknowns $a$, $b$, $c$ and $d$. Write $p(x)$ in factored form.

29. Factor $q(x) = x^4 + 6x^2 - 5x + 6$.

⁵If using $\lambda$ bothers you, change it to $w$ when you solve the system.
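As promised in the preamble to Exercises 17 - 19, here is a brief computational sketch (our own illustration, not part of the text) of the 'linear in form' idea: solve the associated linear system in $u$ and $v$ with SymPy, then undo the substitutions $u = \frac{1}{x}$ and $v = \frac{1}{y}$.

```python
# 'Linear in form' illustration for the system 4/x + 3/y = 1, 3/x + 2/y = -1 (sketch).
from sympy import symbols, solve

u, v = symbols('u v')

linear_solution = solve([4*u + 3*v - 1, 3*u + 2*v + 1], [u, v])
print(linear_solution)              # {u: -5, v: 7}

x = 1 / linear_solution[u]          # undo the substitution u = 1/x
y = 1 / linear_solution[v]          # undo the substitution v = 1/y
print(x, y)                         # -1/5 and 1/7
```

The same pattern handles Exercises 17 - 19 once the appropriate substitutions (for instance $u = x^3$, $u = e^{x}$, or $u = \ln(x)$) are made.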
8.7.2 Answers

1. $(\pm 2, 0)$, $\left(\pm\sqrt{3}, -1\right)$

2. No solution

3. $(0, \pm 4)$

4. $(\pm 4, 0)$

5. $\left(\pm\dfrac{4\sqrt{7}}{5}, \pm\dfrac{12\sqrt{2}}{5}\right)$

6. $\left(1 + \sqrt{7}, -1 + \sqrt{7}\right)$, $\left(1 - \sqrt{7}, -1 - \sqrt{7}\right)$

7. $\left(\pm\dfrac{2\sqrt{10}}{5}, \pm\dfrac{\sqrt{15}}{5}\right)$

8. $(0, 1)$

9. $(0, \pm 1)$, $(2, 0)$

10. $\left(\dfrac{4}{3}, \pm\dfrac{\sqrt{5}}{3}\right)$

11. $(3, 4)$, $(-4, -3)$

12. $(\pm 3, 4)$

13. $(-4, -56)$, $(1, 9)$, $(2, 16)$

14. $(-2, 2)$, $(2, -2)$

15. $(3, 4)$

16. Initially, there are $\frac{250000}{49} \approx 5102$ bacteria. It will take $\frac{5\ln(49/5)}{\ln(7/5)} \approx 33.92$ minutes for the colony to grow to 50,000 bacteria.

17. $\left(-\sqrt[3]{5}, 49\right)$

18. No solution

19. $\left(e^{-5}, \pm\sqrt{7}\right)$

20. $(1, 4, 8)$, $(-1, 4, 8)$

21. (Graph of the region $x^2 - y^2 \leq 1$, $x^2 + 4y^2 \geq 4$.)

22. (Graph of the region $x^2 + y^2 < 25$, $x^2 + (y-3)^2 \geq 10$.)

23. (Graph of the region $(x-2)^2 + y^2 < 1$, $x^2 + 4y^2 < 4$.)

24. (Graph of the region $y > 10x - x^2$, $y < x^3 + 8$.)

25. (Graph of the region $x + 2y^2 > 2$, $x^2 + 4y^2 \leq 4$.)

26. (Graph of the region $x^2 + y^2 \geq 25$, $y - x \leq 1$.)

27. $x = 10$, $y = 10$, $z = 10$, $\lambda = \frac{2}{5}$

28. (c) $x^4 + 4 = \left(x^2 - 2x + 2\right)\left(x^2 + 2x + 2\right)$

29. $x^4 + 6x^2 - 5x + 6 = \left(x^2 - x + 1\right)\left(x^2 + x + 6\right)$
Chapter 9

Sequences and the Binomial Theorem

9.1 Sequences

When we first introduced a function as a special type of relation in Section 1.3, we did not put any restrictions on the domain of the function. All we said was that the set of $x$-coordinates of the points in the function $F$ is called the domain, and it turns out that any subset of the real numbers, regardless of how weird that subset may be, can be the domain of a function. As our exploration of functions continued beyond Section 1.3, we saw fewer and fewer functions with 'weird' domains. It is worth your time to go back through the text to see that the domains of the polynomial, rational, exponential, logarithmic and algebraic functions discussed thus far have fairly predictable domains which almost always consist of just a collection of intervals on the real line. This may lead some readers to believe that the only important functions in a College Algebra text have domains which consist of intervals and everything else was just introductory nonsense. In this section, we introduce sequences, which are an important class of functions whose domains are the set of natural numbers.¹ Before we get too far ahead of ourselves, let's look at what the term 'sequence' means mathematically. Informally, we can think of a sequence as an infinite list of numbers. For example, consider the sequence
\[ \frac{1}{2}, \; -\frac{3}{4}, \; \frac{9}{8}, \; -\frac{27}{16}, \; \ldots \tag{1} \]
As usual, the periods of ellipsis, $\ldots$, indicate that the proposed pattern continues forever. Each of the numbers in the list is called a term, and we call $\frac{1}{2}$ the 'first term', $-\frac{3}{4}$ the 'second term', $\frac{9}{8}$ the 'third term' and so forth. In numbering them this way, we are setting up a function, which we'll call $a$ per tradition, between the natural numbers and the terms in the sequence.

¹Recall that this is the set $\{1, 2, 3, \ldots\}$.
6. f0 = 1, fn = n · fn−1, n ≥ 1 Solution. 32 = 5 1. Since we are given n ≥ 1, the first four terms of the sequence are a1, a2, a3 and a4. Since the notation a1 means the same thing as a(1), we obtain our first term by replacing every occurrence of n in the formula for an with n = 1 to get a1 = 51−1 3. Proceeding similarly, we get a2 = 52−1 33 = 25 27 and a4 = 54−1 2. For this sequence we have k ≥ 0, so the first four terms are b0, b1, b2 and b3. Proceeding as before, replacing in this case the variable k with the appropriate whole number, beginning with 0, we get b0 = (−1)0 2(3)+1 = − 1 7. (This sequence is called an alternating sequence since the signs alternate between + and −. The reader is encouraged to think what component of the formula is producing this effect.) 2(0)+1 = 1, b1 = (−1)1 5 and b3 = (−1)3 3, b2 = (−1)2 9, a3 = 53−1 2(1)+1 = − 1 2(2)+1 = 1 34 = 125 81. 31 = 1 9.1 Sequences 653 3. From {2n − 1}∞ n=1, we have that an = 2n − 1, n ≥ 1. We get a1 = 1, a2 = 3, a3 = 5 and a4 = 7. (The first four terms are the first four odd natural numbers. The reader is encouraged to examine whether or not this pattern continues indefinitely.) 4. Here, we are using the letter i as a counter, not as the imaginary unit we saw in Section 3.4. 2 and a5 = 0. Proceeding as before, we set ai = 1+(−1)i, i ≥ 2. We find a2 = 1, a3 = 0, a4 = 1 i 5. To obtain the terms of this sequence, we start with a1 = 7 and use the equation an+1 = 2−an for n ≥ 1 to generate
successive terms. When n = 1, this equation becomes a1 + 1 = 2 − a1 which simplifies to a2 = 2−a1 = 2−7 = −5. When n = 2, the equation becomes a2 + 1 = 2−a2 so we get a3 = 2 − a2 = 2 − (−5) = 7. Finally, when n = 3, we get a3 + 1 = 2 − a3 so a4 = 2 − a3 = 2 − 7 = −5. 6. As with the problem above, we are given a place to start with f0 = 1 and given a formula to build other terms of the sequence. Substituting n = 1 into the equation fn = n · fn−1, we get f1 = 1 · f0 = 1 · 1 = 1. Advancing to n = 2, we get f2 = 2 · f1 = 2 · 1 = 2. Finally, f3 = 3 · f2 = 3 · 2 = 6. Some remarks about Example 9.1.1 are in order. We first note that since sequences are functions, we can graph them in the same way we graph functions. For example, if we wish to graph the sequence {bk}∞ k=0 from Example 9.1.1, we graph the equation y = b(k) for the values k ≥ 0. That is, we plot the points (k, b(k)) for the values of k in the domain, k = 0, 1, 2,.... The resulting collection of points is the graph of the sequence. Note that we do not connect the dots in a pleasing fashion as we are used to doing, because the domain is just the whole numbers in this case, not a collection of intervals of real numbers. If you feel a sense of nostalgia, you should see Section 1.21 − 3 2 1 2 3 x Graphing y = bk = (−1)k 2k + 1, k ≥ 0 Speaking of {bk}∞ k=0, the astute and mathematically minded reader will correctly note that this technically isn’t a sequence, since according to Definition 9.1, sequences are functions whose domains are the natural numbers, not the whole numbers, as is the case with {bk}∞ k=0. In other words, to satisfy De�
��nition 9.1, we need to shift the variable k so it starts at k = 1 instead of k = 0. To see how we can do this, it helps to think of the problem graphically. What we want is to shift the graph of y = b(k) to the right one unit, and thinking back to Section 1.7, we can accomplish this by replacing k with k − 1 in the definition of {bk}∞ k=0. Specifically, let ck = bk−1 where k − 1 ≥ 0. We get ck = (−1)k−1 2k−1, where now k ≥ 1. We leave to the reader to verify that {ck}∞ k=0, but the former satisfies Definition 2(k−1)+1 = (−1)k−1 k=1 generates the same list of numbers as does {bk}∞ 654 Sequences and the Binomial Theorem 9.1, while the latter does not. Like so many things in this text, we acknowledge that this point is pedantic and join the vast majority of authors who adopt a more relaxed view of Definition 9.1 to include any function which generates a list of numbers which can then be matched up with the natural numbers.2 Finally, we wish to note the sequences in parts 5 and 6 are examples of sequences described recursively. In each instance, an initial value of the sequence is given which is then followed by a recursion equation − a formula which enables us to use known terms of the sequence to determine other terms. The terms of the sequence in part 6 are given a special name: fn = n! is called n-factorial. Using the ‘!’ notation, we can describe the factorial sequence as: 0! = 1 and n! = n(n − 1)! for n ≥ 1. After 0! = 1 the next four terms, written out in detail, are 1! = 1 · 0! = 1 · 1 = 1, 2! = 2 · 1! = 2 · 1 = 2, 3! = 3 · 2! = 3 · 2 · 1 = 6 and 4! = 4 · 3! = 4 · 3 · 2 · 1 = 24. From this, we see a more informal way of computing n!, which is