Augmented matrices, row operations, Gaussian elimination, Gauss-Jordan RREF, matrix multiplication, inverse matrices, determinants, and Cramer's Rule — everything for precalculus Chapter 10 with full worked examples.
A matrix is a rectangular array of numbers arranged in rows and columns, enclosed in brackets. Matrices are the central tool for organizing and solving linear systems in precalculus and beyond.
The dimension of a matrix is written as m x n, where m is the number of rows and n is the number of columns. Always state rows first, then columns.
3 x 2 matrix (3 rows, 2 columns):
[ 1 4 ]
[ 2 0 ]
[ 5 3 ]
The entry in row i and column j is written a sub-ij. Row index comes first. The entire matrix is written A = [a sub-ij] with dimension subscript m x n.
Square matrix: same number of rows and columns (m = n). Required for determinants and inverses.
Row matrix (row vector): a 1 x n matrix, a single row. Dot products use row times column.
Column matrix (column vector): an m x 1 matrix, a single column. The constants in a system form a column vector.
Zero matrix: all entries equal 0. Adding the zero matrix leaves any matrix unchanged.
Identity matrix: a square matrix with 1s on the main diagonal and 0s elsewhere. Acts like the number 1 in multiplication.
Diagonal matrix: a square matrix where all entries off the main diagonal are 0.
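These special matrices are easy to generate programmatically. A minimal sketch in Python (the function names here are illustrative, not standard):

```python
def zero_matrix(m, n):
    """m x n matrix with every entry 0."""
    return [[0] * n for _ in range(m)]

def identity(n):
    """n x n matrix with 1s on the main diagonal, 0s elsewhere."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

print(identity(3))  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```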
An augmented matrix encodes an entire linear system in a single rectangular array. The left block holds the coefficients; the right column (separated by a vertical bar) holds the constants. Row reduction on the augmented matrix is equivalent to applying algebraic operations to the original equations.
Step 1 — Write the system in standard form
All variables on the left, constants on the right, same variable order in every equation.
2x + 3y = 7
x - y = 1
Step 2 — Extract coefficients and constants
Each equation becomes one row. Include a 0 for any missing variable.
[ 2 3 | 7 ]
[ 1 -1 | 1 ]
System:
x + 2y - z = 4
2x - y + 3z = 1
-x + 3y + 2z = 5
Augmented matrix:
[ 1 2 -1 | 4 ]
[ 2 -1 3 | 1 ]
[-1 3 2 | 5 ]
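As a sanity check, the augmented matrix can be assembled in code by appending each constant to its coefficient row. A sketch using plain Python lists:

```python
# Coefficient rows and constants for the 3x3 system above.
coeffs = [[ 1,  2, -1],
          [ 2, -1,  3],
          [-1,  3,  2]]
constants = [4, 1, 5]

# Each equation becomes one row: its coefficients, then its constant.
augmented = [row + [c] for row, c in zip(coeffs, constants)]
print(augmented[0])  # [1, 2, -1, 4]
```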
Three types of row operations can be applied to an augmented matrix without changing the solution set of the system. These are the only legal moves in row reduction.
R2 swap R1
Interchange any two rows. The equations are just reordered; solutions stay the same.
Swap Row 1 and Row 2 so the leading 1 moves to the top.
Row 1 becomes (1/2) times Row 1
Multiply every entry in a row by the same nonzero constant. Equivalent to dividing both sides of an equation by that constant.
Multiply Row 1 by 1/2 to create a leading coefficient of 1.
Row 2 becomes Row 2 minus 2 times Row 1
Replace a row with the sum of that row and a multiple of another row. This is the workhorse operation — it creates zeros below (and above) pivot positions.
Row 2 becomes Row 2 minus 3 times Row 1 (eliminates the x term in Row 2).
Ri swap Rj
Swap rows i and j
k Ri to Ri
Multiply row i by scalar k
Ri + k Rj to Ri
Add k times row j to row i
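The three operations translate directly into code. A minimal sketch (function names are my own) that mutates a matrix stored as a list of rows, using exact fractions to avoid rounding:

```python
from fractions import Fraction

def swap_rows(M, i, j):
    """Ri swap Rj: interchange rows i and j."""
    M[i], M[j] = M[j], M[i]

def scale_row(M, i, k):
    """k Ri to Ri: multiply row i by nonzero scalar k."""
    M[i] = [Fraction(k) * x for x in M[i]]

def replace_row(M, i, j, k):
    """Ri + k Rj to Ri: add k times row j to row i."""
    M[i] = [x + Fraction(k) * y for x, y in zip(M[i], M[j])]

# Demo on 2x + 3y = 7 and x - y = 1:
M = [[2, 3, 7], [1, -1, 1]]
swap_rows(M, 0, 1)               # leading 1 moves to the top
replace_row(M, 1, 0, -2)         # eliminate x from row 2 -> [0, 5, 5]
scale_row(M, 1, Fraction(1, 5))  # pivot becomes 1 -> [0, 1, 1]
```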
Gaussian elimination uses row operations to transform the augmented matrix into row echelon form (REF). In REF, the leading entry of each nonzero row lies to the right of the leading entry in the row above, and all entries below each pivot are zero. Back substitution then recovers the solution.
All-zero rows (if any) are at the bottom.
The leading entry (pivot) of each non-zero row is to the right of the pivot in the row above.
All entries below a pivot are zero.
Worked Example — 2x2 System
Solve: 2x + 4y = 10 and x + 3y = 7
Start: Augmented matrix
[ 2 4 | 10 ]
[ 1 3 | 7 ]
Row 1 becomes (1/2) times Row 1 — create leading 1
[ 1 2 | 5 ]
[ 1 3 | 7 ]
Row 2 becomes Row 2 minus 1 times Row 1 — eliminate x from Row 2
[ 1 2 | 5 ]
[ 0 1 | 2 ]
Row echelon form achieved. Back substitute:
Row 2 gives y = 2.
Row 1: x + 2(2) = 5, so x + 4 = 5, so x = 1.
Solution: x = 1, y = 2
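Back substitution from REF can be sketched generically: work from the bottom row up, subtracting the contributions of already-known variables. Exact fractions keep the arithmetic clean:

```python
from fractions import Fraction

# REF from the example: x + 2y = 5 and y = 2 (last column holds constants).
ref = [[Fraction(1), Fraction(2), Fraction(5)],
       [Fraction(0), Fraction(1), Fraction(2)]]

n = len(ref)            # number of variables
x = [Fraction(0)] * n
for i in range(n - 1, -1, -1):
    known = sum(ref[i][j] * x[j] for j in range(i + 1, n))
    x[i] = (ref[i][n] - known) / ref[i][i]   # divide by the pivot

print(x)  # [1, 2] as Fractions, i.e. x = 1, y = 2
```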
Gauss-Jordan elimination continues past row echelon form to produce reduced row echelon form (RREF). In RREF every pivot equals 1, and all entries above and below each pivot are zero. The solution can be read directly from the matrix without back substitution.
All-zero rows are at the bottom.
The leading entry (pivot) of each non-zero row is strictly to the right of the pivot above it.
All entries below each pivot are zero (same as REF).
Each pivot equals exactly 1 (scaled pivot).
All entries above each pivot are also zero (back elimination done).
Requirements 4 and 5 are what distinguish RREF from plain REF.
Worked Example — Continuing to RREF
Starting from the REF obtained above:
REF (where we left off)
[ 1 2 | 5 ]
[ 0 1 | 2 ]
Row 1 becomes Row 1 minus 2 times Row 2 — eliminate y from Row 1
[ 1 0 | 1 ]
[ 0 1 | 2 ]
RREF achieved. Read solution directly:
x = 1, y = 2
No back substitution needed — the answer is in the last column.
This full worked example shows every step of Gaussian and Gauss-Jordan elimination on a three-variable system. Follow the row operation notation carefully.
System
x + 2y - z = 2
2x + y + z = 7
x - y + 2z = 4
STEP 0 — Write augmented matrix
[ 1 2 -1 | 2 ]
[ 2 1 1 | 7 ]
[ 1 -1 2 | 4 ]
STEP 1 — Eliminate x from Rows 2 and 3
Row 2 becomes Row 2 minus 2 times Row 1
Row 3 becomes Row 3 minus 1 times Row 1
[ 1 2 -1 | 2 ]
[ 0 -3 3 | 3 ]
[ 0 -3 3 | 2 ]
STEP 2 — Scale Row 2
Row 2 becomes (-1/3) times Row 2
[ 1 2 -1 | 2 ]
[ 0 1 -1 | -1 ]
[ 0 -3 3 | 2 ]
STEP 3 — Eliminate y from Row 3
Row 3 becomes Row 3 plus 3 times Row 2
[ 1 2 -1 | 2 ]
[ 0 1 -1 | -1 ]
[ 0 0 0 | -1 ]
Inconsistent System Detected!
Row 3 reads: 0x + 0y + 0z = -1, which simplifies to 0 = -1. This is impossible. The system has no solution.
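The inconsistency signal is easy to test for mechanically. A sketch:

```python
def is_inconsistent(aug):
    """True if some row reads 0 = c with c nonzero (system has no solution)."""
    return any(all(v == 0 for v in row[:-1]) and row[-1] != 0
               for row in aug)

# Final matrix from the example above:
M = [[1, 2, -1, 2],
     [0, 1, -1, -1],
     [0, 0, 0, -1]]
print(is_inconsistent(M))  # True
```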
Consistent 3x3 — Full RREF Walkthrough
x + y + z = 6
2x - y + z = 3
x + 2y - z = 2
Augmented matrix
[ 1 1 1 | 6 ]
[ 2 -1 1 | 3 ]
[ 1 2 -1 | 2 ]
Row 2 becomes Row 2 minus 2 times Row 1 | Row 3 becomes Row 3 minus Row 1
[ 1 1 1 | 6 ]
[ 0 -3 -1 | -9 ]
[ 0 1 -2 | -4 ]
Swap Row 2 and Row 3 (to put a leading 1 in the pivot position and avoid fractions)
[ 1 1 1 | 6 ]
[ 0 1 -2 | -4 ]
[ 0 -3 -1 | -9 ]
Row 3 becomes Row 3 plus 3 times Row 2
[ 1 1 1 | 6 ]
[ 0 1 -2 | -4 ]
[ 0 0 -7 |-21 ]
Row 3 becomes (-1/7) times Row 3
[ 1 1 1 | 6 ]
[ 0 1 -2 | -4 ]
[ 0 0 1 | 3 ]
Back eliminate: Row 2 becomes Row 2 plus 2 times Row 3
[ 1 1 1 | 6 ]
[ 0 1 0 | 2 ]
[ 0 0 1 | 3 ]
Row 1 becomes Row 1 minus Row 3 | then Row 1 becomes Row 1 minus Row 2
[ 1 0 0 | 1 ]
[ 0 1 0 | 2 ]
[ 0 0 1 | 3 ]
RREF reached — read solution directly:
x = 1, y = 2, z = 3
Verify: (1)+(2)+(3)=6 true | 2(1)-(2)+(3)=3 true | (1)+2(2)-(3)=2 true
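The whole walkthrough can be automated. A compact Gauss-Jordan sketch (it assumes the system has a unique solution; exact fractions avoid floating-point drift):

```python
from fractions import Fraction

def gauss_jordan(aug):
    """Reduce an n x (n+1) augmented matrix [A | b] to RREF; return the solution."""
    M = [[Fraction(v) for v in row] for row in aug]
    n = len(M)
    for col in range(n):
        # Find a row at or below this one with a nonzero pivot and move it up.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot equals 1.
        M[col] = [v / M[col][col] for v in M[col]]
        # Eliminate the column everywhere else (above and below the pivot).
        for r in range(n):
            if r != col and M[r][col] != 0:
                k = M[r][col]
                M[r] = [v - k * w for v, w in zip(M[r], M[col])]
    return [row[n] for row in M]

# x + y + z = 6, 2x - y + z = 3, x + 2y - z = 2
print(gauss_jordan([[1, 1, 1, 6], [2, -1, 1, 3], [1, 2, -1, 2]]))
# [1, 2, 3] as Fractions, matching x = 1, y = 2, z = 3
```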
Row reduction always reveals one of three outcomes: a unique solution, no solution (inconsistent), or infinitely many solutions (dependent). Learn to read the signals.
Unique solution: every column (except the augmented column) contains a pivot, so the RREF looks like the identity matrix on the left. One solution: x = a, y = b, z = c.
No solution (inconsistent): a row of the form [ 0 0 ... 0 | c ] with c nonzero appears. It says 0 = c, which is false. Write: inconsistent system.
Infinitely many solutions (dependent): a row of all zeros appears, and there are fewer pivots than variables. Some variables are free. Write the basic variables in terms of the free variable(s) for a parametric solution.
After row reduction the augmented matrix is:
[ 1 2 0 | 3 ]
[ 0 0 1 | 5 ]
[ 0 0 0 | 0 ]
Pivots are in columns 1 and 3. Column 2 has no pivot, so y is the free variable.
Let y = t (any real number). Then:
z = 5
x = 3 - 2t
y = t
The solution set is a line in three-dimensional space, parametrized by t. Every value of t gives a valid solution.
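Any choice of t can be checked against the reduced equations (row 1 says x + 2y = 3, row 2 says z = 5). A quick sketch:

```python
from fractions import Fraction

# Sample several values of the parameter t and verify each resulting
# point satisfies both nonzero rows of the RREF.
for t in [Fraction(0), Fraction(1), Fraction(-7, 2)]:
    x, y, z = 3 - 2 * t, t, Fraction(5)
    assert x + 2 * y == 3   # row 1: x + 2y = 3
    assert z == 5           # row 2: z = 5
```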
Matrix arithmetic operates entry by entry for addition and subtraction (same-size matrices only) and scales every entry uniformly for scalar multiplication.
A + B and A - B are defined only when A and B have identical dimensions. Add (or subtract) corresponding entries:
(A + B) entry ij = a sub-ij plus b sub-ij
(A - B) entry ij = a sub-ij minus b sub-ij
Addition is commutative (A + B = B + A) and associative.
Multiplying matrix A by scalar k means multiplying every single entry by k. No size restrictions apply.
(kA) entry ij = k times a sub-ij
Distributes over addition: k(A + B) = kA + kB.
Worked Example
Let A = [ 1 3 ; -2 0 ] and B = [ 4 -1 ; 5 2 ]. Compute 2A - B.
2A = [ 2·1 2·3 ; 2·(-2) 2·0 ] = [ 2 6 ; -4 0 ]
2A - B = [ 2-4 6-(-1) ; -4-5 0-2 ]
2A - B = [ -2 7 ; -9 -2 ]
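Entry-wise operations map cleanly onto nested comprehensions. A sketch reproducing the example:

```python
A = [[1, 3], [-2, 0]]
B = [[4, -1], [5, 2]]

# 2A - B: scale every entry of A by 2, then subtract B entry by entry.
result = [[2 * a - b for a, b in zip(row_a, row_b)]
          for row_a, row_b in zip(A, B)]
print(result)  # [[-2, 7], [-9, -2]]
```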
Matrix multiplication is more complex than entry-wise operations. Each entry of the product is a dot product of a row from the left matrix with a column from the right matrix.
Definition
(AB) entry ij = row i of A dotted with column j of B
= sum from k=1 to n of (a sub-ik times b sub-kj)
Worked Example — 2x2 Times 2x2
Multiply A = [ 2 1 ; 0 3 ] by B = [ 1 4 ; 2 -1 ].
Entry (1,1): row 1 of A dot col 1 of B = 2(1) + 1(2) = 4
Entry (1,2): row 1 of A dot col 2 of B = 2(4) + 1(-1) = 7
Entry (2,1): row 2 of A dot col 1 of B = 0(1) + 3(2) = 6
Entry (2,2): row 2 of A dot col 2 of B = 0(4) + 3(-1) = -3
AB = [ 4 7 ; 6 -3 ]
Non-Commutative Demonstration
Compute BA with the same matrices to show AB and BA differ.
Entry (1,1): row 1 of B dot col 1 of A = 1(2) + 4(0) = 2
Entry (1,2): row 1 of B dot col 2 of A = 1(1) + 4(3) = 13
Entry (2,1): row 2 of B dot col 1 of A = 2(2) + (-1)(0) = 4
Entry (2,2): row 2 of B dot col 2 of A = 2(1) + (-1)(3) = -1
BA = [ 2 13 ; 4 -1 ]
AB is not equal to BA — matrix multiplication is NOT commutative.
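The row-dot-column rule, and the non-commutativity check, fit in a short sketch:

```python
def matmul(A, B):
    """(AB)_ij = row i of A dotted with column j of B.
    Defined only when cols(A) == rows(B)."""
    assert len(A[0]) == len(B), "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[2, 1], [0, 3]]
B = [[1, 4], [2, -1]]
print(matmul(A, B))  # [[4, 7], [6, -3]]
print(matmul(B, A))  # [[2, 13], [4, -1]]  (AB and BA differ)
```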
AB is not generally equal to BA
Not commutative — order matters
A(BC) = (AB)C
Associative — grouping does not matter
A(B + C) = AB + AC
Distributive over addition
(AB)-transpose = B-transpose times A-transpose
Transpose reverses order
AI = IA = A
Identity matrix acts like multiplying by 1
A times zero matrix = zero matrix
Multiplying by zero gives zero
The n x n identity matrix I has 1s on the main diagonal and 0s everywhere else. It is the matrix equivalent of the number 1:
2x2 identity:
[ 1 0 ]
[ 0 1 ]
3x3 identity:
[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ]
AI = IA = A for any square matrix A of the same size.
A square matrix A is invertible if there exists a matrix A-inverse such that:
A times A-inverse = A-inverse times A = I
A matrix is invertible if and only if its determinant is not zero.
If the determinant equals 0, the matrix is called singular — no inverse exists.
Only square matrices can be invertible.
For A = [ a b ; c d ] with det(A) = ad - bc not equal to 0:
A-inverse = (1/(ad-bc)) times [ d -b ; -c a ]
Swap a and d. Negate b and c. Divide every entry by ad - bc.
Worked Example — Find A-inverse for A = [ 3 1 ; 5 2 ]
det(A) = 3(2) - 1(5) = 6 - 5 = 1
Swap 3 and 2, negate 1 and 5: adjugate = [ 2 -1 ; -5 3 ]
A-inverse = (1/1) times [ 2 -1 ; -5 3 ]
A-inverse = [ 2 -1 ; -5 3 ]
Verify: A times A-inverse = [ 3(2)+1(-5) 3(-1)+1(3) ; 5(2)+2(-5) 5(-1)+2(3) ] = [ 1 0 ; 0 1 ] = I
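The swap/negate/divide recipe in code (a sketch; it assumes integer entries and raises on a singular matrix):

```python
from fractions import Fraction

def inv_2x2(A):
    """Inverse of [[a, b], [c, d]] via (1/(ad - bc)) * [[d, -b], [-c, a]]."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular; no inverse exists")
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

print(inv_2x2([[3, 1], [5, 2]]))  # [[2, -1], [-5, 3]] as Fractions
```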
For matrices larger than 2x2, find the inverse by augmenting A with the identity matrix and row-reducing. When the left side becomes I, the right side is A-inverse.
Write the augmented matrix [A | I] — A on the left, identity on the right.
Apply row operations to reduce the left block A to the identity matrix I.
The same operations automatically transform the right block I into A-inverse.
If the left block cannot be reduced to I (produces a zero row), A is singular — no inverse exists.
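The steps above can be sketched as code: augment A with the identity, then run Gauss-Jordan on the combined block (a minimal implementation; raises if A is singular):

```python
from fractions import Fraction

def inverse(A):
    """Invert square A by row-reducing [A | I]; raise if A is singular."""
    n = len(A)
    # Augment: each row of A gets the matching row of I appended.
    M = [[Fraction(v) for v in A[i]] + [Fraction(int(i == j)) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular; no inverse exists")
        M[col], M[pivot] = M[pivot], M[col]
        M[col] = [v / M[col][col] for v in M[col]]   # pivot becomes 1
        for r in range(n):
            if r != col and M[r][col] != 0:
                k = M[r][col]
                M[r] = [v - k * w for v, w in zip(M[r], M[col])]
    # Left block is now I; the right block is the inverse.
    return [row[n:] for row in M]

print(inverse([[3, 1], [5, 2]]))  # [[2, -1], [-5, 3]] as Fractions
```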
Any linear system can be written in matrix form Ax = b, where A is the coefficient matrix, x is the column vector of unknowns, and b is the column vector of constants. When A is invertible, multiply both sides on the left by A-inverse to solve.
If Ax = b and A is invertible, then x = A-inverse times b
Multiply both sides on the left by A-inverse: A-inverse A x = A-inverse b, so I x = A-inverse b, so x = A-inverse b.
Worked Example
Solve the system: 3x + y = 5 and 5x + 2y = 8.
Write in matrix form: A = [ 3 1 ; 5 2 ], b = [ 5 ; 8 ]
From previous example: A-inverse = [ 2 -1 ; -5 3 ]
x = A-inverse times b = [ 2 -1 ; -5 3 ] times [ 5 ; 8 ]
x entry: 2(5) + (-1)(8) = 10 - 8 = 2
y entry: -5(5) + 3(8) = -25 + 24 = -1
x = 2, y = -1
Check: 3(2)+(-1)=5 true | 5(2)+2(-1)=8 true
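The product x = A-inverse times b is one dot product per row. A sketch using the inverse found above:

```python
A_inv = [[2, -1], [-5, 3]]   # inverse of [[3, 1], [5, 2]]
b = [5, 8]

# Each solution entry is a row of A-inverse dotted with b.
x = [sum(entry * v for entry, v in zip(row, b)) for row in A_inv]
print(x)  # [2, -1], i.e. x = 2, y = -1
```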
The determinant is a single number computed from a square matrix. For a 2x2 matrix it tells you whether the matrix is invertible and plays a key role in Cramer's Rule and the area interpretation of linear transformations.
For A = [ a b ; c d ]:
det(A) = ad - bc
Multiply down the main diagonal (top-left to bottom-right), subtract the product of the anti-diagonal (top-right to bottom-left).
A = [ 4 3 ; 2 1 ]
det = 4(1) - 3(2) = 4 - 6 = -2
Invertible (det not 0)
B = [ 2 -3 ; -4 6 ]
det = 2(6) - (-3)(-4) = 12 - 12 = 0
Singular — no inverse
C = [ 5 2 ; 1 3 ]
det = 5(3) - 2(1) = 15 - 2 = 13
Invertible (det not 0)
Cramer's Rule gives a formula for each variable as a ratio of determinants. It applies to square systems with a unique solution (determinant of coefficient matrix not zero). For 2x2 systems it is often the fastest algebraic method.
Given system:
ax + by = e
cx + dy = f
D = det[ a b ; c d ] = ad - bc
Dx = det[ e b ; f d ] = ed - bf (replace column 1 with constants)
Dy = det[ a e ; c f ] = af - ec (replace column 2 with constants)
x = Dx / D and y = Dy / D
Worked Example
Solve: 4x + 3y = 10 and x - 2y = -1
D = det[ 4 3 ; 1 -2 ] = (4)(-2) - (3)(1) = -8 - 3 = -11
Dx = det[ 10 3 ; -1 -2 ] = (10)(-2) - (3)(-1) = -20 + 3 = -17
Dy = det[ 4 10 ; 1 -1 ] = (4)(-1) - (10)(1) = -4 - 10 = -14
x = -17 / -11 = 17/11 and y = -14 / -11 = 14/11
Check: 4(17/11) + 3(14/11) = 68/11 + 42/11 = 110/11 = 10 true
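Cramer's Rule for a 2x2 system is a few lines of code. A sketch (the function name is my own; it requires a nonzero determinant D):

```python
from fractions import Fraction

def cramer_2x2(a, b, c, d, e, f):
    """Solve ax + by = e and cx + dy = f by Cramer's Rule."""
    D = a * d - b * c
    if D == 0:
        raise ValueError("D = 0: Cramer's Rule does not apply")
    Dx = e * d - b * f   # column 1 replaced by the constants
    Dy = a * f - e * c   # column 2 replaced by the constants
    return Fraction(Dx, D), Fraction(Dy, D)

print(cramer_2x2(4, 3, 1, -2, 10, -1))  # (Fraction(17, 11), Fraction(14, 11))
```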
| Topic | Formula or Rule |
|---|---|
| Matrix size m x n | m rows and n columns; entry at row i column j is a-ij |
| Augmented matrix | [ coefficient matrix \| constants column ] |
| Row operation 1 (swap) | Ri swap Rj — solution set unchanged |
| Row operation 2 (scale) | k times Ri to Ri — multiply row by nonzero k |
| Row operation 3 (replace) | Ri plus k times Rj to Ri — workhorse operation |
| Inconsistent signal | Row [ 0 0 ... \| c ] with c not 0 — no solution |
| Dependent signal | Row of all zeros, fewer pivots than variables — parametric solution |
| Matrix addition | (A+B)-ij = a-ij + b-ij; requires same dimensions |
| Scalar mult | (kA)-ij = k times a-ij |
| Matrix mult AB | (AB)-ij = row i of A dotted with col j of B; need cols(A)=rows(B) |
| AB not equal to BA | Matrix multiplication is NOT commutative |
| Identity matrix I | 1s on diagonal, 0s elsewhere; AI = IA = A |
| 2x2 inverse formula | A-inverse = (1/(ad-bc)) times [d, -b; -c, a] |
| Row reduction inverse | Row reduce [A \| I] to get [I \| A-inverse] |
| Solve Ax=b | x = A-inverse times b (when A is invertible) |
| 2x2 determinant | det[a,b;c,d] = ad - bc |
| Invertible test | det(A) not 0 if and only if A is invertible |
| Cramer x | x = det(Ax) / det(A); Ax is A with col 1 replaced by b |
| Cramer y | y = det(Ay) / det(A); Ay is A with col 2 replaced by b |
Write every variable in every equation before building the matrix. If an equation is missing a variable, write 0 for that coefficient. One wrong entry in the setup ruins all downstream work.
Never skip recording which operation you used. Write Row 2 becomes Row 2 minus 3 times Row 1 before showing the new matrix. This lets you catch arithmetic errors and earn partial credit.
After each elimination step, scan for a row of all zeros in the coefficient block. If you see one with a nonzero constant, stop immediately — the system is inconsistent.
After computing A-inverse, always verify by multiplying A times A-inverse. The result must be the 2x2 identity matrix. This takes 30 seconds and catches sign errors.
Write the size of each matrix next to it before multiplying. A (3x2) times (2x4) = a (3x4) result. If the inner numbers do not match, the product is undefined.
When forming Dx, replace the entire first column with the constants vector — including the sign of each constant. Do not copy from the original system; copy from the constants column of the augmented matrix.
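The dimension check described above can be mechanized. A sketch (the helper name is illustrative):

```python
def product_shape(shape_a, shape_b):
    """Shape of AB given A is m x n and B is n x p; None if undefined."""
    m, n = shape_a
    n2, p = shape_b
    if n != n2:          # inner numbers must match
        return None
    return (m, p)

print(product_shape((3, 2), (2, 4)))  # (3, 4)
print(product_shape((3, 2), (3, 4)))  # None: product undefined
```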
An augmented matrix combines the coefficient matrix with the constants column. For the system 2x + 3y = 7 and x - y = 1, write [ 2 3 | 7 ] on top and [ 1 -1 | 1 ] below, separated by a vertical bar. The left side holds coefficients of each variable in order; the right side holds the constants. Every equation becomes one row.
The three legal row operations are: (1) Swap two rows — swapping Row i and Row j changes the order but not the solution set. (2) Scale a row — multiply every entry in a row by a nonzero constant. (3) Row replacement — add a multiple of one row to another row and replace the second row with the result. All three preserve the solution set of the linear system.
Row echelon form (REF) requires: (1) all-zero rows at the bottom, (2) each leading entry (pivot) is to the right of the pivot above it, and (3) entries below each pivot are zero. This is the goal of Gaussian elimination. Reduced row echelon form (RREF) adds two more requirements: (4) each pivot equals exactly 1, and (5) entries above each pivot are also zero. RREF is produced by Gauss-Jordan elimination and gives the solution directly without back substitution.
During row reduction, if you produce a row of the form [ 0 0 0 | c ] where c is nonzero, the system is inconsistent — it has no solution. This row says 0 = c, which is impossible. Stop row-reducing and write 'no solution' or 'inconsistent system.'
A dependent system produces a row of all zeros [ 0 0 0 | 0 ] during row reduction, with fewer pivot columns than variables. The variables corresponding to columns without pivots are free variables. Express each basic variable in terms of the free variables using back substitution or RREF, then write the solution as a parametric family.
Matrix multiplication AB is defined only when the number of columns in A equals the number of rows in B. If A is m by n and B is n by p, the product AB is m by p. Matrix multiplication is NOT commutative in general: AB and BA may not be equal, and BA may not even be defined when AB is.
For A = [[a, b], [c, d]] with det(A) = ad - bc not equal to zero, the inverse is A inverse = (1 / (ad - bc)) times [[d, -b], [-c, a]]. The recipe: swap the main diagonal entries, negate the off-diagonal entries, then divide every entry by the determinant. If det(A) = 0 the matrix is singular and has no inverse.
For the 2x2 system ax + by = e and cx + dy = f, let D = ad - bc (the determinant of the coefficient matrix). Form Dx by replacing the first column with the constants [e, f] to get Dx = ed - bf. Form Dy by replacing the second column to get Dy = af - ce. Then x = Dx/D and y = Dy/D. Cramer's Rule requires D not equal to zero.