Vectors, matrices, systems of equations, linear transformations, vector spaces, eigenvalues — the complete foundation for linear algebra with worked examples.
A vector in ℝⁿ is an ordered list of n real numbers. Geometrically it represents a directed arrow; algebraically it is a column matrix.
A vector is an arrow with direction and magnitude (length). Two arrows with the same length and direction represent the same vector regardless of where they start.
Vectors are column matrices. Operations follow component-wise arithmetic. The zero vector 0 has every component equal to zero.
The magnitude of a vector v = [v₁, v₂, …, vₙ]ᵀ is its Euclidean length: ‖v‖ = √(v₁² + v₂² + ⋯ + vₙ²).
A unit vector has magnitude 1. To normalize v: û = v / ‖v‖. Example: v = [3, 4]ᵀ has ‖v‖ = √(9+16) = 5. Unit vector: û = [3/5, 4/5]ᵀ.
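The length and normalization formulas translate directly into plain Python; a minimal sketch (the helper names `norm` and `normalize` are illustrative, not from any library):

```python
import math

def norm(v):
    """Euclidean length of a vector given as a list of numbers."""
    return math.sqrt(sum(x * x for x in v))

def normalize(v):
    """Return the unit vector v / ||v||."""
    length = norm(v)
    return [x / length for x in v]

v = [3, 4]
print(norm(v))       # 5.0
print(normalize(v))  # [0.6, 0.8]
```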
u + v = [u₁+v₁, u₂+v₂, …]ᵀ
Add corresponding components. Geometrically: place v tip-to-tail after u. Commutative and associative.
cu = [cu₁, cu₂, …]ᵀ
Multiply every component by scalar c. Stretches (|c|>1), shrinks (|c|<1), or flips (c<0) the vector.
u · v = u₁v₁ + u₂v₂ + ⋯ + uₙvₙ
Returns a scalar. Equals ‖u‖‖v‖cos θ where θ is the angle between the vectors. Zero iff orthogonal.
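The component-wise operations above are one-liners in plain Python; a sketch with illustrative helper names:

```python
def add(u, v):
    """Component-wise vector addition."""
    return [a + b for a, b in zip(u, v)]

def scale(c, v):
    """Multiply every component by the scalar c."""
    return [c * x for x in v]

def dot(u, v):
    """Dot product: sum of products of corresponding components."""
    return sum(a * b for a, b in zip(u, v))

u, v = [1, 2], [3, -1]
print(add(u, v))    # [4, 1]
print(scale(2, u))  # [2, 4]
print(dot(u, v))    # 1
# zero dot product means the vectors are orthogonal:
print(dot([1, 0, -1], [1, 2, 1]))  # 0
```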
A linear combination of vectors v₁, v₂, …, vₚ with scalars c₁, c₂, …, cₚ is: c₁v₁ + c₂v₂ + ⋯ + cₚvₚ.
The span of a set of vectors is the collection of all possible linear combinations. Determining whether a vector b is in the span of {v₁, …, vₚ} is equivalent to asking whether the system [v₁ v₂ ⋯ vₚ | b] is consistent.
Add matrices of the same size component-wise. A + B = C where Cᵢⱼ = Aᵢⱼ + Bᵢⱼ. Commutative: A + B = B + A.
Aᵀ flips rows and columns: (Aᵀ)ᵢⱼ = Aⱼᵢ. An m×n matrix becomes n×m. Key property: (AB)ᵀ = BᵀAᵀ.
Multiply A (m×n) by B (n×p) to get C (m×p). Entry Cᵢⱼ = row i of A dotted with column j of B. Requires inner dimensions to match. Not commutative in general.
A = [[1,2],[3,4]] B = [[5,6],[7,8]]
C₁₁ = 1·5 + 2·7 = 19 C₁₂ = 1·6 + 2·8 = 22
C₂₁ = 3·5 + 4·7 = 43 C₂₂ = 3·6 + 4·8 = 50
AB = [[19,22],[43,50]]
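The same product can be computed with a short pure-Python sketch of the row-times-column rule (`matmul` is an illustrative helper, not a library function):

```python
def matmul(A, B):
    """Multiply an m*n matrix A by an n*p matrix B (both lists of rows)."""
    n, p = len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```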
Properties that hold
A(BC) = (AB)C (associative)
A(B+C) = AB + AC (distributive)
AI = IA = A (identity)
Properties that do NOT hold
AB ≠ BA in general
AB = 0 does not imply A=0 or B=0
AB = AC does not imply B = C
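These failures are easy to witness numerically; a minimal sketch using the same 2×2 matrices as above (the zero-product example matrices are illustrative):

```python
def matmul(A, B):
    """Row-times-column matrix product for lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
print(matmul(B, A))  # [[23, 34], [31, 46]]  -> AB != BA

# AB = 0 with neither factor zero:
print(matmul([[1, 0], [0, 0]], [[0, 0], [0, 1]]))  # [[0, 0], [0, 0]]
```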
Gaussian elimination transforms the augmented matrix [A|b] using elementary row operations to solve Ax = b systematically.
R₁ ↔ R₂: swap two rows. Does not change the solution set.
cRᵢ → Rᵢ: multiply a row by a nonzero scalar. Scales but preserves solutions.
Rᵢ + cRⱼ → Rᵢ: replace a row by itself plus a multiple of another row. The most-used operation in elimination.
Row echelon form (REF): solve by back-substitution from the bottom row up.
Reduced row echelon form (RREF): the solution is read directly; no back-substitution needed.
| RREF Result | Consistency | Solutions |
|---|---|---|
| Row [0 0 … 0 | c], c≠0 | Inconsistent | None |
| Every column is a pivot column | Consistent | Exactly one (unique) |
| At least one free variable | Consistent | Infinitely many |
The rank of A is the number of pivot positions in RREF — equivalently, the dimension of the column space Col(A) and also the row space Row(A).
The nullity of A is the dimension of the null space Nul(A) — the number of free variables in the homogeneous system Ax = 0.
For any m×n matrix A, the rank plus the nullity always equals n (the number of columns). Example: a 4×7 matrix with rank 3 has nullity 4.
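The rank can be computed by counting pivots during elimination; a sketch using exact rational arithmetic so no pivot is lost to round-off (`rank` is an illustrative helper):

```python
from fractions import Fraction

def rank(A):
    """Rank of A, found by Gaussian elimination with exact fractions."""
    M = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(M), len(M[0])
    r = 0  # number of pivots found so far
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue  # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2, 0, -1], [0, 0, 1, 3], [0, 0, 0, 0]]
print(rank(A))              # 2
print(len(A[0]) - rank(A))  # nullity = 2, and rank + nullity = 4
```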
A function T: ℝⁿ → ℝᵐ is a linear transformation if for all vectors u, v ∈ ℝⁿ and all scalars c: T(u + v) = T(u) + T(v) and T(cu) = cT(u).
Consequence: T(0) = 0 always. If T fails either property, it is not linear.
Every linear transformation T: ℝⁿ → ℝᵐ is represented by a unique m×n matrix A where T(x) = Ax. To find A: apply T to each standard basis vector eⱼ and place the result as the j-th column.
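That recipe translates directly to code: apply T to each standard basis vector eⱼ and collect the images as columns. A sketch (`standard_matrix` is an illustrative name), using the 90° counterclockwise rotation (x, y) ↦ (−y, x) as the example map:

```python
def standard_matrix(T, n):
    """Build the standard matrix of a linear map T on R^n:
    column j is T applied to the j-th standard basis vector."""
    cols = []
    for j in range(n):
        e = [0] * n
        e[j] = 1
        cols.append(T(e))
    # transpose the list of columns into a list of rows
    return [list(row) for row in zip(*cols)]

rot90 = lambda v: [-v[1], v[0]]  # rotate 90 degrees counterclockwise
print(standard_matrix(rot90, 2))  # [[0, -1], [1, 0]]
```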
Rotation by θ
[[cos θ, −sin θ], [sin θ, cos θ]]
Rotates every vector counterclockwise by angle θ about the origin.
Reflection over x-axis
[[1, 0], [0, −1]]
Flips the y-component. Over y-axis: [[−1,0],[0,1]].
Scaling (dilation)
[[k, 0], [0, k]]
Scales all vectors by factor k. Uniform dilation from origin.
Horizontal shear
[[1, k], [0, 1]]
Slides points horizontally by k times their y-coordinate. Parallelogram effect.
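Each 2×2 matrix above can be applied to a sample vector to see its geometric effect; a plain-Python sketch (`apply` is an illustrative helper):

```python
import math

def apply(M, v):
    """Apply a 2x2 matrix M to a 2-vector v."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

theta = math.pi / 2  # 90 degrees
rotation = [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]
reflect_x = [[1, 0], [0, -1]]
shear = [[1, 2], [0, 1]]  # horizontal shear with k = 2

x, y = apply(rotation, [3, 2])
print(round(x, 10), round(y, 10))  # -2.0 3.0
print(apply(reflect_x, [3, 2]))    # [3, -2]
print(apply(shear, [3, 2]))        # [7, 2]
```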
A vector space is a set V with operations of addition and scalar multiplication satisfying 10 axioms (closure, associativity, commutativity, identity, inverses, distributivity). The most important examples: ℝⁿ, the space of m×n matrices, and the space of polynomials of degree ≤ n.
A subset H of a vector space V is a subspace if it satisfies three conditions:
The zero vector is in H
H is closed under addition: if u, v ∈ H then u + v ∈ H
H is closed under scalar multiplication: if v ∈ H and c ∈ ℝ then cv ∈ H
Span{v₁, …, vₚ} is the set of all linear combinations of the vectors. It is always a subspace — the smallest subspace containing all vᵢ.
A basis is a linearly independent spanning set. Every basis for a given subspace has the same number of vectors, which equals the dimension.
dim(H) = number of vectors in any basis for H. dim(ℝⁿ) = n. The standard basis for ℝⁿ is {e₁, e₂, …, eₙ}.
The null space of A is the solution set of the homogeneous equation Ax = 0. It lives in the input space ℝⁿ and is always a subspace.
Find by solving Ax = 0 via RREF, writing the solution in parametric vector form. Free-variable vectors form a basis for Nul(A).
The column space of A is the span of the columns of A — the set of all vectors Ax as x ranges over ℝⁿ. It lives in the output space ℝᵐ.
Find a basis by row reducing A and selecting the columns corresponding to pivot positions in RREF — but take those columns from the original A.
A = [[1,2,0,−1],[0,0,1,3],[0,0,0,0]]
RREF of A: [[1,2,0,−1],[0,0,1,3],[0,0,0,0]]
Pivots in columns 1 and 3. Free variables: x₂ = s, x₄ = t
From row 1: x₁ = −2s + t From row 2: x₃ = −3t
Nul(A) basis: { [−2,1,0,0]ᵀ , [1,0,−3,1]ᵀ } (nullity = 2)
Col(A) basis: columns 1,3 of A = { [1,0,0]ᵀ , [0,1,0]ᵀ } (rank = 2)
Check: rank + nullity = 2 + 2 = 4 = number of columns ✓
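The null space basis found above can be verified by multiplying A against each basis vector; a quick sketch:

```python
def matvec(A, v):
    """Matrix-vector product: dot each row of A with v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, 2, 0, -1], [0, 0, 1, 3], [0, 0, 0, 0]]
# null space basis vectors from the worked example:
for v in ([-2, 1, 0, 0], [1, 0, -3, 1]):
    print(matvec(A, v))  # [0, 0, 0] for each
```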
Vectors u and v are orthogonal (perpendicular) when their dot product is zero.
Example: [1, 0, −1]ᵀ and [1, 2, 1]ᵀ. Dot product = 1·1 + 0·2 + (−1)·1 = 0 ✓
The orthogonal complement W⊥ of a subspace W is the set of all vectors orthogonal to every vector in W. Key fact: Nul(A) = Row(A)⊥.
The projection of y onto a subspace W with orthogonal basis {u₁, …, uₚ} is the vector in W closest to y: proj_W y = (y·u₁ / u₁·u₁)u₁ + (y·u₂ / u₂·u₂)u₂ + ⋯ + (y·uₚ / uₚ·uₚ)uₚ.
The error vector y − proj_W y is orthogonal to W. This is the basis of the least-squares method for finding best-fit solutions to overdetermined systems.
Converts a basis {x₁, x₂, …, xₚ} into an orthogonal basis {v₁, v₂, …, vₚ} for the same subspace: v₁ = x₁, and for each k ≥ 2, vₖ = xₖ − (xₖ·v₁ / v₁·v₁)v₁ − ⋯ − (xₖ·vₖ₋₁ / vₖ₋₁·vₖ₋₁)vₖ₋₁.
Normalize each vᵢ by dividing by its length to obtain an orthonormal basis.
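A compact sketch of the process with exact rational arithmetic (`gram_schmidt` is an illustrative name; this assumes the input vectors are linearly independent):

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Orthogonalize a list of linearly independent vectors:
    subtract from each vector its projections onto the earlier ones."""
    basis = []
    for x in vectors:
        v = [Fraction(a) for a in x]
        for u in basis:
            c = dot(v, u) / dot(u, u)  # projection coefficient
            v = [a - c * b for a, b in zip(v, u)]
        basis.append(v)
    return basis

v1, v2 = gram_schmidt([[1, 1, 0], [1, 0, 1]])
print([str(a) for a in v2])  # ['1/2', '-1/2', '1']
print(dot(v1, v2))           # 0  (orthogonal, as required)
```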
For an n×n matrix A, a scalar λ is an eigenvalue and a nonzero vector v is a corresponding eigenvector if: Av = λv.
Geometrically: multiplying v by A only scales v by λ; its direction is unchanged (or reversed if λ < 0). The set of all eigenvectors for a given λ, together with the zero vector, forms the eigenspace Eλ = Nul(A − λI).
Form A − λI
Subtract λ from every diagonal entry of A.
Compute det(A − λI) = 0
This gives the characteristic polynomial. Its roots are the eigenvalues.
Find eigenvectors
For each eigenvalue λ, solve (A − λI)v = 0 to find Nul(A − λI).
A = [[4, 1], [2, 3]]
det(A − λI) = (4−λ)(3−λ) − (1)(2)
= λ² − 7λ + 12 − 2 = λ² − 7λ + 10 = (λ−5)(λ−2) = 0
Eigenvalues: λ₁ = 5, λ₂ = 2
For λ=5: (A−5I)v = [[-1,1],[2,-2]]v = 0 → v = [1,1]ᵀ
For λ=2: (A−2I)v = [[2,1],[2,1]]v = 0 → v = [1,−2]ᵀ
Eigenpairs: (5, [1,1]ᵀ) and (2, [1,−2]ᵀ)
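Each eigenpair can be verified by checking Av = λv directly; a quick sketch:

```python
def matvec(A, v):
    """Matrix-vector product: dot each row of A with v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[4, 1], [2, 3]]
for lam, v in [(5, [1, 1]), (2, [1, -2])]:
    # Av and lam*v should agree for a genuine eigenpair
    print(matvec(A, v), [lam * x for x in v])
```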
System: x + 2y − z = 4 | 2x + y + z = 3 | x − y + 2z = −1
Augmented matrix: [[1,2,−1|4],[2,1,1|3],[1,−1,2|−1]]
R₂ → R₂ − 2R₁: [[1,2,−1|4],[0,−3,3|−5],[1,−1,2|−1]]
R₃ → R₃ − R₁: [[1,2,−1|4],[0,−3,3|−5],[0,−3,3|−5]]
R₃ → R₃ − R₂: [[1,2,−1|4],[0,−3,3|−5],[0,0,0|0]]
Free variable: z = t. From R₂: −3y + 3t = −5 → y = (5+3t)/3. From R₁: x = 4 − 2y + t = (2−3t)/3.
Infinitely many solutions (one free variable z = t)
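The parametric solution can be spot-checked for several values of t (with x recovered by back-substituting into row 1: x = (2 − 3t)/3); a sketch using exact fractions:

```python
from fractions import Fraction

def check(x, y, z):
    """True iff (x, y, z) satisfies all three original equations."""
    return (x + 2 * y - z == 4 and
            2 * x + y + z == 3 and
            x - y + 2 * z == -1)

for t in [Fraction(0), Fraction(1), Fraction(-3, 2)]:
    x = (2 - 3 * t) / 3
    y = (5 + 3 * t) / 3
    z = t
    print(check(x, y, z))  # True for every t
```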
T rotates vectors 90° counterclockwise in ℝ².
T(e₁) = T([1,0]ᵀ) = [0,1]ᵀ (right → up)
T(e₂) = T([0,1]ᵀ) = [−1,0]ᵀ (up → left)
Standard matrix: A = [[0,−1],[1,0]]
Verify: A[3,2]ᵀ = [−2,3]ᵀ ✓ (rotated 90° CCW)
Using formula: cos(90°)=0, sin(90°)=1 → [[0,−1],[1,0]] ✓
A = [[1,3,−2,0],[2,6,−5,−2],[0,0,5,10],[0,0,0,0]]
Row reduce to RREF: [[1,3,0,4],[0,0,1,2],[0,0,0,0],[0,0,0,0]]
Pivots in columns 1,3. Free variables: x₂ = s, x₄ = t
x₁ = −3s − 4t x₃ = −2t
x = s[−3,1,0,0]ᵀ + t[−4,0,−2,1]ᵀ
Basis for Nul(A): { [−3,1,0,0]ᵀ, [−4,0,−2,1]ᵀ } — nullity = 2
y = [2, 5, 1]ᵀ u = [1, 2, −1]ᵀ
y · u = 2·1 + 5·2 + 1·(−1) = 2 + 10 − 1 = 11
u · u = 1 + 4 + 1 = 6
proj_u y = (11/6)[1,2,−1]ᵀ = [11/6, 11/3, −11/6]ᵀ
Error: y − proj_u y = [2−11/6, 5−22/6, 1+11/6]ᵀ = [1/6, 8/6, 17/6]ᵀ
Verify orthogonality: error · u = (1/6)(1) + (8/6)(2) + (17/6)(−1) = (1+16−17)/6 = 0 ✓
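The same computation in exact arithmetic, a sketch using Python's fractions module:

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

y = [Fraction(2), Fraction(5), Fraction(1)]
u = [Fraction(1), Fraction(2), Fraction(-1)]

c = dot(y, u) / dot(u, u)  # projection coefficient 11/6
proj = [c * x for x in u]
error = [a - b for a, b in zip(y, proj)]

print([str(f) for f in proj])  # ['11/6', '11/3', '-11/6']
print(dot(error, u))           # 0  (error is orthogonal to u)
```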
A scalar is a single number with magnitude only — like temperature or speed. A vector has both magnitude and direction. In ℝⁿ, a vector is an ordered list of n real numbers, written as a column matrix. For example, v = [3, −1, 2]ᵀ is a vector in ℝ³ pointing in the direction determined by those three components. Geometrically in ℝ² or ℝ³, you can visualize a vector as an arrow from the origin to the point (3, −1, 2). Scalars multiply vectors by stretching or shrinking them; vectors add by placing them tip-to-tail.
To multiply matrix A (m×n) by matrix B (n×p), the number of columns in A must equal the number of rows in B. The result is an m×p matrix C where each entry Cᵢⱼ is the dot product of row i of A with column j of B. That is, Cᵢⱼ = Σₖ Aᵢₖ Bₖⱼ. Example: if A = [[1,2],[3,4]] and B = [[5,6],[7,8]], then C₁₁ = 1·5 + 2·7 = 19, C₁₂ = 1·6 + 2·8 = 22, C₂₁ = 3·5 + 4·7 = 43, C₂₂ = 3·6 + 4·8 = 50. Important: matrix multiplication is generally not commutative — AB ≠ BA.
Gaussian elimination is a systematic algorithm for solving systems of linear equations by transforming the augmented matrix into row echelon form (REF) using three elementary row operations: (1) swap two rows, (2) multiply a row by a nonzero scalar, (3) add a multiple of one row to another. Once in REF, back-substitution gives the solution. Gauss-Jordan elimination continues until reduced row echelon form (RREF) is reached, where each pivot is 1 and is the only nonzero entry in its column — no back-substitution needed. Use it whenever you need to solve Ax = b, find the rank of a matrix, or determine if a system is consistent.
For an m×n matrix A: the column space (or range) Col(A) is the set of all vectors b in ℝᵐ that can be written as Ax for some x — it is spanned by the columns of A. The null space (or kernel) Nul(A) is the set of all vectors x in ℝⁿ such that Ax = 0. Col(A) lives in the output space ℝᵐ; Nul(A) lives in the input space ℝⁿ. The Rank-Nullity Theorem connects them: rank(A) + nullity(A) = n, where rank = dim(Col(A)) and nullity = dim(Nul(A)). Example: a 3×5 matrix of rank 3 has nullity 2, meaning there are 2 free variables and the null space is 2-dimensional.
A linear transformation T: ℝⁿ → ℝᵐ is a function that preserves vector addition and scalar multiplication: T(u + v) = T(u) + T(v) and T(cu) = cT(u). Every linear transformation can be represented by an m×n matrix A such that T(x) = Ax. To find the standard matrix of T, form A by computing T applied to each standard basis vector: the j-th column of A is T(eⱼ). Common geometric linear transformations in ℝ² include rotations, reflections, scaling, and shearing — all expressible as 2×2 matrices.
For an n×n matrix A, a nonzero vector v is an eigenvector with corresponding eigenvalue λ if Av = λv — multiplying by A only stretches or flips v without changing its direction. To find eigenvalues: solve the characteristic equation det(A − λI) = 0. The solutions are the eigenvalues. For each eigenvalue λ, find eigenvectors by solving (A − λI)v = 0, i.e., find Nul(A − λI). Example: for A = [[3,1],[0,2]], det(A − λI) = (3−λ)(2−λ) = 0, giving λ₁ = 3 and λ₂ = 2. Eigenvectors for λ=3: solve [0,1;0,−1]v=0 → v=[1,0]ᵀ. Eigenvalues and eigenvectors are central to diagonalization, principal component analysis, and differential equations.
A basis for a subspace H is a set of vectors that is (1) linearly independent and (2) spans H — every vector in H is a linear combination of basis vectors. The dimension of H is the number of vectors in any basis; all bases for the same subspace have the same number of vectors. To find a basis for the column space of A: row reduce A to RREF and identify the pivot columns. The corresponding columns of the original matrix A form a basis for Col(A). To find a basis for the null space: solve Ax = 0, write the solution in parametric vector form, and the vectors multiplied by free variables form a basis for Nul(A).
Two vectors u and v are orthogonal if their dot product is zero: u · v = 0. An orthogonal set is a collection of mutually orthogonal nonzero vectors; if each vector also has unit length, it is an orthonormal set. The projection of vector y onto a subspace W is the vector ŷ in W closest to y. If {u₁, u₂, …, uₚ} is an orthogonal basis for W, then ŷ = (y·u₁/u₁·u₁)u₁ + (y·u₂/u₂·u₂)u₂ + ⋯ + (y·uₚ/uₚ·uₚ)uₚ. The Gram-Schmidt process converts any basis into an orthogonal (or orthonormal) basis by successively subtracting projections.
A set of vectors {v₁, v₂, …, vₚ} is linearly independent if the only solution to c₁v₁ + c₂v₂ + ⋯ + cₚvₚ = 0 is c₁ = c₂ = ⋯ = cₚ = 0. In other words, no vector in the set can be written as a linear combination of the others. Geometrically: in ℝ², two vectors are linearly independent if they don't point in the same or opposite directions. To test: form a matrix with the vectors as columns and row reduce. If there are no free variables (every column has a pivot), the vectors are linearly independent. If any free variable exists, the set is linearly dependent.
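The pivot test described above can be sketched as a rank computation: put the vectors as rows, eliminate, and compare the pivot count to the number of vectors (`rank` and `independent` are illustrative helpers):

```python
from fractions import Fraction

def rank(rows):
    """Count pivots found by Gaussian elimination with exact fractions."""
    M = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def independent(vectors):
    """Vectors are linearly independent iff none is eliminated to zero."""
    return rank(vectors) == len(vectors)

print(independent([[1, 0], [0, 1]]))                   # True
print(independent([[1, 2], [2, 4]]))                   # False: v2 = 2*v1
print(independent([[1, 0, 1], [0, 1, 1], [1, 1, 2]]))  # False: v3 = v1 + v2
```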