Introduction to Vector Spaces
A vector space (also called a linear space) is a fundamental algebraic structure in mathematics that formalizes the concept of linear combinations. Vector spaces consist of vectors, which can be added together and multiplied by scalars (numbers), subject to certain axioms.
Vector spaces provide the mathematical framework for linear algebra and serve as the foundation for more advanced structures like Hilbert spaces in functional analysis. They appear throughout mathematics, physics, engineering, and computer science wherever linearity plays a role.
A vector space over a field $F$ (usually $\mathbb{R}$ or $\mathbb{C}$) is a set $V$ equipped with two operations:
- Vector addition: $+: V \times V \to V$, written as $(u, v) \mapsto u + v$
- Scalar multiplication: $\cdot: F \times V \to V$, written as $(a, v) \mapsto av$
These operations must satisfy the following 8 axioms for all $u, v, w \in V$ and all $a, b \in F$:
The 8 Vector Space Axioms
| Axiom Name | Mathematical Statement | Interpretation |
|---|---|---|
| 1. Commutativity of addition | $u + v = v + u$ | Order doesn't matter for addition |
| 2. Associativity of addition | $(u + v) + w = u + (v + w)$ | Grouping doesn't matter for addition |
| 3. Identity element of addition | $\exists 0 \in V: v + 0 = v$ | Zero vector exists |
| 4. Inverse elements of addition | $\forall v \in V, \exists (-v): v + (-v) = 0$ | Every vector has an additive inverse |
| 5. Compatibility of scalar multiplication | $a(bv) = (ab)v$ | Scalar multiplication is associative |
| 6. Identity element of scalar multiplication | $1v = v$ | Multiplying by 1 gives the same vector |
| 7. Distributivity of scalar addition | $(a + b)v = av + bv$ | Scalar multiplication distributes over scalar addition |
| 8. Distributivity of vector addition | $a(u + v) = au + av$ | Scalar multiplication distributes over vector addition |
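As a sanity check, the eight axioms can be verified numerically for sample vectors in $\mathbb{R}^3$; this is a minimal NumPy sketch, not a proof (the axioms hold for all vectors, not just these):

```python
import numpy as np

# Spot-check the eight vector space axioms for R^3 with concrete vectors.
u = np.array([1.0, 2.0, 3.0])
v = np.array([-4.0, 0.5, 2.0])
w = np.array([0.0, 1.0, -1.0])
a, b = 2.0, -3.0

assert np.allclose(u + v, v + u)                  # 1. commutativity
assert np.allclose((u + v) + w, u + (v + w))      # 2. associativity
assert np.allclose(v + np.zeros(3), v)            # 3. additive identity
assert np.allclose(v + (-v), np.zeros(3))         # 4. additive inverse
assert np.allclose(a * (b * v), (a * b) * v)      # 5. compatibility
assert np.allclose(1.0 * v, v)                    # 6. scalar identity
assert np.allclose((a + b) * v, a * v + b * v)    # 7. distributivity over scalar addition
assert np.allclose(a * (u + v), a * u + a * v)    # 8. distributivity over vector addition
```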
Examples of Vector Spaces
1. Euclidean Space $\mathbb{R}^n$
The space of $n$-tuples of real numbers: $\mathbb{R}^n = \{(x_1, x_2, \ldots, x_n): x_i \in \mathbb{R}\}$ with componentwise addition and scalar multiplication.
Dimension: $n$
2. Function Space $F(X, \mathbb{R})$
All functions from a set $X$ to $\mathbb{R}$, with pointwise addition $(f+g)(x) = f(x)+g(x)$ and scalar multiplication $(af)(x) = af(x)$.
Dimension: Infinite (if $X$ is infinite)
3. Polynomial Space $P_n(\mathbb{R})$
All polynomials of degree at most $n$: $P_n(\mathbb{R}) = \{a_0 + a_1x + \cdots + a_nx^n: a_i \in \mathbb{R}\}$ with usual polynomial operations.
Dimension: $n+1$ (basis: $\{1, x, x^2, \ldots, x^n\}$)
4. Matrix Space $M_{m \times n}(\mathbb{R})$
All $m \times n$ matrices with real entries, with matrix addition and scalar multiplication.
Dimension: $mn$ (basis: matrices $E_{ij}$ with 1 in position $(i,j)$)
Subspaces: Vector Spaces Within Vector Spaces
A subset $W \subseteq V$ of a vector space $V$ is a subspace if:
- Contains zero vector: $0 \in W$
- Closed under addition: $u, v \in W \Rightarrow u + v \in W$
- Closed under scalar multiplication: $u \in W, a \in F \Rightarrow au \in W$
A subspace is itself a vector space under the operations inherited from $V$.
Consider $\mathbb{R}^3$ with standard operations. The following are subspaces:
- The zero subspace: $\{0\}$ (dimension 0)
- Lines through the origin: $\{t(a,b,c): t \in \mathbb{R}\}$ for fixed $(a,b,c) \neq 0$ (dimension 1)
- Planes through the origin: $\{(x,y,z): ax + by + cz = 0\}$ for fixed $(a,b,c) \neq 0$ (dimension 2)
- The whole space $\mathbb{R}^3$ itself (dimension 3)
These correspond to all possible subspaces of $\mathbb{R}^3$: 0D, 1D, 2D, and 3D.
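The three subspace conditions can be spot-checked numerically for one of the planes above; the plane $x + 2y - z = 0$ here is an arbitrary example:

```python
import numpy as np

# The plane x + 2y - z = 0 through the origin in R^3:
# spot-check the three subspace conditions for sample vectors.
normal = np.array([1.0, 2.0, -1.0])
in_plane = lambda v: np.isclose(normal @ v, 0.0)

u = np.array([1.0, 0.0, 1.0])   # 1 + 0 - 1 = 0
v = np.array([0.0, 1.0, 2.0])   # 0 + 2 - 2 = 0
assert in_plane(np.zeros(3))    # contains the zero vector
assert in_plane(u) and in_plane(v)
assert in_plane(u + v)          # closed under addition
assert in_plane(3.5 * u)        # closed under scalar multiplication
```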
Linear Independence, Span, and Basis
Linear Combination
A vector $v$ is a linear combination of vectors $v_1, \ldots, v_k$ if $v = a_1v_1 + \cdots + a_kv_k$ for some scalars $a_1, \ldots, a_k$.
Span
The span of a set $S = \{v_1, \ldots, v_k\}$ is the set of all linear combinations: $\text{span}(S) = \{a_1v_1 + \cdots + a_kv_k: a_i \in F\}$.
Linear Independence
Vectors $v_1, \ldots, v_k$ are linearly independent if $a_1v_1 + \cdots + a_kv_k = 0$ implies $a_1 = \cdots = a_k = 0$. Otherwise, they're linearly dependent.
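In practice, linear independence of vectors in $\mathbb{R}^n$ is usually tested by stacking them as columns of a matrix and checking the rank; a short NumPy sketch with deliberately dependent vectors:

```python
import numpy as np

# Linear independence test: stack vectors as matrix columns and compare
# the rank to the number of vectors.
v1, v2, v3 = [1, 0, 2], [0, 1, 1], [1, 1, 3]   # v3 = v1 + v2, so dependent
A = np.column_stack([v1, v2, v3])
independent = np.linalg.matrix_rank(A) == A.shape[1]
print(independent)   # False
```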
Basis
A basis for a vector space $V$ is a set $B \subseteq V$ such that:
- $B$ is linearly independent
- $\text{span}(B) = V$ (i.e., $B$ spans $V$)
Every vector in $V$ can be written uniquely as a linear combination of basis vectors.
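Finding that unique representation amounts to solving a linear system: with the basis vectors as columns of a matrix $B$, the coordinates of $v$ are the solution of $Bc = v$. A small sketch with an arbitrary basis of $\mathbb{R}^3$:

```python
import numpy as np

# Coordinates of a vector in a basis of R^3, found by solving B c = v;
# uniqueness follows because B (basis vectors as columns) is invertible.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
v = np.array([2.0, 3.0, 1.0])
c = np.linalg.solve(B, v)       # the unique coordinate vector
assert np.allclose(B @ c, v)    # recombining the basis recovers v
```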
The Dimension Theorem
Let $V$ be a vector space. If $V$ has a finite basis, then:
- Every basis of $V$ has the same number of elements
- This common number is called the dimension of $V$, denoted $\dim V$
- If $V = \{0\}$, then $\dim V = 0$ (by convention)
If $V$ has no finite basis, we say $V$ is infinite-dimensional.
The key to proving all bases have the same size is the Steinitz exchange lemma:
If $\{v_1, \ldots, v_m\}$ spans $V$ and $\{w_1, \ldots, w_n\}$ is linearly independent, then $n \leq m$ and we can replace $n$ of the $v_i$'s with the $w_j$'s to get a new spanning set.
Consequences:
- Any two finite bases have the same number of elements
- Any linearly independent set can be extended to a basis
- Any spanning set can be reduced to a basis
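The last consequence can be made algorithmic: walk through a spanning set and keep each vector that raises the rank of those kept so far. A minimal sketch in $\mathbb{R}^2$ (the helper `reduce_to_basis` is illustrative, not a library function):

```python
import numpy as np

# Reduce a spanning set to a basis by greedily keeping each vector
# that increases the rank of the vectors kept so far.
def reduce_to_basis(vectors):
    kept = []
    for v in vectors:
        candidate = np.column_stack(kept + [v])
        if np.linalg.matrix_rank(candidate) > len(kept):
            kept.append(v)
    return kept

spanning = [np.array([1.0, 0.0]),
            np.array([2.0, 0.0]),   # redundant: a multiple of the first
            np.array([0.0, 1.0])]
basis = reduce_to_basis(spanning)
print(len(basis))   # 2, matching dim R^2
```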
Linear Transformations and Matrices
A function $T: V \to W$ between vector spaces is a linear transformation if:
- $T(u + v) = T(u) + T(v)$ for all $u, v \in V$ (additivity)
- $T(av) = aT(v)$ for all $a \in F, v \in V$ (homogeneity)
Equivalently: $T(au + bv) = aT(u) + bT(v)$ for all $a, b \in F$ and $u, v \in V$.
Given bases $\{v_1, \ldots, v_n\}$ for $V$ and $\{w_1, \ldots, w_m\}$ for $W$, every linear transformation $T: V \to W$ corresponds to an $m \times n$ matrix $A = [a_{ij}]$ whose $j$-th column is $T(v_j)$ expressed in the $w$-basis.
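The column-by-column construction can be sketched in NumPy; the example map (a 90-degree rotation of the plane, with the standard basis on both sides) is an arbitrary choice:

```python
import numpy as np

# Build the matrix of a linear map T: R^2 -> R^2 by applying T to the
# standard basis vectors: column j of the matrix is T(e_j).
def T(v):
    x, y = v
    return np.array([-y, x])   # rotation by 90 degrees

e = np.eye(2)
A = np.column_stack([T(e[:, j]) for j in range(2)])
v = np.array([3.0, 1.0])
assert np.allclose(A @ v, T(v))   # the matrix reproduces T on any vector
```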
Inner Product Spaces: Adding Geometry to Algebra
An inner product space is a vector space equipped with an additional structure that allows measurement of lengths and angles.
An inner product on a vector space $V$ over $F$ ($\mathbb{R}$ or $\mathbb{C}$) is a function $\langle \cdot, \cdot \rangle: V \times V \to F$ satisfying:
- Conjugate symmetry: $\langle u, v \rangle = \overline{\langle v, u \rangle}$
- Linearity in first argument: $\langle au + bv, w \rangle = a\langle u, w \rangle + b\langle v, w \rangle$
- Positive definiteness: $\langle v, v \rangle \geq 0$, with equality iff $v = 0$
An inner product induces a norm: $\|v\| = \sqrt{\langle v, v \rangle}$.
On $\mathbb{R}^n$: $\langle x, y \rangle = \sum_{i=1}^n x_i y_i$ (dot product)
On $\mathbb{C}^n$: $\langle z, w \rangle = \sum_{i=1}^n z_i \overline{w_i}$
On $L^2([a,b])$: $\langle f, g \rangle = \int_a^b f(x)\overline{g(x)} dx$
On matrix space $M_{n}(\mathbb{C})$: $\langle A, B \rangle = \text{tr}(AB^*)$ (trace of $A$ times conjugate transpose of $B$)
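The $\mathbb{C}^n$ inner product and its induced norm can be exercised numerically, including a check of the Cauchy-Schwarz inequality $|\langle z, w \rangle| \leq \|z\|\,\|w\|$ (the sample vectors are arbitrary; note that `np.vdot` conjugates its first argument, so the argument order below matches the convention $\langle z, w \rangle = \sum_i z_i \overline{w_i}$):

```python
import numpy as np

# Complex inner product <z, w> = sum_i z_i * conj(w_i) and induced norm.
z = np.array([1 + 2j, 3 - 1j])
w = np.array([2 - 1j, 1j])
inner = np.vdot(w, z)                          # np.vdot conjugates its FIRST argument
norm = lambda v: np.sqrt(np.vdot(v, v).real)

assert np.isclose(np.vdot(z, z).imag, 0.0)     # <z, z> is real (and nonnegative)
assert abs(inner) <= norm(z) * norm(w)         # Cauchy-Schwarz inequality
```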
Applications of Vector Space Theory
The system $Ax = b$ where $A$ is $m \times n$ can be understood in vector space terms:
- Column space: $\text{Col}(A) = \{Ax: x \in \mathbb{R}^n\} \subseteq \mathbb{R}^m$
- Null space: $\text{Null}(A) = \{x \in \mathbb{R}^n: Ax = 0\}$
- Solution exists iff $b \in \text{Col}(A)$
- Rank-nullity theorem: $\dim(\text{Col}(A)) + \dim(\text{Null}(A)) = n$
This geometric perspective explains solution structure and dimensions.
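The rank-nullity theorem can be confirmed numerically for a concrete matrix, counting the null-space dimension via the singular values (the matrix is an arbitrary example with a dependent row):

```python
import numpy as np

# Rank-nullity check for a concrete 3x4 matrix.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])   # third row = first + second, so rank 2
n = A.shape[1]
rank = np.linalg.matrix_rank(A)
# dim Null(A) = n minus the number of nonzero singular values.
s = np.linalg.svd(A, compute_uv=False)
nullity = n - np.sum(s > 1e-10)
assert rank + nullity == n         # rank-nullity: 2 + 2 = 4
```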
In quantum mechanics, the state of a system is represented by a vector in a complex Hilbert space (complete inner product space).
- States are unit vectors: $\|\psi\| = 1$
- Observables are self-adjoint operators on the space
- Superposition principle: Linear combinations of states are valid states
- Inner products give transition probabilities: $|\langle \phi|\psi \rangle|^2$
This is a direct application of infinite-dimensional vector space theory.
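A finite-dimensional toy version of these ideas: a qubit state lives in $\mathbb{C}^2$, and the transition probability between two states is $|\langle \phi|\psi \rangle|^2$. A minimal sketch (the two sample states are arbitrary):

```python
import numpy as np

# Toy qubit: states as unit vectors in C^2, transition probability
# |<phi|psi>|^2 via the complex inner product.
psi = np.array([1.0, 0.0], dtype=complex)                # |0>
phi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)   # equal superposition
assert np.isclose(np.vdot(psi, psi).real, 1.0)           # states are unit vectors
prob = abs(np.vdot(phi, psi)) ** 2                       # transition probability
assert np.isclose(prob, 0.5)
```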
Vector spaces model images, signals, and geometric transformations:
- Image representation: Pixels as vectors in $\mathbb{R}^{mn}$ for $m \times n$ image
- Transformations: Rotation, scaling, shearing as linear transformations
- Compression: Projection onto lower-dimensional subspaces (JPEG, MPEG)
- Computer vision: Eigenfaces for facial recognition use principal component analysis
Linear algebra operations are implemented efficiently using vector space algorithms.
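The compression idea above can be sketched with a truncated SVD: keeping only the top $k$ singular directions projects the data onto a $k$-dimensional column space (random data stands in for an image here):

```python
import numpy as np

# Compression sketch: approximate a matrix (e.g. an image) by projecting
# onto the subspace spanned by its top-k singular directions.
rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))          # stand-in for an 8x8 image
U, s, Vt = np.linalg.svd(img, full_matrices=False)
k = 3
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
assert np.linalg.matrix_rank(approx) <= k  # columns lie in a k-dim subspace
```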
Connection to Hilbert Spaces and Functional Analysis
Vector spaces form the foundation for more advanced structures in functional analysis:
Hierarchy of Mathematical Structures:
- Vector Spaces (this page) → Algebraic structure with addition and scalar multiplication
- Normed Spaces → Vector spaces with a notion of length (norm)
- Inner Product Spaces → Vector spaces with angle measurement (inner product)
- Hilbert Spaces → Complete inner product spaces (crucial for quantum mechanics)
- Banach Spaces → Complete normed spaces
The Cauchy-Schwarz inequality holds in any inner product space and is fundamental to understanding angles and orthogonality in vector spaces.
Advanced Topics: Infinite-Dimensional Vector Spaces
While finite-dimensional vector spaces are well-understood, infinite-dimensional spaces present additional challenges and rich structure.
Hamel vs. Schauder Bases
Hamel basis: Linear combinations are finite sums. Every vector space has a Hamel basis (this requires the axiom of choice).
Schauder basis: Allows infinite linear combinations with convergence. Exists only in certain topological vector spaces.
Dual Spaces
The dual space $V^*$ consists of all linear functionals $f: V \to F$. For finite-dimensional $V$, $\dim V^* = \dim V$. For infinite-dimensional spaces, $V^*$ is typically larger.
Quotient Spaces
Given a subspace $W \subseteq V$, the quotient space $V/W$ consists of equivalence classes $v + W = \{v + w: w \in W\}$. Dimension: $\dim(V/W) = \dim V - \dim W$ (when finite).
Historical Development and Importance
The concept of vector spaces evolved from several mathematical traditions:
- 1844: Hermann Grassmann introduces the concept of linear extension (Ausdehnungslehre)
- 1888: Giuseppe Peano gives the first axiomatic definition of vector spaces
- 1920s: Stefan Banach, David Hilbert, and others develop functional analysis based on infinite-dimensional vector spaces
- 1930: Bartel van der Waerden's "Moderne Algebra" popularizes abstract vector space approach
- 1940s: Applications explode in quantum mechanics, control theory, and computer science
- Today: Vector spaces are fundamental to machine learning (feature spaces), data science, and quantum computing
❓ Vector Spaces FAQ
What's the difference between vector space and field?
A field (like $\mathbb{R}$ or $\mathbb{C}$) is a set with addition, multiplication, and division (except by zero). A vector space is a set with addition and scalar multiplication by elements of a field. The scalars come from the field, but vectors are different objects.
Can a vector space be empty?
No. Every vector space must contain the zero vector (axiom 3). The smallest vector space is $\{0\}$ (the zero space), which has dimension 0.
How many bases does a vector space have?
Infinitely many (except for the zero space). For $\mathbb{R}^n$, there are uncountably many different bases. Any invertible $n \times n$ matrix gives a basis by its columns.
What is the difference between dimension and rank?
Dimension refers to a vector space: $\dim V$ is the number of vectors in any basis. Rank refers to a linear transformation $T$ or matrix $A$: $\text{rank}(T) = \dim(\text{image}(T))$. For a matrix, rank is the dimension of its column space.