np.linalg (dot, solve, eig)

dot vs @ (matmul)

np.dot and the @ operator perform matrix multiplication for 2-D arrays; for 1-D arrays, dot gives the inner product. @ is preferred for readability and, like np.matmul, supports stacked/batched matmul across higher dimensions.
import numpy as np
A = np.array([[1, 2],
              [3, 4]])  # shape (2,2)
B = np.array([[5, 6],
              [7, 8]])  # shape (2,2)
x = np.array([10, 20]) # shape (2,)
# Matrix @ Matrix
print(A @ B) # [[19 22],
# [43 50]]
# Matrix @ Vector
print(A @ x) # [ 50 110]
# dot with 1-D => inner product
u = np.array([1, 2, 3])
v = np.array([4, 5, 6])
print(np.dot(u, v)) # 32
# Batched matmul (3 matrices of 2x2 each)
stack = np.stack([A, B, np.eye(2)])
vecs = np.array([[1,1],[2,0],[0,3]]) # (3,2)
print(stack @ vecs[..., None]) # (3,2,1)
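One caveat: for arrays with more than two dimensions, np.dot and @ are not equivalent. A minimal sketch of the shape difference (the array contents are arbitrary, only the shapes matter):
import numpy as np

A3 = np.ones((3, 2, 2))  # a stack of three 2x2 matrices
B3 = np.ones((3, 2, 2))

# @ / matmul: one matrix product per pair in the stack
print((A3 @ B3).shape)       # (3, 2, 2)

# dot: sum-product over the last axis of A3 and the
# second-to-last axis of B3 -> a much bigger array
print(np.dot(A3, B3).shape)  # (3, 2, 3, 2)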
np.linalg.solve

Solve A x = b without computing the inverse (more stable and faster). For multiple right-hand sides, pass a 2-D b.
A = np.array([[3., 1.],
              [1., 2.]])
b = np.array([9., 8.])
x = np.linalg.solve(A, b) # exact solution
print(x) # [2. 3.]
# Multiple RHS
B = np.array([[9., 1.],
              [8., 0.]])  # two RHS columns
X = np.linalg.solve(A, B)
print(X) # columns are solutions for each RHS
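solve also broadcasts over leading dimensions, so a whole stack of systems can be solved in one call. A minimal sketch (the second system is arbitrary, chosen just for illustration):
import numpy as np

A_stack = np.array([[[3., 1.], [1., 2.]],
                    [[4., 1.], [2., 3.]]])  # (2, 2, 2): two systems
b_stack = np.array([[9., 8.],
                    [9., 13.]])             # (2, 2): one RHS per system

# Trailing axis makes each b an explicit column vector: (2, 2, 1)
X = np.linalg.solve(A_stack, b_stack[..., None])
print(X[..., 0])  # [[2.  3. ]
                  #  [1.4 3.4]]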
Using np.linalg.inv(A) @ b is numerically weaker and slower than solve. Prefer solve for square systems, and lstsq for least squares.
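To see the difference in practice, try an ill-conditioned system. A small illustration using a Hilbert matrix, assumed here only because it is notoriously ill-conditioned:
import numpy as np

n = 12
H = 1.0 / (np.arange(n)[:, None] + np.arange(n) + 1.0)  # Hilbert matrix
b = H @ np.ones(n)  # constructed so the exact solution is all ones

err_solve = np.linalg.norm(np.linalg.solve(H, b) - 1.0)
err_inv = np.linalg.norm(np.linalg.inv(H) @ b - 1.0)
print(err_solve, err_inv)  # solve's error is typically the smaller one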
eig and eigh

eig works for general (possibly non-symmetric) matrices; results may be complex. For real symmetric (or Hermitian) matrices, use eigh (faster and numerically better).
# General matrix
G = np.array([[0., -1.],
              [1., 0.]])
vals, vecs = np.linalg.eig(G)
print(vals) # complex eigenvalues
print(vecs) # columns are eigenvectors
# Symmetric matrix -> use eigh
S = np.array([[2., 1.],
              [1., 2.]])
w, v = np.linalg.eigh(S)
print(w) # sorted ascending
print(v) # orthonormal eigenvectors (columns)
# Check: S v = v diag(w)
print(np.allclose(S @ v, v * w))
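When only the eigenvalues are needed, np.linalg.eigvals and np.linalg.eigvalsh skip the eigenvector computation entirely. You can also rebuild a symmetric matrix from its spectral decomposition; a quick sketch:
import numpy as np

S = np.array([[2., 1.],
              [1., 2.]])

print(np.linalg.eigvalsh(S))  # eigenvalues only: [1. 3.]

# Spectral reconstruction: S = V diag(w) V^T
w, v = np.linalg.eigh(S)
print(np.allclose(v @ np.diag(w) @ v.T, S))  # True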
Norms measure vector length or matrix size; condition number estimates numerical sensitivity.
M = np.array([[1., 2.],
              [3., 4.]])
# Vector norms
x = np.array([3., 4.])
print(np.linalg.norm(x, ord=2)) # 5.0
# Matrix norms
print(np.linalg.norm(M, ord=2)) # spectral norm
print(np.linalg.cond(M)) # condition number
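norm accepts several other orders; a few common choices for reference:
import numpy as np

x = np.array([3., 4.])
M = np.array([[1., 2.],
              [3., 4.]])

print(np.linalg.norm(x, ord=1))       # 7.0 (sum of absolute values)
print(np.linalg.norm(x, ord=np.inf))  # 4.0 (max absolute value)
print(np.linalg.norm(M, 'fro'))       # Frobenius norm: sqrt(1+4+9+16)
print(np.linalg.norm(M, ord=1))       # 6.0 (max absolute column sum)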
When A is not square or not full rank, use np.linalg.lstsq(A, b, rcond=None) to minimize ||A x - b||. It returns the solution, residuals, rank, and singular values.
A = np.array([[1., 1.],
              [1., 2.],
              [1., 3.]])  # (3,2)
b = np.array([1., 2., 2.5])
x, residuals, rank, s = np.linalg.lstsq(A, b, rcond=None)
print(x) # best-fit [intercept, slope], matching the column order [ones, x]
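It is worth sanity-checking a fit by computing the predictions and the residual sum of squares yourself; a short follow-up on the same data:
import numpy as np

A = np.array([[1., 1.],
              [1., 2.],
              [1., 3.]])
b = np.array([1., 2., 2.5])
x, residuals, rank, s = np.linalg.lstsq(A, b, rcond=None)

y_hat = A @ x                        # fitted values
print(np.linalg.norm(b - y_hat)**2)  # residual sum of squares
print(residuals)                     # same quantity, as returned by lstsq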
det, inv, and pinv are available, but use them judiciously.
A = np.array([[2., 1.],
              [5., 3.]])
print(np.linalg.det(A))
print(np.linalg.inv(A)) # use solve instead for systems
print(np.linalg.pinv(A)) # Moore-Penrose pseudo-inverse (SVD-based)
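One legitimate use of pinv: for rectangular or rank-deficient systems, pinv(A) @ b yields the same minimum-norm least-squares solution that lstsq computes. A quick check:
import numpy as np

A = np.array([[1., 1.],
              [1., 2.],
              [1., 3.]])  # tall, not square
b = np.array([1., 2., 2.5])

x_pinv = np.linalg.pinv(A) @ b
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_pinv, x_lstsq))  # True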
Practical tips
- Check A.shape vs B.shape before matmul.
- A large cond(A) means solutions may be unstable; prefer least squares or regularization.
- Use float64 for stability; integers get cast, but be explicit when needed.
- Use @ / matmul to handle many systems at once (when shapes allow).

Exercises
# 1) Verify Ax = b using solve
A = np.array([[4., 1.],
              [2., 3.]])
b = np.array([9., 13.])
x = np.linalg.solve(A, b)
print(np.allclose(A @ x, b))
# 2) Show that inv(A) @ b matches solve here (though it is slower and less stable in general)
x_slow = np.linalg.inv(A) @ b
print(np.allclose(x, x_slow))
# 3) Use eigh for a symmetric matrix and verify orthonormality of eigenvectors
S = np.array([[3., 2.],
              [2., 3.]])
w, v = np.linalg.eigh(S)
print(np.allclose(v.T @ v, np.eye(2)))
# 4) Fit a line y = a + b x via least squares
X = np.c_[np.ones(5), np.arange(5)]
y = np.array([1., 2., 1.5, 3., 2.5])
params, *_ = np.linalg.lstsq(X, y, rcond=None)
print(params)
# 5) Compute condition numbers for two matrices; compare stability
M1 = np.array([[1., 0.], [0., 1.]])
M2 = np.array([[1., 1.], [1., 1.0001]])
print(np.linalg.cond(M1), np.linalg.cond(M2))