Before debugging any shape error, remember four rules:
- Check .shape before operations.
- Use np.newaxis / None or expand_dims to add singleton dims.
- Use keepdims=True to preserve alignment after reductions.
- reshape needs the total number of elements to remain constant.
import numpy as np
a = np.arange(10) # 10 elements
# a.reshape(3, 4) # ValueError: 3*4=12 != 10
print(a.reshape(2, 5).shape) # (2, 5)
print(a.reshape(-1, 5).shape) # (2, 5) (-1 lets NumPy infer)
“operands could not be broadcast together with shapes …” means an aligned pair of dims differ and neither is 1.
A = np.zeros((3,4))
b = np.array([10,20,30]) # (3,)
# A + b -> ValueError (align right: 4 vs 3)
b_col = b[:, None] # (3,1) -> expands across columns
print((A + b_col).shape) # (3,4)
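If you want to know whether two shapes are compatible before an expression runs, np.broadcast_shapes (available since NumPy 1.20) computes the broadcast result, or raises the same ValueError up front. A minimal sketch:

```python
import numpy as np

# Compatible: (3,1) stretches across the 4 columns.
print(np.broadcast_shapes((3, 4), (3, 1)))   # (3, 4)

# Compatible: (4,) aligns with the trailing axis of (2,3,4).
print(np.broadcast_shapes((2, 3, 4), (4,)))  # (2, 3, 4)

# Incompatible: aligned from the right, 4 vs 3 and neither is 1.
try:
    np.broadcast_shapes((3, 4), (3,))
except ValueError as e:
    print("incompatible:", e)
```

This is handy as a cheap sanity check at function boundaries, before large arrays are actually allocated.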
A reduction removes a dimension; use keepdims=True if you will broadcast the result back.
X = np.arange(12, dtype=float).reshape(3,4) # (rows, cols)
mu = X.mean(axis=0) # (4,)
print((X - mu).shape) # (3,4)
mu2 = X.mean(axis=1, keepdims=True) # (3,1)
print((X - mu2).shape) # (3,4) column-wise broadcast
np.concatenate joins along an existing axis; np.stack adds a new axis.
a = np.ones((2,3))
b = np.zeros((2,3))
print(np.concatenate([a,b], axis=0).shape) # (4,3)
print(np.concatenate([a,b], axis=1).shape) # (2,6)
print(np.stack([a,b], axis=0).shape) # (2,2,3) new axis at 0
print(np.stack([a,b], axis=1).shape) # (2,2,3) -> different layout
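One related pitfall: np.concatenate requires all inputs to have the same rank. Mixing a 2D array with a 1D array fails even when the sizes look compatible; add the missing axis first. A small sketch:

```python
import numpy as np

a = np.ones((2, 3))
v = np.array([7, 8, 9])          # (3,) -- rank 1, while a is rank 2

# np.concatenate([a, v], axis=0) -> ValueError: different number of dimensions
row = v[None, :]                 # (1, 3) -- ranks now match
print(np.concatenate([a, row], axis=0).shape)  # (3, 3)
```

The same trick (insert a singleton axis, then concatenate) covers most "must have same number of dimensions" errors.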
For A @ B (2D), inner dims must match: (m, k) @ (k, n) → (m, n).
A = np.arange(6).reshape(2,3) # (2,3)
B = np.arange(9).reshape(3,3) # (3,3)
print((A @ B).shape) # (2,3)
# (2,3) @ (2,) -> error; fix by making (3,)
v = np.array([1,2,3])
print((A @ v).shape) # (2,)
squeeze() removes size-1 dims; expand_dims / newaxis add them. Misplacing a singleton dim leads to alignment errors.
y = np.arange(6).reshape(6,1) # (6,1)
y_flat = np.squeeze(y) # (6,)
x = np.arange(6) # (6,)
x_col = x[:, None] # (6,1) using newaxis
x_row = x[None, :] # (1,6)
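A subtle hazard with bare squeeze(): it drops every size-1 axis, including a batch axis that happens to be 1. Passing the axis argument makes the intent explicit and raises an error if that axis is not size 1. A short sketch:

```python
import numpy as np

batch = np.zeros((1, 6, 1))               # (batch, features, channel)

print(np.squeeze(batch).shape)            # (6,) -- batch axis silently gone
print(np.squeeze(batch, axis=2).shape)    # (1, 6) -- only the channel axis removed

try:
    np.squeeze(batch, axis=1)             # axis 1 has size 6, not 1
except ValueError as e:
    print("cannot squeeze:", e)
```

Prefer the explicit axis form in pipelines where a batch of size 1 is a legitimate input.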
After .T or complex slicing, shapes change and memory may become non-contiguous; some subsequent operations then make a copy.
M = np.arange(12).reshape(3,4)
MT = M.T # (4,3)
print(M.shape, MT.shape)
# If you need contiguous memory:
MTc = np.ascontiguousarray(MT)
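To see whether a copy was actually needed, you can inspect the flags attribute: a transpose is a view with swapped strides, not C-contiguous memory. A quick check:

```python
import numpy as np

M = np.arange(12).reshape(3, 4)
MT = M.T                          # view: same data, swapped strides

print(M.flags['C_CONTIGUOUS'])    # True
print(MT.flags['C_CONTIGUOUS'])   # False -- rows are no longer adjacent in memory

MTc = np.ascontiguousarray(MT)    # forces a contiguous copy when needed
print(MTc.flags['C_CONTIGUOUS'])  # True
```

This matters when handing buffers to C extensions or libraries that require contiguous input.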
Match the trailing dims; broadcast singleton dims when needed.
T = np.arange(2*3*4).reshape(2,3,4) # (depth, rows, cols)
bias = np.array([1,2,3,4]) # (4,)
print((T + bias).shape) # (2,3,4)
depth_bias = np.array([10, 20])[:, None, None] # (2,1,1)
print((T + depth_bias).shape) # (2,3,4)
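The middle axis needs the same treatment: a per-row bias of shape (3,) will not align against the trailing cols axis, so give it its own singleton dim. A small sketch:

```python
import numpy as np

T = np.arange(2 * 3 * 4).reshape(2, 3, 4)    # (depth, rows, cols)

row_bias = np.array([100, 200, 300])         # (3,) -- meant for the rows axis
# T + row_bias -> ValueError (aligns right: 4 vs 3)
print((T + row_bias[:, None]).shape)         # (3,1) broadcasts to (2,3,4)
```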
Print shapes after each transform with print('name', arr.shape), and assert expectations early with assert arr.shape[1] == d.
# ValueError: cannot reshape array of size X into shape (a,b)
# → Check a*b == X; use -1 to infer one dim.
# ValueError: all the input arrays must have same number of dimensions
# → For concatenate: ensure same rank; use expand_dims/reshape.
# ValueError: operands could not be broadcast together with shapes ...
# → Align from right; insert singleton dims with None/newaxis.
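Each of the three errors above is easy to reproduce deliberately, which is a useful way to learn to recognize them. A sketch that triggers and catches all three:

```python
import numpy as np

a = np.arange(10)
try:
    a.reshape(3, 4)                                # 3*4 = 12 != 10
except ValueError as e:
    print("reshape:", e)

try:
    np.concatenate([np.ones((2, 3)), np.ones(3)])  # rank 2 vs rank 1
except ValueError as e:
    print("concatenate:", e)

try:
    np.zeros((3, 4)) + np.zeros(3)                 # align right: 4 vs 3
except ValueError as e:
    print("broadcast:", e)
```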
import numpy as np
# 1) Fix the broadcast to add [1,2,3] to each row of a (5,3) matrix.
A = np.arange(15).reshape(5,3)
row = np.array([1,2,3]) # (3,)
print(A + row) # should work
# 2) Concatenate three (10,4) blocks vertically, then compute column means.
blocks = [np.full((10,4), i) for i in range(3)]
big = np.concatenate(blocks, axis=0) # (30,4)
print(big.mean(axis=0))
# 3) Make a column vector from [10,20,30] and add to a (3,5) array.
col = np.array([10,20,30])[:, None] # (3,1)
B = np.arange(15).reshape(3,5)
print(B + col)
# 4) Turn a flat array of 24 items into a (2,3,4) tensor and subtract a (4,) bias.
t = np.arange(24)
T = t.reshape(2,3,4)
bias = np.array([1,2,3,4])
print(T - bias)