nn.Module in PyTorch

nn.Module is the base class for all neural networks in PyTorch. It helps you organize layers, parameters, and the forward pass in a clean, modular way. On this page, we'll define a simple feedforward neural network using nn.Module.
The custom network class must inherit from nn.Module and implement:

__init__() – for defining layers.
forward() – for implementing forward propagation logic.

import torch
import torch.nn as nn

class SimpleNet(nn.Module):
    def __init__(self):
        super(SimpleNet, self).__init__()
        self.fc1 = nn.Linear(4, 3)   # Input layer
        self.relu = nn.ReLU()        # Activation function
        self.fc2 = nn.Linear(3, 1)   # Output layer

    def forward(self, x):
        x = self.fc1(x)
        x = self.relu(x)
        x = self.fc2(x)
        return x

model = SimpleNet()
print(model)

# Input sample
input_data = torch.tensor([[1.0, 2.0, 3.0, 4.0]])
output = model(input_data)
print("Output:", output)
All layer weights and biases can be accessed and updated through named_parameters():

for name, param in model.named_parameters():
    print(name, param.shape)
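Since named_parameters() yields the actual parameter tensors, they can also be modified in place. A minimal sketch (zeroing fc1's bias is just an illustrative choice):

with torch.no_grad():
    model.fc1.bias.fill_(0.0)  # overwrite fc1's bias values in place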
To train the model, you need a loss function and an optimizer:
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
# Dummy target
target = torch.tensor([[1.0]])
# Forward pass
output = model(input_data)
loss = criterion(output, target)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("Loss:", loss.item())
The same pattern scales to deeper networks. Here is a model with two hidden layers and different activation functions:

import torch
import torch.nn as nn

class MultiLayerNet(nn.Module):
    def __init__(self):
        super(MultiLayerNet, self).__init__()
        self.fc1 = nn.Linear(4, 6)
        self.act1 = nn.Tanh()
        self.fc2 = nn.Linear(6, 3)
        self.act2 = nn.ReLU()
        self.fc3 = nn.Linear(3, 1)

    def forward(self, x):
        x = self.act1(self.fc1(x))
        x = self.act2(self.fc2(x))
        x = self.fc3(x)
        return x

model = MultiLayerNet()
input_data = torch.tensor([[1.0, 2.0, 3.0, 4.0]])
output = model(input_data)
print(output)
Models built with nn.Module accept batched input out of the box; each row of the tensor is one sample:

batch_input = torch.tensor([
    [1.0, 2.0, 3.0, 4.0],
    [4.0, 3.0, 2.0, 1.0]
])
batch_output = model(batch_input)
print(batch_output)
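The output has one row per input sample, which you can confirm from its shape:

print(batch_output.shape)  # torch.Size([2, 1]) – two samples, one value each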
Weights can be initialized with a custom function applied recursively to every submodule via model.apply():

def init_weights(m):
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)

model.apply(init_weights)
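A quick check that the initialization took effect – fc1's bias should now be all zeros:

print(model.fc1.bias)  # Parameter containing six zeros (fc1 has 6 output units)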
As a concrete use case, here is a single training step for a regression task, such as predicting a house price from four numeric features:

# Predicting a numeric output (e.g., house price)
import torch.optim as optim

model = MultiLayerNet()
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=0.01)

# One training sample with four numeric features
X = torch.tensor([
    [1200.0, 3.0, 2.0, 1.0]
])
y = torch.tensor([[250000.0]])

# Training step
output = model(X)
loss = criterion(output, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("Loss:", loss.item())
Individual layers can be frozen by turning off gradient tracking on their parameters. Here fc1 is excluded from training, and only the remaining trainable parameters are listed:

for param in model.fc1.parameters():
    param.requires_grad = False

for name, param in model.named_parameters():
    if param.requires_grad:
        print(name, param.shape)
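When some parameters are frozen, a common follow-up is to hand the optimizer only the trainable ones; a minimal sketch:

optimizer = optim.Adam(
    (p for p in model.parameters() if p.requires_grad),
    lr=0.01,
)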
Using nn.Module gives structure and clarity to PyTorch model definitions. It allows you to encapsulate model architecture, reuse components, and integrate easily with optimizers and training loops.
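As one illustration of component reuse, an nn.Module can be embedded inside another module. A minimal sketch (WrappedNet is a hypothetical name for this example):

class WrappedNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = SimpleNet()              # reuse the network defined earlier
        self.scale = nn.Parameter(torch.ones(1))

    def forward(self, x):
        return self.backbone(x) * self.scale     # learnable output scaling

print(WrappedNet()(torch.randn(2, 4)).shape)     # torch.Size([2, 1])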