Special Annexure 2: Important PyTorch Coding Exercises with Solutions

Abstract:

This annexure presents a comprehensive, structured collection of important PyTorch coding exercises with solutions, directly useful for learners preparing for interviews, practical exams, or professional development.



This annexure provides a curated collection of hands-on PyTorch coding exercises, ranging from beginner to expert level. Each exercise includes a problem statement, step-by-step guidance, and a complete solution, helping learners strengthen their practical understanding of PyTorch.


Part A: Beginner-Level Coding Exercises


Exercise 1: Create a Tensor and Perform Basic Operations

Problem

Create two tensors a and b of size (3,3).
Perform addition, subtraction, element-wise multiplication, and matrix multiplication.

Solution

import torch

a = torch.randn(3, 3)
b = torch.randn(3, 3)

add_result = a + b           # element-wise addition
sub_result = a - b           # element-wise subtraction
mul_result = a * b           # element-wise (Hadamard) product
matmul_result = a @ b        # matrix multiplication

print(add_result, sub_result, mul_result, matmul_result, sep="\n\n")

Exercise 2: Reshape a Tensor

Problem

Create a tensor of size (4,4) and reshape it to (2,8).

Solution

x = torch.arange(16)         # values 0..15
x = x.reshape(4, 4)

reshaped = x.reshape(2, 8)   # same 16 elements, new shape
print(reshaped)
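
A related idiom worth knowing: reshape can infer one dimension when you pass -1, and view performs the same operation on contiguous tensors. A minimal sketch, reusing x from above:

auto = x.reshape(2, -1)      # -1 lets PyTorch infer the remaining dimension (8)

# view is equivalent for contiguous tensors; reshape also handles
# non-contiguous tensors by copying when needed
viewed = x.view(2, 8)

print(auto.shape, viewed.shape)   # torch.Size([2, 8]) twice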

Exercise 3: Use GPU if Available

Problem

Move a tensor to the GPU (if available), square it, and move it back to the CPU.

Solution

device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.tensor([1., 2., 3.], device=device)
y = x ** 2
y_cpu = y.cpu()

print(y_cpu)
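
Tensors created on the CPU are usually moved with .to(device), which is a no-op when the tensor is already there; a small sketch:

z = torch.tensor([4., 5., 6.])   # created on the CPU by default
z = z.to(device)                 # moves to the GPU only if device == "cuda"
print(z.device)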

Exercise 4: Calculate Gradient Using Autograd

Problem

Given y = x^3 + 2x, compute dy/dx at x=2.

Solution

x = torch.tensor(2.0, requires_grad=True)
y = x**3 + 2*x
y.backward()
print(x.grad)     # dy/dx = 3x^2 + 2 = 14 at x = 2
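
torch.autograd.grad offers a functional alternative that returns the gradient directly instead of accumulating it into .grad; a minimal sketch of the same computation:

x = torch.tensor(2.0, requires_grad=True)
y = x**3 + 2*x

# returns a tuple with one gradient per input
(grad,) = torch.autograd.grad(y, x)
print(grad)    # tensor(14.)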

Part B: Intermediate-Level Coding Exercises


Exercise 5: Build a Simple Neural Network (MLP)

Problem

Build a neural network with:

  • Input size: 10

  • Hidden size: 20

  • Output size: 3

And perform one forward pass.

Solution

import torch.nn as nn

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 20)
        self.fc2 = nn.Linear(20, 3)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        return self.fc2(x)

model = MLP()
x = torch.randn(5, 10)     # batch size = 5
output = model(x)
print(output)
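
For a plain stack of layers like this, nn.Sequential expresses the same architecture without a custom class; an equivalent sketch:

model_seq = nn.Sequential(
    nn.Linear(10, 20),
    nn.ReLU(),
    nn.Linear(20, 3)
)
print(model_seq(torch.randn(5, 10)).shape)   # torch.Size([5, 3])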

Exercise 6: Compute Loss and Backpropagate

Problem

Use MSELoss with predictions and targets to compute gradients.

Solution

pred = torch.tensor([[3.0], [4.0]], requires_grad=True)
target = torch.tensor([[1.0], [2.0]])

criterion = nn.MSELoss()
loss = criterion(pred, target)

loss.backward()
print(pred.grad)
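
With the default mean reduction, MSELoss is (1/N) * sum((pred - target)^2), so each gradient entry is 2*(pred - target)/N. Both residuals here equal 2 and N = 2, so pred.grad should be [[2.], [2.]]; a quick check:

manual_grad = 2 * (pred.detach() - target) / pred.numel()
print(torch.allclose(pred.grad, manual_grad))   # True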

Exercise 7: Training Loop (Manual Implementation)

Problem

Write a minimal training loop for a linear regression model.

Solution

# Data
X = torch.randn(100, 1)
y = 3*X + 2 + 0.1*torch.randn(100, 1)

model = nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
criterion = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    pred = model(X)
    loss = criterion(pred, y)
    loss.backward()
    optimizer.step()

print("Learned parameters:", model.weight.item(), model.bias.item())

Part C: Advanced-Level Coding Exercises


Exercise 8: Custom Dataset and DataLoader

Problem

Create a custom dataset loading (x, y) pairs where:
y = 2x + 1 for x in range(0, 100).

Solution

from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    def __init__(self):
        self.x = torch.arange(0, 100, dtype=torch.float32).unsqueeze(1)
        self.y = 2 * self.x + 1

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]

dataset = MyDataset()
loader = DataLoader(dataset, batch_size=16, shuffle=True)

for batch_x, batch_y in loader:
    print(batch_x[:3], batch_y[:3])
    break
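
These batches plug directly into a training loop like the one in Exercise 7; a minimal per-batch sketch, assuming model, optimizer, and criterion are set up as they are there (with matching shapes):

for epoch in range(5):
    for batch_x, batch_y in loader:
        optimizer.zero_grad()
        loss = criterion(model(batch_x), batch_y)
        loss.backward()
        optimizer.step()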

Exercise 9: Custom Loss Function

Problem

Write a custom L1 + L2 hybrid loss function.

Solution

class HybridLoss(nn.Module):
    def __init__(self, alpha=0.5):
        super().__init__()
        self.alpha = alpha

    def forward(self, pred, target):
        l1 = torch.mean(torch.abs(pred - target))
        l2 = torch.mean((pred - target)**2)
        return self.alpha*l1 + (1-self.alpha)*l2

loss_fn = HybridLoss()
print(loss_fn(torch.randn(3,1), torch.randn(3,1)))
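
Since this loss keeps no learned state, the functional API expresses it more compactly; an equivalent sketch:

import torch.nn.functional as F

def hybrid_loss(pred, target, alpha=0.5):
    # both functional losses default to mean reduction
    return alpha * F.l1_loss(pred, target) + (1 - alpha) * F.mse_loss(pred, target)

print(hybrid_loss(torch.randn(3, 1), torch.randn(3, 1)))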

Exercise 10: Save and Load Model Checkpoints

Problem

Save model + optimizer state and load them again.

Solution

# Saving
torch.save({
    'epoch': 10,
    'model_state': model.state_dict(),
    'optim_state': optimizer.state_dict()
}, "checkpoint.pth")

# Loading
checkpoint = torch.load("checkpoint.pth")
model.load_state_dict(checkpoint['model_state'])
optimizer.load_state_dict(checkpoint['optim_state'])

print("Loaded checkpoint at epoch:", checkpoint['epoch'])

Part D: Expert-Level Coding Exercises


Exercise 11: Implement a CNN for CIFAR-10 (Minimal Example)

Solution

class SimpleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.fc = nn.Linear(32*8*8, 10)

    def forward(self, x):
        x = torch.relu(self.conv1(x))
        x = torch.max_pool2d(x, 2)
        x = torch.relu(self.conv2(x))
        x = torch.max_pool2d(x, 2)
        x = x.view(x.size(0), -1)
        return self.fc(x)

Exercise 12: Implement a Basic Transformer Encoder Block

Solution

from torch.nn import MultiheadAttention, LayerNorm

class TransformerBlock(nn.Module):
    def __init__(self, embed_dim=64, num_heads=8):
        super().__init__()
        self.attn = MultiheadAttention(embed_dim, num_heads)
        self.norm1 = LayerNorm(embed_dim)
        self.ff = nn.Sequential(
            nn.Linear(embed_dim, 128),
            nn.ReLU(),
            nn.Linear(128, embed_dim)
        )
        self.norm2 = LayerNorm(embed_dim)

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + attn_out)
        ff_out = self.ff(x)
        return self.norm2(x + ff_out)
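
By default, MultiheadAttention expects inputs of shape (seq_len, batch, embed_dim); pass batch_first=True to use a batch-first layout instead. A smoke test in the default layout:

block = TransformerBlock()
x = torch.randn(10, 4, 64)    # (seq_len, batch, embed_dim)
print(block(x).shape)         # torch.Size([10, 4, 64])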

Exercise 13: Implement Gradient Clipping

Solution

torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
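
Clipping must happen after loss.backward() (so gradients exist) and before optimizer.step() (so the clipped values are the ones applied); a placement sketch, assuming the model, data, optimizer, and criterion from Exercise 7:

for epoch in range(200):
    optimizer.zero_grad()
    loss = criterion(model(X), y)
    loss.backward()
    # rescale gradients in place so their global L2 norm is at most 1.0
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()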

Part E: Bonus Challenges


Exercise 14: Implement Early Stopping

Solution (Minimal)

best_loss = float("inf")
patience = 3
wait = 0

for epoch in range(20):
    ...   # training step that produces `loss` (ideally a validation loss)
    if loss < best_loss:
        best_loss = loss
        wait = 0
    else:
        wait += 1
        if wait >= patience:
            print("Early stopping triggered")
            break
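
In practice the monitored quantity is usually a validation loss, and the best weights are snapshotted so training can roll back to them; a sketch of that extension, where val_loss stands in for whatever the elided evaluation step produces:

import copy

best_loss = float("inf")
best_state = None
patience = 3
wait = 0

for epoch in range(20):
    ...                      # training step, then compute val_loss
    if val_loss < best_loss:
        best_loss = val_loss
        best_state = copy.deepcopy(model.state_dict())   # snapshot best weights
        wait = 0
    else:
        wait += 1
        if wait >= patience:
            model.load_state_dict(best_state)            # roll back to best
            break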

Exercise 15: Mixed Precision Training with autocast

Solution

from torch.cuda.amp import autocast, GradScaler

scaler = GradScaler()

for data, target in loader:
    data, target = data.to(device), target.to(device)   # autocast helps only with CUDA tensors
    optimizer.zero_grad()
    with autocast():
        output = model(data)
        loss = criterion(output, target)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
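
Note that the torch.cuda.amp entry point is deprecated in recent PyTorch 2.x releases in favor of torch.amp, which takes the device type explicitly; the same pattern under the newer API (a sketch, version-dependent):

from torch import amp

scaler = amp.GradScaler("cuda")

with amp.autocast("cuda"):
    output = model(data)
    loss = criterion(output, target)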

