Special Annexure 3: PyTorch Coding Challenges (Without Solutions)
Abstract:
This annexure collects 25 PyTorch coding challenges suited to assignments, examinations, classroom practice, lab sessions, and interview preparation.
The challenges range from basic to expert level. Full solutions are intentionally omitted to encourage problem-solving and self-practice; each challenge is instead followed by a short starter sketch of the relevant APIs, not a complete answer.
Practice-Oriented Problems for Learners, Students & Professionals
Part A: Beginner-Level Challenges
Challenge 1: Create Tensors of Different Types
Create:
- A float tensor
- A long integer tensor
- A boolean tensor
- A 2D tensor of size (3, 4)
Print their shapes and data types.
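A minimal starter sketch of the constructors involved (the specific values and the use of torch.zeros are illustrative choices, not requirements):

```python
import torch

f = torch.tensor([1.0, 2.0, 3.0])              # float32 by default
i = torch.tensor([1, 2, 3], dtype=torch.long)  # 64-bit integer tensor
b = torch.tensor([True, False, True])          # boolean tensor
m = torch.zeros(3, 4)                          # 2D tensor of size (3, 4)

for t in (f, i, b, m):
    print(t.shape, t.dtype)
```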
Challenge 2: Basic Tensor Arithmetic
Given two tensors of size (5,5), perform:
- Addition
- Subtraction
- Element-wise multiplication
- Element-wise division
Print the results.
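As a hint, each operation is a single operator on same-shaped tensors:

```python
import torch

a, b = torch.randn(5, 5), torch.randn(5, 5)
print(a + b)   # addition
print(a - b)   # subtraction
print(a * b)   # element-wise multiplication
print(a / b)   # element-wise division
```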
Challenge 3: Reshaping and Flattening
Create a tensor of size (6,6).
Perform:
- Reshape it to (3, 12)
- Flatten it with .flatten()
- View it as a 1D tensor with .view(-1)
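A sketch of the three calls (torch.arange is just one way to fill the tensor):

```python
import torch

t = torch.arange(36.0).reshape(6, 6)  # a (6, 6) tensor
print(t.reshape(3, 12).shape)         # torch.Size([3, 12])
print(t.flatten().shape)              # torch.Size([36])
print(t.view(-1).shape)               # a 1D view, also torch.Size([36])
```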
Challenge 4: Working with GPU
Write code that:
- Checks whether a GPU is available
- Moves a tensor to the GPU
- Squares it element-wise
- Moves the result back to the CPU
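A starter sketch; the CPU fallback is an added assumption so the code also runs on machines without a GPU:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(3, 3).to(device)  # move the tensor to the GPU if present
y = x ** 2                        # square element-wise on that device
y = y.cpu()                       # move the result back to the CPU
print(y.device)
```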
Challenge 5: Autograd Basics
Let y = x² + 3x.
Compute dy/dx at x = 5 using autograd.
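A sketch of the autograd calls; since dy/dx = 2x + 3, the printed gradient should be 13:

```python
import torch

x = torch.tensor(5.0, requires_grad=True)
y = x ** 2 + 3 * x
y.backward()   # populates x.grad with dy/dx
print(x.grad)  # tensor(13.)
```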
Part B: Intermediate-Level Challenges
Challenge 6: Build a Simple Neural Network Class
Create an MLP with:
- Input size: 8
- Hidden layer: 16 units
- Output size: 1
Perform a forward pass on a dummy batch of size (4,8).
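One possible skeleton; the ReLU hidden activation is an assumption, since the challenge does not fix one:

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(8, 16),  # input 8 -> hidden 16
            nn.ReLU(),
            nn.Linear(16, 1),  # hidden 16 -> output 1
        )

    def forward(self, x):
        return self.net(x)

out = MLP()(torch.randn(4, 8))  # dummy batch of size (4, 8)
print(out.shape)                # torch.Size([4, 1])
```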
Challenge 7: Training Loop Without Using High-Level APIs
Implement a manual training loop for linear regression.
Include:
- Zero gradients
- Forward pass
- Loss computation
- Backpropagation
- Optimization step
Run it for 100 epochs.
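A sketch of the loop structure; the synthetic data (y = 2x + 1 plus noise) and the learning rate of 0.1 are assumptions:

```python
import torch

X = torch.randn(100, 1)
Y = 2 * X + 1 + 0.1 * torch.randn(100, 1)

w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
lr = 0.1

for epoch in range(100):
    if w.grad is not None:           # zero gradients
        w.grad.zero_()
        b.grad.zero_()
    pred = X * w + b                 # forward pass
    loss = ((pred - Y) ** 2).mean()  # loss computation (MSE)
    loss.backward()                  # backpropagation
    with torch.no_grad():            # optimization step (manual SGD)
        w -= lr * w.grad
        b -= lr * b.grad
```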
Challenge 8: Custom Dataset
Create a custom PyTorch Dataset that returns pairs:
(x, y) where y = x³ – x + 2 for x ∈ [0, 999].
Wrap it in a DataLoader.
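A sketch assuming the index itself is cast to a float input; the class name CubicDataset is invented for illustration:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class CubicDataset(Dataset):
    """(x, y) pairs with y = x**3 - x + 2 for x in [0, 999]."""
    def __len__(self):
        return 1000

    def __getitem__(self, idx):
        x = torch.tensor(float(idx))
        return x, x ** 3 - x + 2

loader = DataLoader(CubicDataset(), batch_size=32, shuffle=True)
xb, yb = next(iter(loader))
print(xb.shape, yb.shape)  # torch.Size([32]) torch.Size([32])
```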
Challenge 9: Custom Loss Function
Write a custom loss function:
Loss = Mean(|pred – target|³)
Use nn.Module.
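A sketch; the name CubedL1Loss is invented here:

```python
import torch
import torch.nn as nn

class CubedL1Loss(nn.Module):
    """Loss = mean(|pred - target|^3)."""
    def forward(self, pred, target):
        return (pred - target).abs().pow(3).mean()

loss_fn = CubedL1Loss()
print(loss_fn(torch.randn(10), torch.randn(10)))
```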
Challenge 10: Save & Load Model
Implement:
- Save model weights
- Save optimizer state
- Load both back successfully
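A sketch that bundles both states into one checkpoint; the file name ckpt.pt and the Linear/SGD pairing are placeholders:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

# Save model weights and optimizer state together.
torch.save({"model": model.state_dict(), "optim": opt.state_dict()}, "ckpt.pt")

# Load both back.
ckpt = torch.load("ckpt.pt")
model.load_state_dict(ckpt["model"])
opt.load_state_dict(ckpt["optim"])
```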
Part C: Advanced-Level Challenges
Challenge 11: Implement a CNN
Build a CNN with:
- Two convolution layers
- ReLU activation
- MaxPooling
- One fully connected layer
Run a forward pass for a batch of size (16, 3, 32, 32).
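A possible skeleton; the channel counts (16, 32) and the 10-class output are assumptions. Note that two 2x2 poolings take a 32x32 input down to 8x8, which fixes the fully connected layer's input size:

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 16x16 -> 8x8
        )
        self.fc = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

print(SmallCNN()(torch.randn(16, 3, 32, 32)).shape)  # torch.Size([16, 10])
```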
Challenge 12: Custom Activation Function
Write a custom activation function:
f(x) = x * sigmoid(x)
Implement using autograd so that backward works correctly.
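This function is the SiLU/Swish activation. One way to make backward work is a torch.autograd.Function with an explicit derivative, d/dx [x * sigmoid(x)] = s * (1 + x * (1 - s)) where s = sigmoid(x):

```python
import torch

class Swish(torch.autograd.Function):
    """f(x) = x * sigmoid(x) with a hand-written backward pass."""

    @staticmethod
    def forward(ctx, x):
        s = torch.sigmoid(x)
        ctx.save_for_backward(x, s)
        return x * s

    @staticmethod
    def backward(ctx, grad_out):
        x, s = ctx.saved_tensors
        # d/dx [x * sigmoid(x)] = s * (1 + x * (1 - s))
        return grad_out * s * (1 + x * (1 - s))

x = torch.randn(5, requires_grad=True)
Swish.apply(x).sum().backward()
print(x.grad)
```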
Challenge 13: Sequence Model (RNN / LSTM / GRU)
Create a single-layer LSTM:
- Input size: 10
- Hidden size: 32
Run a forward pass on input of shape (seq_len=15, batch=4, features=10).
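A sketch; nn.LSTM defaults to (seq_len, batch, features) ordering, which matches the requested input shape:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=32)  # single layer by default
x = torch.randn(15, 4, 10)                     # (seq_len, batch, features)
out, (h, c) = lstm(x)
print(out.shape)  # torch.Size([15, 4, 32])
print(h.shape)    # torch.Size([1, 4, 32])
```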
Challenge 14: Transformer Encoder Block
Build a minimal Transformer block that includes:
- Multihead attention
- Add & Norm
- Feedforward layer
- Second Add & Norm
Perform a forward pass on dummy input.
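A possible skeleton using nn.MultiheadAttention; d_model = 64, 4 heads, and a ReLU feedforward are assumptions:

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, d_model=64, nhead=4, d_ff=256):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                nn.Linear(d_ff, d_model))
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        a, _ = self.attn(x, x, x)          # multihead self-attention
        x = self.norm1(x + a)              # first Add & Norm
        return self.norm2(x + self.ff(x))  # feedforward + second Add & Norm

print(EncoderBlock()(torch.randn(4, 15, 64)).shape)  # (batch, seq, d_model)
```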
Challenge 15: Gradient Clipping
Implement a training loop for any model where gradients are clipped at max_norm = 2.0.
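A sketch around a throwaway linear model; the key line is clip_grad_norm_ between backward() and step():

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(100):
    x, y = torch.randn(16, 8), torch.randn(16, 1)
    opt.zero_grad()
    nn.functional.mse_loss(model(x), y).backward()
    # Clip the global gradient norm before the optimizer step.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=2.0)
    opt.step()
```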
Part D: Expert-Level Challenges
Challenge 16: Mixed Precision Training
Use PyTorch AMP (autocast + GradScaler) in a training loop with any existing model.
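A sketch of the AMP pattern; this variant assumes a CUDA device, and the linear model and random data are placeholders:

```python
import torch
import torch.nn as nn

device = "cuda"  # AMP as sketched here targets a GPU
model = nn.Linear(128, 10).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()

for step in range(100):
    x = torch.randn(32, 128, device=device)
    y = torch.randint(0, 10, (32,), device=device)
    opt.zero_grad()
    with torch.cuda.amp.autocast():    # run the forward pass in mixed precision
        loss = nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()      # scale the loss to avoid underflow
    scaler.step(opt)                   # unscales gradients, then steps
    scaler.update()
```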
Challenge 17: Implement Early Stopping
Write a training loop that:
- Monitors validation loss
- Stops when there has been no improvement for 5 epochs
- Saves the best model
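A sketch on a toy regression problem; the file name best.pt and the fixed train/validation tensors are placeholders:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
Xtr, Ytr = torch.randn(64, 4), torch.randn(64, 1)
Xva, Yva = torch.randn(32, 4), torch.randn(32, 1)

best_val, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(200):
    opt.zero_grad()
    nn.functional.mse_loss(model(Xtr), Ytr).backward()
    opt.step()
    with torch.no_grad():  # monitor validation loss
        val = nn.functional.mse_loss(model(Xva), Yva).item()
    if val < best_val:
        best_val, bad_epochs = val, 0
        torch.save(model.state_dict(), "best.pt")  # save the best model
    else:
        bad_epochs += 1
        if bad_epochs >= patience:  # no improvement for 5 epochs
            break
```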
Challenge 18: Implement a Learning Rate Scheduler
Use a scheduler such as:
- StepLR
- ReduceLROnPlateau
Plot the learning rate changes across epochs.
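A sketch with StepLR (step_size=10 and gamma=0.5 are assumptions); the plot only tracks the learning rate, so the training body is elided:

```python
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

model = nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.5)

lrs = []
for epoch in range(50):
    # ... one epoch of training would go here ...
    opt.step()    # optimizer step comes before the scheduler step
    sched.step()  # halve the LR every 10 epochs
    lrs.append(sched.get_last_lr()[0])

plt.plot(lrs)
plt.xlabel("epoch")
plt.ylabel("learning rate")
plt.show()
```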
Challenge 19: Distributed Data Parallel (DDP)
Write code to:
- Initialize DDP
- Wrap a simple model
- Train on multiple GPUs
(Skeleton code acceptable)
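A skeleton of the DDP pattern, assuming a launch via torchrun so each process can read its rank from the environment:

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Launch with: torchrun --nproc_per_node=<num_gpus> this_script.py
    dist.init_process_group("nccl")
    rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(rank)

    model = DDP(nn.Linear(8, 1).to(rank), device_ids=[rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    for step in range(100):
        x = torch.randn(16, 8, device=rank)
        y = torch.randn(16, 1, device=rank)
        opt.zero_grad()
        nn.functional.mse_loss(model(x), y).backward()  # grads sync here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```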
Challenge 20: Build and Train a Full Image Classifier
Build a full training pipeline:
- Use CIFAR-10
- Use a CNN or ResNet
- Include a DataLoader and augmentations
- Training and validation loops
- Accuracy computation
- Model saving
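A compact end-to-end skeleton; the small nn.Sequential CNN, Adam, the batch sizes, and 10 epochs are all assumptions, and the single flip augmentation stands in for a fuller pipeline:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

tf = transforms.Compose([transforms.RandomHorizontalFlip(), transforms.ToTensor()])
train = DataLoader(datasets.CIFAR10("data", train=True, download=True, transform=tf),
                   batch_size=128, shuffle=True)
val = DataLoader(datasets.CIFAR10("data", train=False, download=True,
                                  transform=transforms.ToTensor()), batch_size=256)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(  # a small CNN; a ResNet would also satisfy the brief
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(64 * 8 * 8, 10),
).to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    model.train()
    for x, y in train:                    # training loop
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()
    model.eval()
    correct = total = 0
    with torch.no_grad():                 # validation loop
        for x, y in val:
            pred = model(x.to(device)).argmax(1).cpu()
            correct += (pred == y).sum().item()
            total += y.numel()
    print(f"epoch {epoch}: val accuracy {correct / total:.3f}")

torch.save(model.state_dict(), "cifar_cnn.pt")  # model saving
```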
Part E: Bonus Innovation Challenges
Challenge 21: Implement a GAN (Generator + Discriminator)
Train a simple GAN on MNIST.
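A minimal fully connected GAN sketch; the MLP generator and discriminator, the latent size of 64, and 5 epochs are assumptions (convolutional variants are common too):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                  nn.Linear(256, 784), nn.Tanh()).to(device)
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1)).to(device)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

tf = transforms.Compose([transforms.ToTensor(),
                         transforms.Normalize((0.5,), (0.5,))])  # scale to [-1, 1]
loader = DataLoader(datasets.MNIST("data", train=True, download=True, transform=tf),
                    batch_size=128, shuffle=True)

for epoch in range(5):
    for real, _ in loader:
        real = real.view(real.size(0), -1).to(device)
        n = real.size(0)
        fake = G(torch.randn(n, 64, device=device))
        # Discriminator: push real toward 1, fake toward 0.
        opt_d.zero_grad()
        d_loss = bce(D(real), torch.ones(n, 1, device=device)) + \
                 bce(D(fake.detach()), torch.zeros(n, 1, device=device))
        d_loss.backward()
        opt_d.step()
        # Generator: fool the discriminator (fake toward 1).
        opt_g.zero_grad()
        bce(D(fake), torch.ones(n, 1, device=device)).backward()
        opt_g.step()
```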
Challenge 22: Implement a Variational Autoencoder (VAE)
Train on any dataset of your choice.
Compute reconstruction loss and KL divergence.
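A sketch of the loss only, assuming a decoder that outputs values in [0, 1] (hence BCE for reconstruction) and a Gaussian encoder that returns mu and logvar:

```python
import torch
import torch.nn.functional as F

def vae_loss(recon_x, x, mu, logvar):
    # Reconstruction term: how well the decoder reproduces the input.
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    # KL(N(mu, sigma^2) || N(0, 1)) in closed form.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```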
Challenge 23: Implement Attention From Scratch
Implement attention:
Attention(Q, K, V) = softmax(QKáµ€ / √dâ‚–) V
No multihead layers — just raw tensor ops.
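A direct translation of the formula; the (batch, seq, d_k) shapes in the demo are assumptions:

```python
import math
import torch

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V using raw tensor ops."""
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)
    return torch.softmax(scores, dim=-1) @ V

Q, K, V = (torch.randn(4, 15, 64) for _ in range(3))
print(attention(Q, K, V).shape)  # torch.Size([4, 15, 64])
```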
Challenge 24: Build a Custom Dataloader with Image Augmentation
Use:
- Random crop
- Random flip
- Normalization
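A sketch using torchvision transforms on CIFAR-10; the crop padding and the normalization statistics are commonly used values, not requirements:

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_tf = transforms.Compose([
    transforms.RandomCrop(32, padding=4),            # random crop
    transforms.RandomHorizontalFlip(),               # random flip
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465),   # normalization
                         (0.2470, 0.2435, 0.2616)),
])

train_set = datasets.CIFAR10("data", train=True, download=True, transform=train_tf)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=2)
```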
Challenge 25: Create Your Own Optimizer
Implement a custom optimizer similar to SGD with momentum:
v = βv + (1 – β) * grad
param = param – lr * v
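A sketch subclassing torch.optim.Optimizer; the class name EMAMomentumSGD is invented, and the update follows the rule above (which differs from PyTorch's built-in SGD momentum by the (1 – β) factor):

```python
import torch
from torch.optim import Optimizer

class EMAMomentumSGD(Optimizer):
    """v = beta * v + (1 - beta) * grad;  param = param - lr * v."""

    def __init__(self, params, lr=0.01, beta=0.9):
        super().__init__(params, dict(lr=lr, beta=beta))

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                if "v" not in state:  # lazily create the momentum buffer
                    state["v"] = torch.zeros_like(p)
                v = state["v"]
                v.mul_(group["beta"]).add_(p.grad, alpha=1 - group["beta"])
                p.add_(v, alpha=-group["lr"])  # param -= lr * v
```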