
By Samuel Black

Implementing AlexNet with PyTorch’s torchvision in Python using the CIFAR-10 Dataset

AlexNet is one of the pioneering architectures in deep learning and a landmark in computer vision. Introduced by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton in their 2012 paper, AlexNet won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) by a wide margin. This blog will guide you through implementing AlexNet using PyTorch’s torchvision library and demonstrate its application to image classification tasks.


Table of Contents

  1. Introduction to AlexNet

  2. Setting Up the Environment

  3. Loading AlexNet from torchvision

  4. Using AlexNet for Feature Extraction

  5. Fine-Tuning AlexNet for Custom Datasets

  6. Training AlexNet on the CIFAR-10 Dataset

  7. Conclusion


1. Introduction to AlexNet

AlexNet was a breakthrough in the use of deep Convolutional Neural Networks (CNNs) for image classification. The architecture consists of convolutional layers, max-pooling layers, and fully connected layers. The convolutional layers learn spatial hierarchies of features from the input image, the max-pooling layers reduce the dimensionality of the feature maps and make the network more robust to small translations of the input, and the fully connected layers perform the final classification. Here are a few points highlighting the importance of AlexNet (a short sketch of the layer-size arithmetic follows the list):


  • Breakthrough in Deep Learning: AlexNet's victory in the 2012 ImageNet competition demonstrated the power of deep convolutional neural networks (CNNs) in image classification, sparking widespread adoption of deep learning in computer vision.


  • Popularization of ReLU Activation: AlexNet popularized the use of the ReLU (Rectified Linear Unit) activation function, which helped mitigate the vanishing gradient problem and allowed deeper networks to be trained more effectively.


  • GPU Utilization: AlexNet was one of the first deep learning models to utilize GPUs extensively, demonstrating the benefits of parallel processing for training large neural networks, which led to faster and more efficient training.


  • Foundational Architecture: The architectural principles of AlexNet, such as stacked convolutional layers and max-pooling, laid the groundwork for many subsequent deep learning models like VGG, ResNet, and Inception.


  • Catalyst for Advancements: AlexNet's success brought deep learning into the mainstream, leading to significant advancements in fields like autonomous vehicles, facial recognition, medical image analysis, and more.


  • Benchmark for Future Models: AlexNet set a new standard for performance in image recognition tasks, serving as a benchmark against which future models were measured and improved.
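
To make the layer descriptions above concrete, here is a small sketch (an addition, not from the original post) of the standard convolution arithmetic that produces AlexNet's feature-map sizes:

def conv_out(size, kernel, stride, padding):
    # Spatial output size of a convolution or pooling layer:
    # floor((size + 2*padding - kernel) / stride) + 1
    return (size + 2 * padding - kernel) // stride + 1

size = 224                       # AlexNet's expected input resolution
size = conv_out(size, 11, 4, 2)  # Conv1    -> 55
size = conv_out(size, 3, 2, 0)   # MaxPool1 -> 27
size = conv_out(size, 5, 1, 2)   # Conv2    -> 27
size = conv_out(size, 3, 2, 0)   # MaxPool2 -> 13
size = conv_out(size, 3, 1, 1)   # Conv3    -> 13
size = conv_out(size, 3, 1, 1)   # Conv4    -> 13
size = conv_out(size, 3, 1, 1)   # Conv5    -> 13
size = conv_out(size, 3, 2, 0)   # MaxPool3 -> 6
print(size)  # 6, so the flattened feature vector is 256 * 6 * 6 = 9216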


2. Setting Up the Environment

To get started with AlexNet in PyTorch, ensure that you have the following dependencies installed:


  • PyTorch: The core library for deep learning in Python.

  • torchvision: A package containing popular datasets, model architectures, and image transformations.


You can install these packages using pip:

pip install torch torchvision
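
To quickly confirm the installation (and whether a GPU is visible to PyTorch), a short check like this works:

import torch
import torchvision

print(torch.__version__, torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())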

3. Loading AlexNet from torchvision

torchvision provides an AlexNet model pre-trained on the ImageNet dataset. You can easily load and use it as follows:


import torch
import torchvision.models as models

# Load AlexNet with pre-trained ImageNet weights (newer torchvision API;
# older releases used models.alexnet(pretrained=True))
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# Display the model's architecture
print(alexnet)


Output for the code above:

AlexNet(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(11, 11), stride=(4, 4), padding=(2, 2))
    (1): ReLU(inplace=True)
    (2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
    (3): Conv2d(64, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
    (4): ReLU(inplace=True)
    (5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
    (6): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (7): ReLU(inplace=True)
    (8): Conv2d(384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (9): ReLU(inplace=True)
    (10): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace=True)
    (12): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(6, 6))
  (classifier): Sequential(
    (0): Dropout(p=0.5, inplace=False)
    (1): Linear(in_features=9216, out_features=4096, bias=True)
    (2): ReLU(inplace=True)
    (3): Dropout(p=0.5, inplace=False)
    (4): Linear(in_features=4096, out_features=4096, bias=True)
    (5): ReLU(inplace=True)
    (6): Linear(in_features=4096, out_features=1000, bias=True)
  )
)

The weights=models.AlexNet_Weights.IMAGENET1K_V1 argument (the equivalent of pretrained=True in older torchvision releases) loads weights pre-trained on ImageNet, which comprises 1.2 million images across 1,000 classes.
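
As a quick sanity check, you can run the pre-trained model on a single image. The sketch below is an addition to the original post; the file name "dog.jpg" is a placeholder for any local RGB image:

from PIL import Image
import torchvision.transforms as transforms

# Standard ImageNet preprocessing used when AlexNet was trained
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("dog.jpg").convert("RGB")   # placeholder image path
batch = preprocess(img).unsqueeze(0)         # add batch dimension: (1, 3, 224, 224)

alexnet.eval()                               # disable dropout for inference
with torch.no_grad():
    logits = alexnet(batch)
    probs = torch.softmax(logits, dim=1)
print(probs.argmax(dim=1))                   # index of the predicted ImageNet class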


4. Using AlexNet for Feature Extraction

AlexNet’s convolutional layers can serve as a powerful feature extractor. This is especially useful when applying AlexNet to a different, often smaller, dataset on which training a model from scratch would not be feasible.


from torch import nn

# Freeze the convolutional layers
for param in alexnet.features.parameters():
    param.requires_grad = False

# Replace the classifier with a custom classifier
alexnet.classifier = nn.Sequential(
    nn.Dropout(),
    nn.Linear(256 * 6 * 6, 4096),  # 9216 inputs: 256 channels x 6 x 6 feature map
    nn.ReLU(inplace=True),
    nn.Dropout(),
    nn.Linear(4096, 4096),
    nn.ReLU(inplace=True),
    nn.Linear(4096, 10)  # assuming 10 classes in the new dataset
)

# Move the model to GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
alexnet.to(device)


In this example, the convolutional base of AlexNet is frozen, and a new classifier is attached to adapt the model for a new task, such as classifying images into 10 categories.
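
A quick way to verify this setup (a sketch added here, not in the original post) is to push a dummy batch through the frozen convolutional base and check that the flattened feature size matches the classifier's input:

# Dummy batch of one 224x224 RGB image
dummy = torch.randn(1, 3, 224, 224).to(device)
with torch.no_grad():
    feats = alexnet.features(dummy)   # -> (1, 256, 6, 6)
    feats = alexnet.avgpool(feats)    # adaptive pooling keeps (6, 6)
print(feats.flatten(1).shape)         # torch.Size([1, 9216]) = 256 * 6 * 6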


5. Fine-Tuning AlexNet for Custom Datasets

Fine-tuning allows the model to adjust its pre-trained weights to better fit a new dataset. Here, only the last few convolutional layers (together with the new classifier) are trained, while the rest of the convolutional base remains frozen.


# Unfreeze the last few layers of the convolutional base
for param in alexnet.features[-3:].parameters():
    param.requires_grad = True

# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
# Pass only the trainable parameters to the optimizer
optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, alexnet.parameters()), lr=1e-4
)


This approach enables the model to leverage its learned features while fine-tuning them to better suit the specific characteristics of the new dataset.
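
To confirm which weights will actually be updated after the partial unfreeze, you can count the trainable parameters (a small helper added here for illustration):

trainable = sum(p.numel() for p in alexnet.parameters() if p.requires_grad)
total = sum(p.numel() for p in alexnet.parameters())
print(f"Trainable parameters: {trainable:,} of {total:,}")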


6. Training AlexNet on the CIFAR-10 Dataset

Let’s apply AlexNet to the CIFAR-10 dataset, a common benchmark in image classification tasks. CIFAR-10 images are 32x32 pixels, so they need to be resized to 224x224 pixels to fit AlexNet’s input size.


import torchvision.transforms as transforms
from torchvision.datasets import CIFAR10
from torch.utils.data import DataLoader

# Define transformations: upscale to 224x224 and normalize.
# (0.5 mean/std is a simple choice; the ImageNet statistics the model was
# pre-trained with are often preferred when fine-tuning.)
transform = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])

# Load CIFAR-10 dataset
trainset = CIFAR10(root='./data', train=True, download=True, transform=transform)
trainloader = DataLoader(trainset, batch_size=32, shuffle=True, num_workers=2)

testset = CIFAR10(root='./data', train=False, download=True, transform=transform)
testloader = DataLoader(testset, batch_size=32, shuffle=False, num_workers=2)

# Training loop
num_epochs = 10
alexnet.train()  # enable dropout during training
for epoch in range(num_epochs):
    running_loss = 0.0
    for inputs, labels in trainloader:
        inputs, labels = inputs.to(device), labels.to(device)

        optimizer.zero_grad()
        outputs = alexnet(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
    print(f"Epoch [{epoch+1}/{num_epochs}], Loss: {running_loss/len(trainloader):.4f}")

print("Finished Training")


Output for the code above:

Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data/cifar-10-python.tar.gz
100%|██████████| 170498071/170498071 [00:13<00:00, 12782725.20it/s]
Extracting ./data/cifar-10-python.tar.gz to ./data
Files already downloaded and verified

Epoch [1/10], Loss: 0.6980
Epoch [2/10], Loss: 0.4352
Epoch [3/10], Loss: 0.3243
Epoch [4/10], Loss: 0.2329
Epoch [5/10], Loss: 0.1756
Epoch [6/10], Loss: 0.1298
Epoch [7/10], Loss: 0.1060
Epoch [8/10], Loss: 0.0881
Epoch [9/10], Loss: 0.0747
Epoch [10/10], Loss: 0.0701
Finished Training

This script fine-tunes AlexNet on the CIFAR-10 dataset, allowing the pre-trained model to adapt to this new classification task.
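
Note that the test loader defined above is not used during training. A natural follow-up (sketched here as an addition to the original script) is to measure accuracy on the held-out test set:

alexnet.eval()  # disable dropout for evaluation
correct, total = 0, 0
with torch.no_grad():
    for inputs, labels in testloader:
        inputs, labels = inputs.to(device), labels.to(device)
        preds = alexnet(inputs).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)
print(f"Test accuracy: {100 * correct / total:.2f}%")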


7. Conclusion

In this blog, we explored how to implement AlexNet using PyTorch’s torchvision library. We covered how to load the pre-trained AlexNet model, use it for feature extraction, fine-tune it for specific tasks, and apply it to the CIFAR-10 dataset. AlexNet, despite being an older architecture, remains a powerful tool for image classification and serves as a strong foundation for learning and experimenting with deep learning models.


