Advanced CNNs and Feature Visualization
Description
In this assignment, I extended my earlier CIFAR‑10 work by replacing a simple CNN with a deeper ResNet‑style architecture that uses residual connections, batch normalization, and dropout. The deeper model reached about 89% test accuracy without overfitting, and the training and validation curves confirmed stable learning and good generalization.

In the second part, I applied activation maximization and feature visualization to AlexNet to study what its convolutional filters learn. Optimizing synthetic input images showed that early layers respond to edges and colors, while deeper layers respond to more complex patterns. I also experimented with adversarial‑style image generation, using gradient ascent to produce inputs that strongly activate specific neurons, or even to make the network classify pure noise as a chosen target class with high confidence. Minimal sketches of these techniques appear below.
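The project code itself isn't shown on this page, so as an illustration, here is a minimal sketch of the kind of residual block the description refers to, written in PyTorch (the framework is an assumption; the actual layer counts, channel widths, and dropout placement in the assignment may differ):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: two 3x3 convs with batch norm, plus a skip connection."""
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        # 1x1 conv on the skip path when the shape changes, so the addition lines up.
        self.shortcut = nn.Sequential()
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels),
            )

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + self.shortcut(x)  # residual connection
        return self.relu(out)
```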
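For the visualization part, activation maximization boils down to gradient ascent on the input image rather than on the weights: freeze the network, then optimize a random image so that one filter's activation grows. A hedged sketch, again assuming PyTorch with the torchvision AlexNet (the layer index, filter index, learning rate, and step count below are placeholder choices, not the assignment's actual settings):

```python
import torch
import torchvision.models as models

# Load a pretrained AlexNet and freeze its weights.
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Hook the conv layer whose filter we want to visualize
# (layer and filter indices here are arbitrary examples).
layer_idx, filter_idx = 3, 10
activations = {}
def hook(module, inp, out):
    activations["value"] = out
model.features[layer_idx].register_forward_hook(hook)

# Start from random noise and ascend the gradient of the filter's mean activation.
img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.1)

for step in range(200):
    optimizer.zero_grad()
    model(img)
    # Negative sign: the optimizer minimizes, so this maximizes the activation.
    loss = -activations["value"][0, filter_idx].mean()
    loss.backward()
    optimizer.step()
```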
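The same loop, pointed at a class logit instead of a single filter, reproduces the adversarial‑style result described above: starting from noise, gradient ascent drives the network's confidence in an arbitrary target class very high. Again only a sketch with placeholder values:

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

target_class = 130  # arbitrary example ImageNet class index
img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.1)

for step in range(200):
    optimizer.zero_grad()
    logits = model(img)
    # Ascend the target logit: minimize its negative.
    loss = -logits[0, target_class]
    loss.backward()
    optimizer.step()

with torch.no_grad():
    confidence = F.softmax(model(img), dim=1)[0, target_class].item()
print(f"Confidence for target class: {confidence:.3f}")
```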
My conclusion was that deeper architectures with skip connections are far more effective for CIFAR‑10, and visualization methods provide valuable insight into how neural networks represent and sometimes misinterpret visual information.