After reading all three articles in this fantastic introduction to convolutional neural networks, I decided to read all of the papers mentioned in the third article of the series: “The 9 Deep Learning Papers You Need To Know About”
The paper I read today is called: “Visualizing and Understanding Convolutional Networks.”
Yesterday I read the AlexNet paper, “ImageNet Classification with Deep Convolutional Neural Networks,” which this paper builds upon. The authors of the ZFNet paper dive into the inner workings of the convolutional neural network that the AlexNet paper describes, and by understanding those inner workings they were able to tease out several improvements to the model.
Overall, this paper presents several inquiries into the fundamental inner workings of a convolutional neural network. For example, the authors are able to visualize what’s going on inside the filters of a hidden layer. They found that each layer of the convnet makes increasingly higher-level observations about the image. Using the same tool, they were able to visualize the way that different layers change during training. They also experimented with rotating, scaling, and translating the same image to see how the feature maps produced at different layers are affected, and with blocking out parts of the image to see how that changes the activations of hidden layers and the output probabilities. This paper is a good read for understanding and visualizing the way that higher layers in a convolutional neural network establish correspondence between specific object parts detected in the layers below.
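The occlusion experiment is simple enough to sketch in a few lines. Below is a minimal, hypothetical version of the idea: slide a gray square across the image and record the model’s score for the true class at each position, producing a sensitivity heatmap. The `predict` callable here is a toy stand-in I made up for illustration, not the paper’s actual trained convnet.

```python
import numpy as np

def occlusion_map(image, predict, patch=16, stride=8, fill=0.5):
    """Slide a gray patch over the image and record the model's score
    at each position (the occlusion experiment from the ZFNet paper).
    `predict` is any callable mapping an image to a scalar score."""
    h, w = image.shape[:2]
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            occluded = image.copy()
            y, x = i * stride, j * stride
            occluded[y:y + patch, x:x + patch] = fill  # gray square
            heat[i, j] = predict(occluded)
    return heat

# Toy "model" (an assumption for this sketch): score is just the mean
# brightness of the centre region, so occluding the centre of the
# image should lower the score the most.
def toy_predict(img):
    return float(img[24:40, 24:40].mean())

img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0  # bright "object" in the centre
heat = occlusion_map(img, toy_predict)
```

The position where the heatmap dips lowest marks the region the model depends on most; in the paper, these dips line up with the actual object, which is evidence the model localizes the object rather than relying on surrounding context.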