Deep learning (DL) models have recently achieved exceptional performance on a number of challenging tasks. Unfortunately, given the black-box nature of these models, it is difficult to understand what the network is "seeing" and how it arrives at its decisions. Building upon our previous post, which discussed how to train a DenseNet for classification, here we discuss how to apply various visualization techniques that let us interrogate the network. The code is designed as drop-in functionality for any network trained using the previous post, hopefully easing the burden of implementation.