# UnderstandingAndVisualizingCNN

**Repository Path**: changlin_nanshan/UnderstandingAndVisualizingCNN

## Basic Information

- **Project Name**: UnderstandingAndVisualizingCNN
- **Description**: Several approaches for understanding and visualizing Convolutional Networks have been developed in the literature.
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2020-07-20
- **Last Updated**: 2020-12-19

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# UnderstandingAndVisualizingCNN

Several approaches for understanding and visualizing Convolutional Networks have been developed in the literature, partly as a response to the common criticism that the features learned by a Neural Network are not interpretable. In this section we briefly survey some of these approaches and related work on visualizing what ConvNets learn.

- [Visualizing the activations and first-layer weights](#visualizing-the-activations-and-first-layer-weights)
- [Retrieving images that maximally activate a neuron](#retrieving-images-that-maximally-activate-a-neuron)
- [Embedding the codes with t-SNE](#embedding-the-codes-with-t-sne)
- [Occluding parts of the image](#occluding-parts-of-the-image)

## Visualizing the activations and first-layer weights

**Layer Activations.** The most straightforward visualization technique is to show the activations of the network during the forward pass. For ReLU networks, the activations usually start out looking relatively blobby and dense, but as training progresses they usually become more sparse and localized. One dangerous pitfall that can be easily noticed with this visualization is that some activation maps may be all zero for many different inputs, which can indicate dead filters and can be a symptom of a learning rate that is too high.
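As a minimal sketch of this technique, assuming PyTorch is available, the snippet below registers forward hooks to capture post-ReLU activation maps, counts channels that are all zero across a batch (the "dead filter" symptom described above), and min-max normalizes the first-layer convolution weights for display. The tiny `nn.Sequential` network and the random input batch are stand-ins for a real trained ConvNet and real images.

```python
# Hypothetical example: capturing layer activations with forward hooks
# and checking for "dead" filters (all-zero maps across many inputs).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a real ConvNet; in practice use a trained model.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
)

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        # Store a detached copy of the layer's output for inspection.
        activations[name] = output.detach()
    return hook

# Hook each ReLU so we see post-activation maps, not pre-activations.
for idx, layer in enumerate(model):
    if isinstance(layer, nn.ReLU):
        layer.register_forward_hook(save_activation(f"relu{idx}"))

# Forward a batch of random "images"; real usage would feed a dataset.
x = torch.randn(4, 3, 32, 32)
model(x)

for name, act in activations.items():
    # A filter is "dead" if its map is zero for every input in the batch.
    dead = (act.amax(dim=(0, 2, 3)) == 0).sum().item()
    print(name, tuple(act.shape), "dead filters:", dead)

# First-layer weights: min-max normalize to [0, 1] so each 3x3 RGB
# filter can be rendered as a small image tile (e.g. with matplotlib).
w = model[0].weight.detach()  # shape (8, 3, 3, 3)
w = (w - w.amin()) / (w.amax() - w.amin() + 1e-8)
```

With an untrained model and random inputs the dead-filter count will typically be zero; on a trained network, a persistently zero map across many real inputs is the warning sign the text describes.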