Defects are ubiquitous in the semiconductor industry, and their detection, classification, and localization at various levels pose a challenge that requires rigorous sampling. Current state-of-the-art optical and e-beam inspection systems, based on gray-scale differences for detection and rule-based binning for classification, are rigid and are not invariant to defect type, size, and substrate material. Furthermore, every new technology presents a new challenge and requires numerous hours of setup, debugging, and manual tuning of process parameters by integrated chip manufacturers. In this work, we propose a deep-learning-based workflow that circumvents these challenges and enables accurate defect detection, classification, and localization in a single framework. In particular, we train a convolutional neural network (CNN) on high-resolution e-beam images of wafers patterned with various types of intentional defects and achieve high detection and classification accuracy. Furthermore, we analyze the convolution filters and their corresponding activation maps to understand the underlying decision-making process of the deep model. To interpret the network's predictions, we further generate class activation maps highlighting the region the model focuses on when making a prediction. This classification-trained model also showcases remarkable defect localization ability, despite not being explicitly trained for that task. High sensitivity (97%) and high specificity (100%), along with rapid and accurate defect localization, demonstrate this model's potential for deployment in a production line.
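
The class-activation-map idea referenced above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes (as in the standard CAM formulation) a CNN whose last convolutional layer is followed by global average pooling and a single linear classifier, and the array names (`feature_maps`, `fc_weights`) are hypothetical stand-ins for those layers' activations and weights.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Weight each final-layer feature map by the classifier weight for the
    chosen class and sum, yielding a coarse localization heat map."""
    # feature_maps: (C, H, W) activations of the last conv layer (hypothetical)
    # fc_weights:   (num_classes, C) weights of the final linear layer (hypothetical)
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)  # -> (H, W)
    cam -= cam.min()          # shift to start at 0
    if cam.max() > 0:
        cam /= cam.max()      # scale to [0, 1] for visualization
    return cam

# Toy example: 4 feature maps of size 8x8 and a 2-class head
rng = np.random.default_rng(0)
fmaps = rng.random((4, 8, 8))
weights = rng.random((2, 4))
cam = class_activation_map(fmaps, weights, class_idx=1)
print(cam.shape)  # (8, 8)
```

Because the heat map comes directly from the classifier's own weights, a model trained only for classification can localize defects this way without any localization labels, which is the behavior the abstract describes.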