The goal of this project is an explication framework for machine learning techniques, with a particular focus on neural networks. Since current classifiers are extremely brittle, we will first work on provable robustness of classifier decisions: robust decisions are a necessary property for deriving meaningful explications. Robustness will be achieved either by a large-margin approach specifically targeted at ReLU networks or, for general networks, by penalising non-smoothness of the classifier. For such robust classifiers we will provide operational explications, presenting to the user how the input would have to be changed so that the classifier changes its decision. These changes will be restricted so that they are likely under the data-generating probability distribution. Finally, we will investigate the efficient visualisation and presentation of the results, working towards an interactive process that explores operational explications in the neighbourhood of the original input, supports users in assessing the plausibility of classifier decisions, and supports machine learning experts in debugging.
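To make the notion of an operational explication concrete, the following is a minimal hypothetical sketch, not the project's actual method: a gradient-based search for the smallest change to an input that flips a classifier's decision. The classifier here is a toy linear model, and the constraint that changes remain likely under the data-generating distribution is omitted for brevity; all names, parameters, and step sizes are illustrative assumptions.

```python
import numpy as np

# Toy linear classifier: decision = 1 if w @ x + b > 0, else 0.
# (Illustrative stand-in for a robust neural network classifier.)
w = np.array([1.0, -2.0])
b = -0.5

def predict(x):
    return int(w @ x + b > 0)

def counterfactual(x, step=0.05, max_iter=1000):
    """Search for a nearby input with the opposite label by moving x
    along the normal of the decision boundary (the logit gradient).
    A full method would additionally penalise moves into low-density
    regions of the data distribution; that term is omitted here."""
    x_cf = x.copy()
    target = 1 - predict(x)
    direction = w / np.linalg.norm(w)   # gradient of the logit w.r.t. x
    if target == 0:
        direction = -direction          # move towards the other class
    for _ in range(max_iter):
        if predict(x_cf) == target:
            return x_cf
        x_cf = x_cf + step * direction
    return x_cf

x = np.array([0.0, 0.0])    # classified as 0, since w @ x + b = -0.5
x_cf = counterfactual(x)
print(predict(x), predict(x_cf))   # -> 0 1
```

The returned `x_cf` is the kind of object an operational explication would present to the user: "had the input looked like this instead, the decision would have been different."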