With the ever-increasing use of complex machine learning models in critical applications, explaining a model’s decisions has become a necessity. With applications spanning from credit scoring to healthcare, the impact of these models is undeniable. Among the many ways of explaining the decisions of such complex models, local post hoc model-agnostic explanations have seen wide adoption. These methods explain each prediction independently of the modelling technique used during training. As explanations, they provide either individual feature attributions or sufficient rules that represent the conditions under which a prediction is made. Current state-of-the-art methods generate explanations with rudimentary techniques built on fairly simple surrogate models. The data structures they employ restrict them to local explanations that fail to scale to large datasets. In this paper, we propose a novel pipeline for building explanations: a Generative Adversarial Network generates synthetic data, while a piecewise linear model in the form of a Linear Model Tree serves as the surrogate model. The combination of these two techniques yields a powerful yet intuitive data structure for explaining complex machine learning models. The novelty of this data structure is that it provides explanations in the form of both decision rules and feature attributions. It also enables us to build a global explanation model in a computationally efficient manner that scales to large amounts of data.
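To illustrate how a Linear Model Tree surrogate can yield both kinds of explanation at once, the following is a minimal, hand-rolled sketch: a depth-1 tree whose split point is fixed by hand (a real Linear Model Tree would learn its splits greedily from the data), with an ordinary least-squares model fitted in each leaf. The path to a leaf gives the decision rule; the leaf's coefficients give the feature attributions. All names here (`explain`, the synthetic target) are illustrative, not from the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
# Stand-in for a black-box model's output: piecewise-linear in x1,
# with the regime switching on the sign of x0.
y = np.where(X[:, 0] > 0, 3 * X[:, 1], -2 * X[:, 1]) + 0.01 * rng.normal(size=500)

# Depth-1 linear model tree: one split (chosen by hand here),
# one linear model per leaf.
split_feat, thresh = 0, 0.0
left = X[:, split_feat] <= thresh
leaves = {
    False: LinearRegression().fit(X[left], y[left]),    # rule: x0 <= 0
    True:  LinearRegression().fit(X[~left], y[~left]),  # rule: x0 > 0
}

def explain(x):
    """Return (decision rule, feature attributions) for one instance."""
    go_right = bool(x[split_feat] > thresh)
    rule = f"x{split_feat} {'>' if go_right else '<='} {thresh}"
    return rule, leaves[go_right].coef_

rule, attributions = explain(np.array([1.0, 0.5]))
```

For the instance `[1.0, 0.5]`, the rule recovered is `x0 > 0.0` and the attribution vector is close to `[0, 3]`, matching the generating process in that region; this is the sense in which one structure answers both "under what conditions?" and "which features mattered, and by how much?".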