
Brain Tumor Detection and Classification using EfficientNet-B5 and Attention-based Global Average Pooling with Explainable AI

Author(s): Anik Chakraborty, Sayantani Ghosh, Raktim Chakraborty, Dr. Indranil Mitra, Prasun Nandy

Abstract

A brain tumor can be deadly, causing many severe issues and even death if not diagnosed and treated at an early stage, so early detection of brain tumors is of prime importance. Meningioma, glioma, and pituitary tumors are the most common, and 73% (1) of brain tumors are diagnosed as one of these types. Brain MRI is one of the crucial imaging methods for diagnosing a brain tumor. Manual reading of MRIs can be time-consuming, and interpretation may vary with the reader’s expertise, so an AI-based automated computer-aided diagnosis (CAD) system can help identify and classify various brain tumors. AI has successfully solved innumerable complex problems in medicine. However, a lack of transparency persists due to the increased complexity of advanced deep learning models, and their black-box nature complicates the adoption of AI applications in clinical use. Thus, to make these high-stakes decision-making processes interpretable, explainable AI (XAI) can be used. This study proposes a classification model based on EfficientNet-B5 with attention-based weighted Global Average Pooling (GAP) to classify three brain tumor types. It also demonstrates the use of explainable AI to visualize the affected region, or area of interest, identified by our black-box model. Transfer learning has been used, and specific layers of a pre-trained EfficientNet-B5 have been fine-tuned together with an attention-based GAP layer. The proposed model achieved 93.73% validation accuracy in multi-class classification, 2% higher than EfficientNet-B5 + GAP without the attention layer. We have used Grad-CAM to implement the visual interpretability of our classification model. The model achieved a macro-F1 score of 0.94 on the validation dataset, indicating very high macro-recall and macro-precision. Still, any false positive or false negative can be identified by inspecting the visual interpretation of the model’s predictions. The proposed method outperformed many similar studies and effectively explained the region of interest identified by our black-box model using explainable AI.
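To make the architecture concrete, the sketch below builds an EfficientNet-B5 backbone with an attention-weighted GAP head in TensorFlow/Keras. This is a minimal illustration under stated assumptions, not the authors' released code: the input resolution, the number of unfrozen layers, and the 1x1-convolution attention scorer are assumptions; the abstract specifies only that an attention-based weighted GAP layer sits on top of a partially fine-tuned, pre-trained EfficientNet-B5.

```python
# Minimal sketch (assumed details, not the authors' released code) of
# EfficientNet-B5 with an attention-weighted GAP head in TensorFlow/Keras.
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_CLASSES = 3  # meningioma, glioma, pituitary

# Pre-trained backbone; 456x456 is EfficientNet-B5's native resolution.
base = tf.keras.applications.EfficientNetB5(
    include_top=False, weights="imagenet", input_shape=(456, 456, 3))
base.trainable = False                 # transfer learning: freeze everything...
for layer in base.layers[-30:]:        # ...then unfreeze the last few layers
    layer.trainable = True             # (the fine-tuned depth is an assumption)

features = base.output                                  # (B, H, W, C) feature map

# Attention branch: a 1x1 conv scores each spatial location, and a softmax
# over the HxW grid turns the scores into weights that sum to 1.
scores = layers.Conv2D(1, kernel_size=1)(features)      # (B, H, W, 1)
weights = layers.Softmax(axis=[1, 2])(scores)           # spatial attention map

# Attention-weighted GAP: a weighted sum over space, instead of weighting
# every location equally as plain GAP does.
pooled = layers.Lambda(
    lambda t: tf.reduce_sum(t[0] * t[1], axis=[1, 2]))([features, weights])

outputs = layers.Dense(NUM_CLASSES, activation="softmax")(pooled)
model = Model(base.input, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```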
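The visual explanations come from Grad-CAM, which weights the last convolutional feature map by the gradients of the predicted class score. The following sketch, again hedged, computes a Grad-CAM heatmap for the model above; the layer name "top_conv" is the final convolutional layer in Keras' EfficientNet builds, but it should be verified with model.summary() for any particular build.

```python
# Minimal Grad-CAM sketch for the model above. "top_conv" is the name of the
# last convolutional layer in Keras' EfficientNet builds; confirm it with
# model.summary() before relying on it.
import tensorflow as tf

def grad_cam(model, image, conv_layer_name="top_conv"):
    """Return an (h, w) heatmap in [0, 1] for the top predicted class."""
    grad_model = tf.keras.Model(
        model.input,
        [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])  # add batch dimension
        class_idx = tf.argmax(preds[0])
        class_score = preds[0, class_idx]               # score of the top class
    grads = tape.gradient(class_score, conv_out)        # d(score)/d(feature map)
    channel_w = tf.reduce_mean(grads, axis=[1, 2])      # (1, C) channel weights
    cam = tf.reduce_sum(conv_out[0] * channel_w[0], axis=-1)
    cam = tf.nn.relu(cam)                               # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # normalize to [0, 1]
```

The heatmap can then be upsampled to the input resolution and overlaid on the MRI slice to check whether the model focused on the tumor region for a given prediction, which is how false positives and false negatives can be spotted visually.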
