AUTHOR=Chauhan Sohamkumar, Cheruku Ramalingaswamy, Reddy Edla Damodar, Kampa Lavanya, Nayak Soumya Ranjan, Giri Jayant, Mallik Saurav, Aluvala Srinivas, Boddu Vijayasree, Qin Hong TITLE=BT-CNN: a balanced binary tree architecture for classification of brain tumour using MRI imaging JOURNAL=Frontiers in Physiology VOLUME=15 YEAR=2024 URL=https://www.frontiersin.org/journals/physiology/articles/10.3389/fphys.2024.1349111 DOI=10.3389/fphys.2024.1349111 ISSN=1664-042X ABSTRACT=
Deep learning has become an important technique in clinical diagnosis and therapy, and the Convolutional Neural Network (CNN) is a key deep learning development used in computer vision. Our medical investigation focuses on the identification of brain tumours. To improve brain tumour classification performance, a Balanced binary Tree CNN (BT-CNN), framed in a binary tree-like structure, is proposed. It has two distinct module types: the convolution group and the depthwise separable convolution group. The convolution group achieves lower time at the cost of higher memory, while the opposite is true for the depthwise separable convolution group. The balanced binary tree-inspired CNN balances the two groups to achieve maximum performance in terms of time and space. The proposed model is trained and its performance compared, on public datasets, with state-of-the-art models including CNN-KNN and the models proposed by Musallam et al., Saikat et al., and Amin et al. Before the data are fed into the models, the images are pre-processed using CLAHE, denoising, cropping, and scaling. The pre-processed dataset is partitioned into training and testing sets under 5-fold cross-validation. The proposed model reported an average training accuracy of 99.61% and achieved 96.06% test accuracy, whereas the other models achieved 68.86%, 85.8%, 86.88%, and 90.41%, respectively. Further, the proposed model obtained the lowest standard deviation of training and test accuracies across all folds, indicating that its performance is stable across data partitions.
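The abstract contrasts two building blocks: a standard convolution group (lower time, higher memory) and a depthwise separable convolution group (the opposite trade-off). The sketch below is only an illustration of that trade-off in PyTorch, not the authors' BT-CNN code; the channel sizes, batch normalization, and activation choices are assumptions.

```python
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """Standard convolution block: fewer sequential ops, but more parameters (memory)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class DepthwiseSeparableBlock(nn.Module):
    """Depthwise + pointwise convolution: far fewer parameters, at the cost of extra layers."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch),  # depthwise
            nn.Conv2d(in_ch, out_ch, kernel_size=1),                          # pointwise
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)
    for block in (ConvBlock(32, 64), DepthwiseSeparableBlock(32, 64)):
        params = sum(p.numel() for p in block.parameters())
        print(type(block).__name__, "params:", params, "output:", tuple(block(x).shape))
```

Printing the parameter counts makes the memory side of the trade-off visible: the depthwise separable block uses far fewer weights for the same input/output shape, which is the property the BT-CNN balances against the faster standard convolution group.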
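The pre-processing pipeline (CLAHE, denoising, cropping, scaling) and the 5-fold partitioning can be sketched roughly as below with OpenCV and scikit-learn. This is a minimal sketch under assumed settings: the clip limit, denoising strength, crop threshold, target resolution, and class count are not taken from the paper.

```python
import cv2
import numpy as np
from sklearn.model_selection import KFold


def preprocess(image_gray, size=224):
    """CLAHE -> denoise -> crop to the non-background region -> scale.
    All numeric settings here are illustrative assumptions."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(image_gray)                          # contrast enhancement (CLAHE)
    img = cv2.fastNlMeansDenoising(img, h=10)              # denoising
    _, mask = cv2.threshold(img, 20, 255, cv2.THRESH_BINARY)
    x, y, w, h = cv2.boundingRect(cv2.findNonZero(mask))   # crop away dark background
    img = img[y:y + h, x:x + w]
    return cv2.resize(img, (size, size)).astype(np.float32) / 255.0  # scaling


# Synthetic stand-ins for MRI slices and labels (the class count is an assumption).
images = np.stack([preprocess((np.random.rand(256, 256) * 255).astype(np.uint8))
                   for _ in range(20)])
labels = np.random.randint(0, 4, size=20)

# 5-fold partitioning of the pre-processed data into training and testing sets.
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kfold.split(images)):
    X_train, X_test = images[train_idx], images[test_idx]
    y_train, y_test = labels[train_idx], labels[test_idx]
    print(f"fold {fold}: {len(train_idx)} training / {len(test_idx)} testing samples")
```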