Abstract

Brain tumors present a formidable diagnostic challenge due to their aberrant cell growth. Accurate determination of tumor location and size is paramount for effective diagnosis. Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) are pivotal tools in clinical diagnosis, yet tumor segmentation within their images remains challenging, particularly at boundary pixels, owing to limited sensitivity. Recent endeavors have introduced fusion-based strategies to refine segmentation accuracy, yet these methods often prove inadequate. In response, we introduce the Parallel-Way framework to surmount these obstacles. Our approach integrates MRI and PET data for a holistic analysis. Initially, we enhance image quality by employing noise reduction, bias field correction, and adaptive thresholding, leveraging the Improved Kalman Filter (IKF), Expectation Maximization (EM), and the Improved Vibe Algorithm (IVib), respectively. Subsequently, we conduct multi-modality image fusion through the Dual-Tree Complex Wavelet Transform (DTWCT) to amalgamate data from both modalities. Following fusion, we extract pertinent features using the Advanced Capsule Network (ACN) and reduce feature dimensionality via Multi-objective Diverse Evolution-based selection. Tumor segmentation is then executed using the Twin Vision Transformer with a dual attention mechanism. Our implemented Parallel-Way framework exhibits heightened model performance: evaluation across multiple metrics, including accuracy, sensitivity, specificity, F1-Score, and AUC, underscores its superiority over existing methodologies.

Plain Language Summary

Diagnosing brain tumors is challenging due to their abnormal growth. Accurately determining the tumor's location and size is crucial for effective treatment. Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) are important tools in clinical diagnosis, but accurately segmenting tumors in their images, especially at the edges, is difficult due to limited sensitivity. To address this, we propose the Parallel-Way framework, which integrates MRI and PET data for a comprehensive analysis. First, we enhance image quality using noise reduction (Improved Kalman Filter, IKF), bias field correction (Expectation Maximization, EM), and adaptive thresholding (Improved Vibe Algorithm, IVib). Next, we merge the MRI and PET data using the Dual-Tree Complex Wavelet Transform (DTWCT) to combine information from both modalities. After fusion, we extract relevant features using the Advanced Capsule Network (ACN) and reduce their complexity with Multi-objective Diverse Evolution-based selection. Tumor segmentation is then performed using the Twin Vision Transformer with a dual attention mechanism. The Parallel-Way framework aims to improve the accuracy of brain tumor segmentation, overcoming the limitations of existing methods. By integrating MRI and PET data and applying advanced image processing and machine learning techniques, we believe our approach provides a more reliable and accurate method for diagnosing and treating brain tumors. Evaluations using metrics such as accuracy, sensitivity, specificity, F1-Score, and AUC show that our method outperforms existing approaches.
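To make the fusion stage concrete, the sketch below applies one common dual-tree complex wavelet fusion rule (average the lowpass subbands, keep the larger-magnitude highpass coefficients) using the open-source Python dtcwt package. The abstract does not specify the framework's actual fusion rule, decomposition depth, or implementation, so the rule and parameters here are illustrative assumptions, not the authors' method.

```python
# Illustrative multi-modality fusion with the dual-tree complex wavelet
# transform (open-source `dtcwt` package). The fusion rule -- average
# lowpass, max-magnitude highpass -- is a common textbook choice and an
# ASSUMPTION; the paper's actual DTWCT fusion rule is not given here.
import numpy as np
import dtcwt

def fuse_dtcwt(mri: np.ndarray, pet: np.ndarray, nlevels: int = 3) -> np.ndarray:
    """Fuse two co-registered grayscale images of the same (even) size."""
    transform = dtcwt.Transform2d()
    p_mri = transform.forward(mri.astype(float), nlevels=nlevels)
    p_pet = transform.forward(pet.astype(float), nlevels=nlevels)

    # Lowpass subband: a simple average preserves overall intensity structure.
    fused_low = 0.5 * (p_mri.lowpass + p_pet.lowpass)

    # Highpass subbands: keep the complex coefficient with the larger
    # magnitude, favoring whichever modality has the stronger local detail.
    fused_high = []
    for h_mri, h_pet in zip(p_mri.highpasses, p_pet.highpasses):
        mask = np.abs(h_mri) >= np.abs(h_pet)
        fused_high.append(np.where(mask, h_mri, h_pet))

    fused_pyramid = dtcwt.Pyramid(fused_low, tuple(fused_high))
    return transform.inverse(fused_pyramid)
```

In practice the MRI and PET inputs would first be co-registered and resampled to a common grid; the max-magnitude rule then selects, at each location and scale, the modality carrying the stronger edge or texture information.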
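The abstract names a Twin Vision Transformer with a dual attention mechanism but gives no architectural detail. Below is a minimal PyTorch sketch of one common reading of "dual attention": parallel spatial (position) and channel self-attention over a feature map, in the style of DANet. The module, its dimensions, and the residual combination are assumptions for illustration, not the paper's design.

```python
# Minimal "dual attention" sketch: spatial self-attention and channel
# self-attention computed in parallel and summed with a residual. This is
# an ASSUMPTION about the mechanism; the paper's actual Twin Vision
# Transformer block is not described in the abstract.
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions produce query/key/value maps (channels >= 8 assumed).
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        n = h * w
        # Spatial (position) attention: every pixel attends to every pixel.
        q = self.query(x).reshape(b, -1, n).permute(0, 2, 1)   # (b, n, c/8)
        k = self.key(x).reshape(b, -1, n)                      # (b, c/8, n)
        v = self.value(x).reshape(b, c, n)                     # (b, c, n)
        spatial = torch.softmax(q @ k, dim=-1)                 # (b, n, n)
        out_spatial = (v @ spatial.permute(0, 2, 1)).reshape(b, c, h, w)
        # Channel attention: every channel attends to every channel.
        feat = x.reshape(b, c, n)
        channel = torch.softmax(feat @ feat.permute(0, 2, 1), dim=-1)  # (b, c, c)
        out_channel = (channel @ feat).reshape(b, c, h, w)
        # Sum the two attention paths with a residual connection.
        return x + out_spatial + out_channel
```

Spatial attention sharpens boundary localization (the weak point the abstract highlights), while channel attention reweights feature maps; summing the two paths lets each compensate for the other.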