Deep artificial neural network based multilayer gated recurrent model for effective prediction of software development effort.
- Author
- Anitha, CH and Parveen, Nikath
- Subjects
- COMPUTER software development, HONEY, DEEP learning, OPTIMIZATION algorithms, SOFT computing, PREDICTION models, DATA scrubbing
- Abstract
Estimating software development effort is a challenging but important task in project management. Several soft computing approaches have been proposed to increase estimation accuracy, and optimization techniques are used to focus on the most informative features. However, the data processing used in most existing works has been found to be unreliable and time-consuming, and it typically leads to higher error rates. This research therefore proposes an efficient software development effort prediction model that employs a novel deep learning approach to address these limitations. The proposed pipeline comprises data collection, pre-processing, feature selection and software development effort prediction. After collection, the data are pre-processed through data cleaning, normalization and missing-value imputation. The Expanded Archer Fish Optimization method (Ext_AFO) is then used to select the most relevant features from the pre-processed data. The Multilayer Perceptron Assisted Honey Bidirectional Gated Recurrent Feed Forward Network (Multi-HBiG) is developed in this work as an intelligent prediction model for software development effort estimation, and its parameters are tuned with the Adaptive Honey Badger Optimization Algorithm (A-Hba) to improve overall estimation performance. The study uses the Albrecht, China, Desharnais, Kemerer, Maxwell, Kitchenham and COCOMO81 datasets. In the results, the proposed approach outperformed comparison models on Mean Relative Error (MRE), Mean Magnitude of Relative Error (MMRE), Mean Balanced Relative Error (MBRE) and Mean Inverted Balanced Relative Error (MIBRE). Assessed by Mean Absolute Error (MAE), the proposed model achieved 0.0753 on the China dataset, 0.0763 on COCOMO81, 0.0737 on Desharnais, 0.0754 on Kemerer, 0.0759 on Kitchenham, 0.0734 on Maxwell and 0.0737 on Albrecht.
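For reference, the error measures named in the abstract have widely used definitions in the effort-estimation literature. The sketch below is a minimal Python illustration of those standard formulas, assuming paired actual and predicted effort values per project; it is not code from the paper, and the helper name is hypothetical.

```python
import numpy as np

def effort_errors(actual, predicted):
    """Common effort-estimation error metrics (hypothetical helper,
    not code from the paper): MAE, MMRE, MBRE and MIBRE as they are
    usually defined in the effort-estimation literature."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    abs_err = np.abs(actual - predicted)

    mre = abs_err / actual                          # relative error per project
    bre = abs_err / np.minimum(actual, predicted)   # balanced relative error
    ibre = abs_err / np.maximum(actual, predicted)  # inverted balanced relative error

    return {
        "MAE": abs_err.mean(),   # mean absolute error
        "MMRE": mre.mean(),      # mean magnitude of relative error
        "MBRE": bre.mean(),      # mean balanced relative error
        "MIBRE": ibre.mean(),    # mean inverted balanced relative error
    }

# Example: three projects with actual vs. predicted effort (person-months)
print(effort_errors([100.0, 250.0, 40.0], [90.0, 280.0, 45.0]))
```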
- Published
- 2024