
Missing person identification system.

Authors :
Anitha, G.
Rao, E. Gurumohan
Kumar, V. Pavan
Prashanth, M.
Balaram, A.
Source :
AIP Conference Proceedings; 2023, Vol. 2492 Issue 1, p1-6, 6p
Publication Year :
2023

Abstract

Every year, tens of thousands of people are declared missing in India, and a significant number of them are never found. With the aid of facial recognition, this paper proposes a deep learning application for identifying a missing person from images of a large number of people. Photographs of suspicious-looking individuals, together with location and time information, can be uploaded to a shared portal. Each uploaded snapshot is matched against the missing persons' images in the repository: the input picture is classified, and the best-fitting image is selected from the missing-person database. Using the images reported by the public, an algorithm is trained so that missing persons can be accurately found in the archive of missing persons' images. Face recognition is performed with a Convolutional Neural Network (CNN), a deep learning architecture commonly used for image-based applications. Facial features are extracted from the pictures using a pre-trained VGG (Visual Geometry Group)-Face deep CNN model. Unlike other deep learning approaches, this method uses the convolutional network only as a high-level feature extractor, with a trained KNN classifier performing the individual recognition. When properly trained, VGG-Face, the best-performing CNN architecture for face detection, yields a model that is robust to distortion, lighting, and the person's age, and it exceeds previous approaches to facial recognition-based missing person identification. The classification accuracy of this recognition system is 90%. [ABSTRACT FROM AUTHOR]
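The pipeline described in the abstract (a pre-trained VGG-Face-style CNN used purely as a feature extractor, followed by a KNN classifier that matches a query face against the missing-person gallery) could be sketched roughly as below. This is a minimal illustration, not the authors' implementation: extract_vggface_embedding is a hypothetical stand-in for the pre-trained VGG-Face network (stubbed here so the example runs), the 4096-dimensional descriptor size assumes the fc7 layer is used, and the cosine-distance 1-nearest-neighbour match is one plausible choice of KNN configuration.

```python
# Hedged sketch of the described pipeline: VGG-Face-style embeddings + KNN matching.
# extract_vggface_embedding() is a placeholder for the pre-trained CNN feature
# extractor mentioned in the abstract; here it is stubbed with deterministic
# random vectors so the example is self-contained and runnable.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

EMBEDDING_DIM = 4096  # assumed descriptor length (VGG-Face fc7 layer)


def extract_vggface_embedding(image: np.ndarray) -> np.ndarray:
    """Placeholder for the pre-trained VGG-Face CNN.

    In a real system this would run an aligned face crop through the
    network and return its deep feature descriptor.
    """
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    return rng.standard_normal(EMBEDDING_DIM)


# Gallery of registered missing-person photos (stubbed with random images).
gallery_images = [np.random.rand(224, 224, 3) for _ in range(10)]
gallery_labels = [f"person_{i}" for i in range(10)]

# Embed the gallery once and fit the KNN classifier on the embeddings.
X = np.stack([extract_vggface_embedding(img) for img in gallery_images])
knn = KNeighborsClassifier(n_neighbors=1, metric="cosine")
knn.fit(X, gallery_labels)

# A photo uploaded through the public portal is embedded the same way and
# matched to the closest identity in the missing-person database.
query_image = np.random.rand(224, 224, 3)
query_embedding = extract_vggface_embedding(query_image).reshape(1, -1)
print("Best match in missing-person database:", knn.predict(query_embedding)[0])
```

In practice the stubbed extractor would be replaced by an actual pre-trained VGG-Face model (for example via the deepface or keras_vggface packages), and the gallery would hold one embedding per registered missing-person photograph.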

Details

Language :
English
ISSN :
0094-243X
Volume :
2492
Issue :
1
Database :
Complementary Index
Journal :
AIP Conference Proceedings
Publication Type :
Conference
Accession number :
164041458
Full Text :
https://doi.org/10.1063/5.0113331