1. Multi-modal remote perception learning for object sensory data
- Authors
Nouf Abdullah Almujally, Adnan Ahmed Rafique, Naif Al Mudawi, Abdulwahab Alazeb, Mohammed Alonazi, Asaad Algarni, Ahmad Jalal, and Hui Liu
- Subjects
multi-modal, sensory data, object recognition, visionary sensor, simulation environment, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571
- Abstract
Introduction: When it comes to interpreting visual input, intelligent systems make use of contextual scene learning, which significantly improves both resilience and context awareness. The need to manage enormous amounts of data is a driving force behind the growing interest in computational frameworks, particularly in the context of autonomous cars.

Method: This study introduces a novel approach, Deep Fused Networks (DFN), which improves contextual scene comprehension by merging multi-object detection and semantic analysis.

Results: To enhance accuracy and comprehension in complex situations, DFN uses a combination of deep learning and fusion techniques, achieving a minimum accuracy gain of 6.4% on the SUN-RGB-D dataset and 3.6% on the NYU-Dv2 dataset.

Discussion: The findings demonstrate considerable enhancements in object detection and semantic analysis compared to the methodologies currently in use.
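The abstract describes DFN as merging features from a multi-object detection branch with features from a semantic-analysis branch. The paper's actual architecture is not given here, so the following is only a minimal NumPy sketch of one common fusion pattern (concatenate the two branches' features, then apply a learned projection); all names, shapes, and the fusion layer itself are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Illustrative stand-in for a DFN-style fusion step (assumed, not from the paper):
# detection-branch and semantic-branch features are concatenated and passed
# through a linear layer with ReLU to form a shared representation.

rng = np.random.default_rng(0)

def fuse_features(det_feats, sem_feats, w, b):
    """Concatenate the two branches' features and apply a fusion layer."""
    fused = np.concatenate([det_feats, sem_feats], axis=-1)  # (n, d_det + d_sem)
    return np.maximum(fused @ w + b, 0.0)                    # ReLU, shape (n, d_out)

n, d_det, d_sem, d_out = 4, 8, 6, 5
det = rng.standard_normal((n, d_det))   # placeholder detection-branch features
sem = rng.standard_normal((n, d_sem))   # placeholder semantic-branch features
w = rng.standard_normal((d_det + d_sem, d_out)) * 0.1  # fusion weights
b = np.zeros(d_out)                                    # fusion bias

out = fuse_features(det, sem, w, b)
print(out.shape)  # (4, 5)
```

In a trained network the weights `w` and `b` would be learned jointly with both branches; here they are random placeholders to show only the data flow.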
- Published
- 2024