In artificial intelligence applications, advanced computational models such as deep learning are employed to achieve high accuracy, often at the cost of executing an enormous number of operations. Lightweight computational models, by contrast, are far more resource-efficient, making them suitable for devices such as smartphones, tablets, and wearable technology. This paper presents an ultra-low-computation solution for interpreting sign languages to assist deaf and hard-of-hearing individuals, without requiring specialized hardware or significant computational resources. The proposed approach first performs data abstraction on the input: the image is systematically scanned from various perspectives, and the collected information is encoded into a one-dimensional vector. The abstracted information is then processed by a Fully Connected Neural Network (FCN), yielding highly accurate output. We also introduce two abstraction methods, Opaque and Glass, inspired by the interaction of light with different types of objects. These abstractions capture the outer boundary of the hand gesture as well as its row-wise and column-wise pixel density. Experiments on three datasets confirm the efficiency of the proposed method, which achieves 99.4% accuracy in recognizing American Sign Language, 99.96% in recognizing Indian Sign Language, and 99.95% in recognizing Bangla Sign Language. Notably, both the model size and the number of MAC operations are significantly smaller than those of state-of-the-art computational models trained on the same datasets.
• We introduced an ultra-low-computation method for interpreting sign languages.
• We developed a new encoding for the abstraction of hand gesture images.
• Our approach significantly reduces both model size and MAC operations.
• We achieved high accuracy in understanding American, Indian, and Bangla SLs.
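As a rough illustration of the kind of boundary-and-density abstraction described in the abstract, the sketch below encodes a binarized hand mask into a compact one-dimensional vector that a small fully connected classifier could consume. It is a minimal sketch under assumptions: the function name, the exact scan directions, and the feature ordering are illustrative, not the authors' precise Opaque and Glass encodings.

```python
# Illustrative sketch only: assumes a binarized hand-segmentation mask as input.
# The scan directions and feature layout are assumptions, not the paper's exact
# Opaque/Glass definitions.
import numpy as np

def abstract_hand_mask(mask: np.ndarray) -> np.ndarray:
    """Encode a binary hand mask (H x W, values 0/1) as a 1-D feature vector.

    The vector combines:
      * boundary scans: distance from each image edge to the first foreground
        pixel in every row and column (outer shape of the gesture), and
      * density counts: the number of foreground pixels in every row and
        column (row-wise / column-wise pixel density).
    """
    h, w = mask.shape

    # Distance from the left/right edge to the first hand pixel in each row.
    left = np.argmax(mask, axis=1)            # first hit scanning left-to-right
    right = np.argmax(mask[:, ::-1], axis=1)  # first hit scanning right-to-left
    empty_rows = mask.sum(axis=1) == 0        # argmax returns 0 for empty rows,
    left[empty_rows] = w                      # so mark them as "full distance"
    right[empty_rows] = w

    # Distance from the top/bottom edge to the first hand pixel in each column.
    top = np.argmax(mask, axis=0)
    bottom = np.argmax(mask[::-1, :], axis=0)
    empty_cols = mask.sum(axis=0) == 0
    top[empty_cols] = h
    bottom[empty_cols] = h

    # Row-wise and column-wise pixel densities.
    row_density = mask.sum(axis=1)
    col_density = mask.sum(axis=0)

    # Concatenate and normalize into one compact 1-D descriptor.
    features = np.concatenate([left / w, right / w, top / h, bottom / h,
                               row_density / w, col_density / h])
    return features.astype(np.float32)

# Example: a 64x64 mask yields a 6*64 = 384-dimensional vector.
mask = np.zeros((64, 64), dtype=np.uint8)
mask[20:45, 25:40] = 1                        # toy "hand" blob
print(abstract_hand_mask(mask).shape)         # (384,)
```

The appeal of this style of encoding is that the feature length depends only on the image resolution, not on the model, so the downstream fully connected network can remain very small, which is consistent with the paper's reported reductions in model size and MAC operations.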