DISubNet: Depthwise Separable Inception Subnetwork for Pig Treatment Classification Using Thermal Data.
- Author
- Colaco, Savina Jassica; Kim, Jung Hwan; Poulose, Alwin; Neethirajan, Suresh; and Han, Dong Seog
- Subjects
IMAGE recognition (Computer vision), THERMOGRAPHY, SUSTAINABILITY, DEEP learning, ANIMAL culture, ANIMAL welfare, SWINE
- Abstract
Simple Summary: Thermal imaging is gaining popularity in poultry, swine, and dairy animal husbandry for detecting disease and distress. In this study, we present a depthwise separable inception subnetwork (DISubNet) for classifying pig treatments, offering two versions: DISubNetV1 and DISubNetV2. These lightweight models are compared to other deep learning models used for image classification. A forward-looking infrared (FLIR) camera captures thermal data for model training. Experimental results show the proposed models outperform others in classifying pig treatments using thermal images, achieving 99.96–99.98% accuracy with fewer parameters, potentially improving animal welfare and promoting sustainable production.

Abstract: Thermal imaging is increasingly used in poultry, swine, and dairy animal husbandry to detect disease and distress. In intensive pig production systems, early detection of health and welfare issues is crucial for timely intervention. Using thermal imaging for pig treatment classification can improve animal welfare and promote sustainable pig production. In this paper, we present a depthwise separable inception subnetwork (DISubNet), a lightweight model for classifying four pig treatments. Based on the modified model architecture, we propose two DISubNet versions: DISubNetV1 and DISubNetV2. Our proposed models are compared to other deep learning models commonly employed for image classification. The thermal dataset captured by a forward-looking infrared (FLIR) camera is used to train these models. The experimental results demonstrate that the proposed models for thermal images of various pig treatments outperform other models. In addition, both proposed models achieve approximately 99.96–99.98% classification accuracy with fewer parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2023