A MobileNet Model for Identifying Males and Females in Crowds

Priyanka Chauhan, Rajeev G. Vishwakarma

Abstract

Efficient real-time models are essential to many computer vision applications. One such application is gender identification in densely crowded scenes. This study introduces a robust MobileNet model tailored to classify individuals as male or female in highly crowded environments. MobileNet is well known for its compact architecture and computational efficiency, making it well suited to on-device applications with constrained resources. The study began with the compilation of an extensive dataset covering crowd scenes of varying density, ethnic composition, and lighting conditions. The dataset was augmented with several techniques, including random cropping, rotation, and horizontal flipping. These augmentations improved the model's ability to generalize and mitigated overfitting, a problem often seen in specialized domains. The proposed MobileNet model was then fine-tuned, with particular focus on the layers most responsible for spatial features and facial characteristics. Refining these layers made the model more responsive to gender-specific cues such as hair length, facial structure, and attire. A comparative assessment against traditional convolutional neural networks showed comparable accuracy with a significant reduction in computational burden. The model achieved 98.58% accuracy on the test dataset, outperforming the other models evaluated, and reduced computational resource use by 13%.
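The abstract describes a standard transfer-learning recipe: an ImageNet-pretrained MobileNet backbone, an augmentation pipeline of random cropping, rotation, and horizontal flipping, and selective fine-tuning of later layers. The sketch below illustrates that recipe in TensorFlow/Keras; the input resolution, rotation range, number of trainable layers, and optimizer settings are illustrative assumptions, not the authors' published configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Augmentation pipeline mirroring the abstract's description:
# random cropping, rotation, and horizontal flipping.
augment = tf.keras.Sequential([
    layers.RandomCrop(224, 224),        # crop size is an assumption
    layers.RandomRotation(0.1),         # rotation range is an assumption
    layers.RandomFlip("horizontal"),
])

# Pretrained MobileNet backbone without its classification head.
base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)

# Freeze early layers; fine-tune only the later blocks, which the
# abstract identifies as carrying spatial and facial features.
# The cutoff (last 20 layers trainable) is a hypothetical choice.
base.trainable = True
for layer in base.layers[:-20]:
    layer.trainable = False

inputs = tf.keras.Input(shape=(256, 256, 3))
x = augment(inputs)
x = tf.keras.applications.mobilenet.preprocess_input(x)
x = base(x, training=False)             # keep BatchNorm in inference mode
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # binary: male/female

model = tf.keras.Model(inputs, outputs)
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
```

Freezing the early layers preserves generic low-level filters from ImageNet while letting the later, more task-specific layers adapt to gender cues, which is consistent with the selective fine-tuning strategy the abstract reports.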
