Gender and Age Detection Using Multimodal Deep Neural Network
Main Authors:
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
Summary: This research article introduces an innovative method for gender and age identification that leverages multimodal deep neural networks (DNNs) integrated with the OpenCV and scikit-learn frameworks. The ability to accurately infer gender and age from facial images is crucial for applications ranging from marketing analytics to security surveillance. However, achieving robust performance in real-world scenarios remains challenging due to variations in facial expressions, lighting conditions, and occlusions. To address these challenges, we propose a multimodal DNN architecture that fuses information from facial images and their accompanying metadata. Our model combines convolutional neural networks (CNNs) for image feature extraction with recurrent neural networks (RNNs) for processing sequential metadata. We use OpenCV to preprocess the images, enhancing their quality and extracting facial features efficiently, while scikit-learn is employed for metadata preprocessing and model evaluation. Extensive experiments on benchmark datasets demonstrate superior gender and age classification performance compared to existing methods. This research advances multimodal deep learning techniques for gender and age detection; the integration of OpenCV and scikit-learn facilitates seamless preprocessing, feature extraction, and evaluation, enhancing the practical applicability of our approach.
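To make the fusion architecture described in the summary concrete, below is a minimal, hypothetical sketch: a CNN branch for face crops, a GRU branch for sequential metadata, and concatenated features feeding gender and age heads, with OpenCV handling the face-detection preprocessing step. PyTorch, the 64×64 grayscale input, the layer sizes, the Haar-cascade detector, and the `meta_dim` metadata dimension are all illustrative assumptions, not the authors' published configuration.

```python
# Hypothetical sketch of a multimodal CNN+RNN fusion model for gender and
# age classification. All sizes and the metadata schema are assumptions.
import cv2
import torch
import torch.nn as nn

class MultimodalGenderAgeNet(nn.Module):
    def __init__(self, meta_dim=8, num_age_groups=8):
        super().__init__()
        # CNN branch: feature vector from a 1x64x64 grayscale face crop
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
        )
        # RNN branch: encodes a sequence of per-record metadata vectors
        self.rnn = nn.GRU(input_size=meta_dim, hidden_size=32, batch_first=True)
        # Fused heads for the two tasks
        self.gender_head = nn.Linear(128 + 32, 2)
        self.age_head = nn.Linear(128 + 32, num_age_groups)

    def forward(self, image, metadata_seq):
        img_feat = self.cnn(image)        # (B, 128)
        _, h = self.rnn(metadata_seq)     # h: (1, B, 32), last hidden state
        fused = torch.cat([img_feat, h.squeeze(0)], dim=1)
        return self.gender_head(fused), self.age_head(fused)

def preprocess_face(path):
    """Detect and crop the largest face with OpenCV, as the abstract's
    preprocessing stage suggests; returns a (1, 64, 64) float tensor
    (add a batch dimension with .unsqueeze(0) before calling the model)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError(f"no face detected in {path}")
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest detection
    crop = cv2.equalizeHist(img[y:y + h, x:x + w])      # enhance contrast
    crop = cv2.resize(crop, (64, 64)).astype("float32") / 255.0
    return torch.from_numpy(crop).unsqueeze(0)
```

The Haar cascade stands in for whatever face detector the authors actually use; OpenCV's DNN face detectors would slot into the same place.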
ISSN: 2771-1358
DOI: 10.1109/ICCUBEA61740.2024.10775234
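For the scikit-learn evaluation stage the summary mentions, the following is a brief sketch of how the classification results might be scored. The labels and predictions are placeholders, not results from the paper.

```python
# Hypothetical evaluation step using scikit-learn metrics; y_true and
# y_pred are placeholder values, not data from the paper.
from sklearn.metrics import accuracy_score, classification_report

y_true = [0, 1, 1, 0]   # placeholder ground-truth gender labels
y_pred = [0, 1, 0, 0]   # placeholder model predictions
print(accuracy_score(y_true, y_pred))
print(classification_report(y_true, y_pred, target_names=["male", "female"]))
```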