A new context-based feature for classification of emotions in photographs

Bibliographic Details
Published in: Multimedia tools and applications, 2021-04, Vol. 80 (10), p. 15589-15618
Main Authors: Krishnani, Divya, Shivakumara, Palaiahnakote, Lu, Tong, Pal, Umapada, Lopresti, Daniel, Kumar, Govindaraju Hemantha
Format: Article
Language:English
Description: A high volume of images is shared on the public Internet each day. Many of these are photographs of people whose facial expressions and actions display various emotions. In this work, we examine the problem of classifying broad categories of emotions in such images, including Bullying, Mildly Aggressive, Very Aggressive, Unhappy, Disdain and Happy. We propose Context-based Features for Classification of Emotions in Photographs (CFCEP). The proposed method first detects faces as the foreground component and treats the remaining (non-face) information as background components from which context features are extracted. Next, for each foreground and background component, we apply the Hanman transform to study local variations within the components. The method combines the Hanman transform (H) values of the foreground and background components according to their merits, yielding two feature vectors. These two feature vectors are fused by deriving weights, producing a single feature vector, which is then fed to a CNN classifier to classify images of different emotions uploaded to social media and the public Internet. Experimental results on our dataset of emotion classes and on benchmark datasets show that the proposed method is effective in terms of average classification rate: 91.7% on our 10-class dataset, 92.3% on the 5-class standard dataset and 81.4% on the FERPlus dataset. In addition, a comparative study with existing methods on the 5-class benchmark dataset, the standard facial-expression dataset (FERPlus) and the 10-class dataset shows that the proposed method performs best in terms of scalability and robustness.
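The fusion step in the abstract above (two feature vectors combined via derived weights into a single vector) can be sketched as follows. This is a minimal, hypothetical illustration: the `fuse_features` helper and its energy-based weight derivation are assumptions made for the sake of example, not the paper's actual formulation of the Hanman-transform merit weights.

```python
import numpy as np

def fuse_features(fg: np.ndarray, bg: np.ndarray) -> np.ndarray:
    """Fuse foreground (face) and background (context) feature vectors
    into one vector, weighting each by its relative energy.

    The energy-based weights here are illustrative only; the paper
    derives its weights from the Hanman transform values.
    """
    e_fg = float(np.sum(fg ** 2))   # energy of foreground features
    e_bg = float(np.sum(bg ** 2))   # energy of background features
    total = e_fg + e_bg
    if total == 0.0:
        return np.zeros_like(fg)
    w_fg, w_bg = e_fg / total, e_bg / total   # weights sum to 1
    return w_fg * fg + w_bg * bg

# Toy vectors standing in for the two extracted feature vectors
fg = np.array([0.2, 0.8, 0.5])
bg = np.array([0.1, 0.3, 0.4])
fused = fuse_features(fg, bg)
```

Because the weights sum to one, each element of `fused` is a convex combination of the corresponding foreground and background values, so the fused vector stays within the range spanned by the two inputs.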
DOI: 10.1007/s11042-020-10404-8
Publisher: Springer US (New York)
Rights: The Author(s), under exclusive licence to Springer Science+Business Media, LLC part of Springer Nature 2021
ISSN: 1380-7501
EISSN: 1573-7721
Source: ABI/INFORM Global; Springer Link
Subjects: Benchmarks
Bullying
Classification
Comparative studies
Computer Communication Networks
Computer Science
Context
Data Structures and Information Theory
Datasets
Digital media
Emotions
Feature extraction
Image classification
Internet
Multimedia Information Systems
Special Purpose and Application-Based Systems