Automated Food Weight and Content Estimation Using Computer Vision and AI Algorithms
The work aims to leverage computer vision and artificial intelligence technologies to quantify key components in food distribution services. Specifically, it focuses on dish counting, content identification, and portion size estimation in a dining hall setting. An RGB camera is employed to capture the tray delivery process in a self-service restaurant, providing test images for plate counting and content identification; the full abstract appears in the description field below.
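The detection step summarized above lends itself to a short illustration. The sketch below is a minimal example, not the authors' implementation: it assumes an Ultralytics-style YOLO model and hypothetical `tray_food.pt` weights and `tray_food.yaml` dataset files, and shows plate counting and content identification at the 0.5 confidence threshold mentioned in the abstract, plus the mAP@0.5 evaluation the paper reports.

```python
# Minimal sketch (not the paper's code): count plates and identify contents in
# a tray image with a YOLO detector. "tray_food.pt" and "tray_food.yaml" are
# hypothetical stand-ins for the authors' trained weights and labeled dataset.
from collections import Counter

from ultralytics import YOLO  # pip install ultralytics

model = YOLO("tray_food.pt")                     # hypothetical fine-tuned weights
result = model.predict("tray.jpg", conf=0.5)[0]  # confidence threshold 0.5, as in the abstract

labels = [result.names[int(c)] for c in result.boxes.cls]
counts = Counter(labels)                         # e.g. Counter({"plate": 3, "rice": 1, "chicken": 1})
print("Plate count:", counts.get("plate", 0))
print("Contents:", dict(counts))

# Evaluation against a labeled validation set (precision-recall / mAP@0.5);
# the overall figure reported in the paper is mAP = 0.873.
metrics = model.val(data="tray_food.yaml")
print("mAP@0.5:", metrics.box.map50)
```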
Published in: | Sensors (Basel, Switzerland), 2024-11, Vol. 24 (23), p. 7660 |
---|---|
Main Authors: | Gonzalez, Bryan; Garcia, Gonzalo; Velastin, Sergio A; GholamHosseini, Hamid; Tejeda, Lino; Farias, Gonzalo |
Format: | Article |
Language: | English |
Subjects: | Algorithms; Artificial Intelligence; Automation; Cameras; Computer vision; Deep Learning; Food; Food habits; Food service; food weight estimation; Humans; Identification; Image Processing, Computer-Assisted - methods; Ingredients; Machine vision; Restaurants |
DOI: | 10.3390/s24237660 |
cited_by | |
---|---|
cites | cdi_FETCH-LOGICAL-c3501-930caa144c1fdda214d93c84231d1c96a91295ecef2a6130dc008a599eccb4243 |
container_end_page | |
container_issue | 23 |
container_start_page | 7660 |
container_title | Sensors (Basel, Switzerland) |
container_volume | 24 |
creator | Gonzalez, Bryan; Garcia, Gonzalo; Velastin, Sergio A; GholamHosseini, Hamid; Tejeda, Lino; Farias, Gonzalo |
description | The work aims to leverage computer vision and artificial intelligence technologies to quantify key components in food distribution services. Specifically, it focuses on dish counting, content identification, and portion size estimation in a dining hall setting. An RGB camera is employed to capture the tray delivery process in a self-service restaurant, providing test images for plate counting and content identification algorithm comparison, using standard evaluation metrics. The approach utilized the YOLO architecture, a widely recognized deep learning model for object detection and computer vision. The model is trained on labeled image data, and its performance is assessed using a precision-recall curve at a confidence threshold of 0.5, achieving a mean average precision (mAP) of 0.873, indicating robust overall performance. The weight estimation procedure combines computer vision techniques to measure food volume using both RGB and depth cameras. Subsequently, density models specific to each food type are applied to estimate the detected food weight. The estimation model's parameters are calibrated through experiments that generate volume-to-weight conversion tables for different food items. Validation of the system was conducted using rice and chicken, yielding error margins of 5.07% and 3.75%, respectively, demonstrating the feasibility and accuracy of the proposed method. |
doi_str_mv | 10.3390/s24237660 |
format | article |
fulltext | fulltext |
identifier | ISSN: 1424-8220; EISSN: 1424-8220; DOI: 10.3390/s24237660; PMID: 39686196 |
ispartof | Sensors (Basel, Switzerland), 2024-11, Vol.24 (23), p.7660 |
issn | 1424-8220 1424-8220 |
language | eng |
recordid | cdi_doaj_primary_oai_doaj_org_article_d5e4cd1d9dd24768835be38b9ec68af4 |
source | Publicly Available Content Database; PubMed Central |
subjects | Algorithms; Artificial Intelligence; Automation; Cameras; Computer vision; Deep Learning; Food; Food habits; Food service; food weight estimation; Humans; Identification; Image Processing, Computer-Assisted - methods; Ingredients; Machine vision; Restaurants |
title | Automated Food Weight and Content Estimation Using Computer Vision and AI Algorithms |
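The weight-estimation step described in the abstract measures food volume with RGB and depth cameras and then applies a per-food density model (a volume-to-weight conversion table) to obtain the weight, validating the result against a scale reading. The following is a minimal sketch of that conversion and of the percent-error check; the density values and portion numbers are placeholders, not the paper's calibrated tables.

```python
# Minimal sketch (assumed values, not the paper's calibration): convert an
# estimated food volume from the RGB-D measurement into a weight via a
# per-food density factor, then compute the percent error against a scale.

DENSITY_G_PER_CM3 = {   # hypothetical volume-to-weight conversion factors
    "rice": 0.80,
    "chicken": 1.05,
}

def estimate_weight_g(food: str, volume_cm3: float) -> float:
    """Weight estimate = measured volume x calibrated density for that food."""
    return volume_cm3 * DENSITY_G_PER_CM3[food]

def percent_error(estimated_g: float, measured_g: float) -> float:
    """Error margin of the kind reported in the validation (5.07% rice, 3.75% chicken)."""
    return abs(estimated_g - measured_g) / measured_g * 100.0

# Example with made-up numbers: a 250 cm^3 rice portion that a scale puts at 190 g.
est = estimate_weight_g("rice", 250.0)
print(f"estimated {est:.0f} g, error {percent_error(est, 190.0):.2f}%")
```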