
UrOAC: Urban objects in any-light conditions

In recent years, several works have addressed urban object detection from the point of view of a person. These works are intended to provide an enhanced understanding of the environment for blind and visually challenged people. The approaches mostly rely on deep learning and machine learning methods.


Bibliographic Details
Published in: Data in brief 2022-06, Vol.42, p.108172-108172, Article 108172
Main Authors: Gomez-Donoso, Francisco, Moreno-Martinez, Marcos, Cazorla, Miguel
Format: Article
Language: English
Subjects:
Citations: Items that this one cites
Online Access: Get full text
cited_by
cites cdi_FETCH-LOGICAL-c432t-7537f152eac3312b1a7c98216d18b0d719a670d004ad16e8af2c83f6d406063c3
container_end_page 108172
container_issue
container_start_page 108172
container_title Data in brief
container_volume 42
creator Gomez-Donoso, Francisco
Moreno-Martinez, Marcos
Cazorla, Miguel
description In recent years, several works have addressed urban object detection from the point of view of a person. These works are intended to provide an enhanced understanding of the environment for blind and visually challenged people. The approaches mostly rely on deep learning and machine learning methods. Nonetheless, these approaches only work with direct and bright light; that is, they will only perform correctly in daylight conditions. This is because deep learning algorithms require large amounts of data, and the currently available datasets do not address this matter. In this work, we propose UrOAC, a dataset of urban objects captured in a range of different lighting conditions, from bright daylight to low and poor night-time lighting. In the latter case, the objects are lit only by low ambient light, street lamps and the headlights of passing vehicles. The dataset depicts the following objects: pedestrian crosswalks, green traffic lights and red traffic lights. The annotations include the category and the bounding box of each object. This dataset could be used to improve the performance at night-time and under low-light conditions of any vision-based method that involves urban objects, for instance, guidance and object detection devices for the visually challenged, or self-driving and intelligent vehicles.
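The description states that each annotation pairs an object category with a bounding box. A minimal sketch of how such an annotation might be represented and consumed is shown below; the record does not specify the UrOAC file format, so the class name, field names and example coordinates are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass

# The three object categories named in the description.
CATEGORIES = {"crosswalk", "green_traffic_light", "red_traffic_light"}


@dataclass
class Annotation:
    """Hypothetical per-object annotation: category plus an
    axis-aligned bounding box in pixel coordinates."""
    category: str
    x_min: int
    y_min: int
    x_max: int
    y_max: int

    def area(self) -> int:
        # Pixel area of the bounding box (0 for degenerate boxes).
        return max(0, self.x_max - self.x_min) * max(0, self.y_max - self.y_min)


# Example: a red traffic light annotated in a night-time frame.
ann = Annotation("red_traffic_light", 120, 40, 160, 130)
assert ann.category in CATEGORIES
print(ann.area())  # 40 * 90 = 3600
```

Such a structure is what a detector's data loader would typically build from the dataset's annotation files, whatever their concrete on-disk format.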
doi_str_mv 10.1016/j.dib.2022.108172
format article
published Netherlands: Elsevier Inc
pmid 35510259
orcidid https://orcid.org/0000-0001-6805-3633
orcidid https://orcid.org/0000-0002-7830-2661
fulltext fulltext
identifier ISSN: 2352-3409
ispartof Data in brief, 2022-06, Vol.42, p.108172-108172, Article 108172
issn 2352-3409
2352-3409
language eng
recordid cdi_doaj_primary_oai_doaj_org_article_c4b40844ecae48ed87d5432a214d1398
source Elsevier ScienceDirect Journals; PubMed Central
subjects Data
data collection
lighting
Low-light conditions
Object recognition
people
solar radiation
traffic
Urban environments
title UrOAC: Urban objects in any-light conditions