
OpenEDS: Open Eye Dataset

We present a large-scale dataset, OpenEDS: Open Eye Dataset, of eye images captured using a virtual-reality (VR) head-mounted display fitted with two synchronized eye-facing cameras at a frame rate of 200 Hz under controlled illumination. The dataset is compiled from video capture of the eye region collected from 152 individual participants and is divided into four subsets: (i) 12,759 images with pixel-level annotations for key eye regions (iris, pupil and sclera); (ii) 252,690 unlabelled eye images; (iii) 91,200 frames from randomly selected video sequences of 1.5 seconds in duration; and (iv) 143 pairs of left and right point-cloud data compiled from corneal topography of the eye region, collected from a subset of 143 of the 152 participants. A baseline experiment on OpenEDS for the task of semantic segmentation of the pupil, iris, sclera and background achieves a mean intersection-over-union (mIoU) of 98.3%. We anticipate that OpenEDS will create opportunities for researchers in the eye-tracking community and the broader machine-learning and computer-vision communities to advance the state of eye tracking for VR applications. The dataset is available for download upon request at https://research.fb.com/programs/openeds-challenge
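The segmentation baseline above is scored with mean intersection-over-union (mIoU), averaged over the four classes (pupil, iris, sclera, background). As an illustrative sketch of how that metric is typically computed from label maps (this is not the authors' evaluation code, and the `mean_iou` helper below is a hypothetical name):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union across classes.

    pred, gt: integer label maps of the same shape, values in [0, num_classes).
    Classes absent from both prediction and ground truth are skipped.
    """
    ious = []
    for c in range(num_classes):
        p = pred == c
        g = gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue  # class appears in neither map; skip it
        inter = np.logical_and(p, g).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 "segmentation" with classes 0 (background) and 1 (pupil):
pred = np.array([[0, 1], [1, 1]])
gt   = np.array([[0, 1], [0, 1]])
# class 0: inter=1, union=2 -> 0.5; class 1: inter=2, union=3 -> 2/3
print(mean_iou(pred, gt, 2))  # ≈ 0.583
```

Per-class IoU penalizes both false positives (pixels added to a class) and false negatives (pixels missed), which is why a 98.3% mIoU over four classes indicates near-pixel-perfect masks.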

Bibliographic Details
Published in: arXiv.org, 2019-05
Main Authors: Garbin, Stephan J; Shen, Yiru; Schuetz, Immo; Cavin, Robert; Hughes, Gregory; Talathi, Sachin S
Format: Article
Language: English
EISSN: 2331-8422
Subjects: Communities; Computer vision; Datasets; Downloading; Helmet mounted displays; Image annotation; Image segmentation; Machine learning; Tracking; Virtual reality
Online Access: Get full text