
Active Mini-Batch Sampling Using Repulsive Point Processes

The convergence speed of stochastic gradient descent (SGD) can be improved by actively selecting mini-batches. We explore sampling schemes where similar data points are less likely to be selected in the same mini-batch. In particular, we prove that such repulsive sampling schemes lower the variance...

Bibliographic Details
Main Authors: Zhang, Cheng; Öztireli, Cengiz; Mandt, Stephan; Salvi, Giampiero
Format: Conference Proceeding
Language: English
Description: The convergence speed of stochastic gradient descent (SGD) can be improved by actively selecting mini-batches. We explore sampling schemes where similar data points are less likely to be selected in the same mini-batch. In particular, we prove that such repulsive sampling schemes lower the variance of the gradient estimator. This generalizes recent work on using Determinantal Point Processes (DPPs) for mini-batch diversification (Zhang et al., 2017) to the broader class of repulsive point processes. We first show that the phenomenon of variance reduction by diversified sampling generalizes in particular to non-stationary point processes. We then show that other point processes may be computationally much more efficient than DPPs. In particular, we propose and investigate Poisson Disk sampling—frequently encountered in the computer graphics community—for this task. We show empirically that our approach improves over standard SGD both in terms of convergence speed as well as final model performance.
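The Poisson disk idea described in the abstract can be illustrated with a minimal dart-throwing sketch. This is only an illustrative reading of the technique, not the authors' implementation: the function name `poisson_disk_minibatch`, the fixed Euclidean radius, and the naive pairwise rejection test are our own assumptions.

```python
import numpy as np

def poisson_disk_minibatch(X, batch_size, radius, rng=None, max_tries=1000):
    """Draw a mini-batch by dart throwing: a candidate index is accepted
    only if its point lies at least `radius` (Euclidean) away from every
    point already in the batch, so similar points rarely co-occur."""
    rng = np.random.default_rng(rng)
    chosen = []
    tries = 0
    while len(chosen) < batch_size and tries < max_tries:
        i = rng.integers(len(X))
        # Reject candidates that fall inside the exclusion disk of any
        # already-chosen point (this also rejects exact duplicates).
        if all(np.linalg.norm(X[i] - X[j]) >= radius for j in chosen):
            chosen.append(i)
        tries += 1
    # Fallback: top up uniformly if the radius was too strict to fill the batch.
    while len(chosen) < batch_size:
        chosen.append(rng.integers(len(X)))
    return chosen
```

With `radius = 0` every candidate is accepted and the scheme degenerates to uniform sampling with replacement; increasing the radius strengthens the repulsion, which is the mechanism the paper credits for the reduced gradient-estimator variance.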
DOI: 10.1609/aaai.v33i01.33015741
Published in: Proceedings of the ... AAAI Conference on Artificial Intelligence, 2019, Vol. 33 (1), pp. 5741-5748
ISSN: 2159-5399
EISSN: 2374-3468