A speculative approach to parallelization in particle swarm optimization

Bibliographic Details
Published in: Swarm Intelligence 2012-06, Vol. 6 (2), p. 77-116
Main Authors: Gardner, Matthew, McNabb, Andrew, Seppi, Kevin
Format: Article
Language:English
Description
Summary: Particle swarm optimization (PSO) has previously been parallelized primarily by distributing the computation corresponding to particles across multiple processors. In these approaches, the only benefit of additional processors is an increased swarm size. However, in many cases this is not efficient when scaled to very large swarm sizes (on very large clusters). Current methods cannot adequately answer the question: “How can 1000 processors be fully utilized when 50 or 100 particles is the most efficient swarm size?” In this paper we attempt to answer that question with a speculative approach to the parallelization of PSO that we refer to as SEPSO. In our approach, we refactor PSO such that the computation needed for iteration t+1 can be done concurrently with the computation needed for iteration t. Thus we can perform two iterations of PSO at once. Even with some amount of wasted computation, we show that this approach to parallelization in PSO often outperforms the standard parallelization of simply adding particles to the swarm. SEPSO produces results that are exactly equivalent to PSO; that is, SEPSO is a new method of parallelization and not a new PSO algorithm or variant. However, given this new parallelization model, we can relax the requirement of exactly reproducing PSO in an attempt to produce better results. We present several such relaxations, including keeping the best speculative position evaluated instead of the one corresponding to the standard behavior of PSO, and speculating several iterations ahead instead of just one. We show that these methods dramatically improve the performance of parallel PSO in many cases, giving speedups of up to six compared to previous parallelization techniques.
ISSN: 1935-3812; 1935-3820
DOI: 10.1007/s11721-011-0066-8
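
The abstract describes the speculative decomposition only at a high level, so a concrete sketch may help: before iteration t's evaluations are known, each particle's next update has only a small, enumerable set of possible outcomes (its personal best either stays or becomes its new position, and its neighborhood best either stays or becomes a neighbor's new position), so the iteration t+1 position for every outcome can be computed up front and evaluated concurrently with iteration t itself. The Python below is a minimal reconstruction of that idea under simplifying assumptions (a ring topology, a toy sphere objective, one iteration of speculation); the names step and update_bests are illustrative, not the authors' code, and the thread pool merely stands in for distributing evaluations across cluster processors.

import numpy as np
from concurrent.futures import ThreadPoolExecutor

OMEGA, PHI_P, PHI_N = 0.729, 1.49, 1.49   # constriction-style coefficients
SWARM, DIM, ROUNDS = 20, 10, 25           # each round covers two PSO iterations

rng = np.random.default_rng(0)

def sphere(x):
    # Placeholder objective; in practice this is the expensive evaluation.
    return float(np.sum(x * x))

def neighbors(i):
    # Ring topology: each particle informs itself and its two neighbors.
    return [(i - 1) % SWARM, i, (i + 1) % SWARM]

def step(x, v, pb, nb, rp, rn):
    # One PSO update with random draws fixed in advance, so all speculative
    # branches of a particle share the same randomness and reproduce PSO.
    v_new = OMEGA * v + PHI_P * rp * (pb - x) + PHI_N * rn * (nb - x)
    return x + v_new, v_new

x = rng.uniform(-5, 5, (SWARM, DIM))
v = rng.uniform(-1, 1, (SWARM, DIM))
pbest, pval = x.copy(), np.full(SWARM, np.inf)
nbest = x.copy()

def update_bests(positions, fvals):
    # Synchronous personal-best and neighborhood-best bookkeeping.
    for i in range(SWARM):
        if fvals[i] < pval[i]:
            pval[i] = fvals[i]
            pbest[i] = positions[i].copy()
    for i in range(SWARM):
        j = min(neighbors(i), key=lambda k: pval[k])
        nbest[i] = pbest[j].copy()

with ThreadPoolExecutor() as pool:
    for _ in range(ROUNDS):
        rp, rn = rng.random((SWARM, DIM)), rng.random((SWARM, DIM))
        # Enumerate the possible outcomes of evaluating x at iteration t:
        # one speculative child per (pbest, nbest) hypothesis.
        branches = []
        for i in range(SWARM):
            for pb in (pbest[i], x[i]):
                for nb in [nbest[i]] + [x[j] for j in neighbors(i)]:
                    cx, cv = step(x[i], v[i], pb, nb, rp[i], rn[i])
                    branches.append((i, pb.copy(), nb.copy(), cx, cv))
        # Evaluate iteration t and every speculative t+1 child in one batch;
        # this is where the extra processors are put to work.
        evals = list(pool.map(sphere, list(x) + [b[3] for b in branches]))
        fx, fc = evals[:SWARM], evals[SWARM:]

        update_bests(x, fx)                 # bookkeeping for iteration t
        # Keep, per particle, the first child whose hypothesis came true;
        # its evaluation is already in hand, the rest are discarded.
        fx2 = [None] * SWARM
        for (i, pb, nb, cx, cv), f in zip(branches, fc):
            if (fx2[i] is None and np.array_equal(pb, pbest[i])
                    and np.array_equal(nb, nbest[i])):
                x[i], v[i], fx2[i] = cx, cv, f

        update_bests(x, fx2)                # bookkeeping for iteration t+1
        # With the t+1 evaluations known, the move to t+2 needs no speculation.
        rp2, rn2 = rng.random((SWARM, DIM)), rng.random((SWARM, DIM))
        for i in range(SWARM):
            x[i], v[i] = step(x[i], v[i], pbest[i], nbest[i], rp2[i], rn2[i])

print("best value found:", pval.min())

Each round evaluates iteration t and all of its speculative children in a single batch, so two iterations of bookkeeping complete per round of evaluation time; the evaluations of the unmatched children are the wasted computation that the abstract weighs against this gain.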