
SIMD compression and the intersection of sorted integers

Bibliographic Details
Published in: Software: Practice and Experience, 2016-06, Vol. 46 (6), p. 723-749
Main Authors: Lemire, Daniel; Boytsov, Leonid; Kurz, Nathan
Format: Article
Language: English
Description
Summary: Sorted lists of integers are commonly used in inverted indexes and database systems. They are often compressed in memory. We can use the single‐instruction, multiple data (SIMD) instructions available in common processors to boost the speed of integer compression schemes. Our S4‐BP128‐D4 scheme uses as little as 0.7 CPU cycles per decoded 32‐bit integer while still providing state‐of‐the‐art compression. However, if the subsequent processing of the integers is slow, the effort spent on optimizing decompression speed can be wasted. To show that it does not have to be so, we (1) vectorize and optimize the intersection of posting lists; (2) introduce the SIMD GALLOPING algorithm. We exploit the fact that one SIMD instruction can compare four pairs of 32‐bit integers at once. We experiment with two Text REtrieval Conference (TREC) text collections, GOV2 and ClueWeb09 (category B), using logs from the TREC million‐query track. We show that using only the SIMD instructions ubiquitous in all modern CPUs, our techniques for conjunctive queries can double the speed of a state‐of‐the‐art approach. Copyright © 2015 John Wiley & Sons, Ltd.
ISSN: 0038-0644, 1097-024X
DOI: 10.1002/spe.2326
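
Illustrative sketch: the abstract's key observation is that one SIMD instruction can compare four pairs of 32-bit integers at once, the building block behind the vectorized posting-list intersection and the SIMD GALLOPING algorithm it describes. The following minimal C program is not the paper's implementation; it is a sketch assuming an x86-64 CPU with SSE2, using hypothetical sample data.

    #include <emmintrin.h>  /* SSE2 intrinsics */
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Hypothetical sample: four elements from each of two sorted posting lists. */
        uint32_t a[4] = {3, 7, 11, 42};
        uint32_t b[4] = {3, 8, 11, 40};

        __m128i va = _mm_loadu_si128((const __m128i *)a);
        __m128i vb = _mm_loadu_si128((const __m128i *)b);

        /* One instruction compares all four 32-bit lanes for equality. */
        __m128i eq = _mm_cmpeq_epi32(va, vb);

        /* Collapse the per-lane results into a 4-bit mask: bit i set when a[i] == b[i]. */
        int mask = _mm_movemask_ps(_mm_castsi128_ps(eq));
        printf("match mask: 0x%x\n", mask);  /* prints 0x5 here: lanes 0 and 2 match */
        return 0;
    }

Compiled with, for example, gcc -msse2, this does in one compare instruction what scalar code would need four separate comparisons and branches to achieve, which is why the intersection techniques in the article can keep pace with the fast S4-BP128-D4 decoder.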