Lattice Boltzmann Simulations of Cavity Flows on Graphic Processing Unit with Memory Management

Bibliographic Details
Published in: Journal of Mechanics, 2017-12, Vol. 33 (6), p. 863-871
Main Authors: Hong, P. Y., Huang, L. M., Chang, C. Y., Lin, C. A.
Format: Article
Language:English
Description
Summary: The lattice Boltzmann method (LBM) is adopted to compute two- and three-dimensional lid-driven cavity flows in order to examine the influence of memory management on computational performance on Graphics Processing Units (GPUs). Both single-relaxation-time (SRT) and multi-relaxation-time (MRT) LBM are adopted. The computations are conducted on NVIDIA GeForce Titan, Tesla C2050 and GeForce GTX 560 Ti devices. Performance using global memory deteriorates greatly when MRT LBM is used, because the scheme requests more information from global memory than its SRT counterpart. With on-chip memory, by contrast, the difference between MRT and SRT is not significant. Furthermore, the LBM streaming procedure using offset reading outperforms offset writing by 50% to 100%, and this holds for both SRT and MRT LBM. Finally, comparisons across GPU platforms indicate that the Titan, as expected, outperforms the other devices: for three-dimensional cavity flow simulations it attains speedups of 227 (single precision) and 193 (double precision) over its Intel Core i7-990 CPU counterpart, and is about four times faster than the GTX 560 Ti and Tesla C2050.
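The "offset reading" versus "offset writing" comparison in the abstract refers to the two standard ways of implementing the LBM streaming step: the pull scheme (each site reads the distribution arriving from its upstream neighbor) and the push scheme (each site writes its distribution to the downstream neighbor). The sketch below is a minimal one-dimensional illustration of the two layouts, not the paper's implementation; the array size and variable names are chosen for this example only. On a GPU, the pull scheme lets each thread store to its own aligned address while the offset appears only in the load, which is the pattern the paper finds faster.

```python
import numpy as np

N = 8                           # toy lattice: 8 sites, periodic boundary
f = np.arange(N, dtype=float)   # one distribution component moving in +x

# Push scheme (offset writing): site i writes its value downstream to i+1.
# On a GPU this scatters the stores, so they may not coalesce.
push = np.empty_like(f)
for i in range(N):
    push[(i + 1) % N] = f[i]

# Pull scheme (offset reading): site i reads the value arriving from i-1.
# On a GPU each thread stores to its own index; only the load is offset.
pull = np.empty_like(f)
for i in range(N):
    pull[i] = f[(i - 1) % N]

# Both layouts realize the same streaming step; they differ only in the
# memory-access pattern, which is what drives the measured performance gap.
assert np.array_equal(push, pull)
```

The two loops compute identical results, so the choice between them is purely a memory-layout decision; the 50%-100% advantage reported for offset reading comes from how the GPU hardware coalesces the resulting loads and stores.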
ISSN:1727-7191
1811-8216
DOI:10.1017/jmech.2017.70