
Framework for parallelisation on big data

Bibliographic Details
Published in: PLoS ONE, 2019-05, Vol. 14 (5), p. e0214044
Main Authors: Rahim, Lukman Ab, Kudiri, Krishna Mohan, Bhattacharjee, Shiladitya
Format: Article
Language: English
Summary: The parallelisation of big data is emerging as an important framework for large-scale parallel data applications such as seismic data processing. Seismic datasets are so large and complex that traditional data processing software cannot handle them. For example, implementing parallel processing in seismic applications to improve processing speed is inherently complex. To overcome this issue, a simple technique that provides parallel processing for big data applications such as seismic algorithms is needed. In our framework, we used Apache Hadoop with its MapReduce function. All experiments were conducted on the Red Hat CentOS platform. Finally, we studied the bottlenecks and improved the overall performance of the system for seismic algorithms (stochastic inversion).
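
The summary names Apache Hadoop's MapReduce model as the parallelisation mechanism. As a minimal illustrative sketch only, and not the authors' actual implementation, the following Hadoop MapReduce job averages amplitude samples per seismic trace; the TraceAmplitudeJob class name and the "traceId,amplitude" text input format are hypothetical assumptions made for the example.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class TraceAmplitudeJob {

        // Mapper: each input line is assumed to hold "traceId,amplitude" (hypothetical format).
        public static class TraceMapper extends Mapper<LongWritable, Text, Text, DoubleWritable> {
            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String[] parts = value.toString().split(",");
                if (parts.length == 2) {
                    context.write(new Text(parts[0].trim()),
                                  new DoubleWritable(Double.parseDouble(parts[1].trim())));
                }
            }
        }

        // Reducer: computes the mean amplitude for each trace id.
        public static class MeanReducer extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
            @Override
            protected void reduce(Text key, Iterable<DoubleWritable> values, Context context)
                    throws IOException, InterruptedException {
                double sum = 0.0;
                long count = 0;
                for (DoubleWritable v : values) {
                    sum += v.get();
                    count++;
                }
                context.write(key, new DoubleWritable(sum / count));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "trace-mean-amplitude");
            job.setJarByClass(TraceAmplitudeJob.class);
            job.setMapperClass(TraceMapper.class);
            job.setReducerClass(MeanReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(DoubleWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input directory
            FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output directory
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Such a job would typically be packaged into a jar and launched on the cluster with, for example, "hadoop jar trace.jar TraceAmplitudeJob /input /output", with Hadoop handling the data distribution and parallel execution across nodes.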
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0214044