Job schedulers for Big data processing in Hadoop environment: testing real-life schedulers using benchmark programs
- Source :
- Digital Communications and Networks, Vol 3, Iss 4, Pp 260-273 (2017)
- Publication Year :
- 2017
- Publisher :
- KeAi Communications Co., Ltd., 2017.
Abstract
- Big data is currently very popular because it has proved highly successful in many fields, such as social media and e-commerce transactions. The term describes the tools and technologies needed to capture, manage, store, distribute, and analyze petabyte-scale or larger datasets of varied structure at high speed. Big data can be structured, unstructured, or semi-structured. Hadoop is an open-source framework used to process large amounts of data in an inexpensive and efficient way, and job scheduling is a key factor in achieving high performance in big data processing. This paper gives an overview of big data and highlights its problems and challenges. It then describes the Hadoop Distributed File System (HDFS), Hadoop MapReduce, and the components that affect the performance of job scheduling algorithms in big data, such as the JobTracker, TaskTracker, NameNode, and DataNode. The primary purpose of this paper is to present a comparative study of job scheduling algorithms, along with their experimental results, in a Hadoop environment. In addition, the paper describes the advantages, disadvantages, and distinctive features of various Hadoop job schedulers, such as FIFO, Fair, Capacity, Deadline Constraints, Delay, LATE, and Resource Aware, and provides a comparative study among these schedulers.
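The abstract contrasts scheduler families such as FIFO and Fair. As a rough intuition for why this choice matters, the toy single-slot simulation below (illustrative only, not Hadoop code; job names and lengths are invented) compares strict first-in-first-out execution with a Fair-style round-robin: under FIFO a short job stuck behind a long one finishes late, while round-robin lets it complete early.

```python
# Toy comparison of FIFO vs Fair-style scheduling on one execution slot.
# Each job is a (name, length) pair; lengths are abstract time units.

def fifo(jobs):
    """Run jobs back-to-back in submission order; return each job's completion time."""
    t, done = 0, {}
    for name, length in jobs:
        t += length
        done[name] = t
    return done

def fair(jobs, quantum=1):
    """Round-robin one quantum at a time across all pending jobs (Fair-style sharing)."""
    remaining = dict(jobs)
    t, done = 0, {}
    while remaining:
        for name in list(remaining):
            step = min(quantum, remaining[name])
            t += step
            remaining[name] -= step
            if remaining[name] == 0:
                done[name] = t
                del remaining[name]
    return done

jobs = [("big", 4), ("small", 1)]
print(fifo(jobs))  # {'big': 4, 'small': 5} -- small job waits behind big one
print(fair(jobs))  # {'small': 2, 'big': 5} -- small job finishes early
```

Both policies finish all work at the same total time; the difference is per-job latency, which is exactly the trade-off the surveyed Hadoop schedulers negotiate.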
Details
- Language :
- English
- ISSN :
- 23528648
- Volume :
- 3
- Issue :
- 4
- Database :
- Directory of Open Access Journals
- Journal :
- Digital Communications and Networks
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.1d1a08565cfb40ed89038b37b8664ce8
- Document Type :
- article
- Full Text :
- https://doi.org/10.1016/j.dcan.2017.07.008