Wednesday, April 7, 2021

MapReduce Overview - with Example and Architecture (an analysis and interpretation)

 


MapReduce Overview

 

* MapReduce is a parallel programming framework for speeding up large-scale data processing tasks

 

* MapReduce achieves its performance by minimising the movement of data across the distributed file system of a Hadoop cluster: computation is sent to the nodes where the data already resides, so results are produced quickly

 

* There are two major pre-requisites for MapReduce programming

(1) The application must lend itself to parallel programming

(2) The data for the application can be expressed in terms of key-value pairs (illustrated below)
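
A minimal sketch of the second pre-requisite in Python (the representation chosen here is illustrative, not prescribed by MapReduce itself):

    # A record can be keyed by its position in the file: (line_offset, line_text)
    record = (0, "I stay at Wonderville in the city of Gods")

    # For a word count, each word becomes a key with the count 1 as its value
    pairs = [(word, 1) for word in record[1].split()]
    # [('I', 1), ('stay', 1), ('at', 1), ...]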

 

* MapReduce processing is similar to a UNIX pipeline, in which commands are connected by pipes so that the output of one command becomes the input of the next, e.g. the UNIX command:

 

grep -oE '\w+' textfile.txt | sort | uniq -c

 

The command above produces a "wordcount" of the input text file "textfile.txt"

 

* There are three stages in the sequence and they work as follows (a Python emulation is sketched after this list)

(a) grep reads the text file and produces an intermediate stream with one word per line

(b) sort works on that intermediate stream and produces an alphabetically sorted list of words

(c) uniq -c plays the role of a "count" command: it works on the sorted list to produce the number of occurrences of each word, presenting the results in a "word-frequency" pair format
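
The same three stages can be emulated in a few lines of Python. This is a minimal sketch (the function names grep_words, sort_words and count_words are illustrative, not part of any standard library):

    import re
    from collections import Counter

    def grep_words(path):
        # "grep" stage: read the text file and emit its words one at a time
        with open(path) as f:
            return re.findall(r"\w+", f.read())

    def sort_words(words):
        # "sort" stage: produce an alphabetically sorted list of the words
        return sorted(words)

    def count_words(sorted_words):
        # "count" (uniq -c) stage: number of occurrences of each word
        return Counter(sorted_words)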

 

For example, suppose a file "file.txt" contains the following text:

" I stay at Wonderville in the city of Gods . I am going to a picnic near our house . Many of our friends are coming too . You are welcome to join us . We will have fun"

  

The outputs of the grep, sort and count stages on this text are then as follows
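
Using the sketch above (and assuming the sample text is saved as "file.txt"), the pipeline can be run end to end; the counts in the comments are computed from the sample sentence itself:

    words = grep_words("file.txt")           # one word per line: I, stay, at, ...
    counts = count_words(sort_words(words))
    # words that occur twice in the sample text: I, of, to, our, are
    # every other word (stay, at, Wonderville, ...) occurs once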


* For the sake of simplicity, a relatively small text file is considered here. Had the text been very large, a single computer would have taken a long time to process it

* To process such a file efficiently, one turns to parallel processing: MapReduce speeds up the computation by having different computers read and process small chunks of the file in parallel.

 

* Applying this logic, the file can be broken into, say, 100 smaller chunks, with each chunk processed on a separate computer in parallel. The total processing time then shrinks to roughly 1/100 of the time a single computer/server/processor would have taken, ignoring the overhead of splitting the file and combining the results. A toy sketch of this idea appears below.
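
A toy illustration of chunk-parallel processing, assuming Python's multiprocessing module (this is a single-machine analogy, not how a real Hadoop cluster schedules its work):

    from multiprocessing import Pool
    from collections import Counter

    def count_chunk(chunk):
        # each worker counts the words in its own chunk of the text
        return Counter(chunk.split())

    def parallel_wordcount(text, n_chunks=4):
        lines = text.splitlines()
        # deal the lines out into n_chunks roughly equal chunks
        chunks = ["\n".join(lines[i::n_chunks]) for i in range(n_chunks)]
        with Pool(n_chunks) as pool:
            partial_counts = pool.map(count_chunk, chunks)  # chunks run in parallel
        total = Counter()
        for partial in partial_counts:
            total.update(partial)  # aggregate partial results into a composite result
        return total

    if __name__ == "__main__":
        print(parallel_wordcount(open("file.txt").read()))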

 

* After the chunks have been processed in parallel at separate nodes/processors, the results of the computations on the smaller chunks are aggregated to produce a composite result. The outputs from the various chunks are combined by another program called the "Reduce" program

 

* The Map step divides the full job into smaller tasks that can be run on separate computers, each using only a part of the data set. The results of the Map step can be considered intermediate results. The Reduce step reads these intermediate results and combines them to produce the aggregated result.
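
Put together, the word-count example can be written in the map/reduce style itself. This is a minimal single-machine sketch of the pattern (the shuffle step, which groups intermediate pairs by key, is simulated here with a dictionary):

    from collections import defaultdict

    def map_step(line):
        # Map: emit an intermediate (word, 1) pair for every word in the line
        return [(word, 1) for word in line.split()]

    def reduce_step(word, counts):
        # Reduce: combine all intermediate values recorded under one key
        return (word, sum(counts))

    def mapreduce_wordcount(lines):
        intermediate = defaultdict(list)
        for line in lines:
            for word, one in map_step(line):
                intermediate[word].append(one)  # shuffle: group pairs by key
        # one reduce call per distinct key produces the aggregated result
        return [reduce_step(word, counts) for word, counts in intermediate.items()]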



 

* A concrete, programmatic breakdown of how the Map and Reduce steps are handled internally is beyond the scope of this article, so the framework-level code behind each step of the process is not shown here.

 

* One can, however, picture the process through the given example: the input data are provided as a text file, the file is broken into individual chunks, and the dataset is sorted (a standard procedure available in all database systems) on one or more fields. The intermediate results must therefore have a key field on which the sorting operation can be performed.

 

* To manage the variety of data structures stored on a file system, data is stored as one key plus non-key attribute values; that is, data is represented as a key-value pair. The intermediate results are likewise stored in key-value pair format.

 

* Hence, one of the most significant things to remember about a MapReduce parallel processing system is that its input data and output data are all represented in the form of key-value pairs

 

* A Map step reads all data in key-value pair format. The programmer responsible for storing and managing the data decides on the characteristics of the key and value fields.

 

* The Map step also produces its results in key-value pair format. However, the keys produced by the Map step need not have the same characteristics as the input keys. The Map output is therefore referred to as key2-value2 pairs

 

* The Reduce step reads the key2-value2 pairs, with all value2s grouped under their key2, and produces an output using those same keys.
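
These key1-value1 and key2-value2 roles can be summarised as function signatures. A minimal sketch using Python type hints (the aliases Mapper and Reducer are illustrative, not from any MapReduce library):

    from typing import Callable, Iterable, Tuple, TypeVar

    K1 = TypeVar("K1"); V1 = TypeVar("V1")  # input (key1, value1) types
    K2 = TypeVar("K2"); V2 = TypeVar("V2")  # intermediate (key2, value2) types
    V3 = TypeVar("V3")                      # output value type

    # Map reads key1-value1 pairs and emits key2-value2 pairs
    Mapper = Callable[[K1, V1], Iterable[Tuple[K2, V2]]]

    # Reduce reads one key2 together with all of its value2s
    # and emits the output recorded under that same key
    Reducer = Callable[[K2, Iterable[V2]], Iterable[Tuple[K2, V3]]]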

 

* The overall flow of the MapReduce process can be summarised in the following manner: input key1-value1 pairs are read by the Map step, which emits intermediate key2-value2 pairs; the intermediate pairs are sorted and grouped by key2; and the Reduce step combines the grouped pairs into the final aggregated output.





 
