Explain Hadoop Apache Pig architecture

HDFS is a distributed file system for storing very large data files on clusters of commodity hardware. It is fault tolerant, scalable, and easy to extend. HDFS (Hadoop Distributed File System) comes bundled with Hadoop.

When a dataset outgrows the storage capacity of a single physical machine, it becomes necessary to split it across a number of separate machines. A file system that manages storage across a network of machines is called a distributed file system, and HDFS is one such system. For more detail, go through a Big Data and Hadoop course.

Apache Pig

Pig is a high-level programming language useful for analyzing large data sets. Pig was the result of a development effort at Yahoo!.

In a MapReduce framework, programs need to be translated into a series of Map and Reduce stages. However, this is not a programming model that data analysts are familiar with. So, in order to bridge this gap, an abstraction called Pig was built on top of Hadoop.

Apache Pig lets people concentrate more on analyzing bulk data sets and spend less time writing Map-Reduce programs. Similar to pigs, which eat anything, the Pig programming language is designed to work on any kind of data. That's the reason for the name, Pig!

Pig Architecture in Hadoop

Pig consists of two components:

  • Pig Latin, the language used to express data flows.

  • A runtime environment for executing Pig Latin programs.

Pig Latin in Hadoop

A Pig Latin program consists of a series of operations, or transformations, applied to the input data to produce output. These operations describe a data flow, which the Pig execution environment translates into an executable representation. Underneath, these transformations produce a series of MapReduce jobs, of which the programmer is unaware. So, in a sense, Pig lets the programmer focus on the data rather than on the details of execution.

Pig Latin is a fairly rigid language that uses familiar data-processing keywords such as Join, Group, and Filter.
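As a small illustration of such a data flow, the sketch below loads a file, filters it, groups it, and counts the rows in each group. The file name and field list are assumptions made only for this example, not part of this tutorial's data set:

sales = LOAD '/data/sales.csv' USING PigStorage(',') AS (product:chararray, country:chararray, price:double);
costly = FILTER sales BY price > 100.0;  -- keep only the expensive sales
byCountry = GROUP costly BY country;  -- group the remaining rows by country
summary = FOREACH byCountry GENERATE group, COUNT(costly);  -- count the sales per country
DUMP summary;  -- running the flow is what triggers the underlying MapReduce jobs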


Pig Architecture: Modes of Execution

Pig has two modes of execution.

  • Local mode: In this mode, Pig runs in a single JVM and uses the local file system. This mode is suitable only for analyzing small data sets with Pig.

  • MapReduce mode: In this mode, Pig runs on a fully distributed Hadoop cluster, which is useful for running Pig on large data sets. The mode is chosen when Pig is started, as shown below.
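Assuming the pig binary is on your PATH, the execution mode is normally selected with the -x flag when starting the Grunt shell:

pig -x local  # run against the local file system in a single JVM
pig -x mapreduce  # run against the Hadoop cluster (also the default when you simply type 'pig')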

Read Operation in HDFS

A data read request is served by HDFS, the NameNode, and the DataNodes. Let's call the reader a 'client'. The steps below describe the process of reading a file in Hadoop.

  • A client initiates a read request by calling the 'open()' method of the FileSystem object; this is an object of type DistributedFileSystem.

  • This object connects to the NameNode using RPC and obtains metadata information such as the locations of the blocks of the file. Please note that these addresses are for the first few blocks of the file.

  • In response to this metadata request, the addresses of the DataNodes holding a copy of each block are returned.

  • Once the DataNode addresses are received, an object of type FSDataInputStream is returned to the client. The client then invokes the 'read()' method, which causes the underlying DFSInputStream to establish a connection with the first DataNode holding the first block of the file.

  • Data is read in the form of streams, with the client invoking the 'read()' method repeatedly. This process continues until the end of the block is reached.

  • Once the end of a block is reached, DFSInputStream closes that connection and locates the next DataNode for the next block.

  • Once the client has finished reading, it calls the close() method.
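Every HDFS client goes through this same read path. As a quick illustration, the command-line cat below (the file path here is only an assumption) triggers exactly the open/read/close sequence described above:

$HADOOP_HOME/bin/hdfs dfs -cat /user/hduser/sample.txt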

How to Download and Install Pig

Switch to the user 'hduser' (or to whichever user ID you used during your Hadoop configuration). Before proceeding, make sure Hadoop is already installed.

Step 1) 

Download the latest Pig stable release from any of the mirror pages available.

Choose the tar.gz (not src.tar.gz) file you want to use.

Step 2)

Once the download is complete, navigate to the directory containing the downloaded tar file and move the tar to the location where you want Pig to be set up. In this case, we will use /usr/local.

Move to the directory containing the Pig files:

cd /usr/local

Extract the contents of the tar file as shown below:

sudo tar -xvf pig-0.12.1.tar.gz

Step 3)

Modify ~/.bashrc to add environment variables linked to Pig.

Open the ~/.bashrc file in any text editor of your choice and make the changes below.

export PIG_HOME=<Pig installation directory>

export PATH=$PIG_HOME/bin:$HADOOP_HOME/bin:$PATH
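For example, if Pig was extracted into /usr/local as pig-0.12.1 in Step 2 above, these lines would look roughly like the following (adjust the paths to match your own setup):

export PIG_HOME=/usr/local/pig-0.12.1
export PATH=$PIG_HOME/bin:$HADOOP_HOME/bin:$PATH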

Step 4)

Now source this environment configuration using the command below:

. ~/.bashrc

Step 5)

To support Hadoop 2.2.0, we need to recompile Pig.

To do this, here are the steps.

Go to the Pig home directory:

cd $PIG_HOME

Install Ant:

sudo apt-get install ant

Recompile Pig:

sudo ant clean jar-all -Dhadoopversion=23

Please note that multiple components are downloaded during this recompilation process, so your system should be connected to the internet.

Also, if the process gets stuck and you see no movement on the command prompt for more than 20 minutes, press Ctrl + C and re-run the same command.

In our case, it took 20 minutes.

Step 6) 

Use the command below to verify the Pig installation:

pig -help

Example Script for Pig

We will use Pig to find the number of products sold in each country.

Input: Our input data set is a CSV file, SalesJan2009.csv.

Step 1) 

Launch Hadoop

$HADOOP_HOME/sbin/start-dfs.sh

$HADOOP_HOME/sbin/start-yarn.sh

Step 2)

In Hadoop MapReduce mode, Pig can take a file from HDFS and save the results back to HDFS.

Copy the file SalesJan2009.csv (stored on the local file system at ~/input/SalesJan2009.csv) to the HDFS home directory. Here the file is in the 'input' folder; if your file is stored in a different location, specify that path instead.

$HADOOP_HOME/bin/hdfs dfs -copyFromLocal ~/input/SalesJan2009.csv /

Verify that the file has actually been copied:

$HADOOP_HOME/bin/hdfs dfs -ls /

Step 3)

Configuration of Pig

First, navigate to $PIG_HOME/conf:

cd $PIG_HOME/conf

sudo cp pig.properties pig.properties.original

Open pig.properties using the text editor of your choice and specify a log file path using pig.logfile:

sudo gedit pig.properties


The logger will use this file to record errors.
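For example, adding a line such as the following (the exact path is only an assumption for illustration) tells Pig where to write its log:

pig.logfile=/home/hduser/pig_error.log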

Step 4) 

Run the 'pig' command to start the Pig command prompt, which is an interactive shell for Pig queries (the prompt appears as grunt>):

pig

Step 5)

In the Grunt command prompt for Pig, execute the Pig commands below in order.

A. Load the file containing the data.
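A LOAD statement along the following lines does this; the field list is an assumption based on the SalesJan2009.csv header and may need adjusting to your copy of the file:

salesTable = LOAD '/SalesJan2009.csv' USING PigStorage(',') AS (Transaction_date:chararray, Product:chararray, Price:chararray, Payment_Type:chararray, Name:chararray, City:chararray, State:chararray, Country:chararray, Account_Created:chararray, Last_Login:chararray, Latitude:chararray, Longitude:chararray);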

Press Enter after this command.

B. Group the data by the field Country.

GroupByCountry = GROUP salesTable BY Country;

C. For each tuple in 'GroupByCountry', generate the resulting string of the form: Country name: No. of products sold. (In the statement below, $0 refers to the group key, i.e. the country name, and $1 to the bag of grouped records, so COUNT($1) gives the number of products sold for that country.)

CountByCountry = FOREACH GroupByCountry GENERATE CONCAT((chararray)$0, CONCAT(':', (chararray)COUNT($1)));

Press Enter after this command.

D. Store the results of the data flow in the directory 'pig_output_sales' on HDFS.

STORE CountByCountry INTO 'pig_output_sales' USING PigStorage('\t');

This command will take some time to execute. Once it is done, the results have been written to HDFS.

Step 6) 

The result can be seen through the command interface as follows:

$HADOOP_HOME/bin/hdfs dfs -cat pig_output_sales/part-r-00000

It is also possible to see the results through the web interface:

Open http://localhost:50070/ in a web browser.

Now select 'Browse the filesystem' and navigate to /user/hduser/pig_output_sales.

Open part-r-00000.


Conclusion

I hope this gives you a clear understanding of the Apache Pig architecture. You can learn more through Big Data Hadoop training.

