Talend - Working with Pig
In this chapter, let us learn how to work with a Pig job in Talend.
Creating a Talend Pig Job
In this section, let us learn how to run a Pig job on Talend. Here, we will process NYSE data to find the average stock volume of IBM.
For this, right-click Job Design and create a new job named pigjob. Mention the details of the job and click Finish.
Adding Components to Pig Job
To add components to the Pig job, drag and drop four Talend components from the palette to the designer window: tPigLoad, tPigFilterRow, tPigAggregate and tPigStoreResult.
Then, right-click tPigLoad and create a Pig Combine line to tPigFilterRow. Next, right-click tPigFilterRow and create a Pig Combine line to tPigAggregate. Finally, right-click tPigAggregate and create a Pig Combine line to tPigStoreResult.
Configuring Components and Transformations
In tPigLoad, mention the distribution as Cloudera and its version. Note that the NameNode URI should be “hdfs://quickstart.cloudera:8020” and the Resource Manager should be “quickstart.cloudera:8032”. Also, the username should be “cloudera”.
In the Input file URI, give the path of your NYSE input file for the Pig job. Note that this input file should be present on HDFS.
Click Edit schema and add the columns and their types as shown below.
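Since the schema screenshot is not reproduced here, the following is a minimal sketch of the Pig Latin LOAD statement this configuration corresponds to. It assumes a comma-separated NYSE file at /user/cloudera/NYSE.csv with the columns stock_exchange, stock_symbol, date and stock_volume; the path, separator and exact column list are illustrative, so adjust them to match your own file and schema.

   -- tPigLoad: read the NYSE file from HDFS with the declared schema
   nyse = LOAD '/user/cloudera/NYSE.csv' USING PigStorage(',')
          AS (stock_exchange:chararray, stock_symbol:chararray,
              date:chararray, stock_volume:long);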
In tPigFilterRow, select the “Use advanced filter” option and put “stock_symbol == 'IBM'” in the Filter option.
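This advanced filter is equivalent to the following Pig Latin statement (the relation names carry over from the load sketch above):

   -- tPigFilterRow: keep only the IBM rows
   ibm = FILTER nyse BY stock_symbol == 'IBM';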
In tPigAggregate, click Edit schema and add the avg_stock_volume column in the output as shown below.
Now, put the stock_exchange column in the Group by option. Add the avg_stock_volume column in the Operations field with avg as the Function and stock_volume as the Input Column.
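In Pig Latin terms, this grouping and aggregation step is roughly the following (a sketch continuing the relations above, not the exact code Talend generates):

   -- tPigAggregate: group by exchange and average the stock volume
   grouped = GROUP ibm BY stock_exchange;
   result  = FOREACH grouped GENERATE group AS stock_exchange,
             AVG(ibm.stock_volume) AS avg_stock_volume;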
In tPigStoreResult, give the output path in the Result Folder URI where you want to store the result of the Pig job. Select the store function as PigStorage and the field separator (not mandatory) as “ ”.
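The store step corresponds to a Pig Latin STORE statement; a sketch assuming an illustrative output folder /user/cloudera/pigjob_result and the space separator chosen above:

   -- tPigStoreResult: write the aggregated rows back to HDFS
   STORE result INTO '/user/cloudera/pigjob_result' USING PigStorage(' ');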
Executing the Pig Job
Now click Run to execute your Pig job. (Ignore the warnings.)
Once the job finishes, check your output at the HDFS path you mentioned for storing the Pig job result. The average stock volume of IBM is 500.
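To inspect the result, you can read the part file that Pig writes into the result folder, for example from the Pig grunt shell (the folder and the part file name are illustrative; list the folder first if the name differs):

   grunt> fs -cat /user/cloudera/pigjob_result/part-r-00000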