PySpark - RDD
Now that we have installed and configured PySpark on our system, we can program in Python on Apache Spark. However, before doing so, let us understand a fundamental concept in Spark - RDD.
RDD stands for Resilient Distributed Dataset; these are the elements that run and operate on multiple nodes to do parallel processing on a cluster. RDDs are immutable elements, which means that once you create an RDD, you cannot change it. RDDs are also fault tolerant, hence in case of any failure, they recover automatically. You can apply multiple operations on these RDDs to achieve a certain task.
To apply operations on these RDDs, there are two ways −
Transformation and
Action
Let us understand these two ways in detail.
Transformation − These are the operations which are applied on an RDD to create a new RDD. filter, groupBy and map are examples of transformations.
Action − These are the operations that are applied on an RDD and instruct Spark to perform the computation and send the result back to the driver.
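To make the distinction concrete, here is a minimal sketch (a hypothetical file, lazy_vs_action.py, not part of the original examples). The transformation alone only builds a new RDD; the action triggers the actual computation and returns the result to the driver.
----------------------------------------lazy_vs_action.py---------------------------------------
from pyspark import SparkContext

sc = SparkContext("local", "Lazy vs Action app")
nums = sc.parallelize([1, 2, 3, 4, 5])

# Transformation: returns a new RDD; nothing is computed yet
squares = nums.map(lambda x: x * x)

# Action: forces evaluation and brings the result back to the driver
print("Squares -> %s" % squares.collect())
----------------------------------------lazy_vs_action.py---------------------------------------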
To apply any operation in PySpark, we need to create a PySpark RDD first. The following code block shows the signature of the PySpark RDD class −
class pyspark.RDD (
   jrdd,
   ctx,
   jrdd_deserializer = AutoBatchedSerializer(PickleSerializer())
)
Let us see how to run a few basic operations using PySpark. The following code creates an RDD named words, which stores the set of words mentioned below (here, sc is an existing SparkContext, such as the one provided by the PySpark shell).
words = sc.parallelize(
   ["scala", "java", "hadoop", "spark", "akka",
    "spark vs hadoop", "pyspark", "pyspark and spark"]
)
We will now run a few operations on words.
count()
Number of elements in the RDD is returned.
----------------------------------------count.py---------------------------------------
from pyspark import SparkContext

sc = SparkContext("local", "count app")
words = sc.parallelize(
   ["scala", "java", "hadoop", "spark", "akka",
    "spark vs hadoop", "pyspark", "pyspark and spark"]
)
counts = words.count()
print("Number of elements in RDD -> %i" % counts)
----------------------------------------count.py---------------------------------------
Command − The command for count() is −
$SPARK_HOME/bin/spark-submit count.py
Output − The output for the above command is −
Number of elements in RDD -> 8
collect()
All the elements in the RDD are returned.
----------------------------------------collect.py---------------------------------------
from pyspark import SparkContext

sc = SparkContext("local", "Collect app")
words = sc.parallelize(
   ["scala", "java", "hadoop", "spark", "akka",
    "spark vs hadoop", "pyspark", "pyspark and spark"]
)
coll = words.collect()
print("Elements in RDD -> %s" % coll)
----------------------------------------collect.py---------------------------------------
Command − The command for collect() is −
$SPARK_HOME/bin/spark-submit collect.py
Output − The output for the above command is −
Elements in RDD -> ['scala', 'java', 'hadoop', 'spark', 'akka', 'spark vs hadoop', 'pyspark', 'pyspark and spark']
foreach(f)
Applies the given function to each element of the RDD. foreach is an action and does not return a value. In the following example, we call a print function in foreach, which prints all the elements in the RDD.
----------------------------------------foreach.py---------------------------------------
from pyspark import SparkContext

sc = SparkContext("local", "ForEach app")
words = sc.parallelize(
   ["scala", "java", "hadoop", "spark", "akka",
    "spark vs hadoop", "pyspark", "pyspark and spark"]
)

def f(x):
   print(x)

fore = words.foreach(f)
----------------------------------------foreach.py---------------------------------------
Command − The command for foreach(f) is −
$SPARK_HOME/bin/spark-submit foreach.py
Output − The output for the above command is −
scala
java
hadoop
spark
akka
spark vs hadoop
pyspark
pyspark and spark
filter(f)
A new RDD is returned containing only the elements which satisfy the function inside filter. In the following example, we keep only the strings containing "spark".
----------------------------------------filter.py---------------------------------------
from pyspark import SparkContext

sc = SparkContext("local", "Filter app")
words = sc.parallelize(
   ["scala", "java", "hadoop", "spark", "akka",
    "spark vs hadoop", "pyspark", "pyspark and spark"]
)
words_filter = words.filter(lambda x: 'spark' in x)
filtered = words_filter.collect()
print("Filtered RDD -> %s" % filtered)
----------------------------------------filter.py----------------------------------------
Command − The command for filter(f) is −
$SPARK_HOME/bin/spark-submit filter.py
Output − The output for the above command is −
Filtered RDD -> ['spark', 'spark vs hadoop', 'pyspark', 'pyspark and spark']
map(f, preservesPartitioning = False)
A new RDD is returned by applying a function to each element in the RDD. In the following example, we form key-value pairs by mapping every string to the value 1.
----------------------------------------map.py---------------------------------------
from pyspark import SparkContext

sc = SparkContext("local", "Map app")
words = sc.parallelize(
   ["scala", "java", "hadoop", "spark", "akka",
    "spark vs hadoop", "pyspark", "pyspark and spark"]
)
words_map = words.map(lambda x: (x, 1))
mapping = words_map.collect()
print("Key value pair -> %s" % mapping)
----------------------------------------map.py---------------------------------------
Command − The command for map(f, preservesPartitioning=False) is −
$SPARK_HOME/bin/spark-submit map.py
Output − The output of the above command is −
Key value pair -> [('scala', 1), ('java', 1), ('hadoop', 1), ('spark', 1), ('akka', 1), ('spark vs hadoop', 1), ('pyspark', 1), ('pyspark and spark', 1)]
reduce(f)
After performing the specified commutative and associative binary operation on all elements of the RDD, the final result is returned. In the following example, we import the add function from the operator module and apply it on 'nums' to carry out a simple addition operation.
----------------------------------------reduce.py---------------------------------------
from pyspark import SparkContext
from operator import add

sc = SparkContext("local", "Reduce app")
nums = sc.parallelize([1, 2, 3, 4, 5])
adding = nums.reduce(add)
print("Adding all the elements -> %i" % adding)
----------------------------------------reduce.py---------------------------------------
Command − The command for reduce(f) is −
$SPARK_HOME/bin/spark-submit reduce.py
Output − The output of the above command is −
Adding all the elements -> 15
join(other, numPartitions = None)
It returns an RDD containing pairs of elements with matching keys, together with all the values for those keys. In the following example, there are two pairs of elements in two different RDDs. After joining these two RDDs, we get an RDD with elements having matching keys and their values.
----------------------------------------join.py---------------------------------------
from pyspark import SparkContext

sc = SparkContext("local", "Join app")
x = sc.parallelize([("spark", 1), ("hadoop", 4)])
y = sc.parallelize([("spark", 2), ("hadoop", 5)])
joined = x.join(y)
final = joined.collect()
print("Join RDD -> %s" % final)
----------------------------------------join.py---------------------------------------
Command − The command for join(other, numPartitions = None) is −
$SPARK_HOME/bin/spark-submit join.py
Output − The output for the above command is −
Join RDD -> [('spark', (1, 2)), ('hadoop', (4, 5))]
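Note that the join is performed per key − if a key appears more than once, every combination of its values is paired. The following sketch (a hypothetical join_multi.py, not part of the original example) illustrates this behaviour.
----------------------------------------join_multi.py---------------------------------------
from pyspark import SparkContext

sc = SparkContext("local", "Join multi app")
# 'spark' appears twice in x, so it is paired with every matching value in y
x = sc.parallelize([("spark", 1), ("spark", 3), ("hadoop", 4)])
y = sc.parallelize([("spark", 2), ("hadoop", 5)])
joined = x.join(y)
print("Join RDD -> %s" % joined.collect())
# Expected result (order may vary):
# [('spark', (1, 2)), ('spark', (3, 2)), ('hadoop', (4, 5))]
----------------------------------------join_multi.py---------------------------------------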
cache()
Persist this RDD with the default storage level (MEMORY_ONLY). You can also check if the RDD is cached or not.
----------------------------------------cache.py---------------------------------------
from pyspark import SparkContext

sc = SparkContext("local", "Cache app")
words = sc.parallelize(
   ["scala", "java", "hadoop", "spark", "akka",
    "spark vs hadoop", "pyspark", "pyspark and spark"]
)
words.cache()
# is_cached reports whether the RDD is marked for persistence
caching = words.is_cached
print("Words got cached -> %s" % caching)
----------------------------------------cache.py---------------------------------------
Command − The command for cache() is −
$SPARK_HOME/bin/spark-submit cache.py
Output − The output for the above program is −
Words got cached -> True
These were some of the most important operations that can be performed on a PySpark RDD.
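As a quick recap, the sketch below (a hypothetical pipeline.py, not from the original tutorial) chains several of the operations discussed above − filter, map and collect − on the same words RDD.
----------------------------------------pipeline.py---------------------------------------
from pyspark import SparkContext

sc = SparkContext("local", "Pipeline app")
words = sc.parallelize(
   ["scala", "java", "hadoop", "spark", "akka",
    "spark vs hadoop", "pyspark", "pyspark and spark"]
)

# Transformations are chained lazily ...
pairs = words.filter(lambda x: 'spark' in x).map(lambda x: (x, 1))

# ... and the collect() action triggers the computation
print("Filtered key value pairs -> %s" % pairs.collect())
----------------------------------------pipeline.py---------------------------------------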