Introduction to Spark Programming

This course introduces the Apache Spark distributed computing engine and is suitable for developers, data analysts, architects, technical managers, and anyone who needs to use Spark in a hands-on manner.

The course provides a solid technical introduction to the Spark architecture and how Spark works. It covers the basic building blocks of Spark (e.g., RDDs and the distributed compute engine), as well as higher-level constructs that provide a simpler and more capable interface (e.g., Spark SQL and DataFrames). It also covers more advanced capabilities such as the use of Spark Streaming to process streaming data and provides an overview of Spark GraphX (graph processing) and Spark MLlib (machine learning). Finally, the course explores possible performance issues and strategies for optimization.

The course is very hands-on, with many labs. Participants will interact with Spark through the Spark shell (for interactive, ad hoc processing) as well as through programs using the Spark API.

The Apache Spark distributed computing engine is rapidly becoming a primary tool for processing and analyzing large-scale data sets. It has many advantages over existing engines such as Hadoop, including runtime speeds that are 10-100x faster and a much simpler programming model. After taking this course, you will be ready to work with Spark in an informed and productive manner.

Delegates will learn how to:

  • Understand the need for Spark in data processing
  • Understand the Spark architecture and how it distributes computations to cluster nodes
  • Become familiar with basic installation / setup / layout of Spark
  • Use the Spark shell for interactive and ad-hoc operations
  • Understand RDDs (Resilient Distributed Datasets), and data partitioning, pipelining, and computations
  • Understand/use RDD ops such as map(), filter(), reduce(), groupByKey(), join(), etc.
  • Understand Spark's data caching and its usage
  • Write/run standalone Spark programs with the Spark API
  • Use Spark SQL / DataFrames to efficiently process structured data
  • Use Spark Streaming to process streaming (real-time) data
  • Understand performance implications and optimizations when using Spark
  • Become familiar with Spark GraphX and MLlib

Scala Ramp Up

Scala Introduction, Variables, Data Types, Control Flow

The Scala Interpreter

Collections and their Standard Methods (e.g. map())

Functions, Methods, Function Literals

Class, Object, Trait
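
To give a flavour of the Scala covered in this module, the following minimal sketch (illustrative only, not part of the course materials) touches variables, a collection method such as map(), a function literal, and a simple class:

    object ScalaRampUp {
      // A simple class with a field and a method
      class Greeter(val name: String) {
        def greet(): String = s"Hello, $name"
      }

      def main(args: Array[String]): Unit = {
        val numbers = List(1, 2, 3, 4)            // immutable value (val)
        var total   = 0                           // mutable variable (var)
        val doubled = numbers.map(n => n * 2)     // map() with a function literal
        numbers.foreach(n => total += n)          // iteration via a higher-order method
        println(doubled)                          // List(2, 4, 6, 8)
        println(new Greeter("Spark").greet())     // Hello, Spark
        println(s"total = $total")                // total = 10
      }
    }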

Introduction to Spark

Overview, Motivations, Spark Systems

Spark Ecosystem

Spark vs. Hadoop

Acquiring and Installing Spark

The Spark Shell
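
By way of illustration, a typical spark-shell session might look like the sketch below; the input path is hypothetical and the exact prompts and banner vary by Spark version. The shell pre-creates the SparkContext as sc:

    $ spark-shell
    scala> val lines = sc.textFile("data/sample.txt")      // hypothetical input file
    scala> lines.count()
    scala> lines.filter(_.contains("ERROR")).take(5)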

RDDs and Spark Architecture

RDD Concepts, Lifecycle, Lazy Evaluation

RDD Partitioning and Transformations

Working with RDDs - Creating and Transforming (map, filter, etc.)

Key-Value Pairs - Definition, Creation, and Operations

Caching - Concepts, Storage Type, Guidelines
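
To give a sense of what the RDD labs involve, here is a minimal sketch (assuming a SparkContext sc, e.g. in the Spark shell, and a hypothetical input file) covering transformations, key-value operations, and caching:

    // Create an RDD, transform it, and cache the result
    val lines  = sc.textFile("data/orders.txt")                 // hypothetical file
    val words  = lines.flatMap(_.split("\\s+"))                 // transformation (lazy)
    val pairs  = words.map(w => (w, 1))                         // key-value pairs
    val counts = pairs.reduceByKey(_ + _)                       // aggregation (causes a shuffle)
    counts.cache()                                              // keep in memory for reuse
    counts.filter { case (_, n) => n > 10 }.take(20)            // action triggers evaluation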

Spark API

Overview, Basic Driver Code, SparkConf

Creating and Using a SparkContext
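
A minimal standalone driver of the kind written in this module might look like the sketch below; the application name, master URL, and argument handling are placeholders (in practice the master is usually supplied by spark-submit):

    import org.apache.spark.{SparkConf, SparkContext}

    object WordCountApp {
      def main(args: Array[String]): Unit = {
        // Configure and create the SparkContext
        val conf = new SparkConf().setAppName("WordCountApp").setMaster("local[*]")
        val sc   = new SparkContext(conf)

        val counts = sc.textFile(args(0))                 // input path passed as an argument
          .flatMap(_.split("\\s+"))
          .map((_, 1))
          .reduceByKey(_ + _)

        counts.saveAsTextFile(args(1))                    // output path passed as an argument
        sc.stop()
      }
    }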


Building and Running Applications

Application Lifecycle

Cluster Managers

Logging and Debugging

Spark SQL

Introduction and Usage

DataFrames and SQLContext

Working with JSON

Querying - The DataFrame DSL, and SQL

Data Formats
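
As an illustration of the DataFrame DSL and SQL side by side, the sketch below uses the Spark 1.x SQLContext named in this module (newer releases use SparkSession instead); the JSON path is hypothetical:

    import org.apache.spark.sql.SQLContext

    val sqlContext = new SQLContext(sc)                       // sc: an existing SparkContext
    val people = sqlContext.read.json("data/people.json")     // hypothetical JSON file

    // DataFrame DSL
    people.select("name", "age").filter(people("age") > 21).show()

    // SQL over the same data
    people.registerTempTable("people")
    sqlContext.sql("SELECT name, age FROM people WHERE age > 21").show()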

Spark Streaming

Overview and Streaming Basics

DStreams (Discretized Streams)

Architecture, Stateless, Stateful, and Windowed Transformations

Spark Streaming API

Programming and Transformations
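
A minimal DStream word count of the kind built in the streaming labs might look like this sketch; the host, port, batch interval, and window size are placeholders:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val conf = new SparkConf().setAppName("StreamingWordCount").setMaster("local[2]")
    val ssc  = new StreamingContext(conf, Seconds(5))           // 5-second batches

    val lines  = ssc.socketTextStream("localhost", 9999)        // placeholder source
    val counts = lines.flatMap(_.split("\\s+"))
                      .map((_, 1))
                      .reduceByKeyAndWindow(_ + _, Seconds(30)) // windowed transformation

    counts.print()
    ssc.start()
    ssc.awaitTermination()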

Performance Characteristics and Tuning

The Spark UI

Narrow vs. Wide Dependencies

Minimizing Data Processing and Shuffling

Using Caching

Using Broadcast Variables and Accumulators
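
To illustrate broadcast variables and accumulators, here is a sketch assuming a SparkContext sc and a hypothetical CSV of orders whose second field is a country code; the longAccumulator call shown is the Spark 2.x API:

    // Broadcast a small lookup table so each executor gets one read-only copy
    val countryNames = sc.broadcast(Map("UK" -> "United Kingdom", "FR" -> "France"))

    // Accumulator for counting bad records across the cluster (Spark 2.x API)
    val badRecords = sc.longAccumulator("badRecords")

    val orders = sc.textFile("data/orders.csv")                  // hypothetical file
    val named  = orders.map(_.split(",")).flatMap { fields =>
      countryNames.value.get(fields(1)) match {
        case Some(country) => Some((country, 1))
        case None          => badRecords.add(1); None
      }
    }

    named.reduceByKey(_ + _).collect().foreach(println)
    println(s"bad records: ${badRecords.value}")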

Spark GraphX Overview (Optional)


Constructing Simple Graphs

GraphX API

Shortest Path Example
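
For the shortest-path example, a sketch along these lines (a tiny hand-built graph; assumes a SparkContext sc) shows the GraphX pieces involved:

    import org.apache.spark.graphx.{Edge, Graph, VertexId}
    import org.apache.spark.graphx.lib.ShortestPaths

    // A tiny graph: 1 -> 2 -> 3, plus a direct edge 1 -> 3
    val vertices = sc.parallelize(Seq((1L, "a"), (2L, "b"), (3L, "c")))
    val edges    = sc.parallelize(Seq(Edge(1L, 2L, 1), Edge(2L, 3L, 1), Edge(1L, 3L, 1)))
    val graph    = Graph(vertices, edges)

    // Hop counts from every vertex to the landmark vertex 3
    val result = ShortestPaths.run(graph, Seq(3L))
    result.vertices.collect().foreach(println)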

MLlib Overview (Optional)


Feature Vectors

Clustering / Grouping, K-Means
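
A minimal K-Means sketch using MLlib's RDD-based API (assuming a SparkContext sc; the points are made up to form two obvious clusters):

    import org.apache.spark.mllib.clustering.KMeans
    import org.apache.spark.mllib.linalg.Vectors

    // Feature vectors: two groups of 2-D points
    val points = sc.parallelize(Seq(
      Vectors.dense(0.0, 0.0), Vectors.dense(1.0, 1.0),
      Vectors.dense(9.0, 8.0), Vectors.dense(8.0, 9.0)
    ))

    val model = KMeans.train(points, k = 2, maxIterations = 20)
    model.clusterCenters.foreach(println)
    println(model.predict(Vectors.dense(0.5, 0.5)))   // cluster id of a new point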



Prerequisites

Reasonable programming experience.

Program Details
Duration: 3 Days
Capacity: Max 12 Persons
Training Type: Classroom / Virtual Classroom
