Apache Spark is an open-source engine for processing large data sets, and a next-generation successor to MapReduce-style batch systems. Unlike Hadoop MapReduce, Spark supports batch processing, streaming, and complex iterative and interactive algorithms within a single framework. It can run up to 100 times faster than MapReduce in memory and around 10 times faster on disk, and right now it has no serious competitor for fast, general-purpose analytics. Most top-tier companies have adopted this technology. The future of data processing is Spark; there is no doubt about it.

Compared with Hadoop, Spark is easy to learn. Scala, a functional language, is the recommended language for implementing Spark applications, and anyone who knows Java can pick up Scala easily. If you know Python instead, that is not a problem: you can learn PySpark. You can practice Scala on a commodity machine, though 4–8 GB of RAM is highly recommended. We provide the best Spark training, with tutorials and materials. Nowadays most top companies are offering Spark developers large packages, so it is the right time to take Apache Spark training for better career growth.

Apache Spark Online Developer Training

We are planning to start online Spark training in Bangalore. If you are interested, please fill in the form.

Spark training for all.
Fee: 20,000/- (effectively 15,000/-: 5,000/- is refunded if you complete all daily tasks)
Mode: Online
Call: 9247159150 (please WhatsApp me)
Audience: Spark training for students from a non-Hadoop background
Trainer: Sudha
Schedule: March 10 to May 1, 50 days, weekdays (Mon–Fri)
Time: 6:30 AM – 8:30 AM

To attend the paid training, please click on this link:

https://global.gotomeeting.com/join/195482781
(WhatsApp me at 9247159150 to join)

Free training:
Hadoop FileSystem basics: Sunday, Feb 19, 10:00 AM – 1:00 PM

https://global.gotomeeting.com/join/426429053

Please find reviews here

Recorded Spark Demo

In the meantime, if you want to start learning, just contact me; I'll send you some materials to study.

Course content:

Hadoop Overview

  • Lecture
    • How HDFS Reads and Writes Data (see the sketch below)
    • YARN Internal Architecture
    • HDFS Internal Architecture
  • Hands-On
    • HDFS Shell Commands
    • Install Hadoop & Spark on Ubuntu
    • Configure the Hadoop/Spark Environment in Eclipse
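To make the HDFS read/write path concrete, here is a minimal Scala sketch using Hadoop's FileSystem client API. The /tmp/hello.txt path is just an example, and the code assumes a working Hadoop configuration (core-site.xml) on the classpath:

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    object HdfsReadWrite {
      def main(args: Array[String]): Unit = {
        val conf = new Configuration()          // picks up core-site.xml from the classpath
        val fs   = FileSystem.get(conf)

        // Write a small file to HDFS
        val out = fs.create(new Path("/tmp/hello.txt"))
        out.writeBytes("hello hdfs\n")
        out.close()

        // Read it back, line by line
        val in = fs.open(new Path("/tmp/hello.txt"))
        scala.io.Source.fromInputStream(in).getLines().foreach(println)
        in.close()
      }
    }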

Hive Overview

  • Lecture
    • How Hive Works
    • Optimizing Hive Queries
    • Using Sqoop
  • Hands-On
    • Process CSV and JSON Data
    • Bucketing and Partitioning Tables
    • Import MySQL/Oracle Data Using Sqoop

Scala Basics

  • Lecture
    • Scala as a Functional Language
    • Scala vs. Java
  • Hands-On
    • Strings, Numbers
    • List, Array, Map, Set
    • Control Statements, Collections
    • Functions, Methods
    • Pattern Matching (see the sketch below)
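As a taste of the hands-on sessions, here is a small, self-contained Scala sketch exercising functions, collections, and pattern matching; all names and values are illustrative:

    object ScalaBasics {
      def square(n: Int): Int = n * n

      def main(args: Array[String]): Unit = {
        val nums = List(1, 2, 3, 4, 5)
        println(nums.map(square).filter(_ > 5))        // List(9, 16, 25)

        val capitals = Map("India" -> "Delhi", "France" -> "Paris")
        capitals.foreach { case (country, city) => println(s"$country -> $city") }

        // Pattern matching on type and value
        def describe(x: Any): String = x match {
          case 0         => "zero"
          case n: Int    => s"int $n"
          case s: String => s"string $s"
          case _         => "something else"
        }
        println(describe(42))
      }
    }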

Spark Overview

  • Lecture
    • What Makes Spark Powerful
    • Spark Ecosystem
    • Spark Components vs. Hadoop
  • Hands-On
    • Installation & Eclipse Configuration
    • Programs in the Command-Line Interface & Eclipse
    • Process Local and HDFS Files (see the sketch below)
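The classic first Spark program is a word count. Here is a minimal sketch that runs in local mode; the input path is hypothetical, and an hdfs:// URI can be used instead for HDFS files:

    import org.apache.spark.{SparkConf, SparkContext}

    object WordCount {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
        val sc   = new SparkContext(conf)

        val counts = sc.textFile("/tmp/input.txt")     // or an hdfs:// URI
          .flatMap(_.split("\\s+"))
          .map(word => (word, 1))
          .reduceByKey(_ + _)

        counts.take(10).foreach(println)
        sc.stop()
      }
    }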

RDD Fundamentals

  • Lecture
    • Purpose and Structure of RDDs
    • Transformations, Actions, and DAG
    • Key-Value Pair RDDs
  • Hands-On
    • Creating RDDs from Data Files
    • Reshaping Data to Add Structure
    • Interactive Queries Using RDDs (see the sketch below)
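The sketch below ties the lecture points together: transformations such as map and filter only build a lazy DAG, and nothing executes until an action such as collect runs. The CSV file and its id,product,amount layout are hypothetical:

    import org.apache.spark.{SparkConf, SparkContext}

    object RddBasics {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("RddBasics").setMaster("local[*]"))

        val lines = sc.textFile("/tmp/sales.csv")       // hypothetical: id,product,amount

        // Transformations: lazy, they just extend the DAG
        val amounts = lines.map(_.split(","))
          .filter(_.length == 3)
          .map(cols => (cols(1), cols(2).toDouble))     // key-value pair RDD

        val totals = amounts.reduceByKey(_ + _)         // still lazy

        totals.collect().foreach(println)               // action: triggers execution
        sc.stop()
      }
    }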

SparkSQL and DataFrames

  • Lecture
    • Spark SQL and DataFrame Uses
    • DataFrame / SQL APIs
    • Catalyst Query Optimization
  • Hands-On
    • Creating (CSV, JSON) DataFrames
    • Querying with the DataFrame API and SQL (see the sketch below)
    • Caching and Re-using DataFrames
    • Process Hive Data in Spark
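A minimal DataFrame sketch covering the hands-on steps: load a CSV, cache it, and query it with both the DataFrame API and SQL. The file path and its name,age,city header are hypothetical:

    import org.apache.spark.sql.SparkSession

    object DataFrameDemo {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("DataFrameDemo").master("local[*]").getOrCreate()

        // Hypothetical CSV with a header row: name,age,city
        val people = spark.read
          .option("header", "true").option("inferSchema", "true")
          .csv("/tmp/people.csv")

        people.cache()                                  // re-used twice below
        people.filter(people("age") > 30).show()        // DataFrame API

        people.createOrReplaceTempView("people")
        spark.sql("SELECT city, COUNT(*) AS n FROM people GROUP BY city").show()

        spark.stop()
      }
    }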

Spark DataSet API

  • Lecture
    • The Power of the Dataset API in Spark 2.0
    • Serialization Concepts in Datasets
  • Hands-On
    • Creating Datasets
    • Process CSV, JSON, XML, and Text Data
    • Dataset Operations (see the sketch below)
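A short sketch of the typed Dataset API in Spark 2.x: a case class supplies the schema and encoder, so operations are checked at compile time. The JSON file is hypothetical, with one object per line such as {"name":"Ravi","age":28}:

    import org.apache.spark.sql.SparkSession

    case class Person(name: String, age: Long)

    object DatasetDemo {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("DatasetDemo").master("local[*]").getOrCreate()
        import spark.implicits._

        val people = spark.read.json("/tmp/people.json").as[Person]

        // Typed operations on Person objects, not untyped Row columns
        people.filter(_.age > 25).map(_.name).show()

        spark.stop()
      }
    }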

Spark Job Execution

  • Lecture
    • Jobs, Stages, and Tasks
    • Partitions and Shuffles
    • Broadcast Variables and Accumulators
    • Job Performance
  • Hands-On
    • Visualizing DAG Execution
    • Observing Task Scheduling
    • Understanding Performance
    • Measuring Memory Usage
    • Using Shared Variables (see the sketch below)
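A sketch of the two shared-variable types: a broadcast variable ships a small read-only lookup table to every executor once, and an accumulator counts events across all tasks. The country codes are illustrative:

    import org.apache.spark.{SparkConf, SparkContext}

    object SharedVariables {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("SharedVariables").setMaster("local[*]"))

        val countryNames = sc.broadcast(Map("IN" -> "India", "US" -> "United States"))
        val badRecords   = sc.longAccumulator("badRecords")

        val named = sc.parallelize(Seq("IN", "US", "XX", "IN")).map { code =>
          countryNames.value.getOrElse(code, { badRecords.add(1); "unknown" })
        }

        named.collect().foreach(println)
        println(s"bad records: ${badRecords.value}")    // 1
        sc.stop()
      }
    }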

Clustering Architecture

  • Lecture
    • Cluster Managers for Spark: Spark Standalone, YARN, and Mesos
    • Understanding Spark on YARN
    • What Happens in the Cluster When You Submit a Job
  • Hands-On
    • Tracking Jobs through the Cluster UI
    • Understanding Deploy Modes
    • Submit and Monitor a Sample Job

Spark Streaming

  • Lecture
    • Streaming Sources and Tasks
    • DStream APIs and Stateful Streams
    • Flink Introduction
    • Kafka Architecture
  • Hands-On
    • Creating DStreams from Sources (see the sketch below)
    • Operating on DStream Data
    • Viewing Streaming Jobs in the Web UI
    • Sample Flink Streaming Program
    • Kafka Sample Program
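A minimal DStream sketch: read text from a socket in 5-second micro-batches and count words per batch. It assumes a local test source, e.g. one started with `nc -lk 9999`:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object StreamingWordCount {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("StreamingWordCount").setMaster("local[2]")
        val ssc  = new StreamingContext(conf, Seconds(5))   // 5-second batches

        val lines  = ssc.socketTextStream("localhost", 9999)
        val counts = lines.flatMap(_.split("\\s+")).map((_, 1)).reduceByKey(_ + _)
        counts.print()

        ssc.start()
        ssc.awaitTermination()
      }
    }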

AWS with Spark

  • Lecture
    • AWS Architecture
    • Redshift, EMR, and EC2 Functionality
    • How to Minimize AWS Costs
  • Hands-On
    • Submit a Sample Jar to an AWS Cluster
    • Create a Cluster Using EMR
    • Read/Write Data from Redshift (see the sketch below)
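Spark can read Redshift tables through its generic JDBC source. This is a sketch only: the cluster URL, table, and credentials are placeholders, and the Redshift JDBC driver jar must be on the classpath:

    import org.apache.spark.sql.SparkSession

    object RedshiftRead {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("RedshiftRead").getOrCreate()

        val df = spark.read.format("jdbc")
          .option("url", "jdbc:redshift://your-cluster:5439/yourdb")  // placeholder
          .option("dbtable", "public.sales")                          // placeholder
          .option("user", "your_user")
          .option("password", "your_password")
          .load()

        df.show(10)
        spark.stop()
      }
    }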

Advanced concepts in Spark

  • Lecture
    • Memory Management in Spark
    • How to Optimize Spark Applications
    • How Spark Integrates with Other Applications
  • Hands-On
    • Spark with Cassandra Integration (see the sketch below)
    • Alluxio/Tachyon Hands-On Experience
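A sketch of Spark–Cassandra integration, assuming the DataStax spark-cassandra-connector is on the classpath and a keyspace `shop` with a table `orders(id int, amount double)` already exists; the host is a placeholder:

    import org.apache.spark.{SparkConf, SparkContext}
    import com.datastax.spark.connector._   // from spark-cassandra-connector

    object CassandraDemo {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("CassandraDemo")
          .set("spark.cassandra.connection.host", "127.0.0.1")  // placeholder
        val sc = new SparkContext(conf)

        // Write an RDD out, then read the table back
        sc.parallelize(Seq((1, 250.0), (2, 99.5)))
          .saveToCassandra("shop", "orders", SomeColumns("id", "amount"))

        sc.cassandraTable("shop", "orders").collect().foreach(println)
        sc.stop()
      }
    }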

Sample Spark Project

  • Lecture
    • End-to-End Project Overview
    • Complex Problems in a Project
    • Common Steps in Any Project
  • Hands-On
    • Implement a Spark SQL Mini Project
    • Kafka, Cassandra, and Spark Streaming Project
    • Pull Twitter Data and Analyse It

Important notes:

  • A task is assigned after each day's session.
  • Students who complete all the tasks get 5,000/- back.
  • Solutions to the tasks are provided after the training.
  • Minimum 3 months of online support & job assistance.
  • Training covers Spark 2.x and Spark 1.6.2, in Scala.
  • Excellent materials, including all major Spark and Scala books.
  • Guidance for Cloudera/MapR/Databricks Spark certification.

Recommendations: To learn Apache Spark you do not need to learn Hadoop first, but Hadoop knowledge is a huge plus when implementing production-level projects.
To learn Spark, a minimum of core Java (to pick up Scala) and SQL query knowledge is mandatory.
This training is intentionally designed for students from a non-Hadoop background.

If you are interested please fill this form:

Your Name (required)

Your Email (required)

Your Mobile (required)

Your City:

Mode: Online / Offline

Do you have basic knowledge of Hadoop?

Send your query

The course includes these topics:

Introduction:

  • Why is Spark faster?
  • What is the difference between traditional data processing systems and Spark?
  • Real-world use cases
  • Common problems with large-scale systems
  • Using the Spark shell for interactive data analysis
  • Running on standalone and multi-node clusters
  • Scala/Python programming introduction
  • Write at least 10 applications
  • Data processing on small & large datasets
  • Practicals with real-world case studies & datasets
  • CV building & job assistance

Spark Introduction:

  • Spark ecosystem
  • Hash & sort-based shuffle
  • Aggregator
  • Data flow in the framework
  • The power of RDDs
  • The importance of Kryo serialization
  • Executing parallel operations
  • The importance of RDD persisting/caching
  • The importance of SparkContext
  • Executors
  • Pipe, aggregate, fold & glom (see the sketch below)
  • Shared variables: broadcast variables and accumulators
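The less common operators above are easy to show in a sketch: fold is reduce with a per-partition zero value, aggregate lets the result type differ from the element type, and glom exposes each partition as an array:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    object RddExtras {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("RddExtras").setMaster("local[*]"))

        val nums = sc.parallelize(1 to 100, numSlices = 4)
          .persist(StorageLevel.MEMORY_ONLY)            // cached for re-use below

        println(nums.fold(0)(_ + _))                    // 5050

        // aggregate: build (sum, count) to compute an average
        val (sum, count) = nums.aggregate((0, 0))(
          (acc, n) => (acc._1 + n, acc._2 + 1),         // within a partition
          (a, b)   => (a._1 + b._1, a._2 + b._2)        // merging partitions
        )
        println(s"avg = ${sum.toDouble / count}")       // 50.5

        nums.glom().map(_.length).collect().foreach(println)  // partition sizes
        sc.stop()
      }
    }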

Resilient Distributed Datasets (RDDs)

  • RDD operations (transformations & actions)
  • Difference between MapReduce key-value pairs and RDD key-value pairs
  • Aggregations, grouping, joins & sorting data (see the sketch below)
  • How RDDs process data
  • RDD partitions and data locality
  • RDD lineage
  • Garbage collection and memory management
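A sketch of pair-RDD aggregation, joining, and sorting on small in-memory data; the product ids and names are illustrative:

    import org.apache.spark.{SparkConf, SparkContext}

    object PairRddOps {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("PairRddOps").setMaster("local[*]"))

        val sales = sc.parallelize(Seq(("p1", 100.0), ("p2", 40.0), ("p1", 60.0)))
        val names = sc.parallelize(Seq(("p1", "Laptop"), ("p2", "Mouse")))

        val totals = sales.reduceByKey(_ + _)           // aggregate by key
        val joined = totals.join(names)                 // ("p1", (160.0, "Laptop")) ...
        val sorted = joined.sortBy { case (_, (amount, _)) => -amount }

        sorted.collect().foreach(println)
        sc.stop()
      }
    }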

Hadoop with Spark:

  • Brief introduction to HDFS
  • HDFS architecture
  • How HDFS interacts with RDDs
  • Set up a Hadoop cluster (pseudo-distributed or multi-node)
  • Configure & run Spark on the cluster

Spark Core:

  • How Spark works
  • Internal architecture of Spark Core
  • Performance tuning
  • Scope and life cycle of variables and methods
  • Working with key-value pairs
  • Debugging applications

SparkSQL

  • DataFrames
  • Process CSV, JSON, XML, HQL, text, log, Oracle, MySQL, and Redshift data
  • Different ways to create DataFrames
  • The power of the Catalyst optimizer
  • Process Hive applications in Spark

Spark Streaming

  • Lambda architecture
  • Integrate with Kafka and Cassandra
  • Sliding window operations (see the sketch below)
  • Spark vs. Flink
  • Driver, worker, and receiver fault tolerance
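A sliding-window sketch: count words over the last 30 seconds, recomputed every 10 seconds, on top of 5-second batches (window and slide must be multiples of the batch interval). The socket source is a local test assumption:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object WindowedCounts {
      def main(args: Array[String]): Unit = {
        val ssc = new StreamingContext(
          new SparkConf().setAppName("WindowedCounts").setMaster("local[2]"),
          Seconds(5))

        val words = ssc.socketTextStream("localhost", 9999).flatMap(_.split("\\s+"))

        // Window of 30s, sliding every 10s
        val counts = words.map((_, 1))
          .reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(30), Seconds(10))

        counts.print()
        ssc.start()
        ssc.awaitTermination()
      }
    }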

Spark Advanced concepts:

  • Elastic MapReduce (EMR)
  • Running jobs on EMR & YARN
  • Zeppelin
  • Optimizing RDD performance
  • Debugging and troubleshooting Spark applications
  • Overview of SparkR
  • Validating an application
  • The power of MLlib

Hands on Experience:

  • Every topic comes with hands-on experience.
  • A separate AWS account is created for you to run applications on an EMR cluster.
  • Apache Spark installation on Hadoop 2.7.2.
  • Implement at least five sample Scala programs.
  • Implement applications in Zeppelin.
  • Develop at least two applications in Spark Streaming.
  • Develop at least two applications in Spark SQL.
  • Process different file formats (text, JSON, CSV, SequenceFiles).
  • Six months of support to implement POCs.
  • Support for the O'Reilly Apache Spark Developer Certification program.
  • Excellent material with exercises and quizzes.

Scala Basics

  1. Installation
  2. REPL
  3. Data Types
  4. Math
  5. If
  6. While & Do-While
  7. For Loops
  8. User Input / Output
  9. Strings
  10. Recursion
  11. Arrays & ArrayBuffer
  12. For – Yield
  13. Classes
  14. Case Class
  15. Companion Objects / Static
  16. Inheritance
  17. Abstract Classes
  18. Traits
  19. Higher Order Functions (see the sketch below)
  20. Data Structures (Lists, Sets, Tuples, Maps, Option)
  21. Collections (map, foreach, filter, flatMap, find)
  22. Closures
  23. File I/O
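A compact sketch pulling several of these features together: case classes, higher-order functions, for-yield, and Option; the data is illustrative:

    object ScalaFeatures {
      case class Employee(name: String, dept: String, salary: Double)

      def main(args: Array[String]): Unit = {
        val staff = List(
          Employee("Asha", "Eng", 90000),
          Employee("Ravi", "Sales", 60000),
          Employee("Meena", "Eng", 85000))

        // Higher-order functions over a collection
        val engTotal = staff.filter(_.dept == "Eng").map(_.salary).sum

        // for-yield builds a new collection
        val highEarners = for (e <- staff if e.salary > 70000) yield e.name

        // Option instead of null
        val dept = staff.find(_.name == "Ravi").map(_.dept).getOrElse("not found")

        println(s"Eng total: $engTotal, high earners: $highEarners, Ravi: $dept")
      }
    }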

Basic knowledge of Linux and of Scala or Python is helpful; Hadoop knowledge is a plus but, as noted above, not required. We provide the best Big Data training in Bangalore.