
Apache Spark Quick Start Guide
Quickly learn the art of writing efficient big data applications with Apache Spark
By: Akash Grade and Shrey Mehrotra
eText | 31 January 2019 | Edition Number 1
A practical guide to solving complex data processing challenges by applying the best optimization techniques in Apache Spark.
Key Features
- Learn about the core concepts and the latest developments in Apache Spark
- Master writing efficient big data applications with Spark's built-in modules for SQL, streaming, machine learning, and graph analysis
- Get introduced to a variety of optimizations based on real-world experience
Book Description
Apache Spark is a flexible framework that allows processing of batch and real-time data. Its unified engine has made it quite popular for big data use cases. This book will help you to get started with Apache Spark 2.0 and write big data applications for a variety of use cases.
Although this book is intended to help you get started with Apache Spark, it also focuses on explaining the core concepts.
This practical guide provides a quick start to the Spark 2.0 architecture and its components. It teaches you how to set up Spark on your local machine. As we move ahead, you will be introduced to resilient distributed datasets (RDDs) and DataFrame APIs, and their corresponding transformations and actions. Then, we move on to the life cycle of a Spark application and learn about the techniques used to debug slow-running applications. You will also go through Spark's built-in modules for SQL, streaming, machine learning, and graph analysis.
Finally, the book will lay out the best practices and optimization techniques that are key for writing efficient Spark applications. By the end of this book, you will have a sound fundamental understanding of the Apache Spark framework and you will be able to write and optimize Spark applications.
What you will learn
- Learn core concepts such as RDDs, DataFrames, transformations, and more
- Set up a Spark development environment
- Choose the right APIs for your applications
- Understand Spark's architecture and the execution flow of a Spark application
- Explore built-in modules for SQL, streaming, ML, and graph analysis
- Optimize your Spark job for better performance
Who this book is for
If you are a big data enthusiast and love processing huge amounts of data, this book is for you. If you are a data engineer looking for the best optimization techniques for your Spark applications, you will find this book helpful. This book also helps data scientists who want to implement their machine learning algorithms in Spark. You need a basic understanding of any one of the programming languages Scala, Python, or Java.
ISBN: 9781789342666
ISBN-10: 178934266X
Published: 31st January 2019
Format: ePUB
Language: English
Publisher: Packt Publishing
Edition Number: 1