Created by Imtiaz Ahmad | Video: 1280x720 | Audio: AAC 48 kHz 2ch | Duration: 6h 54m | Lectures: 31 | Size: 7.94 GB | Language: English | Subtitles: English [Auto-generated]
Learn how to slice and dice data using the next generation big data platform - Apache Spark!
What you'll learn
Utilize the most powerful big data batch and stream processing engine to solve big data problems
Master the new Spark Java Datasets API to slice and dice big data in an efficient manner (see the sketch after this list)
Build, deploy and run Spark jobs on the cloud and benchmark performance on various hardware configurations
Optimize Spark clusters to work on big data efficiently and understand performance tuning
Transform structured and semi-structured data using Spark SQL, Dataframes and Datasets
Implement popular Machine Learning algorithms in Spark such as Linear Regression, Logistic Regression, and K-Means Clustering
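To give a taste of the Datasets API slicing and dicing mentioned above, here is a minimal sketch; the SparkSession setup, file path and column names are illustrative assumptions, not material taken from the course:

```java
import static org.apache.spark.sql.functions.avg;
import static org.apache.spark.sql.functions.col;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SliceAndDice {
    public static void main(String[] args) {
        // Local SparkSession for experimentation; a real job would run on a cluster.
        SparkSession spark = SparkSession.builder()
            .appName("SliceAndDice")
            .master("local[*]")
            .getOrCreate();

        // Hypothetical CSV of trades with symbol, price, quantity columns (assumed layout).
        Dataset<Row> trades = spark.read()
            .option("header", "true")
            .option("inferSchema", "true")
            .csv("data/trades.csv");

        // Slice and dice: filter, group, aggregate, sort.
        Dataset<Row> avgPriceBySymbol = trades
            .filter(col("quantity").gt(100))
            .groupBy(col("symbol"))
            .agg(avg(col("price")).alias("avg_price"))
            .orderBy(col("avg_price").desc());

        avgPriceBySymbol.show();
        spark.stop();
    }
}
```

The same filter/groupBy/agg pipeline runs unchanged whether the job executes locally or on a cluster; only the master URL and deployment settings differ.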
Requirements
Some basic Java programming experience is required. A crash course on Java 8 lambdas is included
You will need a personal computer with an internet connection.
The software needed for this course is completely free, and I'll walk you through the steps to get it installed on your computer.
Description
Apache Spark is the next generation batch and stream processing engine. It has been proven to be almost 100 times faster than Hadoop and is much, much easier to develop distributed big data applications with. Its demand has skyrocketed in recent years, and having this technology on your resume is truly a game changer. Over 3,000 companies are using Spark in production right now and the list is growing very quickly! Some of the big names include Oracle, Hortonworks, Cisco, Verizon, Visa, Microsoft and Amazon, as well as most of the world's big banks and financial institutions!
In this course you'll learn everything you need to know about using Apache Spark in your organization while using its latest and greatest Java Datasets API. Below are some of the things you'll learn:
How to develop Spark Java Applications using Spark SQL Dataframes
Understand how the Spark Standalone cluster works behind the scenes
How to use various transformations to slice and dice your data in Spark Java
How to marshal/unmarshal Java domain objects (POJOs) while working with Spark Datasets (see the POJO sketch after this list)
Master joins, filters and aggregations, and ingest data of various sizes and file formats (TXT, CSV, JSON, etc.)
Analyze over 18 million real-world comments on Reddit to find the most trending words used
Develop programs using Spark Streaming for streaming stock market index files
Stream network sockets and messages queued on a Kafka cluster
Learn how to develop the most popular machine learning algorithms using Spark MLlib
Covers the most popular algorithms: Linear Regression, Logistic Regression and K-Means Clustering (see the Linear Regression sketch after this list)
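For a sense of what marshalling rows into POJOs with the Datasets API looks like, here is a minimal sketch; the Trade bean, the CSV path and the column names are illustrative assumptions rather than course material:

```java
import java.io.Serializable;

import org.apache.spark.api.java.function.FilterFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class PojoDatasetExample {

    // Hypothetical domain object (POJO); Spark maps columns to bean properties by name.
    public static class Trade implements Serializable {
        private String symbol;
        private double price;
        private long quantity;

        public String getSymbol() { return symbol; }
        public void setSymbol(String symbol) { this.symbol = symbol; }
        public double getPrice() { return price; }
        public void setPrice(double price) { this.price = price; }
        public long getQuantity() { return quantity; }
        public void setQuantity(long quantity) { this.quantity = quantity; }
    }

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("PojoDatasetExample")
            .master("local[*]")
            .getOrCreate();

        // Ingest a CSV file (path is an assumption) and unmarshal rows into Trade objects.
        Dataset<Row> raw = spark.read()
            .option("header", "true")
            .option("inferSchema", "true")
            .csv("data/trades.csv");

        Dataset<Trade> trades = raw.as(Encoders.bean(Trade.class));

        // Typed filter using a Java 8 lambda, then back to untyped operations for aggregation.
        FilterFunction<Trade> largeTrade = t -> t.getQuantity() > 100;
        Dataset<Trade> largeTrades = trades.filter(largeTrade);

        largeTrades.groupBy("symbol").count().show();
        spark.stop();
    }
}
```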
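Here is a similarly minimal sketch of one of the MLlib algorithms named above, Linear Regression; the housing data, file path and column names are illustrative assumptions:

```java
import org.apache.spark.ml.feature.VectorAssembler;
import org.apache.spark.ml.regression.LinearRegression;
import org.apache.spark.ml.regression.LinearRegressionModel;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class LinearRegressionExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("LinearRegressionExample")
            .master("local[*]")
            .getOrCreate();

        // Hypothetical housing data with numeric feature columns and a "price" label column.
        Dataset<Row> housing = spark.read()
            .option("header", "true")
            .option("inferSchema", "true")
            .csv("data/housing.csv");

        // MLlib expects all features packed into a single vector column.
        VectorAssembler assembler = new VectorAssembler()
            .setInputCols(new String[]{"sqft", "bedrooms"})
            .setOutputCol("features");
        Dataset<Row> training = assembler.transform(housing).select("features", "price");

        LinearRegression lr = new LinearRegression()
            .setFeaturesCol("features")
            .setLabelCol("price")
            .setMaxIter(100);
        LinearRegressionModel model = lr.fit(training);

        System.out.println("Coefficients: " + model.coefficients()
            + ", intercept: " + model.intercept());
        spark.stop();
    }
}
```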
You'll be developing over 15 practical Spark Java applications crunching through real-world data and slicing and dicing it in various ways using several data transformation techniques. This course is especially important for people who would like to be hired as a Java developer or data engineer, because Spark is a hugely sought-after skill. We'll even go over how to set up a live cluster and configure Spark jobs to run on the cloud. You'll also learn about the practical implications of performance tuning and scaling out a cluster to work with big data, so you'll definitely be learning a ton in this course. This course has a 30-day money-back guarantee, and you will have access to all of the code used in this course.
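To give a flavor of the cluster and performance tuning topics mentioned above, here is a minimal configuration sketch; the master URL and the specific settings are illustrative assumptions, not the course's recommended values:

```java
import org.apache.spark.sql.SparkSession;

public class TunedJob {
    public static void main(String[] args) {
        // Point the job at a (hypothetical) standalone cluster and set a few common tuning knobs.
        SparkSession spark = SparkSession.builder()
            .appName("TunedJob")
            .master("spark://spark-master:7077")              // assumed standalone master URL
            .config("spark.executor.memory", "4g")            // memory per executor
            .config("spark.executor.cores", "4")              // cores per executor
            .config("spark.sql.shuffle.partitions", "200")    // partitions used by Spark SQL shuffles
            .getOrCreate();

        // ... build and run Dataset transformations here ...

        spark.stop();
    }
}
```

In practice these settings are more often passed at deploy time via spark-submit flags (for example --executor-memory) than hard-coded, so the same job can be resized across cluster configurations.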
Who this course is for
Anyone who is a Java developer and wants to add this seriously marketable technology to their resume
Anyone who wants to get into the data science field
Anyone who is interested in the world of big data
Anyone who wants to implement machine learning algorithms in Spark
Release date: 2019-08-18