MP4 | Video: h264, 1280x720 | Audio: AAC, 48 kHz, 2 Ch
Genre: eLearning | Language: English + .VTT | Duration: 3 hours | Size: 1.75 GB
Learn Apache Spark's key concepts using real-world examples
What you'll learn
How to create RDDs, DataFrames, and Datasets
How to properly use map, reduce, and filter
How to partition RDDs in distributed systems
How to cache datasets in memory to reduce computation
How to tune Spark programs
How to run iterative algorithms on a cluster
The difference between groupByKey and reduceByKey
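To give a flavor of the topics above, here is a minimal Scala sketch of the groupByKey vs reduceByKey distinction in a word count. It assumes an existing SparkContext named `sc` (as in spark-shell); the sample data is hypothetical.

```scala
// Assumes an existing SparkContext `sc` (e.g. provided by spark-shell).
val words = sc.parallelize(Seq("a", "b", "a", "c", "b", "a"))
val pairs = words.map(word => (word, 1))

// groupByKey shuffles every (word, 1) pair across the network,
// then sums the values on the reducer side.
val countsGrouped = pairs.groupByKey().mapValues(_.sum)

// reduceByKey combines values locally within each partition first
// (a map-side combine), so far less data crosses the network.
val countsReduced = pairs.reduceByKey(_ + _)

countsReduced.collect()  // Array((a,3), (b,2), (c,1)), in some order
```

Both produce the same counts, but reduceByKey is usually preferred for aggregations because it shuffles only one partial sum per key per partition.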
Requirements
Familiar with Ubuntu
Familiar with Scala
Description
Learn Apache Spark's key concepts using real-world examples. This course covers everything you need to know to get started with Spark. We begin with resilient distributed datasets (RDDs) and the main transformations and actions that can be performed on them. We then move on to advanced Spark concepts such as partitioning and persistence. Finally, the course ends with Spark's SQL API, which includes two data abstractions, DataFrames and Datasets, that sit on top of Spark RDDs and enable new levels of optimization as well as SQL querying capabilities.
Who this course is for:
Beginner Scala developers curious about data science
Release date: 2019-08-13