Deploying Spark ML Pipelines in Production on AWS
MP4 | Video: AVC 1920x1080 | Audio: AAC 48KHz 2ch | Duration: 23M | 818 MB
Genre: eLearning | Language: English

Translating a Spark application from running in a local environment to running on a production cluster in the cloud requires several critical steps, including publishing artifacts, installing dependencies, and defining the steps in a pipeline.
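Publishing artifacts for EMR typically means building an assembly ("fat") JAR and uploading it to S3. A minimal build sketch, assuming the sbt-assembly plugin is enabled and with illustrative names and versions (not taken from the course):

```scala
// build.sbt (sketch) -- assumes sbt-assembly is enabled in project/plugins.sbt;
// the project name, versions, and coordinates below are illustrative only.
name := "etl"
version := "0.1.0"
scalaVersion := "2.11.12"

// Spark is provided by the EMR cluster at runtime, so exclude it from the
// assembly JAR by marking it "provided".
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.2.0" % "provided"
```

With a setup like this, `sbt assembly` would produce the JAR, which could then be copied to a bucket with `aws s3 cp` before being referenced from an EMR step.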
This video is a hands-on guide through the process of deploying your Spark ML pipelines in production.
You’ll learn how to create a pipeline that supports model reproducibility—making your machine learning models more reliable—and how to update your pipeline incrementally as the underlying data change.
Learners should have basic familiarity with the following: Scala or Python; Hadoop, Spark, or Pandas; SBT or Maven; Amazon Web Services such as S3, EMR, and EC2; Bash, Docker, and REST.
- Understand how various cloud ecosystem components interact (e.g., Amazon S3, EMR, EC2, and so on)
- Learn how to architect the components of a cloud ecosystem into an end-to-end model pipeline
- Explore the capabilities and limitations of Spark in building an end-to-end model pipeline
- Learn to write, publish, deploy, and schedule an ETL process using Spark on AWS EMR
- Understand how to create a pipeline that supports model reproducibility and reliability
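Deploying and scheduling an ETL process on EMR usually maps to an EMR "step". A minimal sketch of a step definition, assuming the application JAR has already been published to S3; the bucket, paths, class name, and arguments are placeholders, not values from the course:

```json
[
  {
    "Name": "nightly-etl",
    "Type": "Spark",
    "ActionOnFailure": "CONTINUE",
    "Args": [
      "--deploy-mode", "cluster",
      "--class", "com.example.etl.Main",
      "s3://my-bucket/jars/etl-assembly-0.1.0.jar",
      "--input", "s3://my-bucket/raw/",
      "--output", "s3://my-bucket/curated/"
    ]
  }
]
```

A file like this could be submitted to a running cluster with `aws emr add-steps --cluster-id <id> --steps file://steps.json`, or attached at cluster creation for scheduled runs.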
Release date: 2017-12-14