
Monitoring and Improving the Performance of Machine Learning Models
MP4 | Video: AVC 1920x1080 | Audio: AAC 48KHz 2ch | Duration: 35M | 863 MB
Genre: eLearning | Language: English

It’s critical to have “humans in the loop” when automating the deployment of machine learning (ML) models.

Why? Because models often perform worse over time.

This course covers the human-directed safeguards that prevent poorly performing models from being deployed into production, as well as techniques for evaluating models over time. A minimal sketch of such a safeguard follows below.
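
As a concrete illustration of that kind of safeguard, here is a minimal Python sketch of a promotion gate that auto-approves a candidate model only if it clearly beats the production model on a held-out metric and otherwise routes the decision to a human. The function name, metric, and thresholds are illustrative assumptions, not part of any specific tool covered in the course.

    def approve_for_deployment(candidate_auc, production_auc, min_gain=0.01):
        """Human-in-the-loop gate: auto-promote only clear improvements.

        All names and thresholds here are illustrative assumptions.
        """
        if candidate_auc >= production_auc + min_gain:
            return "deploy"              # clearly better: safe to promote
        if candidate_auc < production_auc:
            return "reject"              # worse than production: block the release
        return "needs human review"      # marginal change: escalate to a person

    print(approve_for_deployment(candidate_auc=0.87, production_auc=0.85))  # -> "deploy"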

We’ll use ModelDB to capture the appropriate metrics that help you identify poorly performing models.

We'll review the many factors that affect model performance (e.g., changing users and user preferences, stale data) and the variables that lose predictive power.
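
One simple way to surface that kind of degradation, assuming you log true labels and model scores together with a time identifier, is to compute a metric per time window and watch the trend. The helper below is only a sketch using scikit-learn on made-up data.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def auc_by_window(y_true, y_scores, window_ids):
        """Compute ROC AUC per time window to expose gradual degradation."""
        y_true, y_scores, window_ids = map(np.asarray, (y_true, y_scores, window_ids))
        return {int(w): roc_auc_score(y_true[window_ids == w], y_scores[window_ids == w])
                for w in np.unique(window_ids)}

    # Toy scored traffic from three consecutive weeks (illustrative values only)
    y_true   = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
    y_scores = [0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.65, 0.55, 0.45, 0.5, 0.6]
    weeks    = [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]
    print(auc_by_window(y_true, y_scores, weeks))  # AUC falls week over week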

We'll explain how to use classification and prediction scoring methods such as precision-recall, ROC, and Jaccard similarity.
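
As a quick standalone reference for those scores, here is a short scikit-learn sketch on toy labels and predicted probabilities; the data and the library choice are ours, not the course's.

    from sklearn.metrics import precision_score, recall_score, roc_auc_score, jaccard_score

    # Toy ground-truth labels and model outputs (illustrative values only)
    y_true   = [1, 0, 1, 1, 0, 1, 0, 0]
    y_scores = [0.9, 0.2, 0.65, 0.8, 0.3, 0.4, 0.55, 0.1]  # predicted probabilities
    y_pred   = [1 if s >= 0.5 else 0 for s in y_scores]    # thresholded class labels

    print("precision:", precision_score(y_true, y_pred))   # TP / (TP + FP)
    print("recall:   ", recall_score(y_true, y_pred))      # TP / (TP + FN)
    print("ROC AUC:  ", roc_auc_score(y_true, y_scores))   # ranking quality of the scores
    print("Jaccard:  ", jaccard_score(y_true, y_pred))     # TP / (TP + FP + FN)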

We'll also show you how ModelDB allows you to track provenance and metrics for model performance and health; how to integrate ModelDB with Spark ML; and how to use the ModelDB APIs to store information when training models in Spark ML.
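
To give a feel for the Spark ML side, here is a minimal PySpark sketch that trains a classifier, evaluates it, and gathers the parameters and metrics one would hand to ModelDB. The actual ModelDB client/syncer calls are omitted (its API differs by version), so the plain dictionary at the end is only a stand-in, and the tiny inline dataset is made up.

    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.evaluation import BinaryClassificationEvaluator

    spark = SparkSession.builder.appName("modeldb-metrics-sketch").getOrCreate()

    # Tiny made-up training set: a label plus two numeric features
    df = spark.createDataFrame(
        [(1.0, 3.1, 2.2), (0.0, 0.4, 1.1), (1.0, 2.8, 3.0), (0.0, 1.2, 0.7)],
        ["label", "f1", "f2"])
    train = VectorAssembler(inputCols=["f1", "f2"], outputCol="features").transform(df)

    lr = LogisticRegression(maxIter=10, regParam=0.01)
    model = lr.fit(train)
    auc = BinaryClassificationEvaluator(metricName="areaUnderROC").evaluate(model.transform(train))

    # Stand-in for what a ModelDB record would contain (actual client calls omitted)
    run_record = {"estimator": "LogisticRegression",
                  "params": {"maxIter": 10, "regParam": 0.01},
                  "metrics": {"areaUnderROC": auc}}
    print(run_record)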

Learners should have basic familiarity with the following: Scala or Python; Hadoop, Spark, or Pandas; SBT or Maven; cloud platforms like Amazon Web Services; Bash, Docker, and REST.

Learn how to use ModelDB and Spark to track and improve model performance over time
Understand how to identify poorly performing models and prevent them from deploying into production
Explore classification and prediction scoring methods for training and evaluating ML models

Manasi Vartak is a PhD student in the Database Group at MIT, where she works on systems for the analysis of large-scale data.



Release date: 2017-12-13