Master Big Data Ingestion and Analytics with Flume, Sqoop, Hive and Spark

Duration: 5H 40M | Video: h264, 1280×720 | Audio: AAC, 48 kHz, 2 Ch | Genre: eLearning | Language: English | July 2019

Complete course on Sqoop, Flume, and Hive: great for CCA175 and Hortonworks Spark Certification preparation.

Learn

- Hadoop Distributed File System (HDFS) and commands
- The lifecycle of a Sqoop command
- The Sqoop import command to migrate data from MySQL to HDFS
- The Sqoop import command to migrate data from MySQL to Hive
- Understanding split-by and boundary queries
- Using incremental mode to migrate data from MySQL to HDFS
- Using Sqoop export to migrate data from HDFS to MySQL
- Spark DataFrames: working with different file formats and compression
- Spark SQL

About

In this course, you will start by learning about the Hadoop Distributed File System (HDFS) and the most common Hadoop commands required to work with HDFS.
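
The HDFS portion centers on the hdfs dfs command family. Purely as an illustration of the commands covered, the sketch below drives a few of them from Python; the paths and file names are placeholders, and it assumes a configured Hadoop client with hdfs on the PATH.

```python
import subprocess

def hdfs(*args):
    """Run an `hdfs dfs` subcommand and raise if it fails."""
    subprocess.run(["hdfs", "dfs", *args], check=True)

hdfs("-mkdir", "-p", "/user/cloudera/demo")          # create a directory
hdfs("-put", "orders.csv", "/user/cloudera/demo/")   # copy a local file into HDFS
hdfs("-ls", "/user/cloudera/demo")                   # list the directory
hdfs("-cat", "/user/cloudera/demo/orders.csv")       # print the file's contents
```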

Then you'll be introduced to Sqoop Import: you will learn the lifecycle of a Sqoop command and how to use the import command to migrate data from MySQL to HDFS and from MySQL to Hive, and much more.
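
Sqoop itself is a command-line tool. As a rough sketch of the kind of import the course walks through, the snippet below assembles a MySQL-to-HDFS import in Python and runs it with subprocess; the database, table, credentials, and paths (retail_db, orders, and so on) are hypothetical placeholders, and sqoop is assumed to be installed on a configured Hadoop client.

```python
import subprocess

# Hypothetical MySQL connection details; replace with your own instance.
CONNECT = "jdbc:mysql://localhost:3306/retail_db"

# Plain import: MySQL table -> HDFS directory. --split-by names the column
# Sqoop uses to partition the table across the 4 parallel map tasks;
# --boundary-query can override the min/max query it runs to compute those
# split ranges. Adding --hive-import --hive-table would land the same data
# in a Hive table instead of a bare HDFS directory.
subprocess.run([
    "sqoop", "import",
    "--connect", CONNECT,
    "--username", "retail_user", "--password", "retail_pass",
    "--table", "orders",
    "--target-dir", "/user/cloudera/orders",
    "--split-by", "order_id",
    "--num-mappers", "4",
], check=True)

# Incremental append: each run pulls only rows whose --check-column value
# exceeds the --last-value recorded from the previous run.
subprocess.run([
    "sqoop", "import",
    "--connect", CONNECT,
    "--username", "retail_user", "--password", "retail_pass",
    "--table", "orders",
    "--target-dir", "/user/cloudera/orders",
    "--incremental", "append",
    "--check-column", "order_id",
    "--last-value", "0",
], check=True)
```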

In addition, you will learn how to use Sqoop Export to migrate data from HDFS back to a relational database, and Apache Flume to ingest streaming data.
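
For the reverse direction, Sqoop export reads files from an HDFS directory and writes them back as rows in a relational table. A minimal sketch under the same hypothetical names, assuming the MySQL table order_totals already exists:

```python
import subprocess

# Export HDFS files back into an existing MySQL table. Sqoop parses each
# line in the export directory (fields comma-delimited here) and inserts
# it as a row in the target table.
subprocess.run([
    "sqoop", "export",
    "--connect", "jdbc:mysql://localhost:3306/retail_db",
    "--username", "retail_user", "--password", "retail_pass",
    "--table", "order_totals",
    "--export-dir", "/user/cloudera/order_totals",
    "--input-fields-terminated-by", ",",
], check=True)
```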

The Apache Hive section introduces Hive and covers external and managed tables, working with different file formats, Parquet and Avro, and more.
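
The managed-versus-external distinction comes down to who owns the data files: dropping a managed table deletes its data, while dropping an external table leaves the files at its LOCATION untouched. A minimal sketch using Spark's built-in Hive support, with hypothetical table names and paths:

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-tables-demo")
         .enableHiveSupport()
         .getOrCreate())

# Managed table: Hive owns the storage; DROP TABLE removes the data too.
spark.sql("""
    CREATE TABLE IF NOT EXISTS orders_managed (
        order_id INT, order_status STRING
    ) STORED AS PARQUET
""")

# External table: only metadata is registered; DROP TABLE leaves the files
# at LOCATION in place. The path below is a placeholder.
spark.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS orders_external (
        order_id INT, order_status STRING
    ) STORED AS PARQUET
    LOCATION '/user/cloudera/orders_parquet'
""")
```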

In the final sections, you will learn about Spark DataFrames, Spark SQL, and a lot more.
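
As a taste of that material, the sketch below reads a CSV file into a DataFrame, writes it back out as Snappy-compressed Parquet, and runs the same aggregation through both the DataFrame API and Spark SQL; the paths and column names are made up for the example.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dataframe-demo").getOrCreate()

# Read a CSV file into a DataFrame, inferring the schema from the data.
orders = (spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("/user/cloudera/orders.csv"))

# Write it back out as Parquet with Snappy compression.
(orders.write
 .option("compression", "snappy")
 .mode("overwrite")
 .parquet("/user/cloudera/orders_parquet"))

# The same aggregation two ways: first the DataFrame API...
orders.groupBy("order_status").agg(F.count("*").alias("n")).show()

# ...then Spark SQL against a temporary view.
orders.createOrReplaceTempView("orders")
spark.sql("SELECT order_status, COUNT(*) AS n FROM orders GROUP BY order_status").show()
```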

All the code and supporting files are available at:

Features

- Learn Sqoop, Flume, and Hive, and prepare successfully for the CCA175 and Hortonworks Spark certifications
- Learn about the Hadoop Distributed File System (HDFS) and the Hadoop commands needed to work effectively with HDFS

