
Welcome to the Effective Data Pipelines Series

(previously Scalable Analytics with Apache Hadoop and Spark)

Six essential courses on the path to scalable data science pipeline nirvana, or at least a good start

Click on a course name for availability and further information. New courses are being added. For best results, take the courses in the recommended order shown below. Course 1 and courses 2&3 can be taken in either order. Course 4 builds on courses 1, 2, and 3. Course 5 builds on, and assumes competence with, the topics in courses 1 through 4.

NOTE: If the link does not lead you to the class, it has not yet been scheduled. Check back at a future date. Also, two new courses in the series (including Kafka and Data Engineering coverage) are coming in the new year.

1 Apache Hadoop, Spark, and Kafka Foundations: Effective Data Pipelines - A non-programming introduction to the Hadoop big data ecosystem, covering Hadoop, HDFS, MapReduce, Spark, and Kafka. (3 hours-1 day)
2 Beginning Linux Command Line for Data Engineers and Analysts: Effective Data Pipelines - Quickly learn the essentials of using the Linux command line on Hadoop/Spark clusters. Move files, run applications, write scripts, and navigate the Linux command line interface used on almost all modern analytics clusters. Students can download and run examples on the “Linux Hadoop Minimal” virtual machine (see below). (3 hours-1 day)
3 Intermediate Linux Command Line for Data Engineers and Analysts: Effective Data Pipelines - This course is a continuation of Beginning Linux Command Line for Data Engineers and Analysts covering more advanced topics.
4 Hands-on Introduction to Apache Hadoop and Spark Programming - A hands-on introduction to using Hadoop, Pig, Hive, Sqoop, Spark, and Zeppelin notebooks. Students can download and run examples on the “Linux Hadoop Minimal” virtual machine (see below). (6 hours-2 days)
5 Scalable Data Science with Hadoop and Spark - Learn how to apply Hadoop and Spark tools to predict airline delays. All programming will be done using Hadoop and Spark with the Zeppelin web notebook on a four-node cluster. The notebook will be made available for download so students can reproduce the examples. (3 hours-1 day)
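The command-line essentials taught in courses 2 and 3 can be previewed with a short session like the one below. All file and directory names are illustrative examples, not course materials.

```shell
# Create a working directory and a small sample data file (names are illustrative)
mkdir -p demo/raw
echo "id,delay" > demo/flights.csv
mv demo/flights.csv demo/raw/      # move a file between directories
ls demo/raw                        # list directory contents

# Write a small script, make it executable, and run it
cat > demo/count_lines.sh <<'EOF'
#!/bin/sh
# Print the number of lines in each file passed as an argument
for f in "$@"; do
    echo "$f: $(wc -l < "$f") line(s)"
done
EOF
chmod +x demo/count_lines.sh
demo/count_lines.sh demo/raw/flights.csv
```

On a Hadoop/Spark cluster the same skills carry over directly, since cluster nodes are administered through this same Linux command-line interface.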

Class Notes for Hands-on Introduction to Apache Hadoop and Spark Programming

(Updated 03-June-2019)

Class Notes for Practical Linux Command Line for Data Engineers and Analysts

(Updated 19-Mar-2019)

Zeppelin Notebook for Scalable Data Science with Hadoop and Spark

(Updated 20-Aug-2019)


DOS to Linux and Hadoop HDFS Help:
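As a quick orientation for the DOS-to-Linux transition mentioned above, a few common command equivalents are sketched below. The HDFS forms assume a running Hadoop installation and are shown as comments only; the Linux forms are runnable anywhere.

```shell
# DOS command         Linux equivalent      HDFS equivalent (needs a running cluster)
# dir                 ls -l                 hdfs dfs -ls
# copy a.txt b.txt    cp a.txt b.txt        hdfs dfs -cp a.txt b.txt
# type a.txt          cat a.txt             hdfs dfs -cat a.txt
# del a.txt           rm a.txt              hdfs dfs -rm a.txt
# md newdir           mkdir newdir          hdfs dfs -mkdir newdir

# Runnable Linux examples of the same commands (file names are illustrative):
mkdir -p dosdemo                  # DOS: md dosdemo
echo "hello" > dosdemo/a.txt      # create a test file
cp dosdemo/a.txt dosdemo/b.txt    # DOS: copy
cat dosdemo/b.txt                 # DOS: type
rm dosdemo/a.txt                  # DOS: del
ls -l dosdemo                     # DOS: dir
```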

Linux Hadoop Minimal (LHM) Virtual Machine Sandbox

(Current version 0.42, 03-June-2019) Not yet ready for the Scalable Data Science with Hadoop and Spark course (coming soon)

Used for Hands-on, Command Line, and Scalable Data Science courses above. Note: This VM can also be used for the Hadoop and Spark Fundamentals: LiveLessons video mentioned below.


Cloudera-Hortonworks HDP Sandbox

The Cloudera-Hortonworks HDP Sandbox is a full-featured Hadoop/Spark virtual machine that runs under Docker, VirtualBox, or VMware. Please see Cloudera/Hortonworks HDP Sandbox for more information. Due to the number of applications it includes, the HDP Sandbox can require substantial resources to run.


Zeppelin Web Notebook

For those taking the Scalable Data Science course, a 30-day web-based Zeppelin Notebook is available from Basement Supercomputing. Please use the Sign Up Form to get access to the notebook.


Other Resources for all Classes

Contact

For further questions or help with the Linux Hadoop Minimal Virtual Machine please email d...@b...g.com


Unless otherwise noted, all course content, notes, and examples © Copyright Basement Supercomputing 2019, All rights reserved.

start.txt · Last modified: 2020/01/03 17:55 by deadline