//Page linux_hadoop_minimal_installation_instructions, revised 2019/06/11 by deadline; current revision 2020/05/21 by deadline.//
=====Linux Hadoop Minimal Installation Instructions=====

**Version:** 0.42\\
**Date:** June 3, 2019\\
**Author:** Douglas Eadline\\
**Email:** deadline(you know what goes here)basement-supercomputing.com

**Unless otherwise noted, all course content, notes, and examples are
(c) Copyright Basement Supercomputing 2019. All rights reserved.**
====What Is This?====
The Linux Hadoop Minimal is a virtual machine (VM) that can be used to
try the examples presented in the following on-line courses:

  * the "Hands-on Hadoop and Spark" course
  * the "Linux Command Line" course

It can also be used for the examples provided in the companion on-line
video tutorial (14+ hours).

The machine has many important Hadoop and Spark packages installed and
at the same time tries to keep the resource usage as low as possible
so the VM can be used on most laptops. (See below for resource recommendations.)
To learn more about the course and my other analytics books and videos, go to
the Basement Supercomputing web site.
PLEASE NOTE: This version of Linux Hadoop Minimal (LHM) is still considered
"beta." Please report any problems to
deadline(you know what goes here)basement-supercomputing.com.
====Student Usage====

If you have taken one of the above courses, you can download
the NOTES.txt files, examples, and data archive directly to the VM
using ''wget''. The archives are available in tar (tgz) and
Zip (zip) format. It is recommended that you either make a new user account
or use the "hands-on" account (see below for more on this account).
For instance, to download and extract the archive for the "Hands-on Hadoop and Spark" course (do this within the VM):

  wget https://
  tar xvzf Hands_On_Hadoop_Spark-V1.5.1.tgz
Similarly, for the "Linux Command Line" course (do this within the VM):

  tar xvzf Linux-Command-Line-V1.0.tgz
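The ''tar xvzf'' step above simply unpacks a gzip-compressed tar archive. If you want to see the whole round trip, here is a self-contained sketch with a throw-away archive (all the names here are invented for the demo; the real course archives follow the naming shown above):

```shell
# Create a small directory, pack it the way the course archives are packed,
# then extract it the way the instructions above do.
mkdir -p /tmp/demo-course/data
echo 'sample notes' > /tmp/demo-course/NOTES.txt

tar czf /tmp/Demo-Course-V1.0.tgz -C /tmp demo-course   # pack (like the distributed .tgz)
rm -rf /tmp/demo-course                                 # pretend we are on a fresh VM
tar xvzf /tmp/Demo-Course-V1.0.tgz -C /tmp              # x=extract, v=verbose, z=gunzip, f=file

cat /tmp/demo-course/NOTES.txt
```

The ''-C /tmp'' option is only there to keep the demo out of the current directory; in the VM you would normally extract in your home directory as shown above.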
If you want to move files from your local machine to the VM, then you can use ''scp''
on your host. (''scp'' is natively available on Linux and Macintosh systems; it is part of the
MobaXterm package on Windows.)

  scp -P2222 SOURCE-FILE USERNAME@127.0.0.1:

''USERNAME'' is a valid account on the VM. There is a user account called "hands-on" that can
be used for most of the examples. Therefore, the command to copy a file (''SOURCE-FILE'') from your
host system to the VM is (it places the file in ''/home/hands-on''):

  scp -P2222 SOURCE-FILE hands-on@127.0.0.1:

See the "Connect From Your Local Machine to the LHM Sandbox" section below for more information
on using ''ssh'' and ''scp''.
====General Usage Notes====

1. The Linux Hadoop Minimal includes the following Apache software. Note: Spark 1.6.3 is installed because later versions need Python 2.7+ (not available in CentOS).\\
<code>
CentOS Linux 6.9 minimal
Apache Hadoop 2.8.1
Apache Pig 0.17.0
Apache Hive 2.3.2
Apache Spark 1.6.3
Apache Derby 10.13.1.1
Apache Zeppelin 0.7.3
Apache Sqoop 1.4.7
Apache Flume 1.8.0
</code>
2. The Linux Hadoop Minimal has been tested with VirtualBox on Linux, MacOS 10.12, and Windows 10
Home edition. It has not been tested with VMware.

3. The Linux Hadoop Minimal Virtual Machine is designed to work on minimal hardware.
The VM is set to use 2.5G of memory, so at a MINIMUM your system needs at least that much memory to spare.
This configuration will cause some applications to swap to disk, but it should work on most systems.
(If you are thinking of using the Hortonworks sandbox, then 4+ cores and 16+ GB of memory are
recommended.)

4. The above packages have not been fully tested, although all of the examples from the course work.
====Installation Steps====

**Step 1:** Download and install VirtualBox for your environment. VirtualBox is freely available.
Note: Some Windows environments may need the Extension Pack; see the VirtualBox web site for details.

**Step 2:** Follow the installation instructions for your Operating System environment. For Red Hat based systems this
page, https://tecadmin.net/install-oracle-virtualbox-on-centos-redhat-and-fedora, is helpful. With Linux
there are some dependencies on kernel versions and modules that need to be addressed.

If you are using Windows, you will need an "ssh client." Two popular options are listed below.
They are both freely available at no cost. (MobaXterm is recommended.)
  * Putty
  * MobaXterm: http://mobaxterm.mobatek.net (provides a terminal for ssh sessions and allows remote X Windows sessions)

**Step 3:** Make sure hardware virtualization is enabled in your BIOS.

**Step 4:** Download the Linux-Hadoop-Minimal OVA image and load it into VirtualBox. (NOTE: a newer version may be available.)

**Step 5:** Start the VM. All the essential Hadoop services should be started automatically.
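For Step 3, on a Linux host you can check for the CPU virtualization capability from the command line before rebooting into the BIOS setup. This is a sketch, not part of the official instructions; the helper name and sample file are invented, and on a real host you would point the function at ''/proc/cpuinfo'':

```shell
# Count hardware-virtualization CPU flags in a cpuinfo-style file.
# vmx = Intel VT-x, svm = AMD-V; a count of 0 suggests the CPU lacks
# the feature (the BIOS enable/disable switch is checked separately,
# in the BIOS setup screen itself).
count_virt_flags() {
    grep -o -E 'vmx|svm' "$1" | wc -l
}

# Demonstrate with a fabricated one-line cpuinfo excerpt:
printf 'flags\t\t: fpu pae vmx sse2\n' > /tmp/cpuinfo.sample
count_virt_flags /tmp/cpuinfo.sample
```

On a real host you would run ''count_virt_flags /proc/cpuinfo''; any non-zero count means the CPU supports hardware virtualization.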
====Connect From Your Local Machine to the LHM Sandbox====
It is possible to login and use the sandbox from the VirtualBox terminal; however, you will have much
more flexibility with local terminals. Follow the instructions below for local terminal access.
As a test, open a text terminal and connect to the sandbox as the root user with ''ssh''. Macintosh and
Linux machines have ''ssh'' and a terminal installed; for Windows see above (Putty or MobaXterm).

The root password is: **hadoop**

  ssh root@127.0.0.1 -p 2222

You should now be in the ''/root'' directory.
To confirm all the Hadoop daemons have started, enter ''jps'' at the command prompt.
The results should list the 10 daemons, a portion of which is shown below (process numbers will be different).
<code>
# jps
1938 NetworkServerControl
2036 ZeppelinServer
1841 NodeManager
2445 Jps
</code>
====Copying Files In and Out of the Virtual Machine====

To copy a file from your LOCAL MACHINE into the VM, use the ''scp'' command.
For instance, to copy the file ''SOURCE-FILE'' from your
LOCAL MACHINE to the "hands-on" account in the VM, enter the following
(the command places the file in ''/home/hands-on''):

  scp -P2222 SOURCE-FILE hands-on@127.0.0.1:

To be clear, the above command is run on your LOCAL MACHINE.
On Macintosh and Linux systems run this from a terminal. On Windows
run it from MobaXterm.
To copy a file from the VM to your LOCAL MACHINE and place it
in your current directory, use the following (don't forget the "."):

  scp -P2222 hands-on@127.0.0.1:/home/hands-on/SOURCE-FILE .

To be clear, the above command is run on your LOCAL MACHINE.
+ | |||
+ | On Windows, the data will be placed in the MobaXterm " | ||
- | On Windows, the data will be placed in the MobaXterm " | ||
- | Home Directory." | ||
- | this would be the following: | ||
C: | C: | ||
====Adding Users====

As configured, the LHM comes with one general user account. The account is called **hands-on** and the
password is **minimal**. **It is highly recommended that this account be used for the class examples.**
Remember you need to be user ''hdfs'' to do any administrative work in HDFS. The hdfs account has no
password. To become the hdfs user, log in as root and issue a ''su - hdfs'' command.

Warning: Running the examples as the root account will not work and is not recommended.

To add yourself as a user with a different user name, follow these steps.
**Step 1.** As root, do the following to create a user and add a password:
<code>
useradd -G hadoop USERNAME
passwd USERNAME
</code>
**Step 2.** These steps change to user hdfs and create the user directory in HDFS (as root):
<code>
su - hdfs
hdfs dfs -mkdir /user/USERNAME
hdfs dfs -chown USERNAME:hadoop /user/USERNAME
exit
</code>
**Step 3.** Logout and login to the new account.
====Web Access====

The various web interfaces shown in class (the HDFS web interface, the YARN Jobs web interface, and the
Zeppelin Web Notebook) are available from your local browser. Enter the desired URL and the VM should respond.
The Zeppelin interface is not configured (i.e. it is run in anonymous mode without the need to log in).

==== Getting Data into Zeppelin====
If you want to load your own data into a Zeppelin notebook, place the data in the zeppelin account under ''/home/zeppelin''.
Login as root to place data in this account, then change the ownership to the zeppelin user. For example:

  # cp DATA /home/zeppelin
  # chown zeppelin:zeppelin /home/zeppelin/DATA

This location is the default path for the Zeppelin interpreter (run ''pwd'' to confirm the path).
==== Database for Sqoop Example====

MySQL has been installed in the VM. The World database used in the Sqoop example from the class
has been preloaded into MySQL.
====Log Files====

There is currently no logfile management, and the log directories may fill up and use the sandbox storage.
There is a ''clean-logs.sh'' script in the VM. This script will remove most of the Hadoop/Spark log files.
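As an illustration of what such a cleanup typically does (this is a sketch with made-up directory and file names, not the actual contents of ''clean-logs.sh''), ''find'' can delete log files older than a cutoff:

```shell
# Demonstration of time-based log cleanup in a throw-away directory.
LOGDIR=/tmp/lhm-log-demo
mkdir -p "$LOGDIR"
touch -d '10 days ago' "$LOGDIR/stale-datanode.log"  # pretend old daemon log
touch "$LOGDIR/fresh-namenode.log"                   # pretend current daemon log

# Delete *.log files not modified in the last 7 days:
find "$LOGDIR" -name '*.log' -mtime +7 -delete

ls "$LOGDIR"
```

Only the stale file is removed; the recently touched log survives. Pointing a command like this at the real Hadoop/Spark log directories is exactly the kind of thing the provided script does, so inspect the script before relying on it.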
=====Stopping and Starting the Hadoop Daemons=====

The Hadoop Daemons are started by a set of scripts that are run when the system boots.
The actual scripts are in ''/usr/sbin'' and are very simple with no checking.
If you are knowledgeable, these scripts can be inspected for errors and issues.
The scripts are run in a fixed order.
A corresponding "stop script" exists for each "start script" in ''/usr/sbin''.
As mentioned, if all the scripts are running, the ''jps'' command
(run as root) should show the following (process numbers will be different).
The RunJar entries are for the ''hiveserver2'' and ''hive-metastore'' processes.

  # jps
For YARN to be running correctly, the following daemons need to be running:

  ResourceManager
  JobHistoryServer
  NodeManager
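This kind of daemon check can be scripted. The sketch below is illustrative (the function and file names are invented): save the ''jps'' output to a file, then report any required YARN daemons missing from it.

```shell
# Report required YARN daemons that do not appear in saved jps output.
# On the VM you would first run (as root):  jps > /tmp/jps.out
check_yarn_daemons() {
    missing=""
    for d in ResourceManager JobHistoryServer NodeManager; do
        grep -q "$d" "$1" || missing="$missing $d"
    done
    echo "missing:$missing"
}

# Demonstrate against fabricated jps output with one daemon stopped:
printf '1760 ResourceManager\n1841 NodeManager\n2445 Jps\n' > /tmp/jps.out
check_yarn_daemons /tmp/jps.out
```

With the fabricated input above, the function reports that JobHistoryServer is missing; an empty "missing:" line means all three daemons were found.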
A local metadata database (called Derby) is needed for Hive. If
the ''NetworkServerControl'' daemon is not running, restart
the Derby daemon using its start script in ''/usr/sbin''.
So that Spark can use Hive tables, the Hive and Derby services must be running.
To stop and restart the services, use the corresponding stop and start scripts
in ''/usr/sbin'', in the proper order.
Finally, if the Zeppelin web page cannot be reached, the Zeppelin daemon
may need to be restarted. If the problem persists, send an email
and describe the situation.

When the VM is stopped (see below) with "Save State," the daemons will
still be running when the VM is started again.
====Stopping the VM====

To stop the VM, close the VM window and
select the "Save State" option. The next time the machine starts it will have all the
changes you made.
====VM Installation Documentation====

Please see the ''/root/Hadoop-Minimal-Install-Notes'' directory for how the packages were installed.
+ | |||
+ | ====Issues/ | ||
+ | |||
+ | These issues have been addressed in the current version of the VM. Please use the lasted VM and you can avoid these issues. | ||
+ | |||
+ | 1. If you have problems loading the OVA image into VirtualBox, check the MD5 signature of the OVA file. The MD5 signature returned by running the program below should match the signature provided [[https:// | ||
+ | |||
+ | For **Linux** use " | ||
+ | |||
+ | $ md5sum Linux-Hadoop-Minimal-0.42.ova | ||
+ | |||
+ | For **Macintosh** use " | ||
- | Please see /root/Hadoop-Minimal-Install-Notes directory for how the packages were installed. | + | $ md5 Linux-Hadoop-Minimal-0.42.ova |
For **Windows 10** (in PowerShell) use "Get-FileHash":

  C:\> Get-FileHash Linux-Hadoop-Minimal-0.42.ova -Algorithm MD5
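On Linux, the comparison against the published signature can also be automated with ''md5sum -c''. The sketch below uses a stand-in file and a locally computed checksum so it is self-contained; in real use the checksum string would come from the download page:

```shell
# Verify a file against a known MD5 string using md5sum -c.
echo 'stand-in OVA contents' > /tmp/demo.ova

# In real use, "published" would be the signature from the download page;
# here we compute it ourselves so the example is self-contained.
published=$(md5sum /tmp/demo.ova | awk '{print $1}')

# md5sum -c reads "checksum  filename" pairs from stdin and prints OK/FAILED.
echo "$published  /tmp/demo.ova" | md5sum -c -
```

''md5sum -c'' also sets a non-zero exit status on a mismatch, which makes it convenient in scripts. Note the two spaces between the checksum and the file name; that is the format ''md5sum'' itself emits.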
2. Either create your own user account as described above or use the existing "hands-on" account.
The examples will not work if run as the root account.
3. If zip is not installed on your version of the VM, you can install it by entering
the following, as root, and answering "y" when asked:

  # yum install zip
4. In previous versions there is a permission issue in HDFS that prevents Hive jobs from working.
To fix it, perform the following steps:

a) Login to the VM as root (pw="hadoop"):

  ssh root@127.0.0.1 -p 2222

b) Then change to the hdfs user:

  su - hdfs

c) Fix the permission error:

  hdfs dfs -chmod o+w /user/hive/warehouse

d) Check the result:

  hdfs dfs -ls /user/hive

e) The output of the previous command should look like:

  Found 1 items

f) Exit out of the hdfs account:

  exit

g) Exit out of the root account:

  exit