
Objective

This tutorial walks you through installing the Docker version of the HDP 2.5 Hortonworks Sandbox on a Mac. It is part one of a two-part series; the second article can be found here: HCC Article

Prerequisites

  • You should already have Docker for Mac installed (read more here: Docker for Mac).
  • You should already have downloaded the Docker version of the Hortonworks Sandbox (read more here: Hortonworks Sandbox).

Scope

This tutorial was tested using the following environment and components:

  • Mac OS X 10.11.6
  • HDP 2.5 on Hortonworks Sandbox (Docker Version)
  • Docker for Mac 1.12.1

NOTE: You should adjust your Docker configuration to provide at least 8GB of RAM. I personally find the sandbox runs better with 10-12GB of RAM. You can follow this article for more information: https://hortonworks.com/tutorial/sandbox-deployment-and-install-guide/section/3/#for-mac
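
If you want to confirm how much memory Docker actually has after adjusting the setting, one quick check from the terminal (standard Docker CLI output, nothing sandbox-specific) is:

$ docker info | grep -i "total memory"

The Total Memory value reported should match what you configured in the Docker preferences.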

Steps

1. Ensure the Docker daemon is running. You can verify by typing:

$ docker images

You should see something similar to this:

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

If your Docker daemon is not running, you may see the following:

$ docker images
Error response from daemon: Bad response from Docker engine
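
Another quick way to check the daemon is docker version, which prints both client and server details when the daemon is reachable and a clear error for the server section when it is not:

$ docker version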

2. Load the Hortonworks sandbox image into Docker:

$ docker load < HDP_2.5_docker.tar.gz

You should see something similar to this:

$ docker load < HDP_2.5_docker.tar.gz
b1b065555b8a: Loading layer [==================================================>] 202.2 MB/202.2 MB
0b547722f59f: Loading layer [==================================================>] 13.84 GB/13.84 GB
99d7327952e0: Loading layer [==================================================>] 234.8 MB/234.8 MB
294b1c0e07bd: Loading layer [==================================================>] 207.5 MB/207.5 MB
fd5c10f2f1a1: Loading layer [==================================================>] 387.6 kB/387.6 kB
6852ef70321d: Loading layer [==================================================>]   163 MB/163 MB
517f170bbf7f: Loading layer [==================================================>] 20.98 MB/20.98 MB
665edb80fc91: Loading layer [==================================================>] 337.4 kB/337.4 kB
Loaded image: sandbox:latest

3. Verify the image was successfully imported:

$ docker images

You should see something similar to this:

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
sandbox             latest              fc813bdc4bdd        3 days ago          14.57 GB

4. Start the container: The first time you start the container, you need to create it via the docker run command, which both creates and starts it.

$ docker run -v hadoop:/hadoop --name sandbox --hostname "sandbox.hortonworks.com" --privileged -d \
-p 6080:6080 \
-p 9090:9090 \
-p 9000:9000 \
-p 8000:8000 \
-p 8020:8020 \
-p 42111:42111 \
-p 10500:10500 \
-p 16030:16030 \
-p 8042:8042 \
-p 8040:8040 \
-p 2100:2100 \
-p 4200:4200 \
-p 4040:4040 \
-p 8050:8050 \
-p 9996:9996 \
-p 9995:9995 \
-p 8080:8080 \
-p 8088:8088 \
-p 8886:8886 \
-p 8889:8889 \
-p 8443:8443 \
-p 8744:8744 \
-p 8888:8888 \
-p 8188:8188 \
-p 8983:8983 \
-p 1000:1000 \
-p 1100:1100 \
-p 11000:11000 \
-p 10001:10001 \
-p 15000:15000 \
-p 10000:10000 \
-p 8993:8993 \
-p 1988:1988 \
-p 5007:5007 \
-p 50070:50070 \
-p 19888:19888 \
-p 16010:16010 \
-p 50111:50111 \
-p 50075:50075 \
-p 50095:50095 \
-p 18080:18080 \
-p 60000:60000 \
-p 8090:8090 \
-p 8091:8091 \
-p 8005:8005 \
-p 8086:8086 \
-p 8082:8082 \
-p 60080:60080 \
-p 8765:8765 \
-p 5011:5011 \
-p 6001:6001 \
-p 6003:6003 \
-p 6008:6008 \
-p 1220:1220 \
-p 21000:21000 \
-p 6188:6188 \
-p 61888:61888 \
-p 2181:2181 \
-p 2222:22 \
sandbox /usr/sbin/sshd -D
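
One thing worth noting: because this command publishes a large number of host ports, docker run will fail if any of them is already in use on your Mac (Docker typically reports a "port is already allocated" error). You can check what is listening on a given host port before retrying; port 8080 is used here purely as an example:

$ lsof -nP -iTCP:8080 -sTCP:LISTEN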

Note: Mounting local drives to the sandbox

If you would like to mount local drives from the host into your sandbox, you need to add another -v option to the command above. I typically recommend creating a working directory for each of your Docker containers, such as /Users/<username>/Development/sandbox or /Users/<username>/Development/hdp25-demo-sandbox. You can then copy the docker run command above into a script called create_container.sh in that directory, changing the --name option so it is unique and corresponds to the directory the script lives in.

Let's look at an example. In this scenario I'm going to create a directory called /Users/<username>/Development/hdp25-demo-sandbox and create my create_container.sh script inside it. The first line of that script will be:

$ docker run -v `pwd`:`pwd` -v hadoop:/hadoop --name hdp25-demo-sandbox --hostname "sandbox.hortonworks.com" --privileged -d \

Once the container is running, you will notice that it has /Users/<username>/Development/hdp25-demo-sandbox as a mount. This is similar in concept to the /vagrant mount when using Vagrant, and it lets you easily share data between the container and your host without having to copy it around.
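
Putting the pieces together, a minimal version of this workflow might look like the following (the directory and script name are simply the ones used above; adjust them to suit your own setup):

$ mkdir -p /Users/<username>/Development/hdp25-demo-sandbox
$ cd /Users/<username>/Development/hdp25-demo-sandbox
$ vi create_container.sh      # paste in the docker run command, with the -v and --name changes described above
$ chmod +x create_container.sh
$ ./create_container.sh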

Once the container is created and running, Docker will display a CONTAINER ID for the container. You should see something similar to this:

fe57fe79f795905daa50191f92ad1f589c91043a30f7153899213a0cadaa5631

For all future container starts, you only need to run the docker start command:

$ docker start sandbox

Notice that sandbox is the name of the container in the run command used above.

If you give the container the same name as its project directory, like hdp25-demo-sandbox above, it is easier to remember the container name. Alternatively, you can create a start_container.sh script that contains the start command above, and a matching stop_container.sh script that stops the container, as shown below.
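
For example, a minimal pair of scripts (assuming the container name hdp25-demo-sandbox from the example above) could look like this:

$ cat start_container.sh
#!/bin/bash
# starts the existing sandbox container by name
docker start hdp25-demo-sandbox

$ cat stop_container.sh
#!/bin/bash
# stops the running sandbox container by name
docker stop hdp25-demo-sandbox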

5. Verify the container is running:

$ docker ps

You should see something similar to this:

$ docker ps
CONTAINER ID        IMAGE               COMMAND               CREATED             STATUS              PORTS             NAMES
85d7ec7201d8        sandbox             "/usr/sbin/sshd -D"   31 seconds ago      Up 27 seconds       0.0.0.0:1000->1000/tcp, 0.0.0.0:1100->1100/tcp, 0.0.0.0:1220->1220/tcp, 0.0.0.0:1988->1988/tcp, 0.0.0.0:2100->2100/tcp, 0.0.0.0:4040->4040/tcp, 0.0.0.0:4200->4200/tcp, 0.0.0.0:5007->5007/tcp, 0.0.0.0:5011->5011/tcp, 0.0.0.0:6001->6001/tcp, 0.0.0.0:6003->6003/tcp, 0.0.0.0:6008->6008/tcp, 0.0.0.0:6080->6080/tcp, 0.0.0.0:6188->6188/tcp, 0.0.0.0:8000->8000/tcp, 0.0.0.0:8005->8005/tcp, 0.0.0.0:8020->8020/tcp, 0.0.0.0:8040->8040/tcp, 0.0.0.0:8042->8042/tcp, 0.0.0.0:8050->8050/tcp, 0.0.0.0:8080->8080/tcp, 0.0.0.0:8082->8082/tcp, 0.0.0.0:8086->8086/tcp, 0.0.0.0:8088->8088/tcp, 0.0.0.0:8090-8091->8090-8091/tcp, 0.0.0.0:8188->8188/tcp, 0.0.0.0:8443->8443/tcp, 0.0.0.0:8744->8744/tcp, 0.0.0.0:8765->8765/tcp, 0.0.0.0:8886->8886/tcp, 0.0.0.0:8888-8889->8888-8889/tcp, 0.0.0.0:8983->8983/tcp, 0.0.0.0:8993->8993/tcp, 0.0.0.0:9000->9000/tcp, 0.0.0.0:9090->9090/tcp, 0.0.0.0:9995-9996->9995-9996/tcp, 0.0.0.0:10000-10001->10000-10001/tcp, 0.0.0.0:10500->10500/tcp, 0.0.0.0:11000->11000/tcp, 0.0.0.0:15000->15000/tcp, 0.0.0.0:16010->16010/tcp, 0.0.0.0:16030->16030/tcp, 0.0.0.0:18080->18080/tcp, 0.0.0.0:19888->19888/tcp, 0.0.0.0:21000->21000/tcp, 0.0.0.0:42111->42111/tcp, 0.0.0.0:50070->50070/tcp, 0.0.0.0:50075->50075/tcp, 0.0.0.0:50095->50095/tcp, 0.0.0.0:50111->50111/tcp, 0.0.0.0:60000->60000/tcp, 0.0.0.0:60080->60080/tcp, 0.0.0.0:61888->61888/tcp, 0.0.0.0:2222->22/tcp   sandbox

Notice the CONTAINER ID is the shortened version of the ID displayed when you ran the run command.
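
If you ever need the full ID again, docker inspect can print it for a named container:

$ docker inspect --format '{{.Id}}' sandbox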

6. Stop the container: Once the container is running, you can stop it using the following command:

$ docker stop sandbox
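
Stopping the container does not delete it; it just shuts it down. You can confirm it still exists (in an Exited state) with:

$ docker ps -a --filter name=sandbox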

7. Connect to the container: You connect to the container via ssh using the following command:

$ ssh -p 2222 root@localhost

The root password for the container is hadoop. The first time you log into the container, you will be prompted to change it.
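
If you find yourself connecting often, an optional convenience is an entry in your ~/.ssh/config so a short alias replaces the full command. The alias name hdp-sandbox below is just an example:

$ cat >> ~/.ssh/config <<'EOF'
Host hdp-sandbox
    HostName localhost
    Port 2222
    User root
EOF
$ ssh hdp-sandbox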

8. Start the sandbox services: The Ambari and HDP services do not start automatically when you start the Docker container. You need to start them with a script.

$ ssh -p 2222 root@localhost
# /etc/init.d/startup_script start

You should see something similar to this (you can ignore any warnings):

# /etc/init.d/startup_script start
Starting tutorials...                                      [  Ok  ]
Starting startup_script...
Starting HDP ...
Starting mysql                                            [  OK  ]
Starting Flume                                            [  OK  ]
Starting Postgre SQL                                      [  OK  ]
Starting Ranger-admin                                     [  OK  ]
Starting name node                                        [  OK  ]
Starting Ranger-usersync                                  [  OK  ]
Starting data node                                        [  OK  ]
Starting Zookeeper nodes                                  [  OK  ]
Starting Oozie                                            [  OK  ]
Starting Ambari server                                    [  OK  ]
Starting NFS portmap                                      [  OK  ]
Starting Hdfs nfs                                         [  OK  ]
Starting Hive server                                      [  OK  ]
Starting Hiveserver2                                      [  OK  ]
Starting Ambari agent                                     [  OK  ]
Starting Node manager                                     [  OK  ]
Starting Yarn history server                              [  OK  ]
Starting Resource manager                                 [  OK  ]
Starting Webhcat server                                   [  OK  ]
Starting Spark                                            [  OK  ]
Starting Mapred history server                            [  OK  ]
Starting Zeppelin                                         [  OK  ]
Safe mode is OFF
Starting sandbox...
./startup_script: line 97: /proc/sys/kernel/hung_task_timeout_secs: No such file or directory
Starting shellinaboxd:                                     [  OK  ]
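
If you want an extra confirmation that Ambari came up, you can check its status from inside the same ssh session (the ambari-server command ships with the sandbox):

# ambari-server status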

9. You can now connect to your HDP instance via a web browser at http://localhost:8888
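
If the page does not load immediately, give the services a minute or two to finish coming up. A quick check from the terminal (curl is preinstalled on macOS) is to look for an HTTP 200 response code:

$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8888

The Ambari UI is published on port 8080 (see the port mappings in step 4), so the same check works for http://localhost:8080 once Ambari has finished starting.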

Comments
New Contributor

Can you update the guide with the correct, latest steps?

Explorer

Can you update the guide? The new sandbox 3.0.1 comes with a script, but after installing it I can't access the ports at all. Ambari also cannot start. Something must have gone wrong, and it seems no one cares to update the guides.

 

I'm trying to learn Hadoop and it's becoming too difficult just to get it running.

New Contributor

How do I install the Docker version of the sandbox on a Mac for hdp-sandbox 2.6.5?

I kept getting "502 Bad Gateway" when trying to open the Ambari page.

Even when I am on the Dashboard page, I end up with many red flags.