Member since: 09-24-2015
Posts: 144
Kudos Received: 72
Solutions: 8
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1316 | 08-15-2017 08:15 AM
 | 6157 | 01-24-2017 06:58 AM
 | 1619 | 08-03-2016 06:45 AM
 | 2914 | 06-01-2016 10:08 PM
 | 2502 | 04-07-2016 10:30 AM
06-01-2016 03:43 AM
I'm a bit confused by SOLR_KERB_PRINCIPAL (SOLR_KERB_PRINCIPAL=HTTP/${SOLR_HOST}@EXAMPLE.COM). In the instructions, we create a service principal with "addprinc -randkey solr/horton04.example.com@EXAMPLE.COM". Can't I use that one for the above? Do I have to use HTTP?
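For reference, this is the kind of kadmin sketch I mean for creating a separate HTTP principal, assuming the Solr host is horton04.example.com as in the instructions (the keytab path is an assumption):
  # Create the HTTP principal alongside the solr one
  kadmin.local -q "addprinc -randkey HTTP/horton04.example.com@EXAMPLE.COM"
  # Export it to a keytab for Solr to use (path is an assumption)
  kadmin.local -q "ktadd -k /etc/security/keytabs/spnego.service.keytab HTTP/horton04.example.com@EXAMPLE.COM"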
04-07-2016 10:30 AM
4 Kudos
Unfortunately, I think you can't delete it (yet) due to https://issues.apache.org/jira/browse/HDFS-9534
03-22-2016 02:00 PM
2 Kudos
It can use the ticket cache if you use loginUserFromSubject.
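For example, a minimal sketch of the flow I mean, assuming a kinit-based setup (the realm and user are placeholders):
  # Populate the Kerberos ticket cache on the OS side
  kinit user@EXAMPLE.COM
  klist   # confirm the cached TGT exists
  # The Java side can then call UserGroupInformation.loginUserFromSubject(null),
  # which can pick up this ticket cache instead of doing a fresh keytab login.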
03-17-2016 02:53 PM
Hi @Gerd Koenig, did you fix this issue? I'm getting an "unable to get client certificate" error.
03-16-2016 03:30 AM
2 Kudos
https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html According to the link above, it seems to accept a carriage-return-separated list.
03-02-2016 09:29 AM
1 Kudo
So, for a hardware replacement, we don't need to worry about recovery? Just shut down the OS?
03-02-2016 08:54 AM
2 Kudos
Unlike replacing a disk for a DataNode, I can't find any information about replacing a disk for a NodeManager. As HDP sets yarn.nodemanager.recovery.enabled = true, my guess is that if I stopped a NodeManager while some containers were running, the jobs related to those containers would wait until the NodeManager was started again, which may not be convenient as it would affect the SLA. If this is true, is there any issue with setting yarn.nodemanager.recovery.enabled = false permanently, so that when a NodeManager is stopped, (my expectation is) the containers would be created on another NodeManager?
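For reference, a rough sketch of how I would check and change this property, assuming an Ambari-managed cluster named c1 on localhost (the cluster name, credentials, and paths are assumptions):
  # Check the current value on a NodeManager host (HDP default config path)
  grep -A1 'yarn.nodemanager.recovery.enabled' /etc/hadoop/conf/yarn-site.xml
  # Change it through Ambari so the change persists (configs.sh ships with Ambari)
  /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set localhost c1 yarn-site "yarn.nodemanager.recovery.enabled" "false"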
Labels:
- Apache YARN
02-23-2016 12:17 AM
6 Kudos
Purpose of this article:
When you install HDP for a dev/test environment, you tend to repeat the same commands to set up your host OS. To save time, I created a BASH script which helps to set up the host OS (Ubuntu only) and a docker image (CentOS).
What this script does:
Installs packages on the Ubuntu host OS
Sets up docker, such as creating an image and spawning containers
[Optional] Sets up a local repository for HDP (not Ambari) with Apache2
What this script does NOT do:
As of this writing, it does not install HDP.
Please use an Ambari Blueprint if you would like to automate the HDP installation as well.
This setup is NOT for production environments but would be useful for testing HA components.
Host set up steps:
Install Ubuntu 14.x LTS on your VirtualBox/VMware/Azure/AWS.
It should be easy to deploy an Ubuntu VM if you use Azure or AWS.
If you are using VirtualBox/VMware, you might want to back up the Ubuntu VM as a template, so that you can clone it later.
Log in to Ubuntu and become root (sudo -i).
Download the script: wget https://raw.githubusercontent.com/hajimeo/samples/master/bash/start_hdp.sh -O ./start_hdp.sh && chmod u+x ./start_hdp.sh
Start the script in Install mode: ./start_hdp.sh -i
The script will ask a few questions, such as your choice of guest OS, Ambari version, HDP version, etc. Normally the default values should be OK, so you can just keep pressing the Enter key.
NOTE: At the end of the interview, it asks you to save your answers in a text file. You can reuse this file to skip the interview when you install a new cluster.
After saving your responses, it will ask "Would you like to start setup this host? [Y]:". If you answer yes, it starts setting up your Ubuntu host OS.
After a while, the script finishes, or if there is any error, it stops.
The time required depends on your choices. If you selected setting up a local repo, downloading the repo may take a long time.
Once the script completes successfully, your chosen version of Ambari Server should be installed and running in the specified docker container on port 8080.
NOTE: At this moment, docker containers are installed in a private network, so you would need to do one of the following ("1" would be the easiest):
1. The following command creates a proxy from your local PC on port 18080: ssh -D 18080 username@ubuntu-hostname
2. The following command forwards your localhost:8080 to node1:8080: ssh -L 8080:node1.localdomain:8080 username@ubuntu-hostname
3. Set up a proper proxy server, such as squid.
If you decided to set up a proxy, installing a browser addon such as "SwitchySharp" would be handy.
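To quickly verify that the tunnel or proxy works, you can check from your local PC (the port and hostname are assumed from the examples above):
  # With the ssh -L tunnel running, Ambari should answer on the forwarded port
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080
  # Expect an HTTP 200 (or a redirect) once Ambari Server is fully up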
Once you have confirmed you can access the Ambari web interface, please proceed to install HDP.
If you chose to set up an HDP local repository, please replace "public-repo-1.hortonworks.com" with "dockerhost1.localdomain" (if you used the default value).
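For example (the exact path depends on your OS and HDP version, so the URL below is only an illustration):
  # Before (public repo):
  #   http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.4.0.0
  # After (local repo on the docker host):
  #   http://dockerhost1.localdomain/HDP/centos6/2.x/updates/2.4.0.0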
The private key should be ~/.ssh/id_rsa on every node.
After this, the steps should be the same as a normal HDP installation.
NOTE: If you decided to install an older Ambari version, there is a known issue: AMBARI-8620
Host start up steps:
If you shut down the VM, next time you can just run "./start_hdp.sh -s", which starts up the containers, Ambari Server, Ambari Agents, and the HDP services.
Japanese version
02-11-2016 12:54 AM
Thank you. I didn't know about "mapred.capacity-scheduler<queue-name>.supports-priority". Is this supported? I can't find any code matching "supports-priority" in the hadoop project...