Member since 04-03-2019

Posts: 962
Kudos Received: 1743
Solutions: 146
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 14984 | 03-08-2019 06:33 PM |
| | 6167 | 02-15-2019 08:47 PM |
| | 5098 | 09-26-2018 06:02 PM |
| | 12581 | 09-07-2018 10:33 PM |
| | 7439 | 04-25-2018 01:55 AM |
01-25-2017 08:44 AM
@manikandan ayyasamy The /user/hue directory is on HDFS, not on the local file system; that's why you are not able to 'cd' to it directly. Please try the command below (as the hue or hdfs user):

hadoop fs -ls /user/hue
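If you then need to inspect a file from that directory on the local file system, a minimal sketch (the file and destination names below are just illustrations):

# List the directory on HDFS, then copy a file down to the local file system.
hadoop fs -ls /user/hue
hadoop fs -get /user/hue/some_file.txt /tmp/some_file.txt   # names are illustrative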
01-24-2017 09:53 AM
1 Kudo
@Kuldeep Mishra It looks like a problem with the "yum" utility while installing the unzip package. Can you please log in to the said node and try installing 'unzip' from the backend using the commands below?

#Command 1
yum clean all
#Command 2
yum install unzip

Once the commands above succeed, please retry the operation from Ambari.
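Before retrying from Ambari, it may be worth confirming that the repos resolve and the package really landed, along these lines:

# Check that the yum repos are reachable and unzip is now installed.
yum repolist enabled
rpm -q unzip    # should print something like unzip-6.0-...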
01-23-2017 05:08 PM
@Baruch AMOUSSOU DJANGBAN - Sure. Please let us know the results 🙂 If it resolves your issue, then please accept my answer.
01-23-2017 04:56 PM
@Baruch AMOUSSOU DJANGBAN Please log in to your Ambari server as the root user --> go to /etc/yum.repos.d/ --> find any conflicting repositories for HDP-2.5.3 --> move all HDP-2.5.3 repos to some other location (outside /etc/yum.repos.d) --> start the upgrade over again --> Ambari should register the repository again.
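A minimal sketch of that cleanup, assuming root on the Ambari server (the backup directory and the exact .repo file names are illustrative; adjust the glob to your files):

# Move the conflicting HDP repo files out of yum's search path,
# then clear yum's cache so it forgets them.
mkdir -p /root/repo-backup          # illustrative backup location
mv /etc/yum.repos.d/HDP*.repo /root/repo-backup/
yum clean all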
01-23-2017 04:08 PM
@Nic Hopper You would need to switch to the 'root' user in order to do su without a password, or you can try 'sudo su hive'.
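For example (assuming your account has sudo rights):

# As root, su needs no password:
su - hive
# As a regular user with sudo rights:
sudo su - hive
whoami    # should print: hive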
01-20-2017 06:58 PM
4 Kudos
In a previous post we saw how to automate HDP installation with Kerberos authentication on a single-node cluster using Ambari Blueprints. In this post, we will see how to deploy a multinode HDP cluster with Kerberos authentication via Ambari Blueprints.

Note - From Ambari 2.6.X onwards, you will have to register a VDF to use an internal repository; otherwise Ambari will pick up the latest version of HDP and use the public repos. Please see the document below for more information. For Ambari versions older than 2.6.X, this guide works without any modifications.

Document - https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-release-notes/content/ambari_relnotes-2.6.0.0-behavioral-changes.html

Below are simple steps to install a multinode HDP cluster with Kerberos authentication (MIT KDC) using an internal repository via Ambari Blueprints.

Step 1: Install the Ambari server using the steps mentioned under the link below.

http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-installation/content/ch_Installing_Ambari.html

Step 2: Register the ambari-agent manually.

Install the ambari-agent package on all nodes in the cluster and set the hostname to the Ambari server host (FQDN) in /etc/ambari-agent/conf/ambari-agent.ini.

Step 3: Install and configure the MIT KDC.

Detailed steps (demo on HDP Sandbox 2.4):

3.1 Clone our GitHub repository on the Ambari server in your HDP cluster.

Note - This script will install and configure the KDC on your Ambari server.

git clone https://github.com/crazyadmins/useful-scripts.git

Sample output:

[root@sandbox ~]# git clone https://github.com/crazyadmins/useful-scripts.git
Initialized empty Git repository in /root/useful-scripts/.git/
remote: Counting objects: 29, done.
remote: Compressing objects: 100% (25/25), done.
remote: Total 29 (delta 4), reused 25 (delta 3), pack-reused 0
Unpacking objects: 100% (29/29), done.

3.2 Go to the useful-scripts/ambari directory.

[root@sandbox ~]# cd useful-scripts/ambari/
[root@sandbox ambari]# ls -lrt
total 16
-rw-r--r-- 1 root root 5701 2016-04-23 20:33 setup_kerberos.sh
-rw-r--r-- 1 root root 748 2016-04-23 20:33 README
-rw-r--r-- 1 root root 366 2016-04-23 20:33 ambari.props
[root@sandbox ambari]#

3.3 Copy setup_only_kdc.sh and ambari.props to the host where you want to set up the KDC server.

3.4 Edit the ambari.props file according to your cluster environment.

Note - In the case of a multinode cluster, please don't forget to add a comma-separated list of hosts as the value of the KERBEROS_CLIENTS variable (not applicable for this post).

Sample output for my Sandbox:

[root@sandbox ambari]# cat ambari.props
CLUSTER_NAME=Sandbox --> You can skip this variable
AMBARI_ADMIN_USER=admin  --> Not required
AMBARI_ADMIN_PASSWORD=admin --> Not required
AMBARI_HOST=sandbox.hortonworks.com --> Required
KDC_HOST=sandbox.hortonworks.com --> Required
REALM=HWX.COM --> Required
KERBEROS_CLIENTS=sandbox.hortonworks.com --> Not required
##### Notes #####
#1. KERBEROS_CLIENTS - Comma separated list of Kerberos clients in case of multinode cluster
#2. Admin principal is admin/admin and password is hadoop
[root@sandbox ambari]#

3.5 Start the installation by simply executing setup_only_kdc.sh.

Note - Please run setup_only_kdc.sh from KDC_HOST only; you don't need to set up or configure the KDC yourself, the script will do everything for you.

Step 4: Configure the blueprints.

Please follow the steps below to create the blueprints.

4.1 Create the hostmap.json (cluster creation template) file as shown below.

Note - This file holds the information about all the hosts that are part of your HDP cluster.

{
 "blueprint": "hdptest",
 "default_password": "hadoop",
 "host_groups": [{
  "name": "kerbnode1",
  "hosts": [{
   "fqdn": "kerbnode1.openstacklocal"
  }]
 }, {
  "name": "kerbnode2",
  "hosts": [{
   "fqdn": "kerbnode2.openstacklocal"
  }]
 }, {
  "name": "kerbnode3",
  "hosts": [{
   "fqdn": "kerbnode3.openstacklocal"
  }]
 }],
 "credentials": [{
  "alias": "kdc.admin.credential",
  "principal": "admin/admin",
  "key": "hadoop",
  "type": "TEMPORARY"
 }],
 "security": {
  "type": "KERBEROS"
 },
 "Clusters": {
  "cluster_name": "kerberosCluster"
 }
}

4.2 Create the cluster_config.json (blueprint) file; it contains the mapping of hosts to HDP components.

{
 "configurations": [{
  "kerberos-env": {
   "properties_attributes": {},
   "properties": {
    "realm": "HWX.COM",
    "kdc_type": "mit-kdc",
    "kdc_host": "kerbnode1.openstacklocal",
    "admin_server_host": "kerbnode1.openstacklocal"
   }
  }
 }, {
  "krb5-conf": {
   "properties_attributes": {},
   "properties": {
    "domains": "HWX.COM",
    "manage_krb5_conf": "true"
   }
  }
 }],
 "host_groups": [{
  "name": "kerbnode1",
  "components": [{
   "name": "NAMENODE"
  }, {
   "name": "NODEMANAGER"
  }, {
   "name": "DATANODE"
  }, {
   "name": "ZOOKEEPER_CLIENT"
  }, {
   "name": "HDFS_CLIENT"
  }, {
   "name": "YARN_CLIENT"
  }, {
   "name": "MAPREDUCE2_CLIENT"
  }, {
   "name": "ZOOKEEPER_SERVER"
  }],
  "cardinality": 1
 }, {
  "name": "kerbnode2",
  "components": [{
   "name": "SECONDARY_NAMENODE"
  }, {
   "name": "NODEMANAGER"
  }, {
   "name": "DATANODE"
  }, {
   "name": "ZOOKEEPER_CLIENT"
  }, {
   "name": "ZOOKEEPER_SERVER"
  }, {
   "name": "HDFS_CLIENT"
  }, {
   "name": "YARN_CLIENT"
  }, {
   "name": "MAPREDUCE2_CLIENT"
  }],
  "cardinality": 1
 }, {
  "name": "kerbnode3",
  "components": [{
   "name": "RESOURCEMANAGER"
  }, {
   "name": "APP_TIMELINE_SERVER"
  }, {
   "name": "HISTORYSERVER"
  }, {
   "name": "NODEMANAGER"
  }, {
   "name": "DATANODE"
  }, {
   "name": "ZOOKEEPER_CLIENT"
  }, {
   "name": "ZOOKEEPER_SERVER"
  }, {
   "name": "HDFS_CLIENT"
  }, {
   "name": "YARN_CLIENT"
  }, {
   "name": "MAPREDUCE2_CLIENT"
  }],
  "cardinality": 1
 }],
 "Blueprints": {
  "blueprint_name": "hdptest",
  "stack_name": "HDP",
  "stack_version": "2.5",
  "security": {
   "type": "KERBEROS"
  }
 }
}

Step 5: Create an internal repository map.

5.1 HDP repository - copy the contents below, change base_url to the hostname/IP address of your internal repository server, and save it in a file named repo.json.

{
"Repositories" : {
   "base_url" : "http://172.26.64.249/hdp/centos6/HDP-2.5.3.0/",
   "verify_base_url" : true
}
}

5.2 HDP-UTILS repository - copy the contents below, change base_url to the hostname/IP address of your internal repository server, and save it in a file named hdputils-repo.json.

{
"Repositories" : {
   "base_url" : "http://172.26.64.249/hdp/centos6/HDP-UTILS-1.1.0.20/",
   "verify_base_url" : true
}
}

Step 6: Register the blueprint with the Ambari server by executing the command below.

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server>:8080/api/v1/blueprints/hdptest -d @cluster_config.json

Step 7: Set up the internal repos via the REST API.

Execute the curl calls below to register the internal repositories.

curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.5/operating_systems/redhat6/repositories/HDP-2.5 -d @repo.json
curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://<ambari-server-hostname>:8080/api/v1/stacks/HDP/versions/2.5/operating_systems/redhat6/repositories/HDP-UTILS-1.1.0.21 -d @hdputils-repo.json

Step 8: Pull the trigger! The command below will start the cluster installation.

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://<ambari-server-hostname>:8080/api/v1/clusters/hdptest -d @hostmap.json

You should see that Ambari has already marked Kerberos as enabled and has started installing the required services.
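To watch the installation from the command line, note that the cluster-creation POST returns a request resource you can poll. A minimal sketch, assuming admin:admin credentials and that this was the first request against the cluster (so its ID is 1; take the real ID from the JSON returned by the POST above):

# Poll install progress; request ID 1 is an assumption.
curl -s -H "X-Requested-By: ambari" -u admin:admin http://<ambari-server-hostname>:8080/api/v1/clusters/hdptest/requests/1 | grep -E '"request_status"|"progress_percent"'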
Please feel free to comment if you need any further help on this. Happy Hadooping!!

01-20-2017 02:51 PM
1 Kudo
@chennuri gouri shankar It looks like you have the wrong version of tez.tar.gz on HDFS. Can you please verify that? If possible, please replace it with the latest version of tez.tar.gz. This kind of issue sometimes happens after an upgrade if an older Tez library still exists on HDFS.
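If it helps, here is a minimal check-and-replace sketch. The HDFS path /hdp/apps/<hdp-version>/tez/tez.tar.gz and the local path under /usr/hdp are assumptions based on a standard HDP layout; substitute your actual HDP version and run as the hdfs user:

# Compare the Tez tarball on HDFS with the one shipped locally
# (paths assume a standard HDP layout; adjust the version to yours).
hdfs dfs -ls /hdp/apps/*/tez/tez.tar.gz
hdfs dfs -put -f /usr/hdp/current/tez-client/lib/tez.tar.gz /hdp/apps/<hdp-version>/tez/tez.tar.gz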
01-20-2017 02:46 PM
@Singh Pratap - Please accept the appropriate answer.
01-20-2017 02:16 PM
2 Kudos
@Lekhraj Prasoya Are you getting this error while starting the Oozie service, or in an Oozie workflow? If it is an Oozie workflow, can you please attach the complete Oozie launcher logs? Looking at the error, it appears to be a database user permission issue. Also, have a look at the URL below:

http://stackoverflow.com/questions/24166026/hdp-2-0-oozie-error-e0803-e0803-io-error-e0603

Hope this helps.
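For the launcher logs, once you have the launcher's YARN application ID (from the Oozie web console or oozie job -info <job-id>), something like this usually works (the application ID below is a placeholder):

# Pull the complete Oozie launcher logs from YARN
# (the application ID is a placeholder).
yarn logs -applicationId application_1484000000000_0001 > launcher_logs.txt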