Member since 01-25-2017

119 Posts · 7 Kudos Received · 2 Solutions

        My Accepted Solutions
| Title | Views | Posted | 
|---|---|---|
| | 16857 | 04-11-2017 12:36 PM |
| | 6173 | 01-18-2017 10:36 AM |

03-15-2017 04:58 PM (1 Kudo)

Hi,

I am trying to enable Kerberos on my cluster (Ambari 2.4.2, HDP 2.5.3, CentOS 7.3). I started by following this guide (it also includes a video): https://community.hortonworks.com/articles/82536/configuring-ambari-and-hadoop-for-kerberos-using-a.html

After the "Install Kerberos Client" step, the wizard failed at the "Test Kerberos Client" step with the following shell exception:

    resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/kinit -c /var/lib/ambari-agent/tmp/kerberos_service_check_cc_bd98b56f3fb825bccff406ea5b89a680 -kt /etc/security/keytabs/kerberos.service_check.031517.keytab mybigdev-031517@hadoopad.local' returned 1. kinit: Preauthentication failed while getting initial credentials

Then I applied another guide: https://www.ibm.com/support/knowledgecenter/SSPT3X_4.2.0/com.ibm.swg.im.infosphere.biginsights.admin.doc/doc/admin_kerb_activedir.html

I saved the Active Directory certificate to the following file:

    /etc/pki/ca-trust/source/anchors/activedirectory.pem

and ran the following commands as the root user to trust the CA certificate:

    update-ca-trust enable
    update-ca-trust extract
    update-ca-trust check

Then I added the certificate to the Java keystore:

    mycert=/etc/pki/ca-trust/source/anchors/activedirectory.pem
    sudo keytool -importcert -noprompt -storepass changeit -file ${mycert} -alias ad -keystore /etc/pki/java/cacerts

It didn't work; the error was the same. After each change I exited the Kerberos wizard and restarted ambari-server. I also tried importing the certificate directly into the JDK keystore:

    keytool -importcert -file activedirectory.pem -noprompt -storepass changeit -alias ad -keystore /usr/java/jdk1.8.0_73/jre/lib/security/cacerts

After this I listed the certificates in both locations (/etc/pki/java and /usr/java/jdk...), and my alias was there:

    Alias name: ad
    Creation date: Mar 15, 2017
    Entry type: trustedCertEntry
    Owner: CN=hadoopad-HADOOPDC-CA, DC=hadoopad, DC=local
    Issuer: CN=hadoopad-HADOOPDC-CA, DC=hadoopad, DC=local
    ...

I also tried

    addent -password -p ${user} -k 1 -e rc4-hmac

but it didn't change anything. Then I uncommented the following encryption type entries, but that didn't change anything either:

    #default_tgs_enctypes = {{encryption_types}}
    #default_tkt_enctypes = {{encryption_types}}

Now I need your comments. Thanks in advance...

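As a side note for anyone reproducing this, the failing kinit can be retried by hand with keytab inspection and client-side tracing; a minimal sketch, assuming the keytab path and principal from the error above (the generated file name differs on every wizard run):

```bash
# Show the principals, key version numbers and (with -e) encryption types
# stored in the service-check keytab
klist -kte /etc/security/keytabs/kerberos.service_check.031517.keytab

# Re-run the failing kinit manually with MIT Kerberos tracing enabled, to see
# which KDC is contacted and which enctypes are negotiated
KRB5_TRACE=/dev/stderr /usr/bin/kinit \
    -kt /etc/security/keytabs/kerberos.service_check.031517.keytab \
    mybigdev-031517@hadoopad.local
```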
						
					
03-14-2017 01:28 PM

@Ye Jun Link is cool! Thank you 🙂
						
					
03-14-2017 08:26 AM

Hi,

I am going to kerberize my cluster and then install Ranger, and I will connect to Active Directory.

As far as I understand, SSSD is deprecated since Windows Server 2012 R2, so defining the lookup in the core-site.xml file seems to be the best way out. Is that correct?

However, this change is made only in HDFS's core-site.xml file. How do the other applications use the same lookup? Is HDFS the only level where user authorization must be checked? Do Hive or HBase, say, start a process as the authenticated user and then leave the authorization to HDFS? How does it work?

In the blog, another user commented that we must import the certificate from the KDC into the default JDK keystore for LDAP:

    keytool -importcert -file rootCA.pem -alias kdc -keystore /usr/java/jdk1.8.0_73/jre/lib/security/cacerts

Is this manual step required for Kerberos, AD-integrated clusters?

Thanks in advance...

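As a side note on verifying such a lookup once it is configured, the standard Hadoop CLI can show what the cluster actually resolves; a minimal sketch (some_ad_user is a placeholder for a real AD account):

```bash
# Show which group mapping provider core-site.xml currently points at
hdfs getconf -confKey hadoop.security.group.mapping

# Ask Hadoop which groups it resolves for a given AD user; services that rely
# on Hadoop's group mapping should see the same result
hdfs groups some_ad_user
```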
						
					
03-10-2017 06:59 AM

Thanks @Jay SenSharma,

Apparently I lost my focus in the late hours and got stuck on the Hive View page 🙂

Now I see the reason: I run ambari-server as the root user. The Pig View goes to Hive through WebHCat and wants to use the hcat user. To make that possible, we set

    webhcat.proxyuser.root.hosts
    webhcat.proxyuser.root.groups

And the hcat user in turn needs proxy settings on HDFS to be able to read and write, so at that point we set

    hadoop.proxyuser.hcat.groups=*
    hadoop.proxyuser.hcat.hosts=*

Right?

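To double-check that chain on disk, something like the following sketch can be used (the config paths are the usual HDP defaults and may differ on a given cluster):

```bash
# Hop 1: WebHCat must allow the user running Ambari (root here) to
# impersonate the logged-in view user
grep -A1 "webhcat.proxyuser.root" /etc/hive-webhcat/conf/webhcat-site.xml

# Hop 2: HDFS must allow the hcat user to impersonate that user in turn
grep -A1 "hadoop.proxyuser.hcat" /etc/hadoop/conf/core-site.xml
```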
						
					
03-09-2017 04:09 PM

Hi,

While trying to test the Ambari Pig View with a simple job, I got the following exception:

    org.apache.ambari.view.utils.ambari.AmbariApiException: {"error":"Unauthorized connection for super-user: hcat from IP 10.0.102.62"}

It was the Pig View, and I was trying it with the admin user. Since Ambari runs as the root user, I had added the entries

    hadoop.proxyuser.root.hosts=*
    hadoop.proxyuser.root.groups=*

to the core-site.xml file. After getting the exception above, I changed the value of hadoop.proxyuser.hcat.hosts from the hostname of one server to *, and it worked that way.

Now the question is: what is the relation of the Pig View to the hcat user?

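For context on where hcat enters the picture: the Pig View submits work through WebHCat (Templeton), which runs as the hcat user and then impersonates the logged-in Ambari user, which is why the hadoop.proxyuser.hcat.* settings matter. A hypothetical manual check against WebHCat (the hostname is a placeholder; 50111 is the usual Templeton port):

```bash
# Confirm WebHCat itself is reachable; job submissions from the view go to
# endpoints under /templeton/v1/ with user.name set to the view user, and
# that is where the proxyuser check applies
curl -s "http://webhcat-host.example.com:50111/templeton/v1/status"
```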
						
					
Labels:
- Apache Ambari
- Apache Pig
    
	
		
		
03-08-2017 07:15 AM

Thanks @Artem Ervits! Although I remember setting all the required passwords during the installation, I did set a password for Grafana. Now I can stop and start all services.
						
					
03-07-2017 04:03 PM

Hi,

Today I added high availability to my 4-VM cluster. After adding high availability for the NameNode, Grafana failed to start. The same thing happened when I added HA for the ResourceManager. I started the failed services one by one after both occurrences.

After that I wanted to test the cluster's health with the HA feature, so I stopped all services and started them again. In addition to Grafana failing, all of the services now have alerts.

My cluster information:

    Ambari 2.4.2
    HDP 2.5.3
    CentOS 7.3.1611

Grafana start errors:

stderr:

    Traceback (most recent call last):
      File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana.py", line 69, in <module>
        AmsGrafana().execute()
      File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
        method(env)
      File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana.py", line 50, in start
        create_ams_datasource()
      File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana_util.py", line 261, in create_ams_datasource
        (response.status, response.reason, data))
    resource_management.core.exceptions.Fail: Ambari Metrics Grafana data source creation failed. POST request status: 401 Unauthorized
    {"message":"Invalid username or password"}

stdout:

    2017-03-07 18:38:36,664 - Execute['/usr/sbin/ambari-metrics-grafana start'] {'not_if': "ambari-sudo.sh su ams -l -s /bin/bash -c 'test -f /var/run/ambari-metrics-grafana/grafana-server.pid && ps -p `cat /var/run/ambari-metrics-grafana/grafana-server.pid`'", 'user': 'ams'}
    2017-03-07 18:38:37,924 - Checking if AMS Grafana datasource already exists
    2017-03-07 18:38:37,925 - Connecting (GET) to bigdev1:3000/api/datasources
    2017-03-07 18:38:38,006 - Http response: 401 Unauthorized
    2017-03-07 18:38:38,006 - Error checking for Ambari Metrics Grafana datasource. Will attempt to create.
    2017-03-07 18:38:38,006 - Generating datasource:
    {
      "name": "AMBARI_METRICS",
      "type": "ambarimetrics",
      "access": "proxy",
      "url": "http://bigdev3:6188",
      "password": "",
      "user": "",
      "database": "",
      "basicAuth": false,
      "basicAuthUser": "",
      "basicAuthPassword": "",
      "withCredentials": false,
      "isDefault": true,
      "jsonData": {}
    }
    2017-03-07 18:38:38,007 - Connecting (POST) to bigdev1:3000/api/datasources
    2017-03-07 18:38:38,101 - Http response: 401 Unauthorized
    2017-03-07 18:38:48,111 - Connection to Grafana failed. Next retry in 10 seconds.
    2017-03-07 18:38:48,112 - Connecting (POST) to bigdev1:3000/api/datasources
    2017-03-07 18:38:48,176 - Http response: 401 Unauthorized

Does anybody know the underlying reason for this? Any comments appreciated...

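For reference, the 401 appears to mean that the admin credentials Ambari uses against the Grafana HTTP API no longer match what Grafana itself has; a minimal manual check (the password placeholder comes from the ams-grafana-env configuration in Ambari):

```bash
# The same call the start script makes, issued by hand with basic auth;
# a 401 here confirms a credential mismatch rather than a network problem
curl -u admin:'<grafana_admin_password>' http://bigdev1:3000/api/datasources
```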
						
					
Labels:
- Hortonworks Data Platform (HDP)
    
	
		
		
03-02-2017 04:33 PM

Thank you for the explanation @Matt Foley. 🙂
						
					
03-01-2017 01:58 PM

Hi @Matt Foley,

I have read your article, but I still have some questions.

We are planning to connect each server in the cluster to two different switches using two different Ethernet cards. Let's call these two networks the "(local) cluster network" and the "internal network".

Your article covers most of what I need for the Hadoop services. We are now also planning to use one of these servers for third-party and custom applications. You mention in the article that different services may use different methods for binding to a non-default interface. So am I supposed to find out, for each service, which way it can be bound?

And I have two other general questions about the article.

When explaining the key parameters for the key questions, you touch on the bind address: "...'Bind Address Properties' provides optional additional addressing information which is used...". How is a service bound to the preferred interface? Am I supposed to enter the IP address of my server's preferred network interface? Otherwise, I understand the 0.0.0.0 notion.

In the "Theory and Operations" part, you mention hostnames and DNS servers. You suggest using the same hostname for both interfaces and/or networks. Then, in the 7th practice example, you say that if we use hosts files instead of DNS servers (likely our case, since we will not have a DNS server for the cluster network), the identical hostnames would mean the servers' hosts files cover only one interface, and you go on to point out that clients can use different hosts files. How can they have different hosts files if the hostnames are the same? Is the following a correct example of what you mean?

One of the servers' hosts file:

    ...
    192.168.1.2    namenode2
    ...

A client's hosts file:

    ...
    10.0.9.42    namenode2
    ...

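As a side note on the 0.0.0.0 point, a quick socket listing shows what a daemon actually bound to; a minimal sketch, assuming the default NameNode RPC port 8020 (adjust the port per service):

```bash
# List listening TCP sockets with the owning process; 0.0.0.0:8020 means the
# daemon accepts connections on every interface, while a specific address
# means it is bound to that interface only
ss -ltnp | grep 8020
```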
						
					
02-28-2017 10:22 AM

Thanks @Matt Foley, your article is exactly the one I need. I am keeping your offer open for follow-up questions, though 🙂
						
					