Member since 12-10-2016
- 24 Posts
- 0 Kudos Received
- 1 Solution
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1469 | 02-15-2017 06:32 AM |
04-12-2017 01:24 AM
Howdy guys, I have visibility of several web UIs through Knox: Ranger, Atlas, and WebHDFS. The cluster is Kerberized and has HA on the NameNode (Ambari 2.4.2 + HDP 2.5.3). There is a firewall between Knox and both NameNodes. My declaration:

```xml
<gateway>
...
  <!-- required for HA -->
  <provider>
    <role>ha</role>
    <name>HaProvider</name>
    <enabled>true</enabled>
    <param name="WEBHDFS" value="maxFailoverAttempts=3;failoverSleep=1000;maxRetryAttempts=300;retrySleep=1000;enabled=true" />
  </provider>
</gateway>
<service>
  <role>WEBHDFS</role>
  <url>http://ACTIVENN:50070/webhdfs</url>
  <url>http://STANDBYNN:50070/webhdfs</url>
</service>
<service>
  <role>HDFSUI</role>
  <url>http://ACTIVENN:50070</url>
</service>
<service>
  <role>NAMENODE</role>
  <url>hdfs://ha-name-for-cluster</url>
</service>
```

Note: the log page on the last tab says "unauthorized dr.who 403".

I noticed a similar problem here:

- https://issues.apache.org/jira/browse/KNOX-626
- https://community.hortonworks.com/content/kbentry/67710/unable-to-access-datanode-tab-from-namenode-ui.html

What are the reasons why there might be missing tab data?
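To check whether the gateway itself is routing WebHDFS correctly, a quick smoke test through Knox can help. This is a minimal sketch; the gateway host, topology name, and credentials are hypothetical:

```sh
# List the HDFS root through the Knox gateway: -i shows response headers,
# -k skips certificate verification, -u passes (hypothetical) credentials.
# A 200 with a JSON FileStatuses body means Knox reached the active NameNode.
curl -iku guest:guest-password \
  'https://knoxhost.domain.com:8443/gateway/default/webhdfs/v1/?op=LISTSTATUS'
```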
						
					
Labels: Apache Knox
04-11-2017 06:57 AM
I discovered that specifically it is Python 2.6.6-2.6.9; you can't use 2.7.x. It failed when looking for the 2.6.x86_64.1.0 binary or some such. Hence I needed to install Python 2.6.9 from source and use virtualenv to handle 2.7 alongside 2.6.9. There are plenty of docs on how to use virtualenv to manage dual Python environments; a sketch follows.
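A minimal sketch, assuming Python 2.6.9 was unpacked and built from source with the default /usr/local prefix (all paths here are hypothetical):

```sh
# From the unpacked Python-2.6.9 source tree: altinstall avoids
# clobbering the system "python" (2.7) binary.
./configure --prefix=/usr/local && make && sudo make altinstall

# Create and activate a 2.6 virtualenv for the tools that need it.
virtualenv -p /usr/local/bin/python2.6 ~/py26env
source ~/py26env/bin/activate
python --version   # should now report Python 2.6.9
```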
						
					
04-09-2017 12:55 PM
Worth knowing that there is now no need for the "expect" statement, given the following attributes that can be added to the sync-ldap request:

```sh
--ldap-sync-admin-name=admin --ldap-sync-admin-password=secret
```
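A sketch of the full non-interactive call using those flags; the --all mode and the credential values here are hypothetical:

```sh
# Sync all LDAP users and groups without prompting for admin credentials.
ambari-server sync-ldap --all \
  --ldap-sync-admin-name=admin \
  --ldap-sync-admin-password=secret
```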
 
						
					
03-24-2017 02:24 AM
Thanks Deepesh, confusion mitigated 🙂
						
					
03-24-2017 12:35 AM
Hello team,

The documentation for KDC installation on HDP-2.5.3 includes instructions for SLES 12 which suggest using:

```sh
rckrb5kdc start
rckadmind start
chkconfig rckrb5kdc on
chkconfig rckadmind on
```

However, SLES 12.1 does not have the rckrb5kdc.service or rckadmind.service unit files required for systemd to process the chkconfig commands above. It does, however, have kadmind.service and krb5kdc.service, allowing me to run:

```sh
chkconfig krb5kdc on
chkconfig kadmind on
```

So now I wonder: are rckrb5kdc and rckadmind deprecated? Thoughts?
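For reference, the systemd-native equivalents of those chkconfig calls, assuming the krb5kdc.service and kadmind.service unit names above:

```sh
# Enable both services at boot, then start them immediately.
systemctl enable krb5kdc.service kadmind.service
systemctl start krb5kdc.service kadmind.service
```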
						
					
Labels: Hortonworks Data Platform (HDP)
    
	
		
		
02-23-2017 08:42 AM
Hello team, I have a Knox target with JDBC connected to it. Although I'm happily connecting interactively, passing the URL via command arguments seems to be causing grief. I'm using Beeline 1.2.1000.2.5.3.0-37; could this be a bug?

This works:

```sh
[theuser@knoxhost ~]$ beeline
Beeline version 1.2.1000.2.5.3.0-37 by Apache Hive
beeline> !connect jdbc:hive2://knoxhost.domain.com:8443/default;ssl=true;transportMode=http;httpPath=gateway/default/hive theuser secret
Connecting to jdbc:hive2://knoxhost.domain.com:8443/default;ssl=true;transportMode=http;httpPath=gateway/default/hive
Connected to: Apache Hive (version 1.2.1000.2.5.3.0-37)
Driver: Hive JDBC (version 1.2.1000.2.5.3.0-37)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://knoxhost.domain.com>
```

This does not work:

```sh
[theuser@knoxhost ~]$ beeline -u 'jdbc:hive2://knoxhost.domain.com:8443/default;transportMode=http;ssl=true' -n theuser -p password (or -w password.txt)
Connecting to jdbc:hive2://knoxhost.domain.com:8443/default;transportMode=http;ssl=true
17/02/23 19:11:01 [main]: ERROR jdbc.HiveConnection: Error opening session
org.apache.thrift.transport.TTransportException: HTTP Response code: 404
  at org.apache.thrift.transport.THttpClient.flushUsingHttpClient(THttpClient.java:262)
  at org.apache.thrift.transport.THttpClient.flush(THttpClient.java:313)
  at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:73)
  at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62)
  at org.apache.hive.service.cli.thrift.TCLIService$Client.send_OpenSession(TCLIService.java:154)
  at org.apache.hive.service.cli.thrift.TCLIService$Client.OpenSession(TCLIService.java:146)
  at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:552)
  at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:170)
  at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
  at java.sql.DriverManager.getConnection(DriverManager.java:664)
  at java.sql.DriverManager.getConnection(DriverManager.java:208)
  at org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:146)
  at org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:211)
  at org.apache.hive.beeline.Commands.connect(Commands.java:1190)
  at org.apache.hive.beeline.Commands.connect(Commands.java:1086)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:52)
  at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:990)
  at org.apache.hive.beeline.BeeLine.initArgs(BeeLine.java:715)
  at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:777)
  at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:491)
  at org.apache.hive.beeline.BeeLine.main(BeeLine.java:474)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
  at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Error: Could not establish connection to jdbc:hive2://knoxhost.domain.com:8443/default;transportMode=http;ssl=true: HTTP Response code: 404 (state=08S01,code=0)
Beeline version 1.2.1000.2.5.3.0-37 by Apache Hive
0: jdbc:hive2://knoxhost.domain.com (closed)>
```
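Worth noting: the failing -u URL omits the httpPath parameter that the working interactive !connect includes, which alone could explain the 404 from the gateway. A sketch of the -u form carrying the same full connection string, quoted so the shell doesn't split it on the semicolons (host and credentials as above):

```sh
# Same connection string as the working !connect, passed via -u; httpPath
# tells Knox how to route the request to the Hive service.
beeline -u 'jdbc:hive2://knoxhost.domain.com:8443/default;ssl=true;transportMode=http;httpPath=gateway/default/hive' \
  -n theuser -p secret
```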
						
					
Labels: Apache Hive
    
	
		
		
02-19-2017 10:31 PM
I don't have Oozie HA and I've got the problem too: all hostgroups are failing to be substituted. I'm using an external PostgreSQL 9.2 database; are there known issues with this?
						
					
02-19-2017 11:58 AM
I'm marking this as Solved; thanks Jay. Technically this is not the right answer, but it certainly helped me get closer to an outcome I can use. It seems restarting the OpenStack instance jiggles the sockets and allows Python to find the FQDN.
						
					
02-19-2017 07:48 AM
Okay, that's a start:

```python
>>> import socket
>>> print socket.getfqdn();
host0141
```

So somehow Python isn't finding a FQHN:

```sh
[centos@host0141 ~]$ hostname -f
host0141.domain.com
[centos@host0141 ~]$ python<<<"import socket;print socket.getfqdn();"
host0141
```

So it seems that socket.getfqdn() is the culprit. I'm using OpenStack, and I'm wondering if it's a delay in registering hosts shortly after their deletion and recreation. Thanks for helping, JanSenSharma.

I restarted and boom! I now have a FQHN coming from the socket function. It seems I need to restart the host to freshen the sockets after it is spawned.
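One check worth keeping handy if this recurs, since socket.getfqdn() resolves through the libc resolver and falls back to the bare hostname when the reverse lookup fails (a sketch; the hostname is the one above):

```sh
# Compare what the kernel, the resolver, and Python each report; if the
# resolver has no entry yet, getfqdn() falls back to the short hostname.
hostname                 # kernel hostname
hostname -f              # FQDN via the resolver
getent hosts host0141    # what the libc resolver currently knows
python <<<'import socket; print socket.getfqdn()'
```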
						
					