Member since 10-20-2015
      
92 Posts | 79 Kudos Received | 9 Solutions

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 5873 | 06-25-2018 04:01 PM |
|  | 8459 | 05-09-2018 05:36 PM |
|  | 3264 | 03-16-2018 04:11 PM |
|  | 9103 | 05-18-2017 12:42 PM |
|  | 8118 | 03-28-2017 06:42 PM |
			
    
	
		
		
01-05-2017 04:30 PM
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
	
		3 Kudos
		
	
				
		
	
		
					
							 
That stack trace error in beeline seems clear to me:

org.apache.thrift.transport.TTransportException: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

To fix it you need to know which Java your beeline client is using. Run ps -ef | grep beeline to see, like so:

[root@chupa1 ~]# ps -ef | grep beeline
root      4239  4217  2 16:20 pts/0    00:00:01 /usr/jdk64/jdk1.8.0_77/bin/java -Xmx1024m -Dhdp.version=2.5.0.0-1133 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.5.0.0-1133 -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.5.0.0-1133/hadoop -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.5.0.0-1133/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.0.0-1133/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx1024m -Xmx1024m -Djava.util.logging.config.file=/usr/hdp/2.5.0.0-1133/hive/conf/parquet-logging.properties -Dlog4j.configuration=beeline-log4j.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.5.0.0-1133/hive/lib/hive-beeline-1.2.1000.2.5.0.0-1133.jar org.apache.hive.beeline.BeeLine

Based on my output, I would import my Knox trust certificate into the cacerts file of the JDK my beeline client is using, in my case /usr/jdk64/jdk1.8.0_77/jre/lib/security/cacerts. The import would look like:

keytool -import -trustcacerts -keystore /usr/jdk64/jdk1.8.0_77/jre/lib/security/cacerts -storepass changeit -noprompt -alias knox -file /tmp/knox.crt

Then restart the beeline client to move past the error. The issue here is definitely with SSL.
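A quick way to confirm the import took is to list the alias back out of the truststore. This is only a sketch; the JDK path and the alias "knox" match the example above and should be adjusted to whatever your own ps output shows.

```shell
# Hypothetical path and alias from the example above; substitute your own.
CACERTS=/usr/jdk64/jdk1.8.0_77/jre/lib/security/cacerts

# List just the "knox" entry; a "trustedCertEntry" line means the CA is trusted.
keytool -list -keystore "$CACERTS" -storepass changeit -alias knox
```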
    
	
		
		
01-04-2017 09:57 PM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
	
		1 Kudo
		
	
				
		
	
		
					
I was unable to find a way around this. The NameNode simply gives admin rights to the system user that started its process, by default the hdfs user. You can also grant superuser permissions to others with dfs.permissions.superusergroup and dfs.cluster.administrators. It seems Ranger doesn't restrict superusers except in the case of KMS encryption zones. For KMS there is a blacklist mechanism to disallow the superuser; I don't think there is a similar feature for Ranger itself.
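To see who currently holds superuser rights on a given cluster, the effective values of those two keys can be read back with hdfs getconf. A sketch; either key may be unset, in which case getconf reports the key as missing and exits non-zero.

```shell
# The superuser group usually defaults to "supergroup"; your value may differ.
hdfs getconf -confKey dfs.permissions.superusergroup

# Often unset; hdfs getconf exits non-zero if the key has no value.
hdfs getconf -confKey dfs.cluster.administrators
```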
    
	
		
		
01-04-2017 06:07 PM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
I see. So you want to remove privileges from the Hadoop superuser? I think there are ways around this, but they are not recommended. Let me do a bit more research on this.
    
	
		
		
01-03-2017 11:29 PM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
@Avijeet Dash I don't necessarily agree with your statement, though maybe I am missing something here: "even if a directory is protected for a user/group - hdfs can always access it." If you have Kerberos enabled and you set the permissions of the directories correctly, even the hdfs user wouldn't have access unless specified in Ranger. http://hortonworks.com/blog/best-practices-in-hdfs-authorization-with-apache-ranger/
    
	
		
		
01-03-2017 10:46 PM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
You may find this useful as well for the future: https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients Also, be advised that Ranger only works with HiveServer2.
    
	
		
		
12-27-2016 07:51 PM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
	
		3 Kudos
		
	
				
		
	
		
					
PROBLEM: Some users belong to many groups, causing a very long group list to be passed through the REST API headers in Ranger and KMS.

ERROR: error log from /var/log/ranger/kms/kms.log:

2016-12-01 14:04:12,048 INFO Http11Processor - Error parsing HTTP request header 
Note: further occurrences of HTTP header parsing errors will be logged at DEBUG level. 
java.lang.IllegalArgumentException: Request header is too large 
at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:515) 
at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:504) 
at org.apache.coyote.http11.InternalInputBuffer.parseHeader(InternalInputBuffer.java:396) 
at org.apache.coyote.http11.InternalInputBuffer.parseHeaders(InternalInputBuffer.java:271) 
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1007) 
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:625) 
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:316) 
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) 
at java.lang.Thread.run(Thread.java:745) 
ROOT CAUSE: REST API calls are being made with large header sizes, in this case by users with so many groups that the request exceeds the web server's maxHttpHeaderSize.

SOLUTION:
1. In Ambari, go to Ranger Admin -> Configs -> Advanced tab -> Custom ranger-admin-site -> Add Property. Put ranger.service.http.connector.property.maxHttpHeaderSize in the Key field and the required maxHttpHeaderSize value in the Value field.
2. Save the changes, then go to Ranger KMS -> Configs -> Advanced tab -> Custom ranger-kms-site -> Add Property. Put ranger.service.http.connector.property.maxHttpHeaderSize in the Key field and the required maxHttpHeaderSize value in the Value field.
3. Save the changes and restart all Ranger and Ranger KMS services.
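The same two properties can also be added from the command line with Ambari's bundled configs.sh script. This is only a sketch: the admin/admin credentials, localhost Ambari host, cluster name mycluster, and the 65536-byte value are all placeholders, not recommendations.

```shell
# Placeholders: credentials, host, cluster name, and size are examples only.
SIZE=65536
for SITE in ranger-admin-site ranger-kms-site; do
  /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
    set localhost mycluster "$SITE" \
    ranger.service.http.connector.property.maxHttpHeaderSize "$SIZE"
done
# Restart Ranger and Ranger KMS from Ambari for the connector change to apply.
```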
						
						
		
	
					
			
		
	
	
	
	
				
		
	
	
			
    
	
		
		
12-26-2016 04:21 PM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
It seems that HDFS is not syncing your groups. Try restarting the cluster to see if that helps.
    
	
		
		
12-26-2016 02:03 PM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
							 
Hi @Sami Ahmad, it isn't the krb5.conf file that is corrupt, but rather the information Ambari keeps in its database to manage your krb5.conf file. From what I am seeing above, no configuration version is selected, so Ambari is unable to find the configuration data. In my cluster I have a version selected for each type, which should be the latest version. Here is what mine looks like; notice the latest selected versions.
 ambari=> select * from clusterconfigmapping where type_name = 'krb5-conf' or type_name = 'kerberos-env' order by version_tag desc;
 cluster_id |  type_name   |     version_tag      | create_timestamp | selected | user_name 
------------+--------------+----------------------+------------------+----------+-----------
          2 | krb5-conf    | version1478018911089 |    1478018910394 |        1 | admin
          2 | kerberos-env | version1478018911089 |    1478018910391 |        1 | admin
          2 | kerberos-env | version1477959455789 |    1477959455113 |        0 | admin
          2 | krb5-conf    | version1477959455789 |    1477959455120 |        0 | admin
          2 | kerberos-env | version1477959390268 |    1477959389823 |        0 | admin
          2 | krb5-conf    | version1477959390268 |    1477959389814 |        0 | admin
          2 | krb5-conf    | version1477956530144 |    1477956529438 |        0 | admin
          2 | kerberos-env | version1477956530144 |    1477956529436 |        0 | admin
          2 | krb5-conf    | version1477687536774 |    1477687536111 |        0 | admin
          2 | kerberos-env | version1477687536774 |    1477687536113 |        0 | admin
          2 | krb5-conf    | version1             |    1477680416621 |        0 | admin
          2 | kerberos-env | version1             |    1477680416662 |        0 | admin
(12 rows)
This command shows what Ambari thinks my latest version is, along with its content:
 [root@chupa1 /]# /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin get localhost myclustername krb5-conf
USERID=admin
PASSWORD=admin
########## Performing 'GET' on (Site:krb5-conf, Tag:version1478018911089)
"properties" : {
"conf_dir" : "/etc",
"content" : "[libdefaults]\n renew_lifetime = 7d\n forwardable= true\n default_realm = {{realm|upper()}}\n ticket_lifetime = 48h\n dns_lookup_realm = false\n dns_lookup_kdc = false\n #default_tgs_enctypes = {{encryption_types}}\n #default_tkt_enctypes ={{encryption_types}}\n\n{% if domains %}\n[domain_realm]\n{% for domain in domains.split(',') %}\n {{domain}} = {{realm|upper()}}\n{% endfor %}\n{%endif %}\n\n[logging]\n default = FILE:/var/log/krb5kdc.log\nadmin_server = FILE:/var/log/kadmind.log\n kdc = FILE:/var/log/krb5kdc.log\n\n[realms]\n {{realm}} = {\n admin_server = {{admin_server_host|default(kdc_host, True)}}\n kdc = chupa1.openstacklocal\n }\n\n{# Append additional realm declarations below dav#}",
"domains" : "",
"manage_krb5_conf" : "true"
}
 
    
	
		
		
12-25-2016 08:09 PM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
SYMPTOM: Knox logs are filling up disk space.

ROOT CAUSE: Kerberos debug is turned on by default, causing the gateway.out file to grow rapidly.

RESOLUTION: To turn off Kerberos debug logging:
1. In Ambari, go to KNOX -> Configs -> Advanced gateway-site.
2. Change the parameter sun.security.krb5.debug from true to false.
3. Restart Knox.
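Afterwards you can check that the change stuck and that the log has stopped growing. A sketch assuming a default HDP layout; the /usr/hdp/current/knox-server and /var/log/knox paths are assumptions to adjust for your install.

```shell
# Confirm the flag is now false in the deployed gateway-site.xml.
grep -A1 "sun.security.krb5.debug" /usr/hdp/current/knox-server/conf/gateway-site.xml

# Watch gateway.out; its size should stay flat once debug logging is off.
ls -lh /var/log/knox/gateway.out
```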
    
	
		
		
12-25-2016 08:06 PM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
SYMPTOM: HBase gives a "KeyValue size too large" error when inserting large values.

ERROR:

java.lang.IllegalArgumentException: KeyValue size too large 
at org.apache.hadoop.hbase.client.HTable.validatePut(HTable.java:1521) 
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.validatePut(BufferedMutatorImpl.java:147) 
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.doMutate(BufferedMutatorImpl.java:134) 
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.mutate(BufferedMutatorImpl.java:105) 
at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1050) 
at org.apache.hadoop.hbase.rest.RowResource.update(RowResource.java:229) 
at org.apache.hadoop.hbase.rest.RowResource.put(RowResource.java:318) 
at sun.reflect.GeneratedMethodAccessor62.invoke(Unknown Source) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
at java.lang.reflect.Method.invoke(Method.java:497) 
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) 
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205) 
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) 
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288) 
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134) 
at ...

ROOT CAUSE: hbase.client.keyvalue.maxsize is set too low.

RESOLUTION: Set hbase.client.keyvalue.maxsize=0, which removes the client-side KeyValue size check. Just be careful with this, as a very large KeyValue (over 1-2 GB) could have performance implications.
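If you would rather keep a guardrail than disable the check outright, the same client-side property can instead be pinned to an explicit ceiling. A hypothetical hbase-site.xml fragment with a 512 MB cap; the value is an example, not a recommendation.

```xml
<!-- Client-side override; 536870912 bytes = 512 MB (example value only). -->
<property>
  <name>hbase.client.keyvalue.maxsize</name>
  <value>536870912</value>
</property>
```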