Member since: 08-10-2016

Posts: 170 | Kudos Received: 14 | Solutions: 6

My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
| | 23108 | 01-31-2018 04:55 PM |
| | 5522 | 11-29-2017 03:28 PM |
| | 2546 | 09-27-2017 02:43 PM |
| | 2999 | 09-12-2016 06:36 PM |
| | 2660 | 09-02-2016 01:58 PM |

02-01-2018 03:49 PM

So it's part of the question. I found the log at /var/log/hadoop-kms/kms-localhost.2018-01-31.log:

```
Caused by: java.lang.IllegalArgumentException: Invalid rule: hdfs/ip-172-31-10-196.us-west-2.compute.internal@MYREALM.INTERNAL,
        spark/ip-172-31-10-196.us-west-2.compute.internal@MYREALM.INTERNAL,
        yarn/ip-172-31-10-196.us-west-2.compute.internal@MYREALM.INTERNAL,
        HTTP/ip-172-31-10-196.us-west-2.compute.internal@MYREALM.INTERNAL
        at org.apache.hadoop.security.authentication.util.KerberosName.parseRules(KerberosName.java:331)
        at org.apache.hadoop.security.authentication.util.KerberosName.setRules(KerberosName.java:397)
        at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.init(KerberosAuthenticationHandler.java:210)
        ... 31 more
```

It looks like my badly written rules caused the issue.
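
For anyone hitting the same error: hadoop.kms.authentication.kerberos.name.rules must contain Hadoop auth_to_local RULE/DEFAULT entries, not a comma-separated list of principals. A minimal sketch of a well-formed value, assuming the MYREALM.INTERNAL realm from the log (the specific rule is illustrative, not my actual config):

```xml
<property>
  <name>hadoop.kms.authentication.kerberos.name.rules</name>
  <!-- Illustrative only: map any two-component principal in MYREALM.INTERNAL
       (e.g. hdfs/host@MYREALM.INTERNAL) to its short name, then fall back
       to the default mapping. -->
  <value>
    RULE:[2:$1@$0](.*@MYREALM\.INTERNAL)s/@.*//
    DEFAULT
  </value>
</property>
```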

02-01-2018 03:46 PM

@Robert Levas I'm going to give you the answer, because I found the article you wrote about rule syntax and that's clearly my issue.

02-01-2018 03:45 PM

@vperiasamy thanks for your response. The proxy user is set to *:

```xml
<property>
  <name>hadoop.kms.proxyuser.hdfs.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.hdfs.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.hdfs.users</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.hive.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.HTTP.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.HTTP.users</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.hive.users</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.hive.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.HTTP.hosts</name>
  <value>*</value>
</property>
```

It looks like I followed an article that was wrong (@Sindhu). Here's the log entry calling out that the hadoop.kms.authentication.kerberos.name.rules are wrong, from /var/log/hadoop-kms/kms-localhost.2018-01-31.log:

```
Caused by: java.lang.IllegalArgumentException: Invalid rule: hdfs/ip-172-31-10-196.us-west-2.compute.internal@MYREALM.INTERNAL,
        spark/ip-172-31-10-196.us-west-2.compute.internal@MYREALM.INTERNAL,
        yarn/ip-172-31-10-196.us-west-2.compute.internal@MYREALM.INTERNAL,
        HTTP/ip-172-31-10-196.us-west-2.compute.internal@MYREALM.INTERNAL
        at org.apache.hadoop.security.authentication.util.KerberosName.parseRules(KerberosName.java:331)
        at org.apache.hadoop.security.authentication.util.KerberosName.setRules(KerberosName.java:397)
        at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.init(KerberosAuthenticationHandler.java:210)
        ... 31 more
```
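
Once the rules are fixed, KMS has to be restarted so kms-site.xml is re-read. A sketch of what that typically looks like with the Tomcat-based Hadoop KMS (the sbin path is an assumption and varies by distribution):

```sh
# Restart the Tomcat-based Hadoop KMS so the corrected kms-site.xml is re-read.
# The sbin path is distribution-dependent -- adjust to your install.
sudo /usr/lib/hadoop-kms/sbin/kms.sh stop
sudo /usr/lib/hadoop-kms/sbin/kms.sh start

# Then watch the day's log for a clean startup (no "Invalid rule" errors):
sudo tail -f /var/log/hadoop-kms/kms-localhost.$(date +%F).log
```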

02-01-2018 02:08 AM

I followed this article. It tells you how to configure KMS, and it's what I followed immediately before getting the 404. Is it possible that by following that article I'm making KMS crash, and hence the 404? How would I look at the error log for KMS? It seems to be a web app, but I can't seem to find a log for it.
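
For later readers: as found elsewhere in this thread, the KMS logs on this cluster ended up living under /var/log/hadoop-kms/. A quick way to hunt for them, assuming a conventional layout (paths are illustrative):

```sh
# Check the usual KMS log directory first (directory name varies by distro):
ls -lt /var/log/hadoop-kms/ 2>/dev/null

# Failing that, search for KMS log files by name:
sudo find /var/log /usr/lib/hadoop-kms -name 'kms*.log' 2>/dev/null
```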

02-01-2018 01:38 AM

@Robert Levas

```xml
<property>
  <name>hadoop.security.key.provider.path</name>
  <value>kms://http@ip-172-31-10-196.us-west-2.compute.internal:9700/kms</value>
</property>
```

Looks valid... what logs can I check?
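
One quick sanity check, offered as a sketch: the hadoop key CLI exercises the provider path directly, which tells you whether the KMS behind that URI is reachable at all:

```sh
# List key names through the configured provider; a failure here points
# at KMS itself rather than at Spark or YARN.
hadoop key list -provider kms://http@ip-172-31-10-196.us-west-2.compute.internal:9700/kms
```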

01-31-2018 07:52 PM

I'm running in a Kerberized cluster. When I try to run any Spark job, I get the following:

```
[spark_remote@ip-172-31-10-196 ~]$ spark-submit --class org.apache.spark.examples.SparkPi --master yarn-cluster /usr/lib/spark/examples/jars/spark-examples.jar
Warning: Master yarn-cluster is deprecated since 2.0. Please use master "yarn" with specified deploy mode instead.
18/01/31 19:42:18 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/01/31 19:42:20 INFO RMProxy: Connecting to ResourceManager at ip-172-31-10-196.us-west-2.compute.internal/172.31.10.196:8032
18/01/31 19:42:20 INFO Client: Requesting a new application from cluster with 0 NodeManagers
18/01/31 19:42:20 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (11520 MB per container)
18/01/31 19:42:20 INFO Client: Will allocate AM container, with 1408 MB memory including 384 MB overhead
18/01/31 19:42:20 INFO Client: Setting up container launch context for our AM
18/01/31 19:42:20 INFO Client: Setting up the launch environment for our AM container
18/01/31 19:42:20 INFO Client: Preparing resources for our AM container
18/01/31 19:42:20 INFO HadoopFSCredentialProvider: getting token for: hdfs://ip-172-31-10-196.us-west-2.compute.internal:8020/user/spark_remote
18/01/31 19:42:20 INFO DFSClient: Created HDFS_DELEGATION_TOKEN token 20 for spark_remote on 172.31.10.196:8020
Exception in thread "main" java.io.IOException: java.lang.reflect.UndeclaredThrowableException
	at org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:888)
	at org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:86)
	at org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2234)
	at org.apache.spark.deploy.yarn.security.HadoopFSCredentialProvider$$anonfun$obtainCredentials$1.apply(HadoopFSCredentialProvider.scala:52)
	at org.apache.spark.deploy.yarn.security.HadoopFSCredentialProvider$$anonfun$obtainCredentials$1.apply(HadoopFSCredentialProvider.scala:49)
	at scala.collection.immutable.Set$Set1.foreach(Set.scala:94)
	at org.apache.spark.deploy.yarn.security.HadoopFSCredentialProvider.obtainCredentials(HadoopFSCredentialProvider.scala:49)
	at org.apache.spark.deploy.yarn.security.ConfigurableCredentialManager$$anonfun$obtainCredentials$2.apply(ConfigurableCredentialManager.scala:82)
	at org.apache.spark.deploy.yarn.security.ConfigurableCredentialManager$$anonfun$obtainCredentials$2.apply(ConfigurableCredentialManager.scala:80)
	at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
	at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
	at scala.collection.Iterator$class.foreach(Iterator.scala:893)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
	at scala.collection.MapLike$DefaultValuesIterable.foreach(MapLike.scala:206)
	at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
	at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
	at org.apache.spark.deploy.yarn.security.ConfigurableCredentialManager.obtainCredentials(ConfigurableCredentialManager.scala:80)
	at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:389)
	at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:832)
	at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:170)
	at org.apache.spark.deploy.yarn.Client.run(Client.scala:1109)
	at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1168)
	at org.apache.spark.deploy.yarn.Client.main(Client.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:775)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.reflect.UndeclaredThrowableException
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1713)
	at org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:870)
	... 31 more
Caused by: org.apache.hadoop.security.authentication.client.AuthenticationException: Authentication failed, status: 404, message: Not Found
	at org.apache.hadoop.security.authentication.client.AuthenticatedURL.extractToken(AuthenticatedURL.java:275)
	at org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:77)
	at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:131)
	at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:214)
	at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:131)
	at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:215)
	at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:288)
	at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:169)
	at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:373)
	at org.apache.hadoop.crypto.key.kms.KMSClientProvider$2.run(KMSClientProvider.java:875)
	at org.apache.hadoop.crypto.key.kms.KMSClientProvider$2.run(KMSClientProvider.java:870)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	... 32 more
```

The message is odd; it appeared after changing the principal of KMS:

Authentication failed, status: 404, message: Not Found

Any hints on where to look would be appreciated. There isn't anything in the KDC log:

```
Jan 31 14:49:54 ip-172-31-11-134.us-west-2.compute.internal krb5kdc[9279](info): TGS_REQ (2 etypes {18 17}) 172.31.10.196: ISSUE: authtime 1517428183, etypes {rep=18 tkt=18 ses=18}, spark_remote/ip-172-31-10-196.us-west-2.compute.internal@DATAPASSPORT.INTERNAL for yarn/ip-172-31-10-196.us-west-2.compute.internal@MYREALM.INTERNAL
Jan 31 14:49:54 ip-172-31-11-134.us-west-2.compute.internal krb5kdc[9279](info): closing down fd 11
Jan 31 14:49:54 ip-172-31-11-134.us-west-2.compute.internal krb5kdc[9279](info): TGS_REQ (2 etypes {18 17}) 172.31.10.196: ISSUE: authtime 1517428183, etypes {rep=18 tkt=18 ses=18}, spark_remote/ip-172-31-10-196.us-west-2.compute.internal@DATAPASSPORT.INTERNAL for hdfs/ip-172-31-10-196.us-west-2.compute.internal@MYREALM.INTERNAL
Jan 31 14:49:54 ip-172-31-11-134.us-west-2.compute.internal krb5kdc[9279](info): closing down fd 11
```
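
A sketch of how to reproduce the 404 outside of Spark, assuming a valid ticket for the client user (the kinit principal below is illustrative): hit the KMS REST API directly with SPNEGO and see what the web app returns.

```sh
# Get a ticket for the client user (principal name is illustrative):
kinit spark_remote/ip-172-31-10-196.us-west-2.compute.internal@MYREALM.INTERNAL

# Probe the KMS REST API with SPNEGO; a 404 here means the KMS web app
# itself failed to deploy, independent of anything Spark is doing:
curl --negotiate -u : -i \
  "http://ip-172-31-10-196.us-west-2.compute.internal:9700/kms/v1/keys/names"
```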

01-31-2018 07:13 PM

"Created symlink of ranger kms conf to core site and hdfs site" is a vague statement. Could you explain a little more? I know how to create a symlink, but I don't know what you mean by "created symlink of ranger kms conf to core site and hdfs site".
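
If I had to guess at the intent (the paths here are my assumption, not from your post), it would be symlinking the cluster's Hadoop client configs into the Ranger KMS conf directory, something like:

```sh
# My guess at the suggestion: let Ranger KMS pick up the cluster's
# core-site.xml and hdfs-site.xml. Paths are assumed and distro-dependent.
ln -s /etc/hadoop/conf/core-site.xml /etc/ranger/kms/conf/core-site.xml
ln -s /etc/hadoop/conf/hdfs-site.xml /etc/ranger/kms/conf/hdfs-site.xml
```

Is that what you meant?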

01-31-2018 04:55 PM

Rebooted all services that had keytabs, and then I was able to connect. The error stopped. Thanks for the responses.

01-31-2018 02:10 PM

@Harald Berghoff Here's the output:

```
[root@ec2-user]# nslookup this
Server:         192.168.1.100
Address:        192.168.1.100#53
Non-authoritative answer:
Name:   this.server.fqdn.compute.internal
Address: 172.31.10.196

[root@ec2-user]# nslookup this.server.fqdn
Server:         192.168.1.100
Address:        192.168.1.100#53
** server can't find this.server.fqdn: NXDOMAIN

[root@ip-172-31-10-196 ec2-user]# nslookup this
Server:         192.168.1.100
Address:        192.168.1.100#53
Non-authoritative answer:
Name:   this.server.fqdn.compute.internal
Address: 172.31.10.196

[root@ec2-user]# nslookup 192.168.1.100
Server:         192.168.1.100
Address:        192.168.1.100#53
Non-authoritative answer:
100.1.168.192.in-addr.arpa      name = ip-192.168.1.100.us-west-2.compute.internal.
Authoritative answers can be found from:
```

Obviously this output is obfuscated... I'm happy to share the real output privately if that helps. I'm running on an EC2 cluster in Amazon.
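
For context on why this output matters: Kerberos needs forward and reverse DNS to agree, since service principals are built from the resolved hostname. A quick self-check, as a sketch:

```sh
# Forward and reverse lookups should round-trip to the same FQDN.
fqdn=$(hostname -f)
ip=$(dig +short "$fqdn")
echo "fqdn=$fqdn ip=$ip reverse=$(dig +short -x "$ip")"
```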

01-30-2018 11:27 PM

Thanks, I saw your other post, @Geoffrey Shelton Okot, and I did make sure the hosts file was empty.