Member since: 09-05-2016
Posts: 33
Kudos Received: 8
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1907 | 10-15-2019 09:05 PM
 | 3171 | 08-30-2018 11:56 PM
 | 10232 | 10-05-2016 07:07 AM
 | 2361 | 09-29-2016 01:28 AM
10-15-2019
09:05 PM
The problem was iptables on the data nodes. Once these were flushed, the following command worked a treat:
hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true hdfs://active_namenode:8020/tmp/test.txt swebhdfs://active_namenode:50470/tmp/test.txt
Apologies for my confusion.
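For anyone else hitting this, clearing the rules on each data node was along these lines (a quick diagnostic flush only; on a managed host you would fix the persistent firewall rules instead of just flushing):
# On each data node, as root: flush all filter-table rules, then confirm nothing is left blocking the HDFS/WebHDFS ports
iptables -F
iptables -L -n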
10-14-2019
04:25 PM
Hi,
I have an issue with distcp authentication between a Kerberos-secured cluster (HDP 2.6.1.0-129) and an unsecured cluster (HDP 3.0.1.0-187).
The compatibility matrix for these versions says they should interoperate.
I am running the following from the secure cluster:
hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true swebhdfs://FQDN(secure cluster):50470/tmp/test.sh swebhdfs://FQDN(insecure cluster):50470/tmp/test.sh
Both ends have TLS enabled. I have a truststore configured and working.
I can see packets outbound from the client and arriving at the server using tcpdump.
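For reference, the capture was something like this (interface and hostname are placeholders):
# Watch traffic from the secure cluster towards the remote NameNode HTTPS (swebhdfs) port
tcpdump -i any -nn host <insecure_cluster_fqdn> and port 50470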
The error returned by the distcp command above is:
19/10/15 09:58:48 ERROR tools.DistCp: Exception encountered
org.apache.hadoop.security.AccessControlException: Authentication required
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:460)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:114)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:750)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:592)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:622)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:618)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:1524)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:333)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getAuthParameters(WebHdfsFileSystem.java:557)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toUrl(WebHdfsFileSystem.java:578)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractFsPathRunner.getUrl(WebHdfsFileSystem.java:852)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:745)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:592)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:622)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:618)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1004)
    at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1020)
    at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
    at org.apache.hadoop.fs.Globber.glob(Globber.java:252)
    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1696)
    at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
    at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
    at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:398)
    at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:190)
    at org.apache.hadoop.tools.DistCp.execute(DistCp.java:155)
    at org.apache.hadoop.tools.DistCp.run(DistCp.java:128)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.tools.DistCp.main(DistCp.java:462)
The namenode log on the client side shows successful authentication to Kerberos.
The server hdfs namenode log shows the following warning:
WARN namenode.FSNamesystem (FSNamesystem.java:getDelegationToken(5611)) - trying to get DT with no secret manager running
Has anyone come across this issue before?
Labels:
- Hortonworks Data Platform (HDP)
10-29-2018
12:17 AM
@Mike Wong I had a similar issue when I enabled SSL on HDFS, MapReduce and YARN. Symptoms:
- I could connect to the HDFS REST interface from the command line inside the cluster using curl
- I could run the HDFS UI from inside the cluster using X-windows-enabled PuTTY and XMing
- OpenSSL from inside the cluster returned 0 for the HDFS connection
- No external connection to HDFS could be made using curl/openssl/telnet
- The ResourceManager and JobHistory UIs both worked fine
- We had a firewall, but even with it disabled the connection to HDFS failed
The issue was that we have internal network IP addresses (InfiniBand) and externally accessible IP addresses. When HTTPS was enabled, HDFS bound to the internal address and wasn't listening on the external address, hence the connection refused. The solution was to add the dfs.namenode.https-bind-host=0.0.0.0 property to get the service to listen across all network interfaces. Might be worth checking, if you (or anybody else getting connection refused errors) have multiple network interfaces, which address the port is binding to:
netstat -nlp | grep 50470
tcp 0 0 <internal IP address>:50470 0.0.0.0:* LISTEN 23521/java
netstat -nlp | grep 50070
tcp 0 0 0.0.0.0:50070 0.0.0.0:* LISTEN 23521/java
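For reference, outside of Ambari the same setting would look roughly like this in hdfs-site.xml (a sketch; with Ambari you would add it as a custom hdfs-site property and restart HDFS):
<!-- Make the NameNode HTTPS endpoint listen on all interfaces instead of only the internal one -->
<property>
  <name>dfs.namenode.https-bind-host</name>
  <value>0.0.0.0</value>
</property>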
08-30-2018
11:56 PM
As far as I know there is no way around the issue except to get the example.com name added to the certificate as a subject alternative name. TLS implementations have become more strictly enforced over the years, as anyone who has configured against NiFi will attest. Adding the subject alt name shouldn't be an issue, except that the certificates are self-signed instead of having an issuer chain. This means that all relying applications would need to update their truststores with the new certificate(s). Your AD team should be using a certificate chain rather than self-signed certificates, because self-signed certificates will also cause problems when they expire.
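A quick way to check whether the name is already present as a SAN on the certificate (the file path here is a placeholder):
# Print the Subject Alternative Name extension from the certificate
openssl x509 -in /path/to/server-cert.pem -noout -text | grep -A1 'Subject Alternative Name'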
11-14-2016
01:29 AM
1 Kudo
Hi All, I have a question regarding the common name attribute of keystores when enabling SSL for HDFS, MapReduce and YARN along with the Ranger HDFS plugin.
I have read articles stating that the common name needs to be identical across the cluster for the xasecure.policymgr.clientssl.keystore keystore. This keystore seems to be responsible for protecting the Ranger console, so when it is not set to the FQDN of the console server, the Ranger console breaks. https://community.hortonworks.com/articles/16373/ranger-ssl-pitfalls.html
If this configuration is set, then the journal nodes complain about the common name in the certificate not being the same as the hostname.
Additionally, the configuration item common.name.for.certificate in hdfs-site.xml and the "Common Name for Certificate" value in the Ranger plug-in policies imply that a single common name is required in all keystores across the cluster. The error seen is:
response={"httpStatusCode":400,"statusCode":1,"msgDesc":"Unauthorized access. No common name for certificate set. Please check your service config","messageList":[{"name":"OPER_NOT_ALLOWED_FOR_ENTITY","rbKey":"xa.error.oper_not_allowed_for_state","message":"Operation not allowed for entity"}]}, serviceName=<clusterName>_hadoop
If the keystores have the FQDN set, then the namenodes throw an SSLException:
javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No name matching <hostname> found
I am probably missing something. Do I need two distinct sets of keystores? One with an identical common name and one set with the FQDN for each host? Or should I be using a common subject alt name in the certificates?
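For what it's worth, this is roughly how I compare what each keystore actually presents (keystore path and password are placeholders):
# Show the Owner (CN) and any SAN DNS names for each entry in a keystore
keytool -list -v -keystore /etc/security/serverKeys/keystore.jks -storepass <password> | grep -E 'Alias name|Owner|DNSName'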
Labels:
- Apache Ranger
10-05-2016
07:58 AM
Knox_sample also works now.
10-05-2016
07:07 AM
@mrizvi I downloaded the 2.5 sandbox and got the same issue as you describe. The problem seems to be that the directory for previous deployments can't be deleted, and this causes the service for the topologies to fail to start.
I eventually got mine working by moving all of the topology xml files out of /usr/hdp/current/knox-server/conf/topologies and restarting Knox. It automatically populates the default, knoxsso and admin files back into the folder. I was able to list the files using the default topology.
I moved the knox_sample.xml back into the folder and did another restart. It failed to start because the temp folder could not be deleted. So I catted the knox_sample.xml into another file, knox_sample2.xml, and restarted again. I was able to list the files through knox_sample2. It is more of a workaround than anything else.
I don't know why the temp folder can't be deleted. I couldn't delete the folder manually, and when I tried I got an invalid argument error:
rmdir /var/lib/knox/data-2.5.0.0-1245/deployments/knoxsso.topo.157239f6c28/%2Fknoxauth/META-INF/temp/jsp
rmdir: failed to remove `jsp/': Invalid argument
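Roughly the steps, in case it helps anyone else (paths match the 2.5 sandbox; the backup location is arbitrary, and the restart can equally be done from Ambari):
# Move the existing topologies aside so Knox regenerates the defaults on restart
mkdir -p /root/topologies-backup
mv /usr/hdp/current/knox-server/conf/topologies/*.xml /root/topologies-backup/
# Restart the gateway (or restart Knox from Ambari)
/usr/hdp/current/knox-server/bin/gateway.sh stop
/usr/hdp/current/knox-server/bin/gateway.sh start
# Bring the sample topology back under a new name to avoid the stale deployment directory, then restart again
cat /root/topologies-backup/knox_sample.xml > /usr/hdp/current/knox-server/conf/topologies/knox_sample2.xml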
09-30-2016
08:36 AM
It's a different error in the gateway logs of the attached log file from the link I posted, so it probably isn't your issue.
This command requires a running instance of Knox to be present on the same machine. It will execute a test to make sure all services are accessible through the gateway URLs. Errors are reported and suggestions to resolve any problems are returned, JSON formatted.
I'm confused now. This implies that Knox isn't running, but the results for the user auth test say that it is. I'm running 2.4, so I will try to set up 2.5 over the weekend for myself. Do you have Ranger enabled with Knox? Can you post the gateway.log error, with debug enabled, from when you make your curl request? Cheers
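For the debug logging, something along these lines in the gateway's log4j config usually does it (the logger package differs between Knox releases, so treat this as a sketch), followed by a gateway restart:
# In /usr/hdp/current/knox-server/conf/gateway-log4j.properties (path from an HDP install)
# Older Knox releases use the org.apache.hadoop.gateway package; newer ones use org.apache.knox.gateway
log4j.logger.org.apache.hadoop.gateway=DEBUG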
09-30-2016
08:03 AM
@mrizvi Quite alright. Maybe a long shot, but I found this post with a similar issue to the one you are experiencing: service unavailable
09-30-2016
01:56 AM
Might be a good idea to check if you have any Knox zombies running. They can hang on to files and prevent deletes.
ps -ef | grep -i knox