Member since: 09-29-2015
Posts: 123
Kudos Received: 216
Solutions: 47
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 9014 | 06-23-2016 06:29 PM |
| | 3074 | 06-22-2016 09:16 PM |
| | 6133 | 06-17-2016 06:07 PM |
| | 2797 | 06-16-2016 08:27 PM |
| | 6559 | 06-15-2016 06:44 PM |
09-30-2024
07:40 AM
RPC communication with the unsecured cluster is blocked, so use the webhdfs protocol instead: hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true webhdfs://nn1:50070/foo/bar hdfs://nn2:8020/bar/foo
07-15-2024
01:53 AM
Hi team, I want to map these principals to short names: "yarn-user/hdp01-node.lab.contoso.com@LAB.CONTOSO.COM" to "yarn-user", "yarn-user/hdp02-node.lab.contoso.com@LAB.CONTOSO.COM" to "yarn-user", and "yarn-user/hdp03-node.lab.contoso.com@LAB.CONTOSO.COM" to "yarn-user". Please advise on a rule. Thanks.
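A minimal sketch of a rule that would do this, assuming the standard hadoop.security.auth_to_local setting in core-site.xml (the principal and realm names are taken from the question; verify everything else against your cluster):

<!-- [2:$1@$0] rewrites a two-component principal as "user@REALM", so each
     yarn-user/<host>@LAB.CONTOSO.COM becomes "yarn-user@LAB.CONTOSO.COM",
     which the rule then maps to the short name yarn-user. -->
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[2:$1@$0](yarn-user@LAB.CONTOSO.COM)s/.*/yarn-user/
    DEFAULT
  </value>
</property>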
06-17-2016
07:24 PM
@manichinnari555, I'm glad to hear this helped. I believe setting it at table creation time should be sufficient.
12-30-2016
08:20 PM
@Chris Nauroth Then there must be an issue: after 7 hours there is still no reduction in HDFS storage, and it still keeps the /trash/finalized folder...
07-11-2016
01:30 PM
Has there been any progress so far on that issue... I've tried so many approaches that I've resorted to making this script that checks the node status every minute...
10-27-2017
03:45 PM
I ran into this exact issue and didn't see a resolution here, but I wanted to update the thread for anyone who comes looking in the future. I am setting up HDF on an Azure IaaS cluster and had the same issue of ZooKeeper being unable to bind to the port. In my case I believe it was cloud network configuration that was blocking communication. Switching to internal IPs for my VMs inside /etc/hosts on all my nodes (rather than the public IPs I was using before) solved the issue.
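For illustration, an /etc/hosts layout of the kind described, with hypothetical hostnames and private addresses (none of these values come from the original post):

# /etc/hosts on every node: map cluster hostnames to internal (private) IPs
# so ZooKeeper binds and peers connect over the virtual network, not public IPs.
10.0.0.4  zk1.internal.example.com  zk1
10.0.0.5  zk2.internal.example.com  zk2
10.0.0.6  zk3.internal.example.com  zk3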
06-11-2016
08:46 PM
1 Kudo
Hello @Thiago. It is possible to achieve communication across secured and unsecured clusters. A common use case for this is using DistCp to transfer data between clusters.

As mentioned in other answers, the configuration property ipc.client.fallback-to-simple-auth-allowed=true tells a secured client that it may fall back to an unsecured mode when the unsecured server side cannot satisfy authentication. However, I recommend not setting this in core-site.xml, and instead setting it on the command line specifically for the DistCp invocation that needs to communicate with the unsecured cluster. Setting it in core-site.xml makes all RPC connections for every application eligible for fallback to simple authentication, which potentially expands the attack surface for man-in-the-middle attacks.

Here is an example of overriding the setting on the command line while running DistCp:

hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true hdfs://nn1:8020/foo/bar hdfs://nn2:8020/bar/foo

The command must be run while logged into the secured cluster, not the unsecured cluster. This is adapted from one of my prior answers: https://community.hortonworks.com/questions/294/running-distcp-between-two-cluster-one-kerberized.html
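For contrast, the global form being advised against would look like this in core-site.xml; it is shown only to make the warning concrete, not as a recommendation:

<!-- Discouraged: every RPC client in every application on this cluster
     becomes eligible to fall back to unauthenticated connections. -->
<property>
  <name>ipc.client.fallback-to-simple-auth-allowed</name>
  <value>true</value>
</property>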
06-09-2016
10:55 AM
And I learned some new things as well. I never knew that Hadoop could go directly to LDAP. Static mapping is interesting, too.
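For anyone curious, a sketch of what those two features look like as core-site.xml settings; the LDAP URL and the user/group names here are hypothetical placeholders, not values from this thread:

<!-- Resolve group memberships directly against an LDAP server
     instead of the local OS. -->
<property>
  <name>hadoop.security.group.mapping</name>
  <value>org.apache.hadoop.security.LdapGroupsMapping</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.url</name>
  <value>ldap://ldap.example.com:389</value>
</property>
<!-- Static mapping: pin specific users to fixed groups, bypassing lookup. -->
<property>
  <name>hadoop.user.group.static.mapping.overrides</name>
  <value>alice=admins;bob=staff,dev</value>
</property>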
06-10-2016
07:25 AM
I have 3 JournalNodes in my cluster, but they don't seem to fail.
06-13-2016
06:13 AM
@ScipioTheYounger, I expect this is similar to another question you asked.
https://community.hortonworks.com/questions/35574/switch-namenode-ha-zookeeper-access-from-no-securi.html I'll repeat the same information here for simplicity.

Change ha.zookeeper.acl in core-site.xml to this:

<property>
  <name>ha.zookeeper.acl</name>
  <value>sasl:nn:rwcda</value>
</property>

Then, you'd want to run the following to reformat ZooKeeper for NameNode HA, which reinitializes the znode used by NameNode HA to coordinate automatic failover:

hdfs zkfc -formatZK -force

The tricky part, as you noticed, is getting that command to authenticate with SASL. The ZooKeeper and SASL guide in the Apache documentation discusses implementation and configuration of SASL in ZooKeeper in detail. For this particular command, you can use the following procedure.

First, create a JAAS configuration file at /etc/hadoop/conf/hdfs_jaas.conf:

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/nn.service.keytab"
  principal="nn/<HOST>@EXAMPLE.COM";
};

Note that <HOST> will be different depending on the NameNode hostnames in your environment. Likewise, you'll need to change EXAMPLE.COM to the correct Kerberos realm.

Next, edit /etc/hadoop/conf/hadoop-env.sh and add the following line to enable SASL when running the zkfc command:

export HADOOP_ZKFC_OPTS="-Dzookeeper.sasl.client=true -Dzookeeper.sasl.client.username=zookeeper -Djava.security.auth.login.config=/etc/hadoop/conf/hdfs_jaas.conf -Dzookeeper.sasl.clientconfig=Client ${HADOOP_ZKFC_OPTS}"

Then, run the "hdfs zkfc -formatZK -force" command.
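One way to verify the result, assuming the default parent znode /hadoop-ha and a nameservice named mycluster (both assumptions; adjust for your environment), is to inspect the znode ACL from the ZooKeeper CLI:

# Connect to a ZooKeeper server and check the ACL on the NameNode HA znode.
zkCli.sh -server zk1.example.com:2181
getAcl /hadoop-ha/mycluster

If the SASL ACL took effect, the output should include an entry like 'sasl,'nn with cdrwa permissions.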