Member since
05-09-2016
421
Posts
54
Kudos Received
32
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2519 | 04-22-2022 11:31 AM
 | 2270 | 01-20-2022 11:24 AM
 | 2140 | 11-23-2021 12:53 PM
 | 2863 | 02-07-2018 12:18 AM
 | 4713 | 06-08-2017 09:13 AM
06-20-2016
05:08 PM
@Colton Rodgers
Can you provide your hadoop.proxyusers.* properties settings?
06-17-2016
05:07 PM
@Anshul Sisodia Ideally you should not worry about connecting to active RM. The failover provider class takes care of that. https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html#RM_Failover
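For context, when RM HA is enabled the failover is driven by client-side configuration rather than by the application picking an RM. A minimal yarn-site.xml sketch of the relevant client settings might look like the following (the hostnames are placeholders, not taken from the original post):

```xml
<!-- Sketch only: enable RM HA and declare the two ResourceManagers.
     The YARN client library iterates over the rm-ids and fails over
     automatically, so applications never hard-code the active RM. -->
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>rm1.example.com</value> <!-- placeholder FQDN -->
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>rm2.example.com</value> <!-- placeholder FQDN -->
</property>
```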
06-17-2016
04:05 PM
@Manikandan Durairaj If you have not enabled RM HA, then 8050 is the correct value. Please share the log of the failed job when you use port 8050.
06-17-2016
12:42 PM
1 Kudo
In order to distcp between two HDFS HA clusters (for example, A and B) using the nameservice ID, or to set up Falcon clusters with NameNode HA, the following settings are needed.
Assume the nameservices for clusters A and B are HAA and HAB respectively.
Set the following properties in hdfs-site.xml:
1. Add the nameservices of both clusters to dfs.nameservices. This needs to be done in both clusters:
dfs.nameservices = HAA,HAB
2. Add the property dfs.internal.nameservices.
In cluster A:
dfs.internal.nameservices = HAA
In cluster B:
dfs.internal.nameservices = HAB
3. Add dfs.ha.namenodes.<nameservice> for both nameservices:
dfs.ha.namenodes.HAA = nn1,nn2
dfs.ha.namenodes.HAB = nn1,nn2
4. Add the property dfs.namenode.rpc-address.<nameservice>.<nn> for each NameNode:
dfs.namenode.rpc-address.HAA.nn1 = <NN1_fqdn>:8020
dfs.namenode.rpc-address.HAA.nn2 = <NN2_fqdn>:8020
dfs.namenode.rpc-address.HAB.nn1 = <NN1_fqdn>:8020
dfs.namenode.rpc-address.HAB.nn2 = <NN2_fqdn>:8020
5. Add the property dfs.client.failover.proxy.provider.<nameservice> for the remote cluster.
In cluster A:
dfs.client.failover.proxy.provider.HAB = org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
In cluster B:
dfs.client.failover.proxy.provider.HAA = org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
6. Restart the HDFS service.
Once complete you will be able to run the distcp command using the nameservice similar to:
hadoop distcp hdfs://HAA/tmp/file1 hdfs://HAB/tmp/
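As a sketch, the properties above for the remote nameservice, as they would appear in cluster A's hdfs-site.xml, could be consolidated like this (the NameNode FQDNs are placeholders standing in for the real hostnames):

```xml
<!-- Sketch: cluster A's view of the remote nameservice HAB.
     dfs.internal.nameservices keeps A's own NameNodes bound to HAA only,
     while dfs.nameservices lets HDFS clients resolve both IDs. -->
<property>
  <name>dfs.nameservices</name>
  <value>HAA,HAB</value>
</property>
<property>
  <name>dfs.internal.nameservices</name>
  <value>HAA</value>
</property>
<property>
  <name>dfs.ha.namenodes.HAB</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.HAB.nn1</name>
  <value>nn1.clusterB.example.com:8020</value> <!-- placeholder FQDN -->
</property>
<property>
  <name>dfs.namenode.rpc-address.HAB.nn2</name>
  <value>nn2.clusterB.example.com:8020</value> <!-- placeholder FQDN -->
</property>
<property>
  <name>dfs.client.failover.proxy.provider.HAB</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```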
06-15-2016
12:30 PM
@Mukesh Burman I guess you can accept my answer. 🙂
06-15-2016
05:53 AM
1 Kudo
@Mukesh Burman See if this is what you are looking for. https://cwiki.apache.org/confluence/display/AMBARI/Installing+ambari-agent+on+target+hosts
06-14-2016
07:34 PM
@kavitha velaga Have you set hadoop.proxyuser.ambari-server.hosts and hadoop.proxyuser.ambari-server.groups to * in core-site? Also please share your tez view configuration.
06-14-2016
07:09 PM
@Anshul Sisodia Are you using RM HA? It makes a bit of a difference.
06-14-2016
01:23 PM
1 Kudo
@Sri Bandaru Did you get the solution for this? If not, can you share your Pig view configuration?