Member since: 05-09-2016
Posts: 421
Kudos Received: 54
Solutions: 32
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2501 | 04-22-2022 11:31 AM
 | 2252 | 01-20-2022 11:24 AM
 | 2114 | 11-23-2021 12:53 PM
 | 2850 | 02-07-2018 12:18 AM
 | 4684 | 06-08-2017 09:13 AM
07-24-2016 09:15 AM
Hi @yzheng Thank you for your response. I found two extra activemq jars in the /usr/hdp/current/oozie-client directory. Removing them solved the problem. Rahul
07-12-2016 01:16 PM
@Niraj Parmar Yes, you can. More details will help you get more accurate answers.
11-20-2016 11:46 PM
@Shihab, how did you solve this issue? What clean-up is required so Kafka does not need authorization? I'm getting the same issue on HDP 2.4. Thanks in advance for your help.
06-05-2018 05:46 PM
It worked for me. Thanks!
02-24-2017 03:49 AM
I tried again using the Hortonworks sandbox's IP address; this time it says Connection refused (I also tried port 2222, which failed with the same error):
scp C/RXIE/Learning/Github/spark-master/examples/src/main/python/streaming/kafka_wordcount.py root@192.168.128.119:/root
06-22-2016 05:42 PM
@rmolina The WebHCat response time did turn out to be the cause of the issue. I believe that being able to block off a certain amount of memory specifically for WebHCat would help (this may already be possible with templeton.mapper.memory.mb, but that covers only the mapper memory, and I haven't looked much further into it). When no other users are on the cluster, the Pig GUI view runs fine, but since that will not be the case for most prod clusters we deploy, being able to set a reserve specifically in webhcat-env or webhcat-site could prove useful in making sure the resources are properly allocated.
06-17-2016 12:42 PM
1 Kudo
To run distcp between two HDFS HA clusters (for example, A and B) using their nameservice IDs, or to set up Falcon clusters with NameNode HA, the following settings are needed.
Assume the nameservices of clusters A and B are HAA and HAB respectively.
Set the following properties in hdfs-site.xml:
1. Add the nameservices of both clusters to dfs.nameservices. This needs to be done on both clusters:
   dfs.nameservices=HAA,HAB
2. Add the property dfs.internal.nameservices.
   In cluster A: dfs.internal.nameservices=HAA
   In cluster B: dfs.internal.nameservices=HAB
3. Add dfs.ha.namenodes.<nameservice> on both clusters:
   dfs.ha.namenodes.HAA=nn1,nn2
   dfs.ha.namenodes.HAB=nn1,nn2
4. Add the property dfs.namenode.rpc-address.<nameservice>.<nn>:
   dfs.namenode.rpc-address.HAA.nn1=<NN1_fqdn>:8020
   dfs.namenode.rpc-address.HAA.nn2=<NN2_fqdn>:8020
   dfs.namenode.rpc-address.HAB.nn1=<NN1_fqdn>:8020
   dfs.namenode.rpc-address.HAB.nn2=<NN2_fqdn>:8020
5. Add the property dfs.client.failover.proxy.provider.<nameservice>.
   In cluster A: dfs.client.failover.proxy.provider.HAB=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
   In cluster B: dfs.client.failover.proxy.provider.HAA=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
6. Restart the HDFS service.
Once complete, you will be able to run the distcp command using the nameservice IDs, similar to:
hadoop distcp hdfs://HAA/tmp/file1 hdfs://HAB/tmp/
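Put together, a minimal sketch of the added hdfs-site.xml entries on cluster A might look like the following (the example.com hostnames are placeholders for illustration, not values from the steps above):

```xml
<!-- Sketch of the added hdfs-site.xml entries on cluster A.
     All example.com hostnames are placeholders. -->
<property>
  <name>dfs.nameservices</name>
  <value>HAA,HAB</value>
</property>
<property>
  <name>dfs.internal.nameservices</name>
  <value>HAA</value>
</property>
<property>
  <name>dfs.ha.namenodes.HAA</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.ha.namenodes.HAB</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.HAA.nn1</name>
  <value>nna1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.HAA.nn2</name>
  <value>nna2.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.HAB.nn1</name>
  <value>nnb1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.HAB.nn2</name>
  <value>nnb2.example.com:8020</value>
</property>
<!-- On cluster A the proxy provider is configured for the remote
     nameservice HAB; cluster B configures it for HAA instead. -->
<property>
  <name>dfs.client.failover.proxy.provider.HAB</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

Cluster B mirrors this, with dfs.internal.nameservices=HAB and dfs.client.failover.proxy.provider.HAA.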
11-30-2016 12:17 PM
https://issues.apache.org/jira/browse/AMBARI-17339
05-03-2017 04:21 PM
Very good article, Rahul. Quick question: does the table have to be partitioned? I'm trying to replicate a non-partitioned table with the UI and I'm getting an exception: default/FalconWebException:FalconException:java.net.URISyntaxException:Partition Details are missing. How can I replicate this table using the UI?
05-31-2016 09:43 AM
1 Kudo
Hi all, I discovered that the issue was with the configuration below:
<name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
<value>*=/etc/hadoop/conf,nr1.hwxblr.com:8020=/etc/primary_conf/conf,nr3.hwxblr.com:8030=/etc/primary_conf/conf,nr21.hwxblr.com:8020=/etc/hadoop/conf,nr23.hwxblr.com:8030=/etc/hadoop/conf</value>
Instead of port 8030 it should be 8050. Thanks @Kuldeep Kulkarni for finding this.
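For clarity, the corrected property (with 8050 in place of 8030, as described above; hostnames taken verbatim from the original value) would look like this:

```xml
<!-- Corrected value: the 8030 ports replaced with 8050, per the fix above. -->
<property>
  <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
  <value>*=/etc/hadoop/conf,nr1.hwxblr.com:8020=/etc/primary_conf/conf,nr3.hwxblr.com:8050=/etc/primary_conf/conf,nr21.hwxblr.com:8020=/etc/hadoop/conf,nr23.hwxblr.com:8050=/etc/hadoop/conf</value>
</property>
```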