Member since
05-09-2016
291
Posts
53
Kudos Received
32
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 100 | 04-22-2022 11:31 AM
 | 161 | 01-20-2022 11:24 AM
 | 219 | 11-23-2021 12:53 PM
 | 1119 | 02-07-2018 12:18 AM
 | 2363 | 06-08-2017 09:13 AM
07-21-2016
10:25 AM
show create table t11; What do you get in the output?
... View more
07-21-2016
10:04 AM
@payal patel What do you see when you run show create table <tablename>;?
... View more
07-20-2016
08:29 PM
This was confirmed as a bug and fixed in Ambari 2.4: https://issues.apache.org/jira/browse/AMBARI-17339
... View more
07-20-2016
11:36 AM
It is throwing an error for user mgrabowski. Have you allowed this user to submit jobs to YARN, or is this user part of the allowed groups?
... View more
07-19-2016
03:43 PM
Falcon server is throwing a NoSuchMethodError for ActiveMQ on start. I tried removing all data from /hadoop/falcon/embeddedmq/data and /hadoop/falcon/store, but the error persists. The error log says: Exception in thread "main" java.lang.NoSuchMethodError: org.apache.activemq.transport.TransportFactory.bind. OS: Ubuntu 14.04.
... View more
Labels:
- Apache Falcon
07-14-2016
06:31 AM
@Roberto Sancho Check this https://community.hortonworks.com/questions/40121/about-hue-access-hdfs-ha.html
... View more
07-13-2016
06:26 AM
@Divakar Annapureddy
I doubt log4j will work with HDFS. Try setting the file location to a native Linux path, something like /var/log/spark/spark.log. A sketch of such a configuration is below.
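A minimal log4j.properties sketch for this, assuming you point Spark at it through its standard log4j configuration; the appender name and rollover settings here are illustrative, not from the original post:

# Send Spark logs to a local file instead of trying to write to HDFS
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=/var/log/spark/spark.log
log4j.appender.file.MaxFileSize=10MB
log4j.appender.file.MaxBackupIndex=5
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n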
... View more
07-12-2016
07:10 PM
1 Kudo
@sidharth mishra I think http://stackoverflow.com/questions/6153560/hbase-client-connectionloss-for-hbase-error will be helpful in your case.
... View more
07-12-2016
01:16 PM
@Niraj Parmar Yes, you can. More details will help you get more accurate answers.
... View more
07-06-2016
07:52 PM
@ScipioTheYounger "Should I just pick one for the source cluster, such as in the following: https://community.hortonworks.com/questions/9416/falcon-with-ha-resource-manager.html <interface type="execute" endpoint="RM1:8050" version="2.2.0" />" Yes, that is correct. "How do we define hive.metastore.kerberos.principal?" Refer to https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_installing_manually_book/content/configuring_for_secure_clusters_falcon.html
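For reference, a minimal sketch of a Falcon cluster entity with the execute interface pinned to one RM as discussed above; all hostnames, and the versions and ports other than 8050, are placeholders rather than values from this thread:

<cluster name="sourceCluster" description="" colo="colo1" xmlns="uri:falcon:cluster:0.1">
  <interfaces>
    <interface type="readonly" endpoint="hftp://nn-host:50070" version="2.2.0"/>
    <interface type="write" endpoint="hdfs://nn-host:8020" version="2.2.0"/>
    <!-- even with RM HA, point execute at one RM -->
    <interface type="execute" endpoint="rm1-host:8050" version="2.2.0"/>
    <interface type="workflow" endpoint="http://oozie-host:11000/oozie/" version="4.0.0"/>
    <interface type="messaging" endpoint="tcp://falcon-host:61616?daemon=true" version="5.1.6"/>
  </interfaces>
  <locations>
    <location name="staging" path="/apps/falcon/sourceCluster/staging"/>
    <location name="working" path="/apps/falcon/sourceCluster/working"/>
  </locations>
</cluster>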
... View more
07-05-2016
07:33 AM
@Anshul Sisodia As Kuldeep pointed out, when you have your RM in HA mode the yarn.resourcemanager.address value is not used to bind the port. Instead, the port specified in yarn.resourcemanager.address.{rm1,rm2} is used for the respective RMs. By default the value is 8032. Also, Ambari does not set this by default. See the sketch below.
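To illustrate, a yarn-site.xml sketch of the per-RM address properties that take effect under RM HA; the hostnames are placeholders, and rm1,rm2 are the usual RM ids:

<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<!-- each RM binds the port from its own address property; 8032 is the default -->
<property>
  <name>yarn.resourcemanager.address.rm1</name>
  <value>rm1-host.example.com:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.address.rm2</name>
  <value>rm2-host.example.com:8032</value>
</property>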
... View more
07-01-2016
12:00 PM
@Roberto Sancho If you have already started the ZooKeeper REST service, then by default the port will be 9998 on the host running the REST service. Refer to https://github.com/apache/zookeeper/tree/trunk/src/contrib/rest for installing and starting REST.
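As a quick smoke test, something like the following should respond once the service is up; the /znodes/v1/ path prefix is from my memory of the REST contrib, so treat it as an assumption:

# List the root znode through the ZooKeeper REST service on its default port
curl http://zk-rest-host:9998/znodes/v1/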
... View more
07-01-2016
05:38 AM
@Shihab Can you try deleting the topic test1? Replace localhost with your ZooKeeper host in the command below. bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic test1
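One caveat worth adding: on these Kafka versions the delete is only honored when topic deletion is enabled on the brokers; otherwise the topic is just marked for deletion. The broker-side setting:

# server.properties on every broker; restart the brokers after changing it
delete.topic.enable=true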
... View more
06-30-2016
10:20 AM
2 Kudos
@Saurabh Kumar Can you verify that you see the same as in the screenshots below?
(Screenshots: Step 2, Step 3, Step 4.)
... View more
06-29-2016
12:53 PM
Check this https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.2.0/bk_ambari_views_guide/content/_setup_WebHCat_proxy_user_pig_view.html
... View more
06-29-2016
12:39 PM
When you boot the sandbox, it says "Press any key to enter menu." Press any key and then start from step 2.
... View more
06-29-2016
11:11 AM
@mayki wogno
You need to place <ACL owner="falcon" group="hadoop" permission="0755"/> before <schema location="hcat" provider="hcat"/>; the ordering is important. A sketch of the corrected ordering is below.
Also remove <table uri="catalog:falcon_landing_db:summary_table#ds=${YEAR}-${MONTH}"/> from the source cluster. It is not required.
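A minimal sketch of the tail of the feed entity with the corrected ordering; the feed name is a placeholder and the elided elements are abbreviated with ...:

<feed name="summary-feed" xmlns="uri:falcon:feed:0.1">
  ...
  <!-- ACL must come before schema; Falcon validates element order against its XSD -->
  <ACL owner="falcon" group="hadoop" permission="0755"/>
  <schema location="hcat" provider="hcat"/>
</feed>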
... View more
06-29-2016
10:33 AM
1 Kudo
1. Go to the grub menu.
2. Select the line starting with kernel and press e.
3. In the next screen, add s at the end of the line and press Enter.
4. In the next screen, press b, which will log you in as root without a password.
5. Change the password using passwd.
... View more
06-28-2016
05:18 PM
1 Kudo
In this case the command below should work for you: scp /cygdrive/c/Users/rnkumashi/Downloads/sample.txt root@192.168.228.128:/root
... View more
06-28-2016
04:54 PM
Hi @Ravikumar Kumashi Can you check how your VM network is configured? Ensure that the VM network is set to NAT and port forwarding is configured. If your network is configured as host-only, then you need to give the IP instead of localhost and remove -P 2222 from the command. Can you paste the output of ifconfig from the VM?
... View more
06-28-2016
01:25 PM
1 Kudo
@Ravikumar Kumashi
Try this command instead.
scp -P 2222 C:/Users/rnkumashi/Downloads/sample.txt root@localhost:/root However, you need something like Cygwin to make scp work from a Windows machine. You have to run the command from Windows, not the sandbox. This is a good document to get you started: http://hortonworks.com/hadoop-tutorial/learning-the-ropes-of-the-hortonworks-sandbox/
... View more
06-20-2016
05:08 PM
@Colton Rodgers
Can you provide your hadoop.proxyuser.* property settings?
... View more
06-17-2016
05:07 PM
@Anshul Sisodia Ideally you should not worry about connecting to the active RM; the failover provider class takes care of that. https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html#RM_Failover
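For context, the client-side property behind this looks like the following in yarn-site.xml; this is the default provider when RM HA is enabled, so you normally do not need to set it yourself:

<property>
  <!-- clients retry through this class to locate the active RM -->
  <name>yarn.client.failover-proxy-provider</name>
  <value>org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider</value>
</property>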
... View more
06-17-2016
04:05 PM
@Manikandan Durairaj If you have not enabled RM HA, then 8050 is the correct value. Please share the log of the failed job when you use port 8050.
... View more
06-17-2016
12:42 PM
1 Kudo
In order to run distcp between two HDFS HA clusters (for example, A and B) using nameservice IDs, or to set up Falcon clusters with NameNode HA, the following settings are needed.
Assume the nameservices for clusters A and B are HAA and HAB respectively. You need to set the following properties in hdfs-site.xml (a consolidated sketch for cluster A follows the steps):
1. Add the nameservices of both clusters to dfs.nameservices. This needs to be done in both clusters. dfs.nameservices=HAA,HAB
2. Add the property dfs.internal.nameservices.
In cluster A: dfs.internal.nameservices=HAA
In cluster B: dfs.internal.nameservices=HAB
3. Add dfs.ha.namenodes.<nameservice> for both nameservices, in both clusters.
dfs.ha.namenodes.HAA=nn1,nn2
dfs.ha.namenodes.HAB=nn1,nn2
4. Add the property dfs.namenode.rpc-address.<nameservice>.<nn> for each NameNode.
dfs.namenode.rpc-address.HAA.nn1=<NN1_fqdn>:8020
dfs.namenode.rpc-address.HAA.nn2=<NN2_fqdn>:8020
dfs.namenode.rpc-address.HAB.nn1=<NN1_fqdn>:8020
dfs.namenode.rpc-address.HAB.nn2=<NN2_fqdn>:8020
5. Add the property dfs.client.failover.proxy.provider.<nameservice> for the remote nameservice.
In cluster A: dfs.client.failover.proxy.provider.HAB=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
In cluster B: dfs.client.failover.proxy.provider.HAA=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
6. Restart the HDFS service.
Once complete, you will be able to run the distcp command using the nameservice, similar to:
hadoop distcp hdfs://HAA/tmp/file1 hdfs://HAB/tmp/
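Putting the cluster A side together, a sketch of the resulting hdfs-site.xml fragment; the FQDNs are placeholders, and cluster B mirrors this with dfs.internal.nameservices=HAB and a dfs.client.failover.proxy.provider.HAA entry:

<!-- cluster A: make the remote nameservice HAB resolvable for distcp/Falcon -->
<property><name>dfs.nameservices</name><value>HAA,HAB</value></property>
<property><name>dfs.internal.nameservices</name><value>HAA</value></property>
<property><name>dfs.ha.namenodes.HAB</name><value>nn1,nn2</value></property>
<property><name>dfs.namenode.rpc-address.HAB.nn1</name><value>nn1.clusterB.example.com:8020</value></property>
<property><name>dfs.namenode.rpc-address.HAB.nn2</name><value>nn2.clusterB.example.com:8020</value></property>
<property>
  <name>dfs.client.failover.proxy.provider.HAB</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>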
... View more
- Find more articles tagged with:
- How-To/Tutorial
- namenode-ha
- solutions
06-15-2016
12:30 PM
@Mukesh Burman I guess you can accept my answer. 🙂
... View more
06-15-2016
05:53 AM
1 Kudo
@Mukesh Burman See if this is what you are looking for. https://cwiki.apache.org/confluence/display/AMBARI/Installing+ambari-agent+on+target+hosts
... View more
06-14-2016
07:34 PM
@kavitha velaga Have you set hadoop.proxyuser.ambari-server.hosts and hadoop.proxyuser.ambari-server.groups to * in core-site, as in the sketch below? Also, please share your Tez view configuration.
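For reference, a sketch of those core-site.xml entries; ambari-server here is the user the Ambari server runs as, so adjust the name if yours differs:

<!-- allow the Ambari server user to impersonate end users for the view -->
<property>
  <name>hadoop.proxyuser.ambari-server.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.ambari-server.groups</name>
  <value>*</value>
</property>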
... View more