Member since
01-04-2016
409
Posts
313
Kudos Received
35
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 5704 | 01-16-2018 07:00 AM
 | 1885 | 09-13-2017 06:17 PM
 | 3746 | 09-13-2017 05:58 AM
 | 2384 | 08-28-2017 07:16 AM
 | 4155 | 05-11-2017 11:30 AM
12-28-2016
04:28 PM
@Ashnee Sharma There was an issue, and you submitted a separate question for it. It would be good to document it here as well, for the sake of others who may encounter a similar problem. Please post it. I found it: based on the original response, you encountered an issue and then asked this question: https://community.hortonworks.com/questions/74245/how-to-disable-pagination-for-ambari-ldap.html
11-17-2016
10:54 AM
1 Kudo
Thanks everyone. I resolved this issue with the following steps: 1) Took a backup of the Hive database from MySQL. 2) Removed Hive from that node. 3) Added a new node and installed Hive on it. 4) Restored the Hive database to MySQL. And it works. 🙂
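For anyone repeating this, the backup and restore steps can be sketched as shell commands. This is a minimal sketch, not the exact commands from the post: the database name (`hive`), the MySQL user, and the hostnames are assumptions, and steps 2 and 3 are done through Ambari rather than the CLI.

```
# 1) Back up the Hive metastore database from MySQL
#    (database name "hive" and user "hiveuser" are assumed)
mysqldump -u hiveuser -p hive > hive_metastore_backup.sql

# 2) and 3) Remove Hive from the old node and install it on the
#    new node via Ambari (no single CLI command for this part).

# 4) Restore the Hive metastore database for the new installation
mysql -u hiveuser -p hive < hive_metastore_backup.sql
```

After the restore, restart the Hive services so the metastore picks up the restored tables.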
11-02-2016
06:46 PM
There is also a good amount of detail covering all of the knobs and dials related to configuring the Capacity Scheduler here. Note that in the latest versions of Ambari there is a Capacity Scheduler View where you can configure the queues graphically instead of getting into the weeds of the XML.
10-26-2016
08:11 PM
1 Kudo
@Ashnee Sharma Take a look at this HCC post: https://community.hortonworks.com/questions/7165/how-to-copy-hdfs-file-to-aws-s3-bucket-hadoop-dist.html It outlines options to move HDFS data to S3.
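The usual tool for this is DistCp with the s3a connector. A sketch of a typical invocation is below; the bucket name, paths, and keys are illustrative placeholders, not values from the linked post, and in practice you would normally put the credentials in core-site.xml or a credential provider rather than on the command line.

```
# Copy an HDFS directory to an S3 bucket with DistCp (s3a connector).
# Bucket, paths, and keys are placeholders.
hadoop distcp \
  -Dfs.s3a.access.key=YOUR_ACCESS_KEY \
  -Dfs.s3a.secret.key=YOUR_SECRET_KEY \
  hdfs:///data/source s3a://my-bucket/backup/
```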
11-08-2016
07:56 AM
1 Kudo
The output of `hdfs getconf -confkey "dfs.namenode.https-address"` was `0.0.0.0:50470`. I changed 0.0.0.0 to the hostname in hdfs-site.xml and the issue is resolved.
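For anyone hitting the same problem, the change amounts to an hdfs-site.xml entry like the following sketch (the hostname is a placeholder; keep your actual NameNode host and port):

```
<property>
  <name>dfs.namenode.https-address</name>
  <!-- replace the 0.0.0.0 wildcard with the actual NameNode hostname -->
  <value>namenode.example.com:50470</value>
</property>
```

After editing, restart HDFS (or the NameNode) so the new address takes effect.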
10-13-2016
10:48 AM
1 Kudo
Thanks, I got the solution. I copied the MySQL connector JAR and the issue is resolved: cp -r /usr/lib/hive/lib/mysql-connector-java.jar /usr/share/java/
04-05-2017
03:12 PM
@Constantin Stanca I thought the proper way to do maintenance on a data node is to decommission it, so that it can do the following:
Data Node - safely replicates the HDFS data to other DNs
Node Manager - stops accepting new job requests
Region Server - turns on drain mode
In an urgent situation, I could agree with your suggestion. However, please advise on the right approach in a scenario where you have the luxury of choosing the maintenance window.
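For completeness, the HDFS side of a decommission is typically driven by an exclude file plus a refresh. A minimal sketch, assuming `dfs.hosts.exclude` in hdfs-site.xml points at `/etc/hadoop/conf/dfs.exclude` (the path and hostname here are assumptions; Ambari's "Decommission" action does the equivalent for you):

```
# Add the node to the exclude file referenced by dfs.hosts.exclude
echo "dn1.example.com" >> /etc/hadoop/conf/dfs.exclude

# Tell the NameNode to re-read the include/exclude lists; the node
# moves to "Decommission In Progress" while its blocks are
# re-replicated, then to "Decommissioned"
hdfs dfsadmin -refreshNodes

# Watch progress
hdfs dfsadmin -report
```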
10-11-2016
01:23 AM
4 Kudos
@Smart Solutions The two main options for replicating the HDFS structure are Falcon and distcp. The distcp command is not very feature rich: you give it a path in the HDFS structure and a destination cluster, and it will copy everything to the same path on the destination. If the copy fails, you will need to start it again, etc. Another method for maintaining a replica of your HDFS structure is Falcon. It offers more data movement options, and you can more effectively manage the lifecycle of all of the data on both sides. If you're moving Hive table structures, there is some more complexity in making sure the tables are created on the DR side, but moving the actual files is done the same way. You excluded distcp as an option, so I suggest looking at Falcon. Check this: http://hortonworks.com/hadoop-tutorial/mirroring-datasets-between-hadoop-clusters-with-apache-falcon/ +++++++ If any response addressed your question, please vote and accept the best answer.
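To make the distcp comparison concrete, a basic cluster-to-cluster copy looks like the sketch below. The cluster names and paths are placeholders; `-update` and `-delete` are the usual flags for keeping a replica in sync on repeated runs.

```
# One-shot copy of /data to the same path on the DR cluster
hadoop distcp hdfs://active-nn:8020/data hdfs://dr-nn:8020/data

# Incremental sync: copy only changed files, and remove files on the
# destination that were deleted at the source
hadoop distcp -update -delete \
  hdfs://active-nn:8020/data hdfs://dr-nn:8020/data
```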