Member since: 01-25-2017
Posts: 119
Kudos Received: 7
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 13788 | 04-11-2017 12:36 PM
 | 4160 | 01-18-2017 10:36 AM
11-06-2017
09:09 AM
@Deepesh, @Tamil Selvan K thank you both for your responses. I thought it would be installed as a separate component, the way Spark 2 came alongside Spark 1.6. So that means Hive 2 will replace the old 1.2, right? And what about the other components listed in my question? How am I supposed to add them?
11-03-2017
02:37 PM
Hello, I have upgraded HDP from 2.5.3 to 2.6.2.14. Getting the benefits of Hive 2.1.0 was one of the motivations in my upgrade plan. However, even though I kept myself optimistic at every step, Hive 2 was not there when the upgrade was eventually finalised. Furthermore, it does not appear when I try to add new services. What am I missing? Info: CentOS 7.3.1611; cluster of 4 VMs; Ambari was upgraded from 2.4.2 to 2.5.2. Services I am able to add: Ranger 0.7.0, Ranger KMS 0.7.0, Druid 0.10.1, Storm 1.1.0. Services I expected to see but that are missing: Apache Calcite 1.2.0, Apache DataFu 1.3.0, Apache Hive 2.1.0, Apache Phoenix 4.7.0, Cascading 3.0.0, Hue 2.6.1
Labels:
- Apache Hive
08-02-2017
10:45 AM
Is there any config parameter to make failing over to rm2 faster? Is the config parameter below effective in this procedure? yarn.resourcemanager.connect.retry-interval.ms=30000
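For reference, the parameter named above would sit in yarn-site.xml as below; whether it actually shortens the failover depends on the client failover policy configured for RM HA, so treat this as a sketch rather than a verified tuning:

```xml
<!-- Retry interval (ms) for connecting to the ResourceManager -->
<property>
  <name>yarn.resourcemanager.connect.retry-interval.ms</name>
  <value>30000</value>
</property>
```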
07-26-2017
08:47 AM
Thank you so much, you are awesome! 🙂 I only expected to confirm whether the behaviour I see is normal, but your explanation was like teaching me to fish instead of giving me one: I have learned the procedure and how it works. Thanks again! 😄
07-25-2017
01:21 PM
Follow-up comment... Any comments?
07-21-2017
09:02 AM
Can you give some details? By multi-level, do you mean you are trying to import files from multiple folders? Can you share a sample directory or directories?
07-21-2017
08:56 AM
Hello, I am checking JMX metrics periodically to monitor cluster health. When I checked my monitoring platform, I saw that it is updated too late. The case is a dead datanode: I stop one of the datanode services in Ambari and expect the value below to change from 0 to 1: http://namenodeaddress:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState {
"name" : "Hadoop:service=NameNode,name=FSNamesystemState",
"modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
...
"NumDeadDataNodes" : 0,
...
...
} It was updated 6 minutes later, which is too long a delay to take action on. However, when I start the service again, it is updated from 1 to 0 as soon as the service starts. Can someone check whether this is the normal update time? PS: I know Ambari is faster to detect it; it probably uses another method to detect dead nodes. I need to verify this endpoint because I will continue parsing other metrics from it. Thanks in advance.
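A minimal sketch of the parsing step described above. The URL and the FSNamesystemState bean name come from the post; the function name and the sample payload are illustrative. The JMX servlet wraps matching MBeans in a "beans" array, which the excerpt omits:

```python
import json

BEAN = "Hadoop:service=NameNode,name=FSNamesystemState"

def dead_datanodes(jmx_payload: str) -> int:
    """Extract NumDeadDataNodes from a NameNode JMX response body."""
    doc = json.loads(jmx_payload)
    for bean in doc.get("beans", []):
        if bean.get("name") == BEAN:
            return bean["NumDeadDataNodes"]
    raise KeyError("FSNamesystemState bean not found")

# Sample payload mirroring the excerpt in the post (not a live response)
sample = json.dumps({"beans": [{
    "name": BEAN,
    "modelerType": "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
    "NumDeadDataNodes": 0,
}]})

print(dead_datanodes(sample))  # 0 while all datanodes are alive
```

In practice the payload would come from fetching the /jmx URL above on a schedule (e.g. with urllib.request) and alerting when the returned count rises above 0.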
Labels:
- Apache Hadoop
05-25-2017
04:24 PM
1 Kudo
Hello, I have the exact same question, but I couldn't associate the listed deep learning applications with the use of HDP. How am I supposed to run any of them on my Hadoop cluster? In what practical, infrastructural way are DL applications and Hadoop applications related? Thanks in advance...
04-19-2017
01:24 PM
Thank you 🙂
04-18-2017
09:31 AM
Hi, to clarify the question I will illustrate the case. Let's name the datanodes [dnode1, dnode2, dnode3, dnode4, dnode5, dnode6, dnode7, dnode8, dnode9]. I don't want a block to have all of its replicas on dnode1, dnode2 and dnode3, because I have to turn those three off at once for maintenance. Is there any replication setting in HDFS that lets me specify replication targets instead of random nodes, like a replication-group definition?
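One existing mechanism that resembles a replication-group definition is HDFS rack awareness: a topology script (pointed to by net.topology.script.file.name in core-site.xml) maps hosts to "racks", and the default block placement policy will not put all replicas of a block on one rack. A sketch of such a script, where the hostnames come from the example above and the grouping itself is an assumption:

```python
#!/usr/bin/env python
"""Hypothetical HDFS topology script: treat dnode1-3, which go down
together for maintenance, as one 'rack'. Hadoop invokes the script
with one or more hostnames/IPs and reads back one rack per line."""
import sys

# Assumed grouping: the three co-maintained nodes share a rack;
# every other node lands in a second rack.
GROUP_A = {"dnode1", "dnode2", "dnode3"}

def rack_of(host: str) -> str:
    return "/rack-a" if host in GROUP_A else "/rack-b"

if __name__ == "__main__":
    for host in sys.argv[1:]:
        print(rack_of(host))
```

With replication factor 3 and this mapping, at most two replicas of a block land in /rack-a, so stopping dnode1-3 together should still leave at least one live replica. This is only the closest built-in mechanism I know of, not a direct per-block target list.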
Labels:
- Apache Hadoop