Member since: 10-24-2015
Posts: 207
Kudos Received: 18
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4444 | 03-04-2018 08:18 PM |
| | 4333 | 09-19-2017 04:01 PM |
| | 1809 | 01-28-2017 10:31 PM |
| | 977 | 12-08-2016 03:04 PM |
01-24-2017
08:17 PM
We have an HDP 2.4.2 cluster experiencing TEZ-3297 issues with AM deadlock. Support is suggesting that I either upgrade to 2.5.0 or apply a hotfix. On what basis do I decide which to choose when I have time constraints? Suppose you have existing jobs on 2.4.2 and you upgrade to 2.5.0: to verify that the same jobs run the same way without issues on 2.5.0, how long is the validation period usually? That is, how long before you can say there are no issues with the jobs? Any suggestions? Thanks.
01-20-2017
09:00 PM
Hi, I am using an HDP 2.4.2 cluster. Since yesterday, heap usage on a few of my datanodes has stayed high continuously, even when no jobs are running. I have seen this article: https://community.hortonworks.com/articles/74076/datanode-high-heap-size-alert.html but I'm not sure where exactly to make these changes. Do I go to the hadoop-env template in Ambari and change HADOOP_DATANODE_OPTS=...? And once I do, do I need to restart the HDFS service? Thanks.
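For anyone with the same question: on an Ambari-managed cluster the usual place is Ambari > HDFS > Configs > Advanced hadoop-env > hadoop-env template, followed by a restart of the DataNodes. A minimal sketch of the kind of change the linked article describes; the heap sizes and GC flags below are placeholder values for illustration, not recommendations:

```shell
# hadoop-env template (Ambari > HDFS > Configs > Advanced hadoop-env)
# Example only: -Xms/-Xmx and the CMS occupancy flags are placeholder
# values; size them for your own datanodes.
export HADOOP_DATANODE_OPTS="-server -XX:+UseConcMarkSweepGC \
  -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly \
  -Xms4096m -Xmx4096m \
  ${HADOOP_DATANODE_OPTS}"
```

After saving, Ambari marks HDFS as needing a restart; the new JVM options only take effect once the DataNode processes are restarted.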
Labels: Apache Hadoop
01-20-2017
08:42 PM
@jss @Rahul Buragohain I have the same issue on my HDP 2.4.2 cluster. Where exactly do I change these parameters? I see them in the hadoop-env template:

SHARED_HADOOP_NAMENODE_OPTS="-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile={{hdfs_log_dir_prefix}}/$USER/hs_err_pid%p.log -XX:NewSize={{namenode_opt_newsize}} -XX:MaxNewSize={{namenode_opt_maxnewsize}} -Xloggc:{{hdfs_log_dir_prefix}}/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms{{namenode_heapsize}} -Xmx{{namenode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT"

export HADOOP_NAMENODE_OPTS="${SHARED_HADOOP_NAMENODE_OPTS} -XX:OnOutOfMemoryError=\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 ${HADOOP_NAMENODE_OPTS}"

export HADOOP_DATANODE_OPTS="-server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/$USER/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms{{dtnode_heapsize}} -Xmx{{dtnode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_DATANODE_OPTS}"

If this is the right file, should I just add the mentioned parameters to HADOOP_DATANODE_OPTS? And do I need to restart the HDFS service? Thanks.
01-20-2017
08:04 PM
@Sagar Shimpi I have had this issue on my HDP 2.4.2 cluster since midnight; some datanodes have shown high heap usage for more than 10 hours now. I see your resolution, but can you be more specific about where to change these parameters? Should I change them in hadoop-env.sh, and how?
01-17-2017
11:52 PM
Hi all, I am trying to upgrade from HDP 2.1 (Ambari 1.6) to 2.4.2 (Ambari 2.2.2.0). The upgrade instructions say I have to contact Hortonworks Support for this:

| Minor | HDP 2.1 | HDP 2.2 or 2.4 | Contact Hortonworks Support for information on performing a HDP 2.1 to HDP 2.2 or 2.4 upgrade. |

Does anybody have instructions, or should I just contact Support? Thanks.
Labels: Hortonworks Data Platform (HDP)
01-10-2017
09:41 PM
@Shihab That worked for me, thanks so much. I also had to delete /system/diskbalancer.id before it would run successfully, but for some reason I have to do this every time I run the balancer.
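For anyone hitting the same lock file: a command sketch of the workaround described above (the file path is taken from this thread; the balancer leaves an ID/lock file in HDFS and refuses to start if a stale one is left behind). This runs against a live cluster, so it is a sketch rather than something to copy blindly:

```shell
# Remove the stale balancer ID file left in HDFS (path as reported above);
# -skipTrash deletes it outright instead of moving it to .Trash.
hdfs dfs -rm -skipTrash /system/diskbalancer.id

# Rerun the balancer; -threshold 10 is just an example value.
hdfs balancer -threshold 10
```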
01-09-2017
07:12 PM
@Kuldeep Kulkarni I see the error on HDP 2.4.2 with Ambari 2.2.2.0 as well, and HA is enabled. As per the fix, it should be resolved in this version of Ambari, right? Do you know why this error could be persisting?
12-26-2016
10:39 PM
Hi, I would like to understand how inter-process communication between Hadoop nodes happens. I know it uses remote procedure calls (RPC), but does it need passwordless SSH? If passwordless SSH is not required, how does RPC work, especially when data is written to one datanode and that datanode in turn writes the same data (replication) to another datanode? And how does this work when passwordless SSH is set up? Thanks for your answers.
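For readers landing on this question later: Hadoop daemons talk to each other over plain TCP (HDFS RPC plus a streaming data-transfer protocol for block replication), not over SSH; passwordless SSH is only used by the start/stop helper scripts such as start-dfs.sh. A sketch of the ports involved, using the Hadoop 2.x defaults from hdfs-default.xml:

```shell
# DataNode-to-DataNode replication pipelines run over TCP sockets, not SSH.
# Hadoop 2.x default ports (from hdfs-default.xml / core-default.xml):
#   dfs.datanode.address       0.0.0.0:50010   (block data transfer)
#   dfs.datanode.ipc.address   0.0.0.0:50020   (DataNode RPC)
#   NameNode RPC (fs.defaultFS)         :8020
# On a datanode you can confirm these listeners (nothing involves port 22):
netstat -tlnp 2>/dev/null | grep -E ':(50010|50020|8020)' || true
```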
Labels: Apache Hadoop
12-19-2016
09:49 PM
Recently I heard that Hortonworks doesn't support Hue anymore. Is that true?
Labels: Cloudera Hue
12-14-2016
03:20 PM
Thanks for the reply. I can wait until the network issue is resolved. But do you think I could create a repo from these individually downloaded RPMs and store it on the same repo server that is already accessible? Is that a good idea? Thanks.
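In case it helps anyone later: yes, individually downloaded RPMs can be served from the same web server as the existing repo. A minimal sketch, where the directory paths, repo id, and URL are all hypothetical, and it assumes createrepo is installed on the repo host:

```shell
# On the repo server (hypothetical paths):
mkdir -p /var/www/html/hdp-extras
cp /tmp/downloaded-rpms/*.rpm /var/www/html/hdp-extras/
createrepo /var/www/html/hdp-extras      # generates the repodata/ metadata

# On each cluster node, point yum at the new repo (hypothetical URL):
cat > /etc/yum.repos.d/hdp-extras.repo <<'EOF'
[hdp-extras]
name=Local HDP extras
baseurl=http://reposerver.example.com/hdp-extras
enabled=1
gpgcheck=0
EOF
yum clean all && yum repolist
```

Re-running createrepo after adding new RPMs keeps the metadata current; the cluster nodes only need the one-time .repo file.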