Member since: 04-04-2018
Posts: 80
Kudos Received: 32
Solutions: 1

My Accepted Solutions

Title | Views | Posted |
---|---|---|
 | 8441 | 10-28-2017 05:13 AM |
04-14-2016
07:01 AM
@Kuldeep Kulkarni How can I find which disk is marked as bad?
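One way to check (a sketch, not from the thread; the node ID, log path, and log message wording are assumptions): the ResourceManager CLI exposes each NodeManager's health report, which names any local dirs marked as bad.

```shell
# Sketch: assumes a running YARN cluster; node ID and paths are placeholders.
# List all NodeManagers and their health state:
yarn node -list -all

# The per-node status includes a Health-Report line that names any
# local-dirs/log-dirs marked as bad (substitute your own node ID):
yarn node -status node1.example.com:45454

# The NodeManager log also records disk-health-checker decisions:
grep -i "bad" /var/log/hadoop-yarn/yarn/*nodemanager*.log
```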
04-14-2016
06:33 AM
1 Kudo
@Kuldeep Kulkarni and @Sagar Shimpi The issue has been resolved by changing the following parameter in yarn-site.xml: yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage. Previously it was 90%; I changed it to 99, and the job is now in the RUNNING state. Could you please shed some light on this parameter?
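For reference, the change described above corresponds to a yarn-site.xml entry like this (a sketch; the value 99 is the one reported in the post). This threshold is the disk utilization percentage above which the NodeManager's disk health checker marks a local dir as bad and stops scheduling on it.

```xml
<!-- yarn-site.xml: raise the per-disk utilization threshold the
     disk health checker uses before marking a local dir as bad.
     The post reports the previous value was 90. -->
<property>
  <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
  <value>99</value>
</property>
```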
04-13-2016
02:36 PM
@Sagar Shimpi yarn.scheduler.capacity.maximum-am-resource-percent=0.2. Kindly find the attached files for reference: yarn-site.xml, mapred-site.xml.
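The property mentioned above is normally set in capacity-scheduler.xml rather than yarn-site.xml (a sketch; the 0.2 value is the one quoted in the post). It caps the fraction of cluster resources available for running ApplicationMasters; if it is too low, jobs can sit in the ACCEPTED state waiting for an AM container.

```xml
<!-- capacity-scheduler.xml (sketch): maximum fraction of cluster
     resources usable for ApplicationMasters. Too small a value can
     leave jobs stuck in ACCEPTED waiting for an AM container. -->
<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>0.2</value>
</property>
```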
04-13-2016
12:59 PM
2 Kudos
Hi Team, a job hangs while importing tables via Sqoop, showing the following message in the web UI: "ACCEPTED: waiting for AM container to be allocated, launched and register with RM". Kindly suggest.
Labels:
- Apache Hadoop
- Apache YARN
03-09-2016
08:24 AM
2 Kudos
Hi Team, I am trying to upload a file larger than 64 MB (approximately 1 GB) via the File Browser in HUE. The file uploads but is subsequently truncated to 64 MB. From the command line, however, it uploads successfully. Is there any configuration to change in HUE? Any ideas what is happening and what I have to do to resolve this?
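A quick way to confirm the truncation (a sketch; the file and HDFS paths are placeholders, not from the post):

```shell
# Sketch: compare the local file size with what actually lands in HDFS.
# Paths are placeholders.
ls -lh /tmp/bigfile.dat

# Upload from the command line (reported to work in the post):
hdfs dfs -put /tmp/bigfile.dat /user/hdfs/bigfile.dat

# Check the size HDFS sees; after a HUE File Browser upload of the
# same file, repeat this to see whether it was cut to 64 MB:
hdfs dfs -du -h /user/hdfs/bigfile.dat
```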
Labels:
- Cloudera Hue
03-08-2016
02:49 PM
1 Kudo
Thanks Dave. The issue was resolved after making the NameNode where HUE is installed the Active NameNode.
03-08-2016
02:15 PM
2 Kudos
Hi Team, I am facing a WebHDFS exception in the File Browser while accessing it via HUE. Error: Max retries exceeded with url: /webhdfs/v1/user/hdfs?doas=hdfs&user.name=hue&op=GETFILESTATUS (Caused by <class 'socket.error'>: [Errno 111] Connection refused). Kindly check the attached screenshot for reference.
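To narrow this down (a sketch; the hostname is a placeholder, and port 50070 is the default NameNode HTTP port in HDP 2.x, an assumption not stated in the post), the same WebHDFS endpoint HUE is calling can be hit directly from the HUE host:

```shell
# Sketch: call the WebHDFS GETFILESTATUS operation directly.
# Hostname and port are placeholders/assumptions.
curl -i "http://namenode.example.com:50070/webhdfs/v1/user/hdfs?op=GETFILESTATUS&user.name=hue"
# A "Connection refused" here points at the NameNode address/port
# configured in HUE (webhdfs_url) or at the NameNode web UI not listening.
```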
Labels:
- Cloudera Hue
03-03-2016
07:09 AM
1 Kudo
@Alan Gates This is continued from the previous post: I have made the required changes in hive-site.xml on the datanode, but when I restart the Hive service from Ambari, the changes are not reflected in hive-site.xml; it reverts to the previous working configuration.
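Ambari manages hive-site.xml itself and rewrites hand edits when the service restarts, which matches the behaviour described above. A hedged sketch of setting a property through the configs.sh helper that Ambari Server ships with, so the change is stored in Ambari and survives restarts (the host, cluster name, property, and value below are all placeholders):

```shell
# Sketch: Ambari's bundled config helper; all arguments are placeholders.
# Properties set this way are written by Ambari on the next restart,
# unlike direct edits to hive-site.xml.
/var/lib/ambari-server/resources/scripts/configs.sh \
  set ambari.example.com MyCluster hive-site \
  some.hive.property some_value
```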
03-01-2016
01:23 PM
1 Kudo
@Neeraj Sabharwal At present, I have an HDP 2.3 cluster. Hive, MySQL, HUE, and the metastore are installed on the NameNode.
For Hive redundancy, all of these, including the MySQL data and Hive metadata (copied from the NameNode), are available on one of the datanodes in the cluster. Now, when the NameNode server goes down, how can I link a new host to the Hive metadata I have copied?
How can I do this via Ambari, and also without Ambari?
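Without Ambari, pointing Hive at the copied metastore database comes down to editing the JDBC settings in hive-site.xml on the new host and restarting the metastore service (a sketch; the hostname, database name, and user below are placeholders, not from the thread):

```xml
<!-- hive-site.xml (sketch): point the Hive metastore at the MySQL
     instance on the standby host. Hostname/database/user are
     placeholders. Restart the metastore after changing these. -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://standby-host.example.com:3306/hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
```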
03-01-2016
07:42 AM
@Neeraj Sabharwal How do I do this manually, in the absence of Ambari?