Member since: 10-24-2015
Posts: 171
Kudos Received: 379
Solutions: 23
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2700 | 06-26-2018 11:35 PM
 | 4403 | 06-12-2018 09:19 PM
 | 2913 | 02-01-2018 08:55 PM
 | 1475 | 01-02-2018 09:02 PM
 | 6858 | 09-06-2017 06:29 PM
07-12-2018
08:40 PM
Are you running a Distributed Shell application? The default client timeout for DShell is 600 seconds. You can extend the client timeout by passing "-timeout <milliseconds>" in the application launch command.
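For reference, a launch sketch with an extended timeout (the jar path and shell command below are hypothetical; adjust them to your installation):

```shell
# Hypothetical jar path; -timeout is in milliseconds (here 30 minutes).
yarn org.apache.hadoop.yarn.applications.distributedshell.Client \
  -jar /usr/hdp/current/hadoop-yarn-client/hadoop-yarn-applications-distributedshell.jar \
  -shell_command "sleep 1200" \
  -timeout 1800000
```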
06-29-2018
09:21 PM
2 Kudos
@kanna k, you can find the location of the standby NameNode logs in hadoop-env.sh. Look for the HADOOP_LOG_DIR value to find the correct location of the log. Example: export HADOOP_LOG_DIR=/var/log/hadoop/$USER. In this example, the standby NameNode log will be under the /var/log/hadoop/hdfs directory.
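A quick way to check this on the standby NameNode host (the config dir and log path below are typical defaults and may differ on your cluster):

```shell
# Find where hadoop-env.sh points the logs (config dir is an assumption).
grep HADOOP_LOG_DIR /etc/hadoop/conf/hadoop-env.sh
# Then look for the NameNode log file under that directory, e.g.:
ls /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log
```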
06-26-2018
11:35 PM
3 Kudos
@Kumar Veerappan, is umask set properly in your cluster? Refer to the article below for details. https://community.hortonworks.com/content/supportkb/150234/error-path-disk3hadoopyarnlocalusercachesomeuserap.html
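One way to check the effective umask HDFS applies (the expected default is 022):

```shell
# Print the configured HDFS umask; a restrictive value such as 077 can
# break permissions on YARN local/usercache directories.
hdfs getconf -confKey fs.permissions.umask-mode
```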
06-26-2018
09:57 PM
1 Kudo
Here's one more good thread on the HDFS small-file problem. https://community.hortonworks.com/questions/167615/what-is-small-file-problem-in-hdfs.html
06-25-2018
10:43 PM
I have a YARN service app which has two components, Master and Worker. I restarted YARN services and relaunched the service app. I'm noticing that the app launched by YARN only gets the Master component; it did not start any Worker. Can someone please explain why this situation could happen and how to recover from it?
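For anyone hitting the same thing, a sketch of how one might inspect and recover the missing component, assuming the YARN services CLI is available and a hypothetical service named my-service:

```shell
# Inspect which components the service actually launched and their counts.
yarn app -status my-service
# If the Worker component count dropped to 0, flex it back up
# (component name "worker" and target count of 2 are illustrative).
yarn app -flex my-service -component worker 2
```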
Labels:
- Apache YARN
06-12-2018
09:19 PM
2 Kudos
The zookeeper.out file contains the log for the ZooKeeper server. You can refer to the thread below to enable log rotation for ZooKeeper; this way you can avoid overly large log files. https://community.hortonworks.com/questions/39282/zookeeper-log-file-not-rotated.html
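For reference, a rolling-appender sketch for ZooKeeper's log4j.properties (property names follow stock ZooKeeper distributions; verify sizes and paths against your setup):

```properties
# Route the root logger to a rolling file instead of zookeeper.out.
zookeeper.root.logger=INFO, ROLLINGFILE
log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.File=${zookeeper.log.dir}/zookeeper.log
# Cap each file at 10 MB and keep at most 10 rotated files.
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
log4j.appender.ROLLINGFILE.MaxBackupIndex=10
log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n
```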
02-01-2018
08:55 PM
1 Kudo
@Michael Bronson, missing data blocks can be related to data corruption. Use 'hdfs fsck <path> -list-corruptfileblocks -files -locations' to find out which replicas got corrupted. Then, to fix the issue, you can delete the corrupted blocks using 'hdfs fsck / -delete'. I hope you find the thread below useful for handling missing blocks. https://community.hortonworks.com/questions/17917/best-way-of-handling-corrupt-or-missing-blocks.html
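A sketch of the sequence (note that -delete permanently removes the affected files, so confirm first):

```shell
# List corrupt blocks along with the files and locations they belong to.
hdfs fsck / -list-corruptfileblocks -files -blocks -locations
# Destructive: deletes the corrupted files outright; use only after
# confirming no healthy replica or backup exists.
hdfs fsck / -delete
```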
01-05-2018
07:41 PM
2 Kudos
@pbarna, you can set mapreduce.job.queuename=myqueue for a MapReduce job. https://community.hortonworks.com/content/supportkb/49658/how-to-specify-queue-name-submitting-mapreduce-job.html
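For example, with the stock examples jar (jar name, queue name, and paths are illustrative):

```shell
# The generic -D option goes after the program name but before its
# input/output arguments; it routes the job to the "myqueue" queue.
hadoop jar hadoop-mapreduce-examples.jar wordcount \
  -Dmapreduce.job.queuename=myqueue \
  /input /output
```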
01-02-2018
09:02 PM
1 Kudo
There are multiple ways to perform operations on HDFS. You can choose any of the approaches below as per your need.

1) Command line. Most users use the command line to interact with HDFS. The HDFS CLI is easy to use and easy to automate with scripts. However, the HDFS CLI needs the HDFS client installed on the host.

2) Java API. If you are familiar with Java and the Apache APIs, you can use the Java API to communicate with the HDFS cluster.

3) WebHDFS. This is the REST API way of accessing HDFS. This approach does not require the HDFS client to be installed on the host. You can use this API to connect to a remote HDFS cluster too.
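As a quick illustration of the WebHDFS approach (hostname and port are placeholders; the NameNode HTTP port is commonly 50070 on Hadoop 2.x or 9870 on Hadoop 3):

```shell
# List /tmp via the WebHDFS REST API; no HDFS client needed on this host.
curl -s "http://namenode-host:50070/webhdfs/v1/tmp?op=LISTSTATUS"
```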
12-22-2017
07:34 PM
1 Kudo
@Amod Kulkarni, this issue is likely related to a mismatch in the Scala version. A few relevant links: https://stackoverflow.com/questions/25089852/what-is-the-reason-for-java-lang-nosuchmethoderror-scala-predef-arrowassoc-upo https://issues.apache.org/jira/browse/SPARK-5483
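In sbt terms, the fix usually amounts to aligning the application's Scala version with the one Spark was built against (versions below are hypothetical; match your cluster's Spark/Scala pairing):

```scala
// Keep scalaVersion in lockstep with the cluster's Spark build; mark Spark
// as "provided" so the cluster's own jars are used at runtime.
scalaVersion := "2.11.12"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.3.0" % "provided"
```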