Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2093 | 06-15-2020 05:23 AM |
| | 17442 | 01-30-2020 08:04 PM |
| | 2254 | 07-07-2019 09:06 PM |
| | 8718 | 01-27-2018 10:17 PM |
| | 4913 | 12-31-2017 10:12 PM |
04-23-2018
07:39 PM
Dear Geoffrey, we rebooted twice a few weeks ago, but that did not help (when we reboot we actually remount the disks as well). Regarding "dfs.datanode.failed.volumes.tolerated": rather than setting it to 1, we want to keep it at 0, because we do not want to lose a disk without noticing.
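For reference, the setting discussed above lives in hdfs-site.xml (or under HDFS > Configs in Ambari). This is a sketch of the property as it would appear with the default value of 0, which makes the DataNode refuse to start if any configured volume fails; a value of 1 would let it start with one bad disk:

```xml
<!-- hdfs-site.xml: number of failed data volumes a DataNode tolerates
     before aborting startup. 0 (the default) aborts on any bad volume. -->
<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>0</value>
</property>
```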
04-23-2018
05:09 PM
We have an Ambari cluster, HDP version 2.6.0.1, and we have issues on worker02. According to the log hadoop-hdfs-datanode-worker02.sys65.com.log:
2018-04-21 09:02:53,405 WARN checker.StorageLocationChecker (StorageLocationChecker.java:check(208)) - Exception checking StorageLocation [DISK]file:/grid/sdc/hadoop/hdfs/data/
org.apache.hadoop.util.DiskChecker$DiskErrorException: Directory is not writable: /grid/sdc/hadoop/hdfs/data
Note: from the Ambari GUI we can see that the DataNode on worker02 is down. The log shows "Directory is not writable: /grid/sdc/hadoop/hdfs/data" and then the following: STARTUP_MSG: Starting DataNode
STARTUP_MSG: user = hdfs
STARTUP_MSG: host = worker02.sys65.com/23.87.23.126
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.7.3.2.6.0.3-8
STARTUP_MSG: build = git@github.com:hortonworks/hadoop.git -r c6befa0f1e911140cc815e0bab744a6517abddae; compiled by 'jenkins' on 2017-04-01T21:32Z
STARTUP_MSG: java = 1.8.0_112
************************************************************/
2018-04-21 09:02:52,854 INFO datanode.DataNode (LogAdapter.java:info(47)) - registered UNIX signal handlers for [TERM, HUP, INT]
2018-04-21 09:02:53,321 INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(107)) - Scheduling a check for [DISK]file:/grid/sdb/hadoop/hdfs/data/
2018-04-21 09:02:53,330 INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(107)) - Scheduling a check for [DISK]file:/grid/sdc/hadoop/hdfs/data/
2018-04-21 09:02:53,330 INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(107)) - Scheduling a check for [DISK]file:/grid/sdd/hadoop/hdfs/data/
2018-04-21 09:02:53,331 INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(107)) - Scheduling a check for [DISK]file:/grid/sde/hadoop/hdfs/data/
2018-04-21 09:02:53,331 INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(107)) - Scheduling a check for [DISK]file:/grid/sdf/hadoop/hdfs/data/
2018-04-21 09:02:53,405 WARN checker.StorageLocationChecker (StorageLocationChecker.java:check(208)) - Exception checking StorageLocation [DISK]file:/grid/sdc/hadoop/hdfs/data/
org.apache.hadoop.util.DiskChecker$DiskErrorException: Directory is not writable: /grid/sdc/hadoop/hdfs/data
at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:124)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:99)
at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:128)
at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:44)
at org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$1.call(ThrottledAsyncChecker.java:127)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2018-04-21 09:02:53,410 ERROR datanode.DataNode (DataNode.java:secureMain(2691)) - Exception in secureMain
org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 4, volumes configured: 5, volumes failed: 1, volume failures tolerated: 0
at org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker.check(StorageLocationChecker.java:216)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2583)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2492)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2539)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2684)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2708)
2018-04-21 09:02:53,411 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2018-04-21 09:02:53,414 INFO datanode.DataNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at worker02.sys65.com/23.87.23.126
************************************************************/
We checked that: 1. all files and folders under /grid/sdc/hadoop/hdfs/ are owned by hdfs:hadoop, and that is OK; 2. disk sdc is mounted read-write (rw,noatime,data=ordered), and that is OK. We suspect the hard disk has gone bad. In that case, how do we verify it? Please advise what other options there are to resolve this issue.
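One quick way to narrow this down is to reproduce by hand the writability probe that Hadoop's DiskChecker performs (create a file in the data directory, then delete it). A minimal sketch; the /grid/sdc path is taken from the log above, while the function name and the probe filename are ours:

```shell
#!/bin/sh
# check_writable mimics Hadoop's DiskChecker: try to create and then
# remove a probe file in the given directory.
check_writable() {
  dir="$1"
  probe="$dir/.disk_check_$$"
  if touch "$probe" 2>/dev/null && rm -f "$probe" 2>/dev/null; then
    echo "WRITABLE: $dir"
  else
    echo "NOT WRITABLE: $dir"
    return 1
  fi
}

# Demonstrate on a temporary directory; on worker02 you would run this
# as the hdfs user against /grid/sdc/hadoop/hdfs/data instead.
demo_dir=$(mktemp -d)
check_writable "$demo_dir"
```

If the probe fails even though permissions and the rw mount flag look correct, the disk itself is the likely culprit: `dmesg | grep -i sdc` often shows I/O errors for a dying drive (the filesystem may also have been remounted read-only after an error), and `smartctl -a /dev/sdc` from the smartmontools package reports the drive's SMART health data.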
04-18-2018
02:00 PM
Hi, the problem is that this API does not work on HDP 2.6.4 with Ambari 2.6.1. We have installed many clusters, but when we try this API on the mentioned versions, it does not set the repo, for some unclear reason.
04-16-2018
02:44 PM
@Jay do you have a suggestion for how to install master + worker + kafka on a single node?
04-16-2018
12:32 PM
I ask this because during the downgrade we get "On host master02.sys72.com role JOURNALNODE in invalid state", so I was wondering whether safemode being in the wrong state could cause it. Do you have any idea why JOURNALNODE is in an invalid state?
04-16-2018
12:16 PM
During the HDP upgrade to version 2.6.4, should safemode be ON or OFF?
04-16-2018
08:03 AM
@Jay your steps 1 and 2 are correct; we want to install the "master" + "worker" + "kafka" on one single Linux node.
04-16-2018
06:01 AM
@Jay, we have a problem when we deploy the OVA template. Could you assist with that? https://community.hortonworks.com/questions/186221/cant-deploy-sandbox-hdp-264-ova-template-from-vsph.html
04-15-2018
11:57 AM
@Jay just to be clear, I want to install the sandbox on my existing Red Hat Linux machine (I have already configured the hostname and IP on that local machine), so is this relevant for me?
04-15-2018
11:37 AM
@Jay regarding the sandbox, from where do I download the BP file? As you know, we keep the worker separate from the master machine, so how do we integrate them on one machine?