Member since: 10-21-2015
Posts: 59
Kudos Received: 31
Solutions: 16
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3098 | 03-09-2018 06:33 PM |
| | 2786 | 02-05-2018 06:52 PM |
| | 13976 | 02-05-2018 06:41 PM |
| | 4396 | 11-30-2017 06:46 PM |
| | 1666 | 11-22-2017 06:20 PM |
12-22-2016
06:52 PM
1 Kudo
@Namit Maheshwari Yes, it is possible. A couple of ways you can do this are:
1. From Java code, using the JMX APIs.
2. Connect to namenode:httpport/jmx and get a JSON-based response.
3. Configure a NameNode/DataNode metrics log file, which the daemon will write to and which you can then parse.
I am probably missing some more ways :), hope this helps. A minimal sketch of option 2 follows below.
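A minimal sketch of option 2, assuming the NameNode web UI is on the pre-Hadoop-3.0 default port 50070 (the hostname and the bean name in the qry filter are just examples):

```
# Fetch all JMX beans from the NameNode as JSON (hostname/port are examples)
curl 'http://namenode.example.com:50070/jmx'

# Narrow the output to a single bean, e.g. FSNamesystem metrics
curl 'http://namenode.example.com:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem'
```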
12-19-2016
06:39 PM
@Asma Dhaouadi Thanks for the screenshots. It looks like your DataNode is not fully operational. Can you please check the NameNode UI or Ambari (if you are using it) to make sure that your DataNode is up? If you have Ambari, log in and check the HDFS status page. If you are using HDFS without Ambari, connect to the NameNode HTTP port (50070, if you are running a version older than Hadoop 3.0), and you can also connect to the DataNode HTTP port at DatanodeIP:50075 (again assuming you are not running Hadoop 3.0). A quick command-line check is sketched below.
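A hedged quick check from the command line; the DataNode IP is a placeholder, and 50075 is the pre-Hadoop-3.0 default web port:

```
# List live and dead DataNodes as seen by the NameNode
hdfs dfsadmin -report

# Probe the DataNode web UI directly (example IP; default pre-3.0 port)
curl 'http://<datanode-ip>:50075/jmx'
```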
12-12-2016
01:07 AM
@slachterman Thanks for sharing your experience. @Gerd Koenig Sorry to hear that this is happening quite often. This might be an issue in Ranger, as mentioned by @slachterman. If you have enough details, please feel free to open an Apache Ranger JIRA so that the Ranger team gets a chance to look at it.
12-09-2016
06:44 PM
5 Kudos
@Huahua Wei I am going to presume that the intent of your question is to reduce NameNode downtime while you have HA for the clients. I will answer that first and then follow up with the specific answer to the Safemode question.

The whole point of HA is to minimize NameNode downtime. Here is what you should do: fail over to the standby NameNode. Generally it is very fast; at most a couple of retries from the clients and they will be able to continue working against the cluster without any detectable loss of availability, so most of your YARN jobs should be fine. If this is not happening, some of your ZooKeeper or HDFS configs are not optimal.

If you are specifically asking about Safemode: Safemode is used to make sure that enough DataNodes, and blocks from those DataNodes, have reported in. You can set a lower threshold for this; once the NameNode meets that threshold it will exit Safemode. There is a slider for this threshold in the Ambari UI under the HDFS settings. If you really want to tell the NameNode to skip the whole wait for DataNodes and block reports, you are free to do so; there is a command-line option that forces a Safemode exit.

Neither of these is really good for your cluster. Spending a bit of time while the cluster is starting up is actually a good idea, so that you know your data and nodes are in good shape. But yes, you can reduce the Safemode waiting time via config or via the command line; a sketch of both is below.
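A minimal sketch of both options; the threshold value is only an example, and the property lives in hdfs-site.xml (Ambari exposes it as the slider mentioned above):

```
# Check Safemode status and, if you really want to, force the NameNode out of it
hdfs dfsadmin -safemode get
hdfs dfsadmin -safemode leave

# Or lower the block-report threshold (example value) in hdfs-site.xml:
# dfs.namenode.safemode.threshold-pct = 0.90
```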
12-09-2016
06:27 PM
@Gerd Koenig Does this happen often, or is it just a one-off? Generally this would mean the writing application did not sync the data completely to HDFS. So it looks like you have an incomplete JSON file and Hive is not able to parse it. One way to check for incomplete files is sketched below.
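A hedged way to check whether the file was left incomplete; the path is a placeholder for the table's HDFS directory:

```
# Files reported as open for write (under construction) or with missing blocks
# are a sign the writer never closed or fully synced them.
hdfs fsck /path/to/table -files -blocks -openforwrite
```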
12-06-2016
01:46 AM
As far as I know, we have not backported this change to HDP 2.1 or 2.4.2. There is nothing technically preventing us from doing so; DiskBalancer does not depend on any of the newer 3.0 features.

> If I need to rebalance the disks, do I really need to decommission the node, or can I just stop the DataNode and start it after a while?

Stopping DataNodes does not change the disk usage. If one disk is over-utilized, some writes will fail -- namely when the DataNode picks that disk (it generally uses a round-robin allocation scheme). So you need to make sure data is distributed similarly across all of the disks. That is what DiskBalancer does for you: it computes how much to move based on each disk type (disk, SSD, etc.). A sketch of the DiskBalancer workflow is below.

When you decommission a node, all the blocks on that machine are moved to other nodes. Then you can go onto the node, make sure all the data disks are empty, and add the node back. Then you will need to run the balancer -- not the disk balancer -- to move data back onto this node. Sorry it is so painful.
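For versions that do ship DiskBalancer, a minimal sketch of the workflow; the hostname is a placeholder, and the plan file path comes from the output of the -plan step:

```
# Compute a plan describing how much data to move between the node's disks
hdfs diskbalancer -plan datanode1.example.com

# Execute the generated plan on that DataNode (use the path printed by -plan)
hdfs diskbalancer -execute /system/diskbalancer/<timestamp>/datanode1.example.com.plan.json

# Check progress
hdfs diskbalancer -query datanode1.example.com
```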
12-05-2016
05:51 PM
@PJ Depending on your situation there are many solutions. This was a fundamental issue in HDFS for a long time. Very recently we have fixed it: there is a new tool called DiskBalancer, which ships with trunk. If you want to see the details of the design and the fix, please look at https://issues.apache.org/jira/browse/HDFS-1312. It essentially allows you to create a plan file that describes how data will be moved from disk to disk, and then you can ask a DataNode to execute it. Unfortunately, this tool has not yet shipped as part of HDP; we will be shipping it soon. Presuming you are running an older version of HDFS and you have many DataNodes in the cluster, you can decommission the full node and re-add it (a sketch of that workflow is at the end of this post). However, the speed of HDFS replication is throttled, so if you want this to happen fast, you may have to set these parameters in your cluster:
dfs.namenode.replication.work.multiplier.per.iteration = 10
dfs.namenode.replication.max-streams = 50
dfs.namenode.replication.max-streams-hard-limit = 100

Last, the option that I would least advise you to take, and which I am writing down only for the sake of completeness, is to follow what the Apache documentation suggests: https://wiki.apache.org/hadoop/FAQ#On_an_individual_data_node.2C_how_do_you_balance_the_blocks_on_the_disk.3F Please note, this is a dangerous action, and unless you really know what you are doing it can lead to data loss. So please, please make sure that you can restore the machine to its earlier state if you really decide to go this route.
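A hedged sketch of the decommission-and-re-add approach mentioned above; the excludes file path and hostname are examples, and the real path is whatever dfs.hosts.exclude points to in your hdfs-site.xml:

```
# Decommission the DataNode: add it to the excludes file, then refresh
echo 'datanode1.example.com' >> /etc/hadoop/conf/dfs.exclude
hdfs dfsadmin -refreshNodes

# Once decommissioning finishes, wipe the node's data directories, remove it
# from the excludes file, refresh again, and rebalance the cluster
hdfs dfsadmin -refreshNodes
hdfs balancer
```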
11-14-2016
05:16 PM
This issue is fixed in Apache: https://issues.apache.org/jira/browse/HDFS-9184. However, the fix might only appear in a future Apache release, as it landed very recently.
10-20-2016
06:26 PM
From the logs it looks like the issue is that you are running out of dispatcher threads on the balancer.
16/10/19 16:34:12 WARN balancer.Dispatcher: No mover threads available: skip moving blk_1457593679_384005217 with size=104909643 from X.X.X.X:1019:DISK to X.X.X.X:1019:DISK through X.X.X.X:1019

The issue is due to this line:

16/10/19 16:34:11 INFO balancer.Balancer: dfs.balancer.dispatcherThreads = 200 (default=200)

Please increase that to a larger value (I don't know the size of your cluster or the DataNodes' config). Something like this is what I would do: -Ddfs.balancer.moverThreads=10000 -Ddfs.balancer.dispatcherThreads=10000

Thanks, Anu
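A hedged sketch of passing those flags when launching the balancer; the thread counts are only examples and should be tuned to your cluster size:

```
# Run the balancer with larger mover/dispatcher thread pools (example values)
hdfs balancer -Ddfs.balancer.moverThreads=10000 -Ddfs.balancer.dispatcherThreads=10000
```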
10-07-2016
06:42 PM
> or at least co-locate two JN with NN, and the other JN on an unrelated host.

That is definitely bad advice. We have 3 JournalNodes so that we get high availability by writing to a majority. However, if we keep 2 JNs on the same machine, then we can lose both at the same time, which is exactly what we want to avoid in the first place.
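For reference, a hedged sketch of the usual layout, with each JournalNode on a separate host; the hostnames and nameservice name are placeholders:

```
# dfs.namenode.shared.edits.dir in hdfs-site.xml, one JournalNode per host
# (8485 is the default JournalNode RPC port)
# qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485/mycluster
```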