Member since: 10-07-2015
Posts: 17
Kudos Received: 7
Solutions: 4

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2787 | 09-15-2021 03:40 AM |
| | 4784 | 02-28-2019 03:31 PM |
| | 56249 | 12-02-2016 03:04 AM |
| | 5977 | 11-23-2016 08:15 AM |
09-15-2021
03:40 AM
1 Kudo
Often this happens because there is a "hidden" character at the end of the file or folder name, for example a line break (\n, \r, etc). Listing the files can give you a clue that this is the case, as the output will usually look strange, with an extra blank line or something similar. You can try running a few commands like the following to see if one of them matches the file:

hdfs dfs -ls $'/path/to/folder\r'
hdfs dfs -ls $'/path/to/folder\n'
hdfs dfs -ls $'/path/to/folder\r\n'

If any of those match, you can delete the offending entry with a similar command. If you have no luck with that, pipe the ls output into "od -c" and it will show the special characters:

hdfs dfs -ls /path/to/folder | od -c
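For completeness, a minimal sketch of the clean-up step, assuming the stray character turned out to be a trailing carriage return and the bad entry is a directory; adjust the escape sequence and the path to whatever od -c actually revealed:

# Confirm the suspect entry exists, then remove it (assumes a trailing \r on a directory).
hdfs dfs -ls $'/path/to/folder\r'
hdfs dfs -rm -r $'/path/to/folder\r'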
02-28-2019
03:31 PM
Without the stack trace, we are going to have a hard time pinning down what is going wrong. With 5.3.1 being pretty old, it could easily be a bug. I wonder if this (the top answer) is causing the stack trace to be suppressed: https://stackoverflow.com/questions/2295015/log4j-not-printing-the-stacktrace-for-exceptions

If you are able to schedule a restart, it could be worth restarting the NN with "-XX:-OmitStackTraceInFastThrow" added to the JVM options to see if we get a stack trace out.

If your key concern is getting the missing blocks back, then you should be able to copy the block file and its corresponding meta file to another DN. The block will be in a given subfolder under its disk, eg:

/data/hadoop-data/dn/current/BP-1308070615-172.22.131.23-1533215887051/current/finalized/subdir0/subdir0/blk_1073741869

In that example the subfolder is "subdir0/subdir0". It does not matter which node you copy it to or which disk, but you must ensure the subfolders are maintained. Then restart the target DN and see if the block moves from missing to not missing when it checks in. I'd suggest trying this with one block to start with, to ensure it works OK.
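As a rough illustration of the copy step (a sketch only; the data directory, block pool ID, subfolders and block number below come from the example above and will differ on your cluster, and the source hostname is hypothetical):

# Recreate the same subfolder layout on the target DN, then copy the block and its .meta file.
BP_DIR=current/BP-1308070615-172.22.131.23-1533215887051/current/finalized/subdir0/subdir0
mkdir -p /data/hadoop-data/dn/$BP_DIR
scp source-dn-host:/data/hadoop-data/dn/$BP_DIR/blk_1073741869* /data/hadoop-data/dn/$BP_DIR/
chown hdfs:hdfs /data/hadoop-data/dn/$BP_DIR/blk_1073741869*
# Restart the DataNode so it block-reports the new replica.
sudo service hadoop-hdfs-datanode restart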
02-28-2019
03:37 AM
Do you get a full stack trace in the Namenode log at the time of the error in the datanode? From the message in the DN logs, it is the NN that is throwing the NullPointerException, so there appears to be something in the block report it does not like. Do all these files with missing blocks have a replication factor of 1, or do they have a replication factor of 3?
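If it helps, one way to check the replication factor of the affected files is with fsck (a sketch; substitute the real path to the files with missing blocks):

# Lists each file and its blocks, including the "repl=" value per block.
hdfs fsck /path/to/affected/dir -files -blocks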
02-09-2017
06:59 AM
You can put the s3 credentials in the s3 URI, or you can just pass the parameters on the command line, which is what I prefer, eg:

hadoop fs -Dfs.s3a.access.key="" -Dfs.s3a.secret.key="" -ls s3a://bucket-name/

It's also worth knowing that if you run the command as given above, it will override any other settings defined in the cluster config, such as core-site.xml etc.
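For reference, a sketch of the in-URI form mentioned above (the key names and bucket are placeholders; note that secrets containing characters like "/" tend to break this form, which is one reason I prefer the -D flags):

hadoop fs -ls s3a://ACCESS_KEY:SECRET_KEY@bucket-name/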
12-02-2016
03:04 AM
2 Kudos
Can you check your /etc/hadoop/conf/yarn-site.xml and ensure the following two parameters are set:

<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
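If those entries were missing or wrong, the NodeManager will need a restart after the edit to pick them up (a sketch, assuming a packaged install with init scripts, as on the quickstart VM):

sudo service hadoop-yarn-nodemanager restart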
11-30-2016
01:26 PM
1 Kudo
Looks like the history directory permissions are wrong. Can you try running this command to reset the permissions, and then try running the job again:

sudo -u hdfs hadoop fs -chmod -R 1777 /user/history
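To see the current permissions before and after the change (a sketch, assuming the default /user/history location):

sudo -u hdfs hadoop fs -ls -d /user/history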
11-30-2016
03:35 AM
1 Kudo
It looks like the nodemanager process is stopped for some reason. Can you try starting it, and also restart the resource manager:

sudo service hadoop-yarn-nodemanager start
sudo service hadoop-yarn-resourcemanager restart

Then see if your job will run?
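Once both daemons are up, the node manager should register with the resource manager and show as RUNNING; a quick way to confirm from the command line:

yarn node -list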
11-29-2016
04:46 AM
From the screenshot showing "all applications", it seems there are no "active nodes". For yarn applications to run, there is a resource manager that accepts the jobs and then allocates containers on the node managers in the cluster. In your case it looks like no node managers are running, and therefore the jobs cannot be assigned and started. Are you running this job on the quickstart VM? Can you check the logs for the node manager and resource manager in /var/log/hadoop-yarn and see if there are any relevant errors there?
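A quick way to scan those logs for problems (a sketch; the exact log file names include the hostname, so the wildcards below are an assumption):

ls -lrt /var/log/hadoop-yarn/
grep -iE "error|exception" /var/log/hadoop-yarn/*nodemanager*.log | tail -50
grep -iE "error|exception" /var/log/hadoop-yarn/*resourcemanager*.log | tail -50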
11-23-2016
08:15 AM
2 Kudos
Hi, if you set the balancer bandwidth in Cloudera Manager, then when the datanodes are started they will have that bandwidth setting for balancing operations. However, using the command line it is possible to change the bandwidth while the datanodes or balancer are running, without restarting them. If you do this, just remember that if you restart any datanodes, the bandwidth setting will revert back to the one set in Cloudera Manager and you will need to run the command again.

To set the balancer bandwidth from the command line and without a restart, you can run the following:

sudo -u hdfs hdfs dfsadmin -setBalancerBandwidth 104857600

If you have HA set up for HDFS, the above command may fail, and you should check which is the active namenode and run the command as follows (substituting the correct hostname for activeNamenode below):

sudo -u hdfs hdfs dfsadmin -fs hdfs://activeNamenode:8020/ -setBalancerBandwidth 104857600

To check this command worked, the following log entries should appear within the Datanode log files:

INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeCommand action: DNA_BALANCERBANDWIDTHUPDATE
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Updating balance throttler bandwidth from 10485760 bytes/s to: 104857600 bytes/s.
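On an HA cluster, one way to find which namenode is currently active is the haadmin command (a sketch; nn1 and nn2 are the NameNode IDs from dfs.ha.namenodes.<nameservice> and may be named differently in your cluster):

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2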