Member since: 08-16-2016
Posts: 48
Kudos Received: 9
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 5120 | 12-28-2018 10:21 AM
 | 6089 | 08-28-2018 10:58 AM
 | 3360 | 10-18-2016 11:08 AM
 | 3984 | 10-16-2016 10:13 AM
03-28-2019
01:25 PM
Can you provide more information on the reporting-load issue (for low-latency operations) when we have a DataNode with 100 TB+ of storage? We need an archive node for HDFS storage purposes only; no YARN/Spark will run on it. It will only store data according to the storage migration policy. The node's network and storage I/O bandwidth is considered able to handle the larger storage size.
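For reference, a sketch of how we plan to wire up the archive node (the volume path and the /cold directory are illustrative):
## In hdfs-site.xml on the archive node, tag its volumes with the ARCHIVE storage type:
##   dfs.datanode.data.dir = [ARCHIVE]/data/archive01
## Then set a cold storage policy on the data and run the mover to migrate the blocks:
$ hdfs storagepolicies -setStoragePolicy -path /cold -policy COLD
$ hdfs mover -p /cold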
01-10-2019
06:30 PM
Without more context: go to the YARN ResourceManager web UI, find the failed job corresponding to the DistCp, and drill into it to find the failed map task (DistCp jobs are map-only). You should be able to find out more in the task log.
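If the web UI is not handy, the YARN CLI can fetch the same logs once the job finishes (requires log aggregation to be enabled; the application ID below is a placeholder):
$ yarn logs -applicationId application_1546000000000_0001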
12-31-2018
08:37 PM
Thank you for that insight. I will mark your original post as accepted and maybe update the post later if we have any new information to share.
08-29-2018
03:29 AM
@Matt_ I can give you two easy steps that may reduce your burden.
1. List the valid Kerberos principals:
$ cd /var/run/cloudera-scm-agent/process/<pid>-hdfs-DATANODE
$ klist -kt hdfs.keytab
## klist will list the valid Kerberos principals in the format "hdfs/<NODE_FQDN>@<OUR_REALM>"
2. kinit with one of the principals listed above:
$ kinit -kt hdfs.keytab <copy-paste any one of the hdfs principals from the klist output>
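After the kinit, you can verify the ticket was obtained with:
$ klist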
08-02-2018
10:38 AM
Besides Hadoop and the JVM, would you please also check the hardware? Specifically, the JournalNode's volume may be slow (check the JournalNode log for messages indicating a slow write), or the network connection may be the bottleneck.
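A quick way to sanity-check the JournalNode volume's synchronous write latency (the edits directory path below is illustrative):
$ dd if=/dev/zero of=/data/jn/edits/latency.test bs=512 count=1000 oflag=dsync
$ rm /data/jn/edits/latency.test
## dd prints the elapsed time; if 1000 synchronous 512-byte writes take more than a few seconds, the volume is likely too slow for edit logging.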
07-30-2018
07:48 PM
Have you followed the solution proposed above? Wherever you are writing into the cluster from, you will face this error unless the client host can communicate with all of your DataNode hosts and their ports.
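One way to check reachability from the client host (the hostname is a placeholder, and 50010 is the default DataNode data-transfer port on CDH 5; your cluster may use a different one):
$ nc -vz dn1.example.com 50010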
07-06-2018
02:04 PM
Hi rlopez, You might try this command to test your configuration:
$ hadoop jar <hadoop-common jar> org.apache.hadoop.security.HadoopKerberosName rlopez@PRE.FINTONIC.COM
Replace <hadoop-common jar> with the path to your hadoop-common library, for example /opt/cloudera/parcels/CDH/lib/hadoop/hadoop-common-2.6.0-cdh5.15.1.jar. You should then get the following output:
18/07/06 14:02:05 INFO util.KerberosName: No auth_to_local rules applied to rlopez@PRE.FINTONIC.COM
Name: rlopez@PRE.FINTONIC.COM to rlopez@PRE.FINTONIC.COM
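If you then need rlopez@PRE.FINTONIC.COM to map to the short name rlopez, a minimal sketch of an auth_to_local rule in core-site.xml (the rule itself is illustrative, not taken from your cluster):
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
RULE:[1:$1@$0](rlopez@PRE\.FINTONIC\.COM)s/@.*//
DEFAULT
  </value>
</property>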
07-06-2018
10:39 AM
Just to follow up: it was later determined to be caused by HDFS-11445. The bug is fixed in CDH 5.12.2, CDH 5.13.1, and above.
06-15-2018
07:57 AM
Cloudera does not ship vanilla Apache Hadoop. That said, the Apache Hadoop convenience binary does not ship with Windows native libraries, only Linux ones.
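On a Linux host you can verify which native libraries your build actually loads with:
$ hadoop checknative -a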
12-29-2017
09:49 AM
Sorry for the late reply; glad you tried it and that it turned out to be the right solution. Cheers!