Member since: 07-31-2013
Posts: 1924
Kudos Received: 462
Solutions: 311
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2128 | 07-09-2019 12:53 AM |
| | 12446 | 06-23-2019 08:37 PM |
| | 9560 | 06-18-2019 11:28 PM |
| | 10523 | 05-23-2019 08:46 PM |
| | 4895 | 05-20-2019 01:14 AM |
06-20-2018
12:30 AM
We are using CDH 5.14.0, and I found that our components (HDFS, YARN, HBase) would restart because of the same issue. The exception looks like this:

```
java.io.IOException: Cannot run program "stat": error=11, Resource temporarily unavailable
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:551)
        at org.apache.hadoop.util.Shell.run(Shell.java:507)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:789)
        at org.apache.hadoop.fs.HardLink.getLinkCount(HardLink.java:218)
        at org.apache.hadoop.hdfs.server.datanode.ReplicaInfo.breakHardLinksIfNeeded(ReplicaInfo.java:265)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1177)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1148)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:210)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:675)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: error=11, Resource temporarily unavailable
        at java.lang.UNIXProcess.forkAndExec(Native Method)
        at java.lang.UNIXProcess.<init>(UNIXProcess.java:247)
        at java.lang.ProcessImpl.start(ProcessImpl.java:134)
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
        ... 13 more

2018-06-20 02:05:54,797 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. Will retry in 30 seconds.
java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:717)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
        at java.lang.Thread.run(Thread.java:748)
```

Also, I noted that Cloudera Manager sets the ulimits for us. Here is our config:

```
if [ $(id -u) -eq 0 ]; then
  # Max number of open files
  ulimit -n 32768
  # Max number of child processes and threads
  ulimit -u 65536
  # Max locked memory
  ulimit -l unlimited
fi
```

P.S.: our machines have 72 cores and 250 GB of RAM. Could you help me understand what causes the "unable to create new native thread" failure?
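As a hedged diagnostic sketch (not part of the original post): an `OutOfMemoryError: unable to create new native thread` means the JVM could not start a new OS thread, which usually points at a per-user process/thread cap or a kernel-wide limit rather than heap exhaustion. The snippet below compares the DataNode's effective limits against its live thread count; the `pgrep` pattern and paths are assumptions about a typical Linux host, not values from this thread.

```bash
#!/usr/bin/env bash
# Hedged diagnostic sketch: compare the DataNode's effective limits with its
# current thread usage. The pgrep pattern is an assumption; adjust it to match
# how the DataNode JVM appears in `ps` on your hosts.

DN_PID=$(pgrep -f 'org.apache.hadoop.hdfs.server.datanode.DataNode' | head -n1)

if [ -z "$DN_PID" ]; then
  echo "DataNode process not found" >&2
  exit 1
fi

# Effective limits of the running process (these can differ from the shell's ulimit)
grep -E 'Max (processes|open files)' "/proc/${DN_PID}/limits"

# Threads currently used by the DataNode JVM
echo "DataNode threads: $(ls /proc/${DN_PID}/task | wc -l)"

# Threads running across the whole machine, and the kernel ceilings
echo "System-wide threads: $(ps -eLf --no-headers | wc -l)"
sysctl kernel.pid_max kernel.threads-max
```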
06-15-2018
07:57 AM
Cloudera does not ship Apache Hadoop. That said, the Apache Hadoop convenience binary does not ship with Windows native libraries, only Linux ones.
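As a hedged illustration (not part of the original reply), the bundled `hadoop checknative` command reports which native libraries the installed build can actually load, which makes the Linux-only packaging easy to confirm:

```bash
# Hedged sketch: list which native libraries this Hadoop build can load.
# On the Apache convenience binary running on Linux, libhadoop is typically
# reported as loadable; on Windows the equivalent winutils/hadoop.dll bits
# would have to be built separately (an assumption about a typical setup,
# not a claim from the original post).
hadoop checknative -a
```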
06-09-2018
09:45 PM
It is a recommendation based on the fact that active and standby are merely states of the NameNode, not different daemons. The NameNode doesn't check its own hardware against that of the other NameNodes, if that's what you are worried about.
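As a hedged illustration of the point above (not from the original reply): because active and standby are runtime states of the same daemon, they can be inspected and swapped with the stock HA admin tool. The NameNode IDs `nn1` and `nn2` below are placeholders; use the IDs defined in your own `dfs.ha.namenodes.*` configuration.

```bash
# Check the current state of each NameNode (service IDs are placeholders).
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# Swap the roles: no different daemon is started, the same processes simply
# transition between the active and standby states.
hdfs haadmin -failover nn1 nn2
```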
06-09-2018
07:33 AM
2 Kudos
The username "dr.who" is the default identity of anyone connecting to HDFS or YARN via the web server (REST) APIs in an unsecured, non-kerberos cluster where a connecting user's identity cannot be authentically determined. If your cluster is exposed to the internet, and/or you are unable to recognize any of these jobs, I'd recommend shutting down the service immediately and investigating a possible external attack. I'd also strongly recommend following: https://blog.cloudera.com/blog/2017/01/how-to-secure-internet-exposed-apache-hadoop/ and securing your cluster in such a deployment.
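As a hedged sketch (not part of the original answer): the "dr.who" identity comes from the static web user that Hadoop falls back to when HTTP callers are not authenticated, and the commands below are one way to confirm that setting and to spot jobs submitted under it. The grep filter is only a quick heuristic.

```bash
# The static web/REST identity used when callers are not authenticated
# (defaults to dr.who on non-Kerberos clusters).
hdfs getconf -confKey hadoop.http.staticuser.user

# Look for YARN applications submitted as dr.who; on an internet-exposed,
# unsecured cluster, unexpected entries here are a strong hint of the
# external abuse described in the linked blog post.
yarn application -list -appStates ALL | grep 'dr.who'
```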
06-03-2018
09:51 PM
Thank you. While saving, I did not give hdfs://cloudera.....; instead I gave count.reduceByKey(_+_).saveAsTextFile("/home/cloudera/Documents/re.txt"), and it saved the file in a re.txt directory in HDFS.
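A hedged note on why that happened (not part of the original post): a path without a scheme is resolved against the default filesystem, which on a cluster is HDFS, so the output directory was created in HDFS rather than on the local disk. The paths below are the ones quoted in the post, reused only as placeholders.

```bash
# The scheme-less path was resolved against the default filesystem (HDFS),
# so the "re.txt" output directory should be visible there:
hdfs dfs -ls /home/cloudera/Documents/re.txt

# To force a local write instead, a fully qualified URI can be passed to
# saveAsTextFile, e.g. "file:///home/cloudera/Documents/re.txt"; likewise an
# explicit hdfs://<namenode>/<path> URI pins the output to HDFS.
```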
05-24-2018
06:07 PM
@Harsh J: Could you please respond? It's a production cluster, and this error disrupts our workflows whenever we run into it.
05-23-2018
01:26 AM
Thanks, I indeed ended up using Maven and the plugins.d folder on Flume. I forgot to update the topic; thank you guys for the help!
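As a hedged sketch of the plugins.d convention mentioned above (not from the original post): Flume picks up custom components from per-plugin subdirectories containing `lib`, `libext`, and optionally `native`. The plugin name and Flume home path below are placeholders.

```bash
# Create the per-plugin directory layout (names and paths are placeholders).
mkdir -p /usr/lib/flume-ng/plugins.d/my-custom-source/{lib,libext}

# lib/    -> the plugin's own jar(s), e.g. the Maven-built artifact
# libext/ -> third-party dependency jars the plugin needs
# native/ -> optional native libraries
cp target/my-custom-source-1.0.jar \
   /usr/lib/flume-ng/plugins.d/my-custom-source/lib/
```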
05-18-2018
07:18 AM
Hi @Harsh J, it's only on one NodeManager; it happened suddenly, without any upgrade, on CDH 5.12.0, and even after upgrading to 5.14.2 the issue persisted. Anyway, your solution has resolved the issue. Thank you.