Member since: 07-31-2013
Posts: 1924
Kudos Received: 462
Solutions: 311

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1968 | 07-09-2019 12:53 AM |
| | 11853 | 06-23-2019 08:37 PM |
| | 9135 | 06-18-2019 11:28 PM |
| | 10110 | 05-23-2019 08:46 PM |
| | 4569 | 05-20-2019 01:14 AM |
02-26-2017
08:07 AM
I managed to solve it by adding a mapred-site.xml on the Oozie server under /etc/hadoop/conf and overriding the submit replication setting there.
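For reference, a minimal sketch of what such an override could look like, assuming the MR2 property name mapreduce.client.submit.file.replication (the exact property name and value depend on your setup):

```xml
<!-- /etc/hadoop/conf/mapred-site.xml on the Oozie server (illustrative values) -->
<property>
  <!-- Replication factor for files staged at job submission time;
       MR1 clusters use mapred.submit.replication instead -->
  <name>mapreduce.client.submit.file.replication</name>
  <value>3</value>
</property>
```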
12-15-2016
06:07 AM
If you use 5.9+ or can upgrade to it, add the disk to the DataNode's configuration, and use this feature: http://blog.cloudera.com/blog/2016/10/how-to-use-the-new-hdfs-intra-datanode-disk-balancer-in-apache-hadoop/
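A rough sketch of that workflow, assuming the feature is enabled on the DataNodes (the hostname and plan file path below are placeholders):

```sh
# Requires dfs.disk.balancer.enabled=true in hdfs-site.xml on the DataNodes.
# 1. Generate a plan for the DataNode that received the new disk:
hdfs diskbalancer -plan dn1.example.com

# 2. Execute the generated plan (the plan file path is printed by the -plan step):
hdfs diskbalancer -execute /system/diskbalancer/<timestamp>/dn1.example.com.plan.json

# 3. Check progress:
hdfs diskbalancer -query dn1.example.com
```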
12-08-2016
10:37 PM
1 Kudo
From the stray quote in your exception, it would seem that something in your client program (the com.install4j.runtime.launcher.UnixLauncher invocation script, the com.attivio.commandline.CommandLineRunner, etc.) has incorrectly set the JDK system property "https.protocols" to a value of "TLSv1" with the quotes partially included. If an override of "https.protocols" is required on your JDK (such as is needed for using TLSv1.2 in JDK 7 environments [1]), then please modify it to pass the value correctly without the quotes wrapped around it.
For example, this is a good way to set it: -Dhttps.protocols=TLSv1.2
But this will not work: -Dhttps.protocols="TLSv1.2"
From your error, if I were to guess, it seems to instead be set to something like -Dhttps.protocols="TLSv1,2", which gives rise to the literal error of java.lang.IllegalArgumentException: "TLSv1
[1] https://blogs.oracle.com/java-platform-group/entry/diagnosing_tls_ssl_and_https
X-Refs: http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/sun/net/www/protocol/https/HttpsClient.java#l161 and http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/sun/security/ssl/ProtocolVersion.java#l164
11-18-2016
07:18 AM
Harsh, in this thread you stated: "but do look into if your users have begun creating too many tiny files as it may hamper their job performance with overheads of too many blocks (and thereby, too many mappers)." Too many tiny files is in the eye of the beholder if those files are what get you paid. I'm also seeing a block issue on two of our nodes, but a rebalance to 10% has no effect. I've rebalanced to 8% and it improves, but I suspect we're running into a small-files issue.
11-18-2016
03:00 AM
We are about to perform a planned upgrade of the cluster to 5.9, so the problem will be solved. It is great to know that the issue will become irrelevant. Thanks, Harsh, a fantastic clarification!
11-18-2016
02:37 AM
No. See doc for usage: http://archive.cloudera.com/cdh5/cdh/5/hbase/book.html#_export
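For convenience, the basic invocation from that doc section looks roughly like this (the table name, output directory, and optional arguments are placeholders):

```sh
# Export a table's contents to sequence files in HDFS; the optional arguments
# limit the number of versions and restrict rows to a start/end timestamp range.
hbase org.apache.hadoop.hbase.mapreduce.Export <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]
```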
11-17-2016
09:58 PM
Thanks. I'll try your advice first, and create a new topic if it still can't be solved.
11-14-2016
03:34 AM
Currently it should go into both of those valves with the same value, until HDFS-10289 gets done in the future.
11-11-2016
04:58 PM
Thank you!