Member since: 01-24-2014
Posts: 101
Kudos Received: 32
Solutions: 18
My Accepted Solutions
Title | Views | Posted
---|---|---
| 28199 | 02-06-2017 11:23 AM
| 6991 | 11-30-2016 12:56 AM
| 7893 | 11-29-2016 11:57 PM
| 3728 | 08-16-2016 11:45 AM
| 3736 | 05-10-2016 01:55 PM
01-17-2022
01:04 AM
@Mayarn, as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
02-10-2019
11:39 PM
Hello, thanks for your response. The server where I am checking HBCK has the HBase Master role, and I can see the same error from the gateway node as well. Any other suggestions would be appreciated. Thanks.
04-04-2017
01:50 AM
How did you solve it? Which things does one have to check?
03-02-2017
07:23 AM
2 Kudos
A bit late to reply, but if the cluster is secure, try pointing the Spark driver and executor classpaths at the HBase configuration explicitly using 'spark.executor.extraClassPath' and 'spark.driver.extraClassPath'. Also make sure that the host from which you are running the Spark command has the HBase gateway role added. Example:

$ pyspark --jars /opt/cloudera/parcels/CDH/jars/spark-examples-1.6.0-cdh5.7.3-hadoop2.6.0-cdh5.7.3.jar,/opt/cloudera/parcels/CDH/jars/hbase-examples-1.2.0-cdh5.7.3.jar \
    --conf "spark.executor.extraClassPath=/etc/hbase/conf/" \
    --conf "spark.driver.extraClassPath=/etc/hbase/conf/"
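The same two settings should also apply when submitting a job rather than using the interactive shell; a minimal sketch, where my_hbase_job.py is just a placeholder script name and not from the original example:

$ spark-submit \
    --conf "spark.executor.extraClassPath=/etc/hbase/conf/" \
    --conf "spark.driver.extraClassPath=/etc/hbase/conf/" \
    my_hbase_job.py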
12-08-2016
10:37 PM
1 Kudo
From the stray quote in your exception, it would seem that something in your client program (the com.install4j.runtime.launcher.UnixLauncher class invocation script, the com.attivio.commandline.CommandLineRunner, etc.) has incorrectly set the JDK system property "https.protocols" to a value of "TLSv1" with the quotes partially included. If an override of "https.protocols" is required on your JDK (such as is needed for using TLSv1.2 in JDK7 environments [1]), then please modify it to pass the value correctly, without the quotes wrapped around it. For example, this is a good way to set it:

-Dhttps.protocols=TLSv1.2

But this won't work:

-Dhttps.protocols="TLSv1.2"

From your error, if I were to guess, it seems to instead be set to something like -Dhttps.protocols="TLSv1,2", which gives rise to the literal error of java.lang.IllegalArgumentException: "TLSv1

[1] https://blogs.oracle.com/java-platform-group/entry/diagnosing_tls_ssl_and_https

X-Refs:
http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/sun/net/www/protocol/https/HttpsClient.java#l161
http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/sun/security/ssl/ProtocolVersion.java#l164
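As an aside, one way to confirm what the running JVM actually received is to dump its system properties and inspect the value, e.g. (where <pid> is the Java process id of your client program):

$ jinfo -sysprops <pid> | grep https.protocols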
12-05-2016
10:49 AM
Glad to help! Max compaction size config (hbase.hstore.compaction.max.size): *edit* it looks like instead of the default you are setting that to 512MB. Yes, that certainly is at least part of the issue; it effectively means that compaction will ignore any storefile larger than 512MB. I'm unsure what that will do to the ability to split when necessary; it's not something we set on our clusters.

Leaving this here for others: if you are relying on HBase to do the weekly major compaction (hbase.hregion.majorcompaction), there is a difference in behavior between an externally initiated compaction and an internal system one. The system-initiated compaction (hbase.hregion.majorcompaction) seems to trigger only a minor compaction when the region is over the max number of storefiles a minor compaction will consider (hbase.hstore.compaction.max). I am guessing this is due to a desire not to impact the system with a very long-running major compaction. In your case, you will be constantly triggering only a minor compaction of that many stores every time HBase considers that region for compaction (every hbase.server.compactchecker.interval.multiplier multiplied by hbase.server.thread.wakefrequency). This is especially true if you generate more hfiles than hbase.hstore.compaction.max in the time it takes to do (hbase.server.compactchecker.interval.multiplier multiplied by hbase.server.thread.wakefrequency + compact time).

An externally initiated compaction, either through the HBase shell or through the API, sets the compaction priority to high and does not consider hbase.hstore.compaction.max.
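For reference, a quick sketch of triggering an externally initiated major compaction from the HBase shell ('my_table' and 'cf1' are placeholder names):

$ hbase shell
hbase> major_compact 'my_table'           # whole table
hbase> major_compact 'my_table', 'cf1'    # a single column family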
11-30-2016
12:56 AM
There are a few solutions out there for a Windows Hadoop client. I haven't tried any of them, so I will defer to the community at large for specifics. One elegant/interesting approach that I will point out is the possibility of running a Flume agent on Windows.

Tried and true method: off the top of my head, you can use the file transfer method of your choice to get the files from the Windows machine to a Linux machine (SFTP, Samba, etc.), and then use your favorite HDFS loading command/process to get the files into HDFS (hdfs dfs -copyFromLocal, Flume, etc.).
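A minimal sketch of that two-hop path, assuming an SSH/SFTP server is available on the Windows box; the host names and paths (windows-host, /data/exports, /tmp/staging, /data/incoming) are placeholders:

$ # 1) pull the files from the Windows machine onto a Linux edge node
$ scp 'user@windows-host:/data/exports/*.csv' /tmp/staging/
$ # 2) load them into HDFS from the Linux node
$ hdfs dfs -mkdir -p /data/incoming
$ hdfs dfs -copyFromLocal /tmp/staging/*.csv /data/incoming/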
11-29-2016
11:57 PM
You might try adding the following to core-site.xml; it seems like the error is talking about the root user's groups:

hadoop.proxyuser.root.groups=*
hadoop.proxyuser.root.hosts=*

If you are running Cloudera Manager, you need to add these in Cloudera Manager itself; the config is not in the traditional location, but instead in a separate folder managed by Cloudera for each individual service. Confusingly, the config that is in the traditional location is for the "gateway" role.
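For reference, a sketch of the equivalent XML form of those two settings as they would appear in core-site.xml (adjust the user name if you proxy as someone other than root):

<!-- allow the root user to impersonate members of any group, from any host -->
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>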