Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2725 | 04-27-2020 03:48 AM |
| | 5285 | 04-26-2020 06:18 PM |
| | 4450 | 04-26-2020 06:05 PM |
| | 3576 | 04-13-2020 08:53 PM |
| | 5380 | 03-31-2020 02:10 AM |
08-16-2017
01:26 PM
@Darko Milovanovic Good to know that the link was useful to you and resolved your issue. It would be great if you could mark this thread as "Accepted" (Answered); that way it will be very easy for other HCC users to quickly browse the answered threads.
08-16-2017
09:52 AM
@Kunal Gaikwad Are you sure that your DB is running on an SSL port and that you really want your connection to the DB to be secure? If not, then you can remove the "?ssl=true" option from the connection URL. On the DB side, we also need to verify whether it allows insecure (non-SSL) connections or not.
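For example (a sketch only; the host, port, and database name below are placeholders, not values from this thread), a PostgreSQL JDBC connection URL with and without the SSL option looks like this:

```
# secure: requires the server to accept SSL and the client truststore to be set up
jdbc:postgresql://dbhost:5432/mydb?ssl=true

# plain: only works if the server permits non-SSL connections
jdbc:postgresql://dbhost:5432/mydb
```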
08-16-2017
09:37 AM
@Kunal Gaikwad Looks like the issue is that you are using Postgres with the SSL option, so the NiFi truststore needs the Postgres certificates imported into it. That seems to be causing the following error: (SSL error: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target): {} - I think you should import the Postgres DB certificate into the NiFi truststore. https://community.hortonworks.com/articles/886/securing-nifi-step-by-step.html (this is a slightly old article, though) nifi.security.truststore=<path_to_truststore_file>. Or else you will need to import the PostgreSQL certificates into "$JAVA_HOME/jre/lib/security/cacerts", where JAVA_HOME is the Java that NiFi is using.
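As a command sketch (the alias, certificate file name, truststore path, and password below are placeholders you must substitute), the import for either target keystore would look roughly like:

```shell
# import the Postgres server certificate into the NiFi truststore
keytool -importcert -alias postgres -file postgres.crt \
  -keystore <path_to_truststore_file> -storepass <truststore_password>

# or into the JVM-wide cacerts of the Java that NiFi uses
keytool -importcert -alias postgres -file postgres.crt \
  -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit
```

Restart NiFi afterwards so the JVM picks up the updated truststore.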
08-16-2017
08:23 AM
@Gulshan Agivetova Good to know that setting -XX:+UseG1GC in hive.tez.java.opts worked. - It would be really great if you could mark this thread as "Accepted" (Answered), so that HCC users can quickly browse for the solutions.
08-16-2017
05:59 AM
@tulasi kumar guthurti I suggest using the same JAR (spark-assembly-1.6xxxxx.jar) version on the driver side that you are using on the server side, to see if it goes well.
08-16-2017
05:40 AM
@tulasi kumar guthurti This usually happens if you have a conflicting version of a JAR that contains the class "org.apache.spark.sql.hive.execution.InsertIntoHiveTable" - so can you please check the classpath of spark-submit in client mode to see which jar it is taking that class from? Spark 1.6 and 2.0 jars must not be mixed. For example, "spark-assembly-1.6*xxx.jar" (Spark 1.6) and "spark-hive_2.xxxx.jar" (Spark 2) both contain this class, so please make sure your classpath does not include both JARs. You can check the "--jars" parameter values.
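To find which of the jars on the classpath carry the conflicting class, a small script like the following can help (a sketch, not from the original thread; jars are plain zip archives, so we can just look for the class file entry):

```python
import zipfile

def jars_containing_class(jar_paths, class_name):
    """Return the jars from jar_paths whose entries include the given class.

    class_name is the usual dotted form, e.g.
    "org.apache.spark.sql.hive.execution.InsertIntoHiveTable".
    """
    # a class lives in the archive as path/To/Class.class
    entry = class_name.replace(".", "/") + ".class"
    hits = []
    for jar in jar_paths:
        with zipfile.ZipFile(jar) as zf:
            if entry in zf.namelist():
                hits.append(jar)
    return hits
```

Run it over every jar that spark-submit puts on the classpath (including the "--jars" values); more than one hit for InsertIntoHiveTable means you are mixing Spark versions.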
08-16-2017
05:16 AM
2 Kudos
@Gulshan Agivetova Please check and share the values of the parameters "tez.am.launch.cmd-opts" and "hive.tez.java.opts"; they should not conflict, especially the GC options. Try this: first remove the "-XX:+UseParallelGC" option from both the "tez.am.launch.cmd-opts" and "hive.tez.java.opts" properties, and then restart. This is because "-XX:+UseG1GC" and "-XX:+UseParallelGC" should never be used together.
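As a sketch of the end state (the heap size and extra flags below are illustrative, not values from this thread), both properties should name a single, matching collector:

```
tez.am.launch.cmd-opts=-XX:+PrintGCDetails -XX:+UseG1GC
hive.tez.java.opts=-Xmx4096m -XX:+UseG1GC
```

The key point is that neither property still carries -XX:+UseParallelGC once G1 is in use.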
08-15-2017
04:52 PM
1 Kudo
@Darko Milovanovic Your issue looks similar to: https://community.hortonworks.com/questions/120861/ambari-agent-ssl-certificate-verify-failed-certifi.html So please check whether you are using Python version "python-2.7.5" or higher. If yes, you can try to downgrade the Python version to lower than python-2.7.5, as it causes this issue: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579). (OR) Else you will need to follow the steps mentioned in the following doc to fix the "certificate verify failed (_ssl.c" issue while using RHEL7: Controlling and troubleshooting certificate verification https://access.redhat.com/articles/2039753#controlling-certificate-verification-7.
08-12-2017
07:18 PM
@Arsalan Siddiqi Please check that your "hbase-site.xml" file is present in the Atlas classpath (conf) dir and that the following properties inside your hbase-site.xml are correctly set: hbase.zookeeper.quorum=zoo1,zoo2,zoo3 and zookeeper.znode.parent=/what/ever. If "zookeeper.znode.parent" is not correctly set, it can cause a "NullPointerException". The following link describes the details: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.6/bk_command-line-installation/content/config-atlas-to-use-hbase.html
08-12-2017
07:06 PM
@Arsalan Siddiqi The root cause of the NullPointerException on the Zookeeper:

Caused by: java.lang.NullPointerException
  at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.getMetaReplicaNodes(ZooKeeperWatcher.java:395)
  at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:553)
  at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:61)

It usually happens when the Zookeeper parent znode is not defined properly (I mean empty). Please double-check your Zookeeper URL, and check whether your "hbase-site.xml" is present in the classpath and has the correct ZK URL. The ZK URL should include the children of the hbase znode.