Member since: 05-08-2018
Posts: 15
Kudos Received: 0
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3126 | 08-01-2022 03:24 PM |
08-12-2022
04:49 AM
To solve "unable to find valid certification path to requested target" I just import the certificate to java and restart the Zeppeling Server. ### LINUX LIST CERT cd /usr/lib/jvm/java-11-openjdk-11.0.15.0.10-2.el8_6.x86_64/bin ./keytool -list -keystore /usr/lib/jvm/java-11-openjdk-11.0.15.0.10-2.el8_6.x86_64/lib/security/cacerts ### LINUX IMPORT CERT ./keytool --import --alias keystore_cloudera --file /var/lib/cloudera-scm-agent/agent-cert/cm-auto-global_cacerts.pem -keystore /usr/lib/jvm/java-11-openjdk-11.0.15.0.10-2.el8_6.x86_64/lib/security/cacerts
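Before importing, it can help to confirm which certificate chain the endpoint actually presents. A minimal diagnostic sketch in Python; the port is an illustrative assumption (HiveServer2 with TLS on its binary port), not a value from this thread:

```python
# Print the PEM certificate the server presents during the TLS handshake,
# so you can check which CA has to be present in the JVM truststore.
# Port 10000 is a hypothetical placeholder.
import ssl

pem = ssl.get_server_certificate(("SERVER01.LOCAL.NET", 10000))
print(pem)
```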
08-09-2022
05:36 AM
My current interpreter configuration:

%jdbc2 properties:

| name | value |
|---|---|
| common.max_count | 1000 |
| default.completer.schemaFilters | |
| default.completer.ttlInSeconds | 120 |
| default.driver | org.postgresql.Driver |
| default.password | |
| default.precode | |
| default.splitQueries | false |
| default.statementPrecode | |
| default.url | jdbc:postgresql://localhost:5432/ |
| default.user | gpadmin |
| hive.driver | org.apache.hive.jdbc.HiveDriver |
| hive.url | jdbc:hive2://SERVER01.LOCAL.NET:2181,SERVER02.LOCAL.NET:2181,SERVER03.LOCAL.NET:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2 |
| hive.user | hive |
| zeppelin.jdbc.auth.type | |
| zeppelin.jdbc.concurrent.max_connection | 10 |
| zeppelin.jdbc.concurrent.use | true |
| zeppelin.jdbc.interpolation | false |
| zeppelin.jdbc.keytab.location | /var/tmp/zeppelin.keytab |
| zeppelin.jdbc.maxConnLifetime | -1 |
| zeppelin.jdbc.maxRows | 1000 |
| zeppelin.jdbc.principal | zeppelin/SERVER01.LOCAL.NET@LOCAL.NET |
08-09-2022
05:23 AM
Hello @jagadeesan,

We did all the configurations, but now we are getting the error below. Should I create some entry for the trustStore?

%jdbc2(hive)
select current_user()

Error:
Also, could not send response: org.apache.thrift.transport.TTransportException: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
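A possible alternative I have not verified on this cluster: the Hive JDBC driver can also be pointed at a specific truststore per connection, by appending `;ssl=true;sslTrustStore=/path/to/truststore.jks;trustStorePassword=<password>` to `hive.url`, instead of importing the CA into the JVM-wide cacerts.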
08-03-2022
05:11 AM
Hello @jagadeesan,

Thanks for your reply. On my fresh new CDP 7.1.7 cluster I don't have the %jdbc interpreter; only the %livy, %angular and %md interpreters are installed. Can I create a new one? Could you share the correct way to create the %jdbc interpreter on CDP? I would like to load a Spark dataframe into Hive. Will that be possible with the %jdbc interpreter? Thanks!
08-01-2022
04:18 PM
Any update?
08-01-2022
03:24 PM
I fixed this issue by copying:

cp /etc/hive/conf/hive-site.xml /etc/spark/conf
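With hive-site.xml on Spark's classpath, Spark connects to the cluster's Hive Metastore instead of creating an empty embedded Derby database. A minimal sketch of the same flow after the fix (hedged: the column names are hypothetical, since the script in the original question never defined `columns`):

```python
# Build a SparkSession with Hive support; with /etc/spark/conf containing
# hive-site.xml, this session talks to the cluster's Hive Metastore.
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("evento") \
    .enableHiveSupport() \
    .getOrCreate()

columns = ["language", "users_count"]  # hypothetical column names
data = [("Java", "20000"), ("Python", "100000"), ("Scala", "3000")]
df = spark.createDataFrame(data, columns)

# Register a temp view and materialize it as a Hive-managed parquet table.
df.createOrReplaceTempView("evento_temp")
spark.sql("CREATE TABLE IF NOT EXISTS default.evento STORED AS parquet "
          "AS SELECT * FROM evento_temp")
spark.sql("SELECT * FROM default.evento").show()
```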
08-01-2022
03:28 AM
Hello,

I would like to create Hive tables using Zeppelin, and I found the document below, which uses HWC: https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/integrating-hive-and-bi/topics/hive-hwc-reading.html

I'm not sure what "livy configuration file" means. Is "%livy2" a new interpreter? Can I create a new interpreter with only the indicated configurations? Thanks!
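For reference, a minimal sketch of what I understand the HWC flow to look like from a %livy (pyspark) paragraph. This assumes HWC is on the classpath and the `spark.datasource.hive.warehouse.*` / `spark.sql.hive.hiveserver2.jdbc.url` properties are configured as in the linked document; the table name `evento_hwc` is hypothetical:

```python
# Open a Hive Warehouse Connector session on top of the existing SparkSession.
from pyspark_llap.sql.session import HiveWarehouseSession

hive = HiveWarehouseSession.session(spark).build()
hive.showDatabases().show()

# Write a Spark dataframe into a Hive table through the HWC data source.
df = spark.createDataFrame([("Java", "20000"), ("Python", "100000")],
                           ["language", "users_count"])
df.write.format("com.hortonworks.spark.sql.hive.llap.HiveWarehouseConnector") \
    .mode("append") \
    .option("table", "evento_hwc") \
    .save()
```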
Labels:
- Apache Hive
- Apache Spark
- Apache Zeppelin
07-30-2022
10:21 AM
Hi @jagadeesan,

Thanks for your reply. How can I manually import/create a truststore in Zeppelin? When I rearranged the services, installing the Zeppelin Server and Livy Server on the same host, the certification error stopped appearing. I would like to configure the services on separate hosts. Should I relocate Zeppelin back to the other host and check whether the error happens again? Thanks!
07-30-2022
09:51 AM
Hello,

We would like to create a Hive table in the cluster using a pyspark dataframe. We have the script below, which has run well several times in the past on the same cluster. After some configuration changes in the cluster (we rearranged some services, etc.), the same script is showing the errors below. We were unable to identify which of the changes made to the cluster triggers this error in the script.

The simple script is:

```python
# pyspark --master=yarn
columns = ["language", "users_count"]  # hypothetical; the original post did not define this
data = [("Java", "20000"), ("Python", "100000"), ("Scala", "3000")]
rdd = spark.sparkContext.parallelize(data)
dfFromRDD1 = rdd.toDF(columns)
dfFromRDD1.printSchema()
dfFromRDD1.show()

from pyspark.sql import HiveContext
sqlContext = HiveContext(sc)
dfFromRDD1.registerTempTable("evento_temp")
sqlContext.sql("use default").show()
```

ERROR:

```
Hive Session ID = bd9c459e-1ec8-483e-9543-c1527b33feec
22/07/30 13:55:45 WARN metastore.PersistenceManagerProvider: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
22/07/30 13:55:45 WARN util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
22/07/30 13:55:46 WARN util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
22/07/30 13:55:46 WARN metastore.MetaStoreDirectSql: Self-test query [select "DB_ID" from "DBS"] failed; direct SQL is disabled
javax.jdo.JDODataStoreException: Error executing SQL query "select "DB_ID" from "DBS"".
    at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543)
    .....
    at java.base/java.lang.Thread.run(Thread.java:829)
NestedThrowablesStackTrace:
java.sql.SQLSyntaxErrorException: Table/View 'DBS' does not exist.
    at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
```

```python
sqlContext.sql("CREATE TABLE IF NOT EXISTS evento STORED AS parquet AS SELECT * FROM evento_temp").show()
```

ERROR:

```
22/07/29 17:07:08 WARN Datastore.Schema: The MetaData for "org.apache.hadoop.hive.metastore.model.MStorageDescriptor" is specified with a foreign-key at class level yet no "table" is defined. All foreign-keys at this level must have a table that the FK goes to.
22/07/29 17:07:08 WARN Datastore.Schema: The MetaData for "org.apache.hadoop.hive.metastore.model.MStorageDescriptor" is specified with a foreign-key at class level yet no "table" is defined. All foreign-keys at this level must have a table that the FK goes to.
22/07/29 17:07:08 WARN metastore.PersistenceManagerProvider: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
22/07/29 17:07:08 WARN metastore.PersistenceManagerProvider: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
22/07/29 17:07:08 WARN metastore.HiveMetaStore: Location: file:/home/usr_cmteste3/spark-warehouse/evento specified for non-external table:evento
22/07/29 17:07:09 WARN scheduler.TaskSetManager: Lost task 1.0 in stage 3.0 (TID 4, <<HOST>>, executor 2): org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: Mkdirs failed to create file:/home/usr_cmteste3/spark-warehouse/evento/.hive-staging_hive_2022-07-29_17-07-08_935_7404207232723330868-1/-ext-10000/_temporary/0/_temporary/attempt_202207291707093395760670811853018_0003_m_000001_4 (exists=false, cwd=file:/data05/yarn/nm/usercache/usr_cmteste3/appcache/application_1659116901602_0017/container_e67_1659116901602_0017_01_000003)
    at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:282)
```
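The Derby messages suggest the session is using an embedded metastore rather than the cluster one. A quick diagnostic sketch for narrowing that down from the same pyspark session:

```python
# If catalogImplementation is "in-memory"/unset or the warehouse dir is a
# local file: path, Spark is not talking to the cluster's Hive Metastore,
# which would explain the Derby "Table/View 'DBS' does not exist" error.
print(spark.conf.get("spark.sql.warehouse.dir", "unset"))
print(spark.sparkContext.getConf().get("spark.sql.catalogImplementation", "unset"))
```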
07-29-2022
11:43 AM
I fixed this issue by configuring Zeppelin and Livy on the same host. Is there any other way to fix it?