Member since: 06-13-2017
Posts: 45
Kudos Received: 2
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1992 | 01-24-2019 06:22 AM |
| | 3188 | 08-06-2018 12:05 PM |
08-30-2018
07:10 AM
Hi all,
I'd like to connect to Hive with Kerberos enabled via JDBC from Windows. MIT Kerberos is installed and a ticket is retrieved.
The smoke-test Java code is:
import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class HiveJDBC2 {
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";

    public static void main(String[] args) throws SQLException, IOException, ClassNotFoundException {
        // Make sure the Hive JDBC driver is on the classpath.
        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        }
        // Point JAAS and Kerberos at the local config files and turn on JGSS debugging.
        System.setProperty("java.security.auth.login.config", "gss-jaas.conf");
        System.setProperty("sun.security.jgss.debug", "true");
        System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
        System.setProperty("java.security.krb5.conf", "krb5.conf");
        // Connect to HiveServer2, passing the service principal in the JDBC URL.
        Connection con = DriverManager.getConnection("jdbc:hive2://10.2.29.102:10000/default;principal=hive/lhq0363.abcd.com@ABCD.COM");
        System.out.println("Connected");
        con.close();
    }
}
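For reference, here is a minimal sketch of the gss-jaas.conf referenced above, reconstructed from the JAAS debug output below (useTicketCache true, no keytab); the entry name com.sun.security.jgss.initiate is the default JGSS looks up when useSubjectCredsOnly=false, and the principal simply mirrors the one shown in the debug line, so treat both as assumptions to adjust. It is written as a shell heredoc for brevity:
# write the JAAS config next to the test program (sketch; adjust names to your setup)
cat > gss-jaas.conf <<'EOF'
com.sun.security.jgss.initiate {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true
  useKeyTab=false
  doNotPrompt=false
  principal="hive/lhq0363.abcd.com@ABCD.COM";
};
EOF
klist   # a valid TGT should appear in the MIT ticket cache before running the test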
The error log is:
Search Subject for Kerberos V5 INIT cred (<<DEF>>, sun.security.jgss.krb5.Krb5InitCredential)
Debug is true storeKey false useTicketCache true useKeyTab false doNotPrompt false ticketCache is null isInitiator true KeyTab is null refreshKrb5Config is false principal is hive/lhq0363.abcd.com@ABCD.COM tryFirstPass is false useFirstPass is false storePass is false clearPass is false
Acquire TGT from Cache
Principal is hive/lhq0363.abcd.com@ABCD.COM
Commit Succeeded
Exception in thread "main" java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://10.2.29.102:10000/default;principal=hive/lhq0363.abcd.com@ABCD.COM: GSS initiate failed
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:218)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:156)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:270)
at com.test.HiveJDBC2.main(HiveJDBC2.java:26)
Caused by: org.apache.thrift.transport.TTransportException: GSS initiate failed
at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:194)
... 5 more
I have searched quite a few threads but had no luck. Would someone give me any ideas?
Labels:
- Apache Hadoop
- Apache Hive
- Kerberos
- Security
08-07-2018
11:17 AM
Hi there. In the last step of upgrading HDP 2.6.3 to HDP 3.0 on my Ubuntu servers, it asks me to update the passwords of the Ranger Usersync, Tagsync and Keyadmin users. However, in the Ranger service Config tab in Ambari, the password input boxes are all disabled. Am I doing anything wrong? How can I resolve it?
08-06-2018
12:05 PM
It's caused by a missing file in the dists folder of HDP-UTILS-1.1.0.22-ubuntu16.tar.gz, and it was fixed as follows after getting advice from an HDP consultant (a shell sketch follows this list):
1. Extract HDP-UTILS-1.1.0.22-ubuntu16.tar.gz.
2. Create an HDP-UTILS folder under the dists folder.
3. Copy all files from the HDP repo /hdp/HDP/ubuntu16/2.6.5.0-292/dists/HDP/ into the new HDP-UTILS folder.
4. Update the HDP-UTILS repo base URL according to https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.0.0/bk_ambari-installation/content/setting_up_a_local_repository_with_no_internet_access.html
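A rough shell sketch of those steps; the document root /var/www/html/hdp2.6.5, the tarball location, and the extracted layout are assumptions based on the base URLs in my 07-26-2018 question, so adjust them to your setup:
cd /var/www/html/hdp2.6.5                                  # assumed web server document root
tar xzf /tmp/HDP-UTILS-1.1.0.22-ubuntu16.tar.gz            # step 1: extract the tarball (assumed path)
mkdir HDP-UTILS/ubuntu16/1.1.0.22/dists/HDP-UTILS          # step 2: create HDP-UTILS under dists
cp -r HDP/ubuntu16/2.6.5.0-292/dists/HDP/* \
      HDP-UTILS/ubuntu16/1.1.0.22/dists/HDP-UTILS/         # step 3: copy the HDP dist metadata
# step 4: update the HDP-UTILS base URL in Ambari per the doc linked above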
07-26-2018
03:30 PM
My Ambari version is 2.6.0.0. The <tag> element is commented out in the VDF file HDP-2.6.5.0-292.xml to fix the "invalid tag" issue while uploading the VDF in the first step. Could that be causing the errors afterwards? (Attachment: hdp-2650-292.xml)
07-26-2018
11:13 AM
26 Jul 2018 19:00:38,853 ERROR [ambari-client-thread-52774] BaseManagementHandler:61 - Caught a system exception while attempting to create a resource: An internal system exception occurred: Stack data, stackName=HDP, stackVersion= 2.6, osType=ubuntu16, repoId= HDP-UTILS-1.1.0.22
org.apache.ambari.server.controller.spi.SystemException: An internal system exception occurred: Stack data, stackName=HDP, stackVersion= 2.6, osType=ubuntu16, repoId= HDP-UTILS-1.1.0.22
at org.apache.ambari.server.controller.internal.AbstractResourceProvider.createResources(AbstractResourceProvider.java:287)
at org.apache.ambari.server.controller.internal.RepositoryResourceProvider.createResources(RepositoryResourceProvider.java:217)
at org.apache.ambari.server.controller.internal.ClusterControllerImpl.createResources(ClusterControllerImpl.java:298)
at org.apache.ambari.server.api.services.persistence.PersistenceManagerImpl.create(PersistenceManagerImpl.java:97)
at org.apache.ambari.server.api.handlers.CreateHandler.persist(CreateHandler.java:37)
at org.apache.ambari.server.api.handlers.BaseManagementHandler.handleRequest(BaseManagementHandler.java:73)
at org.apache.ambari.server.api.services.BaseRequest.process(BaseRequest.java:144)
at org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:126)
at org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:90)
at org.apache.ambari.server.api.services.RepositoryService.createRepository(RepositoryService.java:100)
at sun.reflect.GeneratedMethodAccessor763.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
Caused by: org.apache.ambari.server.StackAccessException: Stack data, stackName=HDP, stackVersion= 2.6, osType=ubuntu16, repoId= HDP-UTILS-1.1.0.22
at org.apache.ambari.server.api.services.AmbariMetaInfo.getRepository(AmbariMetaInfo.java:411)
at org.apache.ambari.server.controller.AmbariManagementControllerImpl.verifyRepository(AmbariManagementControllerImpl.java:4621)
at org.apache.ambari.server.controller.AmbariManagementControllerImpl.verifyRepositories(AmbariManagementControllerImpl.java:4607)
at org.apache.ambari.server.controller.internal.RepositoryResourceProvider$6.invoke(RepositoryResourceProvider.java:221)
at org.apache.ambari.server.controller.internal.RepositoryResourceProvider$6.invoke(RepositoryResourceProvider.java:217)
at org.apache.ambari.server.controller.internal.AbstractResourceProvider.invokeWithRetry(AbstractResourceProvider.java:455)
at org.apache.ambari.server.controller.internal.AbstractResourceProvider.createResources(AbstractResourceProvider.java:278)
... 95 more
26 Jul 2018 11:42:00,631 ERROR [ambari-client-thread-50569] BaseManagementHandler:61 - Caught a system exception while attempting to create a resource: An internal system exception occurred: Stack data, stackName=HDP, stackVersion= 2.6, osType=ubuntu16, repoId= HDP-2.6-GPL
org.apache.ambari.server.controller.spi.SystemException: An internal system exception occurred: Stack data, stackName=HDP, stackVersion= 2.6, osType=ubuntu16, repoId= HDP-2.6-GPL
at org.apache.ambari.server.controller.internal.AbstractResourceProvider.createResources(AbstractResourceProvider.java:287)
at org.apache.ambari.server.controller.internal.RepositoryResourceProvider.createResources(RepositoryResourceProvider.java:217)
at org.apache.ambari.server.controller.internal.ClusterControllerImpl.createResources(ClusterControllerImpl.java:298)
at org.apache.ambari.server.api.services.persistence.PersistenceManagerImpl.create(PersistenceManagerImpl.java:97)
at org.apache.ambari.server.api.handlers.CreateHandler.persist(CreateHandler.java:37)
at org.apache.ambari.server.api.handlers.BaseManagementHandler.handleRequest(BaseManagementHandler.java:73)
at org.apache.ambari.server.api.services.BaseRequest.process(BaseRequest.java:144)
at org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:126)
at org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:90)
at org.apache.ambari.server.api.services.RepositoryService.createRepository(RepositoryService.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java
Caused by: org.apache.ambari.server.StackAccessException: Stack data, stackName=HDP, stackVersion= 2.6, osType=ubuntu16, repoId= HDP-2.6-GPL
at org.apache.ambari.server.api.services.AmbariMetaInfo.getRepository(AmbariMetaInfo.java:411)
at org.apache.ambari.server.controller.AmbariManagementControllerImpl.verifyRepository(AmbariManagementControllerImpl.java:4621)
at org.apache.ambari.server.controller.AmbariManagementControllerImpl.verifyRepositories(AmbariManagementControllerImpl.java:4607)
at org.apache.ambari.server.controller.internal.RepositoryResourceProvider$6.invoke(RepositoryResourceProvider.java:221)
at org.apache.ambari.server.controller.internal.RepositoryResourceProvider$6.invoke(RepositoryResourceProvider.java:217)
at org.apache.ambari.server.controller.internal.AbstractResourceProvider.invokeWithRetry(AbstractResourceProvider.java:455)
at org.apache.ambari.server.controller.internal.AbstractResourceProvider.createResources(AbstractResourceProvider.java:278)
... 96 more
07-26-2018
07:44 AM
Hi there, I'd like to upgrade HDP 2.6.3 to HDP 2.6.5 without internet access in the cluster (OS: Ubuntu). The local repo files are downloaded, extracted and accessible to the cluster. In the last step, the base URLs are:
HDP-2.6: http://10.2.26.11/hdp2.6.5/HDP/ubuntu16/2.6.5.0-292
HDP-2.6-GPL: http://10.2.26.11/hdp2.6.5/HDP-GPL/ubuntu16/2.6.5.0-292/hdp-gpl.gpl.list
HDP-UTILS-1.1.0.22: http://10.2.26.11/hdp2.6.5/HDP-UTILS/ubuntu16/1.1.0.22
"HDP-2.6" validates successfully, but "HDP-2.6-GPL" and "HDP-UTILS-1.1.0.22" fail with the error: "Some of the repositories failed validation. Make changes to the base url or skip validation..." Any advice? (A quick check is sketched below.) Thanks
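One quick sanity check is to request the apt metadata each base URL should serve underneath it; a sketch, where the dist names (HDP, HDP-GPL, HDP-UTILS) are assumptions about the repo layout:
# an apt base URL must expose dists/<dist>/Release below it; a 404 here would
# explain a "failed validation" result for that repo
curl -sI http://10.2.26.11/hdp2.6.5/HDP/ubuntu16/2.6.5.0-292/dists/HDP/Release
curl -sI http://10.2.26.11/hdp2.6.5/HDP-GPL/ubuntu16/2.6.5.0-292/dists/HDP-GPL/Release
curl -sI http://10.2.26.11/hdp2.6.5/HDP-UTILS/ubuntu16/1.1.0.22/dists/HDP-UTILS/Release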
Labels:
- Hortonworks Data Platform (HDP)
07-20-2018
04:23 AM
Hi Felix, I followed your guideline and changed the command as follows:
spark-submit \
--class com.test.SmokeTest \
--master yarn \
--deploy-mode cluster \
--driver-memory 1g \
--executor-memory 2g \
--executor-cores 2 \
--num-executors 3 \
--files /etc/hbase/conf/hbase-site.xml \
--conf "spark.executor.extraClassPath=phoenix-client.jar:hbase-client.jar:phoenix-spark-4.7.0.2.6.2.0-205.jar:hbase-common.jar:hbase-protocol.jar:phoenix-core-4.7.0.2.6.2.0-205.jar" \
--conf "spark.driver.extraClassPath=phoenix-client.jar:hbase-client.jar:phoenix-spark-4.7.0.2.6.2.0-205.jar:hbase-common.jar:hbase-protocol.jar:phoenix-core-4.7.0.2.6.2.0-205.jar" \
--jars /usr/hdp/current/phoenix-client/phoenix-client.jar,/usr/hdp/current/phoenix-client/lib/hbase-client.jar,/usr/hdp/current/phoenix-client/lib/phoenix-spark-4.7.0.2.6.2.0-205.jar,/usr/hdp/current/phoenix-client/lib/hbase-common.jar,/usr/hdp/current/phoenix-client/lib/hbase-protocol.jar,/usr/hdp/current/phoenix-client/lib/phoenix-core-4.7.0.2.6.2.0-205.jar \
--verbose \
/tmp/test-1.0-SNAPSHOT.jar
which encounters the same error: User class threw exception: java.lang.NoClassDefFoundError: org/apache/spark/sql/DataFrame. But it runs successfully with Spark 1.6.3 (see the version check sketched below).
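For reference, this matches the Spark version difference: in Spark 2, org.apache.spark.sql.DataFrame is only a type alias for Dataset[Row], so the class file no longer exists. A sketch of how one might confirm this, where the HDP jar locations are assumptions:
# Spark 1.6 ships DataFrame as a real class ...
unzip -l /usr/hdp/current/spark-client/lib/spark-assembly-*.jar | grep 'org/apache/spark/sql/DataFrame.class'
# ... while the Spark 2 SQL jar has no such class file, hence the NoClassDefFoundError
unzip -l /usr/hdp/current/spark2-client/jars/spark-sql_*.jar | grep 'org/apache/spark/sql/DataFrame.class'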
07-18-2018
12:02 PM
spark-submit \
--class com.test.SmokeTest \
--master yarn \
--deploy-mode cluster \
--driver-memory 1g \
--executor-memory 2g \
--executor-cores 2 \
--num-executors 3 \
--files /etc/hbase/conf/hbase-site.xml \
--conf "spark.executor.extraClassPath=phoenix-client.jar:hbase-client.jar:phoenix-spark-4.7.0.2.6.2.0-205.jar:hbase-common.jar:hbase-protocol.jar:phoenix-core-4.7.0.2.6.2.0-205.jar" \
--conf "spark.driver.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-client.jar:/usr/hdp/current/phoenix-client/lib/hbase-client.jar:/usr/hdp/current/phoenix-client/lib/phoenix-spark-4.7.0.2.6.2.0-205.jar:/usr/hdp/current/phoenix-client/lib/hbase-common.jar:/usr/hdp/current/phoenix-client/lib/hbase-protocol.jar:/usr/hdp/current/phoenix-client/lib/phoenix-core-4.7.0.2.6.2.0-205.jar" \
--jars /usr/hdp/current/phoenix-client/phoenix-client.jar,/usr/hdp/current/phoenix-client/lib/hbase-client.jar,/usr/hdp/current/phoenix-client/lib/phoenix-spark-4.7.0.2.6.2.0-205.jar,/usr/hdp/current/phoenix-client/lib/hbase-common.jar,/usr/hdp/current/phoenix-client/lib/hbase-protocol.jar,/usr/hdp/current/phoenix-client/lib/phoenix-core-4.7.0.2.6.2.0-205.jar \
--verbose \
/tmp/test-1.0-SNAPSHOT.jar
Following your advice, I set the classpath and copied the said XML, but I still get this error:
18/07/18 19:47:59 INFO Client:
client token: Token { kind: YARN_CLIENT_TOKEN, service: }
diagnostics: User class threw exception: java.lang.NoClassDefFoundError: org/apache/spark/sql/DataFrame
ApplicationMaster host: 10.2.29.104
ApplicationMaster RPC port: 0
queue: default
start time: 1531914415906
final status: FAILED
tracking URL: http://en1-dev1-tbdp.trendy-global.com:8088/proxy/application_1531814517578_0019/
user: nifi
Exception in thread "main" org.apache.spark.SparkException: Application application_1531814517578_0019 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1261)
at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1307)
at org.apache.spark.deploy.yarn.Client.main(Client.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$runMain(SparkSubmit.scala:751)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
18/07/18 19:47:59 INFO ShutdownHookManager: Shutdown hook called
07-17-2018
05:39 AM
Hi @Felix Albani, thanks for your advice. I changed the submit command to:
spark-submit \
--class com.test.SmokeTest \
--master yarn \
--deploy-mode cluster \
--driver-memory 1g \
--executor-memory 2g \
--executor-cores 4 \
--num-executors 2 \
--files /etc/hbase/conf/hbase-site.xml \
--conf "spark.executor.extraClassPath=phoenix-4.7.0.2.6.2.0-205-spark2.jar:phoenix-client.jar:hbase-client.jar:phoenix-spark2-4.7.0.2.6.2.0-205.jar:hbase-common.jar:hbase-protocol.jar:phoenix-core-4.7.0.2.6.2.0-205.jar" \
--conf "spark.driver.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-4.7.0.2.6.2.0-205-spark2.jar:/usr/hdp/current/phoenix-client/phoenix-client.jar:/usr/hdp/current/phoenix-client/lib/hbase-client.jar:/usr/hdp/current/phoenix-client/lib/phoenix-spark2-4.7.0.2.6.2.0-205.jar:/usr/hdp/current/phoenix-client/lib/hbase-common.jar:/usr/hdp/current/phoenix-client/lib/hbase-protocol.jar:/usr/hdp/current/phoenix-client/lib/phoenix-core-4.7.0.2.6.2.0-205.jar" \
--jars /usr/hdp/current/phoenix-client/phoenix-4.7.0.2.6.2.0-205-spark2.jar,/usr/hdp/current/phoenix-client/phoenix-client.jar,/usr/hdp/current/phoenix-client/lib/hbase-client.jar,/usr/hdp/current/phoenix-client/lib/phoenix-spark2-4.7.0.2.6.2.0-205.jar,/usr/hdp/current/phoenix-client/lib/hbase-common.jar,/usr/hdp/current/phoenix-client/lib/hbase-protocol.jar,/usr/hdp/current/phoenix-client/lib/phoenix-core-4.7.0.2.6.2.0-205.jar \
--verbose \
/tmp/test-1.0-SNAPSHOT.jar
but no luck:
18/07/17 13:16:21 INFO CodeGenerator: Code generated in 33.11763 ms
18/07/17 13:16:22 ERROR ApplicationMaster: User class threw exception: java.lang.NoSuchMethodError: org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix$default$4()Lscala/Option;
java.lang.NoSuchMethodError: org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix$default$4()Lscala/Option;
at com.trendyglobal.bigdata.inventory.CreateTestData$anonfun$main$1.apply$mcVI$sp(CreateTestData.scala:87)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
at com.trendyglobal.bigdata.inventory.CreateTestData$.main(CreateTestData.scala:80)
at com.trendyglobal.bigdata.inventory.CreateTestData.main(CreateTestData.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$anon$3.run(ApplicationMaster.scala:654)
18/07/17 13:16:22 INFO ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: java.lang.NoSuchMethodError: org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix$default$4()Lscala/Option;)
18/07/17 13:16:22 INFO SparkContext: Invoking stop() from shutdown hook
18/07/17 13:16:22 INFO ServerConnector: Stopped Spark@81d2265{HTTP/1.1}{0.0.0.0:0}
18/07/17 13:16:22 INFO SparkUI: Stopped Spark web UI at http://10.2.29.104:37764
18/07/17 13:16:22 INFO YarnAllocator: Driver requested a total number of 0 executor(s).
18/07/17 13:16:22 INFO YarnClusterSchedulerBackend: Shutting down all executors
18/07/17 13:16:22 INFO YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
18/07/17 13:16:22 INFO SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
I can see that phoenix-spark2-4.7.0.2.6.2.0-205.jar was in the classpath:
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> phoenix-4.7.0.2.6.2.0-205-spark2.jar:phoenix-client.jar:hbase-client.jar:phoenix-spark2-4.7.0.2.6.2.0-205.jar:hbase-common.jar:hbase-protocol.jar:phoenix-core-4.7.0.2.6.2.0-205.jar<CPS>{{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>/etc/hadoop/conf<CPS>/usr/hdp/current/hadoop-client/*<CPS>/usr/hdp/current/hadoop-client/lib/*<CPS>/usr/hdp/current/hadoop-hdfs-client/*<CPS>/usr/hdp/current/hadoop-hdfs-client/lib/*<CPS>/usr/hdp/current/hadoop-yarn-client/*<CPS>/usr/hdp/current/hadoop-yarn-client/lib/*<CPS>$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/2.6.2.0-205/hadoop/lib/hadoop-lzo-0.6.0.2.6.2.0-205.jar:/etc/hadoop/conf/secure
SPARK_YARN_STAGING_DIR -> hdfs://nn1-dev1-tbdp.trendy-global.com:8020/user/nifi/.sparkStaging/application_1529853578712_0039
SPARK_USER -> nifi
SPARK_YARN_MODE -> true
command:
LD_LIBRARY_PATH="/usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:$LD_LIBRARY_PATH" \
{{JAVA_HOME}}/bin/java \
-server \
-Xmx2048m \
-Djava.io.tmpdir={{PWD}}/tmp \
'-Dspark.history.ui.port=18081' \
-Dspark.yarn.app.container.log.dir=<LOG_DIR> \
-XX:OnOutOfMemoryError='kill %p' \
org.apache.spark.executor.CoarseGrainedExecutorBackend \
--driver-url \
spark://CoarseGrainedScheduler@10.2.29.104:40401 \
--executor-id \
<executorId> \
--hostname \
<hostname> \
--cores \
4 \
--app-id \
application_1529853578712_0039 \
--user-class-path \
file:$PWD/__app__.jar \
--user-class-path \
file:$PWD/phoenix-4.7.0.2.6.2.0-205-spark2.jar \
--user-class-path \
file:$PWD/phoenix-client.jar \
--user-class-path \
file:$PWD/hbase-client.jar \
--user-class-path \
file:$PWD/phoenix-spark2-4.7.0.2.6.2.0-205.jar \
--user-class-path \
file:$PWD/hbase-common.jar \
--user-class-path \
file:$PWD/hbase-protocol.jar \
--user-class-path \
file:$PWD/phoenix-core-4.7.0.2.6.2.0-205.jar \
1><LOG_DIR>/stdout \
2><LOG_DIR>/stderr
resources:
hbase-common.jar -> resource { scheme: "hdfs" host: "nn1-dev1-tbdp.trendy-global.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1529853578712_0039/hbase-common.jar" } size: 575685 timestamp: 1531804498373 type: FILE visibility: PRIVATE
phoenix-4.7.0.2.6.2.0-205-spark2.jar -> resource { scheme: "hdfs" host: "nn1-dev1-tbdp.trendy-global.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1529853578712_0039/phoenix-4.7.0.2.6.2.0-205-spark2.jar" } size: 87275 timestamp: 1531804497220 type: FILE visibility: PRIVATE
__app__.jar -> resource { scheme: "hdfs" host: "nn1-dev1-tbdp.trendy-global.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1529853578712_0039/inventory-calc-service-1.0-SNAPSHOT.jar" } size: 41478 timestamp: 1531804497134 type: FILE visibility: PRIVATE
__spark_conf__ -> resource { scheme: "hdfs" host: "nn1-dev1-tbdp.trendy-global.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1529853578712_0039/__spark_conf__.zip" } size: 106688 timestamp: 1531804498824 type: ARCHIVE visibility: PRIVATE
hbase-client.jar -> resource { scheme: "hdfs" host: "nn1-dev1-tbdp.trendy-global.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1529853578712_0039/hbase-client.jar" } size: 1398707 timestamp: 1531804498300 type: FILE visibility: PRIVATE
phoenix-spark2-4.7.0.2.6.2.0-205.jar -> resource { scheme: "hdfs" host: "nn1-dev1-tbdp.trendy-global.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1529853578712_0039/phoenix-spark2-4.7.0.2.6.2.0-205.jar" } size: 81143 timestamp: 1531804498334 type: FILE visibility: PRIVATE
hbase-site.xml -> resource { scheme: "hdfs" host: "nn1-dev1-tbdp.trendy-global.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1529853578712_0039/hbase-site.xml" } size: 7320 timestamp: 1531804498662 type: FILE visibility: PRIVATE
hbase-protocol.jar -> resource { scheme: "hdfs" host: "nn1-dev1-tbdp.trendy-global.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1529853578712_0039/hbase-protocol.jar" } size: 4941870 timestamp: 1531804498450 type: FILE visibility: PRIVATE
__spark_libs__ -> resource { scheme: "hdfs" host: "nn1-dev1-tbdp.trendy-global.com" port: 8020 file: "/hdp/apps/2.6.2.0-205/spark2/spark2-hdp-yarn-archive.tar.gz" } size: 180384518 timestamp: 1507704288496 type: ARCHIVE visibility: PUBLIC
phoenix-client.jar -> resource { scheme: "hdfs" host: "nn1-dev1-tbdp.trendy-global.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1529853578712_0039/phoenix-client.jar" } size: 107566119 timestamp: 1531804498256 type: FILE visibility: PRIVATE
phoenix-core-4.7.0.2.6.2.0-205.jar -> resource { scheme: "hdfs" host: "nn1-dev1-tbdp.trendy-global.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1529853578712_0039/phoenix-core-4.7.0.2.6.2.0-205.jar" } size: 3834414 timestamp: 1531804498628 type: FILE visibility: PRIVATE
===============================================================================
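Since the failing method lives in org.apache.phoenix.spark.DataFrameFunctions, one way to check whether several jars on this classpath ship conflicting copies of the Phoenix-Spark classes is to list that class in each of them; a sketch using the jar paths from the submit command above:
for j in /usr/hdp/current/phoenix-client/phoenix-client.jar \
         /usr/hdp/current/phoenix-client/phoenix-4.7.0.2.6.2.0-205-spark2.jar \
         /usr/hdp/current/phoenix-client/lib/phoenix-spark2-4.7.0.2.6.2.0-205.jar; do
  echo "== $j"
  unzip -l "$j" | grep 'phoenix/spark/DataFrameFunctions' || echo "   (not present)"
done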
07-16-2018
11:28 AM
Any ideas on this issue? https://community.hortonworks.com/questions/202521/spark-submit-nosuchmethoderror-savetophoenix.html