Member since: 06-13-2017
Posts: 45
Kudos Received: 2
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1351 | 01-24-2019 06:22 AM
 | 1535 | 08-06-2018 12:05 PM
12-11-2019
01:45 AM
@MattWho Per your advice, I have created the said "state" folder under "conf"; however, it raised the same error.
12-10-2019
01:53 AM
@MattWho I am running NiFi 1.10 on Windows 10 with an admin account, and the said "state" directory exists. Here is the full log around the S2S error:
2019-12-09 20:39:13,124 DEBUG [Timer-Driven Process Thread-10] org.apache.nifi.engine.FlowEngine A flow controller execution task 'java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@5a8bd0fa' has been cancelled.
2019-12-09 20:39:13,126 DEBUG [NiFi Web Server-17] org.apache.nifi.web.filter.TimerFilter GET http://localhost:8080/nifi-api/site-to-site/peers from 127.0.0.1 duration for Request ID null: 2 millis
2019-12-09 20:39:13,129 WARN [Http Site-to-Site PeerSelector] o.apache.nifi.remote.client.PeerSelector org.apache.nifi.remote.client.PeerSelector@f1ae5b8 Unable to refresh Remote Group's peers due to null
2019-12-09 20:39:13,129 WARN [Http Site-to-Site PeerSelector] o.a.n.r.SiteToSiteBulletinReportingTask SiteToSiteBulletinReportingTask[id=ab75497c-016e-1000-c86b-2763551bcb01] org.apache.nifi.remote.client.PeerSelector@f1ae5b8 Unable to refresh Remote Group's peers due to null
2019-12-09 20:39:13,129 DEBUG [Http Site-to-Site PeerSelector] o.apache.nifi.remote.client.PeerSelector
java.lang.NullPointerException: null
    at org.apache.nifi.remote.client.PeerSelector.persistPeerStatuses(PeerSelector.java:112)
    at org.apache.nifi.remote.client.PeerSelector.refreshPeers(PeerSelector.java:306)
    at org.apache.nifi.remote.client.http.HttpClient$2.run(HttpClient.java:86)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2019-12-09 20:39:13,132 DEBUG [Timer-Driven Process Thread-5] o.a.n.controller.tasks.ConnectableTask Triggering StandardPublicPort[id=db33781e-016e-1000-03bb-ceb910fd0a45,name=ErrorIn]
12-07-2019
03:01 AM
Hi there,
I have created an S2S reporting task pointing to the same instance of NiFi.
After enabling it, the "Bulletin" port does receive the error messages; however, the bulletin board continuously raises this warning:
2019-12-06 20:35:13,274 WARN [NiFi Site-to-Site Connection Pool Maintenance] o.apache.nifi.remote.client.PeerSelector org.apache.nifi.remote.client.PeerSelector@7a616d8a Unable to refresh Remote Group's peers due to null
Why does this happen, and does anyone have an idea how to fix it?
- Tags:
- NiFi
Labels:
- Apache NiFi
08-29-2019
04:27 AM
Both are true. The two NiFi clusters can't talk to each other directly, but they can communicate messages via a DB or via two folders (one for sync-in messages, the other for sync-out messages). The current S2S implementation won't work here, so I'm wondering whether a customized S2S implementation would fit.
08-29-2019
01:42 AM
The "move data" action is taken by third-party software instead of NiFi, and data moved between the two networks is not guaranteed (sometimes data is lost). I'd like to leverage NiFi's existing reliable-delivery approach (e.g. heartbeat, checksum, retry...) to get the maximum possible reliability.
08-28-2019
05:09 AM
Hi there,
There are two NiFi instances located in two separated networks which can't connect to each other via TCP/IP for security reasons. Instead, the networks can sync files in a folder (e.g. network A can move files in its folder A to network B's folder A, and network B can move files in its folder B to network A's folder B). Hence I'd like to implement a customized site-to-site protocol for this case.
Would anyone have any ideas/hints for me?
thanks
Forest
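To illustrate the reliability I'm after, here is a rough sketch of a checksum-verified handoff through the synced folder: the sender drops the payload plus a checksum manifest into the outbox, and the receiver only accepts a file once the checksum matches, otherwise leaving it for the next sync pass. All directory and file names below are made up for illustration:

```shell
# Sketch: sender writes payload + sha256 manifest into the synced outbox;
# receiver verifies before acknowledging. Temp dirs stand in for folder A/B.
set -eu
outbox=$(mktemp -d)   # folder A, synced from network A to network B
inbox=$(mktemp -d)    # where network B sees the synced files

# --- sender (network A) ---
printf 'flowfile payload\n' > "$outbox/data-001"
sha256sum "$outbox/data-001" | awk '{print $1}' > "$outbox/data-001.sha256"

# --- the third-party sync tool moving the files across the gap ---
cp "$outbox/data-001" "$outbox/data-001.sha256" "$inbox/"

# --- receiver (network B): verify before acknowledging ---
expected=$(cat "$inbox/data-001.sha256")
actual=$(sha256sum "$inbox/data-001" | awk '{print $1}')
if [ "$expected" = "$actual" ]; then
  status=ACCEPTED
else
  status=RETRY    # leave the file in place for the next sync pass
fi
echo "$status"
```

A real implementation would also need acknowledgement files flowing the other way to get retry semantics, which is the part I'd like to borrow from S2S.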
Labels:
- Apache NiFi
06-06-2019
11:06 AM
After restarting the ambari-agent service, I can now start ZK, HDFS, MR, Hive, and Spark2. Thanks @Jay Kumar SenSharma
06-06-2019
10:37 AM
I also tried using "ambari-admin-password-reset" to set the admin password, then logged in to Ambari as admin and repeated the start-service actions, but no luck starting the HDP services.
06-06-2019
10:27 AM
While trying to install the HDP 2.6.5 sandbox on Windows 10 with VMware, after starting the VM with 4 CPUs and 10 GB RAM, I can log in to Ambari but none of the services can be started.
I have tried the "restart all" service action and restarting ZooKeeper via the account raj_ops, but no luck.
01-24-2019
06:22 AM
Fixed after adding the edge node to the datanodes' /etc/hosts.
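For reference, the fix boils down to making the edge node's hostname resolvable from every datanode, so the ApplicationMaster can reach the driver. The hostname is the one from the error log; the IP below is only a placeholder:

```shell
# Sketch of the hosts entry; the IP address is illustrative.
entry='10.2.0.10 emr-header-1.cluster-92503'
echo "$entry"    # append this line to /etc/hosts on each datanode
```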
01-23-2019
03:51 PM
Hi there, the spark-sql command below fails when run on an edge node with only spark-client installed:
spark-sql --executor-cores 2 --num-executors 2 -e 'select profitcenter, count(1) from default.tableX group by profitcenter' --master yarn --name spark_sql_test1 --verbose --queue default
The error log is:
19/01/23 19:18:24 ERROR ApplicationMaster: Failed to connect to driver at emr-header-1.cluster-92503:35638, retrying ...
19/01/23 19:18:24 ERROR ApplicationMaster: Uncaught exception:
org.apache.spark.SparkException: Failed to connect to driver!
at org.apache.spark.deploy.yarn.ApplicationMaster.waitForSparkDriver(ApplicationMaster.scala:672)
at org.apache.spark.deploy.yarn.ApplicationMaster.runExecutorLauncher(ApplicationMaster.scala:532)
at org.apache.spark.deploy.yarn.ApplicationMaster.org$apache$spark$deploy$yarn$ApplicationMaster$$runImpl(ApplicationMaster.scala:347)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$2.apply$mcV$sp(ApplicationMaster.scala:260)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$2.apply(ApplicationMaster.scala:260)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$2.apply(ApplicationMaster.scala:260)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$5.run(ApplicationMaster.scala:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1727)
at org.apache.spark.deploy.yarn.ApplicationMaster.doAsUser(ApplicationMaster.scala:814)
at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:259)
at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:839)
at org.apache.spark.deploy.yarn.ExecutorLauncher$.main(ApplicationMaster.scala:869)
at org.apache.spark.deploy.yarn.ExecutorLauncher.main(ApplicationMaster.scala)
19/01/23 19:18:24 INFO ApplicationMaster: Final app status: FAILED, exitCode: 13, (reason: Uncaught exception: org.apache.spark.SparkException: Failed to connect to driver!)
19/01/23 19:18:24 INFO ShutdownHookManager: Shutdown hook called
End of LogType:stderr
emr-header-1.cluster-92503 is the hostname of the edge node. The command works fine if I change --master to local[*]. Would someone have any ideas? Thanks, Forest
Labels:
- Apache Spark
- Apache YARN
12-14-2018
10:36 AM
After adding the Atlas service in HDP 3.0, the service fails to start. The error is:
Error from server at http://en2-blue-tbdp.xxx.com:8886/solr: shards can be added only to 'implicit' collections
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://en2-blue-tbdp.xxx.com:8886/solr: shards can be added only to 'implicit' collections
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:577)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createShard(AmbariSolrCloudClient.java:227)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:116)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:473)
The Infra Solr service works well in the HDP cluster, and I assume Atlas is using Infra Solr. Does anyone have any advice?
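As a starting point for diagnosing this, the error suggests a shard-creation request against a collection whose router is not 'implicit', so it may help to inspect how the existing collections in Infra Solr were created. These are read-only Solr Collections API calls (host/port taken from the error above; output will vary per cluster):

```shell
# List collections and their router/shard layout in the Infra Solr
# instance; neither call changes any state.
curl "http://en2-blue-tbdp.xxx.com:8886/solr/admin/collections?action=LIST&wt=json"
curl "http://en2-blue-tbdp.xxx.com:8886/solr/admin/collections?action=CLUSTERSTATUS&wt=json"
```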
Labels:
- Apache Atlas
- Apache Solr
08-30-2018
07:38 AM
Thanks for your hints @euricana. It was indeed caused by the server time being 5 minutes earlier than the host time.
08-30-2018
07:29 AM
The same program runs successfully on an edge node with the Hive2 client installed.
08-30-2018
07:10 AM
Hi all,
I'd like to connect to Hive with Kerberos enabled via JDBC from Windows. MIT Kerberos is installed and a ticket is retrieved.
The smoke test java code is
import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class HiveJDBC2 {
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";

    public static void main(String[] args) throws SQLException, IOException, ClassNotFoundException {
        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        }
        // Point the JVM at the JAAS and krb5 configuration used for the test
        System.setProperty("java.security.auth.login.config", "gss-jaas.conf");
        System.setProperty("sun.security.jgss.debug", "true");
        System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
        System.setProperty("java.security.krb5.conf", "krb5.conf");
        Connection con = DriverManager.getConnection(
                "jdbc:hive2://10.2.29.102:10000/default;principal=hive/lhq0363.abcd.com@ABCD.COM");
        System.out.println("Connected");
        con.close();
    }
}
The error log is:
Search Subject for Kerberos V5 INIT cred (<<DEF>>, sun.security.jgss.krb5.Krb5InitCredential)
Debug is true storeKey false useTicketCache true useKeyTab false doNotPrompt false ticketCache is null isInitiator true KeyTab is null refreshKrb5Config is false principal is hive/lhq0363.abcd.com@ABCD.COM tryFirstPass is false useFirstPass is false storePass is false clearPass is false
Acquire TGT from Cache
Principal is hive/lhq0363.abcd.com@ABCD.COM
Commit Succeeded
Exception in thread "main" java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://10.2.29.102:10000/default;principal=hive/lhq0363.abcd.com@ABCD.COM: GSS initiate failed
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:218)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:156)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:270)
at com.test.HiveJDBC2.main(HiveJDBC2.java:26)
Caused by: org.apache.thrift.transport.TTransportException: GSS initiate failed
at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:194)
... 5 more
I have searched quite a few threads but with no luck. Would someone give me any ideas?
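From what I've read, "GSS initiate failed" at this point is usually environmental rather than a code problem: no or expired TGT, a wrong service principal in the JDBC URL, or clock skew between the client and the KDC/server (Kerberos tolerates about 5 minutes by default). Checks I can run from the Windows side (commands only; the principal in the kinit line is illustrative):

```shell
# From the Windows client with MIT Kerberos installed:
klist                      # is there a valid, unexpired TGT?
kinit forest@ABCD.COM      # re-obtain one if not (principal is illustrative)
w32tm /stripchart /computer:lhq0363.abcd.com /samples:3   # clock offset vs the Hive host
```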
Labels:
- Apache Hadoop
- Apache Hive
- Kerberos
- Security
08-10-2018
02:31 AM
I have tried re-installing Ranger and Ranger KMS; everything works well, except the warning still exists. I wonder if it is a spurious warning.
08-08-2018
11:19 AM
Hi there, after upgrading to HDP 3.0 I got a warning: "User:amb_ranger_admin credentials on Ambari UI are not in sync with Ranger". I followed the advice from https://community.hortonworks.com/questions/88381/useramb-ranger-admin-credentials-on-ambari-ui-are.html . As I had forgotten the original password of amb_ranger_admin in Ranger, I reset the amb_ranger_admin password in the Ranger UI to the same value as the "Ranger Admin username for Ambari" password in Ambari. But then Ranger failed to restart from Ambari. The error is:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER/package/scripts/ranger_admin.py", line 236, in <module>
RangerAdmin().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER/package/scripts/ranger_admin.py", line 94, in start
setup_ranger_xml.validate_user_password()
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER/package/scripts/setup_ranger_xml.py", line 838, in validate_user_password
raise Fail("Password validation failed for : " + ", ".join(validation) + ". Password should be minimum 8 characters with minimum one alphabet and one numeric. Unsupported special characters are \" ' \ `")
resource_management.core.exceptions.Fail: Password validation failed for : admin_password. Password should be minimum 8 characters with minimum one alphabet and one numeric. Unsupported special characters are " ' \ `
I am pretty sure that the password of "Ranger Admin username for Ambari" fulfills the requirement. Any idea?
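For anyone hitting this, the rule the stack script enforces (per the message above) can be checked locally before setting the password in both UIs: at least 8 characters, at least one letter, at least one digit, and none of " ' \ `. A small sketch of that same validation (sample passwords are made up):

```shell
# Mirrors the stated rule from the Fail message; echoes valid/invalid.
check_pw() {
  pw=$1
  if [ "${#pw}" -ge 8 ] \
     && printf '%s' "$pw" | grep -q '[A-Za-z]' \
     && printf '%s' "$pw" | grep -q '[0-9]' \
     && ! printf '%s' "$pw" | grep -q '["'"'"'\\`]'; then
    echo valid
  else
    echo invalid
  fi
}
check_pw 'Ranger2018pw'    # valid
check_pw 'short1'          # invalid: fewer than 8 characters
check_pw 'Ranger2018`'     # invalid: contains a backtick
```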
08-07-2018
11:17 AM
Hi there, in the last step of upgrading HDP 2.6.3 to HDP 3.0 on my Ubuntu servers, it asks me to update the passwords of Ranger Usersync, Tagsync, and Keyadmin. However, in the Ranger service Config tab in Ambari, the password input boxes are all disabled. Am I doing anything wrong? How can I resolve it?
08-06-2018
12:05 PM
It's caused by a missing file in the dists folder of HDP-UTILS-1.1.0.22-ubuntu16.tar.gz, and was fixed as follows after getting advice from an HDP consultant:
1. Extract HDP-UTILS-1.1.0.22-ubuntu16.tar.gz
2. mkdir HDP-UTILS under the dists folder
3. Copy all files from the HDP repo /hdp/HDP/ubuntu16/2.6.5.0-292/dists/HDP/ to the HDP-UTILS folder above
4. Update the HDP-UTILS repo base URL according to https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.0.0/bk_ambari-installation/content/setting_up_a_local_repository_with_no_internet_access.html
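The same steps as a shell sketch, with placeholder directories standing in for the extracted tarball and the mirrored HDP repo (the real paths are the ones in the list above):

```shell
set -eu
utils=$(mktemp -d)    # stands in for the extracted HDP-UTILS tree
hdp=$(mktemp -d)      # stands in for /hdp/HDP/ubuntu16/2.6.5.0-292
mkdir -p "$utils/dists" "$hdp/dists/HDP"
printf 'Release metadata\n' > "$hdp/dists/HDP/Release"   # placeholder content

mkdir -p "$utils/dists/HDP-UTILS"                        # step 2
cp -r "$hdp/dists/HDP/." "$utils/dists/HDP-UTILS/"       # step 3
ls "$utils/dists/HDP-UTILS"                              # Release metadata now present
```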
07-26-2018
03:30 PM
My Ambari version is 2.6.0.0. The <tag> element is commented out in the VDF file HDP-2.6.5.0-292.xml to fix the "invalid tag" issue while uploading the VDF in the first step. Is that causing the error afterwards? (attached: hdp-2650-292.xml)
07-26-2018
11:13 AM
26 Jul 2018 19:00:38,853 ERROR [ambari-client-thread-52774] BaseManagementHandler:61 - Caught a system exception while attempting to create a resource: An internal system exception occurred: Stack data, stackName=HDP, stackVersion= 2.6, osType=ubuntu16, repoId= HDP-UTILS-1.1.0.22
org.apache.ambari.server.controller.spi.SystemException: An internal system exception occurred: Stack data, stackName=HDP, stackVersion= 2.6, osType=ubuntu16, repoId= HDP-UTILS-1.1.0.22
at org.apache.ambari.server.controller.internal.AbstractResourceProvider.createResources(AbstractResourceProvider.java:287)
at org.apache.ambari.server.controller.internal.RepositoryResourceProvider.createResources(RepositoryResourceProvider.java:217)
at org.apache.ambari.server.controller.internal.ClusterControllerImpl.createResources(ClusterControllerImpl.java:298)
at org.apache.ambari.server.api.services.persistence.PersistenceManagerImpl.create(PersistenceManagerImpl.java:97)
at org.apache.ambari.server.api.handlers.CreateHandler.persist(CreateHandler.java:37)
at org.apache.ambari.server.api.handlers.BaseManagementHandler.handleRequest(BaseManagementHandler.java:73)
at org.apache.ambari.server.api.services.BaseRequest.process(BaseRequest.java:144)
at org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:126)
at org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:90)
at org.apache.ambari.server.api.services.RepositoryService.createRepository(RepositoryService.java:100)
at sun.reflect.GeneratedMethodAccessor763.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
Caused by: org.apache.ambari.server.StackAccessException: Stack data, stackName=HDP, stackVersion= 2.6, osType=ubuntu16, repoId= HDP-UTILS-1.1.0.22
at org.apache.ambari.server.api.services.AmbariMetaInfo.getRepository(AmbariMetaInfo.java:411)
at org.apache.ambari.server.controller.AmbariManagementControllerImpl.verifyRepository(AmbariManagementControllerImpl.java:4621)
at org.apache.ambari.server.controller.AmbariManagementControllerImpl.verifyRepositories(AmbariManagementControllerImpl.java:4607)
at org.apache.ambari.server.controller.internal.RepositoryResourceProvider$6.invoke(RepositoryResourceProvider.java:221)
at org.apache.ambari.server.controller.internal.RepositoryResourceProvider$6.invoke(RepositoryResourceProvider.java:217)
at org.apache.ambari.server.controller.internal.AbstractResourceProvider.invokeWithRetry(AbstractResourceProvider.java:455)
at org.apache.ambari.server.controller.internal.AbstractResourceProvider.createResources(AbstractResourceProvider.java:278)
... 95 more
26 Jul 2018 11:42:00,631 ERROR [ambari-client-thread-50569] BaseManagementHandler:61 - Caught a system exception while attempting to create a resource: An internal system exception occurred: Stack data, stackName=HDP, stackVersion= 2.6, osType=ubuntu16, repoId= HDP-2.6-GPL
org.apache.ambari.server.controller.spi.SystemException: An internal system exception occurred: Stack data, stackName=HDP, stackVersion= 2.6, osType=ubuntu16, repoId= HDP-2.6-GPL
at org.apache.ambari.server.controller.internal.AbstractResourceProvider.createResources(AbstractResourceProvider.java:287)
at org.apache.ambari.server.controller.internal.RepositoryResourceProvider.createResources(RepositoryResourceProvider.java:217)
at org.apache.ambari.server.controller.internal.ClusterControllerImpl.createResources(ClusterControllerImpl.java:298)
at org.apache.ambari.server.api.services.persistence.PersistenceManagerImpl.create(PersistenceManagerImpl.java:97)
at org.apache.ambari.server.api.handlers.CreateHandler.persist(CreateHandler.java:37)
at org.apache.ambari.server.api.handlers.BaseManagementHandler.handleRequest(BaseManagementHandler.java:73)
at org.apache.ambari.server.api.services.BaseRequest.process(BaseRequest.java:144)
at org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:126)
at org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:90)
at org.apache.ambari.server.api.services.RepositoryService.createRepository(RepositoryService.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java
Caused by: org.apache.ambari.server.StackAccessException: Stack data, stackName=HDP, stackVersion= 2.6, osType=ubuntu16, repoId= HDP-2.6-GPL
at org.apache.ambari.server.api.services.AmbariMetaInfo.getRepository(AmbariMetaInfo.java:411)
at org.apache.ambari.server.controller.AmbariManagementControllerImpl.verifyRepository(AmbariManagementControllerImpl.java:4621)
at org.apache.ambari.server.controller.AmbariManagementControllerImpl.verifyRepositories(AmbariManagementControllerImpl.java:4607)
at org.apache.ambari.server.controller.internal.RepositoryResourceProvider$6.invoke(RepositoryResourceProvider.java:221)
at org.apache.ambari.server.controller.internal.RepositoryResourceProvider$6.invoke(RepositoryResourceProvider.java:217)
at org.apache.ambari.server.controller.internal.AbstractResourceProvider.invokeWithRetry(AbstractResourceProvider.java:455)
at org.apache.ambari.server.controller.internal.AbstractResourceProvider.createResources(AbstractResourceProvider.java:278)
... 96 more
07-26-2018
07:44 AM
Hi there, I'd like to upgrade HDP 2.6.3 to HDP 2.6.5 without internet access in the cluster (OS: Ubuntu). The local repo files are downloaded, extracted, and accessible by the cluster. In the last step, the base URLs are:
HDP-2.6: http://10.2.26.11/hdp2.6.5/HDP/ubuntu16/2.6.5.0-292
HDP-2.6-GPL: http://10.2.26.11/hdp2.6.5/HDP-GPL/ubuntu16/2.6.5.0-292/hdp-gpl.gpl.list
HDP-UTILS-1.1.0.22: http://10.2.26.11/hdp2.6.5/HDP-UTILS/ubuntu16/1.1.0.22
"HDP-2.6" validates successfully, but "HDP-2.6-GPL" and "HDP-UTILS-1.1.0.22" hit the error: "Some of the repositories failed validation. Make changes to the base url or skip validation..."
Any advice? Thanks
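One sanity check I can do by hand is to fetch the failing base URLs from a cluster node; notably, the HDP-2.6-GPL base URL given above points at a .list file rather than a repo directory, which may be what validation rejects. Commands only (they just HEAD the URLs, output varies):

```shell
# HEAD each base URL from a node in the cluster; a valid base URL should
# be a browsable repo directory, not a single file.
curl -I http://10.2.26.11/hdp2.6.5/HDP-GPL/ubuntu16/2.6.5.0-292/
curl -I http://10.2.26.11/hdp2.6.5/HDP-UTILS/ubuntu16/1.1.0.22/
```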
Labels:
- Hortonworks Data Platform (HDP)
07-20-2018
04:23 AM
Hi Felix, I followed your guideline and changed the command as follows:
spark-submit \
--class com.test.SmokeTest \
--master yarn \
--deploy-mode cluster \
--driver-memory 1g \
--executor-memory 2g \
--executor-cores 2 \
--num-executors 3 \
--files /etc/hbase/conf/hbase-site.xml \
--conf "spark.executor.extraClassPath=phoenix-client.jar:hbase-client.jar:phoenix-spark-4.7.0.2.6.2.0-205.jar:hbase-common.jar:hbase-protocol.jar:phoenix-core-4.7.0.2.6.2.0-205.jar" \
--conf "spark.driver.extraClassPath=phoenix-client.jar:hbase-client.jar:phoenix-spark-4.7.0.2.6.2.0-205.jar:hbase-common.jar:hbase-protocol.jar:phoenix-core-4.7.0.2.6.2.0-205.jar" \
--jars /usr/hdp/current/phoenix-client/phoenix-client.jar,/usr/hdp/current/phoenix-client/lib/hbase-client.jar,/usr/hdp/current/phoenix-client/lib/phoenix-spark-4.7.0.2.6.2.0-205.jar,/usr/hdp/current/phoenix-client/lib/hbase-common.jar,/usr/hdp/current/phoenix-client/lib/hbase-protocol.jar,/usr/hdp/current/phoenix-client/lib/phoenix-core-4.7.0.2.6.2.0-205.jar \
--verbose \
/tmp/test-1.0-SNAPSHOT.jar
This encounters the same error: User class threw exception: java.lang.NoClassDefFoundError: org/apache/spark/sql/DataFrame. But it runs successfully with Spark 1.6.3.
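For context on why Spark 1.6.3 works but Spark 2 does not: org.apache.spark.sql.DataFrame was a class in Spark 1.x but became a type alias for Dataset[Row] in Spark 2, so the class does not exist at runtime there, and a jar compiled against Spark 1.x (here, phoenix-spark rather than phoenix-spark2) throws exactly this NoClassDefFoundError. One way to inspect which Spark a jar was built against (path from the command above; output varies):

```shell
# Look for Spark 1.x-era class names inside the phoenix-spark jar; their
# presence suggests it was compiled against the Spark 1 API.
unzip -l /usr/hdp/current/phoenix-client/lib/phoenix-spark-4.7.0.2.6.2.0-205.jar | grep -i spark
```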
07-18-2018
12:02 PM
spark-submit \
--class com.test.SmokeTest \
--master yarn \
--deploy-mode cluster \
--driver-memory 1g \
--executor-memory 2g \
--executor-cores 2 \
--num-executors 3 \
--files /etc/hbase/conf/hbase-site.xml \
--conf "spark.executor.extraClassPath=phoenix-client.jar:hbase-client.jar:phoenix-spark-4.7.0.2.6.2.0-205.jar:hbase-common.jar:hbase-protocol.jar:phoenix-core-4.7.0.2.6.2.0-205.jar" \
--conf "spark.driver.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-client.jar:/usr/hdp/current/phoenix-client/lib/hbase-client.jar:/usr/hdp/current/phoenix-client/lib/phoenix-spark-4.7.0.2.6.2.0-205.jar:/usr/hdp/current/phoenix-client/lib/hbase-common.jar:/usr/hdp/current/phoenix-client/lib/hbase-protocol.jar:/usr/hdp/current/phoenix-client/lib/phoenix-core-4.7.0.2.6.2.0-205.jar" \
--jars /usr/hdp/current/phoenix-client/phoenix-client.jar,/usr/hdp/current/phoenix-client/lib/hbase-client.jar,/usr/hdp/current/phoenix-client/lib/phoenix-spark-4.7.0.2.6.2.0-205.jar,/usr/hdp/current/phoenix-client/lib/hbase-common.jar,/usr/hdp/current/phoenix-client/lib/hbase-protocol.jar,/usr/hdp/current/phoenix-client/lib/phoenix-core-4.7.0.2.6.2.0-205.jar \
--verbose \
/tmp/test-1.0-SNAPSHOT.jar
Following your advice, I set the classpath and copied the said xml, but I still get an error:
18/07/18 19:47:59 INFO Client:
client token: Token { kind: YARN_CLIENT_TOKEN, service: }
diagnostics: User class threw exception: java.lang.NoClassDefFoundError: org/apache/spark/sql/DataFrame
ApplicationMaster host: 10.2.29.104
ApplicationMaster RPC port: 0
queue: default
start time: 1531914415906
final status: FAILED
tracking URL: http://en1-dev1-tbdp.trendy-global.com:8088/proxy/application_1531814517578_0019/
user: nifi
Exception in thread "main" org.apache.spark.SparkException: Application application_1531814517578_0019 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1261)
at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1307)
at org.apache.spark.deploy.yarn.Client.main(Client.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:751)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
18/07/18 19:47:59 INFO ShutdownHookManager: Shutdown hook called
07-17-2018
05:39 AM
Hi @Felix Albani, thanks for your advice. I changed the submit command to:
spark-submit \
--class com.test.SmokeTest \
--master yarn \
--deploy-mode cluster \
--driver-memory 1g \
--executor-memory 2g \
--executor-cores 4 \
--num-executors 2 \
--files /etc/hbase/conf/hbase-site.xml \
--conf "spark.executor.extraClassPath=phoenix-4.7.0.2.6.2.0-205-spark2.jar:phoenix-client.jar:hbase-client.jar:phoenix-spark2-4.7.0.2.6.2.0-205.jar:hbase-common.jar:hbase-protocol.jar:phoenix-core-4.7.0.2.6.2.0-205.jar" \
--conf "spark.driver.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-4.7.0.2.6.2.0-205-spark2.jar:/usr/hdp/current/phoenix-client/phoenix-client.jar:/usr/hdp/current/phoenix-client/lib/hbase-client.jar:/usr/hdp/current/phoenix-client/lib/phoenix-spark2-4.7.0.2.6.2.0-205.jar:/usr/hdp/current/phoenix-client/lib/hbase-common.jar:/usr/hdp/current/phoenix-client/lib/hbase-protocol.jar:/usr/hdp/current/phoenix-client/lib/phoenix-core-4.7.0.2.6.2.0-205.jar" \
--jars /usr/hdp/current/phoenix-client/phoenix-4.7.0.2.6.2.0-205-spark2.jar,/usr/hdp/current/phoenix-client/phoenix-client.jar,/usr/hdp/current/phoenix-client/lib/hbase-client.jar,/usr/hdp/current/phoenix-client/lib/phoenix-spark2-4.7.0.2.6.2.0-205.jar,/usr/hdp/current/phoenix-client/lib/hbase-common.jar,/usr/hdp/current/phoenix-client/lib/hbase-protocol.jar,/usr/hdp/current/phoenix-client/lib/phoenix-core-4.7.0.2.6.2.0-205.jar \
--verbose \
/tmp/test-1.0-SNAPSHOT.jar
But no luck:
18/07/17 13:16:21 INFO CodeGenerator: Code generated in 33.11763 ms
18/07/17 13:16:22 ERROR ApplicationMaster: User class threw exception: java.lang.NoSuchMethodError: org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix$default$4()Lscala/Option;
java.lang.NoSuchMethodError: org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix$default$4()Lscala/Option;
at com.trendyglobal.bigdata.inventory.CreateTestData$$anonfun$main$1.apply$mcVI$sp(CreateTestData.scala:87)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
at com.trendyglobal.bigdata.inventory.CreateTestData$.main(CreateTestData.scala:80)
at com.trendyglobal.bigdata.inventory.CreateTestData.main(CreateTestData.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:654)
18/07/17 13:16:22 INFO ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: java.lang.NoSuchMethodError: org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix$default$4()Lscala/Option;)
18/07/17 13:16:22 INFO SparkContext: Invoking stop() from shutdown hook
18/07/17 13:16:22 INFO ServerConnector: Stopped Spark@81d2265{HTTP/1.1}{0.0.0.0:0}
18/07/17 13:16:22 INFO SparkUI: Stopped Spark web UI at http://10.2.29.104:37764
18/07/17 13:16:22 INFO YarnAllocator: Driver requested a total number of 0 executor(s).
18/07/17 13:16:22 INFO YarnClusterSchedulerBackend: Shutting down all executors
18/07/17 13:16:22 INFO YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
18/07/17 13:16:22 INFO SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
I can see that phoenix-spark2-4.7.0.2.6.2.0-205.jar was in the classpath:
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> phoenix-4.7.0.2.6.2.0-205-spark2.jar:phoenix-client.jar:hbase-client.jar:phoenix-spark2-4.7.0.2.6.2.0-205.jar:hbase-common.jar:hbase-protocol.jar:phoenix-core-4.7.0.2.6.2.0-205.jar<CPS>{{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>/etc/hadoop/conf<CPS>/usr/hdp/current/hadoop-client/*<CPS>/usr/hdp/current/hadoop-client/lib/*<CPS>/usr/hdp/current/hadoop-hdfs-client/*<CPS>/usr/hdp/current/hadoop-hdfs-client/lib/*<CPS>/usr/hdp/current/hadoop-yarn-client/*<CPS>/usr/hdp/current/hadoop-yarn-client/lib/*<CPS>$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/2.6.2.0-205/hadoop/lib/hadoop-lzo-0.6.0.2.6.2.0-205.jar:/etc/hadoop/conf/secure
SPARK_YARN_STAGING_DIR -> hdfs://nn1-dev1-tbdp.trendy-global.com:8020/user/nifi/.sparkStaging/application_1529853578712_0039
SPARK_USER -> nifi
SPARK_YARN_MODE -> true
command:
LD_LIBRARY_PATH="/usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:$LD_LIBRARY_PATH" \
{{JAVA_HOME}}/bin/java \
-server \
-Xmx2048m \
-Djava.io.tmpdir={{PWD}}/tmp \
'-Dspark.history.ui.port=18081' \
-Dspark.yarn.app.container.log.dir=<LOG_DIR> \
-XX:OnOutOfMemoryError='kill %p' \
org.apache.spark.executor.CoarseGrainedExecutorBackend \
--driver-url \
spark://CoarseGrainedScheduler@10.2.29.104:40401 \
--executor-id \
<executorId> \
--hostname \
<hostname> \
--cores \
4 \
--app-id \
application_1529853578712_0039 \
--user-class-path \
file:$PWD/__app__.jar \
--user-class-path \
file:$PWD/phoenix-4.7.0.2.6.2.0-205-spark2.jar \
--user-class-path \
file:$PWD/phoenix-client.jar \
--user-class-path \
file:$PWD/hbase-client.jar \
--user-class-path \
file:$PWD/phoenix-spark2-4.7.0.2.6.2.0-205.jar \
--user-class-path \
file:$PWD/hbase-common.jar \
--user-class-path \
file:$PWD/hbase-protocol.jar \
--user-class-path \
file:$PWD/phoenix-core-4.7.0.2.6.2.0-205.jar \
1><LOG_DIR>/stdout \
2><LOG_DIR>/stderr
resources:
hbase-common.jar -> resource { scheme: "hdfs" host: "nn1-dev1-tbdp.trendy-global.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1529853578712_0039/hbase-common.jar" } size: 575685 timestamp: 1531804498373 type: FILE visibility: PRIVATE
phoenix-4.7.0.2.6.2.0-205-spark2.jar -> resource { scheme: "hdfs" host: "nn1-dev1-tbdp.trendy-global.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1529853578712_0039/phoenix-4.7.0.2.6.2.0-205-spark2.jar" } size: 87275 timestamp: 1531804497220 type: FILE visibility: PRIVATE
__app__.jar -> resource { scheme: "hdfs" host: "nn1-dev1-tbdp.trendy-global.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1529853578712_0039/inventory-calc-service-1.0-SNAPSHOT.jar" } size: 41478 timestamp: 1531804497134 type: FILE visibility: PRIVATE
__spark_conf__ -> resource { scheme: "hdfs" host: "nn1-dev1-tbdp.trendy-global.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1529853578712_0039/__spark_conf__.zip" } size: 106688 timestamp: 1531804498824 type: ARCHIVE visibility: PRIVATE
hbase-client.jar -> resource { scheme: "hdfs" host: "nn1-dev1-tbdp.trendy-global.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1529853578712_0039/hbase-client.jar" } size: 1398707 timestamp: 1531804498300 type: FILE visibility: PRIVATE
phoenix-spark2-4.7.0.2.6.2.0-205.jar -> resource { scheme: "hdfs" host: "nn1-dev1-tbdp.trendy-global.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1529853578712_0039/phoenix-spark2-4.7.0.2.6.2.0-205.jar" } size: 81143 timestamp: 1531804498334 type: FILE visibility: PRIVATE
hbase-site.xml -> resource { scheme: "hdfs" host: "nn1-dev1-tbdp.trendy-global.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1529853578712_0039/hbase-site.xml" } size: 7320 timestamp: 1531804498662 type: FILE visibility: PRIVATE
hbase-protocol.jar -> resource { scheme: "hdfs" host: "nn1-dev1-tbdp.trendy-global.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1529853578712_0039/hbase-protocol.jar" } size: 4941870 timestamp: 1531804498450 type: FILE visibility: PRIVATE
__spark_libs__ -> resource { scheme: "hdfs" host: "nn1-dev1-tbdp.trendy-global.com" port: 8020 file: "/hdp/apps/2.6.2.0-205/spark2/spark2-hdp-yarn-archive.tar.gz" } size: 180384518 timestamp: 1507704288496 type: ARCHIVE visibility: PUBLIC
phoenix-client.jar -> resource { scheme: "hdfs" host: "nn1-dev1-tbdp.trendy-global.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1529853578712_0039/phoenix-client.jar" } size: 107566119 timestamp: 1531804498256 type: FILE visibility: PRIVATE
phoenix-core-4.7.0.2.6.2.0-205.jar -> resource { scheme: "hdfs" host: "nn1-dev1-tbdp.trendy-global.com" port: 8020 file: "/user/nifi/.sparkStaging/application_1529853578712_0039/phoenix-core-4.7.0.2.6.2.0-205.jar" } size: 3834414 timestamp: 1531804498628 type: FILE visibility: PRIVATE
===============================================================================
07-16-2018
11:28 AM
Any idea about this issue? https://community.hortonworks.com/questions/202521/spark-submit-nosuchmethoderror-savetophoenix.html
07-16-2018
11:18 AM
Any solution or workaround? I am facing the same error: https://community.hortonworks.com/questions/202521/spark-submit-nosuchmethoderror-savetophoenix.html
07-16-2018
11:11 AM
This seems related to https://issues.apache.org/jira/browse/PHOENIX-3333; however, that issue is listed as fixed in HDP 2.6.2 according to https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_release-notes/content/patch_phoenix.html
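One thing worth checking on a cluster node is which jar actually supplies the Phoenix-Spark classes at runtime, since several jars on the classpath (e.g. the fat phoenix-client.jar) may each contain a copy. A minimal sketch, assuming it is run with the same --jars / extraClassPath as the failing job; the Phoenix class name is taken from the error in this thread:

```java
import java.security.CodeSource;

public class WhichJar {
    /** Returns the jar or directory a class was loaded from, or null if unknown. */
    public static String locationOf(String className) {
        try {
            CodeSource src = Class.forName(className)
                    .getProtectionDomain().getCodeSource();
            return (src == null || src.getLocation() == null)
                    ? null : src.getLocation().toString();
        } catch (ClassNotFoundException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        // Class name taken from the NoSuchMethodError stack trace in this thread.
        System.out.println(WhichJar.locationOf("org.apache.phoenix.spark.DataFrameFunctions"));
    }
}
```

If the printed location is a different jar than the phoenix-spark2 one, classpath ordering is likely shadowing the Spark 2 build.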
07-16-2018
08:44 AM
Hi there,
On my HDP 2.6.2 cluster I am using Spark 2.1.1 and Phoenix 4.7. I start spark-shell as below:
spark-shell --conf "spark.executor.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-4.7.0.2.6.2.0-205-spark2.jar:/usr/hdp/current/phoenix-client/phoenix-client.jar:/usr/hdp/current/phoenix-client/lib/hbase-client.jar:/usr/hdp/current/phoenix-client/lib/phoenix-spark2-4.7.0.2.6.2.0-205.jar:/usr/hdp/current/phoenix-client/lib/hbase-common.jar:/usr/hdp/current/phoenix-client/lib/hbase-protocol.jar:/usr/hdp/current/phoenix-client/lib/phoenix-core-4.7.0.2.6.2.0-205.jar" --conf "spark.driver.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-4.7.0.2.6.2.0-205-spark2.jar:/usr/hdp/current/phoenix-client/phoenix-client.jar:/usr/hdp/current/phoenix-client/lib/hbase-client.jar:/usr/hdp/current/phoenix-client/lib/phoenix-spark2-4.7.0.2.6.2.0-205.jar:/usr/hdp/current/phoenix-client/lib/hbase-common.jar:/usr/hdp/current/phoenix-client/lib/hbase-protocol.jar:/usr/hdp/current/phoenix-client/lib/phoenix-core-4.7.0.2.6.2.0-205.jar"
It can successfully save data to table2 with this code:
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.phoenix.spark._

val phoenixOptionMap = Map("table" -> "TABLE1", "zkUrl" -> "zk1:2181/hbase-secure")
val df2 = spark.sqlContext.read.format("org.apache.phoenix.spark").options(phoenixOptionMap).load()
val configuration = HBaseConfiguration.create()
configuration.set("zookeeper.znode.parent", "/hbase-secure")
df2.saveToPhoenix("table2", configuration, Option("zk1:2181/hbase-secure"))
Then I created a new Scala program:
package com.test

import org.apache.spark.sql.{SQLContext, SparkSession}
import org.apache.phoenix.spark._
import org.apache.hadoop.hbase.HBaseConfiguration

object SmokeTest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder()
      .appName("PhoenixSmokeTest")
      .getOrCreate()
    val phoenixOptionMap = Map("table" -> "TABLE1", "zkUrl" -> "zk1:2181/hbase-secure")
    val df2 = spark.sqlContext.read.format("org.apache.phoenix.spark").options(phoenixOptionMap).load()
    val configuration = HBaseConfiguration.create()
    configuration.set("zookeeper.znode.parent", "/hbase-secure")
    configuration.addResource("/etc/hbase/conf/hbase-site.xml")
    df2.saveToPhoenix("table2", configuration, Option("zk1:2181/hbase-secure"))
  }
}
and ran it with the spark-submit script below:
spark-submit \
--class com.test.SmokeTest \
--master yarn\
--deploy-mode client \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 4 \
--num-executors 2 \
--conf "spark.executor.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-4.7.0.2.6.2.0-205-spark2.jar:/usr/hdp/current/phoenix-client/phoenix-client.jar:/usr/hdp/current/phoenix-client/lib/hbase-client.jar:/usr/hdp/current/phoenix-client/lib/phoenix-spark2-4.7.0.2.6.2.0-205.jar:/usr/hdp/current/phoenix-client/lib/hbase-common.jar:/usr/hdp/current/phoenix-client/lib/hbase-protocol.jar:/usr/hdp/current/phoenix-client/lib/phoenix-core-4.7.0.2.6.2.0-205.jar" \
--conf "spark.driver.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-4.7.0.2.6.2.0-205-spark2.jar:/usr/hdp/current/phoenix-client/phoenix-client.jar:/usr/hdp/current/phoenix-client/lib/hbase-client.jar:/usr/hdp/current/phoenix-client/lib/phoenix-spark2-4.7.0.2.6.2.0-205.jar:/usr/hdp/current/phoenix-client/lib/hbase-common.jar:/usr/hdp/current/phoenix-client/lib/hbase-protocol.jar:/usr/hdp/current/phoenix-client/lib/phoenix-core-4.7.0.2.6.2.0-205.jar" \
--jars /usr/hdp/current/phoenix-client/phoenix-4.7.0.2.6.2.0-205-spark2.jar,/usr/hdp/current/phoenix-client/phoenix-client.jar,/usr/hdp/current/phoenix-client/lib/hbase-client.jar,/usr/hdp/current/phoenix-client/lib/phoenix-spark2-4.7.0.2.6.2.0-205.jar,/usr/hdp/current/phoenix-client/lib/hbase-common.jar,/usr/hdp/current/phoenix-client/lib/hbase-protocol.jar,/usr/hdp/current/phoenix-client/lib/phoenix-core-4.7.0.2.6.2.0-205.jar \
--verbose \
/tmp/test-1.0-SNAPSHOT.jar
It failed with the message below:
18/07/16 16:30:16 INFO ClientCnxn: Session establishment complete on server zk1/10.2.29.102:2181, sessionid = 0x364270588b5472f, negotiated timeout = 60000
18/07/16 16:30:17 INFO Metrics: Initializing metrics system: phoenix
18/07/16 16:30:17 INFO MetricsConfig: loaded properties from hadoop-metrics2.properties
18/07/16 16:30:17 INFO MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
18/07/16 16:30:17 INFO MetricsSystemImpl: phoenix metrics system started
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix$default$4()Lscala/Option;
at com.trendyglobal.bigdata.inventory.SmokeTest$.main(SmokeTest.scala:28)
at com.trendyglobal.bigdata.inventory.SmokeTest.main(SmokeTest.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:751)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
18/07/16 16:30:20 INFO SparkContext: Invoking stop() from shutdown hook
18/07/16 16:30:20 INFO ServerConnector: Stopped Spark@38f66b77{HTTP/1.1}{0.0.0.0:4040}
Would anyone have any advice? Thanks, Forest
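For anyone hitting the same error: a NoSuchMethodError on a Scala default-argument accessor such as saveToPhoenix$default$4() usually means the DataFrameFunctions class that won on the runtime classpath was compiled against a different phoenix-spark build than the one the program was compiled against. One hedged way to verify, with the class and method names copied from the stack trace above (everything else here is illustrative):

```java
public class MethodCheck {
    /** True if className is loadable and declares a public no-arg method methodName. */
    public static boolean hasMethod(String className, String methodName) {
        try {
            Class.forName(className).getMethod(methodName);
            return true;
        } catch (ClassNotFoundException | NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Names copied from the stack trace above; run on the cluster with the
        // same --jars / extraClassPath settings as the failing spark-submit.
        String cls = "org.apache.phoenix.spark.DataFrameFunctions";
        System.out.println(cls + "#saveToPhoenix$default$4 present: "
                + hasMethod(cls, "saveToPhoenix$default$4"));
    }
}
```

If this prints false under the job's classpath while the method exists in the phoenix-spark2 jar, a different Phoenix jar is shadowing it.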
Labels:
- Apache Phoenix
- Apache Spark