Member since: 04-13-2016
Posts: 422
Kudos Received: 150
Solutions: 55
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1933 | 05-23-2018 05:29 AM |
| | 4970 | 05-08-2018 03:06 AM |
| | 1685 | 02-09-2018 02:22 AM |
| | 2716 | 01-24-2018 08:37 PM |
| | 6172 | 01-24-2018 05:43 PM |
06-30-2016 04:51 PM
@sheetal You are right.
1) Noticed that the Download option stopped working after upgrading Ambari from 2.1 to 2.2 and SmartSense from 1.1 to 1.2.
2) Verified the Ambari and SmartSense RPMs.
3) Noticed that the SmartSense View configuration still reported the version as 1.1.
4) The SmartSense view jar under /var/lib/ambari-server/resources/views was also still the 1.1 jar.
5) Copied the new 1.2 SmartSense view jar from /usr/hdp/share/hst/ambari-service/SMARTSENSE/package/files/view/smartsense-ambari-view-1.2.1.0-161.jar to /var/lib/ambari-server/resources/views and restarted the Ambari server.
This fixed the issue, and the 'Download' option is now available for all bundles.
I'm not sure whether the jars need to be copied manually for the SmartSense View upgrade from 1.1 to 1.2; I always thought the SmartSense View would be updated as part of the Ambari upgrade.
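The manual workaround in steps 4-5 can be sketched as a small script. The jar and views paths are taken from the post above, but verify them against your own cluster and versions first; this is a hedged sketch, not an official upgrade step. With DRY_RUN=1 (the default here) it only prints the commands instead of running them.

```shell
#!/bin/sh
# Sketch of the manual SmartSense view fix described above.
# Assumption: jar and views-directory paths are as reported in the post.
SRC=/usr/hdp/share/hst/ambari-service/SMARTSENSE/package/files/view/smartsense-ambari-view-1.2.1.0-161.jar
DEST=/var/lib/ambari-server/resources/views
DRY_RUN=${DRY_RUN:-1}   # default to printing commands instead of running them

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "$@"           # dry run: show the command that would be executed
  else
    "$@"                # real run: execute it
  fi
}

run cp "$SRC" "$DEST/"       # drop the 1.2 view jar into Ambari's views dir
run ambari-server restart    # restart so Ambari redeploys the view
```

Running it with DRY_RUN=0 performs the copy and restart for real.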
06-28-2016 01:36 AM
@Shihab Can you please check the Hive metastore? Please provide the metastore logs for a detailed understanding.
06-27-2016 10:03 PM
@Paul Hargis Thanks, Paul. No luck; I'm using Spark 1.4 (HDP 2.3).
06-27-2016 03:06 PM
@Adam Davidson Thanks for the response. Yes, Spark is installed on all the machines. Even when I run the command below, it throws the same error:

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --num-executors 1 --driver-memory 1024m --executor-memory 1024m --files /usr/hdp/current/spark-client/conf/hive-site.xml --jars /usr/hdp/current/spark-client/lib/datanucleus-api-jdo-3.2.6.jar,/usr/hdp/current/spark-client/lib/datanucleus-rdbms-3.2.9.jar,/usr/hdp/current/spark-client/lib/datanucleus-core-3.2.10.jar lib/spark-examples*.jar 10
06-25-2016 02:51 AM
@ScipioTheYounger Please correct me if I'm wrong: do you mean that the CLI and Beeline behave the same even after Ranger is installed?
06-25-2016 12:32 AM
@Paul Hargis Thanks for the quick response, and I appreciate you validating this on your machine. I'm not running in a sandbox; I'm getting this error on a cluster with 256 GB of RAM. Even the command below gives me the same error message:

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --num-executors 1 --driver-memory 1024m --executor-memory 1024m --executor-cores 1 lib/spark-examples*.jar 10
06-24-2016 07:19 PM
Hi, when I run the sample Spark job in client mode it executes, but when I run the same job in cluster mode it fails. May I know the reason?

Client mode:
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client --num-executors 1 --driver-memory 512m --executor-memory 512m --executor-cores 1 lib/spark-examples*.jar 10

Cluster mode:
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --num-executors 3 --driver-memory 4g --executor-memory 2g --executor-cores 1 lib/spark-examples*.jar 10

Error message from `yarn logs -applicationId <applicationnumber>`:

Container: container_1466521315275_0219_02_000001 on hostname.domain.com_45454
==========================================================================================
LogType:stderr
Log Upload Time:Fri Jun 24 14:11:39 -0500 2016
LogLength:88
Log Contents:
Error: Could not find or load main class org.apache.spark.deploy.yarn.ApplicationMaster
End of LogType:stderr

LogType:stdout
Log Upload Time:Fri Jun 24 14:11:39 -0500 2016
LogLength:0
Log Contents:
End of LogType:stdout

spark-defaults.conf file:
spark.driver.extraJavaOptions -Dhdp.version=2.3.2.0-2950
spark.history.kerberos.enabled true
spark.history.kerberos.keytab /etc/security/keytabs/spark.headless.keytab
spark.history.kerberos.principal spark-hdp@DOMAIN.COM
spark.history.provider org.apache.spark.deploy.yarn.history.YarnHistoryProvider
spark.history.ui.port 18080
spark.yarn.am.extraJavaOptions -Dhdp.version=2.3.2.0-2950
spark.yarn.containerLauncherMaxThreads 25
spark.yarn.driver.memoryOverhead 384
spark.yarn.executor.memoryOverhead 384
spark.yarn.historyServer.address sparkhistory.domain.com:18080
spark.yarn.max.executor.failures 3
spark.yarn.preserve.staging.files false
spark.yarn.queue default
spark.yarn.scheduler.heartbeat.interval-ms 5000
spark.yarn.services org.apache.spark.deploy.yarn.history.YarnHistoryService
spark.yarn.submit.file.replication 3

Any help is highly appreciated. Thanks in advance.
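On HDP, the "Could not find or load main class org.apache.spark.deploy.yarn.ApplicationMaster" failure in cluster mode is commonly tied to the hdp.version substitution not resolving when YARN launches the ApplicationMaster. As a hedged sketch (the conf path in the example comment is an assumption about a typical HDP install), a quick check that spark-defaults.conf carries the -Dhdp.version flags for both the driver and the AM:

```shell
#!/bin/sh
# Check that spark-defaults.conf sets -Dhdp.version for the driver and the
# YARN ApplicationMaster; a missing flag is one common cause of the
# "Could not find or load main class ...ApplicationMaster" failure on HDP.
check_hdp_version() {
  conf="$1"
  for key in spark.driver.extraJavaOptions spark.yarn.am.extraJavaOptions; do
    if ! grep -q "^$key .*-Dhdp\.version=" "$conf"; then
      echo "missing -Dhdp.version in $key"
      return 1
    fi
  done
  echo "hdp.version flags present"
}

# Example (path is an assumption; adjust to your install):
# check_hdp_version /usr/hdp/current/spark-client/conf/spark-defaults.conf
```

In the config pasted above both flags are present, so the next things to check would be that the Spark assembly jar is readable on every NodeManager and that the client and cluster nodes run the same HDP build.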
Labels:
- Apache Spark
06-23-2016 02:55 PM
@Gangadhar Kadam Yes, this time I got a different error: ERROR yarn.ApplicationMaster: SparkContext did not initialize after waiting for 100000 ms. Please check earlier log output for errors. Failing the application.
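The 100000 ms in that message matches the default of `spark.yarn.am.waitTime`, the time the YARN ApplicationMaster waits in cluster mode for the SparkContext to be initialized. Raising it can help when the driver is merely slow to start, though it won't fix a driver that never initializes at all; a hedged fragment (the 300s value is an illustrative choice, not a recommendation):

```
# In spark-defaults.conf: give the AM longer to wait for SparkContext
# (default 100s; 300s here is only an example value)
spark.yarn.am.waitTime 300s
```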
06-23-2016 04:36 AM
@Gangadhar Kadam spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --num-executors 1 --driver-memory 2G --executor-memory 2G --jars /usr/hdp/current/spark-client/lib/datanucleus-api-jdo-3.2.6.jar,/usr/hdp/current/spark-client/lib/datanucleus-rdbms-3.2.9.jar,/usr/hdp/current/spark-client/lib/datanucleus-core-3.2.10.jar --files /usr/hdp/current/spark-client/conf/hive-site.xml /usr/hdp/current/spark-client/lib/spark-examples-1.4.1.2.3.2.0-2950-hadoop2.7.1.2.3.2.0-2950.jar 10
06-23-2016 04:14 AM
@Gangadhar Kadam Thanks for the quick response. HA is enabled for HiveServer2, and hive.metastore.uris points to two Thrift servers. I have already followed all the steps, but that doesn't help in my scenario. I'm able to run the same job in client mode.