Member since: 12-10-2015
Posts: 43
Kudos Received: 39
Solutions: 3

My Accepted Solutions

Title | Views | Posted |
---|---|---|
 | 4652 | 02-04-2016 01:37 AM |
 | 15772 | 02-03-2016 02:03 AM |
 | 7172 | 01-26-2016 08:00 AM |
12-17-2015
07:36 AM
2 Kudos
I'm currently exploring Oozie's SparkAction, but I'm running into errors. The code is straightforward: it just selects from a Hive table and counts the records of the resulting DataFrame. It's simple dummy code to use while I learn how to work with Oozie:

```scala
val tbl = sqlContext.sql("SELECT * FROM tbl")
val count = tbl.count
log.info(s"The table has ${count} records.")
```

It works as expected when using `spark-submit`, but when I try to run it as an Oozie SparkAction, I get the following error in the logs:

```
Main class:
org.apache.spark.deploy.yarn.Client
Arguments:
--name
Testing Spark Action
--jar
hdfs://myhost.com:8020/user/bigdata/workflows/sparkaction-test/lib/sparkaction-test_2.10-1.0.jar
--class
com.myCompany.SparkActionTest
System properties:
SPARK_SUBMIT -> true
spark.app.name -> Testing Spark Action
spark.submit.deployMode -> cluster
spark.master -> yarn-cluster
Classpath elements:
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.SparkMain], main() threw exception, Application application_1454025267777_0681 finished with failed status
org.apache.spark.SparkException: Application application_1454025267777_0681 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:974)
at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1020)
at org.apache.spark.deploy.yarn.Client.main(Client.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:685)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
at org.apache.oozie.action.hadoop.SparkMain.runSpark(SparkMain.java:104)
at org.apache.oozie.action.hadoop.SparkMain.run(SparkMain.java:95)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:47)
at org.apache.oozie.action.hadoop.SparkMain.main(SparkMain.java:38)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:241)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
log4j:WARN No appenders could be found for logger (org.apache.spark.util.ShutdownHookManager).
log4j:WARN Please initialize the log4j system properly.
```

The project directory is arranged as follows:

```
sparkaction-test
-workflow.xml
-hive-site.xml
-job.properties
-lib/
  -sparkaction-test_2.10-1.0.jar
```

The content of job.properties:

```
nameNode=hdfs://myhost.com:8020
jobTracker=myhost.com:8032
queueName=default
projectRoot=user/${user.name}/workflows/sparkaction-test
master=yarn-cluster
mode=cluster
class=com.myCompany.SparkActionTest
hiveSite=hive-site.xml
jars=${nameNode}/${projectRoot}/lib/sparkaction-test_2.10-1.0.jar
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/${projectRoot}
spark.yarn.historyServer.address=http://myhost.com:18080/
spark.eventLog.dir=${nameNode}/user/spark/applicationHistory
spark.eventLog.enabled=true
```

workflow.xml:

```xml
<workflow-app name="spark-test-wf" xmlns="uri:oozie:workflow:0.4">
<start to="spark-test"/>
<action name="spark-test">
<spark xmlns="uri:oozie:spark-action:0.1">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.compress.map.output</name>
<value>true</value>
</property>
</configuration>
<master>${master}</master>
<mode>${mode}</mode>
<name>Testing Spark Action</name>
<class>${class}</class>
<jar>${jars}</jar>
</spark>
<ok to="end"/>
<error to="errorcleanup" />
</action>
<kill name="errorcleanup">
<message>Spark Test WF failed. [${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name ="end"/>
</workflow-app>
```

These are the jars in the Oozie sharelib:
```
datanucleus-api-jdo-3.2.6.jar
datanucleus-core-3.2.10.jar
datanucleus-rdbms-3.2.9.jar
oozie-sharelib-spark-4.2.0.2.3.4.0-3485.jar
spark-1.5.2.2.3.4.0-3485-yarn-shuffle.jar
spark-assembly-1.5.2.2.3.4.0-3485-hadoop2.7.1.2.3.4.0-3485.jar
spark-examples-1.5.2.2.3.4.0-3485-hadoop2.7.1.2.3.4.0-3485.jar
```

Environment: HDP 2.3.4, Spark 1.5.2, Oozie 4.2.0

What could be the problem?
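(A hedged diagnostic sketch, not from the original post: log which metastore the job actually sees. `sqlContext` and `log` are the same values as in the snippet above. If hive-site.xml never reaches the action's containers, the property comes back unset and the HiveContext is talking to a fresh local metastore rather than the cluster's.)

```scala
// Hedged diagnostic: did hive-site.xml reach this container?
// An unset value suggests the HiveContext fell back to a local metastore.
val metastoreUris = sqlContext.getConf("hive.metastore.uris", "<unset>")
log.info(s"hive.metastore.uris as seen by this job: $metastoreUris")
```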
12-11-2015
06:07 AM
1 Kudo
I see! They probably could have phrased the documentation better, IMHO.

> you will not want to hardcode master in the program, but rather launch the application with spark-submit and receive it there

The above quote from the documentation was never actually clear to me; I thought I had to "receive" the master URL by reading it in the code from some configuration or parameter and then setting the master myself. Thanks for clearing that up!
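(To make the docs' point concrete, here's a minimal sketch, assuming Spark 1.x; the object name is illustrative. The code sets only the app name, and `spark-submit --master yarn-cluster ...` supplies the master at launch time.)

```scala
import org.apache.spark.{SparkConf, SparkContext}

object MasterFromSubmit {
  def main(args: Array[String]): Unit = {
    // No setMaster() call here: the --master flag passed to spark-submit
    // is injected into the SparkConf, so the same jar runs unchanged
    // under yarn-client, yarn-cluster, or local[*].
    val conf = new SparkConf().setAppName("master-from-submit")
    val sc = new SparkContext(conf)
    sc.stop()
  }
}
```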
12-11-2015
03:22 AM
1 Kudo
@Guilherme Braccialli One thing about your code that got me curious, though, is how you instantiated your SparkContext. I've been following the Spark programming guide, which is why I set the app name and master on the SparkConf before initializing `sc`. Since your code doesn't set the master, does `SparkContext` pick it up from the `--master` argument of the submit command in that case?
12-11-2015
03:18 AM
1 Kudo
@Guilherme Braccialli I did look at your code, and it's an interesting approach. In my actual application, however, the SQL commands don't need to be parameterized, since the app performs ETL on a specific set of data. Still, I'll keep it in mind. I have been trying to come up with a utility mixin that provides wrapper methods around the calls to hiveContext.sql, so that all my other Spark apps that need it only have to provide the column and table names plus the WHERE conditions; see the sketch below.
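(A hedged sketch of that mixin idea; `SqlHelper` and `selectFrom` are illustrative names, not from the original post, and the `sqlContext` member is assumed to be supplied by whatever class mixes the trait in.)

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.hive.HiveContext

trait SqlHelper {
  // Supplied by the class or trait this is mixed into.
  val sqlContext: HiveContext

  /** Builds and runs a SELECT from table/column names and optional WHERE conditions. */
  def selectFrom(table: String,
                 columns: Seq[String] = Seq("*"),
                 where: Seq[String] = Nil): DataFrame = {
    val base = s"SELECT ${columns.mkString(", ")} FROM $table"
    val query =
      if (where.isEmpty) base
      else s"$base WHERE ${where.mkString(" AND ")}"
    sqlContext.sql(query)
  }
}
```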
12-11-2015
02:35 AM
1 Kudo
@Guilherme Braccialli That did the trick! 😃 I didn't notice that at first. I wasn't the one who set up our cluster, so I had no idea that the contents of those two files were different. It's a subtle thing, but it caused me a lot of trouble. Thank you very much!
12-11-2015
02:20 AM
@Guilherme Braccialli I just want to start by thanking you for your quick responses. I've been struggling with this problem for a while now, and I've also asked it on Stack Overflow, but no luck. As for /usr/hdp/current/spark-client/conf/hive-site.xml, the content is pretty much the same as yours:

```xml
<configuration>
<property>
<name>hive.metastore.uris</name>
<value>thrift://host.xxx.com:9083</value>
</property>
</configuration>
```
12-11-2015
02:07 AM
@Ana Gillan We're using 2.3.2, and Kerberos is disabled. @Jonas Straub I've updated the post with some of the simpler sample code I've used to try to test things out. Even a simple select statement gives me the same errors, though I can't be sure whether my use of the cake pattern could be causing some unwanted side effects.
12-11-2015
01:52 AM
@Guilherme Braccialli Thanks for your reply. I tried your suggestion of putting the --files parameter before --jars when submitting, but now I'm running into an exception saying the HiveMetastoreClient could not be instantiated. I'll update my post with the code and new stack trace.
12-10-2015
08:34 AM
log.txt (attached): uploading a copy of the log excerpt in a text file, because it won't format properly in the post.
12-10-2015
08:27 AM
2 Kudos
I have a Spark (version 1.4.1) application on HDP 2.3.2. It works fine in yarn-client mode. However, when running in yarn-cluster mode, none of my Hive tables can be found by the application.
I submit the application like so:

```
./bin/spark-submit \
  --class com.myCompany.Main \
  --master yarn-cluster \
  --num-executors 3 \
  --driver-memory 4g \
  --executor-memory 10g \
  --executor-cores 1 \
  --jars lib/datanucleus-api-jdo-3.2.6.jar,lib/datanucleus-rdbms-3.2.9.jar,lib/datanucleus-core-3.2.10.jar \
  /home/spark/apps/YarnClusterTest.jar \
  --files /etc/hive/conf/hive-site.xml
```
Here's an excerpt from the logs:

```
15/12/02 11:05:13 INFO hive.HiveContext: Initializing execution hive, version 0.13.1
15/12/02 11:05:14 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
15/12/02 11:05:14 INFO metastore.ObjectStore: ObjectStore, initialize called
15/12/02 11:05:14 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
15/12/02 11:05:14 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
15/12/02 11:05:14 INFO storage.BlockManagerMasterEndpoint: Registering block manager worker2.xxx.com:34697 with 5.2 GB RAM, BlockManagerId(1, worker2.xxx.com, 34697)
15/12/02 11:05:16 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
15/12/02 11:05:16 INFO metastore.MetaStoreDirectSql: MySQL check failed, assuming we are not on mysql: Lexical error at line 1, column 5. Encountered: "@" (64), after : "".
15/12/02 11:05:17 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
15/12/02 11:05:17 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
15/12/02 11:05:18 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
15/12/02 11:05:18 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
15/12/02 11:05:18 INFO metastore.ObjectStore: Initialized ObjectStore
15/12/02 11:05:19 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 0.13.1aa
15/12/02 11:05:19 INFO metastore.HiveMetaStore: Added admin role in metastore
15/12/02 11:05:19 INFO metastore.HiveMetaStore: Added public role in metastore
15/12/02 11:05:19 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
15/12/02 11:05:19 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr.
15/12/02 11:05:19 INFO parse.ParseDriver: Parsing command: SELECT * FROM streamsummary
15/12/02 11:05:20 INFO parse.ParseDriver: Parse Completed
15/12/02 11:05:20 INFO hive.HiveContext: Initializing HiveMetastoreConnection version 0.13.1 using Spark classes.
15/12/02 11:05:20 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=streamsummary
15/12/02 11:05:20 INFO HiveMetaStore.audit: ugi=spark ip=unknown-ip-addr cmd=get_table : db=default tbl=streamsummary
15/12/02 11:05:20 DEBUG myCompany.Main$: no such table streamsummary; line 1 pos 14
```

I basically run into the same "no such table" problem any time my application needs to read from or write to Hive tables.
Thanks in advance!
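(A hedged diagnostic sketch, not from the original post, that can help localize this: list the tables the application's HiveContext can actually see. In yarn-cluster mode the driver runs on a worker node, and if hive-site.xml never reaches it, the HiveContext quietly falls back to a fresh local Derby metastore that contains no tables. `sqlContext` and `log` stand for the application's HiveContext and logger.)

```scala
// Hedged diagnostic: which tables can this HiveContext actually see?
// An empty list in yarn-cluster mode usually means hive-site.xml was not
// shipped to the driver, so a local Derby metastore was created instead.
val visible = sqlContext.tableNames("default")
log.info(s"Tables visible in 'default': ${visible.mkString(", ")}")
```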
UPDATE:
I tried submitting the Spark application with the --files parameter placed before --jars, as per @Guilherme Braccialli's suggestion (presumably this matters because spark-submit treats everything after the application jar as arguments to the application itself, so a --files flag placed after the jar is never seen by spark-submit). Doing so now gives me an exception saying that the HiveMetastoreClient could not be instantiated.
spark-submit:

```
./bin/spark-submit \
--class com.myCompany.Main \
--master yarn-cluster \
--num-executors 3 \
--driver-memory 1g \
--executor-memory 11g \
--executor-cores 1 \
--files /etc/hive/conf/hive-site.xml \
--jars lib/datanucleus-api-jdo-3.2.6.jar,lib/datanucleus-rdbms-3.2.9.jar,lib/datanucleus-core-3.2.10.jar \
/home/spark/apps/YarnClusterTest.jar
```
code:

```scala
// core.scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.hive.HiveContext

trait Core extends java.io.Serializable {
/**
* This trait should be mixed in by every other class or trait that is dependent on `sc`
*
*/
val sc: SparkContext
lazy val sqlContext = new HiveContext(sc)
}
// yarncore.scala
import org.apache.spark.{SparkConf, SparkContext}

trait YarnCore extends Core {
/**
* This trait initializes the SparkContext with YARN as the master
*/
val conf = new SparkConf().setAppName("my app").setMaster("yarn-cluster")
val sc = new SparkContext(conf)
}
// main.scala
import org.apache.log4j.Logger

object Test {
  def main(args: Array[String]) {
    // initialize the Spark application
    // (sqlHelper and Transformer are traits defined elsewhere in the project)
    val app = new YarnCore // initializes the SparkContext in YARN mode
      with sqlHelper       // provides SQL functionality
      with Transformer     // provides UDFs for transforming the dataframes into the marts

    // initialize the logger
    val log = Logger.getLogger(getClass.getName)

    val count = app.sqlContext.sql("SELECT COUNT(*) FROM streamsummary")
    log.info(s"streamsummary has ${count} records")

    // shut down the Spark app
    app.sc.stop
  }
}
```
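(Connecting this to the advice higher up this page about not hardcoding the master: a hedged variant of the trait above, with `SubmitCore` as an illustrative name. Leaving out setMaster() lets spark-submit's --master flag decide the deploy mode, so the same build runs in both yarn-client and yarn-cluster mode.)

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hedged sketch: like YarnCore, but the master comes from spark-submit.
trait SubmitCore extends Core {
  val conf = new SparkConf().setAppName("my app") // no setMaster()
  val sc = new SparkContext(conf)
}
```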
exception:

```
15/12/11 09:34:55 INFO hive.HiveContext: Initializing execution hive, version 0.13.1
15/12/11 09:34:56 ERROR yarn.ApplicationMaster: User class threw exception: java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:346)
at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:117)
at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:165)
at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:163)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:170)
at com.epldt.core.Core$class.sqlContext(core.scala:13)
at com.epldt.Test$$anon$1.sqlContext$lzycompute(main.scala:17)
at com.epldt.Test$$anon$1.sqlContext(main.scala:17)
at com.epldt.Test$.main(main.scala:26)
at com.epldt.Test.main(main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:486)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1412)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:62)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2453)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2465)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:340)
... 14 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1410)
... 19 more
Caused by: java.lang.NumberFormatException: For input string: "1800s"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.parseInt(Integer.java:615)
at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1258)
at org.apache.hadoop.hive.conf.HiveConf.getIntVar(HiveConf.java:1211)
at org.apache.hadoop.hive.conf.HiveConf.getIntVar(HiveConf.java:1220)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:293)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:214)
... 19 more
```
Labels:
- Apache Hive
- Apache Spark
- Apache YARN