Member since: 12-30-2015
Posts: 164
Kudos Received: 29
Solutions: 10
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 19124 | 01-07-2019 06:17 AM |
| | 967 | 12-27-2018 07:28 AM |
| | 3421 | 11-26-2018 10:12 AM |
| | 1039 | 11-16-2018 12:15 PM |
| | 3109 | 10-22-2018 09:31 AM |
09-26-2018
07:04 AM
The above error went away once we added HADOOP_CONF_DIR and YARN_CONF_DIR to the .bashrc file in the user's home directory.
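For reference, a minimal sketch of the .bashrc entries; /etc/hadoop/conf is the usual HDP client-configuration directory and is an assumption here, so adjust it to your cluster:

```bash
# Assumed path: /etc/hadoop/conf is the typical HDP client-config directory.
export HADOOP_CONF_DIR=/etc/hadoop/conf
export YARN_CONF_DIR=/etc/hadoop/conf
```

Run `source ~/.bashrc` (or log in again) so spark-submit picks the variables up.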
09-26-2018
05:56 AM
Exception in thread "main" java.lang.Exception: When running with master 'yarn' either HADOOP_CONF_DIR or YARN_CONF_DIR must be set in the environment.
at org.apache.spark.deploy.SparkSubmitArguments.validateSubmitArguments(SparkSubmitArguments.scala:288)
at org.apache.spark.deploy.SparkSubmitArguments.validateArguments(SparkSubmitArguments.scala:248)
at org.apache.spark.deploy.SparkSubmitArguments.<init>(SparkSubmitArguments.scala:120)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:130)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
[spark@ip-10-0-10-76 ~]$
Labels:
- Apache Hadoop
- Apache Spark
- Apache YARN
09-25-2018
12:17 PM
Hi, the article below may help you: https://community.hortonworks.com/articles/91925/how-to-enable-gc-logs-for-hiveserver2-metastore-we.html
09-25-2018
12:10 PM
1 Kudo
Can you please check whether the Ambari server is running? If the server is running, check whether its port (8080) is listening. Then try to restart the ambari-server and check again.
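A quick sketch of those checks, assuming shell access to the Ambari host and the default port 8080:

```bash
ambari-server status               # is the Ambari server process running?
netstat -tlnp | grep 8080          # is the default Ambari port listening?
ambari-server restart              # restart, then check the UI again
```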
09-24-2018
01:01 PM
Hi @Felix Albani, I have configured the required properties in the custom spark-defaults config, but Spark is still not picking those properties up; using the syntax above worked for me.
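For anyone following along, a sketch of the per-job override that worked here, as opposed to a cluster-wide spark-defaults entry; the property value, class, and jar names are placeholders:

```bash
# Placeholder class and jar -- substitute your own values.
spark-submit \
  --conf spark.executor.memory=4g \
  --class com.example.MyApp myapp.jar
```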
09-24-2018
11:42 AM
Hello Experts, I am facing the below issue while loading Hive from Spark:

scala> import com.hortonworks.hwc.HiveWarehouseSession
import com.hortonworks.hwc.HiveWarehouseSession
scala> import com.hortonworks.hwc.HiveWarehouseSession._
import com.hortonworks.hwc.HiveWarehouseSession._
scala> val hive1 = HiveWarehouseSession.session(spark).build()
java.util.NoSuchElementException: spark.sql.hive.hiveserver2.jdbc.url
at org.apache.spark.sql.internal.SQLConf$anonfun$getConfString$2.apply(SQLConf.scala:1571)
at org.apache.spark.sql.internal.SQLConf$anonfun$getConfString$2.apply(SQLConf.scala:1571)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.internal.SQLConf.getConfString(SQLConf.scala:1571)
at org.apache.spark.sql.RuntimeConfig.get(RuntimeConfig.scala:74)
at com.hortonworks.spark.sql.hive.llap.HWConf.getConnectionUrlFromConf(HWConf.java:124)
at com.hortonworks.spark.sql.hive.llap.HWConf.getConnectionUrl(HWConf.java:103)
at com.hortonworks.spark.sql.hive.llap.HiveWarehouseBuilder.build(HiveWarehouseBuilder.java:97)
... 51 elided

I have configured Spark LLAP using the following URL: https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.0/integrating-hive/content/hive_configure_a_spark_hive_connection.html Is there anything I am missing in this setup? Any help is much appreciated.
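For what it's worth, a minimal sketch of supplying the property named in the exception at launch time; the JDBC URL below is a placeholder for the HiveServer2 Interactive URL of your own cluster:

```bash
# spark.sql.hive.hiveserver2.jdbc.url is the property the exception reports
# as missing; the URL below is a placeholder for your LLAP endpoint.
spark-shell --conf spark.sql.hive.hiveserver2.jdbc.url="jdbc:hive2://your-llap-host:10500/"
```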
Labels:
- Apache Spark
09-24-2018
09:37 AM
I am using HDP 3.0 with Hive LLAP. I have pasted the code and output below:

scala> import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.hive.HiveContext
scala> val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
warning: there was one deprecation warning; re-run with -deprecation for details
sqlContext: org.apache.spark.sql.hive.HiveContext = org.apache.spark.sql.hive.HiveContext@19e03398
scala> sqlContext.sql("show databases").show()
+------------+
|databaseName|
+------------+
|     default|
+------------+

In the Hive shell I am able to see all the databases:

0: jdbc:hive2://ip-10-0-10-76.amer.o9solution> show databases;
INFO : Compiling command(queryId=hive_20180924093400_b66c3d0c-8e76-4a16-aed7-771fcae43225): show databases
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:database_name, type:string, comment:from deserializer)], properties:null)
INFO : Completed compiling command(queryId=hive_20180924093400_b66c3d0c-8e76-4a16-aed7-771fcae43225); Time taken: 0.003 seconds
INFO : Executing command(queryId=hive_20180924093400_b66c3d0c-8e76-4a16-aed7-771fcae43225): show databases
INFO : Starting task [Stage-0:DDL] in serial mode
INFO : Completed executing command(queryId=hive_20180924093400_b66c3d0c-8e76-4a16-aed7-771fcae43225); Time taken: 0.005 seconds
INFO : OK
+---------------------+
|    database_name    |
+---------------------+
| default             |
| information_schema  |
| rh_ml               |
| schema_7539         |
| sys                 |
+---------------------+

Any help to resolve this issue? Thank you in advance.
Labels:
- Apache Hive
- Apache Spark
09-17-2018
10:18 AM
Could you please check in the ResourceManager which user and application are launching this job?
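If the ResourceManager UI is not handy, the same information is available from the YARN CLI; a quick sketch:

```bash
# Lists running applications along with the submitting user and queue.
yarn application -list -appStates RUNNING
```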
09-17-2018
06:42 AM
Hi @Sudharsan Ganeshkumar, -m denotes the number of mappers launched to run your query.
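For illustration, assuming this refers to Sqoop's -m/--num-mappers option, a sketch with placeholder connection details:

```bash
# -m 4 splits the import across four parallel map tasks; the JDBC URL,
# table, and username below are placeholders.
sqoop import \
  --connect jdbc:mysql://db-host/sales \
  --table orders \
  --username etl -P \
  -m 4
```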
09-17-2018
06:26 AM
Hi @Anurag Mishra, it seems Tez is unable to launch the session. First, kill all the running applications and retry launching the job (see the sketch below). If that doesn't work, tune the Tez configuration settings using the article below: https://community.hortonworks.com/articles/14309/demystify-tez-tuning-step-by-step.html
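A sketch of that first step; the application ID below is a placeholder, so take the real IDs from the -list output:

```bash
# Find the running Tez sessions, then kill them by ID before retrying.
yarn application -list -appTypes TEZ
yarn application -kill application_1537164700000_0001   # placeholder ID
```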