Member since: 07-13-2016
Posts: 12
Kudos Received: 0
Solutions: 0
03-27-2017 10:45 PM
Looks like a jar is missing from the classpath. Just realized this is more related to Oozie than Hive, so I don't have more insight here, sorry. This might help: https://community.cloudera.com/t5/Batch-Processing-and-Workflow/Hive-action-failing-in-Oozie-workflow/td-p/24328
03-27-2017 10:35 PM
@rakesh kumar Kerberos is for authentication, while doAs is for authorization. When doAs is set to true, queries against the Metastore and jobs executed on the Hadoop cluster run as the end user rather than as the "hive" user. For example, HDFS permission checks during job execution are performed against the end user.
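For reference, a minimal sketch of how impersonation is typically enabled, assuming HiveServer2 picks it up from hive-site.xml (the description text here is illustrative):

<!-- hive-site.xml: when true, HiveServer2 runs queries as the connecting end user instead of "hive" -->
<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
  <description>Execute Hive operations as the user making the call</description>
</property>

With this set to false, all jobs and Metastore calls run as the "hive" service user, and HDFS permission checks are made against that user instead.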
03-27-2017 10:07 PM
Hi @Bala Vignesh N V Setting hive.execution.engine to spark will kick off Spark jobs for the SQL query, but that is not supported by Hortonworks. A better approach is to use the Spark Thrift Server plus Beeline to run queries: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_installing_manually_book/content/starting_sts.html You can create Hive tables and execute queries (a Spark job is submitted under the hood), and the result set is generated via the SparkContext. Is that what you need?
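A rough sketch of that setup on an HDP node (the path, port, and memory settings below are examples, not requirements):

# start the Spark Thrift Server on YARN (run as the spark user)
cd /usr/hdp/current/spark-client
./sbin/start-thriftserver.sh --master yarn-client \
  --executor-memory 512m \
  --hiveconf hive.server2.thrift.port=10015

# then connect with Beeline and run SQL as usual
beeline -u jdbc:hive2://localhost:10015/default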
03-27-2017 06:41 PM
Regarding how to specify the YARN queues, this link may be useful: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_yarn_resource_mgt/content/setting_up_queues.html
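For illustration, a minimal capacity-scheduler.xml fragment defining two queues under root (the queue names and capacities are made up for the example):

<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>default,etl</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>70</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.etl.capacity</name>
  <value>30</value>
</property>

Jobs can then target a queue with, e.g., mapreduce.job.queuename=etl, or the equivalent setting for the engine in use.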
07-13-2016 09:11 PM
Hi, it looks like the highlighted parts below conflict: both set the SPARK_HOME variable, but to different values. Do you have any ideas?

link: http://hortonworks.com/hadoop-tutorial/a-lap-around-apache-spark/

3. Set JAVA_HOME and SPARK_HOME:

Make sure that you set JAVA_HOME before you launch the Spark Shell or Thrift Server.

export JAVA_HOME=<path to JDK 1.8>

The Spark install unpacks the Spark binaries to /usr/hdp/2.3.4.1-10/spark. Set the SPARK_HOME variable to this directory:

export SPARK_HOME=/usr/hdp/2.3.4.1-10/spark/

4. Create hive-site in the Spark conf directory:

As user root, create the file SPARK_HOME/conf/hive-site.xml. Edit the file to contain only the following configuration setting:

<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <!-- Make sure that <value> points to the Hive Metastore URI in your cluster -->
    <value>thrift://sandbox.hortonworks.com:9083</value>
    <description>URI for client to contact metastore server</description>
  </property>
</configuration>

Set SPARK_HOME

If you haven't already, make sure to set SPARK_HOME before proceeding:

export SPARK_HOME=/usr/hdp/current/spark-client
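One quick way to check whether the two values actually conflict: on HDP, /usr/hdp/current/spark-client is normally a symlink into the versioned install, so the two exports may resolve to the same place.

# show what the "current" path points at; if it resolves to
# /usr/hdp/2.3.4.1-10/spark, both exports name the same install
ls -l /usr/hdp/current/spark-client
readlink -f /usr/hdp/current/spark-client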
Labels:
- Apache Spark