Member since
09-25-2015
Posts: 356
Kudos Received: 382
Solutions: 62
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2439 | 11-03-2017 09:16 PM
 | 1917 | 10-17-2017 09:48 PM
 | 3818 | 09-18-2017 08:33 PM
 | 4510 | 08-04-2017 04:14 PM
 | 3458 | 05-19-2017 06:53 AM
04-11-2017
05:03 PM
3 Kudos
Application IDs are provided by the ResourceManager to the client through the ApplicationSubmissionContext. More information can be found here: https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html
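For reference, a YARN application ID has the form application_&lt;clusterTimestamp&gt;_&lt;sequenceNumber&gt;. A minimal sketch (plain Java, no Hadoop dependencies; the sample ID below is made up) showing what the two components mean:

```java
// Sketch: pull apart a YARN application ID of the form
// application_<clusterTimestamp>_<sequenceNumber>.
// The sample ID is hypothetical, for illustration only.
public class AppIdParse {
    public static void main(String[] args) {
        String appId = "application_1491849200000_0007";
        String[] parts = appId.split("_");
        long clusterTimestamp = Long.parseLong(parts[1]); // ResourceManager start time (epoch millis)
        int sequence = Integer.parseInt(parts[2]);        // per-RM-run application counter
        System.out.println(clusterTimestamp + " " + sequence); // → 1491849200000 7
    }
}
```

The cluster timestamp ties the ID to a particular ResourceManager incarnation, which is why IDs stay unique across RM restarts.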
04-07-2017
12:50 AM
1 Kudo
This is a little different from your situation: that user was not using Ambari, and I believe his service was not actually running. In your case, check the Ambari Hive Summary page to see whether your Hive Metastore service is up; if it is not, start it from Ambari.
04-07-2017
12:34 AM
You can use the "Refresh configs" option under the Service Actions dropdown on the Flume configs page. This only refreshes the configs and does not restart the Flume service. The Flume agent process periodically polls the agent config file for changes and reloads a changed config file automatically.
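For example, with an agent named a1 started against conf/flume.conf, edits saved to that file are picked up on the agent's next poll without restarting the service. The agent, source, channel, and sink names below are illustrative, not taken from your setup:

```properties
# Illustrative Flume agent config; all component names are assumptions.
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.sources.r1.channels = c1

a1.channels.c1.type = memory

a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1
```

Changing, say, the sink type here and saving the file would be reloaded by the running agent on its next poll.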
04-06-2017
11:52 PM
2 Kudos
A simple Spark1 Java application that lists the tables in the Hive Metastore:

import org.apache.spark.SparkContext;
import org.apache.spark.SparkConf;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.hive.HiveContext;

public class SparkHiveExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("SparkHive Example");
        SparkContext sc = new SparkContext(conf);
        HiveContext hiveContext = new HiveContext(sc);
        DataFrame df = hiveContext.sql("show tables");
        df.show();
    }
}

Note that Spark pulls metadata from the Hive metastore and uses HiveQL for parsing queries, but query execution itself happens in the Spark execution engine.
04-05-2017
11:54 PM
1 Kudo
The presence of the hive principal in the URL tells the JDBC driver that the connection is being made to a secure (Kerberos) cluster. Internally, the driver uses this and the other connection parameters to establish the transport. In dynamic service discovery mode, some parameters, including this one, are stored in ZooKeeper and fetched from there, so they do not need to appear in the client connection URL.
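For illustration, the two URL styles look roughly like this; the host names, port, ZooKeeper namespace, and Kerberos realm below are placeholders, not values from any real cluster:

```java
// Sketch of the two HiveServer2 JDBC URL styles.
// All hosts, ports, and the realm are placeholder assumptions.
public class HiveJdbcUrls {
    public static void main(String[] args) {
        // Direct connection: the hive principal must appear in the URL.
        String direct = "jdbc:hive2://hs2host:10000/default;"
                + "principal=hive/_HOST@EXAMPLE.COM";
        // Dynamic discovery: the principal is fetched from ZooKeeper,
        // so the client URL only needs the ZooKeeper ensemble.
        String discovery = "jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;"
                + "serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2";
        System.out.println(direct);
        System.out.println(discovery);
    }
}
```

Note how the discovery-mode URL carries no principal parameter at all; the driver picks it up from the ZooKeeper znode for the chosen HiveServer2 instance.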
04-05-2017
04:52 PM
3 Kudos
On the Hive CLI, it depends on the execution engine: for "mr" use "set mapred.job.queue.name=<queuename>;" and for "tez" use "set tez.queue.name=<queuename>;".
04-03-2017
04:31 PM
2 Kudos
Can you share the query? You can work around this by setting hive.execution.engine=mr.
04-02-2017
09:23 PM
1 Kudo
No, HIVE-11217 was about improving the error message. "CAST(NULL AS bigint)" should work irrespective of this fix. Can you share your query?
04-01-2017
02:35 AM
2 Kudos
Are you using NULL in the select clause? In that case you might be encountering HIVE-11217. Per the JIRA there is no fix, but the error message was improved to something like: "CREATE-TABLE-AS-SELECT creates a VOID type, please use CAST to specify the type"
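As a hypothetical illustration (the table and column names are made up, and `src` stands for any existing table), a CTAS selecting a bare NULL hits this error, while casting the NULL to a concrete type works:

```sql
-- Fails: a bare NULL has type VOID, which CTAS cannot materialize
CREATE TABLE t_bad AS SELECT NULL AS col FROM src;

-- Works: the cast gives the column a concrete type (bigint)
CREATE TABLE t_ok AS SELECT CAST(NULL AS bigint) AS col FROM src;
```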
03-30-2017
10:29 PM
2 Kudos
Hive requires the MySQL JDBC jar under HIVE_HOME/lib (/usr/hdp/current/hive-client/lib/). This should generally be done on all hosts that have Hive installed (at minimum, on the Hive host and on the host from which you run the schema tool).