Member since: 04-06-2016
Posts: 5
Kudos Received: 0
Solutions: 0
10-14-2016
12:05 PM
Using Hive on Tez with the HDP 2.5 sandbox. I submit a query (select * from ...) from beeline and kill it almost immediately (after ~1s). The application is killed, and the following log line appears: "Could not connect to AM, killing session via YARN, sessionName=HIVE-xxx, applicationId=application_xxx". My problem is that another container is started immediately with the same sessionName, and it begins to execute my previously cancelled query. This does not happen if I kill the application after ~15s; in that case the application's final status is SUCCEEDED (killedDags=1).
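For reference, a minimal sketch of an approximate JDBC equivalent of this reproduction, assuming the Hive JDBC driver is on the classpath; the JDBC URL, credentials and table name are placeholders, and here the statement is cancelled from the client rather than killing the YARN application directly:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class CancelQueryRepro {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details for the HDP 2.5 sandbox.
        String url = "jdbc:hive2://localhost:10000/default";
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        try (Connection conn = DriverManager.getConnection(url, "hive", "");
             Statement stmt = conn.createStatement()) {
            // Cancel the statement roughly one second after submission,
            // mimicking the quick kill from beeline described above.
            Thread canceller = new Thread(() -> {
                try {
                    Thread.sleep(1000);
                    stmt.cancel();
                } catch (InterruptedException | SQLException e) {
                    e.printStackTrace();
                }
            });
            canceller.start();
            try {
                stmt.executeQuery("SELECT * FROM some_large_table"); // placeholder table
            } catch (SQLException e) {
                System.out.println("Query was cancelled: " + e.getMessage());
            }
            canceller.join();
        }
    }
}
```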
08-19-2016
12:12 PM
Nope, unfortunately all other options are disabled for security reasons.
08-19-2016
11:57 AM
I'd like to bulk insert records from my client into a Hive table via JDBC (to be clear: the data is not on the Hadoop cluster but on the local file system of the client, and I can't upload it to the cluster or to HDFS). The best I could achieve so far is calling insert into ... values over Hive on Tez, which gives ~10,000 records/sec throughput, but that is still very poor. I've also checked Hive Streaming Data Ingest, but for that I would need to connect to the Hive Metastore, which is not an option in my case; unfortunately I can only connect to the HiveServer2 port (10000). Any other ideas?
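For illustration, a minimal sketch of the multi-row insert into ... values approach described above, going only through HiveServer2 over JDBC; the JDBC URL, credentials, table name, row format and batch size are placeholders, and in a real client the tuples would be read from the local file:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class HiveJdbcBulkInsert {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; only the HiveServer2 port (10000) is assumed reachable.
        String url = "jdbc:hive2://localhost:10000/default";
        int rowsPerStatement = 1000; // tuples per INSERT INTO ... VALUES statement
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        // Stand-in for rows read from the client's local file system.
        List<String> tuples = new ArrayList<>();
        for (int i = 0; i < 5000; i++) {
            tuples.add(String.format("(%d, 'value_%d')", i, i));
        }

        try (Connection conn = DriverManager.getConnection(url, "hive", "");
             Statement stmt = conn.createStatement()) {
            StringBuilder values = new StringBuilder();
            int count = 0;
            for (String tuple : tuples) {
                if (values.length() > 0) {
                    values.append(", ");
                }
                values.append(tuple);
                if (++count % rowsPerStatement == 0) {
                    // One Tez job per statement, but it covers many rows at once.
                    stmt.execute("INSERT INTO my_table VALUES " + values);
                    values.setLength(0);
                }
            }
            if (values.length() > 0) {
                stmt.execute("INSERT INTO my_table VALUES " + values);
            }
        }
    }
}
```

Grouping many tuples into each statement amortizes the per-query Tez overhead; whether it improves on the ~10,000 records/sec figure above depends on row width and cluster load.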
04-07-2016
10:47 AM
How can I upgrade a custom Ambari service independently of HDP and the other services? Let's say the third-party service has a new version that runs on the same HDP version. It seems that on the Ambari Web UI I can only upgrade/downgrade whole HDP versions. I've checked the code for other services (like HBase), but as far as I can tell they are only upgraded when the whole cluster is upgraded. Is there an option for this, similar to Cloudera parcels? Or should I manually copy the new service version to the cluster, stop and delete the original, and add the new version on the Web UI?
04-06-2016
02:49 PM
I'd like to create a custom Ambari service that modifies the HIVE_CLASSPATH env variable and the mapreduce.application.classpath Hadoop property (i.e. I want Hive and Hadoop to use these extended classpaths). Is it possible to do that in Ambari? Where can I find some examples of that? Thanks