Member since: 03-23-2015
Posts: 1288
Kudos Received: 114
Solutions: 98
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4350 | 06-11-2020 02:45 PM |
| | 5956 | 05-01-2020 12:23 AM |
| | 3777 | 04-21-2020 03:38 PM |
| | 4050 | 04-14-2020 12:26 AM |
| | 3028 | 02-27-2020 05:51 PM |
06-12-2019 11:25 PM
Hi @csegokul,

It sounds odd that Hue behaves this way; I have never heard of needing to log out and back into Hue to get Hive results, so something is not right. Did Hue report any issues or errors in the interface?

Cheers,
Eric
06-12-2019 11:21 PM
Hi,

A couple of questions:

1. Have you checked the HS2 log to see whether it complained about anything, or whether beeline reached HS2 at all? I suspect it did not, but I want to be sure.
2. Based on the code here: https://github.com/cloudera/hive/blob/cdh6.1.0/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L1035-L1048 it looks like beeline failed to get the connection string. Have you tried quoting the connection string, just in case?

beeline -u 'jdbc:hive2://hostname.domain.dom:10000'

Cheers,
Eric
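The quoting tip above can be sketched as follows. The `/default;ssl=true` suffix is an assumed example setting (not from the original post), added only to show why quoting matters:

```shell
# A minimal sketch: if the JDBC URL carries session settings after a ';',
# an unquoted URL is split by the shell and beeline never sees the full
# string. Single quotes keep it as one argument.
url='jdbc:hive2://hostname.domain.dom:10000/default;ssl=true'
printf '%s\n' "$url"
# beeline -u "$url"   # hypothetical invocation against your HiveServer2
```

Without the quotes, the shell would treat everything after the `;` as a new command.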
05-21-2019 08:54 PM
Hi,

A couple of questions:

1. Why do you need to use impyla to connect to Hive? Impyla is designed for Impala, not Hive.
2. Is port 9443 the correct port for Hive?
3. Have you checked the HiveServer2 log to see what is reported there?

Cheers,
Eric
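For reference, a hedged sketch of the usual default ports; these are assumptions, so verify them against your own cluster's configuration, and the hostname below is a placeholder:

```shell
# Typical defaults (assumptions; check your cluster's configuration):
#   HiveServer2 Thrift port  : 10000  (beeline / JDBC clients)
#   Impala HS2-protocol port : 21050  (impyla and ODBC/JDBC drivers)
# Port 9443 does not match either of these defaults.
# beeline -u 'jdbc:hive2://hive-host.domain.dom:10000'   # hypothetical host
```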
05-18-2019 10:29 PM
No, it was just one insert, and it succeeded when I repeated it, so I am not able to reproduce it and there is no pattern. This is CDH 5.15. Can you give me a detailed hint on how to get the full stack trace of the failed fragment (from the Impala daemon?)? I don't have the query profile any more (already deleted), but as far as I remember, one of the fragments (out of 10) waited almost 2 hours on the HDFS sink while the others finished within a minute. Maybe it is an HDFS issue?
05-18-2019 05:02 PM
Hi Tomas,

This message is normal behaviour and is expected when the DataNode's security key manager rolls its keys. Clients print it whenever they use an older cached key, but after printing it the client re-fetches the new key and the job completes. Since Impala is a client of HDFS, there is no cause for concern about this message; it is part of normal operation. We also see it in HBase logs, which is again normal.

Hope the above helps.

Cheers,
Eric
05-13-2019 01:44 AM
I checked the log, but nothing indicates a server-side issue; from the server side it looks like the client cancelled. I have tested the ODBC driver using a client app (C#) and things were fine, so I'm now trying to see whether something needs to be done on the SQL Server side. Thanks a lot for your support.
05-11-2019 06:09 PM
Hi,

It looks like you are running Spark in cluster mode and your ApplicationMaster is running out of memory. In cluster mode, the driver runs inside the AM; I can see you have 110 GB of driver memory and 12 GB of executor memory. Have you tried increasing both to see if that helps? I cannot say by how much, but increase them gradually and keep trying. That said, 110 GB of driver memory seems like a lot, so I am wondering what kind of dataset this Spark job is processing. How large is the volume?

Cheers,
Eric
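A hedged sketch of the knobs involved. The flag values and application name below are placeholders to illustrate which settings to tune, not recommendations for this workload:

```shell
# In cluster mode the driver runs inside the ApplicationMaster, so the AM
# container must fit the driver memory plus its overhead. Tune driver and
# executor memory (and their overheads) gradually, as suggested above.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 16g \
  --executor-memory 16g \
  --conf spark.driver.memoryOverhead=2g \
  --conf spark.executor.memoryOverhead=2g \
  your_app.jar
```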
05-10-2019 06:51 AM
We do not want to keep the old partitions. We just want to re-partition the data using the timestamp values; the data currently exists only partitioned by the string value.
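One hedged way to do such a rewrite is Hive dynamic partitioning; the table and column names below are hypothetical, since the post does not name the actual schema:

```shell
# Hypothetical sketch: copy rows from a table partitioned by a string into
# a table partitioned by a value derived from a timestamp column, letting
# Hive create the new partitions dynamically.
beeline -u 'jdbc:hive2://hostname.domain.dom:10000' -e "
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE events_by_day PARTITION (day)
SELECT col_a, col_b, to_date(event_ts) AS day
FROM events_by_string;
"
```

The old string-partitioned table can then be dropped once the new layout is verified.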
05-10-2019 04:09 AM
1 Kudo
Hi Ajay,

As I mentioned in the previous post: Hue holds the query handle open so that it can paginate results, and it only closes the handle after the user navigates away from the Impala page. If the user stays on the page, the handle is kept open and the query is considered in flight. This is intentional and part of the design. If you do not want it to stay open for a long time, you need to set idle_session_timeout at the Impala level.

Cheers,
Eric
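For reference, a sketch of the relevant Impala daemon startup flag; the value is an assumption, so set it to whatever fits your workload:

```shell
# impalad startup flag (a configuration fragment, not a runnable command):
# close sessions that have been idle for longer than N seconds. Handles
# that Hue keeps open count as idle once the user stops interacting.
--idle_session_timeout=3600
```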