Member since: 07-01-2015
Posts: 460
Kudos Received: 78
Solutions: 43
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1346 | 11-26-2019 11:47 PM
 | 1305 | 11-25-2019 11:44 AM
 | 9486 | 08-07-2019 12:48 AM
 | 2184 | 04-17-2019 03:09 AM
 | 3499 | 02-18-2019 12:23 AM
09-11-2018
11:28 AM
But Hue does not have to be configured the same way as CM. Every component can have its truststore and keystore configured in a different path. Also, for example, Hue requires the certificate as a PEM file, while other components require JKS truststores and keystores.
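If you only have a JKS keystore, a minimal sketch of producing the PEM files Hue expects (the paths and file names below are placeholders, not taken from any specific cluster):

```
# Convert the JKS keystore to PKCS12 so openssl can read it
keytool -importkeystore \
  -srckeystore /opt/cloudera/security/jks/keystore.jks \
  -destkeystore /tmp/keystore.p12 \
  -deststoretype PKCS12

# Extract the certificate in PEM format (for Hue's ssl_certificate)
openssl pkcs12 -in /tmp/keystore.p12 -nokeys \
  -out /opt/cloudera/security/pem/host-cert.pem

# Extract the unencrypted private key (for Hue's ssl_private_key)
openssl pkcs12 -in /tmp/keystore.p12 -nocerts -nodes \
  -out /opt/cloudera/security/pem/host-key.pem
```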
09-11-2018
11:14 AM
1 Kudo
Yes, of course. Restart the scm server and the agents. Then two things can happen:
- everything falls apart - your agents will not be able to communicate with the scm server
- all ok - check your certificate with openssl; if it is still the old one, you are configuring the certificate in the wrong path. Check also your settings in /etc.
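On systemd hosts the restart looks roughly like this; the service names are the standard Cloudera Manager ones, and the agent restart has to run on every host:

```
# Restart the Cloudera Manager server (on the CM host)
sudo systemctl restart cloudera-scm-server

# Restart the Cloudera Manager agent (on each cluster host)
sudo systemctl restart cloudera-scm-agent
```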
09-11-2018
10:46 AM
You can try to get the server certificate via the openssl command: openssl s_client -connect <host> and verify whether the certificate is the new one or the old one. If it is the new one, then your browser or PC has the issue.
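A small sketch of that check; the host name is a placeholder and 7183 is the default TLS port of the Cloudera Manager Admin Console (use whatever port the service you are inspecting listens on):

```
# Print the subject, serial number and validity dates of the certificate the server presents
openssl s_client -connect cm-host.example.com:7183 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -serial -dates
```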
09-11-2018
10:44 AM
Is it failing immediately or during the execution? Do you see the job running in YARN? If yes, try to increase the map/reduce memory. If it fails immediately, increase the client heap further - but it is weird, 4g should be more than enough for a simple count.
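A hedged example of both knobs, assuming the count is submitted as a Hive/MapReduce job; the table name and the values are placeholders:

```
# Larger heap for the client JVM that submits the job (the part that fails "immediately")
export HADOOP_CLIENT_OPTS="-Xmx4g"

# If the job reaches YARN and dies there, raise the task container memory instead
hive -e "
  SET mapreduce.map.memory.mb=4096;
  SET mapreduce.reduce.memory.mb=4096;
  SET mapreduce.map.java.opts=-Xmx3276m;
  SET mapreduce.reduce.java.opts=-Xmx3276m;
  SELECT COUNT(*) FROM my_table;
"
```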
09-11-2018
10:41 AM
The canary test does not have anything to do with the failover. It just reports the health status to Cloudera Manager. The actual failover in an HA scenario is initiated by the Failover Controllers - you probably have two NameNodes, 3 (or 5) JournalNodes and the Failover Controllers. When the Active NN goes down, the FC brings the Standby NN into Active mode.
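To see which NameNode is currently active, you can query the HA state directly; nn1 and nn2 are placeholders for the service IDs defined in your hdfs-site.xml:

```
# Show the HA state (active / standby) of each NameNode
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```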
09-11-2018
10:33 AM
As far as I know there is no way to extend the validity of a TLS certificate, so if you created a new certificate and placed it into a truststore, make sure the old one is removed.
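A hedged keytool sketch for that cleanup; the truststore path and the aliases are placeholders:

```
# List the entries to find the alias of the old certificate
keytool -list -v -keystore /opt/cloudera/security/jks/truststore.jks

# Remove the old entry, then import the new certificate under a fresh alias
keytool -delete -alias old-cert -keystore /opt/cloudera/security/jks/truststore.jks
keytool -importcert -alias new-cert -file new-cert.pem \
  -keystore /opt/cloudera/security/jks/truststore.jks
```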
09-11-2018
04:46 AM
Hi,
I noticed the following error in the Hue logs:
[11/Sep/2018 13:35:59 +0200] kerberos_ ERROR handle_mutual_auth(): Mutual authentication unavailable on 404 response
The Hue service works mostly as expected but sometimes fails, without logging any useful information (except this).
Is this error related to some bad configuration of the Hue services (two running in HA mode)?
Thanks
Labels:
- Cloudera Hue
- Kerberos
09-10-2018
01:15 AM
Regarding "soft limit exceeded": I had the same issue; I suppose you are not running on the latest CDH5 version (5.15). The memory on the Kudu tablet servers (in my case) was not released, even when the ingestion stopped and no workload was running against the Kudu cluster at all. I was told that newer versions of Kudu should handle memory allocation better. You can find the detailed memory consumption in the tablet server's UI. My solution was to decrease the number of tablets and the number of tables in Kudu.
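If you want to see where that memory sits, the tablet server web UI exposes a per-component breakdown; the host is a placeholder and 8050 is the default Kudu tablet server web UI port:

```
# Dump the memory trackers of one tablet server
curl http://tserver-host.example.com:8050/mem-trackers
```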
09-10-2018
12:58 AM
Yes, you can do it in multiple ways. For example you can use GROUP BY or DISTINCT. If you want to find duplicates on a subset of the columns (i.e. find all rows where customer_id is duplicated), I would recommend using GROUP BY.
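A hedged example of the GROUP BY approach through impala-shell; the table and column names are placeholders:

```
# List every customer_id that occurs more than once, together with its count
impala-shell -q "
  SELECT customer_id, COUNT(*) AS cnt
  FROM orders
  GROUP BY customer_id
  HAVING COUNT(*) > 1;
"
```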
09-07-2018
12:14 AM
Hi, thanks for the clarification. I am working on CDH 5.13+. Maybe I was not exact, but my experience is that Impala very often can't run a query because it requires more memory than it can get. It is true that a running query rarely fails on "memory limit" if the pools are set up in a non-overlapping way (i.e. the total memory of the Impala daemons divided among, for example, 3 pools such as M, L, XL, and in each pool you only admit as many queries as fit into the pool - e.g. XL has a 9GB limit, the pool size is 27GB, and you allow 3 queries to run at a time).

But my complaint was that, in general, many times a query (with up-to-date stats on the tables) simply does not start to run in the corresponding pool (for example pool L with a 2GB limit) because the planner thinks it will need more memory. This is what I was referring to: as an end user you can't trade this for a disk spill - you can't say "hey, I know the sum of the operators in your plan may be above 2GB, but please run it in this pool with the 2GB limit, and if you hit the wall, start to spill (i.e. trade it off for performance)". So my experience is that the user has to "upgrade" the query to pool XL and run it there. But then it is like on a highway - everybody is driving in the left lane (or right, for UK readers). And the funny thing is that after the query finishes in the XL pool, you examine the memory allocations and it would have fit into the 2GB limit.

And @Tim Armstrong thank you very much for all the efforts. I have quite some experience and I know the history of Impala, so thumbs up for all the improvements done in the last versions - looking forward to CDH6!
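One knob that comes close to this trade-off is the per-query MEM_LIMIT option: when it is set, admission control uses it instead of the planner's estimate, and the query spills to disk once it reaches the limit. A hedged sketch; the table and column names are placeholders:

```
# Cap the query at 2 GB per node so it is admitted into the smaller pool
# based on this value rather than the planner's estimate; spillable operators
# write to disk if they hit the cap.
impala-shell -q "
  SET MEM_LIMIT=2gb;
  SELECT customer_id, COUNT(*)
  FROM orders
  GROUP BY customer_id;
"
```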