Member since: 03-11-2020
Posts: 186
Kudos Received: 28
Solutions: 40
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 455 | 11-07-2024 08:47 AM |
| | 309 | 11-07-2024 08:36 AM |
| | 424 | 06-18-2024 01:34 AM |
| | 230 | 06-18-2024 01:25 AM |
| | 499 | 06-18-2024 01:16 AM |
12-23-2022
08:59 PM
@wert_1311 Could you please confirm whether you are using Ranger? If yes, check the Ranger audits. I would also suggest running tail -f on the Hive server logs while inserting data to get more information.
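For reference, a minimal sketch of that check (the log path is an assumption and varies by install; adjust it to your environment):
# on the HiveServer2 host, follow the log while the INSERT runs (path may differ)
tail -f /var/log/hive/hiveserver2.log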
12-23-2022
08:56 PM
1 Kudo
@mabilgen
Cause
The root cause is a condition in which the JournalNode sees that the promised epoch from the NameNode is not the newest; that is, the RPC request from the NameNode carries a lower epoch value than the locally stored promised epoch. The JournalNode therefore throws the warning "IPC's epoch 155 is less than the last promised epoch 156" and rejects this RPC request from the NameNode to avoid split-brain. It will only accept RPC requests from the NameNode that sends the newest epoch. This can be triggered by various conditions in the environment:
1.) A large job.
2.) A network issue.
3.) Insufficient resources on the node.
Instructions
To resolve the issue, first verify that the network in the cluster is stable. It is also worth checking whether the issue recurs on one specific NameNode; if so, there could be a network hardware or resource issue on that host. A few tunables (see the sketch below for one way to apply the first):
a.) Raise dfs.datanode.max.transfer.threads from 4K to 16K and observe the performance of the cluster.
b.) Raise the NameNode heap size if there is too much GC activity or overly long full GCs.
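One way to apply tunable (a), shown here only as a sketch; in a managed cluster you would set this through the cluster manager's HDFS configuration rather than by editing files directly:
<!-- hdfs-site.xml: raise the DataNode transfer-thread limit from the 4K range to 16K -->
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>16384</value>
</property>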
12-23-2022
08:48 PM
@bananaman This happens when the search filter is not correct, which looks like the case here. Try changing the search filter to sAMAccountName=*. If this does not help, attach the usersync logs so we can check further.
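For illustration, the filter is carried in the usersync configuration; a minimal sketch, assuming the standard Ranger usersync property name (verify it against your release):
# Ranger Usersync / ranger-ugsync-site (property name assumed)
ranger.usersync.ldap.user.searchfilter=sAMAccountName=*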
12-21-2022
09:41 PM
@kvbigdata What happens if you set the following under Cloudera Manager > Hive > Configuration > Service Monitor Client Config Overrides > Add:
Name: hive.metastore.client.socket.timeout
Value: 600
(See the sketch after the steps below for the equivalent hive-site.xml form.)
We have provided the information below regarding the current state of the canary JIRA: https://my.cloudera.com/knowledge/quotError--The-Hive-Metastore-canary-failed-to-create-a?id=337839 The only workaround is to disable the canary test, which you have already done, and that will not harm anything on your cluster. Currently the workaround is to disable the canary tests on the HMS:
# Access the Hive service
# Configuration Tab >> Look for Hive Metastore Canary Health Test
# Uncheck the box
# Restart the service.
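For reference, the override above corresponds to a client-side hive-site.xml property along these lines (a sketch; the value of 600 is taken from the suggestion above and is interpreted in seconds):
<!-- hive-site.xml client override: give the Service Monitor canary up to 600 s for HMS calls -->
<property>
  <name>hive.metastore.client.socket.timeout</name>
  <value>600</value>
</property>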
12-18-2022
07:25 PM
1 Kudo
@Tomas79 To get the truststore password and location, use the CM API below (replace the placeholder with your CM host FQDN):
curl -s -k -u admin:admin 'https://<CM_HOSTNAME_FQDN>:7183/api/v45/certs/truststorePassword'
If you found that the provided solution(s) assisted you with your query, please take a moment to login and click Accept as Solution below each response that helped.
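As a usage sketch only (the truststore path is a placeholder to replace with the location from your deployment), the returned password can then be used to inspect the truststore:
# list the truststore contents using the password returned by the API call above
keytool -list -keystore /path/to/cm-truststore.jks -storepass '<password-from-api>'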
12-18-2022
07:15 PM
@myzard Increase the timeout as described below. You can check again after changing the following property from Ambari UI > Advanced hive-interactive-env:
Number of retries while checking LLAP app status = 40
(It will show a warning, please ignore that and save)
This should increase the timeout from 400 s to 800 s.
If you found that the provided solution(s) assisted you with your query, please take a moment to login and click Accept as Solution below each response that helped.
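For context on the arithmetic (a hedged inference from the figures above, assuming the wait scales linearly with the retry count and the previous setting was 20 retries): 40 retries × (400 s / 20 retries) ≈ 800 s.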
12-18-2022
05:31 AM
@MEFTAH1997 Kindly share the Zeppelin logs for the timeframe when the login failed. Also check whether you are hitting the same issue as below:
https://community.cloudera.com/t5/Customer/ERROR-quot-org-apache-shiro-authc-AuthenticationException/ta-p/319320
Solution: To resolve this issue, you can try the steps below.
------
usermod -a -G shadow zeppelin
chgrp shadow /etc/shadow
chmod 040 /etc/shadow
rm -f /var/run/nologin
-----
If you found that the provided solution(s) assisted you with your query, please take a moment to login and click Accept as Solution below each response that helped.
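A quick way to verify that the permission changes took effect (a sketch; the expected output assumes the chgrp and chmod above were applied):
# /etc/shadow should now be group-readable by the shadow group
ls -l /etc/shadow   # expect something like: ----r----- 1 root shadow ... /etc/shadow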
12-18-2022
05:16 AM
@drgenious This is an OS-level issue that will need to be addressed at the OS level by the system admin. The bottom line here is that thrift-0.9.2 needs to be uninstalled. There are various things that could be happening:
1) Multiple Python versions.
2) Multiple pip versions.
3) A broken installation.
Solution 1:
- You can try creating a Python virtual environment to run impala-shell:
virtualenv venv -p python2
cd venv
source bin/activate
(venv) impala-shell
Solution 2:
(i) Remove the easy-install.pth files available in:
/usr/lib/python2.6/site-packages/
/usr/lib64/python2.6/site-packages/
(ii) Try running impala-shell again.
If you found that the provided solution(s) assisted you with your query, please take a moment to login and click Accept as Solution below each response that helped.
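If you take the uninstall route instead, the usual approach looks something like this (a sketch; first confirm which pip maps to the Python that impala-shell actually uses):
# see which thrift version that Python environment carries, then remove the conflicting copy
pip freeze | grep -i thrift
pip uninstall thrift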
12-18-2022
04:45 AM
@Srinivs Kindly check that the username and password for the DB are correct and that there are no extra spaces. Then grant the privileges as shown below.
postgres=# GRANT CONNECT ON DATABASE rman TO rman;
GRANT
postgres=# GRANT ALL ON DATABASE rman TO rman;
GRANT
postgres=# GRANT ALL PRIVILEGES ON DATABASE "rman" to rman;
GRANT
If you found that the provided solution(s) assisted you with your query, please take a moment to login and click Accept as Solution below each response that helped.
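To confirm the credentials and grants work end to end, you can also test the connection the way the service would (a sketch; the host and port are placeholders to adjust for your environment):
# connect as the rman user; a successful password prompt and session confirms the grants
psql -h <db-host> -p 5432 -U rman -d rman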
12-17-2022
08:22 PM
1 Kudo
@snm1523 As per the description, I see that while you are creating the key you get the following error: "user not allowed to do create key".
Solution:
1. To see the cm_kms repository you need to log in to Ranger Admin with the keyadmin user. Did you try logging in with the "keyadmin" user?
Or
2. Log in to the Ranger web UI as admin, go to Settings --> Users/Groups/Roles, and search for 'User Name: rangerkms'. Click the rangerkms user, then under roles add the keyadmin role. Save, then resume the upgrade in CM.
If you found that the provided solution(s) assisted you with your query, please take a moment to login and click Accept as Solution below each response that helped.
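Once the keyadmin role is in place, one way to re-test key creation from the command line is the standard Hadoop KMS client (a sketch; the key name and provider URI are placeholders for your environment):
# retry the failing operation as a user that now carries the keyadmin role
hadoop key create testkey -provider kms://https@<kms-host>:9494/kms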