Member since: 03-11-2020
Posts: 197
Kudos Received: 30
Solutions: 40
My Accepted Solutions
| Views | Posted |
|---|---|
| 2136 | 11-07-2024 08:47 AM |
| 1505 | 11-07-2024 08:36 AM |
| 1053 | 06-18-2024 01:34 AM |
| 727 | 06-18-2024 01:25 AM |
| 887 | 06-18-2024 01:16 AM |
01-10-2023
05:46 AM
Encountered an error with /opt/cloudera/cm/bin/gen_credentials.sh:

```
Cannot access generated keytab file /var/run/cloudera-scm-server/cmf32289673674897789.keytab
```
Labels:
- Cloudera Manager
- Kerberos
01-10-2023
05:28 AM
@techfriend This can be resolved by modifying the principal. The failure in the credential generation trace looks like this:

```
WARNING: no policy specified for mapred/ip-172-31-46-169.us-west-2.compute.internal@HADM.RU; defaulting to no policy
add_principal: Principal or policy already exists while creating "mapred/ip-172-31-46-169.us-west-2.compute.internal@HADM.RU".
+ '[' 604800 -gt 0 ']'
++ kadmin -k -t /var/run/cloudera-scm-server/cmf5922922234613877041.keytab -p cloudera-scm/admin@HADM.RU -r HADM.RU -q 'getprinc -terse mapred/ip-172-31-46-169.us-west-2.compute.internal@HADM.RU'
++ tail -1
++ cut -f 12
+ RENEW_LIFETIME=0
+ '[' 0 -eq 0 ']'
+ echo 'Unable to set maxrenewlife'
+ exit 1
```

Log in to the kadmin.local shell, then modify the principal with the command below:

```
kadmin.local
modprinc -maxrenewlife 90day +allow_renewable mapred/ip-172-31-46-169.us-west-2.compute.internal@HADM.RU
```
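To see why the script exits with that error, here is a minimal sketch of the check it performs: per the trace above, the script takes the last line of the terse `getprinc` output and reads tab-separated field 12 (the max renewable lifetime in seconds), so a value of 0 triggers the `Unable to set maxrenewlife` failure. The sample line below is illustrative, not real kadmin output:

```shell
# Reproduce the field extraction gen_credentials.sh does on
# `getprinc -terse` output: last line, tab-separated field 12.
parse_renew_lifetime() {
  tail -1 | cut -f 12
}

# Illustrative tab-separated sample; field 12 holds 604800 (7 days in seconds).
sample="$(printf 'f1\tf2\tf3\tf4\tf5\tf6\tf7\tf8\tf9\tf10\tf11\t604800\tf13')"
printf '%s\n' "$sample" | parse_renew_lifetime
# → 604800
```

After running the `modprinc` command, re-running the generation should see a non-zero value in that field.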
12-26-2022
09:54 PM
1 Kudo
@mabilgen Thanks for the update. Keep us posted if this issue occurs again. If the provided solution(s) assisted you with your query, please take a moment to log in and click Accept as Solution below each response that helped.
12-23-2022
09:19 PM
@quangbilly79 It should be `/opt/cloudera/parcels/SPARK2/lib/`
12-23-2022
08:59 PM
@wert_1311 Could you please confirm whether you are using Ranger? If yes, check the Ranger audits. I would also suggest running `tail -f` on the HiveServer logs while inserting data to get more information.
12-23-2022
08:56 PM
1 Kudo
@mabilgen Cause: The root cause is a condition where the JournalNode identifies that the promised epoch from the NameNode is not the newest; on the contrary, it sees that the RPC request from the NameNode carries a lower epoch value than the locally stored promised epoch. It therefore throws the warning "IPC's epoch 155 is less than the last promised epoch 156" and rejects this RPC request from the NameNode to avoid split-brain. It will only accept the RPC request from the NameNode that sends the newest epoch. This can be caused by various conditions in the environment:
1.) a big job
2.) a network issue
3.) not enough resources on the node

Instructions: To resolve the issue, first verify that the network in the cluster is stable. It is also worth checking whether this issue happens regularly on one specific NameNode; if yes, there could be a network hardware issue or a resource issue on that NameNode. A few tunables:
a.) Raise dfs.datanode.max.transfer.threads from 4K to 16K and observe the performance of the cluster.
b.) Raise the NameNode heap size to a higher value if there is too much GC activity or too-long full GC pauses.
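As an illustration of tunable (a), the property below would go into hdfs-site.xml (in Cloudera Manager, via the HDFS configuration/safety-valve override). This is a sketch only; 16384 is the suggested target from the note above, not a verified optimum for every cluster:

```xml
<!-- hdfs-site.xml: raise DataNode transfer threads from 4096 (4K) to 16384 (16K) -->
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>16384</value>
</property>
```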
12-23-2022
08:48 PM
@bananaman This happens when the search filter is not correct. Try changing the search filter to `sAMAccountName=*`. If this does not help, please share the usersync logs so we can check further.
12-21-2022
09:41 PM
@kvbigdata What happens if you set Cloudera Manager > Hive > Configuration > Service Monitor Client Config Overrides > Add:
Name: hive.metastore.client.socket.timeout
Value: 600

We have provided information regarding the current situation of the canary issue in this KB article: https://my.cloudera.com/knowledge/quotError--The-Hive-Metastore-canary-failed-to-create-a?id=337839

The only workaround is to disable the canary test, as you already did, and that will not harm anything on your cluster. To disable the Canary tests on the HMS:
1. Access the Hive service.
2. On the Configuration tab, look for Hive Metastore Canary Health Test.
3. Uncheck the box.
4. Restart the service.
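For reference, the name/value override above corresponds to this hive-site.xml fragment. This is a sketch only; the value is the 600-second figure suggested above, and depending on the Hive version the value may also be written with an explicit unit (e.g. `600s`):

```xml
<!-- hive-site.xml: lengthen the metastore client socket timeout (seconds) -->
<property>
  <name>hive.metastore.client.socket.timeout</name>
  <value>600</value>
</property>
```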
12-18-2022
07:25 PM
1 Kudo
@Tomas79 To get the truststore password, use the below CM API call:

```
curl -s -k -u admin:admin 'https://<CM HOSTNAME (FQDN)>:7183/api/v45/certs/truststorePassword'
```

If the provided solution(s) assisted you with your query, please take a moment to log in and click Accept as Solution below each response that helped.
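A small sketch expanding on the call above. The `cm_cert_url` helper and the host `cm01.example.com` are illustrative assumptions, and the companion `/certs/truststore` endpoint for downloading the JKS file itself should be verified against your CM version's API docs:

```shell
# Hypothetical helper: build a CM certs API URL.
# Host, port 7183, and API version v45 are assumptions from the post above.
cm_cert_url() {
  printf 'https://%s:7183/api/v45/certs/%s\n' "$1" "$2"
}

cm_cert_url cm01.example.com truststorePassword
# → https://cm01.example.com:7183/api/v45/certs/truststorePassword

# Against a live CM (commented out here; needs real credentials and hostname):
# PASS=$(curl -s -k -u admin:admin "$(cm_cert_url cm01.example.com truststorePassword)")
# curl -s -k -u admin:admin "$(cm_cert_url cm01.example.com truststore)" -o truststore.jks
# keytool -list -keystore truststore.jks -storepass "$PASS"
```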
12-18-2022
07:15 PM
@myzard Increase the timeout as below. You can change the following property from Ambari UI > Advanced hive-interactive-env:
Number of retries while checking LLAP app status = 40
(It will show a warning; please ignore it and save.)
This should increase the timeout from 400 to 800 seconds. If the provided solution(s) assisted you with your query, please take a moment to log in and click Accept as Solution below each response that helped.