Member since
10-01-2018
802
Posts
142
Kudos Received
130
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2252 | 04-15-2022 09:39 AM |
| | 1742 | 03-16-2022 06:22 AM |
| | 5121 | 03-02-2022 09:44 PM |
| | 2041 | 03-02-2022 08:40 PM |
| | 1277 | 01-05-2022 07:01 AM |
10-09-2024
11:14 PM
1 Kudo
This occurs in two scenarios:
1. JDK 11 is installed but was not given mode 777 permissions.
2. JDK 11 was not properly extracted and moved.

To install OpenJDK 11:
1. Download the file openjdk-11_linux-x64_bin.tar to /tmp
2. tar -xf openjdk-11_linux-x64_bin.tar
3. mv /tmp/jdk-11 /usr/lib/jvm
4. chmod 777 /usr/lib/jvm/jdk-11
5. update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk-11/bin/java" 1010
6. update-alternatives --install "/usr/bin/javac" "javac" "/usr/lib/jvm/jdk-11/bin/javac" 1010

Verify with java -version and javac -version, then restart cloudera-scm-server.
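The steps above can be sketched as a single script (run as root; the tarball name and paths follow the post, and the mkdir is an addition in case /usr/lib/jvm does not yet exist):

```shell
#!/bin/sh
set -e

# Extract the OpenJDK 11 tarball previously downloaded to /tmp
cd /tmp
tar -xf openjdk-11_linux-x64_bin.tar

# Move the JDK into place; the post uses mode 777, though 755 is usually sufficient
mkdir -p /usr/lib/jvm
mv /tmp/jdk-11 /usr/lib/jvm
chmod -R 777 /usr/lib/jvm/jdk-11

# Register java/javac with update-alternatives at priority 1010
update-alternatives --install /usr/bin/java  java  /usr/lib/jvm/jdk-11/bin/java  1010
update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/jdk-11/bin/javac 1010

# Verify the active JDK, then restart Cloudera Manager Server
java -version && javac -version
systemctl restart cloudera-scm-server
```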
07-16-2024
03:25 PM
@GangWar @wert_1311 I have found HDFS files that are persistently under-replicated despite being over a year old. They are rare, but vulnerable to loss from a single disk failure. To be clear, 'hdfs dfs -ls filename' shows the replication target, not the actual replica count; the actual count can be found with 'hdfs fsck filename -blocks -files'. In theory this situation should be transient, but I have found some long-lived cases. See the example below, where a file is 3 blocks in length and one of the blocks has only one live replica.

# hdfs fsck -blocks -files /tmp/part-m-03752
OUTPUT:
/tmp/part-m-03752: Under replicated BP-955733439-1.2.3.4-1395362440665:blk_1967769468_1100461809792. Target Replicas is 3 but found 1 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
/tmp/part-m-03752: Replica placement policy is violated for BP-955733439-1.2.3.4-1395362440665:blk_1967769468_1100461809792. Block should be additionally replicated on 1 more rack(s).
0. BP-955733439-1.2.3.4-1395362440665:blk_1967769089_1100461809406 len=134217728 Live_repl=3
1. BP-955733439-1.2.3.4-1395362440665:blk_1967769276_1100461809593 len=134217728 Live_repl=3
2. BP-955733439-1.2.3.4-1395362440665:blk_1967769468_1100461809792 len=40324081 Live_repl=1
Status: HEALTHY
Total size: 308759537 B
Total dirs: 0
Total files: 1
Total symlinks: 0
Total blocks (validated): 3 (avg. block size 102919845 B)
Minimally replicated blocks: 3 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 1 (33.333332 %)
Mis-replicated blocks: 1 (33.333332 %)
Default replication factor: 3
Average block replication: 2.3333333
Corrupt blocks: 0
Missing replicas: 2 (22.222221 %)
Number of data-nodes: 30
Number of racks: 3
The filesystem under path '/tmp/part-m-03752' is HEALTHY

# hadoop fs -ls /tmp/part-m-03752
OUTPUT:
-rw-r--r-- 3 wuser hadoop 308759537 2021-12-11 16:58 /tmp/part-m-03752

Presumably, the file was incorrectly replicated when it was written because of some failure, and the defaults for the dfs.client.block.write.replace-datanode-on-failure properties were such that new DataNodes were not obtained at write time to replace the ones that failed. The puzzling thing is why the file does not get re-replicated after all this time.
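One common workaround for a stuck under-replicated file (not from the post, but standard HDFS practice) is to bump the replication factor and then restore it, which forces the NameNode to schedule new replicas:

```shell
# Raise the target replication so the NameNode schedules additional replicas;
# -w waits until the new replication level is actually reached.
hdfs dfs -setrep -w 4 /tmp/part-m-03752

# Restore the original replication factor of 3 once the blocks are healthy.
hdfs dfs -setrep -w 3 /tmp/part-m-03752

# Re-check the block report to confirm Live_repl=3 on every block.
hdfs fsck /tmp/part-m-03752 -blocks -files
```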
03-05-2024
12:14 PM
@lv_antel Welcome to the Cloudera Community! As this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could help others give you a more accurate answer to your question. You can link this thread as a reference in your new post. Thanks.
02-27-2024
12:29 AM
2 Kudos
Yes @mike_bronson7, the above steps also work.
12-01-2023
03:14 AM
The stack trace closely resembles the issue reported in https://issues.apache.org/jira/browse/HIVE-21698. To address this, it is recommended to upgrade to CDP 7.1.7 or a later release.
10-02-2023
10:17 PM
Can you please help me with how to migrate from MIT Kerberos to AD Kerberos, given that MIT Kerberos is currently used by 6000+ applications? Alternatively, can you share some documentation on how to do it?
07-18-2023
06:42 AM
@GangWar This is still a problem in CDP 7.1.8, where there is no way to turn off the "Auto-TLS is Enabled" status in Admin --> Security. Has anyone found a solution? I've now combed through the UI settings, the database, and local files for anything to do with TLS and removed most of it. I know it's turned off, but as long as CDP thinks that Auto-TLS is ON, I can't run the Auto-TLS setup wizard.
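One avenue worth checking (a sketch, not a verified fix: the hostname, credentials, and API version here are assumptions; confirm the version with GET /api/version on your Cloudera Manager host) is inspecting and clearing the TLS-related Cloudera Manager settings over the REST API:

```shell
# List Cloudera Manager's own configuration and look for TLS-related settings
# that Auto-TLS may have left behind.
curl -s -u admin:admin "http://cm-host:7180/api/v40/cm/config?view=full" | grep -i tls

# WEB_TLS and AGENT_TLS are the CM-level settings that control TLS for the web
# UI and agent communication; setting them to false disables TLS via the API.
curl -s -u admin:admin -X PUT -H "Content-Type: application/json" \
  -d '{"items":[{"name":"WEB_TLS","value":"false"},{"name":"AGENT_TLS","value":"false"}]}' \
  "http://cm-host:7180/api/v40/cm/config"
```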
05-16-2023
02:08 AM
Hi @Paarth, the Spark HBase Connector (SHC) is not supported in CDP. You need to use the HBase Spark Connector to access HBase data from Spark. You can find a sample reference here: https://docs.cloudera.com/runtime/7.2.10/managing-hbase/topics/hbase-using-hbase-spark-connector.html
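A minimal sketch of launching spark-shell with the HBase Spark Connector on a CDP node (the jar and config paths below are assumptions based on a typical parcel layout; verify them on your cluster and see the linked documentation for the authoritative steps):

```shell
# Start spark-shell with the HBase Spark Connector jar on the classpath and
# the HBase client configuration shipped to the executors.
spark-shell \
  --jars /opt/cloudera/parcels/CDH/lib/hbase_connectors/lib/hbase-spark.jar \
  --files /etc/hbase/conf/hbase-site.xml
```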
03-07-2023
12:04 PM
Welcome to the community, @supersonic-2021. As this is an older post, we recommend starting a new thread. A new thread will give you the opportunity to share details specific to your environment that could help others provide a more accurate answer to your question.
02-03-2023
09:10 PM
After following the above steps, I'm still not able to start HiveServer2.