Member since
09-11-2018
76
Posts
7
Kudos Received
5
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1816 | 07-15-2021 06:18 AM |
| | 2425 | 06-21-2021 01:36 AM |
| | 4191 | 06-21-2021 01:29 AM |
| | 3452 | 04-19-2021 10:17 AM |
| | 4878 | 02-09-2020 10:24 PM |
09-17-2021
03:09 AM
Hi @Chetankumar,

Given that you have heterogeneous storage: HDFS follows the rack topology to balance blocks across the DataNodes, and within a DataNode the volume choosing policy is currently Round Robin. If we change it to Available Space, new data will be written to the less-used disks, because the DataNode will choose a volume based on available space. That should help in your case.

You can find the setting in HDFS: CM -> HDFS -> Configuration -> DataNode Volume Choosing Policy -> change to Available Space. Save the changes and restart the DataNodes.

If that helps, please feel free to mark the post as the accepted solution.

Regards,
Vipin
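For reference, the CM setting above maps to the hdfs-site.xml property below; CM writes it for you when you change the Volume Choosing Policy, so this is only to show what is happening underneath:

```xml
<property>
  <name>dfs.datanode.fsdataset.volume.choosing.policy</name>
  <value>org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy</value>
</property>
```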
07-15-2021
06:18 AM
1 Kudo
Hi @Amn_468

In Kudu, a table is divided into multiple tablets, and those tablets are distributed across the cluster, so the table's data is stored across multiple tablet servers (Kudu nodes).

You can get that info from the Kudu master web UI: CM -> Kudu -> Web UI -> Tables -> select the table, or via:

curl -i -k --negotiate -u : "http://Abcde-host:8051/tables"

Also, you can run the ksck command to get that info: https://kudu.apache.org/docs/command_line_tools_reference.html#table-list

Does that answer your question? If yes, please feel free to mark the post as the accepted solution and give a thumbs up.

Regards,
06-21-2021
03:49 AM
Hi @FEIDAI

Check the HDFS trash to see whether the deleted folder is there (assuming you didn't use -skipTrash). If you find the folder under trash, copy it to your destination path:

hdfs dfs -cp /user/hdfs/.Trash/Current/<your file> <destination>

Otherwise, the best option is probably a data recovery tool or a backup.

Regards,
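The recovery flow can be sketched locally, with plain `cp` standing in for `hdfs dfs -cp` (the paths below are made up for illustration):

```shell
# Simulated trash layout; on a real cluster this would live under
# /user/hdfs/.Trash/Current/<original path> in HDFS.
mkdir -p /tmp/trash_demo/Current/mydata /tmp/trash_demo/restore
echo "recovered" > /tmp/trash_demo/Current/mydata/part-00000

# Copy the folder out of trash to the destination path, analogous to:
#   hdfs dfs -cp /user/hdfs/.Trash/Current/mydata /restore
cp -r /tmp/trash_demo/Current/mydata /tmp/trash_demo/restore/
cat /tmp/trash_demo/restore/mydata/part-00000   # prints: recovered
```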
06-21-2021
01:36 AM
Hi @sakitha

This seems to be a known issue. Is the topic whitelist set to "*"? Can you please try it with a dot, ".*"? The whitelist is treated as a regular expression, and ".*" is the pattern that matches every topic name.

Let us know if that works for you.

Regards,
~ If the above answers your question, please give a thumbs up and mark the post as the accepted solution.
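A quick way to sanity-check the pattern, using `grep -E` as a stand-in for the whitelist's regex matching (the topic names are made up):

```shell
# ".*" is a valid extended regex that matches every topic name.
for topic in orders.events users.signups; do
  echo "$topic" | grep -Eq '^.*$' && echo "matched: $topic"
done
```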
06-21-2021
01:29 AM
1 Kudo
Hi @wert_1311

That's right, the balancer only balances tablets across the Kudu cluster. If one host is consuming more space, it could be that its tablets are large. And correct, Kudu can't rebalance based on disk usage the way HDFS does.

One workaround you can try:
- Stop that specific Kudu TS role.
- Run ksck until the cluster comes back healthy.
- Once ksck is healthy, rebuild that particular Kudu TS (rebuilding = wiping the data and WAL directories): https://kudu.apache.org/docs/administration.html#rebuilding_kudu
- Start that specific TS.
- Run the rebalancer again.

That should help. Let me know how it goes.

Cheers,
~ If that answers your question, please give a thumbs up and mark the post as the accepted solution.
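The "run ksck until healthy" step can be scripted as a wait loop; `check_cluster` below is a hypothetical stub standing in for the real `kudu cluster ksck` call:

```shell
# Stub for illustration; on a real cluster replace the body with:
#   sudo -u kudu kudu cluster ksck <csv of master addresses>
check_cluster() { echo "OK"; }

# Poll until ksck reports healthy before rebuilding the tablet server.
until check_cluster | grep -q "OK"; do
  sleep 30
done
echo "cluster healthy, safe to rebuild the tablet server"
```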
06-17-2021
12:39 PM
Hi @wert_1311,

Check the tablet distribution across the tablet servers. If a tablet server goes down or becomes unavailable, its data is re-replicated to the other tablet servers, which can skew usage.

You can get the number of tablets per tablet server with this command:

sudo -u kudu kudu table list <csv of master addresses> -list_tablets | grep "^ " | cut -d' ' -f6,7 | sort | uniq -c

If the tablet distribution is uneven, you can run the kudu rebalance tool to balance your cluster: https://docs.cloudera.com/runtime/7.2.2/administering-kudu/topics/kudu-running-tablet-rebalancing-tool.html

Let me know how it goes. If that answers your question, please mark this post as "accept as solution".

Regards,
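To see what that pipeline produces, here it is run against a made-up sample of `-list_tablets`-style output (the field layout is illustrative; real output may differ slightly):

```shell
# Made-up sample: one table name line plus indented tablet lines.
sample='my_table
  T 0001 L RUNNING host-a:7050
  T 0002 L RUNNING host-b:7050
  T 0003 L RUNNING host-a:7050'

# Same pipeline as above: keep the indented tablet lines, pull out
# space-delimited fields 6-7, and count occurrences per server.
echo "$sample" | grep "^ " | cut -d' ' -f6,7 | sort | uniq -c
```

Here the counts show host-a holding two tablets to host-b's one, i.e. an uneven distribution worth rebalancing.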
04-20-2021
10:11 PM
Hi @sipocootap2,

AFAIK "/getimage" is deprecated in CDH and we suggest not using it. Instead, you can use the command "hdfs dfsadmin -fetchImage <dir>" to download and save the latest fsimage.

Based on research: in earlier versions of CDH the getImage method was available, after which the need for a proper command/utility to download the fsimage was recognized, and "hdfs dfsadmin -fetchImage" was born. Once that was in place, getImage was removed.

Does that answer your question? If yes, feel free to mark this post as "accept as solution".

Regards,
04-19-2021
10:17 AM
Hi @Chetankumar

You can perform a disk hot swap on the DataNode: https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/admin_dn_swap.html

If the replication factor is 3 for all files, taking down one disk shouldn't be a problem, as the NameNode will auto-replicate the under-replicated blocks. As a small first test, stop the DataNode and wait for some time (while the NN copies its blocks to the other available DataNodes), then run fsck to confirm the HDFS file system is healthy. Once it is healthy, you can safely work on that stopped DataNode. The idea is to keep the replication factor at 3 so you don't incur any data loss.

If the replication factor is 1 for some files and those blocks are hosted on the /data01 disk, then they could be lost. As long as you have RF=3 you should be good.

Does that answer your question? Let us know.

Regards,
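A sketch of that health check, run here against a made-up snippet of `hdfs fsck /` output (on a real cluster you would pipe the live fsck output instead):

```shell
# Made-up fsck summary; on a real cluster this comes from: hdfs fsck /
fsck_out='Total blocks (validated): 1024
 Under-replicated blocks: 0 (0.0 %)
The filesystem under path / is HEALTHY'

if echo "$fsck_out" | grep -q "HEALTHY"; then
  echo "safe to work on the stopped DataNode"
else
  echo "wait: blocks are still re-replicating"
fi
```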
03-02-2021
09:15 PM
Hi @JeromeAlbin

This looks like https://issues.apache.org/jira/browse/IMPALA-9486. The error pops up because you are connecting to Impala anonymously (no user, no password). If you specify a user (even one not declared in Kudu), it should work.

Please see page 12 of the following document: https://docs.cloudera.com/documentation/other/connectors/impala-jdbc/2-6-15/Cloudera-JDBC-Driver-for-Impala-Install-Guide.pdf

"Using User Name: This authentication mechanism requires a user name but does not require a password. The user name labels the session, facilitating database tracking."

Does that answer your question? If yes, feel free to mark this post "accept as solution".

Regards,
vipin
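In the Cloudera Impala JDBC driver, the "User Name" mechanism corresponds to AuthMech=2. A sketch of the connection string, with a hypothetical host and user:

```
jdbc:impala://impala-host.example.com:21050;AuthMech=2;UID=someuser
```

No PWD property is needed; the UID just labels the session as the driver guide describes.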
02-03-2021
12:20 AM
1 Kudo
Hi @vidanimegh

Ensure that forward and reverse DNS lookups work and that iptables is off, then perform a CM agent hard restart.

Also, what's the Java version? There is this bug, https://bugs.openjdk.java.net/browse/JDK-8215032, wherein servers with Kerberos enabled stop functioning. That could be a possibility.
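The forward/reverse lookup check can be scripted; `localhost` below is only a placeholder, substitute each cluster node's FQDN:

```shell
# Forward lookup: name -> address.
HOST=localhost
getent hosts "$HOST"

# Reverse direction: take the first resolved address and look it up again;
# both directions should agree for Kerberos to behave.
ADDR=$(getent hosts "$HOST" | awk '{print $1; exit}')
getent hosts "$ADDR"
```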