Member since
09-11-2018
76
Posts
7
Kudos Received
5
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1039 | 07-15-2021 06:18 AM |
| | 1475 | 06-21-2021 01:36 AM |
| | 2384 | 06-21-2021 01:29 AM |
| | 2236 | 04-19-2021 10:17 AM |
| | 3287 | 02-09-2020 10:24 PM |
03-05-2024
12:14 PM
@lv_antel Welcome to the Cloudera Community! As this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post. Thanks.
05-25-2022
01:14 AM
@PDDF_VIGNESH, did @paras's response help you resolve this issue?
09-23-2021
07:51 AM
1 Kudo
HDFS data might not always be distributed uniformly across DataNodes. One common reason is the addition of new DataNodes to an existing cluster. HDFS provides a balancer utility that analyzes block placement and rebalances data across the DataNodes. The balancer moves blocks until the cluster is deemed balanced, meaning that the utilization of every DataNode (ratio of used space on the node to the node's total capacity) differs from the utilization of the cluster (ratio of used space on the cluster to the cluster's total capacity) by no more than a given threshold percentage. Note that the balancer does not balance between individual volumes on a single DataNode. To free up space on particular DataNodes, you can use a block distribution application to pin block replicas to specific DataNodes so that the pinned replicas are not moved during cluster balancing. https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.6.0/bk_hdfs-administration/content/overview_hdfs_balancer.html
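As a quick sketch of the above, the balancer can be run from the command line with an explicit utilization threshold (the host it runs on and the threshold value here are illustrative, not specific to any environment):

```shell
# Run the HDFS balancer; -threshold 10 means every DataNode's utilization
# must end up within 10 percentage points of the overall cluster utilization.
hdfs balancer -threshold 10

# To restrict balancing to a subset of DataNodes, an include list can be
# supplied (one hostname per line in the file; path is illustrative):
# hdfs balancer -threshold 10 -include -f /tmp/datanodes-to-balance.txt
```

A lower threshold gives a more evenly balanced cluster but takes longer and moves more data, so 10 (the default) is a reasonable starting point.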
07-15-2021
06:18 AM
1 Kudo
Hi @Amn_468 In Kudu, a table is divided into multiple tablets, and those tablets are distributed across the cluster, so the table data is stored across multiple tablet servers (Kudu nodes). You can get that info from the Kudu master web UI: CM -> Kudu -> Web UI -> Tables -> select table curl -i -k --negotiate -u : "http://Abcde-host:8051/tables" You can also run the ksck command to get that info: https://kudu.apache.org/docs/command_line_tools_reference.html#table-list Does that answer your question? If yes, please feel free to mark the post as an accepted solution and give a thumbs up. Regards,
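To sketch the CLI route mentioned above: the `kudu` command-line tool can list tables and report tablet placement per tablet server (the master hostnames below are placeholders for your own Kudu masters):

```shell
# List all tables known to the cluster (master addresses are illustrative):
kudu table list master-1,master-2,master-3

# ksck checks cluster health and reports which tablet servers host
# each tablet replica; -tables limits the check to one table:
kudu cluster ksck master-1,master-2,master-3 -tables=my_table
```

The ksck output includes a per-tablet breakdown, which is the same replica-placement information shown in the master web UI.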
06-27-2021
11:50 PM
Hi @FEIDAI, have any of the replies resolved your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
06-22-2021
10:29 PM
@wert_1311, has @kingpin's reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
06-21-2021
01:36 AM
Hi @sakitha This seems to be a known issue. Is the topic whitelist set to " * "? Can you please try it with a dot instead: ".*" Let us know if that works for you. Regards, ~ If the above answers your question, please give a thumbs up and mark the post as an accepted solution.
06-03-2021
01:10 AM
Hi @kingpin. Thanks for the reply. Is there an official document stating that "/getimage" is deprecated? I couldn't find one. When I run the "hdfs dfsadmin -fetchImage" command, it calls the same curl endpoint. I can get the fsimage with the hdfs command, but I want to figure out why curl fails.
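For reference, a hedged sketch of what fetching the fsimage over HTTP looks like on Hadoop 2.x and later, where the old `/getimage` servlet was replaced by `/imagetransfer` (the NameNode host and HTTP port below are illustrative, and a Kerberized cluster would also need `--negotiate -u :`):

```shell
# Download the latest fsimage directly from the NameNode's HTTP endpoint.
# "nn-host:50070" is a placeholder for your active NameNode's web address.
curl -o fsimage.out \
  "http://nn-host:50070/imagetransfer?getimage=1&txid=latest"
```

`hdfs dfsadmin -fetchImage` hits this same endpoint internally, so comparing its behavior against a manual curl of this URL can help isolate whether the failure is in the HTTP layer (authentication, wrong port, standby NameNode) rather than in HDFS itself.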
05-31-2021
01:30 AM
What if there are multiple namespaces with multiple NameNodes? The `hdfs dfsadmin -fetchImage` command only reads from the default namespace.
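One possible workaround (an assumption, not a confirmed fix): `dfsadmin` accepts the generic `-fs` option, so each nameservice can be targeted explicitly. The nameservice IDs and output paths below are illustrative:

```shell
# Fetch the fsimage from each federated nameservice in turn.
# "ns1" and "ns2" are placeholder nameservice IDs from hdfs-site.xml.
hdfs dfsadmin -fs hdfs://ns1 -fetchImage /tmp/fsimage-ns1
hdfs dfsadmin -fs hdfs://ns2 -fetchImage /tmp/fsimage-ns2
```

With HA enabled, resolving `hdfs://ns1` to the active NameNode is handled by the client failover configuration, so this avoids hard-coding individual NameNode hosts.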
04-22-2021
07:40 PM
Dear @kingpin Thanks for your reply. I do have a Spark Streaming process that stores files from Kafka to HDFS every two minutes; you can refer to the screenshot below for the data volume. The Java heap usage is increasing every day. I think it will exceed 70 GB after one month, yet the block count is still under 2 million. Is there any way to clear the Java heap cache? The Java heap memory returns to normal after rebooting. Spark Streaming consuming Kafka: