Member since
07-30-2020
219
Posts
45
Kudos Received
60
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 435 | 11-20-2024 11:11 PM |
| | 490 | 09-26-2024 05:30 AM |
| | 1084 | 10-26-2023 08:08 AM |
| | 1852 | 09-13-2023 06:56 AM |
| | 2129 | 08-25-2023 06:04 AM |
07-30-2023
10:00 AM
1 Kudo
It requires an HBase cluster, as Phoenix is a SQL layer on top of HBase.
07-28-2023
10:09 AM
1 Kudo
@sra_vis For CDP, in Cloudera Manager: on the Home page, click the menu to the right of the cluster name and select Add Service. A list of service types is displayed; select Phoenix from the list and click Continue, then follow the wizard to add the Phoenix service. For HDP: https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.6.1/bk_command-line-installation/content/ch_install_phoenix_chapter.html
07-28-2023
09:57 AM
1 Kudo
Summary: The Rebalance option is greyed out in Cloudera Manager under the HDFS-1 or Balancer Actions drop-down menu.

Symptoms: The Rebalance option is greyed out in Cloudera Manager under the HDFS-1 or Balancer Actions menu, even though you have verified that there are no currently running Balancer operations under the HDFS-1 > Balancer Instance > Commands tab. The Rebalance menu option is normally greyed out only while an active Balancer job is running.

Instructions: To diagnose and work around this issue, perform the following actions. Log in to the Cloudera Manager SQL database and check the status of the BALANCER role with:

```sql
SELECT CONFIGURED_STATUS, ROLE_TYPE FROM ROLES WHERE ROLE_TYPE = "BALANCER";
```

The normal status is NA. If the status returned is BUSY, change it to NA with:

```sql
UPDATE ROLES SET CONFIGURED_STATUS = "NA" WHERE ROLE_TYPE = "BALANCER";
```

Re-execute the SELECT statement to confirm that CONFIGURED_STATUS now indicates the correct state. After changing the status, the Rebalance option should no longer be greyed out in Cloudera Manager.
07-25-2023
01:54 AM
Hi @lukepoo, The 'enable_table_replication' command verifies whether the table is present on the destination and, if not, tries to create it using the peer info, so there is a good chance it is failing at that step. Have you tried setting the REPLICATION_SCOPE to 1 for this table (assuming you have already created the same table on the target) and checking whether replication works?
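Setting the scope manually from the HBase shell might look like the following; the table and column-family names here are placeholders:

```
# Run in the HBase shell on the source cluster.
# 'mytable' and 'cf1' are placeholder names - substitute your own.
alter 'mytable', {NAME => 'cf1', REPLICATION_SCOPE => '1'}
```

This sidesteps the table-creation step of 'enable_table_replication', since the target table already exists.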
07-25-2023
01:46 AM
Hi @lukepoo. Yes, you can try raising the timeouts, and also check whether the regions of this table have good locality. In the current run, the timeout happened on region '1e8adfc0af2cf791361126e24822e63c', so check that region's locality and run a major compaction if its locality is less than 1.
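Region locality is visible on the table's page in the HBase Master UI; triggering the compaction from the HBase shell might look like this (table name is a placeholder):

```
# Run in the HBase shell; 'mytable' is a placeholder name.
# A major compaction rewrites the region's files locally,
# which brings locality back toward 1.
major_compact 'mytable'
```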
07-25-2023
01:39 AM
Hi @dmitrybalak. Try the Cloudera article below and see if it helps resolve this. https://my.cloudera.com/knowledge/Rebalance-option-is-greyed-out-in-the-Cloudera-Manager-under?id=91268
07-21-2023
04:11 AM
Hi @cdl-support. You can refer to the articles below and check if they help. https://my.cloudera.com/knowledge/Issue-with-Small-Files-in-HDFS?id=308948 Using Hive: https://docs.cloudera.com/best-practices/latest/impala-performance/topics/bp-impala-avoiding-small-files.html
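As a rough sketch of the kind of scan those articles suggest, the snippet below walks a local directory and flags files below a size threshold. It is only an illustration; on a real cluster you would inspect HDFS itself (for example via `hdfs dfs -ls -R` output), and the threshold here simply mirrors the common 128 MiB HDFS block size:

```python
import os

# Common HDFS default block size (128 MiB); files much smaller than
# this are "small files" that inflate NameNode metadata.
SMALL_FILE_THRESHOLD = 128 * 1024 * 1024

def find_small_files(root, threshold=SMALL_FILE_THRESHOLD):
    """Return a list of (path, size) pairs for files below the threshold."""
    small = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            size = os.path.getsize(path)
            if size < threshold:
                small.append((path, size))
    return small
```

Once the offending directories are known, the linked docs describe merging the files (e.g. with Hive) into fewer, block-sized ones.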
06-22-2023
06:34 AM
Hi @yagoaparecidoti Yes, the blocks of the file will be present on DataNodes in both racks. If you write a file with a replication factor of 3, one replica will be placed on a DataNode in one rack, and the other two replicas will be placed on different DataNodes in the second rack.
06-22-2023
02:27 AM
Hi, @yagoaparecidoti With a replication factor of 3, BlockPlacementPolicyDefault puts one replica on the local machine if the writer is on a DataNode (otherwise on a random DataNode in the same rack as the writer), another replica on a node in a different (remote) rack, and the last on a different node in that same remote rack. So only 2 racks are used in total, and a scenario where both racks go down at the same time causes data unavailability, whereas BlockPlacementPolicyRackFaultTolerant places the 3 replicas on 3 different racks. So, you can safely set up 2 racks. If you want to go with BlockPlacementPolicyRackFaultTolerant (for the rare case where both racks go down), you can follow this doc: https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsBlockPlacementPolicies.html
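The rack choices of the two policies can be sketched as below. This is only an illustration of the placement rules described above, not the actual HDFS code; replicas are shown by rack label only (two identical labels mean two different DataNodes in that rack), and the fault-tolerant sketch assumes at least as many racks as replicas:

```python
import random

def default_placement(racks, writer_rack, rf=3):
    """Sketch of BlockPlacementPolicyDefault with rf=3: one replica on
    the writer's rack, two on different nodes of one remote rack."""
    remote = random.choice([r for r in racks if r != writer_rack])
    return [writer_rack, remote, remote]

def rack_fault_tolerant_placement(racks, writer_rack, rf=3):
    """Sketch of BlockPlacementPolicyRackFaultTolerant: spread the
    replicas across as many distinct racks as possible."""
    others = [r for r in racks if r != writer_rack]
    random.shuffle(others)
    return ([writer_rack] + others)[:rf]
```

With 3 racks, the default policy still touches only 2 of them, while the rack-fault-tolerant policy uses all 3, which is exactly the trade-off discussed above.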
06-22-2023
02:11 AM
Hi @damon The most likely culprit in this case is the packet length crossing the jute buffer limit. As this is seen for the KMS service, verify whether you are getting any warnings such as "Packet len is out of range!". If you do, raise the jute buffer for the ZooKeeper service as well as for the KMS service (the client), and restart the respective services to resolve the issue.
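The limit is controlled by the `jute.maxbuffer` Java system property, which must be raised on both the ZooKeeper server and the client side. The value below (4 MiB; the default is roughly 1 MiB) is only an example; tune it to your payload sizes:

```
# Example Java option, added to the JVM arguments of both the
# ZooKeeper server and the KMS (client) processes; value in bytes.
-Djute.maxbuffer=4194304
```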