Member since: 07-19-2020
Posts: 102
Kudos Received: 14
Solutions: 8

My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 274 | 10-22-2024 05:23 AM |
 | 398 | 10-11-2024 04:28 AM |
 | 991 | 09-24-2024 10:15 PM |
 | 568 | 09-17-2024 01:07 PM |
 | 724 | 09-09-2024 03:02 AM |
12-17-2024
12:41 PM
1 Kudo
@JSSSS The error is this: "java.io.IOException: File /user/JS/input/DIC.txt._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation."

According to the log, all 3 DataNodes are excluded: excludeNodes=[192.168.1.81:9866, 192.168.1.125:9866, 192.168.1.8>. With a replication factor of 3, the write needs these DataNodes to accept replicas; because every one of them is excluded, the write fails. HDFS cannot use these nodes, possibly due to:
- Disk space issues.
- Write errors or disk failures.
- Network connectivity problems between the NameNode and DataNodes.

1. Verify that the DataNodes are live and connected to the NameNode:
hdfs dfsadmin -report
Look at the "Live datanodes" and "Dead datanodes" sections. If all 3 DataNodes are excluded, they might show up as dead or decommissioned.

2. Ensure the DataNodes have sufficient disk space for the write operation:
df -h
Check the HDFS data directories (e.g. /hadoop/hdfs/data). If disk space is full, clear unnecessary files or increase disk capacity:
hdfs dfs -rm -r /path/to/old/unused/files

3. View the list of excluded nodes:
cat $HADOOP_HOME/etc/hadoop/datanodes.exclude
If nodes are wrongly excluded, remove their entries from datanodes.exclude and refresh the NameNode to apply the change:
hdfs dfsadmin -refreshNodes

4. Block placement policy: if the cluster has DataNodes with specific restrictions (e.g., rack awareness), verify the block placement policy:
grep dfs.block.replicator.classname $HADOOP_HOME/etc/hadoop/hdfs-site.xml
Default: org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault

A minimal shell sketch that bundles checks 1-3 follows below. Happy hadooping
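A minimal sketch, assuming the default data directory (/hadoop/hdfs/data) and exclude file location ($HADOOP_HOME/etc/hadoop/datanodes.exclude) mentioned above; adjust these paths to your cluster:

#!/bin/bash
# Quick health pass before retrying the write (sketch only; paths are assumptions).

# 1. Live vs dead DataNode counts as reported by the NameNode
hdfs dfsadmin -report | grep -E "Live datanodes|Dead datanodes"

# 2. Free space under the assumed DataNode data directory
df -h /hadoop/hdfs/data

# 3. Current exclude list (assumed location); empty output means nothing is excluded
cat "$HADOOP_HOME/etc/hadoop/datanodes.exclude"

# After removing wrongly excluded hosts from the file, make the NameNode re-read it:
# hdfs dfsadmin -refreshNodes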
10-14-2024
02:53 PM
1 Kudo
@manyquestions Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
10-04-2024
04:47 AM
@MaraWang
Have you been able to resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
10-01-2024
12:24 PM
@Amandi Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
09-29-2024
10:52 PM
1 Kudo
@Amandi, Did the response help resolve your query? If it did, kindly mark the relevant reply as the solution, as it will aid others in locating the answer more easily in the future.
09-18-2024
10:53 PM
1 Kudo
Hi, thanks for the link. However, my question is why the read replica is performing compaction. Is there a reason for that, and where should we look to fix this issue? Since replication is handled by the read-only cluster, it is deleting files and causing a FileNotFoundException.
09-18-2024
11:48 AM
Hi @therealsrikanth Manual snapshot copying is the only way forward, as you have security and compatibility issues between the clusters. Kindly follow the steps below.

Create snapshot operation
1. Take the snapshot in CDH. For example, after logging in to the HBase shell:
$ hbase shell
hbase> snapshot '<TABLE_NAME>', '<SNAPSHOT_NAME>'
2. Major compact the table:
hbase> major_compact '<TABLE_NAME>'
3. Copy the files to the local environment from the below locations:
hdfs dfs -get /hbase/data/.hbase-snapshot/ /tmp/dir
hdfs dfs -get /hbase/data/archive/ /tmp/dir2

Restore operation
1. Transfer the files to the CDP environment.
2. Use -copyFromLocal to copy the contents to HDFS:
cd /tmp
hdfs dfs -copyFromLocal dir /hbase/data/.hbase-snapshot
hdfs dfs -copyFromLocal dir2 /hbase/archive/data/default
Note: "default" is the namespace in which newly created tables are placed if you don't specify a custom namespace.
3. Make sure the directories are created in HDFS. The paths should look like this after copying:
/hbase/archive/data/<Namespace>/<TABLE_NAME>/<hfile1>
/hbase/archive/data/<Namespace>/<TABLE_NAME>/<hfile2>
...
4. Check the permissions on the /hbase/archive directory; it should be owned by the hbase user.
5. Log in to the HBase shell and check the snapshots:
$ hbase shell
hbase:001:0> list_snapshots
6. When the snapshot is visible, you can use the clone_snapshot command to create a new table from the snapshot:
hbase> clone_snapshot '<SNAPSHOT_NAME>', '<TABLE_NAME_NEW>'

A scripted sketch of these steps follows at the end of this reply. Was your question answered? Please take some time to click on "Accept as Solution" -- If you find a reply useful, say thanks by clicking on the thumbs up button below this post.
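A minimal scripted sketch of the same flow, assuming the example staging directories under /tmp used in the steps above; <TABLE_NAME> and <SNAPSHOT_NAME> remain placeholders to substitute:

# On a node with access to the source (CDH) cluster
TABLE='<TABLE_NAME>'      # placeholder: your table name
SNAP='<SNAPSHOT_NAME>'    # placeholder: your snapshot name
echo "snapshot '${TABLE}', '${SNAP}'" | hbase shell
echo "major_compact '${TABLE}'" | hbase shell
hdfs dfs -get /hbase/data/.hbase-snapshot/ /tmp/dir
hdfs dfs -get /hbase/data/archive/ /tmp/dir2

# Transfer /tmp/dir and /tmp/dir2 to the target (CDP) cluster, then on a CDP node:
hdfs dfs -copyFromLocal /tmp/dir /hbase/data/.hbase-snapshot
hdfs dfs -copyFromLocal /tmp/dir2 /hbase/archive/data/default
echo "list_snapshots" | hbase shell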
09-05-2024
03:04 PM
1 Kudo
Hi @HadoopCommunity Was your question answered? Please take some time to click on "Accept as Solution" -- If you find a reply useful, say thanks by clicking on the thumbs up button below this post.
08-23-2024
07:02 AM
1 Kudo
Hi @Marks_08 It could be an issue with the beeline shell not being able to pick up all the configuration files required for authentication. Could you please try exporting the configuration manually and then launching the beeline shell? For example:
export HADOOP_CONF_DIR=/etc/hadoop/conf:/etc/hive/conf:/etc/hbase/conf
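Then relaunch beeline in the same session. A minimal sketch, where the HiveServer2 host, port, and Kerberos principal are placeholders rather than details from this thread:

# Connect after exporting HADOOP_CONF_DIR as above; substitute your own host, port, and principal
beeline -u "jdbc:hive2://<hs2-host>:10000/default;principal=hive/_HOST@<REALM>"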