Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1996 | 06-15-2020 05:23 AM |
| | 16431 | 01-30-2020 08:04 PM |
| | 2144 | 07-07-2019 09:06 PM |
| | 8334 | 01-27-2018 10:17 PM |
| | 4727 | 12-31-2017 10:12 PM |
02-11-2021
11:10 PM
Since you are using Ambari, you can try the Rebalance HDFS action, or run the Hadoop Balancer tool directly.
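If you take the command-line route, the balancer can be invoked directly. A minimal sketch (the threshold value shown is the default, used here for illustration; it is not from the original post):

```shell
# Run the HDFS balancer as the hdfs user. -threshold is the allowed
# deviation (in percent) of each DataNode's utilization from the
# cluster average; 10 is the default, lower values balance more aggressively.
sudo -u hdfs hdfs balancer -threshold 10
```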
02-08-2021
02:25 PM
1 Kudo
Out of all the options available to deal with this situation, I think resetting your network configuration is the best. Resetting your network configuration is a maintenance procedure that refreshes or repairs network connectivity; it can eliminate latency and return your network to the state it was in when you first started using the Internet. To resolve your concern, we suggest resetting your TCP/IP (Internet Protocol) stack to its default settings.
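On Windows, the TCP/IP reset described above is typically done from an elevated command prompt. A sketch using standard `netsh` commands (these specific commands are an assumption on my part; they are not named in the original post):

```shell
# Reset the Winsock catalog and the TCP/IP stack to their defaults,
# then flush the DNS resolver cache. Reboot for the reset to take effect.
netsh winsock reset
netsh int ip reset
ipconfig /flushdns
```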
01-28-2021
12:04 AM
@mike_bronson7 Adding to @GangWar: To your question "does this action also affect the data itself on the DataNode machines?" No, it does not affect data on the DataNodes directly. This is a metadata operation on the NameNode: when the NameNode fails to progress through the edits or fsimage, it may need to be started with the -recover option. However, since the metadata holds references to the blocks on the DataNodes, this is a critical operation and may incur data loss.
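For reference, the recovery mode mentioned above is started like this (a sketch; as a precaution, stop the NameNode and back up the metadata directories before attempting it):

```shell
# Start the NameNode in recovery mode as the hdfs user; it prompts
# interactively when it finds problems it cannot resolve in the edit log.
sudo -u hdfs hdfs namenode -recover
```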
01-26-2021
03:16 PM
1 Kudo
@mike_bronson7 It seems to me that this is a symptom of having the default replication factor set to 3. This provides redundancy and processing capability within HDFS. It is recommended to have a minimum of 3 DataNodes in the cluster to accommodate 3 healthy replicas of a block (given the default replication factor of 3), because HDFS will not write replicas of the same block to the same DataNode. In your scenario there will be under-replicated blocks, and 1 healthy replica will be placed on the available DataNode. You can run setrep [1] to change the replication factor. If you provide a path to a directory, the command recursively changes the replication factor of all files under the directory tree rooted at that path: hdfs dfs -setrep -w 1 /user/hadoop/dir1 [1] https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/FileSystemShell.html#setrep
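To confirm the change took effect, the block replication can be inspected afterwards with fsck. A small sketch (the path is the same illustrative one used above):

```shell
# Set replication to 1 recursively and wait (-w) for it to complete.
hdfs dfs -setrep -w 1 /user/hadoop/dir1
# Report per-file block and replication details to verify the new factor.
hdfs fsck /user/hadoop/dir1 -files -blocks
```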
01-22-2021
12:23 AM
@mike_bronson7 Please refer to the KB article below for this issue: https://my.cloudera.com/knowledge/StandBy-NameNode-fails-to-start-Error-shows--?id=271605
12-16-2020
02:28 PM
1 Kudo
@mike_bronson7 To achieve your goal for the 2 issues, you will need to edit Kafka's server.properties to add the following line: auto.leader.rebalance.enable=false Then, assuming you have a ZooKeeper quorum of host1, host2, host3, run: bin/kafka-preferred-replica-election.sh --zookeeper host1:2181,host2:2181,host3:2181/kafka This should balance your partitions; you can validate with: bin/kafka-topics.sh --zookeeper host1:2181,host2:2181,host3:2181/kafka --describe For the second issue, the lost broker, you need to create a new broker and set its broker.id to the id of the previous broker that is gone and not recoverable, then run kafka-preferred-replica-election.sh again to balance the topics.
10-14-2020
08:48 AM
Now I want to add the HBase Master, RegionServer, and Phoenix Query Server components.
10-12-2020
02:47 AM
A little question: why not just stop the HDFS service on each new data node and set it to maintenance mode?
10-10-2020
11:35 AM
1 Kudo
@mike_bronson7 Always stick to the Cloudera documentation. Yes, there is no risk in running that command; I understand your reservation.
10-08-2020
08:53 AM
Should we also delete Kafka topics before deleting the service? Will the topics still remain after deleting the service?