Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1513 | 06-15-2020 05:23 AM
 | 10235 | 01-30-2020 08:04 PM
 | 1666 | 07-07-2019 09:06 PM
 | 6946 | 01-27-2018 10:17 PM
 | 3888 | 12-31-2017 10:12 PM
08-21-2021
04:11 AM
1 Kudo
@mike_bronson7 Can you share your capacity scheduler, total memory, and vcores configs?
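If it is easier, one way to dump the current scheduler configuration for sharing is the ResourceManager REST API or the capacity-scheduler.xml that Ambari manages. This is only a hedged sketch; the host placeholder and the default port 8088 are assumptions about your setup:
curl http://<resourcemanager-host>:8088/ws/v1/cluster/scheduler
cat /etc/hadoop/conf/capacity-scheduler.xml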
07-18-2021
02:10 PM
@mike_bronson7 Are you using the default capacity scheduler settings? No queues/leaf queues created? Is what you shared the current setting?
07-02-2021
06:05 AM
Disk Balancer is not available in HDP 2.x; it is available from HDP 3.x onward. As a workaround, we can decommission the data node where we observe that the disks are not balanced equally, clean up the data node, recommission the node, and then run the HDFS Balancer again. Thanks, Prathap Kumar.
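A hedged sketch of the decommission/recommission part, assuming the NameNode's dfs.hosts.exclude points at /etc/hadoop/conf/dfs.exclude (that path is an assumption; on HDP this is normally driven from the Ambari Hosts page):
# add the datanode hostname to /etc/hadoop/conf/dfs.exclude, then tell the NameNode to re-read it
hdfs dfsadmin -refreshNodes
# after cleanup, remove the hostname from the exclude file and refresh again to recommission, then run the Balancer as usual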
06-30-2021
08:48 AM
@mike_bronson7 Here you go: how to determine YARN and MapReduce Memory Configuration Settings. Happy hadooping!
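For quick reference, these are the properties that guide walks you through (set in yarn-site.xml and mapred-site.xml, or via Ambari). The values below are only illustrative assumptions for a worker node with about 64 GB of RAM, not recommendations for your cluster:
yarn.nodemanager.resource.memory-mb = 57344
yarn.scheduler.minimum-allocation-mb = 2048
yarn.scheduler.maximum-allocation-mb = 57344
mapreduce.map.memory.mb = 4096
mapreduce.reduce.memory.mb = 8192
mapreduce.map.java.opts = -Xmx3276m   (roughly 80% of the map container size)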
03-21-2021
10:37 PM
Regarding the Ambari API / CLI command, can you show me the full syntax that replaces the disable with enable?
02-16-2021
07:14 AM
@mike_bronson7 I think you will be able to find a helpful, previously-posted answer to a question very similar to yours in this thread: Hortonworks Repositories can't be accessed. Hope this helps.
02-11-2021
11:10 PM
Since you are using Ambari, you can try the Rebalance HDFS action, or run the Hadoop Balancer tool directly.
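If you prefer to run the Balancer from the command line instead of the Ambari action, here is a hedged sketch; the 10% threshold and the bandwidth value are assumptions to tune for your cluster:
# optionally raise the per-datanode balancing bandwidth (bytes/sec), here ~100 MB/s
hdfs dfsadmin -setBalancerBandwidth 104857600
# move blocks until every datanode is within 10% of the cluster average utilization
hdfs balancer -threshold 10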
02-08-2021
02:25 PM
1 Kudo
Out of all the options available to deal with this situation, I think resetting your network configuration is the best. Resetting your network configuration is one of the maintenance procedures that helps refresh or repair network connectivity; it can eliminate latency and return your connection to the state it was in when you first started using the Internet. To resolve your concern, we suggest that you reset your TCP/IP (Internet Protocol) stack to its default settings.
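A hedged sketch of that reset, assuming a Windows client (run from an elevated Command Prompt and reboot afterwards; on Linux you would instead restart the network service):
netsh int ip reset
netsh winsock reset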
01-28-2021
12:04 AM
@mike_bronson7 Adding to @GangWar: To your question, "Does this action also affect the data itself on the datanode machines?" No, it does not affect data on the datanodes directly. This is a metadata operation on the NameNode: when the NameNode fails to progress through the edits or fsimage, it may need to be started with the -recover option. Since the metadata holds references to the blocks on the datanodes, this is a critical operation and may incur data loss.
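For reference, a hedged sketch of recovery mode (stop the NameNode first; it prompts you for how to handle any corrupt edit-log segments, which is where data loss can be introduced):
hdfs namenode -recover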
01-26-2021
03:16 PM
1 Kudo
@mike_bronson7 It seems to me like this is a symptom of having the default replication factor set to 3. This is for redundancy and processing capability within HDFS. It is recommended to have a minimum of 3 datanodes in the cluster to accommodate 3 healthy replicas of a block (given the default replication factor of 3), because HDFS will not write replicas of the same block to the same datanode. In your scenario there will be under-replicated blocks, and 1 healthy replica will be placed on the available datanode.
You may run setrep [1] to change the replication factor. If you provide a path to a directory, the command recursively changes the replication factor of all files under the directory tree rooted at that path:
hdfs dfs -setrep -w 1 /user/hadoop/dir1
[1] https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/FileSystemShell.html#setrep
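To confirm the new replication factor took effect and that no blocks remain under-replicated, a hedged check using the same illustrative path as above:
hdfs fsck /user/hadoop/dir1 -files -blocks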