Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 487 | 06-04-2025 11:36 PM |
| | 1014 | 03-23-2025 05:23 AM |
| | 537 | 03-17-2025 10:18 AM |
| | 2023 | 03-05-2025 01:34 PM |
| | 1263 | 03-03-2025 01:09 PM |
11-03-2020
03:34 PM
@Masood You could use the Cloudera Manager REST API: http://<cmhost>:<port>/api/v1/clusters/<cluster-name>/services/HDFS/roles/HDFS-NAMENODE-<namenode id> and search the response for "haStatus"
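A minimal sketch of checking that flag programmatically. The response excerpt below is an assumption based on the Cloudera Manager API's role object (where "haStatus" is typically ACTIVE or STANDBY for an HA NameNode); the role name is a placeholder:

```python
import json

# Hypothetical excerpt of a Cloudera Manager role response; the real API
# returns a role object whose "haStatus" field reports ACTIVE or STANDBY
# for HA NameNodes (assumption based on the CM API docs).
sample_response = '''
{
  "name": "HDFS-NAMENODE-1",
  "type": "NAMENODE",
  "haStatus": "ACTIVE"
}
'''

def ha_status(role_json: str) -> str:
    """Return the haStatus field of a role payload, or UNKNOWN if absent."""
    return json.loads(role_json).get("haStatus", "UNKNOWN")

print(ha_status(sample_response))  # ACTIVE
```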
11-03-2020
03:31 PM
@jlguti According to the log you shared, your problem is network-related. Check your /etc/hosts and ensure the hostnames can be resolved:

Caused by: java.io.IOException: Failed to connect to bupry-dev-00:46319
Caused by: java.net.UnknownHostException: bupry-dev-00

Make sure the host entries are FQDNs and the first IPv4 and IPv6 lines are not tampered with:

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
##############################################
192.168.0.20 your_host_name Host_Alias

Or something like this:

127.0.0.1 localhost
127.0.1.1 techpiezo-pc
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Please revert
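As a quick sanity check, a small sketch that asks the OS resolver (which consults /etc/hosts and DNS) whether a hostname resolves; swap in the hostname from your own stack trace:

```python
import socket

def resolves(hostname: str) -> bool:
    """Return True if the OS resolver (/etc/hosts or DNS) can map
    the hostname to an IP address."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# "localhost" should always resolve; a name like "bupry-dev-00" from the
# log will only resolve once /etc/hosts or DNS is fixed on every node.
print(resolves("localhost"))
```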
11-03-2020
01:00 PM
@jlguti Can you share the output of the below command?

yarn logs -applicationId application_1604418534431_0001

For reference, container logs live under ${yarn.nodemanager.log-dirs}/application_${appid}; individual containers' log directories sit below this, in directories named container_${contid}, each containing the stderr, stdin, and syslog files generated by that container. The output could give us pointers to the potential issue, either memory pressure or some misconfiguration. Happy hadooping
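To illustrate the directory layout described above, a small sketch that builds the expected NodeManager-local log path for one container. The log-dir root is an assumed example value for yarn.nodemanager.log-dirs, and the container id is hypothetical:

```python
import os

def container_log_dir(log_root: str, app_id: str, container_id: str) -> str:
    """Build the NodeManager-local log path for one container, following the
    ${yarn.nodemanager.log-dirs}/application_${appid}/container_${contid} layout."""
    return os.path.join(log_root, app_id, container_id)

path = container_log_dir(
    "/var/log/hadoop-yarn/container",         # assumed yarn.nodemanager.log-dirs value
    "application_1604418534431_0001",         # application id from the post
    "container_1604418534431_0001_01_000001", # hypothetical container id
)
print(path)
```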
11-03-2020
12:48 PM
@ni4ni @Masood Checkpointing is the process that takes an fsimage and edit log and compacts them into a new fsimage. This way, instead of replaying a potentially unbounded edit log, the NameNode can load the final in-memory state directly from the fsimage, which is a far more efficient operation and reduces NameNode startup time. Checkpointing is one of the most important activities of the Standby or Secondary NameNode in a cluster. In an HA cluster, all client connections and cluster activity are handled by the Active NameNode, while the Standby NameNode takes responsibility for compacting the edit logs and the fsimage; because the Standby also performs checkpoints of the namespace state, it is not necessary to run a Secondary NameNode. Hope that helps
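Checkpoint frequency is tunable. As a hedged illustration, these are the two hdfs-site.xml properties that control when a checkpoint is triggered; the values shown are the commonly documented defaults, so verify them against your distribution:

```xml
<!-- hdfs-site.xml: checkpoint tuning (defaults shown; confirm for your version) -->
<property>
  <name>dfs.namenode.checkpoint.period</name>
  <value>3600</value> <!-- maximum seconds between two checkpoints -->
</property>
<property>
  <name>dfs.namenode.checkpoint.txns</name>
  <value>1000000</value> <!-- checkpoint sooner once this many uncheckpointed transactions accumulate -->
</property>
```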
10-31-2020
08:40 AM
@varun_rathinam Is your NiFi cluster kerberized? If so, you will need to provide the keytabs
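As a hedged sketch, these are the nifi.properties entries typically involved in a kerberized setup; the krb5 path, principal, and keytab location below are placeholder values for your realm:

```properties
# nifi.properties -- Kerberos settings (placeholder values, adjust to your realm)
nifi.kerberos.krb5.file=/etc/krb5.conf
nifi.kerberos.service.principal=nifi/myhost.example.com@EXAMPLE.COM
nifi.kerberos.service.keytab.location=/etc/security/keytabs/nifi.service.keytab
```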
10-30-2020
02:25 AM
1 Kudo
@sgovi Is there an entry in your local hosts file?
10-27-2020
12:15 PM
@donno I know how frustrating it is. I have just downloaded a fresh image of HDP 2.6.5, unpacked it, and performed the classic steps, and all looks fine. Here are the steps, in case you missed one:

1. Sandbox settings: Memory, CPU, Network. I enabled the Bridged Adapter, as my laptop is connected to my LAN (a class C network).
2. Uncompressed the HDP 2.6.5 image; on successful extraction, I get the IP and URL with my local IP.
3. Updated my local hosts file.
4. Reset the root password (default is root/hadoop) and the Ambari admin password; this automatically starts the Ambari server.
5. After the successful start-up of Ambari, I can access the Ambari UI using my local LAN IP.
6. All services are up, and you can see the Ambari and HDP versions.

I have demoed this hundreds of times here. I hope you get a successful and happy ending. Happy hadooping
10-27-2020
12:58 AM
@Amn_468 The NameNode is solely responsible for the cluster metadata, so please increase the NameNode heap size and restart the services. Please revert
10-26-2020
11:55 PM
@Amn_468 Increase the Java heap size for the NameNode and Secondary NameNode services; you could be using the default 1 GB setting. As a general rule of thumb, allow at least 1 GB of heap for every 1 million blocks in your cluster:

2 million blocks -> 2 GB heap
3 million blocks -> 3 GB heap
...
n million blocks -> n GB heap

After increasing the Java heap size, restart the HDFS services; that should resolve the issue. Please revert
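The rule of thumb above can be sketched as a small calculation. Note the 1 GB-per-million-blocks ratio is just the heuristic stated here, not an official sizing formula:

```python
import math

def recommended_heap_gb(block_count: int, gb_per_million: int = 1) -> int:
    """Heap estimate from the 1 GB per 1 million blocks rule of thumb,
    rounded up and never below the 1 GB default."""
    return max(1, math.ceil(block_count / 1_000_000) * gb_per_million)

print(recommended_heap_gb(2_000_000))  # 2
print(recommended_heap_gb(2_500_000))  # 3
```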
10-26-2020
02:36 PM
@sriram72 Can you share screenshots? I just posted a response to a similar Sandbox query; please have a look at it. Hope that helps