Member since: 01-20-2014
Posts: 578
Kudos Received: 102
Solutions: 94
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5091 | 10-28-2015 10:28 PM |
| | 2222 | 10-10-2015 08:30 PM |
| | 4200 | 10-10-2015 08:02 PM |
| | 3193 | 10-07-2015 02:38 PM |
| | 1938 | 10-06-2015 01:24 AM |
03-28-2019
08:17 AM
Facing the same problem:

    [root@hdp1 data]# systemctl status hadoop-hdfs-namenode
    ● hadoop-hdfs-namenode.service - LSB: Hadoop namenode
       Loaded: loaded (/etc/rc.d/init.d/hadoop-hdfs-namenode; bad; vendor preset: disabled)
       Active: failed (Result: exit-code) since Thu 2019-03-28 11:50:58 UTC; 3h 8min ago
         Docs: man:systemd-sysv-generator(8)
      Process: 3230 ExecStart=/etc/rc.d/init.d/hadoop-hdfs-namenode start (code=exited, status=1/FAILURE)

    Mar 28 11:50:49 hdp1 systemd[1]: Starting LSB: Hadoop namenode...
    Mar 28 11:50:49 hdp1 su[3235]: (to hdfs) root on none
    Mar 28 11:50:49 hdp1 hadoop-hdfs-namenode[3230]: starting namenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-namenode-hdp1.out
    Mar 28 11:50:58 hdp1 hadoop-hdfs-namenode[3230]: Failed to start Hadoop namenode. Return value: 1 [FAILED]
    Mar 28 11:50:58 hdp1 systemd[1]: hadoop-hdfs-namenode.service: control process exited, code=exited status=1
    Mar 28 11:50:58 hdp1 systemd[1]: Failed to start LSB: Hadoop namenode.
    Mar 28 11:50:58 hdp1 systemd[1]: Unit hadoop-hdfs-namenode.service entered failed state.
    Mar 28 11:50:58 hdp1 systemd[1]: hadoop-hdfs-namenode.service failed.

    [root@hdp1 data]# systemctl start hadoop-hdfs-namenode
    Job for hadoop-hdfs-namenode.service failed because the control process exited with error code.
    See "systemctl status hadoop-hdfs-namenode.service" and "journalctl -xe" for details.
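The service status above only reports exit status 1; the actual cause is usually in the namenode logs the output points to. A sketch of the usual first diagnostic steps (the log paths and hostname `hdp1` are taken from the output above; adjust for your host):

```shell
# Read the namenode startup output and log for the real stack trace
tail -n 50 /var/log/hadoop-hdfs/hadoop-hdfs-namenode-hdp1.out
tail -n 100 /var/log/hadoop-hdfs/hadoop-hdfs-namenode-hdp1.log

# systemd's own view of the failed unit
journalctl -u hadoop-hdfs-namenode --no-pager -n 50
```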
03-19-2019
07:48 AM
Hi. I found a solution for the same issue: map the hosts file on the local machine to the cluster nodes, and it will work without any problem. Thanks, HadoopHelp
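A minimal sketch of what that hosts-file mapping might look like on the local machine (the IP addresses and hostnames below are made up for illustration; use your cluster's actual values):

```
# /etc/hosts on the local machine -- hypothetical entries
192.168.1.101   node1.cluster.local   node1
192.168.1.102   node2.cluster.local   node2
192.168.1.103   node3.cluster.local   node3
```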
09-24-2018
01:26 PM
I was able to resolve this issue. In my case the IP address was incorrect in the /etc/hosts file. Once it was corrected, I was able to add the host successfully. Thanks, Meher
08-28-2018
03:34 AM
I am just sharing the relevant part of the linked docs, as they contain the instructions on how to enable the HBase balancer via the hbase shell:

Load Balancer

It is assumed that the Region Load Balancer is disabled while the graceful_stop script runs (otherwise the balancer and the decommission script will end up fighting over region deployments). Use the shell to disable the balancer:

    hbase(main):001:0> balance_switch false
    true
    0 row(s) in 0.3590 seconds

This turns the balancer OFF (the command returns the previous state, hence `true`). To reenable, do:

    hbase(main):001:0> balance_switch true
    false
    0 row(s) in 0.3590 seconds

The graceful_stop script will check the balancer and, if it is enabled, will turn it off before it goes to work. If it exits prematurely because of an error, it will not have reset the balancer. Hence, it is better to manage the balancer apart from graceful_stop, reenabling it after you are done with graceful_stop.
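Putting the quoted advice together, the whole decommission could be sketched like this (the regionserver hostname is a made-up example, and the script path depends on your HBase install):

```shell
# Disable the balancer before decommissioning (returns the previous state)
echo "balance_switch false" | hbase shell

# graceful_stop.sh ships in HBase's bin directory; it unloads regions
# from the given regionserver and then stops it
./bin/graceful_stop.sh regionserver1.example.com

# Reenable the balancer ourselves, since graceful_stop will not
# restore it if it exited early on an error
echo "balance_switch true" | hbase shell
```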
08-11-2018
04:37 AM
Please, could you post your hosts file as an example? What did you put there? Thanks a lot.
07-15-2018
12:33 PM
@Sean, @Clint, can we use the mrjob library to execute MapReduce Python code in the Cloudera QuickStart VM? Vidya
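For reference, a minimal mrjob job looks like the sketch below. Whether it runs in the QuickStart VM depends on installing mrjob there (e.g. `pip install mrjob`) and on mrjob being able to find the Hadoop binaries; the filename and input are assumptions for illustration:

```python
# wordcount.py -- minimal mrjob word-count sketch (requires `pip install mrjob`)
from mrjob.job import MRJob


class MRWordCount(MRJob):
    def mapper(self, _, line):
        # Emit each word in the input line with a count of 1
        for word in line.split():
            yield word.lower(), 1

    def reducer(self, word, counts):
        # Sum the per-word counts emitted by the mappers
        yield word, sum(counts)


if __name__ == "__main__":
    MRWordCount.run()
```

Run it locally with `python wordcount.py input.txt`, or against the cluster with `python wordcount.py -r hadoop hdfs:///path/to/input` once mrjob is configured for Hadoop.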
04-30-2018
07:15 AM
It works for me! Thanks.
03-11-2018
09:41 AM
@csguna Regarding #1... I am looking for any additional configuration steps that are required, or confirmation that no additional config is needed. Thanks.
01-03-2018
02:04 PM
1 Kudo
The simple answer is to open up the ports in a bidirectional manner on all the hosts. For instance:

- On each node in cluster A: allow connectivity to 1004 (or 50010 without Kerberos) and 50020 on each datanode in cluster B, as well as 8020 to the namenodes in cluster B.
- On each node in cluster B: allow connectivity to 1004 (or 50010 without Kerberos) and 50020 on each datanode in cluster A, as well as 8020 to the namenodes in cluster A.

However... you are right: where the distcp is executed determines the source/destination. Executing distcp on cluster A will cause a MapReduce job to run on cluster A. Each datanode will (may) run a task that connects to the namenode(s) on cluster B for block locations and then to the datanodes on cluster B for the transfer. I'm not sure whether the node the distcp is executed on needs access as well, so I generally run the distcp on one of the datanodes.
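The steps above could be sketched as follows, assuming a firewalld-based setup (the firewall tooling and the namenode hostnames are assumptions; translate the port list to whatever firewall you actually run):

```shell
# On each node in cluster B, allow cluster A's nodes in (and mirror this on cluster A).
# Ports: 1004 = Kerberized datanode (50010 without Kerberos),
#        50020 = datanode IPC, 8020 = namenode RPC.
firewall-cmd --permanent --add-port=1004/tcp
firewall-cmd --permanent --add-port=50020/tcp
firewall-cmd --permanent --add-port=8020/tcp
firewall-cmd --reload

# Then run distcp from a datanode on the source cluster:
hadoop distcp hdfs://nn-a.example.com:8020/data/src hdfs://nn-b.example.com:8020/data/dst
```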