Issue with hadoop: -bash: hadoop: command not found

Contributor

Hi team, I am facing a very strange issue with the latest Cloudera installation. I can see all the HDFS directories and files from the web interface, but when I run a simple hadoop fs -ls in a PuTTY shell, it says -bash: hadoop: command not found. Can you please help?

[root@hadoop-vm3 log]# hadoop fs -ls /
-bash: hadoop: command not found

Version: Cloudera Express 5.3.1 (#191 built by jenkins on 20150123-2020 git: b0377087cf605a686591e659eb14078923bc3c83)


/user/hive/warehouse/cdr_test_demo_self_partition

Permission  Owner  Group  Size  Replication  Block Size  Name
drwxrwxrwt  admin  hive   0 B   0            0 B         .hive-staging_hive_2015-02-20_04-40-09_720_8287848305105515146-1
drwxrwxrwt  root   hive   0 B   0            0 B         ttime=2015-02-20
drwxrwxrwt  root   hive   0 B   0            0 B         ttime=2015-02-21


14 REPLIES

Check what /etc/alternatives/hadoop points to; most likely it points to an unavailable parcel. The simplest way to resolve this is to:
- check all files under /var/lib/alternatives for references to invalid parcels
- delete those references, making sure the reference to 5.3.1 is present and is the first option
- restart the Cloudera Manager agent; this will set up the alternatives again
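
For example, on a RHEL-style system those checks could look something like this (adjust the parcel version to whatever your cluster runs; the grep pattern is just one way to spot parcel references):

# where does the command resolve to right now?
ls -l /etc/alternatives/hadoop

# flag state files that reference parcel directories which no
# longer exist on disk
for f in /var/lib/alternatives/*; do
    grep -o '/opt/cloudera/parcels/[^/]*' "$f" | sort -u | while read -r p; do
        [ -d "$p" ] || echo "$f -> missing parcel $p"
    done
done

# after cleaning up the stale entries, restart the agent
service cloudera-scm-agent restart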


Regards,
Gautam Gopalakrishnan

Contributor

Hi Gautam,

Thanks a lot for your reply. I need some more information to fix this: can it be fixed from Cloudera Manager, or do I need to fix it on each machine by running some command?

I checked /etc/alternatives/hadoop and it points to:

hadoop -> /opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/bin/hadoop

In /var/lib/alternatives I also found that hadoop is there, so now I am not sure what to delete.

So my question is how to restore this for all the Hadoop commands (hadoop, hive, etc.). To give you some background: when this issue started, I had a configuration warning in Cloudera Manager that referred to an outdated parcel; I fixed that warning and then this issue began. Is there any way I can fix this from Cloudera Manager? Your help will be really appreciated.


One interesting thing I noticed: when I give the complete path, it is able to list the files:

[root@hadoop-vm3 bin]# /opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/bin/hadoop fs -ls /
Found 4 items
drwxr-xr-x - hbase hbase 0 2015-02-20 05:41 /hbase
drwxrwxr-x - solr solr 0 2015-02-18 04:50 /solr
drwxrwxrwt - hdfs supergroup 0 2015-02-20 06:44 /tmp
drwxr-xr-x - hdfs supergroup 0 2015-02-20 00:54 /user
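
Since the full parcel path works, the break should only be in the command lookup chain: on a healthy parcel node /usr/bin/hadoop is a symlink into /etc/alternatives, which in turn points at the parcel. A quick way to trace it:

ls -l /usr/bin/hadoop /etc/alternatives/hadoop
readlink -f /usr/bin/hadoop    # should end at .../CDH-5.3.1-1.cdh5.3.1.p0.5/bin/hadoop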


==========================================================================

[root@hadoop-vm3 bin]# ls /var/lib/alternatives/
avro-tools hadoop-httpfs-conf hiveserver2 jre_openjdk mapred solr-conf sqoop-create-hive-table sqoop-version
beeline hadoop-kms-conf hive-webhcat-conf kite-dataset mta solrctl sqoop-eval statestored
catalogd hbase hue-conf libnssckbi.so.x86_64 oozie spark-conf sqoop-export whirr
cli_mt hbase-conf impala-conf links oozie-conf spark-executor sqoop-help yarn
cli_st hbase-indexer impalad llama pig spark-shell sqoop-import zookeeper-client
flume-ng hbase-solr-conf impala-shell llamaadmin pig-conf spark-submit sqoop-import-all-tables zookeeper-conf
flume-ng-conf hcat ip6tables.x86_64 llama-conf print sqoop sqoop-job zookeeper-server
hadoop hdfs iptables.x86_64 load_gen pyspark sqoop2 sqoop-list-databases zookeeper-server-cleanup
hadoop-0.20 hive java mahout sentry sqoop2-conf sqoop-list-tables zookeeper-server-initialize
hadoop-conf hive-conf jre_1.6.0 mahout-conf sentry-conf sqoop-codegen sqoop-merge
hadoop-fuse-dfs hive-hcatalog-conf jre_1.7.0 mail senty-conf sqoop-conf sqoop-metastore

==========================================================================

Can you paste the contents of /var/lib/alternatives/hadoop here?

Regards,
Gautam Gopalakrishnan

Contributor

Hi Gautam,

Please find the content below; let me know the next steps.

[root@hadoop-vm3 alternatives]# pwd
/var/lib/alternatives
[root@hadoop-vm3 alternatives]# cat hadoop
auto
/usr/bin/hadoop

/opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/bin/hadoop
10
[root@hadoop-vm3 alternatives]#
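
For reference, the layout of that state file is: the mode (auto) on the first line, the master link (/usr/bin/hadoop) on the second, then each registered alternative path followed by its priority; here there is a single alternative with priority 10. If an agent restart did not recreate the link, it could in principle be re-registered by hand (a sketch only; on RHEL-style systems the tool is called alternatives, and Cloudera Manager normally manages this for you):

alternatives --install /usr/bin/hadoop hadoop \
    /opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/bin/hadoop 10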

That tallies with what I see. Are you able to:
- verify that the file /opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/bin/hadoop actually exists
- restart the Cloudera Manager agent:
  # service cloudera-scm-agent restart
- check whether the /usr/bin/hadoop symlink has been created
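
Put together (the last command should show a symlink again after the restart):

ls -l /opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/bin/hadoop
service cloudera-scm-agent restart
ls -l /usr/bin/hadoop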

Regards,
Gautam Gopalakrishnan

Contributor

Yes, the file does exist and it has the content below:

[root@hadoop-vm3 ~]# cat /opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/bin/hadoop
#!/bin/bash
# Reference: http://stackoverflow.com/questions/59895/can-a-bash-script-tell-what-directory-its-stored-in
SOURCE="${BASH_SOURCE[0]}"
BIN_DIR="$( dirname "$SOURCE" )"
while [ -h "$SOURCE" ]
do
  SOURCE="$(readlink "$SOURCE")"
  [[ $SOURCE != /* ]] && SOURCE="$DIR/$SOURCE"
  BIN_DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
done
BIN_DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
LIB_DIR=$BIN_DIR/../lib

# Autodetect JAVA_HOME if not defined
. $LIB_DIR/bigtop-utils/bigtop-detect-javahome

export HADOOP_LIBEXEC_DIR=//$LIB_DIR/hadoop/libexec

exec $LIB_DIR/hadoop/bin/hadoop "$@"
[root@hadoop-vm3 ~]#


Contributor

Thanks a lot, Gautam. It is working fine now after the restart; I can access the Hadoop file system on the NameNode machine from the command line now. Many thanks for your help!

By the way, should I run this command on the data nodes as well? I logged in to one of the data nodes and it does not recognize hadoop there:

[root@hadoopvm1 ~]# hadoop fs -ls /
-bash: hadoop: command not found

 

Should I run service cloudera-scm-agent restart there as well?

Glad to know it is resolved now. You can try the same procedure on all your
cluster nodes where /usr/bin/hadoop is not symlinked correctly.
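
For a small cluster that can be scripted, something along these lines (the hostnames here are placeholders, substitute your own):

for host in hadoopvm1 hadoopvm2 hadoopvm3; do
    ssh root@"$host" 'service cloudera-scm-agent restart && ls -l /usr/bin/hadoop'
done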

Regards,
Gautam Gopalakrishnan

Expert Contributor

I am having the same problem, but when I opened /var/lib/alternatives I found that the hadoop file and most of the other files are empty, with zero size!
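
Based on the advice earlier in this thread, one way to confirm the empty state files and have the agent rebuild them might be the following sketch (the backup path is illustrative; back the directory up before deleting anything):

# confirm which state files are zero bytes
find /var/lib/alternatives -maxdepth 1 -type f -size 0

# back up the directory, remove the empty files, and let the agent
# re-register the alternatives on restart
cp -a /var/lib/alternatives /root/alternatives-backup
find /var/lib/alternatives -maxdepth 1 -type f -size 0 -delete
service cloudera-scm-agent restart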