"Hadoop fs -ls" Produces the Local Filesystem's "ls -la"

Contributor

Yesterday I configured a five-node cluster via Cloudera Manager. I then added another node and was about to create a gateway host template to assign to it through the Add Node workflow when Firefox crashed. After reloading, I manually assigned all the gateway roles (HDFS, MapReduce, HBase, Hive) to the new node.

However, when I run "hadoop fs -ls" on that node, it essentially prints the output of "ls -la" on the local filesystem: whichever local directory I am in, "hadoop fs -ls" lists all of its files, their permissions, and hidden files.

What is the fix for this?

Separately, would it hurt to set up another node and, instead of assigning it gateway roles through Cloudera Manager or installing a Cloudera agent on it, simply install the client packages for HDFS, MapReduce, Hive, HBase, Pig, Oozie, etc. and connect it manually as a "gateway" node?
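Roughly what I have in mind is the sketch below; the package names are my guess from the CDH documentation and "good-node" stands for any already-working host, so treat it as untested:

    # on the prospective gateway node, with the CDH yum repo already configured
    sudo yum install hadoop-client hive hbase pig oozie-client
    # then copy a known-good client configuration from an existing node
    sudo scp -r root@good-node:/etc/hadoop/conf/* /etc/hadoop/conf/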

Thank you.

1 ACCEPTED SOLUTION


Hi Matthew,

After using the host template to assign gateway roles to your host, did you deploy client configuration (the "Deploy Client Configuration" action in CM)? That command rewrites all the files under /etc/hadoop/conf, /etc/hive/conf, etc.
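If you prefer to script it, the same command is exposed through the CM API; a minimal sketch, assuming CM at cm-host:7180, a cluster named "Cluster 1", admin/admin credentials, and an API version your CM release supports (all four are placeholders to adjust):

    # trigger Deploy Client Configuration for the whole cluster
    curl -u admin:admin -X POST \
        'http://cm-host:7180/api/v10/clusters/Cluster%201/commands/deployClientConfig'
    # afterwards the gateway's core-site.xml should point at the real NameNode
    grep -A1 '<name>fs.default' /etc/hadoop/conf/core-site.xml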

Thanks,

Darren


3 REPLIES


Expert Contributor
It means your node is not making calls to the Hadoop filesystem at all; it is effectively running a local-filesystem "ls -a". Check core-site.xml in your Hadoop conf directory for the fs.default.name property (fs.defaultFS in newer releases); its value should be an HDFS URI pointing at your NameNode, e.g. hdfs://<namenode-host>:8020. If it is missing or set to file:///, the client falls back to the local filesystem.
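A quick way to see what the client actually resolves; hdfs getconf ships with the Hadoop client (query fs.defaultFS instead on newer releases):

    # prints the filesystem URI the client will use; file:/// means local fallback
    hdfs getconf -confKey fs.default.name
    # or read the file the client loads directly
    grep -A1 '<name>fs.default' /etc/hadoop/conf/core-site.xml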

The other way, through CM: make sure the HDFS gateway role is installed on the host, then deploy the client configurations.

If you don't see the host among the HDFS gateway instances, add a gateway role to it first and then deploy the client configurations.
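Once that's done, a sanity check on the host should list HDFS rather than your local directory, for example:

    # with a correct default fs this shows HDFS's root (/tmp, /user, ...),
    # not the contents of your local working directory
    hadoop fs -ls /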
Em Jay

New Contributor
Please check the symlinks under /etc/alternatives/ and make sure they point at the correct Hadoop config directory, i.e. the /etc/hadoop/conf that holds the right core-site.xml and friends.

Deploying the client configuration from CM also repairs all of these symlinks.
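Something like this shows where the link currently points; on the CDH hosts I have seen, the alternative for the Hadoop client config is named hadoop-conf:

    # display the alternatives entry for the Hadoop client configuration
    update-alternatives --display hadoop-conf
    # or inspect the symlink directly
    ls -l /etc/alternatives/hadoop-conf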