
HDFS Config changes in Cloudera Manager not getting reflected on hosts


Expert Contributor

Hello Everyone,

 

We have recently deployed CDP. I need to fetch data from another of my clusters (which is HDFS HA-enabled) into this cluster using DistCp.

 

To configure the remote nameservice in my CDP cluster, I followed the steps given in this link and added the configuration properties to the "HDFS Service Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml" section of the HDFS configuration in Cloudera Manager.
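For reference, the remote-nameservice properties added to such a safety valve typically look like the snippet below. The nameservice name "remotens", the NameNode IDs, hostnames, and ports are illustrative placeholders, not the values from my cluster; the real values must match the remote cluster's own hdfs-site.xml:

```xml
<!-- Declare the remote nameservice alongside the local one -->
<property>
  <name>dfs.nameservices</name>
  <value>localns,remotens</value>
</property>
<!-- NameNode IDs for the remote HA pair -->
<property>
  <name>dfs.ha.namenodes.remotens</name>
  <value>nn1,nn2</value>
</property>
<!-- RPC addresses of the remote NameNodes (placeholder hosts/ports) -->
<property>
  <name>dfs.namenode.rpc-address.remotens.nn1</name>
  <value>remote-nn1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.remotens.nn2</name>
  <value>remote-nn2.example.com:8020</value>
</property>
<!-- Client-side failover proxy for the remote nameservice -->
<property>
  <name>dfs.client.failover.proxy.provider.remotens</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```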

 

After this, I restarted the services and tried to run the following command from one of my hosts in the cluster:

 

hadoop fs -ls hdfs://<remote-nameservice-name>/

 

But it throws the following error:

21/03/22 13:59:34 WARN fs.FileSystem: Failed to initialize fileystem hdfs://<remote-nameservice-name>/: java.lang.IllegalArgumentException: java.net.UnknownHostException: <remote-nameservice-name>
-ls: java.net.UnknownHostException: <remote-nameservice-name>
Usage: hadoop fs [generic options]
        [-appendToFile <localsrc> ... <dst>]
        [-cat [-ignoreCrc] <src> ...]
        [-checksum <src> ...]
        [-chgrp [-R] GROUP PATH...]
        [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
        [-chown [-R] [OWNER][:[GROUP]] PATH...]
        [-copyFromLocal [-f] [-p] [-l] [-d] [-t <thread count>] <localsrc> ... <dst>]
        [-copyToLocal [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-count [-q] [-h] [-v] [-t [<storage type>]] [-u] [-x] [-e] <path> ...]
        [-cp [-f] [-p | -p[topax]] [-d] <src> ... <dst>]
        [-createSnapshot <snapshotDir> [<snapshotName>]]
        [-deleteSnapshot <snapshotDir> <snapshotName>]
        [-df [-h] [<path> ...]]
        [-du [-s] [-h] [-v] [-x] <path> ...]
        [-expunge [-immediate]]
        [-find <path> ... <expression> ...]
        [-get [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-getfacl [-R] <path>]
        [-getfattr [-R] {-n name | -d} [-e en] <path>]
        [-getmerge [-nl] [-skip-empty-file] <src> <localdst>]
        [-head <file>]
        [-help [cmd ...]]
        [-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...]]
        [-mkdir [-p] <path> ...]
        [-moveFromLocal [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
        [-moveToLocal <src> <localdst>]
        [-mv <src> ... <dst>]
        [-put [-f] [-p] [-l] [-d] [-t <thread count>] <localsrc> ... <dst>]
        [-renameSnapshot <snapshotDir> <oldName> <newName>]
        [-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ...]
        [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
        [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
        [-setfattr {-n name [-v value] | -x name} <path>]
        [-setrep [-R] [-w] <rep> <path> ...]
        [-stat [format] <path> ...]
        [-tail [-f] [-s <sleep interval>] <file>]
        [-test -[defsz] <path>]
        [-text [-ignoreCrc] <src> ...]
        [-touch [-a] [-m] [-t TIMESTAMP ] [-c] <path> ...]
        [-touchz <path> ...]
        [-truncate [-w] <length> <path> ...]
        [-usage [cmd ...]]

Generic options supported are:
-conf <configuration file>        specify an application configuration file
-D <property=value>               define a value for a given property
-fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
-jt <local|resourcemanager:port>  specify a ResourceManager
-files <file1,...>                specify a comma-separated list of files to be copied to the map reduce cluster
-libjars <jar1,...>               specify a comma-separated list of jar files to be included in the classpath
-archives <archive1,...>          specify a comma-separated list of archives to be unarchived on the compute machines

The general command line syntax is:
command [genericOptions] [commandOptions]

Usage: hadoop fs [generic options] -ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...]


I checked the hdfs-site.xml files on the host, and they do not contain the updated configuration.
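One quick way to see which of the expected remote-nameservice keys are missing from a deployed hdfs-site.xml is a small check script like this sketch. The nameservice name "remotens" and the property list are placeholders for illustration, not from the original post:

```python
import xml.etree.ElementTree as ET


def missing_remote_ns_keys(conf_path, ns):
    """Return the remote-nameservice property names absent from conf_path.

    conf_path -- path to an hdfs-site.xml file (e.g. under /etc/hadoop/conf)
    ns        -- the remote nameservice name as configured in the safety valve
    """
    expected = {
        "dfs.nameservices",
        f"dfs.ha.namenodes.{ns}",
        f"dfs.client.failover.proxy.provider.{ns}",
    }
    # Collect every <property><name>...</name></property> entry in the file
    tree = ET.parse(conf_path)
    present = {prop.findtext("name") for prop in tree.getroot().iter("property")}
    return sorted(expected - present)
```

Running it against the host's hdfs-site.xml and the remote nameservice name makes it obvious at a glance whether the safety-valve entries actually reached the host.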

 

What might be the reason for this? Why doesn't Cloudera Manager propagate the configuration change to the hosts in the back end?

Thanks,

Megh

1 ACCEPTED SOLUTION


Re: HDFS Config changes in Cloudera Manager not getting reflected on hosts

Expert Contributor

I added the same configuration to the "HDFS Client Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml" section and restarted the cluster, and the issue is now resolved. The service-level safety valve only affects the HDFS role daemons; the hdfs-site.xml deployed to hosts comes from the client configuration, which is controlled by the client (gateway) safety valve. That is why the first change never appeared in the host-side files. You can confirm the deployed value on a host with "hdfs getconf -confKey dfs.nameservices".

 

Thanks,

Megh

