Support Questions

HDP 2.6 Setup through Ambari fails with NameNode install failure


Explorer

OS: Oracle Linux 7

HDP : 2.6

Setup through Ambari fails during the NameNode install with the error below:

[opc@cls1-host1 ~]$ sudo ls -ld /boot/efi/hadoop/hdfs/namenode
drwx------. 2 root root 4096 Aug 15 03:45 /boot/efi/hadoop/hdfs/namenode

Not sure why Ambari is trying to access the location above. The Ambari installation should take care of this. Otherwise, what is the workaround for this issue?
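For context, the failure is explained by the filesystem backing /boot/efi: the EFI System Partition is normally FAT32 (vfat), which has no concept of POSIX owners or groups, so a chown() against it fails with EPERM even as root. A generic way to check (a sketch; on the affected host, run it against /boot/efi):

```shell
#!/bin/sh
# Print the filesystem type backing a path; a "msdos"/"vfat" answer
# for /boot/efi explains why os.chown() fails with EPERM there.
fs_of() {
    stat -f -c %T "$1"
}

fs_of /              # e.g. "xfs" or "ext2/ext3" on Oracle Linux 7
# fs_of /boot/efi    # on a UEFI host this is typically "msdos" (FAT)
```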

stderr

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 361, in <module>
    NameNode().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 367, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 80, in install
    self.configure(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 120, in locking_configure
    original_configure(obj, *args, **kw)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 87, in configure
    namenode(action="configure", hdfs_binary=hdfs_binary, env=env)
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 98, in namenode
    create_name_dirs(params.dfs_name_dir)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 290, in create_name_dirs
    cd_access="a",
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 199, in action_create
    recursion_follow_links=self.resource.recursion_follow_links, safemode_folders=self.resource.safemode_folders)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 75, in _ensure_metadata
    sudo.chown(path, user_entity, group_entity)
  File "/usr/lib/python2.6/site-packages/resource_management/core/sudo.py", line 40, in chown
    return os.chown(path, uid, gid)
OSError: [Errno 1] Operation not permitted: '/boot/efi/hadoop/hdfs/namenode'
stdout: /var/lib/ambari-agent/data/output-24.txt
2018-08-18 15:27:06,730 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2018-08-18 15:27:06,735 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2018-08-18 15:27:06,736 - Group['hdfs'] {}
2018-08-18 15:27:06,737 - Group['hadoop'] {}
2018-08-18 15:27:06,737 - Group['users'] {}
2018-08-18 15:27:06,738 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-18 15:27:06,739 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-18 15:27:06,740 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-18 15:27:06,741 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-08-18 15:27:06,742 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-18 15:27:06,743 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-08-18 15:27:06,744 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-18 15:27:06,744 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-08-18 15:27:06,745 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2018-08-18 15:27:06,746 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-18 15:27:06,747 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-18 15:27:06,748 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-18 15:27:06,749 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-08-18 15:27:06,750 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-08-18 15:27:06,755 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-08-18 15:27:06,756 - Group['hdfs'] {}
2018-08-18 15:27:06,756 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}
2018-08-18 15:27:06,757 - FS Type: 
2018-08-18 15:27:06,757 - Directory['/etc/hadoop'] {'mode': 0755}
2018-08-18 15:27:06,772 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-08-18 15:27:06,772 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2018-08-18 15:27:06,788 - Repository['HDP-2.6-repo-1'] {'append_to_file': False, 'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.3.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2018-08-18 15:27:06,795 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.3.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-08-18 15:27:06,795 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2018-08-18 15:27:06,796 - Repository['HDP-UTILS-1.1.0.21-repo-1'] {'append_to_file': True, 'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2018-08-18 15:27:06,800 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.3.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.21-repo-1]\nname=HDP-UTILS-1.1.0.21-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-08-18 15:27:06,800 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2018-08-18 15:27:06,800 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-08-18 15:27:06,898 - Skipping installation of existing package unzip
2018-08-18 15:27:06,898 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-08-18 15:27:06,907 - Skipping installation of existing package curl
2018-08-18 15:27:06,907 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-08-18 15:27:06,917 - Skipping installation of existing package hdp-select
2018-08-18 15:27:07,192 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2018-08-18 15:27:07,193 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2018-08-18 15:27:07,212 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2018-08-18 15:27:07,225 - Command repositories: HDP-2.6-repo-1, HDP-UTILS-1.1.0.21-repo-1
2018-08-18 15:27:07,225 - Applicable repositories: HDP-2.6-repo-1, HDP-UTILS-1.1.0.21-repo-1
2018-08-18 15:27:07,227 - Looking for matching packages in the following repositories: HDP-2.6-repo-1, HDP-UTILS-1.1.0.21-repo-1
2018-08-18 15:27:08,823 - Command repositories: HDP-2.6-repo-1, HDP-UTILS-1.1.0.21-repo-1
2018-08-18 15:27:08,823 - Applicable repositories: HDP-2.6-repo-1, HDP-UTILS-1.1.0.21-repo-1
2018-08-18 15:27:08,824 - Looking for matching packages in the following repositories: HDP-2.6-repo-1, HDP-UTILS-1.1.0.21-repo-1
2018-08-18 15:27:10,412 - Package['hadoop_2_6_3_0_235'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-08-18 15:27:10,508 - Skipping installation of existing package hadoop_2_6_3_0_235
2018-08-18 15:27:10,510 - Package['hadoop_2_6_3_0_235-client'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-08-18 15:27:10,520 - Skipping installation of existing package hadoop_2_6_3_0_235-client
2018-08-18 15:27:10,520 - Package['snappy'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-08-18 15:27:10,530 - Skipping installation of existing package snappy
2018-08-18 15:27:10,530 - Package['snappy-devel'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-08-18 15:27:10,540 - Skipping installation of existing package snappy-devel
2018-08-18 15:27:10,542 - Package['hadoop_2_6_3_0_235-libhdfs'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-08-18 15:27:10,552 - Skipping installation of existing package hadoop_2_6_3_0_235-libhdfs
2018-08-18 15:27:10,554 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2018-08-18 15:27:10,558 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2018-08-18 15:27:10,559 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2018-08-18 15:27:10,568 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml
2018-08-18 15:27:10,568 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-08-18 15:27:10,576 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2018-08-18 15:27:10,583 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2018-08-18 15:27:10,584 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-08-18 15:27:10,590 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2018-08-18 15:27:10,590 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2018-08-18 15:27:10,598 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2018-08-18 15:27:10,598 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-08-18 15:27:10,604 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2018-08-18 15:27:10,611 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2018-08-18 15:27:10,611 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-08-18 15:27:10,618 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2018-08-18 15:27:10,625 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2018-08-18 15:27:10,625 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2018-08-18 15:27:10,668 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2018-08-18 15:27:10,675 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2018-08-18 15:27:10,675 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2018-08-18 15:27:10,696 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2018-08-18 15:27:10,700 - Directory['/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2018-08-18 15:27:10,701 - Directory['/boot/efi/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2018-08-18 15:27:10,701 - Changing owner for /boot/efi/hadoop/hdfs/namenode from 0 to hdfs
2018-08-18 15:27:10,701 - Changing group for /boot/efi/hadoop/hdfs/namenode from 0 to hadoop

Command failed after 1 tries
6 Replies

Re: HDP 2.6 Setup through Ambari fails with NameNode install failure

Mentor

@ranjith ranjith

The EFI system partition (also called the ESP or EFISYS) is an OS-independent partition that stores the EFI bootloaders, applications, and drivers to be launched by the UEFI firmware. It is mandatory for UEFI boot.

Like /home, this is not a valid mount point for HDFS data, because it's a protected filesystem.

You MUST remove this mount point from both the NameNode and DataNode directories:

/boot/efi/hadoop/hdfs/namenode and /boot/efi/hadoop/hdfs/datanode, if it exists

Go to HDFS --> Configs --> Settings; see the attached screenshot.

85681-namenode.jpg

HTH
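As a quick sanity check after removing the entries, you could verify that no HDFS storage directories remain under the ESP (a sketch; the paths are the ones from this thread):

```shell
#!/bin/sh
# Print any HDFS storage directories that still live under the ESP.
# No output means the bad paths are gone.
check_esp_dirs() {
    for d in /boot/efi/hadoop/hdfs/namenode /boot/efi/hadoop/hdfs/datanode; do
        if [ -e "$d" ]; then
            echo "still present: $d"
        fi
    done
}

check_esp_dirs
```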


Re: HDP 2.6 Setup through Ambari fails with NameNode install failure

Explorer

@ranjith ranjith /boot/efi/hadoop/hdfs/namenode is not a valid Namenode directory.

To correct this go to Ambari > HDFS > Config

Then change the value of the namenode directory from:

/boot/efi/hadoop/hdfs/namenode

to:

/hadoop/hdfs/namenode
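For reference, after the change the rendered hdfs-site.xml should carry the corrected property (the `final` flag matches what the install log above shows for dfs.namenode.name.dir):

```xml
<!-- hdfs-site.xml: NameNode metadata directory after the fix -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/hadoop/hdfs/namenode</value>
  <final>true</final>
</property>
```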

Please let me know how it goes


Re: HDP 2.6 Setup through Ambari fails with NameNode install failure

Explorer

Thanks for your replies. The issue here is that the setup fails during installation through the Ambari wizard. I cannot change those settings until the install finishes; in fact, I could not even view that settings page before the install completed, even though it errors out. Right now it fails at the NameNode install, errors out, and the UI wizard does not move forward. Can you please advise?


Re: HDP 2.6 Setup through Ambari fails with NameNode install failure

Mentor

@ranjith ranjith

During the Ambari install you do have the option to change these parameters; I am absolutely sure of that. I think you just missed the option visually. It's under the Customize Services step, for both HDFS and YARN (remember to remove /boot/efi/hadoop/yarn/* too). I can't remember the exact screen, but it is definitely after the host registration step.

I have attached my HDP 3.0 screenshot; please have a look.


Re: HDP 2.6 Setup through Ambari fails with NameNode install failure

Explorer

Create the directory where it should be and link the /boot/... path to it; once the installation is done, change it from the HDFS config.

Another option is changing it in the Ambari DB.
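A sketch of that workaround, using hypothetical placeholder paths under /tmp so it can be dry-run. On the real host, substitute /hadoop/hdfs/namenode and the configured /boot/efi/... path. One caveat: a FAT-formatted ESP cannot hold symlinks or POSIX ownership, so this only helps if the configured path actually lives on a regular filesystem.

```shell
#!/bin/sh
# Placeholder paths for a dry run; on the real host these would be
# /hadoop/hdfs/namenode and the /boot/efi/... path from the config.
NAME_DIR=${NAME_DIR:-/tmp/hadoop-hdfs-namenode}
LINK_PATH=${LINK_PATH:-/tmp/boot-efi-namenode}

mkdir -p "$NAME_DIR"
chmod 755 "$NAME_DIR"
# chown hdfs:hadoop "$NAME_DIR"   # on the real host; needs root + hdfs user
ln -sfn "$NAME_DIR" "$LINK_PATH"  # point the configured path at the real dir
```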

Re: HDP 2.6 Setup through Ambari fails with NameNode install failure

Rising Star

In the installation process there is a step to configure services. You can go back to that step, correct the paths for the NameNode directory and DataNode directory, save that config, and then proceed with the installation.

Note: Please upvote and accept the answer if you find it useful
