Unable to install Ambari 2.5.2

Explorer

Problem installing a fresh copy of Ambari after removing the earlier version.

We had Ambari 2.5.1 earlier; we completely removed 2.5.1 and then started installing Ambari 2.5.2.

We face a few issues when we try to install Ambari:

Attached logs.txt

Snippet from the logs:

stderr: /var/lib/ambari-agent/data/errors-14.txt

resource_management.core.exceptions.ExecutionFailed: Execution of 'useradd -m -u 1015 -G hadoop -g hadoop yarn' returned 12. useradd: cannot create directory /home/yarn
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-14.json', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-14.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1', '']

stdout: /var/lib/ambari-agent/data/output-14.txt

Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-14.json', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-14.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1', '']Command failed after 1 tries



As described on this page: https://community.hortonworks.com/questions/103915/problem-in-installing-ambarihdp-in-cluster-witho...

I am able to execute the useradd -m -u 1015 -G hadoop -g hadoop yarn command. However, I was unable to create a directory under /home/yarn, since /home is mounted as NFS. I have added the following entries to /etc/auto.home:
cat /etc/auto.home

* -fstype=nfs homes:/global/export/home/&
hdfs :/usr/local/home/hdfs
yarn :/usr/local/home/yarn
mapred :/usr/local/home/mapred
activity_analyzer :/usr/local/home/activity_analyzer
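For completeness, these are the checks I would use to confirm that /home is under autofs control on this node; the auto.master entry and the reload step are assumptions about a typical setup, not output copied from the machine:

mount | grep home        # shows whether /home is served by autofs/NFS
cat /etc/auto.master     # assumed to contain a line mapping /home to /etc/auto.home
systemctl reload autofs  # pick up the new keys added to /etc/auto.home
ls -ld /home/yarn        # accessing the path should trigger the automount for the new "yarn" key

Note that for the ":/usr/local/home/..." bind entries to resolve, the local target directories such as /usr/local/home/yarn must already exist on the node.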


After updating /etc/auto.home, I removed Ambari Server and Agent following this link: https://community.hortonworks.com/questions/1110/how-to-completely-remove-uninstall-ambari-and-hdp.h...

I have tried to reinstall Ambari, and again I got the same error.

df -h

Filesystem Size Used Avail Use% Mounted on
/dev/mapper/os-root 10G 2.4G 7.7G 24% /
devtmpfs 910M 0 910M 0% /dev
tmpfs 920M 12K 920M 1% /dev/shm
tmpfs 920M 120M 801M 13% /run
tmpfs 920M 0 920M 0% /sys/fs/cgroup
/dev/sda1 488M 127M 326M 29% /boot
/dev/mapper/os-var 4.0G 2.6G 1.5G 64% /var
/dev/mapper/os-tmp 2.0G 33M 2.0G 2% /tmp
/dev/mapper/apps-app 20G 45M 19G 1% /app
tmpfs 184M 0 184M 0% /run/user/48258
homes:/global/export/home/user 2.9T 2.5T 337G 89% /home/user

I am still receiving the same error. What can be done to resolve the issue? Are there any configuration changes that have to be made to resolve it?
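In case it helps whoever picks this up, the failing hook command can be re-run by hand on the affected node to confirm whether the automount change took effect; per the useradd man page, exit code 12 means the home directory could not be created:

id yarn                                       # check whether a partially created yarn user already exists
useradd -m -u 1015 -G hadoop -g hadoop yarn   # same command the before-ANY hook runs
echo $?                                       # 12 here means /home/yarn still cannot be created
ls -ld /home/yarn                             # should exist and be owned by yarn once the mount is right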

6 Replies

Explorer

@Jay Kumar SenSharma Ji, any help on this issue? Kindly help.

Mentor

@Raj ji

Check this link; it answers your query:

https://community.hortonworks.com/answers/148611/view.html

Hope that helps

Explorer

Thanks for the reply. Do you want me to comment out the lines for validation? However, I wasn't able to create any directory under /home in CentOS 7. Will commenting alone help? Could you please help me?
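For reference, I am assuming the "lines" in question are the user-creation calls in the before-ANY hook from the error log; I would locate them first before deciding whether commenting them out even makes sense (path taken from the error output):

grep -rnE "useradd|User\(" /var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/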

Mentor

@Raj ji

If I were you I would change the mount point name to something like /shome rather than tweaking the Python code, because subsequent upgrades could be more frustrating.

As reiterated, tweaking the code is neither good practice nor recommended.
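For illustration only, the change I have in mind is in the autofs master map rather than in any Ambari script; /shome is just the example name from above, and the contents below are a sketch of a typical setup, not your exact configuration:

# /etc/auto.master (sketch)
# before:  /home   /etc/auto.home
# after:
/shome  /etc/auto.home

systemctl reload autofs   # once /home is no longer an automount, useradd -m can create /home/yarn locally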

Explorer

I have edited the /etc/auto.home file. Will that not help? We were able to do a fresh install on another machine without any struggle; since we removed and reinstalled fresh on this one, we are getting this error. I have limited access to change the name of the mount point, which is mounted as NFS, but I will check the possibility of getting that approved. Are there any alternative good ways to tackle this as a best practice?

Mentor

@Raj ji

I am afraid there are only two options:

- Tweak the Python code (NOT recommended!).
- Ask the SysOps team to rename the mount, as sketched above, and notify them of the caveat of using /home as a Hadoop mount point.

Hope that helps
