Support Questions
Find answers, ask questions, and share your expertise

Issue In Installing Ambari. CentOS6

New Contributor

Hi guys,

I am facing an issue with the installation of Ambari. The details are as follows.

1. I am using CentOS6

2. I have created a local repo.

3. I have two data nodes and one master/name node.

4. The Ambari server is running on the master node.

5. Installation goes fine until the step "Install, Start and Test". After hitting the Deploy button the process starts, but one or two of the three hosts fail, seemingly at random and for different reasons every time. Sometimes two of the hosts go blue but one of them fails.

Reasons for failure that get displayed:

* Failed to install the services (sometimes Hive, hdp-select, Hadoop) [I encounter these reasons most often]

* Heartbeat lost [rare]

Am I missing something? Please help! Thanks in advance.

Re: Issue In Installing Ambari. CentOS6

@chaitanya ekre

Please provide the error logs for a better understanding of the issue.
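For install failures like these, the most useful logs are usually the Ambari agent log on the failing host and the server log on the master. A quick sketch (these are the stock default paths on CentOS; adjust if you relocated the logs):

```shell
# Last lines of the Ambari server log (on the master node):
tail -n 50 /var/log/ambari-server/ambari-server.log 2>/dev/null \
    || echo "no ambari-server log on this host"

# Last lines of the Ambari agent log (on the host that failed):
tail -n 50 /var/log/ambari-agent/ambari-agent.log 2>/dev/null \
    || echo "no ambari-agent log on this host"
```

The agent log on a host that lost its heartbeat is usually the most informative place to start.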

Re: Issue In Installing Ambari. CentOS6

New Contributor

@Sridhar Reddy Thanks for the prompt reply; the issue is solved.

I have one more query. Please refer to the details given below and the attached screenshot.

Total 3 machines:

1 master - 1 TB total space. Root occupies 50 GB and the remainder is for /home.

2 data nodes - 2 TB total space. Root occupies 50 GB and the remainder is for /home.

At the time of CentOS installation I allocated 50 GB to root (the maximum limit specified in the installation process), formatted as ext4.

After installing the HDP cluster, Ambari is issuing a warning about hard disk space, since by default it uses the /usr/hdp directory. I now want to increase the host disk usage capacity on the master node. How do I do that without having to uninstall any service, data, or the OS? Is there any way?


issue.png
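To see the numbers behind the warning, you can check which filesystem backs the default install prefix and how much HDP itself occupies:

```shell
# Which filesystem /usr (and therefore the default /usr/hdp install
# prefix) lives on, and how full it is:
df -h /usr

# How much of that space is HDP itself:
du -sh /usr/hdp 2>/dev/null || echo "/usr/hdp not present on this host"
```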

Re: Issue In Installing Ambari. CentOS6

New Contributor

One possible solution would be to reconfigure the LVM for /home. If I understand correctly, you have about 950 GB assigned to /home. HDP will use little of this, so you can reduce it to, say, 50 GB using:

lvreduce

This should free up space to create a new logical volume, say /usr_tmp.

Create it (100 GB or more), format it, and mount it.

Shut everything down and switch to run level 1 (init 1).

This should now allow you to copy all the data in /usr to /usr_tmp.

Delete the original /usr directory and remount your new /usr_tmp logical volume as /usr.

A reboot might be a wise move at this point. You should then see that /usr/hdp has a lot more space.
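The steps above can be sketched roughly as below. The volume group and logical volume names (vg0, lv_home, lv_usr_tmp) are assumptions — verify yours with `vgs` and `lvs` first. One important detail: when shrinking, the filesystem must be resized before the logical volume (resize2fs before lvreduce), otherwise data is destroyed. This is destructive either way, so back up first.

```shell
# Sketch of the resize-and-swap procedure. ASSUMPTIONS: the volume group
# is named vg0 and /home is the logical volume lv_home -- check with
# `vgs` / `lvs` before running anything. Destructive: back up first.
VG=vg0
if lvdisplay "/dev/$VG/lv_home" >/dev/null 2>&1; then
    # 1. Shrink /home: unmount, check, shrink the filesystem FIRST,
    #    then shrink the logical volume to match.
    umount /home
    e2fsck -f "/dev/$VG/lv_home"
    resize2fs "/dev/$VG/lv_home" 50G
    lvreduce -L 50G "/dev/$VG/lv_home"
    mount /home

    # 2. Create, format and mount the new volume for the future /usr.
    lvcreate -L 100G -n lv_usr_tmp "$VG"
    mkfs.ext4 "/dev/$VG/lv_usr_tmp"
    mkdir -p /usr_tmp
    mount "/dev/$VG/lv_usr_tmp" /usr_tmp

    # 3. In single-user mode (init 1), copy /usr across, preserving
    #    permissions and ownership, then swap the mounts in /etc/fstab
    #    and reboot.
    cp -ax /usr/. /usr_tmp/
else
    echo "Assumed VG/LV names not found on this system; adjust before running." >&2
fi
```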

You might want to repeat this for the hdfs.datanode.data.dir location. NOTE: you might find that, by default, Ambari discovered all disk mounts and assigned an entry for each of them in this config option. On a development VM with just one virtual disk, you should assign just one directory or logical volume. EXAMPLE: /hdp_data/
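To check what that option is currently set to, the underlying hdfs-site.xml property is dfs.datanode.data.dir (Ambari exposes it under the HDFS configs as the DataNode directories setting), and it can be queried from the command line:

```shell
# Print the directories the DataNodes are configured to use.
# dfs.datanode.data.dir is the hdfs-site.xml property behind the
# Ambari setting mentioned above.
hdfs getconf -confKey dfs.datanode.data.dir 2>/dev/null \
    || echo "hdfs CLI not on PATH -- check the HDFS configs in the Ambari UI instead"
```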