
HDP 2.6.0 on IBM Power - NameNode Install Error

New Contributor

Hi,

We're installing HDP 2.6.0 on IBM Power servers, through Ambari 2.5.

We have 8 nodes: 1 EdgeNode, 3 MasterNodes, and 4 DataNodes. We're also trying to install all the Hadoop services that can be installed through Ambari. We changed some of the proposed service node placements because Ambari tends to put a lot of them on the EdgeNode.

We're running into a problem where the NameNode service fails to install, which makes the whole installation fail; you can see the full error log attached.

In summary, it looks like a Python script tries to grep the HDP version but gets nothing back. If we delete the /usr/hdp/hadoop/ directory, it works: the grep command returns the HDP version. But somewhere in the installation process the /usr/hdp/hadoop directory is recreated, which makes the grep command fail again.

# /usr/bin/ambari-python-wrap /usr/bin/hdp-select versions
ERROR: Unexpected file/directory found in /usr/hdp: hadoop

We obviously don't want to hardcode the HDP version in the Python script.

namenode-install-error.txt

1 ACCEPTED SOLUTION

Super Collaborator
@François Vienneau Binette

It looks like the problem is with the hdp-select versions command. hdp-select reviews the directories under /usr/hdp on each host before installing the packages.

/usr/hdp should not contain any directories other than the HDP version directories and the current/share directories, as below.

[root@hdp1 hdp]# pwd
/usr/hdp
[root@hdp1 hdp]# ls
2.5.0.0-1245  current

There can be multiple version directories under this location in the case of a cluster upgrade. If you have any directories other than these, please move them to another location and make sure the "hdp-select versions" command doesn't return any error before starting the installation.
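The check above can be sketched as a small shell function. This is my own illustration, not part of hdp-select: the function name check_hdp_root and the version-directory pattern are assumptions, and the root is a parameter so it can be tried against a scratch directory first.

```shell
# Hypothetical pre-flight check (not part of Ambari or hdp-select):
# list anything under an HDP root that "hdp-select versions" would
# reject. Only version directories (e.g. 2.5.0.0-1245) and the
# current/share directories are expected there.
check_hdp_root() {
    dir="${1:-/usr/hdp}"
    for entry in "$dir"/*; do
        [ -e "$entry" ] || continue
        name=$(basename "$entry")
        case "$name" in
            current|share) ;;               # bookkeeping dirs, allowed
            [0-9]*.[0-9]*.[0-9]*-*) ;;      # version dirs like 2.5.0.0-1245
            *) echo "unexpected: $name" ;;  # would trip hdp-select versions
        esac
    done
}
```

After moving any reported directories aside, re-run "hdp-select versions" to confirm it no longer errors before retrying the installation.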


4 REPLIES


Looks like this is failing while running the hdp-select script. The post below also talks about a similar issue:

https://community.hortonworks.com/questions/71864/error-while-adding-service.html

Do an ls -l /usr/hdp/current and see if there are any unwanted (extra) directories present inside it.


New Contributor

Thanks @Namit Maheshwari @rguruvannagari

We managed to solve our problem with your help. I'll explain what we did to make the installation successful, in case it helps someone. In the end, it had nothing to do with the Linux distro or the hardware (IBM Power); the issue was the default configuration for HDFS.

First, we deleted by hand every directory that was not /usr/hdp/2.6.0.x-xxxx/, /usr/hdp/current/, or /usr/hdp/share/: everything that looked like a service directory. Our problem was a /usr/hdp/hadoop/ directory that should not have been there, though at that point we didn't understand why the installation process had created it.

Second, we changed the HDFS configuration for the NameNode directories and the DataNode directories.

14560-hdfs-default-paths.png

During the installation process, Ambari set up these default paths, and among the NameNode directories we can see our culprit: /usr/hdp/hadoop. That's probably why the installation script creates this directory: it's defined in the configuration file.

We kept only the local FS /hadoop/hdfs/namenode/ directory for the NameNode (we could have added another one on network storage, but we don't have any).

We also added the paths for our 12 disks per node to the DataNode directories and deleted the others.
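For reference, the resulting hdfs-site.xml properties would look roughly like this. Only /hadoop/hdfs/namenode comes from what we actually configured; the /dataNN mount points are hypothetical placeholders, since the real mount names depend on the disks (one entry per disk, 12 in total in our case):

```xml
<!-- hdfs-site.xml (sketch): local paths only, nothing under /usr/hdp.
     The /dataNN mount points are hypothetical examples, one per disk. -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/hadoop/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data01/hadoop/hdfs/data,/data02/hadoop/hdfs/data,/data03/hadoop/hdfs/data</value>
</property>
```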

14561-hdfs-current-paths.png

I'm just surprised that Ambari proposes "faulty" directories as defaults, but hey, I guess it's not perfect and you still need to know what you're doing!

In the end, the installation process for the NameNodes went through, as did the other services.

Thanks again, François

New Contributor

We also had to change another default configuration, this time for Ambari Metrics:

14677-hbase-temp-dir.png

You can see that, by default, the HBase temporary directory is created inside /usr/hdp/, which it should not be.
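The fix is to point the Ambari Metrics HBase temp dir somewhere outside /usr/hdp. A sketch of the property, where the target path is an example value of mine, not the one from our cluster:

```xml
<!-- ams-hbase-site (sketch): keep the Ambari Metrics HBase temp dir
     out of /usr/hdp so hdp-select doesn't see stray directories there.
     The target path below is an example, not a required value. -->
<property>
  <name>hbase.tmp.dir</name>
  <value>/var/lib/ambari-metrics-collector/hbase-tmp</value>
</property>
```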