Isilon OneFS and Hortonworks deployment

Hi everybody,

I'm trying to tie together Isilon and Hortonworks using this guide: https://www.emc.com/collateral/TechnicalDocument/docu71396.pdf

When I run the create_users.sh script, I can't see where it puts the passwd and group files. When I execute "cat <accesszone>.passwd" it finds nothing. I've looked everywhere with no luck, and this leads to deployment failure.

Thank you very much.


New Contributor

The isilon_create_users.sh script needs to be run as root on Isilon. So when you download the script from https://github.com/Isilon/isilon_hadoop_tools and place it on Isilon in /ifs/scripts, you would run it from that directory (using "test" as an example zone name):

bash ./isilon_create_users.sh --dist hwx --zone test

Script output shown below:

Info: Hadoop distribution: hwx
Info: will put users in zone: test
Info: HDFS root: /ifs/test
Info: passwd file: test.passwd
Info: group file: test.group
SUCCESS -- Hadoop users created successfully!
Done!

At this point, the HDP system accounts are created for you on Isilon under the "test" zone and the two reference files test.passwd and test.group are available in the /ifs/scripts directory for viewing.

If you plan on using NFS on Isilon from your HDP cluster as well, the user and group IDs must match for NFS (not required for HDFS). The reference files test.passwd and test.group would then be appended to the /etc/passwd and /etc/group files on the Ambari host as well as all the other hosts in the cluster to maintain UID/GID synchronization.
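As a sketch, that append step can be scripted. This assumes the generated files are named test.passwd and test.group as in the example above; it skips any user or group name that already exists in the target file, so it is safe to re-run. Back up /etc/passwd and /etc/group before running anything like this.

```shell
# merge_entries SRC DST: append lines from SRC whose name field (the
# first colon-delimited field) is not already present in DST.
# A sketch only -- verify the result before relying on it.
merge_entries() {
    src="$1"; dst="$2"
    while IFS= read -r line; do
        name="${line%%:*}"
        # Skip the line if the user/group name already exists in DST.
        if ! cut -d: -f1 "$dst" | grep -qx -- "$name"; then
            printf '%s\n' "$line" >> "$dst"
        fi
    done < "$src"
}

# Example usage (run on each HDP host as root):
# merge_entries test.passwd /etc/passwd
# merge_entries test.group  /etc/group
```

Because duplicates are skipped, running it twice leaves the files unchanged the second time.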

Hello Bruno, thank you for your quick answer. I ran the two scripts and appended the two generated files to /etc/passwd and /etc/group on the HWX hosts, but it shows this error:

######################################
# Host Checks Report
#
# Generated: Tue Apr 18 2017 09:41:37 GMT+0100 (CET)
######################################
######################################
# Hosts
#
# A space delimited list of hosts which have issues.
# Provided so that administrators can easily copy hostnames into scripts, email etc.
######################################
HOSTS
hdp-master.pfe.test.com
######################################
# Users
#
# A space delimited list of users who should not exist.
# Provided so that administrators can easily copy paths into scripts, email etc.
# Example: userdel hdfs
######################################
USERS

New Contributor

This is expected and can be ignored. Basically we are pre-creating all the system users and groups on the HDP hosts to match the users and groups created on Isilon (needed for NFS purposes only). On a fresh install, HDP does not expect any of the users and groups to exist yet, hence the messages you are getting, which can be ignored. Just proceed with the installation even when you see the HOST CHECK messages above. All other system checks should pass.

Note: You only need to copy the users and groups to the HDP hosts if you plan on using NFS with Isilon; NFS requires UIDs/GIDs to be in sync, while HDFS alone does not. If using only HDFS, let HDP create the UIDs/GIDs for you during installation and you will not see the messages above during the HOST CHECK. You still have to create the users and groups on Isilon regardless.

Hello Bruno, this issue was resolved; I just added -append-cluster-name and it worked. But the deployment failed, and when I look at the ambari-server log it essentially shows these errors:

WARN [qtp-ambari-agent-38] HeartBeatHandler:411 - Received registration request from host with non compatible agent version, hostname=hdfs-hdp.pfe.test.com, agentVersion=1.7.0, serverVersion=2.5.0.3
19 avr. 2017 15:23:05,467 INFO [ambari-client-thread-24] AmbariMetaInfo:1423 - Stack HDP-2.0.6.GlusterFS is not active, skipping VDF

[ambari-action-scheduler] ExecutionCommandWrapper:185 - Unable to lookup the cluster by ID; assuming that there is no cluster and therefore no configs for this execution command: Cluster not found, clusterName=clusterID=-1

Thank you very much!

New Contributor

Glad to hear you resolved your initial issue.

As far as the non compatible agent version, it sounds like you may have forgotten to set the ODP version on Isilon, i.e. isi hdfs settings modify --odp-version=2.5.0.3-19 (I'm assuming this is your HDP version) --zone=<zone>. If you did set this, then your DNS server may be missing information for Isilon. You have to make sure you have IN A records and IN PTR records for each Isilon IP address in your assigned Isilon IP pool for your Hadoop Access Zone. Make sure you delegate the SC Zone to Isilon on your DNS server using an NS record. Test with nslookup or dig and ping the SC Zone name and make sure you get alternating IP addresses. Each IP address should have a PTR record pointing to the SC Zone name.
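For example, you could spot-check the DNS delegation like this (smartconnect.zone.example.com and the 10.0.0.x addresses are placeholders; substitute your SC Zone name and pool IPs):

```shell
# Repeated lookups should return alternating IPs from the Isilon pool:
nslookup smartconnect.zone.example.com
nslookup smartconnect.zone.example.com

# Each pool IP should have a PTR record pointing back at the SC Zone name:
dig -x 10.0.0.11 +short
dig -x 10.0.0.12 +short

# And the zone name itself should answer pings:
ping -c 3 smartconnect.zone.example.com
```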

Also, I'm assuming you made the appropriate changes in Ambari before deploying HDP with Isilon:

On the Customize Services screen, for the HDFS service, on the Advanced settings tab, update the following settings in the Advanced hdfs site settings section:

a. Change dfs.namenode.http-address to the FQDN of the SmartConnect Zone name followed by port 8082 (instead of 50070).

b. Change dfs.namenode.https-address to the FQDN of the SmartConnect Zone name followed by port 8080 (instead of 50470).

c. Add a property named dfs.client-write-packet-size in the Custom hdfs-site field.

d. Set dfs.client-write-packet-size to 131072.

e. Change the dfs.datanode.http.address port from 50075 to 8082. This setting prevents an error that generates a traceback in ambari-server.log each time you log in to the Ambari server.
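Written out as hdfs-site.xml properties, the steps above would look roughly like this (a sketch only; mycluster.example.com is a placeholder for your SmartConnect Zone FQDN, and the datanode entry keeps the default 0.0.0.0 bind address with only the port changed):

```xml
<property>
  <name>dfs.namenode.http-address</name>
  <value>mycluster.example.com:8082</value>
</property>
<property>
  <name>dfs.namenode.https-address</name>
  <value>mycluster.example.com:8080</value>
</property>
<property>
  <name>dfs.client-write-packet-size</name>
  <value>131072</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:8082</value>
</property>
```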

The latest version of the installation guide is located at Isilon Hadoop Info Hub.

Hello Bruno! Thank you very much, I resolved these issues; it was essentially a version-compatibility problem between Ambari and Isilon. Now I'm just having an issue installing Ambari Metrics. Here is the output of the log:

stderr: Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_collector.py", line 148, in <module>
    AmsCollector().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)

...

resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install ambari-metrics-collector' returned 1.
No Presto metadata available for Updates-ambari-2.4.0.1
Error downloading packages:
  ambari-metrics-collector-2.4.0.1-1.x86_64: [Errno 256] No more mirrors to try.

Shall I uncheck the Ambari Metrics service for Hortonworks?

When I ran yum repolist, it shows the Updates-ambari repo, so I thought the package exists.

Do you have an idea how to resolve this issue?

Thank you very much!

Greetings.

New Contributor

Just to be clear: do not install Ambari Metrics on Isilon. Isilon OneFS v8.0.1.1 provides Isilon-specific updates to Ambari Metrics automatically via OneFS. Just make sure you use OneFS 8.0.1.x code, as it has the latest updates for Ambari Metrics.

If you are just adding Ambari Metrics to HDP and you didn't select it during the initial install, use the Add Service wizard in Ambari to add Ambari Metrics to your master node (not Isilon!). If the Add Service wizard is failing, check your /etc/yum.repos.d to make sure you are pointing to the correct repositories for your version of HDP. Also make sure you have the appropriate internet access for yum to reach the external repositories.
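If the repo file does look right, stale yum metadata is a common cause of the "No more mirrors to try" error you pasted; clearing the cache and retrying is a low-risk first step (run as root on the host where the install failed):

```shell
# Clear cached repo metadata, rebuild it, then retry the install.
yum clean all
yum makecache
yum -y install ambari-metrics-collector
```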
