Member since: 04-13-2017
Posts: 5
Kudos Received: 0
Solutions: 0
03-06-2018
12:49 AM
Are you running Spark 1.6 or Spark 2.1? If 2.1, set SPARK_HOME=/usr/hdp/current/spark2-client; if 1.6, use /usr/hdp/current/spark-client instead. Also, make sure minimal-example.conf runs successfully before trying the multi-submit example, so you know your spark-bench environment itself is fine.
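For reference, a minimal sketch of the setup, assuming your spark-bench build reads SPARK_HOME from the shell environment and you are using the bundled example configs (adjust paths to your install):

# Spark 2.1 on HDP
export SPARK_HOME=/usr/hdp/current/spark2-client
# (or, for Spark 1.6)
# export SPARK_HOME=/usr/hdp/current/spark-client

# Verify the environment with the minimal example first
./bin/spark-bench.sh examples/minimal-example.conf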
04-26-2017
08:32 PM
Just to be clear, do not install Ambari Metrics on Isilon; Isilon OneFS v8.0.1.1 provides Isilon-specific updates to Ambari Metrics automatically via OneFS. Just make sure you are on OneFS 8.0.1.x code, as it has the latest updates for Ambari Metrics. If you are simply adding Ambari Metrics to HDP because you didn't select it during the initial install, use the Add Service wizard in Ambari to add Ambari Metrics to your master node (not Isilon!). If the Add Service wizard is failing, check your /etc/yum.repos.d to make sure you are pointing to the correct repositories for your version of HDP, and make sure you have the appropriate internet access for yum to reach those external repositories.
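If the wizard keeps failing, a quick repo sanity check along these lines can help (the file names assume the standard Ambari/HDP repo files; yours may differ):

ls /etc/yum.repos.d/                                                       # look for ambari.repo and HDP*.repo
grep -i baseurl /etc/yum.repos.d/ambari.repo /etc/yum.repos.d/HDP*.repo    # confirm the URLs match your HDP version
yum repolist enabled                                                       # confirm the Ambari/HDP repos resolve from this host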
04-20-2017
01:00 PM
Glad to hear you resolved your initial issue. As for the incompatible agent version, it sounds like you may have forgotten to set the ODP version on Isilon, i.e. isi hdfs settings modify --odp-version=2.5.0.3-19 (I'm assuming this is your HDP version) --zone=<zone>. If you did set this, then your DNS server may be missing information for Isilon: you must have IN A records and IN PTR records for each Isilon IP address in the IP pool assigned to your Hadoop access zone. Make sure you delegate the SmartConnect (SC) zone to Isilon on your DNS server using an NS record. Test with nslookup or dig and ping the SC zone name and make sure you get alternating IP addresses; each IP address should have a PTR record pointing back to the SC zone name (see the quick DNS check sketch after this post). Also, I'm assuming you made the appropriate changes in Ambari before deploying HDP with Isilon. On the Customize Services screen, for the HDFS service, on the Advanced settings tab, update the following settings in the Advanced hdfs-site section:
a. Change dfs.namenode.http-address to the FQDN of the SmartConnect zone name, with the port changed from 50070 to 8082.
b. Change dfs.namenode.https-address to the FQDN of the SmartConnect zone name, with the port changed from 50470 to 8080.
c. Add a property in the Custom hdfs-site field named dfs.client-write-packet-size.
d. Set dfs.client-write-packet-size to 131072.
e. Change the dfs.datanode.http.address port from 50075 to 8082. This setting prevents an error that generates a traceback in ambari-server.log each time you log in to the Ambari server.
The latest version of the installation guide is located at Isilon Hadoop Info Hub.
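As a rough sketch of the DNS checks described above (the zone name and IP below are placeholders; substitute your SmartConnect zone FQDN and an address from your Isilon pool):

nslookup hadoop.isilon.example.com     # run several times; you should see rotating IPs from the Isilon pool
dig +short hadoop.isilon.example.com   # same forward check with dig
dig +short -x 10.1.2.3                 # reverse lookup; the PTR should return the SC zone name
ping -c 1 hadoop.isilon.example.com    # repeat; successive invocations should hit alternating pool IPs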
04-19-2017
08:17 PM
This is expected and can be ignored. Basically we are pre-creating all the system users and groups on the HDP hosts to match the users and groups created on Isilon (needed for NFS purposes only). On a fresh install, HDP does not expect any of those users and groups to exist yet, hence the messages you are getting, which can be ignored. Just proceed with the installation even when you see the HOST CHECK messages above; all other system checks should pass. Note: you only need to copy the users and groups to the HDP hosts if you plan on using NFS with Isilon. NFS requires UIDs/GIDs to be in sync; HDFS alone does not. If using only HDFS, let HDP create the UIDs/GIDs for you during installation and you will not see the messages above during the HOST CHECK. You still have to create the users and groups on Isilon regardless.
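If you do go the NFS route, a quick spot check like this (the hdfs account is just an example) confirms a given UID/GID on an HDP host actually matches the entry in the Isilon-generated passwd file:

id hdfs                      # UID/GID as seen on the HDP host
grep '^hdfs:' /etc/passwd    # should match the hdfs line in the file generated on Isilon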
04-14-2017
02:59 PM
The isilon_create_users.sh script needs to be run as root on Isilon. So when you download the script from https://github.com/Isilon/isilon_hadoop_tools and, say, place it on Isilon in /ifs/scripts, from the /ifs/scripts directory you would run (using "test" as an example zone name):

bash ./isilon_create_users.sh --dist hwx --zone test

Script output shown below:

Info: Hadoop distribution: hwx
Info: will put users in zone: test
Info: HDFS root: /ifs/test
Info: passwd file: test.passwd
Info: group file: test.group
SUCCESS -- Hadoop users created successfully!
Done!

At this point, the HDP system accounts are created for you on Isilon under the "test" zone, and the two reference files test.passwd and test.group are available in the /ifs/scripts directory for viewing. If you also plan on using NFS on Isilon from your HDP cluster, the user and group IDs must match for NFS (not required for HDFS), so the reference files test.passwd and test.group would be appended to the /etc/passwd and /etc/group files on the Ambari host as well as all the other hosts in the cluster to maintain UID/GID synchronization.
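To illustrate that last step, a minimal sketch of the sync, assuming you have copied test.passwd and test.group onto each HDP host (back up first and check for entries that already exist before appending; this is a sketch, not the official procedure):

cp /etc/passwd /etc/passwd.bak && cp /etc/group /etc/group.bak   # back up the originals
cat test.passwd >> /etc/passwd                                   # append the Isilon-generated user entries
cat test.group  >> /etc/group                                    # append the Isilon-generated group entries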