Member since: 11-07-2016
Posts: 637
Kudos Received: 253
Solutions: 144
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2720 | 12-06-2018 12:25 PM |
| | 2862 | 11-27-2018 06:00 PM |
| | 2194 | 11-22-2018 03:42 PM |
| | 3567 | 11-20-2018 02:00 PM |
| | 6274 | 11-19-2018 03:24 PM |
12-05-2017 08:40 AM

@Michael Bronson, What is the value of dfs.ha.namenodes.{ha-cluster-name} in your hdfs-site.xml? You can get the {ha-cluster-name} from fs.defaultFS in core-site.xml. Assuming fs.defaultFS is hdfs://hortonworks, then hortonworks is the ha-cluster-name. Thanks, Aditya
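As a hedged sketch (the file path and contents below are made-up samples, not a real cluster's config), the nameservice can be extracted from core-site.xml like this:

```shell
# Write a minimal sample core-site.xml (made-up example matching the post).
cat > /tmp/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hortonworks</value>
  </property>
</configuration>
EOF

# Pull out the fs.defaultFS value, then strip the hdfs:// scheme to get the
# ha-cluster-name (nameservice).
fs_default=$(sed -n 's|.*<value>\(hdfs://[^<]*\)</value>.*|\1|p' /tmp/core-site.xml)
nameservice=${fs_default#hdfs://}
echo "$nameservice"   # prints: hortonworks
```

On a real cluster you would read the actual core-site.xml under /etc/hadoop/conf instead of a sample file, and then look up dfs.ha.namenodes.{ha-cluster-name} in hdfs-site.xml.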
12-04-2017 05:26 PM

@Chaitanya D, Thanks a lot for your kind words. Glad that it helped you. Could you please accept the answer? It will really help other community users.
12-04-2017 05:09 PM

@Michael Bronson, This is because of the dependencies between services, which are defined in role_command_order.json. If a component has no dependencies, it can run in parallel across nodes; within a single node, however, execution is sequential. You can check role_command_order.json for the different services and their dependencies. Run the command below on the Ambari server to find all of these JSON files:

find /var/lib/ambari-server/ -iname role_command_order.json

Thanks, Aditya
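To make the dependency format concrete, here is a sketch with a hypothetical, heavily trimmed role_command_order.json (the real file is much larger; the entry below is illustrative only):

```shell
# Hypothetical, trimmed-down role_command_order.json for illustration only.
cat > /tmp/role_command_order.json <<'EOF'
{
  "general_deps": {
    "HIVE_SERVER-START": ["NODEMANAGER-START", "MYSQL_SERVER-START"]
  }
}
EOF

# Show which start commands HIVE_SERVER-START waits for.
grep -o '"HIVE_SERVER-START": \[[^]]*\]' /tmp/role_command_order.json
```

Each key is a role-command pair, and the array lists the commands that must finish first; that is why some components wait on others instead of starting in parallel.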
12-04-2017 04:14 PM

@thomas cook, It looks like the RPMs got installed. Can you try installing the client manually? Run the command below:

curl -k -u {username}:{password} -H "X-Requested-By:ambari" -i -X PUT -d '{"HostRoles": {"state": "INSTALLED"}}' http://{ambari-host}:{ambari-port}/api/v1/clusters/{clustername}/hosts/{hostname}/host_components/HCAT

Replace the placeholders (username, password, etc.) with the real values. You can check the progress of the installation in the Ambari UI. Thanks, Aditya
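If it helps, here is a sketch of filling in those placeholders with shell variables; every value below is a made-up sample, not a real host or credential:

```shell
# All values here are made-up samples -- substitute your own.
AMBARI_USER=admin
AMBARI_PASS=admin
AMBARI_HOST=ambari.example.com
AMBARI_PORT=8080
CLUSTER=mycluster
TARGET_HOST=worker1.example.com

# Build the endpoint for the HCAT client component on the target host.
URL="http://${AMBARI_HOST}:${AMBARI_PORT}/api/v1/clusters/${CLUSTER}/hosts/${TARGET_HOST}/host_components/HCAT"
echo "$URL"

# The actual request would then be (commented out so the sketch runs offline):
#   curl -k -u "$AMBARI_USER:$AMBARI_PASS" -H "X-Requested-By:ambari" \
#        -i -X PUT -d '{"HostRoles": {"state": "INSTALLED"}}' "$URL"
```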
12-04-2017 08:43 AM

@Kaliyug Antagonist, Can you try installing the hadooplzo package on all the nodes and then restarting the services?

yum install -y hadooplzo hadooplzo-native

Thanks, Aditya
12-04-2017 05:48 AM

@Ashikin,
1) Can you try moving the folder /usr/hdp/2.6.3.0-235 to some other location? Make sure that only the 2.5.3.0-37 folder is left in /usr/hdp.
2) Then run: hdp-select set all 2.5.3.0-37
3) Now run: hdp-select versions. It should return only 2.5.3.0-37.
4) Restart Spark History Server.
If it still fails, try the manual steps for creating the tar file which I mentioned above. For the failure you mentioned while running the manual steps, you can try creating these folders manually:

hdfs dfs -mkdir /hdp/apps/2.5.3.0-37/
hdfs dfs -mkdir /hdp/apps/2.5.3.0-37/spark2
hdfs dfs -chown -R hdfs:hdfs /hdp/apps/2.5.3.0-37
12-04-2017 05:05 AM

@Ashikin, Can you try running 'hdp-select' and attaching the output? If the command returns multiple versions in the output (i.e. HDP-2.5.3.0-37 and HDP-2.6.3.0-235), can you try running the below:

hdp-select set all 2.5.3.0-37

and then run hdp-select versions again.
12-04-2017 04:06 AM

@Ashikin,
1) Do you have this folder as well: /usr/hdp/2.6.3.0-235?
2) Did you try an upgrade/downgrade of HDP on this cluster?
3) Did you by any chance have an HDP 2.6.3 setup on this node and not do a proper cleanup while deleting the cluster?
If none of the above is true, can you try running the below and see if it works fine:

cd /usr/hdp/2.5.3.0-37/spark2/jars
# create tar file from existing jars
tar -czvf spark2-hdp-yarn-archive.tar.gz *
# put the new tar file in hdfs
hdfs dfs -put spark2-hdp-yarn-archive.tar.gz /hdp/apps/2.5.3.0-37/spark2
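Before touching /usr/hdp, the tar step can be dry-run in a scratch directory; the jar names below are stand-ins for illustration, not the real Spark jars:

```shell
# Dry run in a scratch directory with stand-in jars (made-up names).
mkdir -p /tmp/spark2-jars && cd /tmp/spark2-jars
touch spark-core.jar spark-sql.jar

# Same tar invocation as above, against the stand-in jars.
tar -czvf spark2-hdp-yarn-archive.tar.gz *.jar

# List the archive contents to verify it before the hdfs put.
tar -tzf spark2-hdp-yarn-archive.tar.gz
```

Listing the archive first is a cheap way to confirm the jars landed at the top level of the tarball, which is what YARN expects when it unpacks the archive.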
12-03-2017 09:48 AM

@Ashikin, Can you please list the files under the /usr/hdp directory? Thanks, Aditya
12-03-2017 05:05 AM

@Michael Bronson, This looks like a permission issue. /spark2-history should belong to the spark user. You can change the owner as below:

hdfs dfs -chown spark /spark2-history
hdfs dfs -chown spark /spark-history

Thanks, Aditya