
Spark2 History Server Can't Start on HDP 2.5

Contributor

Hi, I am having trouble starting the Spark2 History Server in Ambari. Below is the standard error output.

stderr: /var/lib/ambari-agent/data/errors-3723.txt

2017-12-01 11:26:34,759 - Found multiple matches for stack version, cannot identify the correct one from: 2.5.3.0-37, 2.6.3.0-235
2017-12-01 11:26:34,759 - Cannot copy spark2 tarball to HDFS because stack version could be be determined.
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/SPARK2/2.0.0/package/scripts/job_history_server.py", line 103, in <module>
    JobHistoryServer().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/SPARK2/2.0.0/package/scripts/job_history_server.py", line 56, in start
    spark_service('jobhistoryserver', upgrade_type=upgrade_type, action='start')
  File "/var/lib/ambari-agent/cache/common-services/SPARK2/2.0.0/package/scripts/spark_service.py", line 65, in spark_service
    make_tarfile(tmp_archive_file, source_dir)
  File "/var/lib/ambari-agent/cache/common-services/SPARK2/2.0.0/package/scripts/spark_service.py", line 38, in make_tarfile
    os.remove(output_filename)
TypeError: coercing to Unicode: need string or buffer, NoneType found
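
The first two log lines suggest that two stack versions (2.5.3.0-37 and 2.6.3.0-235) are registered on this host. A quick way to confirm, assuming the standard HDP layout under /usr/hdp, is:

# list the installed stack directories
ls /usr/hdp
# list the versions hdp-select is aware of
hdp-select versions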

10 REPLIES

Super Guru

@Ashikin,

Can you please list the files under this directory (/usr/hdp)?
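
For example (the exact listing will vary by installation):

# list the stack directories present on this host
ls /usr/hdp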

Thanks,

Aditya

Contributor

Hi Aditya, here it is:

[root@host ~]# ls /usr/hdp/2.5.3.0-37
atlas             HDP-LICENSE.txt  ranger-hbase-plugin  spark2
datafu            HDP-NOTICES.txt  ranger-hdfs-plugin   storm
etc               hive             ranger-hive-plugin   storm-slider-client
hadoop            hive2            ranger-kafka-plugin  tez
hadoop-hdfs       hive-hcatalog    ranger-storm-plugin  tez_hive2
hadoop-mapreduce  kafka            ranger-yarn-plugin   usr
hadoop-yarn       livy             slider               zeppelin
hbase             pig              spark                zookeeper



Super Guru

@Ashikin,

1) Do you have this folder as well (/usr/hdp/2.6.3.0-235)?

2) Did you try an upgrade or downgrade of HDP on this cluster?

3) Did you have an HDP 2.6.3 setup on this node by any chance and not do a proper cleanup while deleting the cluster?

If none of the above applies, can you try running the below and see if it works fine:

cd /usr/hdp/2.5.3.0-37/spark2/jars
# create tar file from existing jars
tar -czvf spark2-hdp-yarn-archive.tar.gz *
# put the new tar file in hdfs
hdfs dfs -put spark2-hdp-yarn-archive.tar.gz /hdp/apps/2.5.3.0-37/spark2
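
Note that the put assumes the target directory already exists in HDFS; you can verify that first, for example:

# check that the target directory exists before uploading
hdfs dfs -ls /hdp/apps/2.5.3.0-37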

Contributor

I downgraded HDP from 2.6 to 2.5 and did a proper cleanup. I followed your steps, but there is an error stating:

put: `/hdp/apps/2.5.3.0-37/spark2': No such file or directory: `hdfs://hostname:8020/hdp/apps/2.5.3.0-37/spark2'

Contributor

Yes, I also have the folder /usr/hdp/2.6.3.0-235.

Super Guru

@Ashikin,

Can you try running 'hdp-select' and attach the output? If this command returns multiple versions in the output (i.e. HDP-2.5.3.0-37 and HDP-2.6.3.0-235), can you try running the below:

hdp-select set all 2.5.3.0-37

and then run

hdp-select versions

Contributor

The following is the result for hdp-select:

[root@host ~]# hdp-select
accumulo-client - None
accumulo-gc - None
accumulo-master - None
accumulo-monitor - None
accumulo-tablet - None
accumulo-tracer - None
atlas-client - 2.5.3.0-37
atlas-server - 2.5.3.0-37
falcon-client - None
falcon-server - None
flume-server - None
hadoop-client - 2.5.3.0-37
hadoop-hdfs-datanode - 2.5.3.0-37
hadoop-hdfs-journalnode - 2.5.3.0-37
hadoop-hdfs-namenode - 2.5.3.0-37
hadoop-hdfs-nfs3 - 2.5.3.0-37
hadoop-hdfs-portmap - 2.5.3.0-37
hadoop-hdfs-secondarynamenode - 2.5.3.0-37
hadoop-hdfs-zkfc - 2.5.3.0-37
hadoop-httpfs - None
hadoop-mapreduce-historyserver - 2.5.3.0-37
hadoop-yarn-nodemanager - 2.5.3.0-37
hadoop-yarn-resourcemanager - 2.5.3.0-37
hadoop-yarn-timelineserver - 2.5.3.0-37
hbase-client - 2.5.3.0-37
hbase-master - 2.5.3.0-37
hbase-regionserver - 2.5.3.0-37
hive-metastore - 2.5.3.0-37
hive-server2 - 2.5.3.0-37
hive-server2-hive2 - 2.5.3.0-37
hive-webhcat - 2.5.3.0-37
kafka-broker - 2.5.3.0-37
knox-server - None
livy-server - 2.5.3.0-37
mahout-client - None
oozie-client - None
oozie-server - None
phoenix-client - None
phoenix-server - None
ranger-admin - None
ranger-kms - None
ranger-tagsync - None
ranger-usersync - None
slider-client - 2.5.3.0-37
spark-client - 2.5.3.0-37
spark-historyserver - 2.5.3.0-37
spark-thriftserver - 2.5.3.0-37
spark2-client - 2.5.3.0-37
spark2-historyserver - 2.5.3.0-37
spark2-thriftserver - 2.5.3.0-37
sqoop-client - None
sqoop-server - None
storm-client - 2.5.3.0-37
storm-nimbus - 2.5.3.0-37
storm-slider-client - 2.5.3.0-37
storm-supervisor - 2.5.3.0-37
zeppelin-server - 2.5.3.0-37
zookeeper-client - 2.5.3.0-37
zookeeper-server - 2.5.3.0-37
[root@slot2 ~]# hdp-select set all 2.5.3.0-37
[root@slot2 ~]# hdp-select versions
2.5.3.0-37
2.6.3.0-235
[root@slot2 ~]# hdp-select set all 2.6.3.0-235
[root@slot2 ~]# hdp-select
accumulo-client - None
accumulo-gc - None
accumulo-master - None
accumulo-monitor - None
accumulo-tablet - None
accumulo-tracer - None
atlas-client - 2.6.3.0-235
atlas-server - 2.6.3.0-235
falcon-client - None
falcon-server - None
flume-server - None
hadoop-client - 2.6.3.0-235
hadoop-hdfs-datanode - None
hadoop-hdfs-journalnode - None
hadoop-hdfs-namenode - None
hadoop-hdfs-nfs3 - None
hadoop-hdfs-portmap - None
hadoop-hdfs-secondarynamenode - None
hadoop-hdfs-zkfc - None
hadoop-httpfs - None
hadoop-mapreduce-historyserver - None
hadoop-yarn-nodemanager - None
hadoop-yarn-resourcemanager - None
hadoop-yarn-timelineserver - None
hbase-client - 2.6.3.0-235
hbase-master - 2.6.3.0-235
hbase-regionserver - 2.6.3.0-235
hive-metastore - 2.6.3.0-235
hive-server2 - 2.6.3.0-235
hive-server2-hive2 - 2.6.3.0-235
hive-webhcat - None
kafka-broker - 2.6.3.0-235
knox-server - None
livy-server - None
mahout-client - None
oozie-client - None
oozie-server - None
phoenix-client - None
phoenix-server - None
ranger-admin - None
ranger-kms - None
ranger-tagsync - None
ranger-usersync - None
slider-client - None
spark-client - 2.6.3.0-235
spark-historyserver - 2.6.3.0-235
spark-thriftserver - 2.6.3.0-235
spark2-client - 2.6.3.0-235
spark2-historyserver - 2.6.3.0-235
spark2-thriftserver - 2.6.3.0-235
sqoop-client - None
sqoop-server - None
storm-client - 2.6.3.0-235
storm-nimbus - 2.6.3.0-235
storm-slider-client - None
storm-supervisor - 2.6.3.0-235
zeppelin-server - 2.6.3.0-235
zookeeper-client - None
zookeeper-server - None


There is no output when I run hdp-select set all 2.5.3.0-37.

The following is the result if I run hdp-select versions:

[root@host ~]# hdp-select versions
2.5.3.0-37
2.6.3.0-235
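
As far as I can tell, hdp-select versions simply reflects the version directories present under /usr/hdp, so 2.6.3.0-235 keeps showing up as long as that directory exists:

# the listed versions correspond to these directories
ls -d /usr/hdp/2*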

Super Guru

@Ashikin,

1) Can you try moving the directory /usr/hdp/2.6.3.0-235 to some other folder (see the sketch after this list)? Make sure that only the 2.5.3.0-37 folder is left under /usr/hdp.

2) Then run hdp-select set all 2.5.3.0-37

3) Now run hdp-select versions. It should return only 2.5.3.0-37.

4) Restart the Spark2 History Server.
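
A minimal sketch of step 1, assuming /tmp as the holding area (any location outside /usr/hdp works):

# move the stray 2.6.3.0-235 stack directory out of /usr/hdp (destination is just an example)
mv /usr/hdp/2.6.3.0-235 /tmp/hdp-2.6.3.0-235.bak
# confirm that 2.5.3.0-37 is the only version directory left
ls /usr/hdp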

If it still fails, then try the manual steps for creating the tar file which I mentioned above.

For the failure which you mentioned while running the manual steps, you can try creating these folders manually:

hdfs dfs -mkdir /hdp/apps/2.5.3.0-37/
hdfs dfs -mkdir /hdp/apps/2.5.3.0-37/spark2
hdfs dfs -chown -R hdfs:hdfs /hdp/apps/2.5.3.0-37
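
Once the folders exist, you can re-run the upload from the manual steps and verify it, for example:

# re-upload the archive built earlier, now that the target directory exists
hdfs dfs -put /usr/hdp/2.5.3.0-37/spark2/jars/spark2-hdp-yarn-archive.tar.gz /hdp/apps/2.5.3.0-37/spark2/
# verify the tarball is in place
hdfs dfs -ls /hdp/apps/2.5.3.0-37/spark2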

Contributor

It works like a charm. Thanks a lot, Aditya 🙂