Member since: 06-09-2016
Posts: 529
Kudos Received: 129
Solutions: 104
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1671 | 09-11-2019 10:19 AM
 | 9197 | 11-26-2018 07:04 PM
 | 2389 | 11-14-2018 12:10 PM
 | 5089 | 11-14-2018 12:09 PM
 | 3051 | 11-12-2018 01:19 PM
07-26-2018
01:25 PM
@Tahine Bara No, that is not possible. Please review the documentation here: https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.0.0/bk_ambari-upgrade/content/ambari_upgrade_guide.html

Important: Ambari 2.7 only supports fully managing an HDP 3.0 cluster. If you are running any other HDP version, you must first upgrade to HDP 2.6 using Ambari 2.6, then upgrade to Ambari 2.7 and use it to upgrade your cluster to HDP 3.0. Not all cluster management operations are supported when using Ambari 2.7 with HDP 2.6; please see the next section of the guide for the limitations.

HTH *** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
07-25-2018
01:42 PM
@Stanislav Nikitin Usually you will see these errors when the Spark interpreter is not able to start/launch a Spark application in YARN. This could be due to several reasons. Could you try restarting your sandbox and testing again? You can also check the Spark interpreter log under the /var/log/zeppelin directory. HTH
07-24-2018
09:41 PM
@Nikil Katturi So you are using scopt to parse the arguments. The key is to pass the arguments after the application jar, which I think you are doing, but perhaps there is a problem with the SPARKLE_JARS entry right before it. You can review this for more information: https://stackoverflow.com/questions/40535304/how-to-pass-parameters-properties-to-spark-jobs-with-spark-submit

I tested with the SparkPi example like this:

/usr/hdp/current/spark2-client/bin/spark-submit --master local --class org.apache.spark.examples.SparkPi --driver-memory 512m --executor-memory 512m --num-executors 2 --executor-cores 1 examples/jars/spark-examples*.jar --job="Hello"

And I got an error indicating the argument was passed to the main function as expected, which means spark-submit understood this was an argument to the application and did not try to parse it itself. So in your case I suggest you check the order of the different parameters for spark-submit, use the example from the link above, and make sure the application jar comes last. Then, in the application itself, try printing the arguments to stdout to troubleshoot in case the error is coming from your custom code. HTH
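The "print the arguments to stdout" check suggested above can be sketched like this; the script name and the --job flag are illustrative, not part of your actual application:

```python
# args_check.py -- a minimal sketch of the stdout check suggested above.
# Anything placed AFTER the application jar on the spark-submit command
# line is forwarded untouched to the application and shows up in
# sys.argv; --job here is just an example argument, not a spark-submit
# option.
import sys

def received_args(argv):
    """Print and return the arguments exactly as the application received them."""
    print("application received:", argv)
    return argv

if __name__ == "__main__":
    # e.g. spark-submit ... your-app.jar --job=Hello
    # would print: application received: ['--job=Hello']
    received_args(sys.argv[1:])
```

If the printed list is empty, the arguments were placed before the jar and consumed (or rejected) by spark-submit itself.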
07-24-2018
08:41 PM
@Sriram Did it work? Please keep me posted 🙂
07-24-2018
08:28 PM
@Nanda Kumar Yes, both are compatible and supported! You can install both in the same cluster and use them side by side. If you are satisfied with the answer, please remember to login and mark the answer as "Accepted". Thanks!
07-24-2018
08:24 PM
@Nikil Katturi There is no --job option for Spark 2. I imagine this flag is defined by the Spark application you created, right? And you intend to pass it as an argument to that application. I see you have added --verbose to the spark-submit command, so I was expecting to see more output, hopefully before the ERROR. Since you are redirecting the output to a log file, perhaps you can attach that file to the post?
07-24-2018
08:12 PM
@Nanda Kumar There are major changes between both the Hive versions and the Spark versions, so you need to decide which to install, and it is possible to install both in the same cluster. For Hive, the differences between 1.2.1 and 2.1 are significant; in case you would like to take a look, please check here: https://hortonworks.com/blog/announcing-apache-hive-2-1-25x-faster-queries-much/ For Spark 1.6.x and 2.x there are major differences as well; you can read more here: https://hortonworks.com/blog/welcome-apache-spark-2-0/ HTH *** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
07-24-2018
02:50 PM
@Sriram So, to summarize: in order for impersonation to work in a non-kerberized environment for the Zeppelin JDBC (Hive) interpreter, please follow the steps here: https://community.hortonworks.com/articles/113228/how-to-enable-user-impersonation-for-jdbc-interpre-1.html There is no need to enable the global settings; just follow the steps listed above with the defaults. I just tested this in my environment and it is working fine. HTH *** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
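For reference, the usual shape of that setup looks roughly like the fragment below. This is a hedged sketch of typical values, not an exact copy of the linked article; the hostname and port are placeholders:

```
# Sketch of a typical Zeppelin JDBC (Hive) impersonation setup in a
# non-kerberized cluster (placeholders, verify against the linked article):
#
# Interpreter binding:  "Per User" in "isolated" mode
# User Impersonate:     checked
#
hive.driver    org.apache.hive.jdbc.HiveDriver
hive.url       jdbc:hive2://<hiveserver2-host>:10000
# Leave the fixed credentials empty so the logged-in Zeppelin user is
# impersonated instead of a single shared "hive" user:
hive.user      (empty)
hive.password  (empty)
```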
07-24-2018
02:35 PM
@Sriram Actually, for non-Kerberos environments, here are the instructions to set up user impersonation: https://community.hortonworks.com/articles/113228/how-to-enable-user-impersonation-for-jdbc-interpre-1.html HTH *** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
07-24-2018
01:25 PM
@Sriram So, based on the Zeppelin configuration for the JDBC interpreter, I see you are using the hive user (this explains why you can see all databases and have full access). Please review the documentation on how to configure the Zeppelin JDBC interpreter for impersonation, as I mentioned above; you can also check this documentation: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_zeppelin-component-guide/content/config-hive-access.html HTH *** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.