Member since: 09-17-2015
Posts: 436
Kudos Received: 736
Solutions: 81
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3846 | 01-14-2017 01:52 AM
 | 5749 | 12-07-2016 06:41 PM
 | 6624 | 11-02-2016 06:56 PM
 | 2175 | 10-19-2016 08:10 PM
 | 5693 | 10-19-2016 08:05 AM
12-04-2015
07:37 PM
Sure! I have seen this issue of hdp.version not getting replaced many times, and it is almost always the result of missing hdp.version details in the interpreter settings, zeppelin-env.sh, or spark-defaults.conf. Following the steps from the blog (or using the Ambari service) should resolve it.
12-04-2015
07:06 PM
1 Kudo
Check this earlier question
12-04-2015
05:43 PM
@Dhruv Kumar Instead of pasting the raw .md file into AH, try copying the content from the rendered README.md into AH: it will preserve your GitHub formatting.
12-04-2015
05:19 PM
2 Kudos
@Ameet Paranjape Can you double-check that you modified zeppelin-env.sh as described in the blog? hdp.version should be replaced automatically, without users having to change mapred-site configs (see the note below about the "bad substitution" message). In the zeppelin-env.sh file, add the following.
Note: you will use this port (ZEPPELIN_PORT) to access the Zeppelin Web UI. <HDP-version> corresponds to the version of HDP where you are installing Zeppelin; for example, 2.3.2.0-2950.
export HADOOP_CONF_DIR=/etc/hadoop/conf
export ZEPPELIN_PORT=9995
export ZEPPELIN_JAVA_OPTS="-Dhdp.version=<HDP-version>"
To obtain the HDP version for your HDP cluster, run the following command:
hdp-select status hadoop-client | sed 's/hadoop-client - \(.*\)/\1/'
Add the following properties and settings:
spark.driver.extraJavaOptions -Dhdp.version=2.3.2.0-2950
spark.yarn.am.extraJavaOptions -Dhdp.version=2.3.2.0-2950
Note: make sure that both spark.driver.extraJavaOptions and spark.yarn.am.extraJavaOptions are saved. Without these properties set, the Spark job will fail with a "bad substitution" error.
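For convenience, the two steps above can be combined so the build number never has to be hardcoded; a minimal zeppelin-env.sh sketch (HDP_VERSION is just an illustrative variable name):
# Derive hdp.version from hdp-select instead of hardcoding the build number
HDP_VERSION=$(hdp-select status hadoop-client | sed 's/hadoop-client - \(.*\)/\1/')
export ZEPPELIN_JAVA_OPTS="-Dhdp.version=${HDP_VERSION}"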
Also note that the version of Zeppelin used in the TP blog is actually an early build of 0.5.5. I tested both that build and the released Zeppelin 0.5.5 on HDP 2.3.2, with both Spark 1.4.1 and Spark 1.5.1 TP, using the Zeppelin service, and did not encounter issues running Spark code. The issues I encountered were around Hive (see here for details).
12-04-2015
04:28 AM
Thanks! I will change my script to use the /usr/hdp/current directory so the jar location remains the same across releases:
yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient.jar nnbench -operation create_write
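To sanity-check the new path before running, you can confirm the class is actually in that jar (a quick sketch, assuming the jar location from the command above):
# List the jar contents and look for the nnbench class
jar -tvf /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient.jar | grep -i nnbench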
12-04-2015
03:25 AM
In HDP 2.3.0, this command could run nnbench:
$ yarn jar /usr/hdp/2.3.0.0-2557/hadoop-hdfs/hadoop-hdfs-tests.jar nnbench -operation create_write
In HDP 2.3.2, it seems nnbench is no longer part of hadoop-hdfs-tests.jar:
# yarn jar /usr/hdp/2.3.2.0-2950/hadoop-hdfs/hadoop-hdfs-tests.jar nnbench -operation create_write
Exception in thread "main" java.lang.ClassNotFoundException: nnbench
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:278)
at org.apache.hadoop.util.RunJar.run(RunJar.java:214)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
# jar -tvf /usr/hdp/2.3.2.0-2950/hadoop-hdfs/hadoop-hdfs-tests.jar | grep bench
I also tried searching /usr for jars containing the nnbench class but did not get any results:
# find /usr -iname '*.jar' | xargs -i bash -c "jar -tvf {} | tr / . | grep nnbench && echo {}"
Has this class been renamed or deprecated?
12-02-2015
07:39 PM
Thanks for asking the question... I learned something new 🙂
12-02-2015
12:43 AM
1 Kudo
There are some guidelines available in our docs too: http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_ambari_reference_guide/content/_ams_general_guidelines.html
12-01-2015
06:20 PM
Thanks for sharing. So this would allow users to access the same output/errors available via Ambari when you start/stop a service... but it's possible that the actual root cause would only be available in the service's log, right?
12-01-2015
08:24 AM
1 Kudo
+ @Paul Codding and @bdurai Once Logsearch is available, you should be able to do this pretty easily from the Logsearch UI, because you can view all log messages across all components for the specific time window you are interested in (and filter by error level, include/exclude substrings, etc.). You can try a development version of it using the steps here.