Member since: 08-15-2019
Posts: 29
Kudos Received: 111
Solutions: 5
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1506 | 09-08-2017 11:30 PM |
| | 2310 | 06-08-2017 07:24 PM |
| | 5959 | 03-28-2017 05:20 PM |
| | 3562 | 03-17-2017 04:27 AM |
| | 2888 | 03-09-2017 11:48 PM |
03-13-2018
09:49 PM
5 Kudos
For the full stack trace of the error from Hive, you could look into the HiveServer2 log, as @Slim mentioned. Since Hive Interactive is enabled, I believe you should look into the hsihiveserver log on the node where HiveServer2 Interactive is running.
01-09-2018
11:44 PM
4 Kudos
By connecting to the web UI over port 10002, one can check the number of live sessions on that instance of HiveServer2.
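As a quick sketch (the host name is a placeholder, and this assumes the default HiveServer2 web UI port of 10002), the UI landing page, which lists active sessions and open queries, can be fetched from the command line:

```shell
# Fetch the HiveServer2 web UI landing page; requires a running
# HiveServer2 with its web UI enabled (hive.server2.webui.port=10002)
curl -s "http://<hs2-host>:10002/" | head
```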
09-08-2017
11:30 PM
8 Kudos
I'm not sure whether your upgraded cluster and the newly installed cluster have the same resources. Please try disabling and then re-enabling the Interactive Query button on the Hive configs page, and restart the Ambari-recommended components if needed. This ensures the newly calculated configs take effect; if the memory requirements are met, hive.llap.io.enabled will be set to true.
08-02-2017
06:10 PM
1 Kudo
One hack for a faster resource reclaim would be to kill the YARN application, retrieving the appid from the RM UI: yarn application -kill <appid>
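As a sketch (the application id shown is hypothetical, and these commands require a live YARN cluster), the appid can also be retrieved from the command line instead of the RM UI:

```shell
# List running YARN applications and note the id of the app to reclaim
yarn application -list -appStates RUNNING

# Kill it to release its containers immediately
yarn application -kill application_1500000000000_0001   # hypothetical appid
```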
07-23-2017
06:50 AM
4 Kudos
Consolidating below some of the errors thrown by the Spark Thrift Server during SQL execution that can be worked around by configuring certain parameters in spark-thrift-sparkconf.conf and hive-site.xml.
Error 1:
Join condition is missing or trivial.
Use the CROSS JOIN syntax to allow cartesian products between these relations.;
Resolution: set spark.sql.crossJoin.enabled to true
Error 2:
Caused by: org.codehaus.janino.JaninoRuntimeException: Code of method "eval(Lorg/apache/spark/sql/catalyst/InternalRow;)Z" of class "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificPredicate" grows beyond 64 KB
Resolution: set spark.sql.codegen.wholeStage to false
Error 3:
java.lang.OutOfMemoryError: Java heap space
Resolution: raise spark.driver.memory (e.g. to 10g) and lower spark.sql.ui.retainedExecutions (e.g. to 5) to reduce driver heap pressure
Error 4:
org.apache.spark.SparkException: Exception thrown in awaitResult: (state=,code=0)
Resolution: set hive.metastore.try.direct.sql to false (in hive-site.xml)
To enable heap dump collection for the Spark driver and executors when debugging out-of-memory errors:
spark.driver.extraJavaOptions: '-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<path-to-dump-file-location>'
spark.executor.extraJavaOptions: '-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<path-to-dump-file-location>'
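Put together, the Spark-side workarounds above can be collected in spark-thrift-sparkconf.conf; a sketch follows (the values are illustrative defaults from this post, not tuned recommendations, and the dump path placeholder must be replaced with a real writable location):

```properties
# Allow Cartesian products instead of failing the query (Error 1)
spark.sql.crossJoin.enabled        true
# Disable whole-stage codegen to avoid the 64 KB generated-method limit (Error 2)
spark.sql.codegen.wholeStage       false
# More driver heap, fewer retained SQL executions in the UI (Error 3)
spark.driver.memory                10g
spark.sql.ui.retainedExecutions    5
# Capture heap dumps on OOM for offline analysis
spark.driver.extraJavaOptions      -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<path-to-dump-file-location>
spark.executor.extraJavaOptions    -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<path-to-dump-file-location>
```

Note that the Error 4 workaround (hive.metastore.try.direct.sql = false) goes in hive-site.xml, not here.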
07-13-2017
10:39 PM
2 Kudos
If the issue is that the number of rows is too high, starting Beeline with beeline --incremental=true will help, since incremental mode prints rows as they arrive instead of buffering the full result set before display.
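A minimal invocation sketch (host, port, database, and user are placeholders; this assumes a reachable HiveServer2 JDBC endpoint):

```shell
# Stream rows incrementally rather than buffering the whole result set
beeline --incremental=true -u "jdbc:hive2://<hs2-host>:10000/default" -n <user>
```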
06-29-2017
09:05 PM
Thanks @Kshitij Badani, I will configure a local user as I am about to try some Spark queries.
06-29-2017
08:29 PM
1 Kudo
Thanks @dhanya, it worked.
06-29-2017
08:22 PM
4 Kudos
I am trying to follow the steps to get Zeppelin running and connect to the UI. The home page has a login button. Is there a default username/password, or should I create one to get to the main page mentioned here: http://zeppelin.apache.org/docs/0.7.0/quickstart/explorezeppelinui.html
Labels: Apache Zeppelin