Member since: 10-24-2015
Posts: 171
Kudos Received: 379
Solutions: 23
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2641 | 06-26-2018 11:35 PM
 | 4348 | 06-12-2018 09:19 PM
 | 2874 | 02-01-2018 08:55 PM
 | 1443 | 01-02-2018 09:02 PM
 | 6754 | 09-06-2017 06:29 PM
04-11-2017
08:22 PM
3 Kudos
@Ryan Suarez Exit code 143 indicates the container was terminated, which is usually related to memory/GC issues. Your default mapper/reducer memory settings may not be sufficient for a large data set, so try raising the AM, map, and reduce memory when invoking a large YARN job.
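As a sketch, the memory settings can be raised per invocation; the jar name, class, paths, and sizes below are all placeholders to tune for your cluster, and the job must implement `Tool` for the `-D` properties to be picked up:

```shell
# Illustrative sizes -- tune to your cluster's container limits.
# Each heap (-Xmx) is kept ~20% below its container size for headroom.
hadoop jar my-job.jar com.example.MyJob \
  -Dyarn.app.mapreduce.am.resource.mb=4096 \
  -Dmapreduce.map.memory.mb=4096 \
  -Dmapreduce.map.java.opts=-Xmx3276m \
  -Dmapreduce.reduce.memory.mb=8192 \
  -Dmapreduce.reduce.java.opts=-Xmx6553m \
  /input /output
```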
04-04-2017
06:53 PM
7 Kudos
@Saravanan Selvam, In YARN mode you can control the total number of executors for an application with the --num-executors option. If you do not specify --num-executors explicitly, Spark typically starts one executor on each NodeManager. Spark also has a feature called dynamic resource allocation, which scales the set of cluster resources allocated to your application up and down based on the workload. This way you can make sure the application does not over-utilize resources. http://spark.apache.org/docs/1.2.0/job-scheduling.html#dynamic-resource-allocation
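A minimal sketch of both approaches; the application class, jar, and executor counts are placeholders, and dynamic allocation on YARN also requires the external shuffle service to be enabled on the NodeManagers:

```shell
# Fixed executor count:
spark-submit --master yarn \
  --num-executors 10 \
  --executor-memory 4g --executor-cores 2 \
  --class com.example.MyApp myapp.jar

# Or let Spark scale executors with the workload:
spark-submit --master yarn \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=20 \
  --class com.example.MyApp myapp.jar
```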
04-03-2017
05:56 PM
1 Kudo
@Bhavin Tandel, when are you hitting these errors? Can you please explain the steps that lead to them?
03-31-2017
11:30 PM
@Kevin Ng, can you please check the cluster configuration for SPNEGO authentication? Guidelines are below. https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.0/bk_Ambari_Security_Guide/content/ch_enable_spnego_auth_for_hadoop.html
03-31-2017
09:51 PM
1 Kudo
@Juan Manuel Nieto, can you please check whether a YARN/MR job related to this Oozie workflow is still running?
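For example, running applications can be listed and a lingering one killed by id; the application id below is a placeholder:

```shell
# List YARN applications still in RUNNING state:
yarn application -list -appStates RUNNING

# Kill a leftover job by its application id:
yarn application -kill application_1490000000000_0001
```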
03-31-2017
09:44 PM
1 Kudo
@pooja khandelwal, what is the mapred.child.java.opts property set to? Can you please try increasing its value?
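As an illustration, the property can be overridden per job; the jar, class, paths, and heap size are placeholders, and the job must implement `Tool` for the `-D` property to be applied:

```shell
# mapred.child.java.opts sets the JVM options for map/reduce child tasks.
# -Xmx2048m is an example value; keep it below the task's container size.
hadoop jar my-job.jar com.example.MyJob \
  -Dmapred.child.java.opts=-Xmx2048m \
  /input /output
```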
03-31-2017
09:34 PM
1 Kudo
@Kevin Ng, if kinit is succeeding, then this can be a configuration issue related to Ranger KMS. Make sure that KMS is configured properly in your cluster. Refer to the thread below for the configuration details. https://community.hortonworks.com/questions/28052/exception-while-executing-insert-query-on-kerberos.html
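To rule out the Kerberos side first, the ticket cache can be inspected:

```shell
# Show the current principal and cached tickets; an empty or
# expired cache means the kinit did not take effect.
klist
```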
03-31-2017
09:30 PM
1 Kudo
@Kevin Ng, follow the steps below. Suppose you want to run this application as user "a":
sudo su a
kinit -kt <a keytab> <a principal>
./spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode client ../lib/spark-examples*.jar 100
03-31-2017
08:24 PM
1 Kudo
@n c, There can be multiple reasons for this issue.
1) Make sure that you have an odd number of ZooKeeper servers (for example, 3).
2) Make sure that the ports the ZooKeeper servers listen on are open and actually bound by the ZK processes.
3) Check the firewall settings between hosts to make sure they can communicate with each other.
This is also a good read: http://stackoverflow.com/questions/13316776/zookeeper-connection-error
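The checks above can be sketched with a few commands; the hostname and port are examples, and "ruok"/"stat" are ZooKeeper's built-in four-letter commands:

```shell
# A healthy server answers "imok":
echo ruok | nc zk1.example.com 2181

# "stat" reports the server's mode (leader/follower) and connections:
echo stat | nc zk1.example.com 2181

# On the ZK host itself, confirm the port is bound:
netstat -tlnp | grep 2181
```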
03-31-2017
06:13 PM
1 Kudo
@Bhavin Tandel, are you using an HDP cluster? Can you please check whether you have followed the steps in the article below? https://community.hortonworks.com/articles/80059/how-to-configure-zeppelin-livy-interpreter-for-sec.html