Member since: 05-27-2014
Posts: 14
Kudos Received: 4
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3687 | 03-28-2017 04:37 PM |
| | 7839 | 09-14-2016 02:05 PM |
| | 6101 | 08-18-2016 05:41 PM |
| | 2911 | 05-23-2016 06:15 PM |
| | 4807 | 03-14-2016 06:07 PM |
03-28-2017
04:37 PM
There is no plan to support Docker with CDH at the moment.
09-14-2016
02:05 PM
The new error message, a RejectedExecutionException, is most likely due to tasks being submitted to an executor that has already been lost/killed and therefore terminated, hence you see these types of messages (a good read on RejectedExecutionException can be found here [1]):
===
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@6143b0a3 rejected from java.util.concurrent.ThreadPoolExecutor@4eef216[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1170]
===
[1] https://examples.javacodegeeks.com/core-java/util/concurrent/rejectedexecutionexception/java-util-concurrent-rejectedexecutionexception-how-to-solve-rejectedexecutionexception
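The same failure mode is easy to reproduce outside Spark: work submitted to an executor that has already been shut down is rejected. A minimal sketch in Python, using `concurrent.futures.ThreadPoolExecutor` as a stand-in for the JVM thread pool in the stack trace above:

```python
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=1)
executor.shutdown()  # the executor is now terminated, like the one in the stack trace

try:
    # submitting to a dead executor is rejected;
    # Java throws RejectedExecutionException here, Python raises RuntimeError
    executor.submit(print, "too late")
except RuntimeError as exc:
    print("rejected:", exc)
```

The fix on the Spark side is not to catch this exception but to stop submitting work to executors that have been lost, which is what the scheduler normally handles.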
08-18-2016
05:41 PM
1) Increase the executor and executor memory to 5GB and 3GB respectively to fix the OutOfMemory issue
2) Change two properties so that the retry will not be on the same node:
a. spark.scheduler.executorTaskBlacklistTime=3600000
b. spark.task.maxFailures=10
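The settings above can all be passed on the `spark-submit` command line; a sketch, assuming the two sizes refer to executor memory and driver memory respectively (that mapping, and the application arguments, are illustrative):

```shell
spark-submit \
  --executor-memory 5g \
  --driver-memory 3g \
  --conf spark.scheduler.executorTaskBlacklistTime=3600000 \
  --conf spark.task.maxFailures=10 \
  ...
```

Note that `spark.scheduler.executorTaskBlacklistTime` keeps a failed task off the same executor for the given number of milliseconds, so combined with a higher `spark.task.maxFailures` the retries get a chance to land elsewhere.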
05-23-2016
06:15 PM
One way to work around the 2GB limitation is to increase the number of partitions.
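As a back-of-the-envelope check, you can estimate how many partitions keep each one safely under the 2GB limit. A small sketch (the helper name and the 50% headroom are arbitrary choices for illustration, not Spark APIs):

```python
import math

def min_partitions(total_bytes, limit_bytes=2 * 1024**3, headroom=0.5):
    """Smallest partition count keeping each partition below the limit, with headroom."""
    target = int(limit_bytes * headroom)  # aim well under 2 GB per partition
    return max(1, math.ceil(total_bytes / target))

# e.g. a 100 GiB shuffle needs at least 100 partitions at ~1 GiB each
print(min_partitions(100 * 1024**3))
```

In practice you would then set this as the partition count on the repartition or shuffle that hits the limit.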
03-14-2016
06:07 PM
1 Kudo
1) Currently, Spark does not migrate the retry to a different node (it may, but only by chance). There is work in progress to add node blacklisting, but that hasn't been committed yet (https://issues.apache.org/jira/browse/SPARK-8426).
2) Task failure - an exception is encountered while running a task, e.g. user code throws an exception, or something external to the task fails, such as Spark being unable to read from HDFS, etc.
Job failure - if a particular task fails 4 times, Spark gives up and cancels the whole job.
Stage failure (this is the trickiest) - this happens when a task attempts to read the *shuffle* data from another node. If it fails to read that shuffle data, it assumes the remote node is dead (the failure may be due to a bad disk, a network error, a bad node, a node overloaded with other tasks and not responding fast enough, etc.). This is when Spark decides it needs to regenerate the input data, so it marks the stage as failed and reruns the previous stage that generated the input data. If the stage retry fails 4 times, Spark gives up, assuming the cluster has an issue.
3) No great answer to this one. The best answer is really just using "yarn logs -applicationId <applicationId>" to get all the logs in one file so it's a bit easier to search through to find errors (rather than having to click through the logs one by one).
4) No, you don't need any setting for that. Spark should be resilient to single-node failures. That said, there could be bugs in this area. If you find that is not the case, please provide the applicationId and cluster information so that I can collect logs and pass them on to our Spark team to analyze.
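For point 3, the usual workflow looks like this (the application ID is a placeholder; substitute your own):

```shell
yarn logs -applicationId application_1234567890123_0001 > app.log
grep -iE 'error|exception' app.log
```

Redirecting to a single file makes the aggregated container logs greppable instead of clicking through each container in the UI.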
12-04-2015
10:45 AM
N/A since the customer opted to use Scalding to implement the solution instead of Spark.
08-12-2015
08:50 AM
1 Kudo
This support is currently planned for the C6 timeframe, which is early 2016.
08-04-2015
04:39 PM
The following steps can be done to get/set configurations:
==== Oozie ActionService Executor Extension Classes ====
>>> from cm_api.api_client import ApiResource
>>> print ApiResource('nightly54-1.vpc.cloudera.com').get_all_clusters()[0].get_all_services()[4].get_all_roles()[0].get_config(view='full')['oozie_executor_extension_classes']
: oozie_executor_extension_classes = none
>>> print ApiResource('nightly54-1.vpc.cloudera.com').get_all_clusters()[0].get_all_services()[4].get_all_roles()[0].update_config({'oozie_executor_extension_classes':'oozie_test.class'})
>>> print ApiResource('nightly54-1.vpc.cloudera.com').get_all_clusters()[0].get_all_services()[4].get_all_roles()[0].get_config(view='full')['oozie_executor_extension_classes']
: oozie_executor_extension_classes = oozie_test.class
====================
==== Oozie SchemaService Workflow Extension Schemas ====
>>> from cm_api.api_client import ApiResource
>>> print ApiResource('nightly54-1.vpc.cloudera.com').get_all_clusters()[0].get_all_services()[4].get_all_roles()[0].get_config(view='full')['oozie_workflow_extension_schemas']
: oozie_workflow_extension_schemas = ssh-action-0.1.xsd,hive-action-0.3.xsd,sqoop-action-0.3.xsd,shell-action-0.2.xsd,shell-action-0.1.xsd
>>> ApiResource('nightly54-1.vpc.cloudera.com').get_all_clusters()[0].get_all_services()[4].get_all_roles()[0].update_config({'oozie_workflow_extension_schemas':'ssh-action-0.1.xsd,hive-action-0.3.xsd,sqoop-action-0.3.xsd,shell-action-0.2.xsd,shell-action-0.1.xsd,oozie-test-action.xsd'})
>>> print ApiResource('nightly54-1.vpc.cloudera.com').get_all_clusters()[0].get_all_services()[4].get_all_roles()[0].get_config(view='full')['oozie_workflow_extension_schemas']
: oozie_workflow_extension_schemas = ssh-action-0.1.xsd,hive-action-0.3.xsd,sqoop-action-0.3.xsd,shell-action-0.2.xsd,shell-action-0.1.xsd,oozie-test-action.xsd
===================
Hardcoded indices are used in calls such as "get_all_clusters()[0]" for brevity. A for-loop would be needed to search for the specific value and return the object for the next call, etc... [1]. For future reference, all the modules can be found at ".../cm_api/endpoints."
[1] http://cloudera.github.io/cm_api/docs/python-client
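The for-loop mentioned above can be reduced to a small generic lookup helper; a sketch (`find_by` is a hypothetical name, not part of cm_api):

```python
def find_by(items, attr, value):
    """Return the first object whose attribute equals value, or None if absent."""
    return next((item for item in items if getattr(item, attr, None) == value), None)

# e.g. instead of the hardcoded get_all_services()[4], pick the service by type:
# oozie = find_by(api.get_all_clusters()[0].get_all_services(), 'type', 'OOZIE')
```

This keeps scripts working even when the service order in the API response changes between clusters.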
07-07-2015
07:31 AM
1. All classes related to the custom action need to be in /var/lib/oozie.
2. The main class and its dependencies need to be in the sharelib [1] directory.
[1] http://blog.cloudera.com/blog/2014/05/how-to-use-the-sharelib-in-apache-oozie-cdh-5/
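The two steps above can be sketched as shell commands, assuming a CDH-style sharelib layout (the jar names, host, and sharelib path are illustrative):

```shell
# 1) custom action executor classes go on the Oozie server host
cp my-custom-action.jar /var/lib/oozie/

# 2) the main class and its dependencies go into the sharelib in HDFS,
#    then tell Oozie to pick up the new sharelib
hdfs dfs -put my-main.jar /user/oozie/share/lib/lib_<timestamp>/oozie/
oozie admin -oozie http://oozie-host:11000/oozie -sharelibupdate
```

After the sharelib update, the Oozie server still needs a restart for the executor-extension classes in /var/lib/oozie to take effect.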
06-04-2015
09:57 AM
1 Kudo
- The custom action jar needs to be on the Oozie server, hence it needs to go into the /var/lib/oozie directory.
- The main class needs to be in the sharelib:
http://blog.cloudera.com/blog/2012/12/how-to-use-the-sharelib-in-apache-oozie/
http://blog.cloudera.com/blog/2014/05/how-to-use-the-sharelib-in-apache-oozie-cdh-5/