Member since: 10-03-2016
Posts: 51
Kudos Received: 1
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 1490 | 09-14-2018 01:30 AM |
|  | 1637 | 08-24-2018 03:37 AM |
|  | 3335 | 08-18-2018 02:19 AM |
12-20-2019
07:16 AM
https://github.com/apache/oozie/blob/9c288fe5cea6f2fbbae76f720b9e215acdd07709/webapp/src/main/webapp/oozie-console.js#L384
09-14-2018
01:30 AM
1 Kudo
You just need to add this property to the workflow.xml:
<property>
    <name>oozie.launcher.mapred.child.java.opts</name>
    <value>-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=8000</value>
</property>
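For context, with suspend=y the launcher's child JVM pauses at startup and listens on port 8000 until a debugger attaches. A minimal way to attach (the host name below is a placeholder for whichever node runs the launcher container) is:
# placeholder host; substitute the node running the launcher container
jdb -connect com.sun.jdi.SocketAttach:hostname=launcher-host.example.com,port=8000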
09-17-2018
05:46 AM
@A C Just to understand: did you run the spark-submit with YARN cluster as the master/deploy mode? If so, check the job properties for the following parameter: ${resourceManager}. Also, here is another example of PySpark + Oozie (using a shell action to submit Spark): https://github.com/hgrif/oozie-pyspark-workflow. Hope this helps.
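As a rough sketch (all host names, ports, and paths below are placeholders, not values from this thread), a job.properties for a yarn-cluster submission could look like:
# placeholder cluster endpoints
nameNode=hdfs://namenode.example.com:8020
resourceManager=resourcemanager.example.com:8032
# run the Spark action in yarn-cluster mode
master=yarn
mode=cluster
oozie.use.system.libpath=true
# placeholder path to the deployed workflow
oozie.wf.application.path=${nameNode}/user/${user.name}/apps/spark-wf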
08-24-2018
03:37 AM
For this problem you have to add spark.yarn.jars to the spark-opts, and its value has to start with "hdfs://".
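For illustration only (the HDFS path is a placeholder, not from this thread), the option can be passed through the Spark action's spark-opts element, for example:
<!-- placeholder HDFS path; point this at your cluster's Spark jars -->
<spark-opts>--conf spark.yarn.jars=hdfs://namenode.example.com:8020/user/oozie/share/lib/spark/*.jar</spark-opts>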
08-21-2018
07:29 PM
@Felix Albani I am facing this issue while running the job in Oozie. Below is the error:
2018-08-21 14:27:16,632 ERROR [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
java.lang.IllegalArgumentException: Invalid ContainerId: container_e63_1532612094367_17160_02_000001
at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:182)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1343)
08-18-2018
02:23 AM
Yeah, I don't know why; I switched to another cluster and everything is fine. So thank you very much!
06-06-2017
01:13 PM
OK, thank you very much. I thought there was in fact a way to remove it; I just didn't know how.
06-06-2017
02:10 PM
You're very welcome! Yes, the setrep command can change the replication factor for existing files, so there is no need to change the value globally. For others' reference, the command is: hdfs dfs -setrep [-R] [-w] <numReplicas> <path>
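For example (the path is just a placeholder), to recursively set the replication factor to 2 and wait until the re-replication finishes:
# -R applies to a directory's contents, -w waits for replication to complete
hdfs dfs -setrep -R -w 2 /user/example/data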
06-07-2017
12:44 AM
Thank you! I will try this.
11-17-2016
03:31 PM
Thank you