Member since: 05-22-2018
Posts: 69
Kudos Received: 1
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 3915 | 06-07-2018 05:33 AM |
| | 957 | 05-30-2018 06:30 AM |
06-16-2018 06:28 AM
@Aditya Sirna Thank you, Aditya. Your suggestion worked for me. NOTE:

echo "`date` hi" > /tmp/output ; hdfs dfs -appendToFile <local_directory_path> <hdfs_directory_path>

Regards, Jay.
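A minimal sketch of that two-step pattern with the placeholders filled in, reusing the /tmp/output and HDFS paths that appear elsewhere in this thread; adjust them for your own cluster:

#!/bin/bash
# Write the line to a local file first, since shell redirection
# only targets the local filesystem, then append that file to HDFS.
echo "`date` hi" > /tmp/output
hdfs dfs -appendToFile /tmp/output /user/oozie/output/shell
# Verify the result:
hdfs dfs -cat /user/oozie/output/shell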
06-15-2018 11:07 AM
Hi all, I have a shell script named script.sh, both on HDFS and locally, which contains an echo statement. I can execute script.sh locally and store the output locally, of course. But I want to execute script.sh (whether from local disk or from HDFS) and store the output on HDFS. I have done the following:

script.sh

#!/bin/bash
echo "`date` hi" > /tmp/output

Running bash script.sh succeeds. But if I change the output path, it gives me the following error:

script.sh: line 2: hdfs://<host>:<port>/user/oozie/output/shell: No such file or directory

#!/bin/bash
echo "`date` hi" > hdfs://<HOST>:<PORT>/user/oozie/output/shell

Kindly help me with this. Thank you, Jay.
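The failure happens because shell redirection only writes to the local filesystem and cannot interpret an hdfs:// URL. A minimal sketch of one common workaround, piping the output directly into the HDFS client; the destination path is reused from the error message above:

#!/bin/bash
# "-" tells the HDFS shell to read from stdin instead of a local file.
echo "`date` hi" | hdfs dfs -put - /user/oozie/output/shell
# On subsequent runs, append instead of recreating the file:
echo "`date` hi" | hdfs dfs -appendToFile - /user/oozie/output/shell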
Labels:
- Apache Hadoop
06-14-2018 07:56 AM
@Felix Albani I tried using PuTTY as well, but it doesn't work for me either. Regards, Jay.
06-13-2018 12:55 PM
@Felix Albani No, I meant to say that I reviewed my code and took Oozie out of the picture. After that, I was able to run the spark-submit command, and it works well. The problem is with the Oozie SparkAction. Regards, Jay.
06-13-2018 08:00 AM
@Felix Albani Thanks for responding. I have verified my code; it works well. I can execute spark-submit from the command line:

./bin/spark-submit --class com.apache.<ClassName> --master local[2] /root/<my_jar.jar> /<input_path_of_HDFS> /<output_path_of_HDFS>

But with the SparkAction it gives me errors. Regards, Jay.
06-12-2018 01:50 PM
@Felix Albani No, it doesn't work for me. Regards, Jay.
06-12-2018 07:22 AM
@Felix Albani Hi, I have updated yarn-site.xml with the yarn.nodemanager.delete.debug-delay-sec=600 property, but now I am facing the following error:

18/06/12 06:25:32 ERROR ApplicationMaster: SparkContext did not initialize after waiting for 100000 ms. Please check earlier log output for errors. Failing the application.

And yes, I have changed master=local[2] to master=yarn in job.properties, as that question explained. Regards, Jay.
06-12-2018 06:32 AM
@Vinicius Higa Murakami Sure! I have attached the log below: error.txt Regards, Jay.
06-12-2018 05:18 AM
Hey @Vinicius Higa Murakami, yes, I tried the following:

C:\InputFileWindows>scp -p 2222 datafile.txt root@localhost:
C:\InputFileWindows>scp -p -v 22 datafile.txt root@localhost:
C:\InputFileWindows>scp -p -v 2222 datafile.txt root@localhost:

But it doesn't help; I am facing the same error. Thank you, Jay.
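One detail worth flagging for future readers: in OpenSSH scp the port flag is a capital -P (lowercase -p only preserves file times), so the commands above pass 2222 as a file argument rather than a port. A minimal sketch of the corrected invocation, assuming the sandbox forwards SSH on port 2222:

# -P (capital) selects the port; lowercase -p preserves timestamps.
scp -P 2222 datafile.txt root@localhost: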
06-11-2018 01:13 PM
Hi all, I want to create an Oozie workflow for a Spark action. I have created the workflow, but I am getting the following error:

Main class [org.apache.oozie.action.hadoop.SparkMain], exit code [1]

I searched on this error and saw that the most common cause is the Oozie sharelib, so I installed all new jars and updated the sharelib by running the following commands:

su oozie
oozie admin -sharelibupdate

I verified that the sharelib is installed properly, but none of this has stopped the error from occurring. My workflow files are below.

job.properties

nameNode=hdfs://sandbox.hortonworks.com:8020
jobTracker=sandbox.hortonworks.com:8050
queueName=default
projectRoot=user/root/oozie/sparkoozie
master=local[2]
mode=cluster
class=org.apache.TransformationOper
hiveSite=hive-site.xml
workflowAppUri=${nameNode}/${projectRoot}/lib/TransformationOper.jar
oozie.use.system.libpath=true
oozie.action.sharelib.for.spark=spark,hive
oozie.wf.application.path=${nameNode}/${projectRoot}/

workflow.xml

<workflow-app name="SparkAction" xmlns="uri:oozie:workflow:0.4">
<start to="spark-node"/>
<action name="spark-node">
<spark xmlns="uri:oozie:spark-action:0.1">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<prepare>
<delete path="${projectRoot}/output"/>
</prepare>
<job-xml>${nameNode}/${projectRoot}/hive-site.xml</job-xml>
<configuration>
<property>
<name>mapred.compress.map.output</name>
<value>true</value>
</property>
</configuration>
<master>${master}</master>
<mode>${mode}</mode>
<name>Testing Spark Action</name>
<class>${class}</class>
<jar>${nameNode}/${projectRoot}/lib/TransformationOper.jar</jar>
<arg>INPUT=${nameNode}/${projectRoot}/input/error.log</arg>
<arg>OUTPUT=${projectRoot}/output</arg>
</spark>
<ok to="end"/>
<error to="error"/>
</action>
<kill name="error">
<message>Spark Test WF failed. [${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name ="end"/>
</workflow-app>

command:

oozie job -oozie http://127.0.0.1:11000/oozie -config job.properties -run

I also checked the YARN log:

yarn logs -applicationId application_1528173243110_0007

The following is the error log:

LogType:stderr
Log Upload Time:Tue Jun 05 16:31:27 +0000 2018
LogLength:2329
Log Contents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/hadoop/yarn/local/filecache/685/spark-assembly-1.6.2.2.5.0.0-1245-hadoop2.7.3.2.5.0.0-1245.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/hadoop/yarn/local/filecache/25/mapreduce.tar.gz/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Using properties file: null
Parsed arguments:
master local[2]
deployMode cluster
executorMemory null
executorCores null
totalExecutorCores null
propertiesFile null
driverMemory null
driverCores null
driverExtraClassPath null
driverExtraLibraryPath null
driverExtraJavaOptions -Dlog4j.configuration=spark-log4j.properties
supervise false
queue null
numExecutors null
files null
pyFiles null
archives null
mainClass org.apache.TransformationOper
primaryResource hdfs://sandbox.hortonworks.com:8020/user/root/oozie/sparkoozie/lib/TransformationOper.jar
name Testing Spark Action
childArgs [INPUT=hdfs://sandbox.hortonworks.com:8020/user/root/oozie/sparkoozie/input/error.log OUTPUT=user/root/oozie/sparkoozie/output]
jars null
packages null
packagesExclusions null
repositories null
verbose true
Spark properties used, including those specified through
--conf and those from the properties file null:
spark.yarn.security.tokens.hive.enabled -> false
spark.executor.extraJavaOptions -> -Dlog4j.configuration=spark-log4j.properties
spark.yarn.security.tokens.hbase.enabled -> false
spark.driver.extraJavaOptions -> -Dlog4j.configuration=spark-log4j.properties
Error: Cluster deploy mode is not compatible with master "local"
Run with --help for usage help or --verbose for debug output
Intercepting System.exit(1)
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.SparkMain], exit code [1]
End of LogType:stderr
I went through it, but I cannot work out what the exact error is. I would be very grateful if you could help me solve this issue. Regards, Jay.
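For anyone hitting the same exit code: the decisive line in the log above is "Error: Cluster deploy mode is not compatible with master "local"". A minimal sketch of the job.properties change that addresses that specific message; this is an illustration drawn from the log, not a confirmed resolution from this thread:

# job.properties: cluster deploy mode requires a cluster master,
# so replace master=local[2] with master=yarn when mode=cluster.
master=yarn
mode=cluster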
Labels:
- Apache Oozie
- Apache Spark