Member since: 09-19-2016
Posts: 36
Kudos Received: 5
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2669 | 07-21-2018 08:06 AM
 | 1427 | 06-08-2017 09:11 AM
11-12-2018 06:16 AM
Well...the command is not the same. This time I used oozie.coord.application.path instead of oozie.wf.application.path.
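For anyone comparing the two, the difference is only the property that names the application path (the workflow.xml path below is an assumption for illustration; the coordinator line is the one used in this thread):

# submitting a workflow directly (workflow.xml path is hypothetical):
oozie job --oozie http://node3:11000/oozie --config job.properties -D oozie.wf.application.path=hdfs://node2:8020/user/root/test/sqoop/workflow.xml -run
# submitting a coordinator (the fix used here):
oozie job --oozie http://node3:11000/oozie --config job.properties -D oozie.coord.application.path=hdfs://node2:8020/user/root/test/sqoop/coordinator.xml -run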
07-21-2018 08:06 AM
I solved this issue. Just posting this for those who may have the same problem: I strengthened the link between my database and my big data servers. The link was slow, so the Sqoop transmission rate got very low.
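If anyone wants to rule Sqoop out before blaming the network, a raw throughput test between the database host and a cluster node is a quick check (iperf3 is just one option, and dbhost is a placeholder name):

# on the database host:
iperf3 -s
# on a cluster node; reports achievable bandwidth toward dbhost:
iperf3 -c dbhost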
07-08-2018 08:53 AM
Thanks for your quick reply. Is there any other solution to accomplish the import with less memory, even if it is slower? My memory resources are limited: about 55 GB is assigned to YARN. Another question: what is the proper memory size for mappers? I googled this question a lot and concluded that I need to reduce my mappers' memory and increase their number, as you said, e.g. 100 mappers. Take a look at my ref. Does this sound OK to you? P.S. My mapper memory is 3 GB and my reducers have 2 GB.
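As a sketch of that trade-off (all sizes here are assumptions, not tested on this cluster): the usual convention is to keep the JVM heap around 80% of the container size, and with 2 GB mappers a 55 GB YARN pool runs roughly 27 of them at once, so the remaining mappers simply queue, which is exactly the slower-but-smaller behaviour described above.

sqoop-import -Dmapreduce.map.memory.mb=2048 -Dmapreduce.map.java.opts=-Xmx1638m --connect jdbc:oracle:thin:@//serverIP:Port/xxxx --table mytable --split-by col1 -m 100 --target-dir /user/root/myresult --username xxx --password xxx

(mytable stands in for the real table; the -D options must come before the Sqoop-specific arguments.)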
07-07-2018 01:17 PM
Hi, I run this command to import some data from Oracle. It works, and the result has 1.3 million records:

sqoop-import --connect jdbc:oracle:thin:@//serverIP:Port/xxxx --query "SELECT col1,col2,col3 FROM table WHERE condition AND \$CONDITIONS" --target-dir /user/root/myresult --split-by col1 -m 10 --username xxx --password xxx

But when I delete the condition to import the whole table, which has 12 million records, it fails. The first maps are always logged as succeeded and the last one just hangs, but when I check the MapReduce logs for the succeeded maps, I see that they actually failed with this message:

Container killed by the ApplicationMaster. Container killed on request. Exit code is 143. Container exited with a non-zero exit code 143.

I googled and found https://stackoverflow.com/questions/42306865/sqoop-job-get-stuck-when-import-data-from-oracle-to-hive describing the same issue, but that post hasn't been answered yet. It'd be helpful if you could take a look.
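Exit code 143 is a SIGTERM kill (128 + 15); in YARN this usually means the framework terminated the container, most often for running beyond its memory limit. One way to confirm is to pull the aggregated container logs and search for the memory-limit message (the application ID below is a placeholder; take the real one from the ResourceManager UI or yarn application -list):

yarn logs -applicationId application_1530000000000_0001 | grep -i "memory limits"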
Labels:
- Apache Hadoop
- Apache Sqoop
- Apache YARN
07-04-2017 03:08 PM
Hi, I need to write to a local file using a shell action, but similarly I get the permission error. Do you have any idea?
07-04-2017 02:33 PM
Hi, I need my job.properties to be edited each time a Sqoop action completes through an Oozie coordinator, so I have added a shell action after the Sqoop action in my workflow. My job.properties is not in HDFS; it is located on node2, where I run the Oozie job (I have 5 nodes). But the shell action fails with a 'permission denied' error, and sometimes it says 'no such file or directory'. I moved the file to HDFS, but the same thing happened. Has anyone done such a thing? I execute the shell commands locally and they work correctly, but somehow when it runs distributed I can't control permissions.
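One likely explanation: an Oozie shell action runs inside a YARN container on whichever node YARN picks, as the container's user, so a local path that exists on node2 for root may not exist, or not be writable, where the action actually lands. A small diagnostic script (names are illustrative) makes this visible:

#!/bin/bash
# print where and as whom the shell action actually executes
echo "host: $(hostname)"
echo "user: $(whoami)"
echo "cwd:  $(pwd)"
ls -l /usr/hdp/current/oozie-server/sqoop-sample/test02/job.properties

If the host or user changes between runs, the usual workaround is to keep the file in HDFS and read and write it with hdfs dfs commands inside the script instead of relying on a local path.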
Labels:
- Apache Hadoop
- Apache Oozie
06-24-2017 09:17 AM
I changed my command to: oozie job --oozie http://node3:11000/oozie --config '/usr/hdp/current/oozie-server/sqoop-sample/test02/job.properties' -D oozie.coord.application.path=hdfs://node2:8020/user/root/test/sqoop/coordinator.xml -run and it worked.
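To verify the coordinator is materializing actions after submission, its status can be checked from the same CLI (the job ID below is a placeholder; the real one is printed at submit time):

oozie job --oozie http://node3:11000/oozie -info 0000001-170624121200000-oozie-oozi-C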
06-20-2017 05:31 AM
I tried, but the same error appeared. Have you run a coordinator successfully? If yes, can you provide your oozie-site.xml file? I think my problem derives from there, but I don't know what is missing or misconfigured.
06-19-2017 08:31 AM
Hi, I have a workflow that I have run successfully before. Now I need a coordinator to schedule it. My command is:

oozie job --oozie http://node3:11000/oozie --config '/usr/hdp/current/oozie-server/sqoop-sample/test02/job.properties' -D oozie.wf.application.path=hdfs://node2:8020/user/root/test/sqoop/coordinator.xml -run

My workflow is:

<?xml version="1.0" encoding="UTF-8"?>
<workflow-app xmlns="uri:oozie:workflow:0.2" name="sqoop-wf">
    <start to="sqoop-node"/>
    <action name="sqoop-node">
        <sqoop xmlns="uri:oozie:sqoop-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <delete path="${nameNode}/user/${wf:user()}/${examplesRoot}/output-data/sqoop"/>
                <mkdir path="${nameNode}/user/${wf:user()}/${examplesRoot}/output-data"/>
            </prepare>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <command>import --driver com.mysql.jdbc.Driver --connect jdbc:mysql://node1/mydb --table topop --target-dir /user/root/testdata2 --username user --password mypassword -m 1</command>
        </sqoop>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Sqoop failed, error message</message>
    </kill>
    <end name="end"/>
</workflow-app>

Here is coordinator.xml:

<?xml version="1.0" encoding="UTF-8"?>
<coordinator-app xmlns="uri:oozie:coordinator:0.2" name="sqoop-wf" frequency="${coord:days(1)}" start="2017-06-18T12:50Z" end="2018-06-18T12:15Z" timezone="United_kingdom/London">
    <action>
        <workflow>
            <app-path>${nameNode}/user/root/test/sqoop</app-path>
        </workflow>
    </action>
</coordinator-app>

And finally oozie-site.xml:

<configuration>
    <property>
        <name>oozie.action.retry.interval</name>
        <value>30</value>
    </property>
    <property>
        <name>oozie.authentication.simple.anonymous.allowed</name>
        <value>true</value>
    </property>
    <property>
        <name>oozie.authentication.type</name>
        <value>simple</value>
    </property>
    <property>
        <name>oozie.base.url</name>
        <value>http://node3:11000/oozie</value>
    </property>
    <property>
        <name>oozie.credentials.credentialclasses</name>
        <value>hcat=org.apache.oozie.action.hadoop.HCatCredentials,hive2=org.apache.oozie.action.hadoop.Hive2Credentials</value>
    </property>
    <property>
        <name>oozie.db.schema.name</name>
        <value>oozie</value>
    </property>
    <property>
        <name>oozie.service.ActionService.executor.ext.classes</name>
        <value>org.apache.oozie.action.email.EmailActionExecutor,
               org.apache.oozie.action.hadoop.HiveActionExecutor,
               org.apache.oozie.action.hadoop.ShellActionExecutor,
               org.apache.oozie.action.hadoop.SqoopActionExecutor</value>
    </property>
    <property>
        <name>oozie.service.AuthorizationService.security.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
        <value>*=/usr/hdp/current/hadoop-client/conf</value>
    </property>
    <property>
        <name>oozie.service.HadoopAccessorService.kerberos.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>oozie.service.JPAService.jdbc.driver</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>oozie.service.JPAService.jdbc.password</name>
        <value>SECRET:oozie-site:12:oozie.service.JPAService.jdbc.password</value>
    </property>
    <property>
        <name>oozie.service.JPAService.jdbc.url</name>
        <value>jdbc:mysql://node1/oozie</value>
    </property>
    <property>
        <name>oozie.service.JPAService.jdbc.username</name>
        <value>root</value>
    </property>
    <property>
        <name>oozie.service.SchemaService.wf.ext.schemas</name>
        <value>shell-action-0.1.xsd,email-action-0.1.xsd,hive-action-0.2.xsd,sqoop-action-0.2.xsd,ssh-action-0.1.xsd,oozie-coordinator-0.2.xsd,oozie-workflow-0.5.xsd</value>
    </property>
    <property>
        <name>oozie.service.SparkConfigurationService.spark.configurations</name>
        <value>*=spark-conf</value>
    </property>
    <property>
        <name>oozie.service.URIHandlerService.uri.handlers</name>
        <value>org.apache.oozie.dependency.FSURIHandler,org.apache.oozie.dependency.HCatURIHandler</value>
    </property>
    <property>
        <name>oozie.services.ext</name>
        <value>org.apache.oozie.service.JMSAccessorService,org.apache.oozie.service.PartitionDependencyManagerService,org.apache.oozie.service.HCatAccessorService,org.apache.oozie.service.ActionService</value>
    </property>
</configuration>

And the error message:

Error: E0723 : E0723: Unsupported action type, node [workflow] type [org.apache.oozie.service.ActionService]

Any idea?
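For context on why E0723 appears here rather than from anything in oozie-site.xml: the submit command uses oozie.wf.application.path while pointing at coordinator.xml, so Oozie parses the coordinator as if it were a workflow and rejects <workflow> as an unknown action node. Submitting with the coordinator property instead, as the 06-24-2017 post above shows, avoids the error:

oozie job --oozie http://node3:11000/oozie --config '/usr/hdp/current/oozie-server/sqoop-sample/test02/job.properties' -D oozie.coord.application.path=hdfs://node2:8020/user/root/test/sqoop/coordinator.xml -run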
Labels:
- Apache Ambari
- Apache Oozie
06-08-2017 09:11 AM
Since the error message contains "invalid user: falcon", I tried to create the falcon user manually with adduser -g falcon falcon, but I got an error about /etc/gshadow.lock. I figured out that there had been an incomplete attempt at creating the falcon user: it failed partway, and gshadow.lock was created but never deleted (normally it is removed once the user is created). So:

rm /etc/gshadow.lock
yum install falcon

And the problem is gone!
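The same stale-lock symptom can come from any of the shadow-utils lock files, so a quick check before retrying looks like this (a sketch; only remove a lock after confirming no user-management command is still running):

# list leftover lock files from an interrupted useradd/adduser
ls -l /etc/passwd.lock /etc/shadow.lock /etc/group.lock /etc/gshadow.lock 2>/dev/null
# confirm nothing is still holding them
ps aux | grep -E "useradd|adduser" | grep -v grep
# then clear the stale lock and retry the install
rm -f /etc/gshadow.lock
yum install falcon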