Member since: 05-19-2016
Posts: 216
Kudos Received: 20
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4188 | 05-29-2018 11:56 PM
 | 7021 | 07-06-2017 02:50 AM
 | 3765 | 10-09-2016 12:51 AM
 | 3530 | 05-13-2016 04:17 AM
05-02-2016
01:33 PM
@Christian Guegi : No, I still get the same error. Not even listing databases or tables works.
05-02-2016
12:25 PM
@Christian Guegi
05-02-2016
12:05 PM
This is what I have in my sharelib (-shareliblist output):

[Available ShareLib]
hive
mapreduce-streaming
oozie
sqoop
pig

Still I keep getting the same error. Yes, I do have the lib folder along with the connector jar.
05-02-2016
09:57 AM
Yes, I sure do. What else could possibly be the problem?
05-02-2016
07:14 AM
I have an Oozie task that uses Sqoop. The /user/oozie folder has a /shared/lib/lib_xyz/ folder which has the sqoop folder and relevant jars in it. My job.properties looks like this:

nameNode=hdfs://serverFQDN:8020
jobTracker=serverFQDN:8050
queueName=default
oozie.use.system.libpath=true
oozie.action.sharelib.for.sqoop=hive,hcatalog,sqoop
ozie.libpath=hdfs://serverFQDN:8020/user/oozie/share/lib
oozie.coord.application.path=${nameNode}/user/${user.name}/scheduledimport
start=2016-04-26T00:00Z
end=2016-12-31T00:00Z
workflowAppUri=${nameNode}/user/${user.name}/scheduledimport

I get an error on the sqoop task: java.lang.ClassNotFoundException: Class org.apache.oozie.action.hadoop.SqoopMain not found
How do I fix this?
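One likely culprit in the job.properties above: the "ozie.libpath" line is missing its leading "o", so Oozie silently ignores that property and never picks up the library path, which can surface exactly as this SqoopMain ClassNotFoundException. A minimal corrected fragment (a sketch; serverFQDN and the HDFS path are taken from the question as written):

```properties
# Correct spelling is oozie.libpath, not ozie.libpath
oozie.libpath=hdfs://serverFQDN:8020/user/oozie/share/lib
oozie.use.system.libpath=true
oozie.action.sharelib.for.sqoop=hive,hcatalog,sqoop
```

Properties files are only read at submission time, so the coordinator has to be re-submitted after the edit.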
Labels:
- Apache Oozie
04-29-2016
02:03 PM
workflow.xml: <workflow-app name="once-a-day" xmlns="uri:oozie:workflow:0.1">
<start to="sqoopAction"/>
<action name="sqoopAction">
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<command>import-all-tables --connect jdbc:mysql://xyz.syz/erp --username hive --password hive
--export-dir /user/hive/warehouse/sbl
</command>
</sqoop>
<ok to="end"/>
<error to="killJob"/>
</action>
<kill name="killJob">
<message>"Killed job due to error: ${wf:errorMessage(wf:lastErrorNode())}"</message>
</kill>
<end name="end" />
</workflow-app>

job.properties:

nameNode=hdfs://syz.syz.com:8020
jobTracker=xyz.syz.com:8050
queueName=default
oozie.use.system.libpath=true
oozie.coord.application.path=${nameNode}/user/${user.name}/scheduledimport
start=2013-09-01T00:00Z
end=2013-12-31T00:00Z
workflowAppUri=${nameNode}/user/${user.name}/scheduledimport

I get an error on the sqoop task: java.lang.ClassNotFoundException: Class org.apache.oozie.action.hadoop.SqoopMain not found
I have share/lib inside /user/oozie. How do I fix this?
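When SqoopMain is not found even though share/lib exists in HDFS, it usually means the Oozie server has not loaded that sharelib. A hedged sketch of the usual checks using the standard Oozie CLI (the server URL is an assumption based on the default Oozie port and the hostname from the question):

```
# List the sharelibs the Oozie server actually knows about
oozie admin -oozie http://xyz.syz.com:11000/oozie -shareliblist

# Ask the server to re-scan /user/oozie/share/lib without a restart
oozie admin -oozie http://xyz.syz.com:11000/oozie -sharelibupdate
```

If "sqoop" is missing from the list, the sharelib directory in HDFS needs to be (re)created or fixed as the oozie user and then re-scanned.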
Labels:
- Apache Oozie
04-29-2016
06:54 AM
Thank you for your response 🙂 That helped. No, I am not running through a sandbox; I have installed HDP on a CentOS machine. Could you please tell me what the possible reasons are for the DN capacity to be so low?
04-29-2016
06:11 AM
1 Kudo
I am using Ambari and it shows that my datanode capacity is only 991.83 MB with 283 blocks. Even if that is the default, why is it as low as 991 MB? I hear that having too many blocks isn't such a good idea. I do not really have space constraints on the machine I am on, and we are not planning to distribute datanodes across multiple hosts. My questions are:

1. Is there a maximum limit to the size of a datanode? If yes, what is it?
2. What is the easiest and most robust way to have multiple datanodes on the same machine without breaking what is up and running in the existing cluster?
3. I understand that we need to add more directories for new datanodes and specify the path in Ambari, but what next?
4. What is the optimum block size in Ambari (or is there a datanode/block-size ratio that gives the optimized number)?
5. How do I configure the block size through Ambari?
6. How do I increase the size of an existing datanode in Ambari?
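On the block-size question: the number of blocks a file occupies is just its size divided by the block size, rounded up, and every block costs NameNode memory regardless of how full it is. A minimal Python sketch of that arithmetic, assuming the common HDFS default of a 128 MB dfs.blocksize (the helper name is mine, not an HDFS API):

```python
import math

def num_blocks(file_size_bytes: int, block_size_bytes: int = 128 * 1024 * 1024) -> int:
    """Number of HDFS blocks a single file occupies (the last block may be partial)."""
    if file_size_bytes == 0:
        return 0
    return math.ceil(file_size_bytes / block_size_bytes)

# A 1 GB file with the default 128 MB block size spans 8 blocks.
print(num_blocks(1024 * 1024 * 1024))  # 8
# A 1 KB file still occupies one block entry on the NameNode.
print(num_blocks(1024))  # 1
```

This is why a larger block size helps when files are big: it cuts the block count (and NameNode overhead) without affecting how much disk the data actually uses.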
Labels:
- Apache Hadoop
04-28-2016
02:31 PM
I have HDP installed on a server with Ambari. HDFS disk space is 100% utilized after I ran the service check. As per my understanding, some folders were created during the process; I have only a minimal understanding of it, though. Now there are .staging folders created in the HDFS directory, which I believe is because the service check could not be completed?

My disk space in the cluster is only about 1 GB, I guess. Do I need to increase it? (I guess I do.) If yes, how do I increase it, and what would be the ideal amount?

Also, theoretically there are always multiple datanodes in a cluster, but Ambari shows only one. Do I need to create new ones myself? What are the advantages, and how do I create them? And what would be the ideal number of datanodes, and why?
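On the "how do I increase it" part: datanode capacity is simply the free space on the directories listed in dfs.datanode.data.dir, so pointing that setting (via Ambari's HDFS configs) at a mount with real capacity is the usual fix. A sketch of the underlying hdfs-site.xml property; the mount paths here are hypothetical examples, not values from this cluster:

```xml
<property>
  <name>dfs.datanode.data.dir</name>
  <!-- Comma-separated list; each entry should live on a disk with adequate space -->
  <value>/hadoop/hdfs/data,/mnt/bigdisk/hdfs/data</value>
</property>
```

In Ambari this is edited under HDFS > Configs as "DataNode directories", followed by a datanode restart.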
Labels:
- Apache Ambari
- Apache Hadoop
04-28-2016
11:42 AM
I am using an Ambari installation and trying to run a coordinator Oozie job that imports data to Hive using Sqoop. I have them installed, up and running on the server. My workflow.xml looks like this:

<workflow-app name="once-a-day" xmlns="uri:oozie:workflow:0.1">
<start to="sqoopAction"/>
<action name="sqoopAction">
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<command>import-all-tables --connect jdbc:mysql://HOST_NAME/erp --username hiveusername --password hivepassword
--</command>
</sqoop>
<ok to="end"/>
<error to="killJob"/>
</action>
<kill name="killJob">
<message>"Killed job due to error: ${wf:errorMessage(wf:lastErrorNode())}"</message>
</kill>
<end name="end" />
</workflow-app>

I get this error. How do I fix it? I have tried everything suggested on the Internet, but nothing fixes it:

[0001059-160427195624911-oozie-oozi-W] ACTION[0001059-160427195624911-oozie-oozi-W@sqoopAction] Error starting action [sqoopAction]. ErrorType [TRANSIENT], ErrorCode [JA009], Message [JA009: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.]
org.apache.oozie.action.ActionExecutorException: JA009: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:456)
at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:436)
at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1139)
at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1293)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:250)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:64)
at org.apache.oozie.command.XCommand.call(XCommand.java:286)
at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:175)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapred.JobClient.init(JobClient.java:475)
at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:454)
at org.apache.oozie.service.HadoopAccessorService$3.run(HadoopAccessorService.java:462)
at org.apache.oozie.service.HadoopAccessorService$3.run(HadoopAccessorService.java:460)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.oozie.service.HadoopAccessorService.createJobClient(HadoopAccessorService.java:460)
at org.apache.oozie.action.hadoop.JavaActionExecutor.createJobClient(JavaActionExecutor.java:1336)
at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1087)
... 8 more
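The JA009 message points at the MapReduce client configuration: the Oozie launcher cannot construct a Cluster object from the settings it sees, which typically means mapreduce.framework.name is unset or the ResourceManager address is wrong. A sketch of the relevant properties on a YARN cluster (the hostname follows the question's xyz.syz.com and should be treated as a placeholder):

```xml
<!-- mapred-site.xml -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

<!-- yarn-site.xml -->
<property>
  <name>yarn.resourcemanager.address</name>
  <value>xyz.syz.com:8050</value>
</property>
```

On Ambari-managed clusters these are normally set already, so it is also worth checking that the Oozie server's Hadoop config directory points at the same, current cluster configs.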
Labels:
- Apache Oozie