Member since: 02-17-2016
Posts: 9
Kudos Received: 6
Solutions: 0
02-22-2016
12:30 AM
1 Kudo
@Neeraj Sabharwal I'm not sure if I'm missing something. I'd appreciate it if you could help.
02-20-2016
02:13 AM
1 Kudo
@Neeraj Sabharwal Thanks a lot. The current status is as you described. Could you say more about MAPREDUCE-3056? I don't quite understand what I should do to fix it.
02-20-2016
12:29 AM
@Neeraj Sabharwal Yes, I also tried submitting as admin and as other users I created in Hue, but it still fails.
02-18-2016
05:46 AM
1 Kudo
@Neeraj Sabharwal Thanks for your suggestion, but it would be better if every user could submit the job successfully.
02-18-2016
03:43 AM
@Neeraj Sabharwal I only found a property "dfs.encryption.key.provider.uri", and it has no value. I also noticed that the value of the property "fs.defaultFS" is "hdfs://bigdata01:8020/". Is that what you mentioned?
02-18-2016
03:31 AM
1 Kudo
@Neeraj Sabharwal Thanks. It works fine when I use the yarn user to submit the job. How do I know whether the HDFS URL is correct or not?
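One way to check the HDFS URL is to read it back from the client configuration; this is a sketch assuming the standard `hdfs getconf` tool is available on a node with the client configs installed (the host and port are taken from the error message in the original question):

```shell
# Print the filesystem URI the clients are configured with.
# It should match the host:port in the error message, e.g. hdfs://bigdata01:8020.
hdfs getconf -confKey fs.defaultFS

# Sanity check: the NameNode actually answers on that URI.
hdfs dfs -ls hdfs://bigdata01:8020/
```

If the two disagree, the job client and the ApplicationMaster may be resolving the staging path against different filesystems.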
02-18-2016
03:18 AM
@Artem Ervits It seems the problem is related to the value of yarn.app.mapreduce.am.staging-dir. I found the property "yarn.app.mapreduce.am.staging-dir" and its default value is "/user", but I have no idea what value I should use.
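For context, the MapReduce ApplicationMaster stages job files under `<staging-dir>/<submitting user>/.staging`, so with a value of `/user` the path for the hdfs user resolves to `/user/hdfs/.staging`, which matches the missing path in the error message. A sketch of how the property appears, assuming the stock HDP layout in mapred-site.xml:

```xml
<!-- mapred-site.xml: root directory for MapReduce job staging files.
     Job files land under <value>/<submitting user>/.staging,
     so /user yields /user/hdfs/.staging for the hdfs user. -->
<property>
  <name>yarn.app.mapreduce.am.staging-dir</name>
  <value>/user</value>
</property>
```

The value itself is usually fine; what matters is that the per-user directory under it exists and is owned by the submitting user.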
02-17-2016
08:18 AM
1 Kudo
I added a user named "hdfs" and used it to run the job.
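Since each submitting user needs a home directory on HDFS to stage jobs under, a minimal sketch of setting one up (run as the HDFS superuser; the user name "hdfs" is the one from the post above):

```shell
# Create the HDFS home directory the user's jobs will stage under
# (/user/hdfs/.staging is created inside it at submit time),
# and hand ownership to that user so the AM can read job.splitmetainfo.
sudo -u hdfs hdfs dfs -mkdir -p /user/hdfs
sudo -u hdfs hdfs dfs -chown hdfs:hdfs /user/hdfs
sudo -u hdfs hdfs dfs -chmod 755 /user/hdfs
```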
02-17-2016
06:03 AM
1 Kudo
Hi All, I have a 5-node cluster running HDP 2.3.4. I use a shell script with a hive command to create a table and copy some data from another table. It works fine when I run it from the Hive command line, but it shows the error below when I run it through a workflow. I found suggestions online such as disabling permission checking on DFS and setting the permissions of the staging folder to 777, but they didn't work. Could you help with this?

Error message from the cluster management page:

Job init failed : org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.FileNotFoundException: File does not exist: hdfs://bigdata01:8020/user/hdfs/.staging/job_1455495681392_0006/job.splitmetainfo
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.createSplits(JobImpl.java:1568)
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.transition(JobImpl.java:1432)
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.transition(JobImpl.java:1390)
at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:996)
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:138)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:1346)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1121)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1553)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1486)
Caused by: java.io.FileNotFoundException: File does not exist: hdfs://bigdata01:8020/user/hdfs/.staging/job_1455495681392_0006/job.splitmetainfo
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1319)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1311)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1311)
at org.apache.hadoop.mapreduce.split.SplitMetaInfoReader.readSplitMetaInfo(SplitMetaInfoReader.java:51)
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.createSplits(JobImpl.java:1563)
... 17 more

Error message from Hue:

Log Length: 1433
WARNING: Use "yarn jar" to launch YARN applications.
16/02/15 09:36:22 WARN conf.HiveConf: HiveConf of name hive.server2.enable.impersonation does not exist
Logging initialized using configuration in file:/etc/hive/2.3.4.0-3485/0/hive-log4j.properties
Query ID = yarn_20160215093628_55a46c17-356e-41f9-a9ff-363ab845877f
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1455495681392_0006, Tracking URL = http://bigdata02:8088/proxy/application_1455495681392_0006/
Kill Command = /usr/hdp/2.3.4.0-3485/hadoop/bin/hadoop job -kill job_1455495681392_0006
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2016-02-15 09:37:01,459 Stage-1 map = 0%, reduce = 0%
Ended Job = job_1455495681392_0006 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.ShellMain], exit code [1]
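For reference, the two workarounds mentioned in the question (disabling DFS permission checking and opening up the staging folder) can be sketched as below. The property and path names are taken from this cluster's error message; per the question, neither resolved the failure here:

```shell
# Workaround 1 (hdfs-site.xml, then restart HDFS):
#   dfs.permissions.enabled = false
# Workaround 2: open up the job staging folder for all users.
sudo -u hdfs hdfs dfs -chmod -R 777 /user/hdfs/.staging
```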
Labels:
- Apache Hadoop
- Cloudera Hue