Member since: 04-28-2016
Posts: 7
Kudos Received: 3
Solutions: 0
11-14-2016
08:49 AM
2 Kudos
I am submitting a MapReduce job from Windows 7 to an HDP 2.3.4.7-4 cluster. HDP is already installed, and I am using Eclipse (with all the necessary jars imported), but I hit this problem:
2016-11-14 16:47:56,047 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-11-14 16:47:57,389 WARN [main] shortcircuit.DomainSocketFactory (DomainSocketFactory.java:<init>(117)) - The short-circuit local reads feature cannot be used because UNIX Domain sockets are not available on Windows.
2016-11-14 16:47:58,763 INFO [main] impl.TimelineClientImpl (TimelineClientImpl.java:serviceInit(352)) - Timeline service address: http://master.bmsoft.com:8188/ws/v1/timeline/
2016-11-14 16:47:59,029 INFO [main] client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at master.bmsoft.com/10.10.10.36:8050
2016-11-14 16:47:59,996 WARN [main] mapreduce.JobResourceUploader (JobResourceUploader.java:uploadFiles(64)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-11-14 16:48:00,043 WARN [main] mapreduce.JobResourceUploader (JobResourceUploader.java:uploadFiles(171)) - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2016-11-14 16:48:00,091 INFO [main] input.FileInputFormat (FileInputFormat.java:listStatus(283)) - Total input paths to process : 1
2016-11-14 16:48:00,512 INFO [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(198)) - number of splits:1
2016-11-14 16:48:00,871 INFO [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(287)) - Submitting tokens for job: job_1479003632635_0030
2016-11-14 16:48:01,074 INFO [main] mapred.YARNRunner (YARNRunner.java:createApplicationSubmissionContext(371)) - Job jar is not present. Not adding any jar to the list of resources.
2016-11-14 16:48:01,402 INFO [main] impl.YarnClientImpl (YarnClientImpl.java:submitApplication(274)) - Submitted application application_1479003632635_0030
2016-11-14 16:48:01,449 INFO [main] mapreduce.Job (Job.java:submit(1294)) - The url to track the job: http://master.bmsoft.com:8088/proxy/application_1479003632635_0030/
2016-11-14 16:48:01,449 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1339)) - Running job: job_1479003632635_0030
2016-11-14 16:48:04,507 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1360)) - Job job_1479003632635_0030 running in uber mode : false
2016-11-14 16:48:04,507 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1367)) - map 0% reduce 0%
2016-11-14 16:48:04,539 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1380)) - Job job_1479003632635_0030 failed with state FAILED due to: Application application_1479003632635_0030 failed 2 times due to AM Container for appattempt_1479003632635_0030_000002 exited with exitCode: 1
For more detailed output, check application tracking page: http://master.bmsoft.com:8088/cluster/app/application_1479003632635_0030 Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_e28_1479003632635_0030_02_000001
Exit code: 1
Exception message: /bin/bash: line 0: fg: no job control
Stack trace: ExitCodeException exitCode=1: /bin/bash: line 0: fg: no job control
at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
at org.apache.hadoop.util.Shell.run(Shell.java:487)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
2016-11-14 16:48:04,570 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1385)) - Counters: 0
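The `/bin/bash: line 0: fg: no job control` failure is the classic symptom of submitting a job from a Windows client without cross-platform submission enabled: the client builds a Windows-style launch command that the Linux NodeManager's bash cannot execute. A sketch of the client-side setting that usually resolves this (set it in the client's mapred-site.xml or programmatically on the job's Configuration; not verified against this particular cluster):

```xml
<!-- Client-side configuration on the Windows machine submitting the job. -->
<property>
  <!-- Make the client emit platform-independent launch commands
       ($VAR instead of %VAR%), so the Linux NodeManager can run
       the ApplicationMaster launch script. -->
  <name>mapreduce.app-submission.cross-platform</name>
  <value>true</value>
</property>
```

The "No job jar file set" warning in the log also needs addressing: package the user classes into a jar and point the job at it with `job.setJar(...)` (the jar path is whatever your build produces) before submitting. Implementing the `Tool` interface and launching through `ToolRunner.run(...)` additionally silences the command-line parsing warning, as the log itself suggests.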
Labels:
- Apache Hadoop
- Apache YARN
05-30-2016
04:54 AM
I built a cluster with Ambari on VMware: 1 NameNode and 4 DataNodes. One DataNode broke, so I deleted it and created a new one, but afterwards the cluster reported CORRUPT blocks. I tried "hdfs fsck -delete /" to fix it, but that lost a lot of data. How can corrupt blocks be repaired?
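For future reference, `fsck -delete` removes the affected files outright. A less destructive sequence (standard HDFS tooling; the file path below is a placeholder) is to list the corrupt blocks first and check whether any live replica survives before deleting anything:

```shell
# List only the files that currently have corrupt/missing blocks
hdfs fsck / -list-corruptfileblocks

# For a specific affected file, show its blocks and which DataNodes hold replicas
hdfs fsck /path/to/file -files -blocks -locations

# If at least one replica of a block survives on any DataNode, HDFS will
# re-replicate it on its own. Only files whose every replica is gone are
# truly lost and must be restored from a backup or, as a last resort, removed:
hdfs fsck / -delete
```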
Labels:
- Apache Hadoop
05-23-2016
11:18 AM
Thank you so much! But I would like to understand how this happened. I did not change any settings other than YARN's.
05-23-2016
10:09 AM
1 Kudo
While tuning YARN for better performance on my cluster, the History Server stopped starting, and I cannot find any logs in /var/log/hadoop or /var/log/hadoop-mapreduce. Only the Ambari web UI shows an error, like this:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py", line 182, in <module>
HistoryServer().execute()
File "/usr/lib/python2.7/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py", line 92, in start
self.configure(env) # FOR SECURITY
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py", line 55, in configure
yarn(name="historyserver")
File "/usr/lib/python2.7/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/yarn.py", line 98, in yarn
mode=0777
File "/usr/lib/python2.7/site-packages/resource_management/core/base.py", line 125, in __new__
env.resources[r_type][name] = obj
File "/usr/lib/python2.7/site-packages/resource_management/libraries/script/config_dictionary.py", line 81, in __getattr__
raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'mapreduce.jobhistory.done-dir' was not found in configurations dictionary!
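The traceback itself names the fix: Ambari's History Server scripts expect `mapreduce.jobhistory.done-dir` in mapred-site, and it has evidently been removed during the YARN tuning. Re-adding it through Ambari (MapReduce2 → Configs, under the mapred-site properties) should let the History Server start again. The HDFS paths below are the usual HDP defaults, so treat them as assumptions to verify for this cluster:

```xml
<!-- mapred-site.xml properties the Ambari historyserver scripts require.
     Paths are common HDP defaults, not verified against this cluster. -->
<property>
  <name>mapreduce.jobhistory.done-dir</name>
  <value>/mr-history/done</value>
</property>
<property>
  <name>mapreduce.jobhistory.intermediate-done-dir</name>
  <value>/mr-history/tmp</value>
</property>
```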
Labels:
- Apache Hadoop
04-28-2016
09:44 AM
@Brandon Wilson thanks
04-28-2016
09:26 AM
I want to integrate Spark 1.6 with Ambari 2.2.1, but I do not know how to build an RPM for Spark 1.6. Ambari 2.2.1 only supports Spark 1.5. Please advise.
Labels:
- Apache Ambari
- Apache Spark