Member since: 07-18-2016
Posts: 262
Kudos Received: 12
Solutions: 21
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 6595 | 09-21-2018 03:16 AM
 | 3128 | 07-25-2018 05:03 AM
 | 4080 | 02-13-2018 02:00 AM
 | 1899 | 01-21-2018 02:47 AM
 | 37759 | 08-08-2017 10:32 AM
05-30-2018
09:56 AM
It is the same as how we keep/save data on a Linux file system; however, HDFS is a distributed file system, so the data is spread across the cluster. To copy data from your local system:
hdfs dfs -copyFromLocal data.txt /hdfspath
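A minimal end-to-end sketch, assuming a local file named data.txt and an illustrative target directory /tmp/demo in HDFS (adjust both to your environment):
hdfs dfs -mkdir -p /tmp/demo                    # create the target directory in HDFS
hdfs dfs -copyFromLocal data.txt /tmp/demo/     # copy the local file into HDFS (hdfs dfs -put behaves the same way)
hdfs dfs -ls /tmp/demo                          # confirm the file landed
hdfs dfs -cat /tmp/demo/data.txt | head         # read the first lines back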
02-13-2018
02:00 AM
After further analysis I found and changed the following, and the "MR2 service check / Container killed on request. Exit code is 143" error went away.
1) yarn-site.xml:
=> The initial container could not allocate memory; yarn.scheduler.minimum-allocation-mb was only 178 MB and yarn.scheduler.maximum-allocation-mb only 512 MB.
=> The HDFS block size is 128 MB, so I increased the minimum/maximum to multiples of the 128 MB block size: yarn.scheduler.minimum-allocation-mb from 178 to 512 MB and yarn.scheduler.maximum-allocation-mb from 512 to 1024 MB in yarn-site.xml.
2) mapred-site.xml:
Once the parameters above were changed in yarn-site.xml, the following also had to change in mapred-site.xml: mapreduce.task.io.sort.mb from 95 to 286 MB, mapreduce.map.memory.mb and mapreduce.reduce.memory.mb to 512 MB, and yarn.app.mapreduce.am.resource.mb from 170 to 512 MB. Increase these values in multiples of the 128 MB block size to get past the container-killed error. We changed the parameters in yarn-site.xml and mapred-site.xml through Ambari, within the resource constraints of the existing cluster, until the error below stopped appearing; a quick way to confirm the effective values is sketched after the error text. The same approach applies whenever you see: Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
INFO mapreduce.Job: Counters: 0
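A quick way to confirm which values a node has actually picked up, assuming the standard HDP client configuration directory /etc/hadoop/conf (adjust the path if your layout differs); each grep prints the property name plus the <value> line that follows it:
grep -A1 'yarn.scheduler.minimum-allocation-mb' /etc/hadoop/conf/yarn-site.xml
grep -A1 'yarn.scheduler.maximum-allocation-mb' /etc/hadoop/conf/yarn-site.xml
grep -A1 'yarn.app.mapreduce.am.resource.mb' /etc/hadoop/conf/mapred-site.xml
grep -A1 'mapreduce.task.io.sort.mb' /etc/hadoop/conf/mapred-site.xml
grep -A1 'mapreduce.map.memory.mb' /etc/hadoop/conf/mapred-site.xml
grep -A1 'mapreduce.reduce.memory.mb' /etc/hadoop/conf/mapred-site.xml
In an Ambari-managed cluster the files under /etc/hadoop/conf are regenerated from the Ambari configuration, so these values should match what was set in the UI.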
02-13-2018
01:04 AM
When launching spark-shell, I get the error below:
[root@centos4 ~]# spark-shell --master yarn \
> --deploy-mode client \
> --conf spark.ui.port=12335 \
> --num-executors 1 \
> --executor-memory 512M
18/02/14 18:58:58 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/02/14 18:58:59 INFO SecurityManager: Changing view acls to: root
18/02/14 18:58:59 INFO SecurityManager: Changing modify acls to: root
18/02/14 18:58:59 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
18/02/14 18:59:00 INFO HttpServer: Starting HTTP Server
18/02/14 18:59:00 INFO Server: jetty-8.y.z-SNAPSHOT
18/02/14 18:59:01 INFO AbstractConnector: Started SocketConnector@0.0.0.0:39940
18/02/14 18:59:01 INFO Utils: Successfully started service 'HTTP class server' on port 39940.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 1.6.2
/_/
Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_60)
Type in expressions to have them evaluated.
Type :help for more information.
18/02/14 18:59:17 INFO SparkContext: Running Spark version 1.6.2
18/02/14 18:59:17 INFO SecurityManager: Changing view acls to: root
18/02/14 18:59:17 INFO SecurityManager: Changing modify acls to: root
18/02/14 18:59:17 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
18/02/14 18:59:19 INFO Utils: Successfully started service 'sparkDriver' on port 45694.
18/02/14 18:59:23 INFO Slf4jLogger: Slf4jLogger started
18/02/14 18:59:24 INFO Remoting: Starting remoting
18/02/14 18:59:25 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.154.114:43865]
18/02/14 18:59:25 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 43865.
18/02/14 18:59:25 INFO SparkEnv: Registering MapOutputTracker
18/02/14 18:59:25 INFO SparkEnv: Registering BlockManagerMaster
18/02/14 18:59:25 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-acbf69ba-bf4b-4fae-9d28-ae78d9b60aca
18/02/14 18:59:25 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
18/02/14 18:59:26 INFO SparkEnv: Registering OutputCommitCoordinator
18/02/14 18:59:26 INFO Server: jetty-8.y.z-SNAPSHOT
18/02/14 18:59:26 INFO AbstractConnector: Started SelectChannelConnector@0.0.0.0:12335
18/02/14 18:59:26 INFO Utils: Successfully started service 'SparkUI' on port 12335.
18/02/14 18:59:26 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.154.114:12335
spark.yarn.driver.memoryOverhead is set but does not apply in client mode.
18/02/14 18:59:28 INFO TimelineClientImpl: Timeline service address: http://centos1.test.com:8188/ws/v1/timeline/
18/02/14 18:59:29 INFO RMProxy: Connecting to ResourceManager at centos1.test.com/192.168.154.112:8050
18/02/14 18:59:31 WARN DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
18/02/14 18:59:31 INFO Client: Requesting a new application from cluster with 2 NodeManagers
18/02/14 18:59:31 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (1024 MB per container)
18/02/14 18:59:31 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
18/02/14 18:59:31 INFO Client: Setting up container launch context for our AM
18/02/14 18:59:31 INFO Client: Setting up the launch environment for our AM container
18/02/14 18:59:31 INFO Client: Using the spark assembly jar on HDFS because you are using HDP, defaultSparkAssembly:hdfs://centos.test.com:8020/hdp/apps/2.4.3.0-227/spark/spark-hdp-assembly.jar
18/02/14 18:59:31 INFO Client: Preparing resources for our AM container
18/02/14 18:59:31 INFO Client: Using the spark assembly jar on HDFS because you are using HDP, defaultSparkAssembly:hdfs://centos.test.com:8020/hdp/apps/2.4.3.0-227/spark/spark-hdp-assembly.jar
18/02/14 18:59:31 INFO Client: Source and destination file systems are the same. Not copying hdfs://centos.test.com:8020/hdp/apps/2.4.3.0-227/spark/spark-hdp-assembly.jar
18/02/14 18:59:31 INFO Client: Uploading resource file:/tmp/spark-611a1716-891c-4b5b-84aa-3eeebb204084/__spark_conf__3734921673150808950.zip -> hdfs://centos.test.com:8020/user/root/.sparkStaging/application_1518599648055_0002/__spark_conf__3734921673150808950.zip
18/02/14 18:59:31 INFO SecurityManager: Changing view acls to: root
18/02/14 18:59:31 INFO SecurityManager: Changing modify acls to: root
18/02/14 18:59:31 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
18/02/14 18:59:31 INFO Client: Submitting application 2 to ResourceManager
18/02/14 18:59:32 INFO YarnClientImpl: Submitted application application_1518599648055_0002
18/02/14 18:59:32 INFO SchedulerExtensionServices: Starting Yarn extension services with app application_1518599648055_0002 and attemptId None
18/02/14 18:59:33 INFO Client: Application report for application_1518599648055_0002 (state: ACCEPTED)
18/02/14 18:59:33 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1518600178504
final status: UNDEFINED
tracking URL: http://centos1.test.com:8088/proxy/application_1518599648055_0002/
user: root
18/02/14 18:59:34 INFO Client: Application report for application_1518599648055_0002 (state: ACCEPTED)
18/02/14 18:59:35 INFO Client: Application report for application_1518599648055_0002 (state: FAILED)
18/02/14 18:59:35 INFO Client:
client token: N/A
diagnostics: Application application_1518599648055_0002 failed 2 times due to Error launching appattempt_1518599648055_0002_000002. Got exception: org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container.
This token is expired. current time is 1518614975052 found 1518600781196
Note: System times on machines may be out of sync. Check system time and time zones.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:122)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:250)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1518600178504
final status: FAILED
tracking URL: http://centos1.test.com:8088/cluster/app/application_1518599648055_0002
user: root
18/02/14 18:59:35 INFO Client: Deleting staging directory .sparkStaging/application_1518599648055_0002
18/02/14 18:59:35 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:122)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
at $line3.$read$$iwC$$iwC.<init>(<console>:15)
at $line3.$read$$iwC.<init>(<console>:24)
at $line3.$read.<init>(<console>:26)
at $line3.$read$.<init>(<console>:30)
at $line3.$read$.<clinit>(<console>)
at $line3.$eval$.<init>(<console>:7)
at $line3.$eval$.<clinit>(<console>)
at $line3.$eval.$print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
18/02/14 18:59:35 INFO SparkUI: Stopped Spark web UI at http://192.168.154.114:12335
18/02/14 18:59:35 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
18/02/14 18:59:35 INFO YarnClientSchedulerBackend: Shutting down all executors
18/02/14 18:59:35 INFO YarnClientSchedulerBackend: Asking each executor to shut down
18/02/14 18:59:35 INFO SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
18/02/14 18:59:35 INFO YarnClientSchedulerBackend: Stopped
18/02/14 18:59:35 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/02/14 18:59:35 INFO MemoryStore: MemoryStore cleared
18/02/14 18:59:35 INFO BlockManager: BlockManager stopped
18/02/14 18:59:35 INFO BlockManagerMaster: BlockManagerMaster stopped
18/02/14 18:59:35 WARN MetricsSystem: Stopping a MetricsSystem that is not running
18/02/14 18:59:35 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/02/14 18:59:35 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
18/02/14 18:59:35 INFO SparkContext: Successfully stopped SparkContext
18/02/14 18:59:35 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
18/02/14 18:59:36 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:122)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
at $iwC$$iwC.<init>(<console>:15)
at $iwC.<init>(<console>:24)
at <init>(<console>:26)
at .<init>(<console>:30)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
java.lang.NullPointerException
at org.apache.spark.sql.SQLContext$.createListenerAndUI(SQLContext.scala:1367)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1028)
at $iwC$$iwC.<init>(<console>:15)
at $iwC.<init>(<console>:24)
at <init>(<console>:26)
at .<init>(<console>:30)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:132)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
<console>:16: error: not found: value sqlContext
import sqlContext.implicits._
^
<console>:16: error: not found: value sqlContext
import sqlContext.sql
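The key diagnostic above is "This token is expired ... System times on machines may be out of sync", which points at clock skew between the client and the cluster nodes. A quick comparison, assuming passwordless SSH and using the hostnames from this log purely as examples:
date                          # clock on the client (centos4)
ssh centos1.test.com date     # clock on the ResourceManager host
# If the two differ by more than a few minutes, sync the clocks (for example with NTP/chrony) and retry spark-shell.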
02-06-2018
05:04 AM
[root@ conf]# spark-shell --master yarn --conf spark.ui.port=12234
18/01/31 09:58:55 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/01/31 09:58:56 INFO SecurityManager: Changing view acls to: root
18/01/31 09:58:56 INFO SecurityManager: Changing modify acls to: root
18/01/31 09:58:56 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
18/01/31 09:58:56 INFO HttpServer: Starting HTTP Server
18/01/31 09:58:57 INFO Server: jetty-8.y.z-SNAPSHOT
18/01/31 09:58:57 INFO AbstractConnector: Started SocketConnector@0.0.0.0:43155
18/01/31 09:58:57 INFO Utils: Successfully started service 'HTTP class server' on port 43155.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 1.6.2
/_/
.
.
.
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
<console>:16: error: not found: value sqlContext
import sqlContext.implicits._
^
<console>:16: error: not found: value sqlContext
import sqlContext.sql
^
scala>
02-06-2018
02:24 AM
When a client tries to read (SELECT) a block that is already open for write (INSERT/DELETE):
1) The client requests to read a block from the HDFS file system.
2) The block is already open for write, so the read waits until the write operation completes, because the block's start/end IDs can change while it is being written.
3) The client retries up to "dfs.client.failover.max.attempts" (set in hdfs-site.xml); for example, with a value of 10 it makes 10 read attempts, and if the write completes in the meantime the read goes through.
4) If the client cannot read within the maximum number of attempts, the read request fails.
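To check what this retry limit is set to for a given client, a small sketch assuming a standard Hadoop client installation:
hdfs getconf -confKey dfs.client.failover.max.attempts   # prints the effective value the client will use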
02-06-2018
01:39 AM
The service check is failing with: Container killed on request. Exit code is 143. Container exited with a non-zero exit code 143. Failing this attempt. Failing the application.
02-01-2018
02:03 AM
I missed updating this earlier: even after setting the parameters above, the service check still fails with the same error. Please advise.
01-31-2018
02:04 PM
Thanks, Jay, for the reply. The parameters are currently set as follows: mapreduce.reduce.memory.mb: 400 MB; mapreduce.map.memory.mb: 350 MB; mapreduce.reduce.java.opts: 240 MB; mapreduce.map.java.opts: 250 MB.
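Not from this thread, but a commonly used convention is to keep mapreduce.map.java.opts / mapreduce.reduce.java.opts at roughly 80% of the corresponding *.memory.mb container size, leaving the rest for non-heap JVM overhead. A tiny arithmetic sketch under that assumption, using the 512 MB container size proposed elsewhere in this thread:
echo $((512 * 80 / 100))   # -> 409, so something like -Xmx410m for the *.java.opts settings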
01-31-2018
10:54 AM
18/01/30 05:36:48 INFO impl.YarnClientImpl: Submitted application application_1517270734071_0001
18/01/30 05:36:49 INFO mapreduce.Job: The url to track the job: http://centos1.test.com:8088/proxy/application_1517270734071_0001/
18/01/30 05:36:49 INFO mapreduce.Job: Running job: job_1517270734071_0001
18/01/30 05:37:28 INFO mapreduce.Job: Job job_1517270734071_0001 running in uber mode : false
18/01/30 05:37:28 INFO mapreduce.Job: map 0% reduce 0%
18/01/30 05:37:28 INFO mapreduce.Job: Job job_1517270734071_0001 failed with state FAILED due to: Application application_1517270734071_0001 failed 2 times due to AM Container for appattempt_1517270734071_0001_000002 exited with exitCode: -104
For more detailed output, check application tracking page:http://centos1.test.com:8088/cluster/app/application_1517270734071_0001Then, click on links to logs of each attempt.
Diagnostics: Container [pid=49027,containerID=container_e03_1517270734071_0001_02_000001] is running beyond physical memory limits. Current usage: 173 MB of 170 MB physical memory used; 1.9 GB of 680 MB virtual memory used. Killing container.
Dump of the process-tree for container_e03_1517270734071_0001_02_000001 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 49027 49025 49027 49027 (bash) 0 0 108630016 187 /bin/bash -c /usr/jdk64/jdk1.8.0_60/bin/java -Djava.io.tmpdir=/hadoop/yarn/local/usercache/ambari-qa/appcache/application_1517270734071_0001/container_e03_1517270734071_0001_02_000001/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/hadoop/yarn/log/application_1517270734071_0001/container_e03_1517270734071_0001_02_000001 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog -Dhdp.version=2.4.3.0-227 -Xmx136m -Dhdp.version=2.4.3.0-227 org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1>/hadoop/yarn/log/application_1517270734071_0001/container_e03_1517270734071_0001_02_000001/stdout 2>/hadoop/yarn/log/application_1517270734071_0001/container_e03_1517270734071_0001_02_000001/stderr
|- 49041 49027 49027 49027 (java) 1111 341 1950732288 44101 /usr/jdk64/jdk1.8.0_60/bin/java -Djava.io.tmpdir=/hadoop/yarn/local/usercache/ambari-qa/appcache/application_1517270734071_0001/container_e03_1517270734071_0001_02_000001/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/hadoop/yarn/log/application_1517270734071_0001/container_e03_1517270734071_0001_02_000001 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog -Dhdp.version=2.4.3.0-227 -Xmx136m -Dhdp.version=2.4.3.0-227 org.apache.hadoop.mapreduce.v2.app.MRAppMaster
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
18/01/30 05:37:28 INFO mapreduce.Job: Counters: 0
Labels:
- Apache Hadoop
- Apache YARN
01-29-2018
05:16 AM
One powerful cleanup is the Python script below, run on the Hadoop host; it cleans up everything, including service users, directories, and files:
[root@.ssh]# python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py --silent
INFO:HostCleanup:
Killing pid's: ['']
INFO:HostCleanup:Deleting packages: ['']
INFO:HostCleanup:
Deleting users: ['ams', 'ambari-qa', 'yarn', 'mapred', 'tez', 'hbase', 'sqoop', 'oozie', 'falcon', 'flume', 'hive', 'hcat']
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf ams
INFO:HostCleanup:Successfully deleted user: ams
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf ambari-qa
INFO:HostCleanup:Successfully deleted user: ambari-qa
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf yarn
INFO:HostCleanup:Successfully deleted user: yarn
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf mapred
INFO:HostCleanup:Successfully deleted user: mapred
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf tez
INFO:HostCleanup:Successfully deleted user: tez
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf hbase
INFO:HostCleanup:Successfully deleted user: hbase
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf sqoop
INFO:HostCleanup:Successfully deleted user: sqoop
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf oozie
INFO:HostCleanup:Successfully deleted user: oozie
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf falcon
INFO:HostCleanup:Successfully deleted user: falcon
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf flume
INFO:HostCleanup:Successfully deleted user: flume
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf hive
INFO:HostCleanup:Successfully deleted user: hive
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf hcat
INFO:HostCleanup:Successfully deleted user: hcat
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh groupdel hadoop
WARNING:HostCleanup:Cannot delete group : hadoop, groupdel: cannot remove the primary group of user 'hduser'
INFO:HostCleanup:Path doesn't exists: /home/ams
INFO:HostCleanup:Path doesn't exists: /home/ambari-qa
INFO:HostCleanup:Path doesn't exists: /home/yarn
INFO:HostCleanup:Path doesn't exists: /home/mapred
INFO:HostCleanup:Path doesn't exists: /home/tez
INFO:HostCleanup:Path doesn't exists: /home/hbase
INFO:HostCleanup:Path doesn't exists: /home/sqoop
INFO:HostCleanup:Path doesn't exists: /home/oozie
INFO:HostCleanup:Path doesn't exists: /home/falcon
INFO:HostCleanup:Path doesn't exists: /home/flume
INFO:HostCleanup:Path doesn't exists: /home/hive
INFO:HostCleanup:Path doesn't exists: /home/hcat
INFO:HostCleanup:Deleting file/folder: /tmp/Jetty_0_0_0_0_60010_master____q3nwom
INFO:HostCleanup:Deleting file/folder: /tmp/Jetty_0_0_0_0_34300_mapreduce____.dx7bll
INFO:HostCleanup:Deleting file/folder: /tmp/hbase-hbase
INFO:HostCleanup:Deleting file/folder: /tmp/Jetty_0_0_0_0_34025_mapreduce____jgsznr
INFO:HostCleanup:Deleting file/folder: /tmp/Jetty_centos4_test_com_8088_cluster____.cl0kmf
INFO:HostCleanup:Deleting file/folder: /tmp/Jetty_0_0_0_0_8042_node____19tj0x
INFO:HostCleanup:
Deleting directories: ['']
INFO:HostCleanup:Path doesn't exists:
INFO:HostCleanup:
Deleting repo files: []
INFO:HostCleanup:
Erasing alternatives:{'symlink_list': [''], 'target_list': ['']}
INFO:HostCleanup:Path doesn't exists:
INFO:HostCleanup:Clean-up completed. The output is at /var/lib/ambari-agent/data/hostcleanup.result
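A couple of follow-up checks after the script finishes, assuming the default path the script itself reports (the yarn user is just one example of the accounts it removed):
cat /var/lib/ambari-agent/data/hostcleanup.result            # review exactly what was cleaned
id yarn 2>/dev/null || echo "user yarn has been removed"     # spot-check that a service user is really gone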