Member since: 07-18-2016
Posts: 262
Kudos Received: 12
Solutions: 21
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 6602 | 09-21-2018 03:16 AM
 | 3133 | 07-25-2018 05:03 AM
 | 4082 | 02-13-2018 02:00 AM
 | 1900 | 01-21-2018 02:47 AM
 | 37771 | 08-08-2017 10:32 AM
04-03-2021
01:46 PM
Check the heap assigned to the NodeManager. In hadoop/etc/hadoop/yarn-env.sh, define the heap for the NodeManager:

JAVA=$JAVA_HOME/bin/java
JAVA_HEAP_MAX=-Xmx256m
# For setting YARN specific HEAP sizes please use this
# parameter and set appropriately
YARN_HEAPSIZE=256

After that, you can configure the resources in yarn-site.xml:

<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>2000</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-vcores</name>
<value>1</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2000</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>1</value>
</property>
</configuration>

Please like and confirm if it helps. Thanks, Ashish
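As a quick sanity check on the yarn-site.xml values above (a sketch, with the values hard-coded rather than read from the file): the scheduler's minimum allocation must not exceed the NodeManager's total memory, or no container can ever be granted.

```shell
# Values mirror the yarn-site.xml snippet above; adjust for your cluster.
min_alloc_mb=2000          # yarn.scheduler.minimum-allocation-mb
nm_memory_mb=2000          # yarn.nodemanager.resource.memory-mb
nm_vcores=1                # yarn.nodemanager.resource.cpu-vcores

# A NodeManager can host at least one container only if the smallest
# grantable container fits inside its advertised memory and vcores.
if [ "$min_alloc_mb" -le "$nm_memory_mb" ] && [ "$nm_vcores" -ge 1 ]; then
    echo "OK: the NodeManager can host at least one container"
else
    echo "ERROR: minimum allocation exceeds NodeManager memory" >&2
    exit 1
fi
```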
11-28-2019
12:05 AM
Hi, the link seems to be broken. Can you share a working one with us? Thanks.
09-26-2019
12:03 AM
I also faced the same issue. I found the problem was with mysql-connector-java.jar. I followed the steps below:
1. Check whether you are able to connect remotely to the MySQL database.
2. If you are able to connect, then the problem is the mysql-connector-java.jar in Ambari.
3. Download the correct version of the MySQL connector jar from https://dev.mysql.com/downloads/connector/j/
4. Stop the Ambari server.
5. Remove the old MySQL connector jar from Ambari.
6. Set it up again using: ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java-8.0.16
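Step 6 above can be sketched as a small script. The jar path and filename here are examples (substitute your actual file); the guard catches the stray space that often creeps into this command line, which makes Ambari point at a nonexistent driver.

```shell
# Hypothetical jar path for illustration; substitute your downloaded file.
jdbc_jar="/usr/share/java/mysql-connector-java-8.0.16.jar"

# 1. Verify remote connectivity first (run this on the Ambari host):
#    mysql -h db.example.com -u ambari -p -e 'SELECT 1'

# Guard against an accidental space inside the driver path.
case "$jdbc_jar" in
  *" "*) echo "ERROR: jar path contains a space: $jdbc_jar" >&2; exit 1 ;;
esac

# 6. The setup command to run after stopping the Ambari server:
echo "ambari-server setup --jdbc-db=mysql --jdbc-driver=${jdbc_jar}"
```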
09-21-2018
03:16 AM
Updating late; after further checking, information as below.
1) hadoop fs -copyFromLocal file1.dat /home/hadoop/file1.dat — this is a local command on the Linux server. You can check the local server process with: ps -ef | grep file1.dat | grep -i copyFromLocal. You will find the process ID, which again confirms it is a local process.
2) How to find the YARN application ID for this copyFromLocal command — since it is a local client command and uses local server resources, you will not be able to find any MR/YARN job for it. The "hadoop fs" command uses resources on the local Linux server (and the Hadoop cluster, but for the copy itself only); because the process is local, it does not create an MR/YARN job.
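The ps check in point 1 can be demonstrated without a cluster. Here a plain `sleep` stands in for the copyFromLocal client JVM, since the technique (grep the process table for the command, with the bracket trick excluding the grep itself) is the same:

```shell
# Stand-in for a running 'hadoop fs -copyFromLocal ...' client process.
sleep 30 &
pid=$!

# The [s] bracket keeps the grep command itself out of the match,
# so we only see the real process.
if ps -ef | grep "[s]leep 30" > /dev/null; then
    echo "found local process $pid"
fi

kill "$pid"
```

A YARN application, by contrast, would show up in `yarn application -list` rather than only in the local process table.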
07-25-2018
05:03 AM
Finally I did the following, and the certification team refunded the amount:
1) Reached Hortonworks customer care on their contact number.
2) Shared the certification registration number and the person's name.
3) Raised a complaint about the certification issue; a ticket was opened on our request.
4) After 2 weeks, the certification team refunded the amount.
They confirmed they are upgrading the certification platform from Aug 2018. I will check the reviews, and if there are no complaints I will try to take the certification again. Hope it helps. Thank you.
02-13-2018
01:04 AM
When launching spark-shell, I am getting the below error:
[root@centos4 ~]# spark-shell --master yarn \
> --deploy-mode client \
> --conf spark.ui.port=12335 \
> --num-executors 1 \
> --executor-memory 512M
18/02/14 18:58:58 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/02/14 18:58:59 INFO SecurityManager: Changing view acls to: root
18/02/14 18:58:59 INFO SecurityManager: Changing modify acls to: root
18/02/14 18:58:59 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
18/02/14 18:59:00 INFO HttpServer: Starting HTTP Server
18/02/14 18:59:00 INFO Server: jetty-8.y.z-SNAPSHOT
18/02/14 18:59:01 INFO AbstractConnector: Started SocketConnector@0.0.0.0:39940
18/02/14 18:59:01 INFO Utils: Successfully started service 'HTTP class server' on port 39940.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 1.6.2
/_/
Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_60)
Type in expressions to have them evaluated.
Type :help for more information.
18/02/14 18:59:17 INFO SparkContext: Running Spark version 1.6.2
18/02/14 18:59:17 INFO SecurityManager: Changing view acls to: root
18/02/14 18:59:17 INFO SecurityManager: Changing modify acls to: root
18/02/14 18:59:17 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
18/02/14 18:59:19 INFO Utils: Successfully started service 'sparkDriver' on port 45694.
18/02/14 18:59:23 INFO Slf4jLogger: Slf4jLogger started
18/02/14 18:59:24 INFO Remoting: Starting remoting
18/02/14 18:59:25 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.154.114:43865]
18/02/14 18:59:25 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 43865.
18/02/14 18:59:25 INFO SparkEnv: Registering MapOutputTracker
18/02/14 18:59:25 INFO SparkEnv: Registering BlockManagerMaster
18/02/14 18:59:25 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-acbf69ba-bf4b-4fae-9d28-ae78d9b60aca
18/02/14 18:59:25 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
18/02/14 18:59:26 INFO SparkEnv: Registering OutputCommitCoordinator
18/02/14 18:59:26 INFO Server: jetty-8.y.z-SNAPSHOT
18/02/14 18:59:26 INFO AbstractConnector: Started SelectChannelConnector@0.0.0.0:12335
18/02/14 18:59:26 INFO Utils: Successfully started service 'SparkUI' on port 12335.
18/02/14 18:59:26 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.154.114:12335
spark.yarn.driver.memoryOverhead is set but does not apply in client mode.
18/02/14 18:59:28 INFO TimelineClientImpl: Timeline service address: http://centos1.test.com:8188/ws/v1/timeline/
18/02/14 18:59:29 INFO RMProxy: Connecting to ResourceManager at centos1.test.com/192.168.154.112:8050
18/02/14 18:59:31 WARN DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
18/02/14 18:59:31 INFO Client: Requesting a new application from cluster with 2 NodeManagers
18/02/14 18:59:31 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (1024 MB per container)
18/02/14 18:59:31 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
18/02/14 18:59:31 INFO Client: Setting up container launch context for our AM
18/02/14 18:59:31 INFO Client: Setting up the launch environment for our AM container
18/02/14 18:59:31 INFO Client: Using the spark assembly jar on HDFS because you are using HDP, defaultSparkAssembly:hdfs://centos.test.com:8020/hdp/apps/2.4.3.0-227/spark/spark-hdp-assembly.jar
18/02/14 18:59:31 INFO Client: Preparing resources for our AM container
18/02/14 18:59:31 INFO Client: Using the spark assembly jar on HDFS because you are using HDP, defaultSparkAssembly:hdfs://centos.test.com:8020/hdp/apps/2.4.3.0-227/spark/spark-hdp-assembly.jar
18/02/14 18:59:31 INFO Client: Source and destination file systems are the same. Not copying hdfs://centos.test.com:8020/hdp/apps/2.4.3.0-227/spark/spark-hdp-assembly.jar
18/02/14 18:59:31 INFO Client: Uploading resource file:/tmp/spark-611a1716-891c-4b5b-84aa-3eeebb204084/__spark_conf__3734921673150808950.zip -> hdfs://centos.test.com:8020/user/root/.sparkStaging/application_1518599648055_0002/__spark_conf__3734921673150808950.zip
18/02/14 18:59:31 INFO SecurityManager: Changing view acls to: root
18/02/14 18:59:31 INFO SecurityManager: Changing modify acls to: root
18/02/14 18:59:31 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
18/02/14 18:59:31 INFO Client: Submitting application 2 to ResourceManager
18/02/14 18:59:32 INFO YarnClientImpl: Submitted application application_1518599648055_0002
18/02/14 18:59:32 INFO SchedulerExtensionServices: Starting Yarn extension services with app application_1518599648055_0002 and attemptId None
18/02/14 18:59:33 INFO Client: Application report for application_1518599648055_0002 (state: ACCEPTED)
18/02/14 18:59:33 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1518600178504
final status: UNDEFINED
tracking URL: http://centos1.test.com:8088/proxy/application_1518599648055_0002/
user: root
18/02/14 18:59:34 INFO Client: Application report for application_1518599648055_0002 (state: ACCEPTED)
18/02/14 18:59:35 INFO Client: Application report for application_1518599648055_0002 (state: FAILED)
18/02/14 18:59:35 INFO Client:
client token: N/A
diagnostics: Application application_1518599648055_0002 failed 2 times due to Error launching appattempt_1518599648055_0002_000002. Got exception: org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container.
This token is expired. current time is 1518614975052 found 1518600781196
Note: System times on machines may be out of sync. Check system time and time zones.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:122)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:250)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1518600178504
final status: FAILED
tracking URL: http://centos1.test.com:8088/cluster/app/application_1518599648055_0002
user: root
18/02/14 18:59:35 INFO Client: Deleting staging directory .sparkStaging/application_1518599648055_0002
18/02/14 18:59:35 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:122)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
at $line3.$read$$iwC$$iwC.<init>(<console>:15)
at $line3.$read$$iwC.<init>(<console>:24)
at $line3.$read.<init>(<console>:26)
at $line3.$read$.<init>(<console>:30)
at $line3.$read$.<clinit>(<console>)
at $line3.$eval$.<init>(<console>:7)
at $line3.$eval$.<clinit>(<console>)
at $line3.$eval.$print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
18/02/14 18:59:35 INFO SparkUI: Stopped Spark web UI at http://192.168.154.114:12335
18/02/14 18:59:35 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
18/02/14 18:59:35 INFO YarnClientSchedulerBackend: Shutting down all executors
18/02/14 18:59:35 INFO YarnClientSchedulerBackend: Asking each executor to shut down
18/02/14 18:59:35 INFO SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
18/02/14 18:59:35 INFO YarnClientSchedulerBackend: Stopped
18/02/14 18:59:35 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/02/14 18:59:35 INFO MemoryStore: MemoryStore cleared
18/02/14 18:59:35 INFO BlockManager: BlockManager stopped
18/02/14 18:59:35 INFO BlockManagerMaster: BlockManagerMaster stopped
18/02/14 18:59:35 WARN MetricsSystem: Stopping a MetricsSystem that is not running
18/02/14 18:59:35 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/02/14 18:59:35 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
18/02/14 18:59:35 INFO SparkContext: Successfully stopped SparkContext
18/02/14 18:59:35 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
18/02/14 18:59:36 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:122)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
at $iwC$$iwC.<init>(<console>:15)
at $iwC.<init>(<console>:24)
at <init>(<console>:26)
at .<init>(<console>:30)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
java.lang.NullPointerException
at org.apache.spark.sql.SQLContext$.createListenerAndUI(SQLContext.scala:1367)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1028)
at $iwC$$iwC.<init>(<console>:15)
at $iwC.<init>(<console>:24)
at <init>(<console>:26)
at .<init>(<console>:30)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:132)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
<console>:16: error: not found: value sqlContext
import sqlContext.implicits._
^
<console>:16: error: not found: value sqlContext
import sqlContext.sql
02-13-2018
02:00 AM
After further analysis I found the following things and changed them; then the MR2 service check / "Container killed on request. Exit code is 143" error went away.
1) yarn-site.xml:
=> The initial container was not able to allocate memory; the sizes were only yarn.scheduler.minimum-allocation-mb (178 MB) and yarn.scheduler.maximum-allocation-mb (512 MB).
=> Checked the HDFS block size = 128 MB. Since the initial container could not allocate, I increased the minimum/maximum to multiples of the 128 MB block size, as below.
=> Changed yarn.scheduler.minimum-allocation-mb from 178 to 512 MB and yarn.scheduler.maximum-allocation-mb from 512 to 1024 MB in yarn-site.xml.
2) mapred-site.xml:
Once the above parameters were changed in yarn-site.xml, the following parameters needed to change in mapred-site.xml:
=> mapreduce.task.io.sort.mb from 95 to 286 MB; mapreduce.map.memory.mb / mapreduce.reduce.memory.mb to 512 MB.
=> yarn.app.mapreduce.am.resource.mb from 170 to 512 MB.
Increase these parameter values in multiples of the 128 MB block size to get out of the container-killed error. We changed the parameters in yarn-site.xml and mapred-site.xml through Ambari, due to resource constraints on the existing cluster, until the error went away. The same rule applies to get out of the error below:
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
INFO mapreduce.Job: Counters: 0
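The "multiple of the 128 MB block size" rule above can be expressed as a small helper. Note this shows only the minimal rounding; the values actually chosen in the post (e.g. 178 -> 512) include extra headroom beyond the nearest multiple.

```shell
# Round a memory setting (in MB) up to the next multiple of the
# 128 MB HDFS block size.
round_to_block() {
    local mb=$1 block=128
    echo $(( ( (mb + block - 1) / block ) * block ))
}

round_to_block 178   # -> 256 (next multiple of 128 above 178)
round_to_block 512   # -> 512 (already a multiple, unchanged)
```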
01-21-2018
01:01 PM
After fixing the repository.
12-17-2017
08:04 AM
1 Kudo
Your comments are appreciated, thank you. As you mentioned, and in addition, we can change the input split size according to our requirement by using the parameters below.
mapred.max.split.size — if we want to increase the input split size, use this parameter while running the job.
dfs.block.size — the global HDFS block size parameter, used while storing data in the cluster.
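To see how the split size drives parallelism, the rough rule is: number of map tasks ≈ ceil(file size / split size). A quick calculation, assuming a hypothetical 1 GB input file and a 256 MB max split size:

```shell
# Rough estimate of map tasks for one input file.
file_size=$((1024 * 1024 * 1024))     # 1 GB input file (example)
split_size=$((256 * 1024 * 1024))     # mapred.max.split.size = 256 MB

# Ceiling division: number of input splits, hence ~number of map tasks.
splits=$(( (file_size + split_size - 1) / split_size ))
echo "$splits"   # -> 4
```

Doubling mapred.max.split.size would halve the number of splits, trading per-task overhead against per-task work.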
12-12-2018
01:15 PM
Hi, for me, just restarting all the services in Ambari fixed the issue.