Member since: 07-18-2016
Posts: 262
Kudos Received: 12
Solutions: 21
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4170 | 09-21-2018 03:16 AM
 | 1667 | 07-25-2018 05:03 AM
 | 2321 | 02-13-2018 02:00 AM
 | 1144 | 01-21-2018 02:47 AM
 | 30377 | 08-08-2017 10:32 AM
09-21-2018
03:16 AM
Updating late. After further checking, here is what I found. 1) hadoop fs -copyFromLocal file1.dat /home/hadoop/file1.dat is a client-side command that runs as a local process on the Linux server. You can confirm this with #ps -ef | grep file1.dat | grep -i copyFromLocal, which shows the local process ID, so it really is a local process. 2) How to find the YARN application ID for this copyFromLocal command: since it is a local client command that uses local server resources, there is no MR/YARN job to find. The "hadoop fs" command consumes resources on the local Linux server and on the Hadoop cluster for the data transfer only; because the process is purely local, it does not create an MR/YARN job.
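A quick way to verify this on the edge node (a sketch only; file1.dat is just the example file from this thread):
#hadoop fs -copyFromLocal file1.dat /home/hadoop/file1.dat &
#ps -ef | grep file1.dat | grep -i copyFromLocal      --> the copy appears only as a local JVM process
#yarn application -list -appStates RUNNING            --> no corresponding YARN application is listed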
... View more
08-30-2018
01:42 AM
Entries will be updated in the logs; however, is there a command to check the application ID for a Hadoop command? That is what I am looking for. Example: for YARN we can list the running jobs with the YARN command #yarn application -list
... View more
08-23-2018
09:52 PM
I have a job that copies data between the local file system and HDFS: 1) hadoop fs -copyFromLocal file1.dat /home/hadoop/file1.dat 2) How do I find the YARN application ID for this copyFromLocal command? Thanks.
... View more
Labels:
- Apache Hadoop
- Apache YARN
07-25-2018
05:03 AM
Finally I did the following and the certification team refunded the amount. 1) Reached Hortonworks customer care on the contact number. 2) Shared the certification registration number and the name of the person. 3) Raised a complaint about the issue with the certification, and a ticket was opened for the request. 4) After 2 weeks the certification team refunded the amount. They confirmed they are upgrading the certification platform from Aug 2018. I will check the reviews and, if there are no complaints, will try to take the certification again. Hope it helps. Thank you.
... View more
07-04-2018
08:32 AM
Falcon was running fine and I didn't find any error in the log.
... View more
07-03-2018
03:39 PM
$falcon entity -type process -file oozie.xml -submitAndSchedule
ERROR: Bad Request;default/submit command is already issued for (process)
Unable to connect to Falcon server, please check if the URL is correct and Falcon server is up and running
Stacktrace:
com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
at com.sun.jersey.api.client.Client.handle(Client.java:648)
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
at com.sun.jersey.api.client.WebResource$Builder.method(WebResource.java:623)
at org.apache.falcon.client.FalconClient$ResourceBuilder.call(FalconClient.java:861)
at org.apache.falcon.client.FalconClient.submitAndSchedule(FalconClient.java:446)
at org.apache.falcon.cli.FalconEntityCLI.entityCommand(FalconEntityCLI.java:268)
at org.apache.falcon.cli.FalconCLI.run(FalconCLI.java:125)
at org.apache.falcon.cli.FalconCLI.main(FalconCLI.java:66)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1536)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1441)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:240)
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:147)
... 9 more
... View more
Labels:
06-13-2018
08:20 AM
Could you help? There has been no response for 15 days.
... View more
06-03-2018
02:02 PM
Can someone from Hortonworks please respond? It has been 3 days with no information.
... View more
06-02-2018
01:12 AM
Sent mail to the certification team; this is the request number: # Your request (14199) has been received and is being reviewed by our support staff.
... View more
05-31-2018
01:51 PM
Certification team: please respond with your inputs.
... View more
05-31-2018
02:41 AM
@William Gonzalez Please advise with your comments and what I need to do next.
... View more
05-30-2018
06:09 PM
Hortonworks is a great company with a great product, but I think it should recognize it has a big problem with its exam provider. Complaints about the exam environment and the subsequent delivery are common, and the experience is very poor and appalling. My friend sat for the exam, and I personally encountered the same problem on my first attempt: network issues. This needs attention. Hortonworks developers/engineers are getting frustrated with the PSI exam environment, but Hortonworks is doing nothing to resolve the problem, and delays in delivering exam results are unacceptable when there is an SLA. If Hortonworks wants consultants to deliver its products and compete with other vendors, I think it should rethink the exam delivery process. 1) Simple errors/output cannot be checked in the given window. 2) Scrolling up/down is very slow and not user friendly. 3) I am not the only person complaining; there are many others, as you can see in the community. 4) If you cannot fix this, kindly close the certification program, so at least we won't try to get certified from Hortonworks. Please don't screw up such a nice product! Report this to the responsible managers 🙂 Exam Sponsor: Hortonworks Exam: HDP Certified Developer: Spark Exam Code: HDPCD:Spark Scheduled Date: May 30, 2018 Scheduled Time: 11:00 PM Malay Peninsula Standard Time Confirmation Code: 351-669 Candidate Id: 3994184016
... View more
Labels:
- Apache Hadoop
05-30-2018
09:56 AM
It is the same idea as keeping/saving data on a Linux file system; however, HDFS is a distributed file system, so the data is spread across the cluster. Copying data from your local system: hdfs dfs -copyFromLocal data.txt /hdfspath
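For example (the paths are only placeholders):
#hdfs dfs -mkdir -p /hdfspath
#hdfs dfs -copyFromLocal data.txt /hdfspath
#hdfs dfs -ls /hdfspath      --> confirms the file is now in HDFS, replicated across the cluster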
... View more
02-13-2018
02:00 AM
After further analysis I found and changed the following, after which the MR2 service check passed and the "Container killed on request. Exit code is 143" error went away.
1) yarn-site.xml:
=> The initial container could not allocate enough memory: yarn.scheduler.minimum-allocation-mb was only 178 MB and yarn.scheduler.maximum-allocation-mb only 512 MB.
=> The HDFS block size is 128 MB, so I raised the container sizes to multiples of 128 MB: yarn.scheduler.minimum-allocation-mb from 178 to 512 MB and yarn.scheduler.maximum-allocation-mb from 512 to 1024 MB in yarn-site.xml.
2) mapred-site.xml:
Once the yarn-site.xml parameters were changed, the following had to be changed in mapred-site.xml: mapreduce.task.io.sort.mb from 95 to 286 MB, mapreduce.map.memory.mb and mapreduce.reduce.memory.mb to 512 MB, and yarn.app.mapreduce.am.resource.mb from 170 to 512 MB. Increasing these values in multiples of the 128 MB block size got rid of the container-killed error. In short, the parameters in yarn-site.xml and mapred-site.xml were adjusted through Ambari (the existing values were too tight for the available resources) until the error below no longer occurred:
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
INFO mapreduce.Job: Counters: 0
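For reference, the final values described above would look roughly like this in the two files (a sketch only; tune the numbers to your own cluster):
yarn-site.xml:
<property><name>yarn.scheduler.minimum-allocation-mb</name><value>512</value></property>
<property><name>yarn.scheduler.maximum-allocation-mb</name><value>1024</value></property>
mapred-site.xml:
<property><name>mapreduce.map.memory.mb</name><value>512</value></property>
<property><name>mapreduce.reduce.memory.mb</name><value>512</value></property>
<property><name>mapreduce.task.io.sort.mb</name><value>286</value></property>
<property><name>yarn.app.mapreduce.am.resource.mb</name><value>512</value></property>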
... View more
02-13-2018
01:04 AM
When launching spark-shell I get the error below:
[root@centos4 ~]# spark-shell --master yarn \
> --deploy-mode client \
> --conf spark.ui.port=12335 \
> --num-executors 1 \
> --executor-memory 512M
18/02/14 18:58:58 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/02/14 18:58:59 INFO SecurityManager: Changing view acls to: root
18/02/14 18:58:59 INFO SecurityManager: Changing modify acls to: root
18/02/14 18:58:59 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
18/02/14 18:59:00 INFO HttpServer: Starting HTTP Server
18/02/14 18:59:00 INFO Server: jetty-8.y.z-SNAPSHOT
18/02/14 18:59:01 INFO AbstractConnector: Started SocketConnector@0.0.0.0:39940
18/02/14 18:59:01 INFO Utils: Successfully started service 'HTTP class server' on port 39940.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 1.6.2
/_/
Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_60)
Type in expressions to have them evaluated.
Type :help for more information.
18/02/14 18:59:17 INFO SparkContext: Running Spark version 1.6.2
18/02/14 18:59:17 INFO SecurityManager: Changing view acls to: root
18/02/14 18:59:17 INFO SecurityManager: Changing modify acls to: root
18/02/14 18:59:17 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
18/02/14 18:59:19 INFO Utils: Successfully started service 'sparkDriver' on port 45694.
18/02/14 18:59:23 INFO Slf4jLogger: Slf4jLogger started
18/02/14 18:59:24 INFO Remoting: Starting remoting
18/02/14 18:59:25 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.154.114:43865]
18/02/14 18:59:25 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 43865.
18/02/14 18:59:25 INFO SparkEnv: Registering MapOutputTracker
18/02/14 18:59:25 INFO SparkEnv: Registering BlockManagerMaster
18/02/14 18:59:25 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-acbf69ba-bf4b-4fae-9d28-ae78d9b60aca
18/02/14 18:59:25 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
18/02/14 18:59:26 INFO SparkEnv: Registering OutputCommitCoordinator
18/02/14 18:59:26 INFO Server: jetty-8.y.z-SNAPSHOT
18/02/14 18:59:26 INFO AbstractConnector: Started SelectChannelConnector@0.0.0.0:12335
18/02/14 18:59:26 INFO Utils: Successfully started service 'SparkUI' on port 12335.
18/02/14 18:59:26 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.154.114:12335
spark.yarn.driver.memoryOverhead is set but does not apply in client mode.
18/02/14 18:59:28 INFO TimelineClientImpl: Timeline service address: http://centos1.test.com:8188/ws/v1/timeline/
18/02/14 18:59:29 INFO RMProxy: Connecting to ResourceManager at centos1.test.com/192.168.154.112:8050
18/02/14 18:59:31 WARN DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
18/02/14 18:59:31 INFO Client: Requesting a new application from cluster with 2 NodeManagers
18/02/14 18:59:31 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (1024 MB per container)
18/02/14 18:59:31 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
18/02/14 18:59:31 INFO Client: Setting up container launch context for our AM
18/02/14 18:59:31 INFO Client: Setting up the launch environment for our AM container
18/02/14 18:59:31 INFO Client: Using the spark assembly jar on HDFS because you are using HDP, defaultSparkAssembly:hdfs://centos.test.com:8020/hdp/apps/2.4.3.0-227/spark/spark-hdp-assembly.jar
18/02/14 18:59:31 INFO Client: Preparing resources for our AM container
18/02/14 18:59:31 INFO Client: Using the spark assembly jar on HDFS because you are using HDP, defaultSparkAssembly:hdfs://centos.test.com:8020/hdp/apps/2.4.3.0-227/spark/spark-hdp-assembly.jar
18/02/14 18:59:31 INFO Client: Source and destination file systems are the same. Not copying hdfs://centos.test.com:8020/hdp/apps/2.4.3.0-227/spark/spark-hdp-assembly.jar
18/02/14 18:59:31 INFO Client: Uploading resource file:/tmp/spark-611a1716-891c-4b5b-84aa-3eeebb204084/__spark_conf__3734921673150808950.zip -> hdfs://centos.test.com:8020/user/root/.sparkStaging/application_1518599648055_0002/__spark_conf__3734921673150808950.zip
18/02/14 18:59:31 INFO SecurityManager: Changing view acls to: root
18/02/14 18:59:31 INFO SecurityManager: Changing modify acls to: root
18/02/14 18:59:31 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
18/02/14 18:59:31 INFO Client: Submitting application 2 to ResourceManager
18/02/14 18:59:32 INFO YarnClientImpl: Submitted application application_1518599648055_0002
18/02/14 18:59:32 INFO SchedulerExtensionServices: Starting Yarn extension services with app application_1518599648055_0002 and attemptId None
18/02/14 18:59:33 INFO Client: Application report for application_1518599648055_0002 (state: ACCEPTED)
18/02/14 18:59:33 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1518600178504
final status: UNDEFINED
tracking URL: http://centos1.test.com:8088/proxy/application_1518599648055_0002/
user: root
18/02/14 18:59:34 INFO Client: Application report for application_1518599648055_0002 (state: ACCEPTED)
18/02/14 18:59:35 INFO Client: Application report for application_1518599648055_0002 (state: FAILED)
18/02/14 18:59:35 INFO Client:
client token: N/A
diagnostics: Application application_1518599648055_0002 failed 2 times due to Error launching appattempt_1518599648055_0002_000002. Got exception: org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container.
This token is expired. current time is 1518614975052 found 1518600781196
Note: System times on machines may be out of sync. Check system time and time zones.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:122)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:250)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1518600178504
final status: FAILED
tracking URL: http://centos1.test.com:8088/cluster/app/application_1518599648055_0002
user: root
18/02/14 18:59:35 INFO Client: Deleting staging directory .sparkStaging/application_1518599648055_0002
18/02/14 18:59:35 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:122)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
at $line3.$read$$iwC$$iwC.<init>(<console>:15)
at $line3.$read$$iwC.<init>(<console>:24)
at $line3.$read.<init>(<console>:26)
at $line3.$read$.<init>(<console>:30)
at $line3.$read$.<clinit>(<console>)
at $line3.$eval$.<init>(<console>:7)
at $line3.$eval$.<clinit>(<console>)
at $line3.$eval.$print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
18/02/14 18:59:35 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
18/02/14 18:59:35 INFO SparkUI: Stopped Spark web UI at http://192.168.154.114:12335
18/02/14 18:59:35 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
18/02/14 18:59:35 INFO YarnClientSchedulerBackend: Shutting down all executors
18/02/14 18:59:35 INFO YarnClientSchedulerBackend: Asking each executor to shut down
18/02/14 18:59:35 INFO SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
18/02/14 18:59:35 INFO YarnClientSchedulerBackend: Stopped
18/02/14 18:59:35 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/02/14 18:59:35 INFO MemoryStore: MemoryStore cleared
18/02/14 18:59:35 INFO BlockManager: BlockManager stopped
18/02/14 18:59:35 INFO BlockManagerMaster: BlockManagerMaster stopped
18/02/14 18:59:35 WARN MetricsSystem: Stopping a MetricsSystem that is not running
18/02/14 18:59:35 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/02/14 18:59:35 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
18/02/14 18:59:35 INFO SparkContext: Successfully stopped SparkContext
18/02/14 18:59:35 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
18/02/14 18:59:36 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:122)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
at $iwC$$iwC.<init>(<console>:15)
at $iwC.<init>(<console>:24)
at <init>(<console>:26)
at .<init>(<console>:30)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
java.lang.NullPointerException
at org.apache.spark.sql.SQLContext$.createListenerAndUI(SQLContext.scala:1367)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1028)
at $iwC$$iwC.<init>(<console>:15)
at $iwC.<init>(<console>:24)
at <init>(<console>:26)
at .<init>(<console>:30)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:132)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
<console>:16: error: not found: value sqlContext
import sqlContext.implicits._
^
<console>:16: error: not found: value sqlContext
import sqlContext.sql
... View more
02-06-2018
05:04 AM
[root@ conf]# spark-shell --master yarn --conf spark.ui.port=12234
18/01/31 09:58:55 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/01/31 09:58:56 INFO SecurityManager: Changing view acls to: root
18/01/31 09:58:56 INFO SecurityManager: Changing modify acls to: root
18/01/31 09:58:56 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
18/01/31 09:58:56 INFO HttpServer: Starting HTTP Server
18/01/31 09:58:57 INFO Server: jetty-8.y.z-SNAPSHOT
18/01/31 09:58:57 INFO AbstractConnector: Started SocketConnector@0.0.0.0:43155
18/01/31 09:58:57 INFO Utils: Successfully started service 'HTTP class server' on port 43155.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 1.6.2
/_/
.
.
.
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
<console>:16: error: not found: value sqlContext
import sqlContext.implicits._
^
<console>:16: error: not found: value sqlContext
import sqlContext.sql
^
scala>
... View more
Labels:
02-06-2018
02:24 AM
When a client tries to read (SELECT) a file that is already open for write (INSERT/DELETE): 1) The client requests to read a block from the HDFS file system. 2) That block is already open for write, so the read waits until the write operation completes, because the block's start/end IDs change during the write. 3) The client retries up to "dfs.client.failover.max.attempts" from hdfs-site.xml (e.g. 10 attempts); if the write completes in the meantime, the read succeeds. 4) If the client cannot complete the read within the maximum "dfs.client.failover.max.attempts" attempts, the read request fails.
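The retry limit mentioned above lives in hdfs-site.xml and would look roughly like this (10 is only the illustrative value used above):
<property>
  <name>dfs.client.failover.max.attempts</name>
  <value>10</value>
</property>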
... View more
02-06-2018
01:39 AM
Service check is failing: Container killed on request. Exit code is 143. Container exited with a non-zero exit code 143. Failing this attempt. Failing the application.
... View more
02-01-2018
02:03 AM
Missed to update earlier: even after setting the above parameters, the service check fails with the same error. Please advise.
... View more
01-31-2018
02:04 PM
Thanks Jay for the reply. The parameters are currently set to the following values: mapreduce.reduce.memory.mb: 400 MB, mapreduce.map.memory.mb: 350 MB, mapreduce.reduce.java.opts: 240 MB, mapreduce.map.java.opts: 250 MB.
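In mapred-site.xml those settings would look roughly like this (a sketch of the values listed above; the heap sizes in *.java.opts are normally kept somewhat below the matching *.memory.mb container sizes):
<property><name>mapreduce.map.memory.mb</name><value>350</value></property>
<property><name>mapreduce.reduce.memory.mb</name><value>400</value></property>
<property><name>mapreduce.map.java.opts</name><value>-Xmx250m</value></property>
<property><name>mapreduce.reduce.java.opts</name><value>-Xmx240m</value></property>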
... View more
01-31-2018
10:54 AM
18/01/30 05:36:48 INFO impl.YarnClientImpl: Submitted application application_1517270734071_0001
18/01/30 05:36:49 INFO mapreduce.Job: The url to track the job: http://centos1.test.com:8088/proxy/application_1517270734071_0001/
18/01/30 05:36:49 INFO mapreduce.Job: Running job: job_1517270734071_0001
18/01/30 05:37:28 INFO mapreduce.Job: Job job_1517270734071_0001 running in uber mode : false
18/01/30 05:37:28 INFO mapreduce.Job: map 0% reduce 0%
18/01/30 05:37:28 INFO mapreduce.Job: Job job_1517270734071_0001 failed with state FAILED due to: Application application_1517270734071_0001 failed 2 times due to AM Container for appattempt_1517270734071_0001_000002 exited with exitCode: -104
For more detailed output, check application tracking page:http://centos1.test.com:8088/cluster/app/application_1517270734071_0001Then, click on links to logs of each attempt.
Diagnostics: Container [pid=49027,containerID=container_e03_1517270734071_0001_02_000001] is running beyond physical memory limits. Current usage: 173 MB of 170 MB physical memory used; 1.9 GB of 680 MB virtual memory used. Killing container.
Dump of the process-tree for container_e03_1517270734071_0001_02_000001 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 49027 49025 49027 49027 (bash) 0 0 108630016 187 /bin/bash -c /usr/jdk64/jdk1.8.0_60/bin/java -Djava.io.tmpdir=/hadoop/yarn/local/usercache/ambari-qa/appcache/application_1517270734071_0001/container_e03_1517270734071_0001_02_000001/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/hadoop/yarn/log/application_1517270734071_0001/container_e03_1517270734071_0001_02_000001 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog -Dhdp.version=2.4.3.0-227 -Xmx136m -Dhdp.version=2.4.3.0-227 org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1>/hadoop/yarn/log/application_1517270734071_0001/container_e03_1517270734071_0001_02_000001/stdout 2>/hadoop/yarn/log/application_1517270734071_0001/container_e03_1517270734071_0001_02_000001/stderr
|- 49041 49027 49027 49027 (java) 1111 341 1950732288 44101 /usr/jdk64/jdk1.8.0_60/bin/java -Djava.io.tmpdir=/hadoop/yarn/local/usercache/ambari-qa/appcache/application_1517270734071_0001/container_e03_1517270734071_0001_02_000001/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/hadoop/yarn/log/application_1517270734071_0001/container_e03_1517270734071_0001_02_000001 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog -Dhdp.version=2.4.3.0-227 -Xmx136m -Dhdp.version=2.4.3.0-227 org.apache.hadoop.mapreduce.v2.app.MRAppMaster
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
18/01/30 05:37:28 INFO mapreduce.Job: Counters: 0
... View more
Labels:
- Apache Hadoop
- Apache YARN
01-30-2018
02:09 AM
After the upgrade to HDP 2.4 failed, there was no option left but to clean up HDP and Ambari 2.2 on all servers and re-install Ambari 2.2 and HDP 2.4.
1) Ambari cleanup on the server, and on the slave nodes for ambari-agent
****Ambari-server****
#ambari-server stop
#yum erase ambari-server
****Ambari-agent servers****
#ambari-agent stop
#yum erase ambari-agent
2) Repo cleanup on all servers
#cd /etc/yum.repos.d/
#rm -rf hdp* ambari*
#yum clean all
3) Cleanup of users, directories and logs (this deletes all HDP users; run it on all servers)
#python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py --silent
4) Verify whether any RPMs remain on the server and, if they exist, uninstall them
#rpm -qa | grep hdp*
#rpm -qa | grep hadoop*
#yum remove hadoop*
#yum remove hdp*
... View more
01-29-2018
05:16 AM
One of the most powerful cleanups is the Python script below on Hadoop hosts; it cleans everything, including users, directories and files. [root@.ssh]# python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py --silent
INFO:HostCleanup:
Killing pid's: ['']
INFO:HostCleanup:Deleting packages: ['']
INFO:HostCleanup:
Deleting users: ['ams', 'ambari-qa', 'yarn', 'mapred', 'tez', 'hbase', 'sqoop', 'oozie', 'falcon', 'flume', 'hive', 'hcat']
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf ams
INFO:HostCleanup:Successfully deleted user: ams
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf ambari-qa
INFO:HostCleanup:Successfully deleted user: ambari-qa
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf yarn
INFO:HostCleanup:Successfully deleted user: yarn
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf mapred
INFO:HostCleanup:Successfully deleted user: mapred
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf tez
INFO:HostCleanup:Successfully deleted user: tez
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf hbase
INFO:HostCleanup:Successfully deleted user: hbase
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf sqoop
INFO:HostCleanup:Successfully deleted user: sqoop
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf oozie
INFO:HostCleanup:Successfully deleted user: oozie
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf falcon
INFO:HostCleanup:Successfully deleted user: falcon
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf flume
INFO:HostCleanup:Successfully deleted user: flume
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf hive
INFO:HostCleanup:Successfully deleted user: hive
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh userdel -rf hcat
INFO:HostCleanup:Successfully deleted user: hcat
INFO:HostCleanup:Executing command: /var/lib/ambari-agent/ambari-sudo.sh groupdel hadoop
WARNING:HostCleanup:Cannot delete group : hadoop, groupdel: cannot remove the primary group of user 'hduser'
INFO:HostCleanup:Path doesn't exists: /home/ams
INFO:HostCleanup:Path doesn't exists: /home/ambari-qa
INFO:HostCleanup:Path doesn't exists: /home/yarn
INFO:HostCleanup:Path doesn't exists: /home/mapred
INFO:HostCleanup:Path doesn't exists: /home/tez
INFO:HostCleanup:Path doesn't exists: /home/hbase
INFO:HostCleanup:Path doesn't exists: /home/sqoop
INFO:HostCleanup:Path doesn't exists: /home/oozie
INFO:HostCleanup:Path doesn't exists: /home/falcon
INFO:HostCleanup:Path doesn't exists: /home/flume
INFO:HostCleanup:Path doesn't exists: /home/hive
INFO:HostCleanup:Path doesn't exists: /home/hcat
INFO:HostCleanup:Deleting file/folder: /tmp/Jetty_0_0_0_0_60010_master____q3nwom
INFO:HostCleanup:Deleting file/folder: /tmp/Jetty_0_0_0_0_34300_mapreduce____.dx7bll
INFO:HostCleanup:Deleting file/folder: /tmp/hbase-hbase
INFO:HostCleanup:Deleting file/folder: /tmp/Jetty_0_0_0_0_34025_mapreduce____jgsznr
INFO:HostCleanup:Deleting file/folder: /tmp/Jetty_centos4_test_com_8088_cluster____.cl0kmf
INFO:HostCleanup:Deleting file/folder: /tmp/Jetty_0_0_0_0_8042_node____19tj0x
INFO:HostCleanup:
Deleting directories: ['']
INFO:HostCleanup:Path doesn't exists:
INFO:HostCleanup:
Deleting repo files: []
INFO:HostCleanup:
Erasing alternatives:{'symlink_list': [''], 'target_list': ['']}
INFO:HostCleanup:Path doesn't exists:
INFO:HostCleanup:Clean-up completed. The output is at /var/lib/ambari-agent/data/hostcleanup.result
... View more
01-28-2018
03:02 PM
The file exists at the given path, so what is the reason for the service restart failure? root@~]# ll /var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py -rwxr-xr-x 1 root root 1108 May 6 2016 /var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py [root@~]#
... View more
01-28-2018
02:59 PM
As part of the upgrade from 2.1 to 2.4.2, services are failing with the error below (/var/lib/ambari-agent/data/output-2791.txt):
2018-01-27 23:35:26,108 - In the middle of a stack upgrade/downgrade for Stack HDP and destination version 2.4.3.0-227, determining which hadoop conf dir to use.
2018-01-27 23:35:26,108 - Hadoop conf dir: /usr/hdp/2.4.3.0-227/hadoop/conf
2018-01-27 23:35:26,108 - The hadoop conf dir /usr/hdp/2.4.3.0-227/hadoop/conf exists, will call conf-select on it for version 2.4.3.0-227
2018-01-27 23:35:26,108 - Checking if need to create versioned conf dir /etc/hadoop/2.4.3.0-227/0
2018-01-27 23:35:26,109 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.3.0-227 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2018-01-27 23:35:26,137 - call returned (1, '/etc/hadoop/2.4.3.0-227/0 exist already', '')
2018-01-27 23:35:26,137 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.3.0-227 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2018-01-27 23:35:26,164 - checked_call returned (0, '')
2018-01-27 23:35:26,165 - Ensuring that hadoop has the correct symlink structure
2018-01-27 23:35:26,165 - Using hadoop conf dir: /usr/hdp/2.4.3.0-227/hadoop/conf
2018-01-27 23:35:26,290 - In the middle of a stack upgrade/downgrade for Stack HDP and destination version 2.4.3.0-227, determining which hadoop conf dir to use.
2018-01-27 23:35:26,290 - Hadoop conf dir: /usr/hdp/2.4.3.0-227/hadoop/conf
2018-01-27 23:35:26,290 - The hadoop conf dir /usr/hdp/2.4.3.0-227/hadoop/conf exists, will call conf-select on it for version 2.4.3.0-227
2018-01-27 23:35:26,290 - Checking if need to create versioned conf dir /etc/hadoop/2.4.3.0-227/0
2018-01-27 23:35:26,290 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.3.0-227 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2018-01-27 23:35:26,318 - call returned (1, '/etc/hadoop/2.4.3.0-227/0 exist already', '')
2018-01-27 23:35:26,318 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.3.0-227 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2018-01-27 23:35:26,346 - checked_call returned (0, '')
2018-01-27 23:35:26,346 - Ensuring that hadoop has the correct symlink structu
... View more
Labels:
01-22-2018
03:55 AM
Now it is showing the upgrade as in progress; it has been running for the last 8+ hours.
... View more
01-22-2018
03:44 AM
For the HDP upgrade from 2.1 to 2.4: 1) Registered the HDP target version via the Ambari API. 2) Only installed the packages, did not finalize the upgrade. 3) After installing the HDP 2.4+ packages, the services will not start; the error is as follows. stderr: /var/lib/ambari-agent/data/errors-3037.txt
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py", line 35, in <module>
BeforeAnyHook().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py", line 29, in hook
setup_users()
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py", line 75, in setup_users
create_tez_am_view_acls()
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py", line 99, in create_tez_am_view_acls
if not params.tez_am_view_acls.startswith("*"):
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py", line 81, in __getattr__
raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'tez.am.view-acls' was not found in configurations dictionary!
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-3037.json', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-3037.json', 'INFO', '/var/lib/ambari-agent/tmp']
stdout: /var/lib/ambari-agent/data/output-3037.txt
2018-01-20 21:30:04,465 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.1.0.0-0001
2018-01-20 21:30:04,466 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2018-01-20 21:30:04,618 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.1.0.0-0001
2018-01-20 21:30:04,619 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2018-01-20 21:30:04,620 - Group['hadoop'] {}
2018-01-20 21:30:04,623 - Group['users'] {}
2018-01-20 21:30:04,623 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-01-20 21:30:04,624 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-01-20 21:30:04,625 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2018-01-20 21:30:04,626 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-01-20 21:30:04,627 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2018-01-20 21:30:04,628 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2018-01-20 21:30:04,628 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2018-01-20 21:30:04,629 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-01-20 21:30:04,630 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-01-20 21:30:04,631 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-01-20 21:30:04,633 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-01-20 21:30:04,633 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-01-20 21:30:04,635 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-01-20 21:30:04,635 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2018-01-20 21:30:04,636 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-01-20 21:30:04,639 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-01-20 21:30:04,646 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2018-01-20 21:30:04,647 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2018-01-20 21:30:04,648 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-01-20 21:30:04,650 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2018-01-20 21:30:04,665 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2018-01-20 21:30:04,665 - Group['hdfs'] {}
2018-01-20 21:30:04,666 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-3037.json', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-3037.json', 'INFO', '/var/lib/ambari-agent/tmp']
4) Verified the current HDP version: it has changed to HDP 2.4, even though HDP 2.1 is still required and the upgrade has not been finalized. Now no service can be started, and the error is as in step 3). Please advise; comments are welcome 🙂
... View more
Labels:
01-21-2018
01:01 PM
After fixing the repository
... View more
01-21-2018
02:47 AM
Issue resolved. The steps below worked for me; after that I was able to register the target version in Ambari. The stack version needs to be updated after upgrading ambari-server from 1.6 to 2.x: https://ambari.apache.org/1.2.3/installing-hadoop-using-ambari/content/ambari-chap9-3.html [root@server ~]# ambari-server upgradestack HDP-2.4
Using python /usr/bin/python
Upgrading stack of ambari-server
Ambari Server 'upgradestack' completed successfully.
[root@server ~]# ambari-server start
Using python /usr/bin/python
Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start....................
Ambari Server 'start' completed successfully.
[root@server ~]#
... View more
01-20-2018
03:08 PM
An internal system exception occurred: Stack HDP-2.4 doesn't have upgrade packages
... View more
Labels: