Member since: 08-08-2015
Posts: 8
Kudos Received: 0
Solutions: 0
04-12-2018
03:32 PM
When the environment was initially built, about 327.33 GB of the total 1 TB disk capacity was already in use, so the HDFS dfsadmin report showed non-DFS usage as 327.33 GB. After cleaning up 300 GB of data from the local filesystem, the dfsadmin report still shows non-DFS usage as 327.33 GB, while the reserved disk space is 10 GB. How can I get the non-DFS utilisation refreshed after cleaning up local files on the Linux filesystem?

hdfs dfsadmin -report

Name: 20.21.208.21:2004 (hpc123.xyz.com)
Hostname: hpc123.xyz.com
Rack: /Row7/Rack2
Decommission Status : Normal
Configured Capacity: 1154570731520 (1.05 TB)
DFS Used: 449671168 (428.84 MB)
Non DFS Used: 351465279488 (327.33 GB)
DFS Remaining: 802655780864 (747.53 GB)
DFS Used%: 0.04%
DFS Remaining%: 69.52%
Configured Cache Capacity: 4294967296 (4 GB)
Cache Used: 0 (0 B)
Cache Remaining: 4294967296 (4 GB)
Cache Used%: 0.00%
Cache Remaining%: 100.00%
Xceivers: 2
Last contact: Thu Apr 12 08:18:16 PDT 2018
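For reference, Non DFS Used is not measured directly; it is derived from the other report fields, so it should drop once the DataNode re-reads disk usage from the OS. A minimal diagnostic sketch in shell, where /data/dfs/dn stands in for a hypothetical dfs.datanode.data.dir (substitute your own path):

# Non DFS Used is derived, not tracked:
#   Non DFS Used = Configured Capacity - DFS Used - DFS Remaining
# Checking the arithmetic against the report above:
echo $((1154570731520 - 449671168 - 802655780864))   # -> 351465279488 = 327.33 GB

# Compare what the OS now sees on the data disk with what HDFS reports
df -h /data/dfs/dn
hdfs dfsadmin -report | grep -A 3 'Non DFS Used'

# The DataNode refreshes its du/df view only periodically (see fs.du.interval
# and fs.df.interval in core-default.xml); if the report stays stale long
# after the clean-up, restarting the DataNode forces a fresh scan.

If the freed 300 GB lives outside the DataNode data directories, the figure should still fall, since the df-based remaining space grows and the derived non-DFS number shrinks with it.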
Labels:
- Apache Hadoop
03-23-2018
03:57 PM
Thank you... I shall make the required changes and keep a watch on the same.
03-23-2018
06:49 AM
The YARN ResourceManager halts with an OOM "unable to create new native thread" error, and the job fails over to the standby ResourceManager, which completes the task. How can I get this resolved?

Error message:

2018-03-22 02:30:09,637 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_e189_1521451854044_2288_01_000002 Container Transitioned from ALLOCATED to ACQUIRED
2018-03-22 02:30:10,413 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_e189_1521451854044_2288_01_000002 Container Transitioned from ACQUIRED to RUNNING
2018-03-22 02:30:10,695 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: checking for deactivate...
2018-03-22 02:30:19,354 INFO org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: hue is accessing unchecked http://server1:43045/ws/v1/mapreduce/jobs/job_1521451854044_2288 which is the app master GUI of application_1521451854044_2288 owned by edh_srv_prod
2018-03-22 02:30:30,212 INFO org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: hue is accessing unchecked http://server1:43045/ws/v1/mapreduce/jobs/job_1521451854044_2288 which is the app master GUI of application_1521451854044_2288 owned by edh_srv_prod
2018-03-22 02:30:34,090 FATAL org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread Thread[2101925946@qtp-1878992188-14302,5,main] threw an Error. Shutting down now...
java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:714)
    at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1095)
    at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
    at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403)
    at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387)
    at org.mortbay.jetty.security.SslSocketConnector$SslConnection.run(SslSocketConnector.java:723)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
2018-03-22 02:30:34,093 INFO org.apache.hadoop.util.ExitUtil: Halt with status -1 Message: HaltException

yarn application -status application_1521451854044_2288

Application Report :
    Application-Id : application_1521451854044_2288
    Application-Name : oozie:launcher:T=shell:W=OS_Changes_incremental_workflow:A=shell-b8b2:ID=0006766-180222181315002-oozie-oozi-W
    Application-Type : MAPREDUCE
    User : edh_srv_prod
    Queue : root.edh_srv_prod
    Start-Time : 1521710999557
    Finish-Time : 1521711593154
    Progress : 100%
    State : FINISHED
    Final-State : SUCCEEDED
    Tracking-URL : https://server1:19890/jobhistory/job/job_1521451854044_2288
    RPC Port : 40930
    AM Host : server3
    Aggregate Resource Allocation : 1809548 MB-seconds, 1181 vcore-seconds
    Log Aggregation Status : SUCCEEDED
    Diagnostics : Attempt recovered after RM restart
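Note that "unable to create new native thread" usually means the OS refused to spawn a thread, typically because the user running the ResourceManager hit its process/thread limit (ulimit -u / nproc) or the host ran out of native memory; it is not a JVM heap problem. A diagnostic sketch in shell, assuming the RM runs as the yarn user (the pgrep pattern and user name are assumptions; adjust to your environment):

# Thread count inside the ResourceManager JVM
RM_PID=$(pgrep -f ResourceManager | head -1)
ps -o nlwp= -p "$RM_PID"

# Per-user process/thread limit for the yarn user
su -s /bin/bash yarn -c 'ulimit -u'

# Total threads currently owned by the yarn user on this host
ps -eLf | awk '$1=="yarn"' | wc -l

# If the limit is low (e.g. the common 1024 default), raise it in
# /etc/security/limits.conf (or a limits.d drop-in) and restart the RM:
#   yarn  soft  nproc  65536
#   yarn  hard  nproc  65536

Comparing the thread count against the ulimit at the time of the crash usually confirms or rules out this cause quickly.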
Labels:
- Apache Hadoop
- Apache YARN
12-10-2015
09:36 PM
Under what circumstances could a MapReduce job fail or be terminated when one of the DataNodes goes down?
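As background, a single DataNode failure is normally absorbed by block replication and task retries; the job tends to fail only when the lost node held the sole replica of a needed block (replication factor 1), or when task attempts exhaust their retry budget. A quick shell sketch for checking the usual suspects, where /user/foo/input is a hypothetical input path:

# Replication factor of each input file; a lost DataNode is fatal only
# if it held the last replica
hdfs dfs -stat '%r %n' /user/foo/input/*

# Look for missing or under-replicated blocks under the input path
hdfs fsck /user/foo/input -files -blocks -locations | grep -iE 'under.replicated|missing'

# Task retry budget; a task that fails this many times fails the job
# (mapreduce.map.maxattempts / mapreduce.reduce.maxattempts, default 4)
grep -E 'mapreduce\.(map|reduce)\.maxattempts' /etc/hadoop/conf/mapred-site.xml

A related edge case: if the failed host was also running the MapReduce ApplicationMaster, the application fails once AM restarts exceed yarn.resourcemanager.am.max-attempts.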
Labels:
- Apache Hadoop