Member since: 06-06-2016
Posts: 185
Kudos Received: 12
Solutions: 2

My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2854 | 07-20-2016 07:47 AM |
 | 2367 | 07-12-2016 12:59 PM |
08-16-2016 02:26 PM
@Sagar Shimpi Thank you. I will try to tune it and let you know the result.
08-16-2016 02:15 PM
2 Kudos
[Attachment: yarn-cacity-scheduler.png] Hi, I am using HDP 2.1.2 on a 7-node PROD cluster (5 data nodes and 2 name nodes). The name nodes have 32 cores and 256 GB RAM, and the data nodes have 24 cores and 125 GB RAM.
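For context, a rough back-of-the-envelope YARN sizing for one of these 24-core / 125 GB data nodes (a sketch only; the 24 GB OS/service reservation and the two-containers-per-core rule of thumb are assumptions, not values taken from this cluster):

CORES=24
RAM_GB=125
RESERVED_GB=24                                 # assumed headroom for OS and other services
CONTAINERS=$((2 * CORES))                      # rule of thumb: up to 2 containers per core
AVAIL_MB=$(( (RAM_GB - RESERVED_GB) * 1024 ))  # memory YARN may hand out per node
CONTAINER_MB=$(( AVAIL_MB / CONTAINERS ))      # per-container minimum allocation
echo "yarn.nodemanager.resource.memory-mb  = $AVAIL_MB"      # 103424
echo "yarn.scheduler.minimum-allocation-mb = $CONTAINER_MB"  # 2154
echo "yarn.scheduler.maximum-allocation-mb = $AVAIL_MB"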
Labels:
- Apache YARN
08-16-2016 02:04 AM
@mqureshi Nice! I got the answer, thank you so much.
08-15-2016 05:21 PM
@mqureshi Thanks for your time. I am new to this Hadoop environment and I just want to know whether I am using commodity or appliance hardware. How can I tell which one mine is? What is commodity hardware, what is appliance hardware, and what is the difference between them?
08-15-2016 04:27 PM
Is my hardware commodity or appliance, and how can I tell which one my cluster is? I am using HDP 2.1.2 with support from TERADATA, with DEV and PROD clusters (130 TB). The PROD cluster has 4 data nodes (24 cores + 125 GB RAM), 2 name nodes (32 cores + 300 GB RAM), and 1 edge node. What other hardware configuration do I need to verify?
08-05-2016 07:18 PM
@Jitendra Yadav Thank you. I will let you know the result after restarting the Ambari server.
08-05-2016 12:04 PM
@Jitendra Yadav Thank you for the quick response, Jitendra. I could not see any matching versions. I am using HDP 2.1 with Ambari 1.6; can you suggest any alternative solution for this issue?
08-05-2016 11:48 AM
@Vinod Bonthu Thanks, Vinod, but the restart happened 20 days ago and Ambari still shows that critical warning and 109 corrupted blocks.
08-05-2016 11:37 AM
I have a 4-node cluster with NameNode HA. When I restarted two data nodes, I got 100+ corrupted blocks (shown in Ambari > HDFS > Services), and my replication factor is 3, so I want to delete these corrupted blocks. Before deleting them I want to check which files are corrupted, so I ran the command below and got the output below, but it does not show any corrupted blocks or files. Please help me; I have had this issue for a long time.

P1-230-8:/root> hdfs fsck / | egrep -v '^\.+$'
Connecting to namenode via http://stlpr8711:50070
FSCK started by hdfs (auth:SIMPLE) from /39.6.64.8 for path / at Fri Aug 05 06:27:52 CDT 2016
Status: HEALTHY
 Total size:    39189208876540 B (Total open files size: 83162 B)
 Total dirs:    710383
 Total files:   2184527
 Total symlinks: 0 (Files currently being written: 7)
 Total blocks (validated): 2309173 (avg. block size 16971101 B) (Total open file blocks (not validated): 7)
 Corrupt blocks: 0
 Number of data-nodes: 4
 Number of racks: 1
FSCK ended at Fri Aug 05 06:28:40 CDT 2016 in 47634 milliseconds

The filesystem under path '/' is HEALTHY
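For what it is worth, fsck can report corrupt files directly instead of being filtered with egrep; a minimal sketch (run as the hdfs user; the file path below is a placeholder):

# List only the files that have missing or corrupt blocks:
hdfs fsck / -list-corruptfileblocks
# Inspect block placement for one suspect file:
hdfs fsck /path/to/suspect/file -files -blocks -locations
# Only after confirming the files are expendable, delete the corrupted ones:
hdfs fsck / -delete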
08-05-2016 09:59 AM
@Michael Young Thank you, that makes sense. I am deleting my old files manually for now. Can you suggest a script which automatically deletes old files in the Hadoop /tmp directory? I know there is such a script for Linux tmp files; is there a similar one for HDFS tmp files? One last thing from my side: I could not see the property below in hive-site.xml in HDP 2.1.2.
<property>
<name>hive.exec.scratchdir</name>
<value>/tmp/mydir</value>
<description>Scratch space for Hive jobs</description>
</property>
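A minimal sketch of such an HDFS /tmp cleanup, in case it helps (the 30-day cutoff is an assumption; it relies on GNU date and the usual hdfs dfs -ls column layout):

#!/bin/bash
# Remove HDFS /tmp entries whose modification time is older than MAX_AGE_DAYS.
MAX_AGE_DAYS=30
CUTOFF=$(date -d "-${MAX_AGE_DAYS} days" +%s)
hdfs dfs -ls /tmp | tail -n +2 | while read -r perms repl owner group size day time path; do
  [ -z "$path" ] && continue
  entry_ts=$(date -d "$day $time" +%s)   # ls prints e.g. 2016-08-05 09:59
  if [ "$entry_ts" -lt "$CUTOFF" ]; then
    echo "Removing $path (last modified $day $time)"
    hdfs dfs -rm -r -skipTrash "$path"
  fi
done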