Member since 01-21-2015
      
- 16 Posts
- 1 Kudos Received
- 1 Solution
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 73850 | 01-25-2015 08:28 PM |
10-04-2015 01:53 AM
Hi Team,

In my cluster I am facing an issue with cached memory on the Hadoop client server. Hadoop clients such as Oozie, Hue, Hive, and Impala run on this machine (Flume and HBase are not configured on this server), and my CM services run on the same machine as well.

Looking at the memory, the cache is highly utilized, and it is not released even if I stop all my Hadoop and CM processes. Why is the cache not flushed automatically, and how can I identify whether there is a memory leak?

Memory in GB:

| | total | used | free | shared | buffers | cached |
|---|---|---|---|---|---|---|
| Mem | 94 | 77 | 16 | 0 | 0 | 59 |
| -/+ buffers/cache | | 17 | 76 | | | |
| Swap | 7 | 0 | 7 | | | |

Is this expected? The reason I posted this query is that my application hangs after some stage, and I suspect this is due to memory unavailability.

Thanks,
Rathish A M
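For what it's worth, the -/+ buffers/cache row above already shows only 17 GB genuinely used by processes: Linux keeps recently read files (including HDFS blocks) in the page cache, counts them as "cached", and reclaims that memory on demand, so a large cached figure is normal rather than a leak. A minimal sketch to confirm the cache is reclaimable, assuming root access on a standard Linux host:

```bash
# Inspect memory; the "-/+ buffers/cache" row shows real application usage.
free -g

# The kernel frees page cache on demand, but it can also be dropped
# manually to prove the memory is not leaked (requires root):
sync                                  # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches     # drop page cache, dentries and inodes
free -g                               # "cached" should shrink, "free" grow
```

If free memory does not recover after dropping caches, per-process resident memory (RES in top) is where to look for an actual leak; an application hanging with 16 GB free and 76 GB reclaimable is unlikely to be starved of memory.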
						
					
01-26-2015 10:08 PM
Thanks, Gautham, for your valuable comments.
						
					
01-25-2015 08:28 PM
Hi Gautham,

There was an issue logged in my log file:

WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid dfs.datanode.data.dir /hdfs/data

I noticed that it was due to a permission issue on that folder. (My /hdfs/data directory was owned by root; initially I gave complete permissions on that folder to all users and groups, which didn't work.)

Based on the link below, I executed the following commands and now it is working fine. Thanks for all your support.
http://solaimurugan.blogspot.in/2013/10/hadoop-multi-node-cluster-configuration.html

sudo chown -R hdfs:hadoop /hdfs/data
sudo chmod -R 777 /hdfs/data
hadoop namenode -format

Thanks,
Rath
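Two caveats apply to the fix above. chmod 777 is wider than the datanode needs: HDFS validates each dfs.datanode.data.dir against dfs.datanode.data.dir.perm, which defaults to 700 on Hadoop 2, and can reject directories whose permissions do not match. And hadoop namenode -format erases existing namenode metadata, so it is only safe on a fresh cluster. A tighter variant of the same fix, using the path and ownership from the post above:

```bash
# Ownership and path are taken from the post above; 700 matches the
# dfs.datanode.data.dir.perm default on Hadoop 2 (verify for your version).
sudo chown -R hdfs:hadoop /hdfs/data
sudo chmod 700 /hdfs/data
# "hadoop namenode -format" wipes namenode metadata; skip it on a cluster
# that already holds data.
```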
						
					
01-25-2015 07:40 PM
Hi Gautham,

Thanks for the reply.

I am looking into the log file below; the details follow. I am not able to figure out from the logs why this is happening. I installed Hadoop (namenode and datanode) as the root user, and I am starting the daemons as root. Is there any issue with that?

Log details:

cat /var/log/hadoop-hdfs/hadoop-hdfs-datanode-data-node-01.out
cat: cat: No such file or directory
ulimit -a for user hdfs
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 256725
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 32768
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

Thanks,
Rath
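One thing to note about the paste above: the .out file only carries the ulimit banner that the startup script prints, so it rarely explains a failure; the stack trace usually lands in the companion .log file. Starting the service as root is itself fine, because the init script switches to the hdfs user, but that is also why root-owned data directories trigger permission errors. A quick look at the log, assuming the same path with a .log suffix:

```bash
# The .out file holds the ulimit banner; the real error is in the .log file.
# Path mirrors the .out path quoted above (adjust the host suffix as needed).
tail -n 50 /var/log/hadoop-hdfs/hadoop-hdfs-datanode-data-node-01.log

# Or scan for the failure directly:
grep -E 'ERROR|FATAL|Exception' /var/log/hadoop-hdfs/hadoop-hdfs-datanode-*.log | tail
```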
						
					
01-23-2015 08:34 AM
Hi Team,

I am able to start the namenode now. The issue was with permissions on the dfs.namenode.name.dir folder (configured in hdfs-site.xml). I had started the namenode as the root user; I gave chmod 777 to that particular folder and it started working fine.

Now I am facing the same issue while starting the datanode... Any thoughts on this?

Thanks,
Rath
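Given that the daemons run as the hdfs user, the datanode failure is very likely the same ownership problem, just on dfs.datanode.data.dir instead. A minimal check, using the /hdfs/data path from the accepted solution above (substitute whatever directory hdfs-site.xml actually configures):

```bash
# Confirm the hdfs user can write to the datanode data directory;
# /hdfs/data is the path used elsewhere in this thread.
ls -ld /hdfs/data
sudo -u hdfs touch /hdfs/data/.write_test && sudo -u hdfs rm /hdfs/data/.write_test
```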
						
					
01-23-2015 07:10 AM
Hi Team,

I am not able to start the namenode. Am I missing anything in the config file settings, or is there a port issue?

service hadoop-hdfs-namenode start
starting namenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-namenode-nn-node-01.out
Failed to start Hadoop namenode. Return value: 1 [FAILED]

cat /var/log/hadoop-hdfs/hadoop-hdfs-namenode-nn-node-01.out
ulimit -a for user hdfs
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 256725
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 32768
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

Thanks,
Rath
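Since the .out file above only prints ulimits, the concrete failure reason has to come from the matching .log file or from the permissions on the metadata directory. One way to find and inspect the configured dfs.namenode.name.dir, using the stock hdfs getconf utility (the file:// stripping is a sketch that assumes a single local path value):

```bash
# Read the configured namenode metadata directory from the live config,
# then check that the hdfs user owns it. Assumes one local path value.
NN_DIR=$(hdfs getconf -confKey dfs.namenode.name.dir | sed 's|^file://||')
ls -ld "$NN_DIR"
```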
						
					
Labels:
- Apache Hadoop
- HDFS