Member since: 09-15-2015

Posts: 294
Kudos Received: 764
Solutions: 81

My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
| | 2125 | 07-27-2017 04:12 PM |
| | 5420 | 06-29-2017 10:50 PM |
| | 2596 | 06-21-2017 06:29 PM |
| | 3148 | 06-20-2017 06:22 PM |
| | 2772 | 06-16-2017 06:46 PM |

02-28-2017 12:00 AM | 3 Kudos

I think you need to create a symlink rather than copying the files (see https://www.mail-archive.com/dev@ambari.apache.org/msg60487.html):

    ln -s /usr/hdp/2.5.0.0-1245/zookeeper /usr/hdp/current/zookeeper-server
						
					

02-27-2017 05:18 AM | 1 Kudo

Can you check whether the fix in this thread works for you: https://community.hortonworks.com/questions/31086/last-step-of-ambari-hdp-installation-fails-for-zoo.html
						
					

02-26-2017 10:53 PM | 1 Kudo

On my current working cluster, the output looks like this:

    [root@mycluster ~]# ls -l /usr/hdp/2.5.0.0-1245/zookeeper/conf
    lrwxrwxrwx. 1 root root 28 Feb 26 01:52 /usr/hdp/2.5.0.0-1245/zookeeper/conf -> /etc/zookeeper/2.5.0.0-1245/0
    [root@mycluster ~]#

So the output should not contain the other folders you are seeing. Also, the link you have, which looks like

    0 -> /etc/zookeeper/2.5.0.0-1245/0

does not seem to be correct.
						
					

02-26-2017 10:20 PM | 1 Kudo

This looks like a symlink issue. Can you post the output of:

    ls -l /usr/hdp/2.5.0.0-1245/zookeeper/conf

It should show permissions like lrwxrwxrwx. 1 root root. Also, to be safe, run the ln command on all the hosts manually once (a sketch of one way to do that follows below) and make sure the permissions are correct.
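A minimal sketch of running that ln command across every host from a single node, assuming passwordless SSH as root and a hypothetical hosts.txt file listing one cluster hostname per line:

```bash
# Recreate the zookeeper-server symlink on each host and echo the result back.
# hosts.txt is a hypothetical file with one cluster hostname per line.
while read -r host; do
  ssh "root@${host}" \
    'ln -sfn /usr/hdp/2.5.0.0-1245/zookeeper /usr/hdp/current/zookeeper-server &&
     ls -l /usr/hdp/current/zookeeper-server'
done < hosts.txt
```

The -sfn flags replace an existing link in place instead of failing or nesting a new link inside a linked directory.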
						
					

02-26-2017 02:20 AM | 2 Kudos

HDFS metadata consists of two parts:

- The base filesystem table, stored in a file called fsimage.
- The edit logs, which list changes made to the base table and are stored in files called edits.

Checkpointing is the process of reconciling fsimage with edits to produce a new version of fsimage. It has two benefits: a more recent fsimage and a truncated edit log.

The following properties control how often checkpointing happens:

- dfs.namenode.checkpoint.period - The number of seconds between two periodic checkpoints; fsimage is updated and the edit log truncated. Checkpointing is not cheap, so there is a balance between running it too often and letting the edit log grow too large. Set this to get a good balance for the typical filesystem use in your cluster.
- dfs.namenode.checkpoint.edits.dir - Determines where on the local filesystem the secondary NameNode should store the temporary edits to merge. If this is a comma-delimited list of directories, the edits are replicated in all of them for redundancy. The default value is the same as dfs.namenode.checkpoint.dir.
- dfs.namenode.checkpoint.txns - The secondary NameNode or CheckpointNode creates a checkpoint of the namespace every dfs.namenode.checkpoint.txns transactions, regardless of whether dfs.namenode.checkpoint.period has expired.
- dfs.ha.standby.checkpoints - If true, a NameNode in Standby state periodically takes a checkpoint of the namespace, saves it to its local storage, and then uploads it to the remote NameNode.

Also, if you would like to checkpoint manually, you can follow https://community.hortonworks.com/content/supportkb/49438/how-to-manually-checkpoint.html; a rough sketch of the commands is shown below.
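A minimal sketch of what a manual checkpoint looks like from the command line, assuming you run it as the HDFS superuser on a node with the HDFS client configured; treat it as an outline and follow the linked article for your exact version:

```bash
# Inspect the current checkpoint settings as resolved from the site configuration.
hdfs getconf -confKey dfs.namenode.checkpoint.period
hdfs getconf -confKey dfs.namenode.checkpoint.txns

# Force a checkpoint: enter safe mode, save the namespace (merging edits into a
# new fsimage), then leave safe mode so normal writes can resume.
hdfs dfsadmin -safemode enter
hdfs dfsadmin -saveNamespace
hdfs dfsadmin -safemode leave
```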
						
					

02-26-2017 01:45 AM | 12 Kudos

@Aruna Sameera It all depends on whether you set up the cluster manually or with Ambari.

If you set it up manually, you need to start all the services on the cluster in this order: https://community.hortonworks.com/questions/41253/whats-the-best-order-of-startingstoping-hdfs-servi.html#answer-42783

If you used Ambari to set up the cluster and the Ambari version is 2.4.x or newer, you can use the auto-start services feature as mentioned above. Otherwise, you can follow the order described here: https://community.hortonworks.com/questions/10316/what-is-best-way-to-reboot-machines-in-the-hadoop.html
    
						
					

02-24-2017 12:05 AM | 1 Kudo

Great to hear that you resolved the issue.
						
					

02-23-2017 10:58 PM | 1 Kudo

As mentioned in the answers above, the issue is a BindException, i.e. some other process is already listening on port 50070.

You can either kill the process using port 50070, or modify the dfs.namenode.http-address property in hdfs-site.xml. Set the port to one that is not in use on the machine, such as 20070, then restart HDFS using Ambari. A quick way to find the conflicting process is sketched below.

Let me know if this helps.
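A minimal sketch of locating and freeing the port, assuming a Linux host with netstat installed; the PID 12345 is a placeholder for whatever process the first command reports:

```bash
# See which process is listening on 50070, the default NameNode HTTP port.
netstat -tlnp | grep ':50070'

# If that process is expendable, stop it (12345 stands in for the reported PID).
kill 12345

# Otherwise, check what dfs.namenode.http-address currently resolves to before
# changing the port in hdfs-site.xml through Ambari and restarting HDFS.
hdfs getconf -confKey dfs.namenode.http-address
```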
						
					

02-23-2017 10:39 PM | 4 Kudos

Running distcp against encrypted files will not work because of a checksum mismatch. The reason is as follows: each file within an encryption zone has its own encryption key, called the Data Encryption Key (DEK). These DEKs are encrypted with their respective encryption zone's EZ key to form an Encrypted Data Encryption Key (EDEK). EDEKs are stored persistently on the NameNode as part of each file's metadata, using HDFS extended attributes. So the raw contents of the source and target files differ, and hence their checksums do not match.

This can, however, be worked around by running distcp without the checksum check:

    hadoop distcp -skipcrccheck -update src dest

Let me know if this helps.
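For illustration, here is what the full command might look like when copying between two clusters; nn1, nn2, and the /secure/data path are made-up names, and -update assumes you want already-copied files refreshed rather than duplicated:

```bash
# Copy an encrypted directory from one cluster to another, skipping the CRC
# comparison that fails across encryption zones and copying only changed files.
hadoop distcp -skipcrccheck -update \
  hdfs://nn1:8020/secure/data \
  hdfs://nn2:8020/secure/data
```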
						
					