Member since 01-19-2017

| Posts | Kudos Received | Solutions |
|---|---|---|
| 3676 | 632 | 372 |
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 472 | 06-04-2025 11:36 PM |
| | 998 | 03-23-2025 05:23 AM |
| | 531 | 03-17-2025 10:18 AM |
| | 1867 | 03-05-2025 01:34 PM |
| | 1240 | 03-03-2025 01:09 PM |
03-10-2016 01:11 PM · 1 Kudo

@Michael Rife Run a checksum against the file; you might have a corrupt file.
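A minimal sketch of that check on Linux. The filenames are placeholders, not from the thread; in practice you would compare against the checksum published alongside the file.

```shell
# Create a sample file, then compute and store its checksum
# (filenames here are placeholders).
printf 'hello\n' > /tmp/example.dat
sha256sum /tmp/example.dat | awk '{print $1}' > /tmp/example.dat.sha256sum

# Verify: recompute and compare against the stored value.
recomputed=$(sha256sum /tmp/example.dat | awk '{print $1}')
stored=$(cat /tmp/example.dat.sha256sum)
[ "$recomputed" = "$stored" ] && echo "checksum OK"
```

With a vendor-supplied checksum file, `sha256sum -c file.sha256` does the comparison in one step.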
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
03-10-2016 09:50 AM · 1 Kudo

@Harshal Joshi An Ambari-managed cluster should be stopped gracefully, just like an Oracle database; a hard reboot is the equivalent of a shutdown abort in Oracle. When you reboot your cluster, it is advisable to start the components manually in order: Ambari server, then HDFS, then YARN. Otherwise, have a look at this link.
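The start order above can be sketched as a small guard script. This is purely illustrative: these are not real Ambari commands, nothing here touches a cluster, and only the service names mirror the post.

```shell
# Toy sketch of the start-order rule: refuse to start a service before
# its prerequisite is up. State is tracked in a temp file.
state=/tmp/cluster-state
: > "$state"

start() {
  svc=$1; prereq=$2
  # "-" means no prerequisite.
  if [ "$prereq" != "-" ] && ! grep -qx "$prereq" "$state"; then
    echo "refusing to start $svc: $prereq is not up"
    return 1
  fi
  echo "$svc" >> "$state"
  echo "started $svc"
}

start "ambari-server" -
start "HDFS" "ambari-server"
start "YARN" "HDFS"
```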
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
03-10-2016 05:29 AM · 1 Kudo

@Prakash Punj Sorry, what you have done on the new host is not enough. There are more important steps that you should never skip, assuming your corporate network rules allow the new host and the Ambari server to communicate (no blocking proxy or firewall rules, etc.):

- Assuming you are on Linux, both hosts should have FQDN entries in /etc/hosts.
- Configure a passwordless SSH connection between the two hosts for the user doing the install.

Just have a look at the attached document, and again, skip nothing: every step has to be implemented successfully for the new host to join your cluster. PS: remember to clean up the failed registration too.
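A sketch of those two prep steps. The hostnames and IPs are placeholders, and the /etc/hosts edit is shown against a temp copy so it is safe to run; the SSH commands need the real hosts, so they appear as comments only.

```shell
# FQDN entries as they would appear in /etc/hosts on BOTH hosts
# (written to a temp copy here; names and IPs are placeholders).
hosts=/tmp/hosts.example
printf '%s\n' \
  '192.168.1.10 ambari.example.com ambari' \
  '192.168.1.11 newnode.example.com newnode' > "$hosts"

# Passwordless SSH for the installing user (run on the Ambari host):
#   ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
#   ssh-copy-id root@newnode.example.com

# Quick sanity check that the entry is present:
grep -q 'newnode.example.com' "$hosts" && echo "hosts entry present"
```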
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
03-09-2016 03:46 PM · 1 Kudo

@Prakash Punj

What's the purpose of a RegionServer? Read this. Where should it be located? On every DataNode?
You run RegionServers on the same servers as DataNodes.

What's the purpose of the HBase Master?
HBase provides low-latency random reads and writes on top of HDFS, and it is able to handle petabytes of data. One of its interesting capabilities is auto-sharding, which simply means that tables are dynamically distributed by the system when they become too large. The HBase architecture has two main services: the HMaster, responsible for coordinating Regions in the cluster and executing administrative operations, and the HRegionServer, responsible for handling a subset of a table's data.

HBase is a NoSQL database. What does it store?
HBase is a distributed, nonrelational (columnar) database that uses HDFS as its persistence store for data.

Hope that answers you.
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
03-08-2016 07:51 PM

@Vincent McGarry

Solution 1
Check that the daemons have write privileges on the log directory, then stop and start the NameNode and DataNode daemons in debug mode.

Solution 2
You need to do something like this:
bin/stop-all.sh (or stop-dfs.sh and stop-yarn.sh in the 2.x series)
rm -Rf /app/tmp/hadoop-your-username/*
bin/hadoop namenode -format (or hdfs in the 2.x series)

Solution 3
On the NameNode host:
/usr/local/hadoop/sbin/hadoop-daemon.sh stop namenode ; hadoop namenode
On the DataNode host:
/usr/local/hadoop/sbin/hadoop-daemon.sh stop datanode ; hadoop datanode
Check the log messages from both daemons.
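For the debug-mode restart in Solution 1, one common approach is to raise the log level through Hadoop's standard logging variable before restarting the daemon. The daemon paths below are the same assumptions as in Solution 3 and are shown as comments since they need a live cluster.

```shell
# Raise Hadoop's log level for daemons started from this shell.
# HADOOP_ROOT_LOGGER is the standard Hadoop logging knob.
export HADOOP_ROOT_LOGGER=DEBUG,console
echo "logger set to: $HADOOP_ROOT_LOGGER"

# Then restart the daemon so it picks the level up:
#   /usr/local/hadoop/sbin/hadoop-daemon.sh stop namenode
#   /usr/local/hadoop/sbin/hadoop-daemon.sh start namenode
```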
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
03-06-2016 09:35 PM

@Robin Dong You pick the database vendor when you launch ambari-server setup. Please see the attached doc (reinstall-the-mysql.pdf), and do the reinstall first.
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
03-06-2016 09:29 PM · 1 Kudo

@Robin Dong By default, Ambari uses an Ambari-installed Derby instance, which is not robust enough for production or HA setups. The recommended databases are:

- PostgreSQL 8 and above
- MySQL 5.6
- Oracle 11gR2, 12c

See "Configuring non-default databases" and the further docs.
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
03-04-2016 08:15 AM · 1 Kudo

@Roberto Sancho Can you upload the contents of the Ambari Server log, found at /var/log/ambari-server/ambari-server.log? The Ambari Agent logs are found at /var/log/ambari-agent/ambari-agent.log.
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
03-02-2016 10:01 PM · 1 Kudo

@Mark Thorson

1. Start Hive. On the Hive Metastore host machine, run the following commands, using nohup and & to run the process in the background:

su - hive
nohup /usr/hdp/current/hive-metastore/bin/hive --service metastore > /var/log/hive/hive.out 2> /var/log/hive/hive.log &

2. Start HiveServer2. On the HiveServer2 host machine, run:

su - hive
nohup /usr/hdp/current/hive-server2/bin/hiveserver2 > /var/log/hive/hiveserver2.out 2> /var/log/hive/hiveserver2.log &

where hive is the Hive service user ($HIVE_USER).
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
03-02-2016 08:19 PM · 1 Kudo

@Mark Thorson What's the value of fs.defaultFS in core-site.xml? Can you restart the metastore and retry?

$ service hive-metastore restart
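One way to pull that value out quickly. The XML below is a made-up sample written to a temp file so the snippet is self-contained; on a real node the file is typically /etc/hadoop/conf/core-site.xml, and the NameNode URI is a placeholder.

```shell
# Write a sample core-site.xml (placeholder value) to a temp file.
cat > /tmp/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>
EOF

# Extract the fs.defaultFS value: grab the line after the <name> tag
# and pull out its <value> element.
grep -A1 '<name>fs.defaultFS</name>' /tmp/core-site.xml \
  | grep -o '<value>[^<]*</value>'
```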