Member since: 02-01-2019

| Posts | Kudos Received | Solutions |
|---|---|---|
| 650 | 143 | 117 |

My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
|  | 3502 | 04-01-2019 09:53 AM |
|  | 1812 | 04-01-2019 09:34 AM |
|  | 8901 | 01-28-2019 03:50 PM |
|  | 1971 | 11-08-2018 09:26 AM |
|  | 4468 | 11-08-2018 08:55 AM |

07-15-2016 10:00 AM (2 Kudos)
@Kartik Vashishta Tuning the Java heap size depends entirely on your use case. Are you seeing any performance issues with your current heap settings? Here is the Hortonworks recommendation for the NameNode: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_installing_manually_book/content/ref-80953924-1cbf-4655-9953-1e744290a6c3.1.html
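If you do decide to change it, a minimal sketch of what that looks like outside Ambari is below; the 4 GB value is purely a placeholder, and on an Ambari-managed cluster you would instead adjust the NameNode heap setting in the HDFS configs rather than editing files by hand.

```bash
# hadoop-env.sh (sketch): pin the NameNode heap explicitly.
# The -Xms/-Xmx values are placeholders; take real values from the
# Hortonworks sizing recommendation linked above (based on file/block count).
export HADOOP_NAMENODE_OPTS="-Xms4096m -Xmx4096m ${HADOOP_NAMENODE_OPTS}"
```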
						
					
07-15-2016 09:56 AM (2 Kudos)
@ANSARI FAHEEM AHMED From the above I understand that you have enabled both NameNode HA (HDFS) and ResourceManager HA (YARN). So currently:

1) HDFS: the NameNode on nn01 is active and the one on nn02 is standby.
2) YARN: the ResourceManager on nn01 is standby and the one on nn02 is active.

You might be getting confused by the hostnames; this is not a problem, since the two services fail over independently and their active instances do not need to sit on the same host. Let me know if this is what you were looking for.
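If you want to double-check which host is active for each service, a quick sketch from the command line is below; the service IDs nn1, nn2, rm1, rm2 are assumptions, so use the IDs defined in your hdfs-site.xml and yarn-site.xml.

```bash
# NameNode HA state (IDs come from dfs.ha.namenodes.<nameservice>)
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# ResourceManager HA state (IDs come from yarn.resourcemanager.ha.rm-ids)
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
```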
						
					
07-12-2016 01:17 PM (2 Kudos)
@Niraj Parmar Please follow the instructions provided in: http://hortonworks.com/hadoop-tutorial/apache-zeppelin/

Edit: the tutorial above is for installing Zeppelin manually. I don't think Zeppelin can be installed through Ambari in HDP 2.3. Ambari-managed installation is supported in HDP 2.4: http://hortonworks.com/hadoop-tutorial/apache-zeppelin-hdp-2-4/
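As a rough sketch of the manual route once Zeppelin has been downloaded and built per the tutorial above (the /opt/zeppelin path is an assumption; adjust it to wherever you unpacked Zeppelin):

```bash
# Start/stop a manually installed Zeppelin from its install directory.
cd /opt/zeppelin
bin/zeppelin-daemon.sh start    # UI is served on the configured zeppelin.server.port
bin/zeppelin-daemon.sh status
bin/zeppelin-daemon.sh stop
```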
						
					
07-10-2016 07:35 AM
@Mamta Chawla Here is the command to import data from MySQL to the local FS (documentation):

sqoop import -fs local -jt local --connect jdbc:mysql://<host>/sqoop --username <user> --password <password> --table <table-name>
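For illustration, here is the same command with the placeholders filled in with made-up values; the host db01, database sqoop, table customers, and target directory are all hypothetical, and -P (prompt for the password) is used instead of putting the password on the command line.

```bash
# Hypothetical example: run the import locally (-fs local -jt local) so the
# output lands on the local filesystem instead of HDFS.
sqoop import -fs local -jt local \
  --connect jdbc:mysql://db01/sqoop \
  --username sqoop_user -P \
  --table customers \
  --target-dir /tmp/customers_local
```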
						
					
07-07-2016 06:39 PM
@Parameswara Nangi

# Block the legacy hive CLI for every user except ambari-qa
if [ "$SERVICE" = "cli" ] && [ "$USER" != "ambari-qa" ]; then
  echo "Sorry! hive cli has been disabled for security purposes, please use beeline instead."
  exit 1
fi

This keeps hive CLI access for ambari-qa, so the Hive Metastore health check (which Ambari runs as ambari-qa) still passes.
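For context, this check normally lives in the hive-env template, since that file is sourced by the hive launcher script; in an Ambari-managed cluster that is roughly Hive > Configs > Advanced hive-env (treat the exact menu path as an assumption for your Ambari version). A quick way to sanity-check the change from a shell, assuming HiveServer2 listens on its default port 10000:

```bash
# As a user other than ambari-qa, the legacy CLI should now refuse to start:
hive
# expected: "Sorry! hive cli has been disabled ..." followed by exit status 1

# beeline is unaffected (the HiveServer2 host below is a placeholder):
beeline -u "jdbc:hive2://<hiveserver2-host>:10000"
```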
						
					
07-07-2016 06:27 PM
@Neha Jain Use

df.write.format("parquet").partitionBy('...').saveAsTable(...)

or

df.write.format("parquet").partitionBy('...').insertInto(...)
						
					
07-05-2016 06:58 AM
@Venkat Ankam Here is the latest HDP 2.5 sandbox: https://hortonworks.com/tech-preview-hdp-2-5/

The steps to follow are at https://github.com/hortonworks/tutorials/blob/hdp-2.5/tutorials/hortonworks/spark-hbase-a-dataframe-based-hbase-connector/tutorial.md
						
					
07-02-2016 02:32 PM
The user name will be "admin". Before that, you need to reset the password from the command line using: ambari-admin-password-reset
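A minimal sketch of that flow on the sandbox (8080 is the default Ambari port; adjust if yours differs):

```bash
# On the sandbox, reset the Ambari admin password interactively,
# then log in to the Ambari UI as "admin" with the new password.
ambari-admin-password-reset
# Ambari UI: http://<sandbox-ip>:8080
```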
						
					
07-02-2016 12:32 PM
@Praveen K Singh Find the IP of the sandbox using "ifconfig" and use that IP instead of 127.0.0.1 in your browser (launch the browser on your local system).
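For example, something along these lines from the sandbox shell (the exact interface name varies between sandbox versions):

```bash
# Show interface addresses and pick the non-loopback IPv4 address,
# then use that IP in place of 127.0.0.1 in your local browser.
ifconfig | grep "inet "
```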
						
					
07-02-2016 12:23 PM
@Alexander Do we have a similar script to install HBase on a Spark HDInsight cluster?
						
					