Member since 06-24-2016

111 Posts
8 Kudos Received
0 Solutions
06-29-2017 08:21 AM

Try a command like this: spark-shell --jars /app/spark/a.jar,/app/spark/b.jar
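For reference, a minimal sketch of the same idea (a.jar and b.jar are placeholders; the spark-submit class and app jar below are hypothetical):

# Multiple jars are passed as one comma-separated list, with no spaces around the commas.
spark-shell --jars /app/spark/a.jar,/app/spark/b.jar

# The same --jars syntax works for spark-submit.
spark-submit --class com.example.Main --jars /app/spark/a.jar,/app/spark/b.jar /app/spark/my-app.jar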
06-29-2017 06:41 AM

Is the Hadoop distribution version the same on both the CentOS and Ubuntu machines?
06-22-2017 02:07 AM

The best way to connect a client server to the Hadoop cluster is to register the client server with ambari-server. If the client server does not run the same OS version as the ambari-server host, then you should set up the same version of the Hadoop libraries and the cluster's config files on the client server. To handle HA services such as NameNode, ResourceManager, Hive, etc. easily, I'd recommend using the ZooKeeper Curator framework. A rough sketch of the client-side setup follows below.
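A rough sketch of that client-side setup, assuming the packages come from the same HDP repo version as the cluster (the package names, paths, and master1.hadoop.com are illustrative):

# 1. Install the same version of the client packages (yum on CentOS, apt-get on Ubuntu).
yum install -y hadoop-client hive

# 2. Copy the cluster's config files so the client resolves the HA nameservice and ResourceManager IDs.
scp master1.hadoop.com:/etc/hadoop/conf/core-site.xml /etc/hadoop/conf/
scp master1.hadoop.com:/etc/hadoop/conf/hdfs-site.xml /etc/hadoop/conf/
scp master1.hadoop.com:/etc/hadoop/conf/yarn-site.xml /etc/hadoop/conf/

# 3. Quick smoke test from the client.
hdfs dfs -ls /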
06-21-2017 07:08 AM

Thanks, prsingh. I think the curl command is the correct way to cleanly delete the old FQDN list from HST, and it works fine.
06-21-2017 06:47 AM

Hmm, it's not working. I ran that procedure and got the same issue: the duplicate uppercase and lowercase FQDN names still exist. These are the steps I ran (a sketch follows below):

1. Back up the Ambari DB server.
2. Stop all services.
3. Stop ambari-server.
4. Stop all ambari-agents.
5. ambari-server update-host-names new_hosts.json

After update-host-names completed successfully, I still got the same return values from hst list-agents.
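For reference, a sketch of that rename procedure (the cluster name "MyCluster" is an assumption; the host pairs come from the example in this thread):

# Back up the Ambari DB and stop all cluster services from Ambari before this point.
ambari-server stop
for h in master1 master2 slave1 slave2 slave3; do ssh "$h" 'ambari-agent stop'; done

# new_hosts.json maps old FQDNs to new ones, keyed by cluster name.
cat > new_hosts.json <<'EOF'
{
  "MyCluster" : {
    "MASTER1.hadoop.com" : "master1.hadoop.com",
    "MASTER2.hadoop.com" : "master2.hadoop.com",
    "SLAVE1.hadoop.com" : "slave1.hadoop.com",
    "SLAVE2.hadoop.com" : "slave2.hadoop.com",
    "SLAVE3.hadoop.com" : "slave3.hadoop.com"
  }
}
EOF

ambari-server update-host-names new_hosts.json
ambari-server start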
06-21-2017 02:05 AM

Is there a clean way to remove only the old FQDNs from HST? Because, as far as I know, the Hadoop architecture itself handles uppercase and lowercase FQDN names without any issues.
06-21-2017 01:03 AM

HDP 2.4.3, SmartSense 1.3.1.0-136

For example, I have 5 nodes and initially set up the cluster like this:

/etc/hosts
10.10.x.x    MASTER1.hadoop.com    master1
10.10.x.x    MASTER2.hadoop.com    master2
10.10.x.x    SLAVE1.hadoop.com     slave1
10.10.x.x    SLAVE2.hadoop.com     slave2
10.10.x.x    SLAVE3.hadoop.com     slave3

/etc/sysconfig/network
hostname=FQDN on every server

I installed Ambari and HDP with SmartSense, and afterwards changed every node's FQDN in /etc/hosts and /etc/sysconfig/network from uppercase to lowercase, e.g. MASTER1.hadoop.com -> master1.hadoop.com.

But now the SmartSense view reports this issue:

Hosts registered in Ambari and SmartSense do not match
........ command hostname -f ........

Running "hst list-agents" also returns both sets of names:

MASTER1.hadoop.com
master1.hadoop.com
MASTER2.hadoop.com
master2.hadoop.com
SLAVE1.hadoop.com
slave1.hadoop.com
SLAVE2.hadoop.com
slave2.hadoop.com
SLAVE3.hadoop.com
slave3.hadoop.com

How do I delete all the uppercase FQDN entries? (A quick check of the rename itself is sketched below.)
Labels: Hortonworks SmartSense
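A quick way to confirm the rename itself took effect before cleaning up HST (a hedged sketch; run on every node, master1 is just the example host):

hostname -f                    # should print the lowercase FQDN, e.g. master1.hadoop.com
grep -i master1 /etc/hosts     # only the lowercase entry should remain
hst list-agents                # shows which names SmartSense still has registered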
06-19-2017 05:08 AM
1 Kudo

1. Connect to MySQL as root.

2. Execute these queries:

mysql> use mysql;
mysql> select User, Host, Password from mysql.user;

Normally that returns two or three rows for the user 'hive':

hive | localhost      | *encodedPassword
hive | adrien.cluster | *encodedPassword
hive | %              | *encodedPassword

If you don't get results like the above, execute these queries:

mysql> create user 'hive'@'adrien.cluster' identified by 'PASSWORD';
mysql> create user 'hive'@'%' identified by 'PASSWORD';
mysql> grant all privileges on *.* to 'hive'@'adrien.cluster';
mysql> grant all privileges on *.* to 'hive'@'%';
mysql> flush privileges;

3. Try the connection test in the Hive section of the Ambari web UI.
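As a follow-up check (a hedged sketch; adrien.cluster comes from this thread, while MYSQL_HOST and the metastore database name are placeholders), verify that the hive user can actually log in from the Hive host:

# Run from the Hive Metastore / HiveServer2 host.
mysql -h MYSQL_HOST -u hive -p -e "SHOW DATABASES;"
# If the grants are correct, the metastore database (e.g. "hive") should appear in the list.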
06-12-2017 04:38 AM

Did you check the option "dfs.namenode.acls.enabled=true"?
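For context, a hedged sketch of what that looks like (the property lives in hdfs-site.xml; /tmp/acl-test and someuser are just examples):

# In hdfs-site.xml (via Ambari: HDFS -> Configs), then restart the NameNode:
#   dfs.namenode.acls.enabled = true

# Afterwards, ACL commands should succeed instead of being rejected.
hdfs dfs -mkdir -p /tmp/acl-test
hdfs dfs -setfacl -m user:someuser:r-x /tmp/acl-test
hdfs dfs -getfacl /tmp/acl-test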
05-24-2017 01:47 AM

You should run that command with the parameter --jars spark-csv_2.10-1.4.0.jar, and check that your Spark and Scala versions are compatible with that jar (see the sketch below).
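A hedged sketch of both ways to pull the package in (the _2.10 artifact assumes Scala 2.10, e.g. Spark 1.x; adjust the versions to match your build, and the local paths are examples):

# Option 1: point --jars at a local copy of the jar plus its dependencies (e.g. commons-csv).
spark-shell --jars /app/spark/spark-csv_2.10-1.4.0.jar,/app/spark/commons-csv-1.1.jar

# Option 2: let Spark resolve the package and its dependencies from Maven Central.
spark-shell --packages com.databricks:spark-csv_2.10:1.4.0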