Member since 06-24-2016

111 Posts
8 Kudos Received
0 Solutions
09-24-2017 02:47 AM
What is application_1504517816511_0001~4? I'm not sure, but if application_1504517816511 was run before ResourceManager HA was enabled, kill that application first.
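If it helps, here is a minimal sketch of how to find and kill a stale YARN application from the command line; the application ID below is just the one mentioned above.

# list applications known to the ResourceManager and look for the stale ID
yarn application -list -appStates ALL

# kill the leftover application before enabling ResourceManager HA
yarn application -kill application_1504517816511_0001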
						
					
09-22-2017 12:40 AM
Did you install the HDP client on that server?
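As a rough check (assuming a typical HDP install), you can look for the Hadoop client binaries and config on that server:

# if the HDP/Hadoop client is installed, these should resolve and print a version
which hadoop hdfs beeline
hadoop version
ls /etc/hadoop/conf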
						
					
08-04-2017 07:55 AM
That's weird. Where are these partitions, /grid/data2 and /grid/data3, on slave1?
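Just as a sketch, you can verify on slave1 whether those paths exist and are backed by their own mounts (the paths come from the question above):

# do the directories exist, and are they separate mounts?
ls -ld /grid/data2 /grid/data3
df -h /grid/data2 /grid/data3
mount | grep /grid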
						
					
08-04-2017 03:47 AM
Then check the hostname of the node where HiveServer2 is installed.
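For example, on a candidate node you could check whether the HiveServer2 process is running and listening; port 10000 is the default for the binary transport, so adjust if it has been changed.

# hostname to use in the JDBC URL
hostname -f

# is HiveServer2 running and listening on its default port?
ps -ef | grep -i hiveserver2 | grep -v grep
netstat -tln | grep 10000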
						
					
08-04-2017 01:21 AM
That's the wrong JDBC class name in the URI. Try beeline -u "jdbc:hive2://hostname:10000/default" instead.
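A slightly fuller connection example, assuming HiveServer2 runs on port 10000 in binary mode without Kerberos (the username is just a placeholder):

# connect to the default database via the hive2 JDBC URL
beeline -u "jdbc:hive2://hostname:10000/default" -n hive

# once connected, a quick sanity check:
#   show databases;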
						
					
08-04-2017 12:53 AM
I have a few questions.
Q1. Did you install the DataNode service on slave1?
Q2. Could you let me know the values of the DataNode directories under "Ambari > HDFS > Configs > Settings > DataNode"?
Q3. Did you check the disk mount list on the slave? (A quick check for Q2 and Q3 is sketched below.)
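Just as a sketch, the configured DataNode directories and the actual mounts on slave1 can be compared like this (assuming the HDFS client config is present on the node):

# configured DataNode directories (the same value Ambari shows for the DataNode directories setting)
hdfs getconf -confKey dfs.datanode.data.dir

# what is actually mounted on the slave
df -h
grep /grid /proc/mounts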
						
					
07-28-2017 02:23 PM
I'm just curious: why set these options in the Hive config from the Ambari web UI at all? Because what that means is that if I want to use the Hive ORC file format with advanced TBLPROPERTIES such as "orc.compress", "orc.compress.size", "orc.stripe.size", "orc.create.index", etc., I have to specify those TBLPROPERTIES every time I create a Hive table in ORC format.
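For illustration only, this is the kind of per-table override meant above; the table name and values are made up, and it is run through beeline here just to keep it self-contained:

# spell out the ORC options per table instead of relying on the cluster-wide Hive settings
beeline -u "jdbc:hive2://hostname:10000/default" -e "
CREATE TABLE example_orc (no INT, id STRING, code STRING)
STORED AS ORC
TBLPROPERTIES (
  'orc.compress'='ZLIB',
  'orc.create.index'='true',
  'orc.stripe.size'='67108864',
  'orc.row.index.stride'='50000'
);"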
						
					
07-28-2017 06:28 AM
I'm using HDP 2.5.3.

Hive settings:
  ACID Transactions: ON
  Execution Engine: TEZ
  CBO: ON
  Fetch column stats at compiler: ON
  Default ORC Stripe Size: 64 MB
  ORC Compression Algorithm: ZLIB
  ORC Storage Strategy: SPEED

Here's my question. If I create a Hive table like this:

CREATE TABLE test01 (
  no   INT,
  id   STRING,
  code STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
STORED AS ORC;

then what are the default TBLPROPERTIES for the test01 table's ORC options?

TBLPROPERTIES (
  'orc.compress'='?',
  'orc.create.index'='?',
  'orc.stripe.size'='?',
  'orc.row.index.stride'='?'
)

For example:

TBLPROPERTIES (
  'orc.compress'='ZLIB',
  'orc.create.index'='true',
  'orc.stripe.size'='67108864',
  'orc.row.index.stride'='50000'
)
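One way to see which ORC settings a table's files actually ended up with is to dump a file's metadata; the warehouse path below is only an example and should be replaced with the table's real location.

# dump ORC file metadata (compression, stripe layout, row counts, etc.)
hive --orcfiledump /apps/hive/warehouse/test01/000000_0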
						
					
Labels: Apache Hive, Apache Tez
07-10-2017 06:40 AM
First, if you executed the Spark command with the master set to local, check the connection host and port on that local server. Second, check whether your firewall/iptables is on or off.
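A couple of quick checks along those lines; the host and port are placeholders to replace with the ones from the connection error, and firewalld is assumed for a RHEL/CentOS 7 node:

# can this machine reach the host and port the job is trying to connect to?
nc -zv hostname 7077

# is a firewall active on the local server?
systemctl status firewalld
iptables -L -n | head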
						
					
06-30-2017 06:41 AM
If you're using a Hadoop cluster managed by Hortonworks Ambari, you don't have to pass the --master yarn parameter, because Spark on an HDP cluster is set up to run in YARN mode by default.
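A quick way to confirm; the paths below are typical for an HDP 2.x Spark client and may differ on your cluster:

# the default master is taken from spark-defaults.conf on the client node
grep -i 'spark.master' /usr/hdp/current/spark-client/conf/spark-defaults.conf

# so a plain spark-submit (no --master flag) already runs on YARN
spark-submit --class org.apache.spark.examples.SparkPi \
  /usr/hdp/current/spark-client/lib/spark-examples*.jar 10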
						
					