Member since 04-11-2016
      
535 Posts | 148 Kudos Received | 77 Solutions
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 9185 | 09-17-2018 06:33 AM |
| | 2415 | 08-29-2018 07:48 AM |
| | 3397 | 08-28-2018 12:38 PM |
| | 2906 | 08-03-2018 05:42 AM |
| | 2622 | 07-27-2018 04:00 PM |
			
    
	
		
		
10-05-2018 11:42 AM
@yogesh turkane It looks like your coordinator process is not registered, or there is a configuration issue. From the logs I observed the error below:

```
2018-10-04T06:43:50,101 ERROR [main] io.druid.curator.discovery.ServerDiscoverySelector - No server instance found for [druid/coordinator]
2018-10-04T06:43:50,101 WARN [main] io.druid.java.util.common.RetryUtils - Failed on try 1, retrying in 886ms.
io.druid.java.util.common.IOE: No known server
```

Check the configurations and restart the Druid services.
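That error means service discovery has no coordinator registered. A quick way to confirm whether the coordinator is up at all is to probe its status endpoint; this is just a sketch, and the hostname below is a placeholder (8081 is only the default coordinator port):

```shell
# Probe the coordinator's status endpoint (replace coordinator-host
# with your actual host; 8081 is the default coordinator port).
curl -s http://coordinator-host:8081/status

# If the probe fails, check whether the coordinator process is
# running at all on that host:
ps -ef | grep -i '[c]oordinator'
```

If the process is running but still not discovered, the usual suspects are the ZooKeeper connection settings in the coordinator's runtime properties.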
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
10-05-2018 09:40 AM
					
@Anil Varghese I suspect the table is partitioned, which is why "describe formatted" does not show any stats-related information. Try running "describe extended" for a particular partition spec.
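For a partitioned table, the commands look roughly like this; the database, table, and partition names below are made-up examples:

```shell
# Table-level describe on a partitioned table often shows no stats;
# ask for a specific partition instead (all names are examples).
hive -e "DESCRIBE EXTENDED mydb.sales PARTITION (dt='2018-10-01');"

# DESCRIBE FORMATTED also accepts a partition spec and is easier to read:
hive -e "DESCRIBE FORMATTED mydb.sales PARTITION (dt='2018-10-01');"
```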
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
09-18-2018 07:05 AM
					
							 
@Teddy Brewski Below are the properties which control the logs and other files written to the /tmp/&lt;username&gt; folder:

```xml
<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive</value>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/var/log/hadoop/hive/tmp/${user.name}</value>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/var/log/hadoop/hive/tmp/hive/${hive.session.id}_resources</value>
</property>
<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>/var/log/hadoop/hive/tmp/operations_logs</value>
</property>
```

You can add/modify these under Ambari -> Hive configs.
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
09-17-2018 10:48 AM
					
@Vikash Kumar The 'mapreduce.job.*' properties apply only to MR jobs. In Tez, the number of mappers is controlled by the parameters below:

- tez.grouping.max-size (default 1073741824, i.e. 1 GB)
- tez.grouping.min-size (default 52428800, i.e. 50 MB)
- tez.grouping.split-count (not set by default)

And reducers are controlled in Hive with these properties:

- hive.exec.reducers.bytes.per.reducer (default 256000000)
- hive.exec.reducers.max (default 1009)
- hive.tez.auto.reducer.parallelism (default false)

For more details, refer to the link.
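As a rough illustration of how the two grouping sizes bound the mapper count: Tez groups splits so that each group falls between the min and max size. The sketch below estimates the bounds for a hypothetical 10 GiB input; the real grouping algorithm also considers cluster wave factors, so treat this as an approximation only:

```shell
# Approximate mapper-count bounds for a 10 GiB input
# with the default Tez grouping sizes.
input_bytes=$((10 * 1024 * 1024 * 1024))
max_group=1073741824   # tez.grouping.max-size, 1 GB
min_group=52428800     # tez.grouping.min-size, 50 MB

# At least ceil(input / max-size) groups, at most input / min-size groups.
min_mappers=$(( (input_bytes + max_group - 1) / max_group ))
max_mappers=$(( input_bytes / min_group ))
echo "between $min_mappers and $max_mappers mappers"
```

Shrinking tez.grouping.max-size therefore raises the lower bound, which is the usual knob for getting more mappers.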
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
09-17-2018 10:37 AM
					
@Sudharsan Ganeshkumar Yes, you can increase the number of mappers to improve parallelism, depending on your cluster resources.
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
09-17-2018 06:33 AM
					
@Sudharsan Ganeshkumar -m specifies the number of mappers run to extract the data from the source database. Here, '-m 1' means running one mapper.
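A sketch of a sqoop import using -m for parallelism; the connection string, credentials, table, and column names below are all placeholders:

```shell
# Import with 4 parallel mappers. For -m > 1, --split-by is needed
# unless the table has a primary key Sqoop can split on.
# Every name below is a placeholder.
sqoop import \
  --connect "jdbc:mysql://db-host/mydb" \
  --username myuser -P \
  --table orders \
  --split-by order_id \
  -m 4 \
  --target-dir /user/hive/warehouse/orders
```

Each mapper opens its own connection to the source database, so -m is also bounded by what the database can tolerate.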
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
08-29-2018 07:48 AM
					
@Vinuraj M Below is the workaround for the issue:

1. In /usr/hdp/current/superset/lib/python3.4/site-packages/superset/models.py, replace:

```python
password = Column(EncryptedType(String(1024), config.get('SECRET_KEY')))
```

with:

```python
password = Column(String(1024))
```

2. Then drop and re-create the database.
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
08-29-2018 07:41 AM
					
@Benhail Muthyala Sqoop eval with a select query simply prints the query output to the terminal; storing it directly in a variable is not possible. You can do the following instead:

```shell
sqoop eval \
  -libjars $LIB_JARS -Dteradata.db.input.job.type=hive \
  --connect "jdbc:teradata://XXXXXXx" \
  --username XXXXXX \
  --password XXXXX \
  --query "select count(*) from database_name.table_name" 1> sqoop.out 2> sqoop.err

hive -S -e "select count(*) from database_name.table_name;" 1> hive.out 2> hive.err
```

The files sqoop.out and hive.out will also include some log messages, which can be grepped out and removed.
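Once the output lands in a file, the actual count can be pulled into a variable with grep. A small self-contained sketch; the log lines in the here-doc are made up to stand in for real Hive noise:

```shell
# Simulate a hive.out that mixes log noise with the count(*) result,
# then pull just the number into a shell variable.
cat > hive.out <<'EOF'
WARN  [main] conf.HiveConf: HiveConf of name hive.example does not exist
12345
Time taken: 4.21 seconds, Fetched: 1 row(s)
EOF

# The count(*) result is the only line that is purely digits.
row_count=$(grep -E '^[0-9]+$' hive.out)
echo "$row_count"
```

The same pattern works for sqoop.out, though the exact filter may need adjusting to match the log format your versions emit.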
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
08-29-2018 07:28 AM (1 Kudo)
					
@James Creating Hive bucketed tables is supported from Spark 2.3 (Jira SPARK-17729). By default, Spark disallows writing output to Hive bucketed tables. Setting `hive.enforce.bucketing=false` and `hive.enforce.sorting=false` will allow you to save to Hive bucketed tables. If you want, you can set those two properties in Custom spark2-hive-site-override in Ambari; then all Spark2 applications will pick up the configurations. For more details, refer to the Slideshare link.
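If you would rather scope the two properties to a single job instead of cluster-wide, one possible sketch is to pass them at submit time. The `spark.hadoop.` prefix is Spark's general mechanism for forwarding settings into the Hadoop/Hive configuration; whether it reaches this particular check depends on your Spark version, so verify against your setup. The class and jar names are placeholders:

```shell
# Per-job alternative to the cluster-wide hive-site override.
# Property names come from the answer above; class/jar are placeholders.
spark-submit \
  --conf spark.hadoop.hive.enforce.bucketing=false \
  --conf spark.hadoop.hive.enforce.sorting=false \
  --class com.example.MyApp \
  my-app.jar
```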
						
					