Member since 10-31-2016

81 Posts | 1 Kudos Received | 0 Solutions

09-03-2019 05:13 AM

0: jdbc:mysql://hdp1.demo.lab/hive> set mapred.job.queue.name=root.admin;
Error: Unknown system variable 'job' (state=HY000,code=1193)
0: jdbc:mysql://hdp1.demo.lab/hive> set tez.queue.name=root.admin;
Error: Unknown system variable 'queue' (state=HY000,code=1193)
0: jdbc:mysql://hdp1.demo.lab/hive> SET tez.queue.name=root.admin;
Error: Unknown system variable 'queue' (state=HY000,code=1193)
0: jdbc:mysql://hdp1.demo.lab/hive> SET TEZ.QUEUE.NAME=ROOT.ADMIN
0: jdbc:mysql://hdp1.demo.lab/hive> SET TEZ.QUEUE.NAME=ROOT.ADMIN;
Error: Unknown system variable 'QUEUE' (state=HY000,code=1193)
0: jdbc:mysql://hdp1.demo.lab/hive> set TEZ.queue.name=root.admin;
Error: Unknown system variable 'queue' (state=HY000,code=1193)
0: jdbc:mysql://hdp1.demo.lab/hive> SET tez.queue.name=ROOT.ADMIN;
Error: Unknown system variable 'queue' (state=HY000,code=1193)
0: jdbc:mysql://hdp1.demo.lab/hive> SET tez.queue.name=admin;
Error: Unknown system variable 'queue' (state=HY000,code=1193)
0: jdbc:mysql://hdp1.demo.lab/hive> SET tez.job.queue.name=admin;
Error: Unknown system variable 'job' (state=HY000,code=1193)
0: jdbc:mysql://hdp1.demo.lab/hive>
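For context, the prompt above shows a beeline session opened with the MySQL JDBC driver (jdbc:mysql://...), so the SET statements are parsed by MySQL rather than Hive; "Unknown system variable" is a MySQL error. A minimal sketch of applying the Tez queue through HiveServer2 instead, using PyHive; the host, port, username, and queue name below are illustrative assumptions:

# Sketch: apply tez.queue.name in a HiveServer2 session (not the MySQL metastore DB).
# Host, port, username, and queue name are illustrative placeholders.
from pyhive import hive

conn = hive.connect(host='hdp1.demo.lab', port=10000, username='admin')
cur = conn.cursor()

# HiveServer2 interprets SET here, so Tez/MapReduce session properties are accepted.
cur.execute('SET tez.queue.name=root.admin')
cur.execute('SHOW TABLES')
print(cur.fetchall())

cur.close()
conn.close()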

08-13-2018 10:59 AM

Assign to @Jonathan Sneep. You can also create a new temp table with the JSON file format and use "insert into the JSON table from the existing table;". After creating the new table, the files can be read from HDFS in JSON format.
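A minimal sketch of that approach, assuming an unsecured HiveServer2 reachable via PyHive and that the hive-hcatalog JsonSerDe jar is on the Hive classpath; the host, table, and column names are illustrative:

# Sketch: copy an existing table into a JSON-backed temp table so its files on HDFS
# are plain JSON text. Host, table, and column names are illustrative placeholders.
from pyhive import hive

conn = hive.connect(host='hiveserver2-host', port=10000, username='hive')
cur = conn.cursor()

# New table stored as JSON text via the hive-hcatalog JsonSerDe.
cur.execute("""
    CREATE TABLE tmp_orders_json (id INT, name STRING)
    ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
    STORED AS TEXTFILE
""")

# Copy rows from the existing table; the files under the new table's HDFS location
# (e.g. /apps/hive/warehouse/tmp_orders_json/) can then be read as JSON.
cur.execute("INSERT INTO TABLE tmp_orders_json SELECT id, name FROM orders")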

02-05-2018 08:38 AM

workflow-log.txt @Sivaprasanna I am using Informatica BDM version 10.0.1. The source is MySQL and the target is a Hive table. While executing the Informatica workflow I am getting the error as mentioned. FYR, I am attaching the workflow log. Thanks.

06-20-2019 09:27 AM

Any solution on this?

11-09-2017 10:50 AM

@kotesh banoth New data will be replicated to the newly added DataNode. But if you want to rebalance your cluster, it is best to run the HDFS Rebalancer from the Ambari UI or via the command line. HDFS provides a "balancer" utility to help balance the blocks across DataNodes in the cluster.

Ambari UI --> HDFS --> Service Actions --> HDFS Rebalance

(OR)

# su - hdfs -c "hdfs --config /usr/hdp/current/hadoop-client/conf balancer -threshold 10"

https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-operations/content/rebalancing_hdfs.html

09-16-2017 07:06 AM

@kotesh banoth Can you try without SASL, or somehow it needs to be imported in your Python. Please try this with NOSASL to see what you get next (or verify that "cyrus-sasl-devel" is installed properly):

conn = hive.connect(host='172.16.0.XXX', port=10000, username='kotesh', auth='NOSASL')

It will need the following config on the Hive side:

<property>
  <name>hive.server2.authentication</name>
  <value>NOSASL</value>
</property>
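For completeness, a slightly fuller sketch of the NOSASL connection with PyHive, including the import and a smoke-test query; the host, username, and query are illustrative, and it assumes hive.server2.authentication=NOSASL as above:

# Sketch: connect to HiveServer2 with NOSASL auth via PyHive and run a test query.
# Host, port, and username are illustrative placeholders.
from pyhive import hive

conn = hive.connect(host='172.16.0.XXX', port=10000, username='kotesh', auth='NOSASL')
cur = conn.cursor()

cur.execute('SHOW DATABASES')  # simple smoke test of the session
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()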

06-06-2017 12:08 PM

Thanks so much. Steps followed:

1) psql ambari -U ambari -W -p 5432
2) pg_dump ambari -U ambari -W -p 5432 > ambari_bkup
3) Delete commands
4) ambari-server start

07-09-2018 03:33 PM

Hello Kotesh, were you able to resolve this issue after setting all the properties as suggested?

01-09-2017 07:04 AM

@rguruvannagari thanks a lot.

12-29-2016 06:21 PM

The query fails here. Below is the code where the error is generated:

public static RecordUpdater getAcidRecordUpdater(JobConf jc, TableDesc tableInfo, int bucket,
                                                 FileSinkDesc conf, Path outPath,
                                                 ObjectInspector inspector,
                                                 Reporter reporter, int rowIdColNum)
    throws HiveException, IOException {
  HiveOutputFormat<?, ?> hiveOutputFormat = getHiveOutputFormat(jc, tableInfo);
  AcidOutputFormat<?, ?> acidOutputFormat = null;
  if (hiveOutputFormat instanceof AcidOutputFormat) {
    acidOutputFormat = (AcidOutputFormat) hiveOutputFormat;
  } else {
    throw new HiveException("Unable to create RecordUpdater for HiveOutputFormat that does not " +
        "implement AcidOutputFormat");
  }

Do these tables have ACID support enabled?
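For reference, a minimal sketch of checking and enabling ACID support on a Hive table via PyHive; the host and table names are illustrative, and it assumes the usual server-side transaction settings (hive.support.concurrency, hive.txn.manager, etc.) are already in place:

# Sketch: check whether a table is transactional (ACID) and create one that is.
# Host, port, and table names are illustrative placeholders.
from pyhive import hive

conn = hive.connect(host='hiveserver2-host', port=10000, username='hive')
cur = conn.cursor()

# ACID tables carry 'transactional'='true' in their table properties.
cur.execute("SHOW TBLPROPERTIES my_table")
print(cur.fetchall())

# On Hive 1.x/2.x an ACID table must be bucketed, stored as ORC,
# and flagged transactional.
cur.execute("""
    CREATE TABLE my_acid_table (id INT, name STRING)
    CLUSTERED BY (id) INTO 4 BUCKETS
    STORED AS ORC
    TBLPROPERTIES ('transactional'='true')
""")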