Member since: 04-11-2016
      
535 Posts | 148 Kudos Received | 77 Solutions

        My Accepted Solutions
| Title | Views | Posted | 
|---|---|---|
| | 9155 | 09-17-2018 06:33 AM |
| | 2397 | 08-29-2018 07:48 AM |
| | 3393 | 08-28-2018 12:38 PM |
| | 2891 | 08-03-2018 05:42 AM |
| | 2613 | 07-27-2018 04:00 PM |
			
    
	
		
		
02-24-2017 09:47 PM

@Aruna Sameera I suspect the issue is a mismatch between the HDFS URI the table is using and the actual URI. Compare the output of the following commands:

metatool -listFSRoot
hdfs getconf -nnRpcAddresses
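If the two outputs disagree, a minimal sketch of correcting the recorded FS root would be the following (the NameNode host names below are placeholders, not from the thread):

# List the FS root URIs currently recorded in the Hive metastore
hive --service metatool -listFSRoot

# Point the recorded root at the correct NameNode URI (new URI first, old URI second; hosts are placeholders)
hive --service metatool -updateLocation hdfs://correct-nn:8020 hdfs://old-nn:8020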
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
02-22-2017 06:07 PM

@Aruna Sameera Can you share the output of the following?

hive> describe formatted telecom.recharge;
hadoop fs -ls /user/hive/warehouse
hadoop fs -ls /user/hive/warehouse/telecom.db
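The Location reported by describe formatted should point at a directory that actually exists in the warehouse. A minimal sketch of checking just that (only the database and table names come from the post):

# Extract the table location recorded in the metastore
hive -e 'describe formatted telecom.recharge;' | grep -i 'Location'

# Confirm the directory is really present in HDFS
hadoop fs -ls /user/hive/warehouse/telecom.db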
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
02-21-2017 07:59 AM

SYMPTOM

When running Hive queries, the following error is displayed in the Resource Manager log:

2017-02-07 15:08:32,140 ERROR impl.MetricsSinkAdapter (MetricsSinkAdapter.java:publishMetricsFromQueue(148)) - Got sink exception, retry in 4600ms
org.apache.hadoop.metrics2.MetricsException: Failed to putMetrics
  at org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink.putMetrics(HadoopTimelineMetricsSink.java:216)
  at org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:186)
  at org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
  at org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
  at org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:134)
  at org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)
Caused by: java.net.UnknownHostException: http
  at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:178)
  at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
  at java.net.Socket.connect(Socket.java:579)
  at java.net.Socket.connect(Socket.java:528)
  at java.net.Socket.<init>(Socket.java:425)
  at java.net.Socket.<init>(Socket.java:280)

ROOT CAUSE

This issue occurs when a reverse DNS lookup returns an incorrect hostname for the IP address.

RESOLUTION

Fix the DNS configuration in /etc/resolv.conf.
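A minimal sketch of verifying the lookup on the affected node (the hostname and IP address below are placeholders, not from the article):

# Confirm which DNS servers the node is using
cat /etc/resolv.conf

# Forward and reverse lookups should agree for the metrics collector host (placeholder host/IP)
dig +short metrics-collector.example.com    # should return the expected IP address
dig +short -x 10.0.0.15                     # should return the expected hostname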
				
			
			
			
			
			
			
			
			
			
		
		
			
				
						
						
						
		
	
					
			
		
	
	
	
	
				
		
	
	
			
    
	
		
		
02-21-2017 07:50 AM

@Joshua Adeleke It looks like some tables already exist in hive_dev, which is causing the issue. Try using a clean database for the metastore.
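A minimal sketch of starting from a clean metastore database (the database name and -dbType are assumptions, not from the thread):

# Create an empty database for the metastore (MySQL used as an example; database name is a placeholder)
mysql -u root -p -e "CREATE DATABASE hive_metastore_clean;"

# Update javax.jdo.option.ConnectionURL in hive-site.xml to the new database, then initialize the schema
schematool -dbType mysql -initSchema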
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
02-21-2017 06:47 AM

@Joshua Adeleke It may be because the database already exists. Share the output of schematool -initSchema -dbType mysql -dryRun and of the following query against the Hive metastore database:

select * from "VERSION";
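A minimal sketch of running both checks from the shell (the metastore database name and credentials are placeholders):

# Dry run shows the DDL schematool would execute without applying it
schematool -initSchema -dbType mysql -dryRun

# Check which schema version, if any, is already recorded (placeholder user/database)
mysql -u hive -p -D hive_metastore -e 'SELECT * FROM VERSION;'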
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
02-21-2017 04:47 AM

1 Kudo

@Raj Kadel The issue might be due to incorrect parameters for ACID. Try the following settings:

hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
hive.compactor.initiator.on=true
hive.compactor.worker.threads=10
hive.support.concurrency=true
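A minimal sketch of testing these values for a single session; note that the compactor settings normally belong in hive-site.xml on the HiveServer2/metastore side rather than in the client session:

# Session-level test of the ACID-related settings (values as in the post)
hive -e "
SET hive.support.concurrency=true;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
SET hive.compactor.initiator.on=true;
SET hive.compactor.worker.threads=10;
"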
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
02-17-2017 05:28 AM

@motohiro mito For now, connectivity to HiveServer2 from Hue is not supported. Jira HUE-2738 is already in place to track this.
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
02-14-2017 07:13 AM

@Srikanth Puli You need to use "describe extended <table_name>;" as below:

describe extended tableex5;

| Detailed Table Information | Table(tableName:tableex5, dbName:default, owner:hive, createTime:1487032307, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:col1, type:string, comment:null), FieldSchema(name:col2, type:int, comment:null)], location:hdfs://ssnode253stats.openstacklocal:8020/apps/hive/warehouse/tableex5, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe, parameters:{serialization.format=1, field.delim=%|%}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[], parameters:{totalSize=24, numRows=2, rawDataSize=0, COLUMN_STATS_ACCURATE={"BASIC_STATS":"true"}, numFiles=1, transient_lastDdlTime=1487032937}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE) |
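If a line-per-property view is easier to read, describe formatted presents the same metadata in a tabular layout (same table name as in the post):

# Formatted output is easier to scan than the single-line extended output
hive -e 'describe formatted tableex5;'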
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
02-13-2017 07:26 PM

@Bala Vignesh N V Yes, there would be a performance difference between select * and select column, as 'select *' brings in all of the column data and indexing does not help much in that case. ORC would give better performance than text format regardless of the select query being run, because the data is stored in splits and each split header contains details about the data within that split. ORC also has predicate pushdown, which further improves performance. Refer to link1 and link2 for details on increasing performance.
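A minimal sketch of converting a text-format table to ORC so the two formats can be compared directly (the table names are placeholders, not from the thread):

# Create an ORC copy of an existing text-format table (placeholder table names)
hive -e "CREATE TABLE sales_orc STORED AS ORC AS SELECT * FROM sales_text;"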
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
02-07-2017 07:43 PM

@David Halik This is not a bug; the release notes for HDP 2.5.3 list the supported metastore databases. See Release_MR.