Member since 10-05-2015

105 Posts
83 Kudos Received
25 Solutions
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1738 | 07-13-2017 09:13 AM |
|  | 2052 | 07-11-2017 10:23 AM |
|  | 1168 | 07-10-2017 10:43 AM |
|  | 4633 | 03-23-2017 10:32 AM |
|  | 4121 | 03-23-2017 10:04 AM |
			
    
	
		
		
09-30-2024 11:16 PM
1 Kudo

I am using HBase 2.3.5, but Tephra does not seem to be supported: I am unable to add the coprocessor to an HBase table using this command:

create 'my_table', 'cf', {METHOD => 'coprocessor', 'CLASSNAME' => 'org.apache.tephra.hbase.coprocessor.TransactionProcessor'}

After running it I get the following version-incompatibility error:

ERROR: table_att is currently the only supported method.
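The error suggests attaching the coprocessor with the table_att method instead; a rough sketch of what I would try (untested; the empty jar-path field assumes the Tephra class is already on the region servers' classpath, and the 1001 priority is a guess):

```bash
# Sketch only: attach the Tephra TransactionProcessor via the table_att method
# that the shell error points to. The empty leading field (jar path) assumes the
# class is already on the region server classpath; 1001 is an assumed priority.
hbase shell <<'EOF'
create 'my_table', 'cf'
alter 'my_table', METHOD => 'table_att', 'coprocessor' => '|org.apache.tephra.hbase.coprocessor.TransactionProcessor|1001|'
describe 'my_table'
EOF
```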
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
02-17-2021 08:52 PM

Hi @Narendra_, as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could help others give you a more accurate answer to your question. You can link this thread as a reference in your new post.
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
12-08-2020 11:21 PM

@TonyQiu, as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could help others give you a more accurate answer to your question. You can link this thread as a reference in your new post.
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
07-01-2020 01:23 PM

The Phoenix-Hive storage handler as of v4.14.0 (CDH 5.12) seems buggy. I was able to get the Hive external wrapper table working for simple queries, after tweaking the column mapping around upper/lower-case gotchas. However, it fails when I try the "INSERT OVERWRITE DIRECTORY ... SELECT ..." command to export to a file:

org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): Undefined column. columnName=<table name>

This is a known problem that apparently no one is looking at:

https://issues.apache.org/jira/browse/PHOENIX-4804
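For context, a rough sketch of the failing pattern (the table, columns, ZooKeeper quorum, and output path below are placeholders, not my actual schema):

```bash
# Sketch of the pattern that fails on Phoenix 4.14.0: a Hive external table over
# Phoenix, then an export to a directory. Simple SELECTs work; the INSERT
# OVERWRITE DIRECTORY step throws ERROR 504 (ColumnNotFoundException).
# Table/column names, quorum, and the output path are placeholders.
hive -e "
CREATE EXTERNAL TABLE phoenix_wrapper (id string, val string)
STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
TBLPROPERTIES (
  'phoenix.table.name'       = 'MY_PHOENIX_TABLE',
  'phoenix.zookeeper.quorum' = 'zk-host1,zk-host2,zk-host3',
  'phoenix.rowkeys'          = 'id',
  'phoenix.column.mapping'   = 'id:ID,val:VAL'
);
INSERT OVERWRITE DIRECTORY '/tmp/phoenix_export'
SELECT * FROM phoenix_wrapper;
"
```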
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
02-26-2020 01:50 AM

@rchintaguntla All the logs are here: https://we.tl/t-cCQWq6tdmZ
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
09-13-2017 06:12 PM

@Roni I was facing the same kind of issue and resolved it with the following steps:

1) In Ambari -> Hive -> Configs -> Advanced -> Custom hive-site -> Add Property..., add the following properties to custom hive-site.xml, based on your HBase configuration (you can look the values up under Ambari -> HBase -> Configs):

hbase.zookeeper.quorum=xyz (take this value from HBase)
zookeeper.znode.parent=/hbase-unsecure (take this value from HBase)
phoenix.schema.mapSystemTablesToNamespace=true
phoenix.schema.isNamespaceMappingEnabled=true

2) Copy the following jars to /usr/hdp/current/hive-server2/auxlib:

/usr/hdp/2.5.6.0-40/phoenix/phoenix-4.7.0.2.5.6.0-40-hive.jar
/usr/hdp/2.5.6.0-40/phoenix/phoenix-hive-4.7.0.2.5.6.0-40-sources.jar

If that jar does not work for you, try phoenix-hive-4.7.0.2.5.3.0-37.jar instead and copy it to /usr/hdp/current/hive-server2/auxlib.

3) Add this property to custom hive-env:

HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-server2/auxlib/

4) Add the following properties to custom hbase-site.xml:

phoenix.schema.mapSystemTablesToNamespace=true
phoenix.schema.isNamespaceMappingEnabled=true

5) Also run the following commands:

jar uf /usr/hdp/current/hive-server2/auxlib/phoenix-4.7.0.2.5.6.0-40-client.jar /etc/hive/conf/hive-site.xml
jar uf /usr/hdp/current/hive-server2/auxlib/phoenix-4.7.0.2.5.6.0-40-client.jar /etc/hbase/conf/hbase-site.xml

I hope this solution works for you 🙂
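Taken together, the file-level parts of steps 2 and 5 amount to roughly the following on the HiveServer2 host (the HDP 2.5.6.0-40 paths are the ones from the steps above; adjust them for your stack version):

```bash
# Sketch of steps 2 and 5 on the HiveServer2 host. Paths are the HDP 2.5.6.0-40
# ones quoted in the steps above and will differ for other stack versions.
AUXLIB=/usr/hdp/current/hive-server2/auxlib
mkdir -p "$AUXLIB"

# Step 2: make the Phoenix-Hive jars available to HiveServer2.
cp /usr/hdp/2.5.6.0-40/phoenix/phoenix-4.7.0.2.5.6.0-40-hive.jar         "$AUXLIB/"
cp /usr/hdp/2.5.6.0-40/phoenix/phoenix-hive-4.7.0.2.5.6.0-40-sources.jar "$AUXLIB/"

# Step 5: embed the current Hive and HBase configs into the Phoenix client jar
# so the connection properties are picked up along with it.
jar uf "$AUXLIB/phoenix-4.7.0.2.5.6.0-40-client.jar" /etc/hive/conf/hive-site.xml
jar uf "$AUXLIB/phoenix-4.7.0.2.5.6.0-40-client.jar" /etc/hbase/conf/hbase-site.xml
```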
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
11-21-2016 02:18 PM
1 Kudo

Make sure that you have updated the hbase-site.xml on your sqlline classpath for the properties to take effect.
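One way to do that, assuming an HDP-style layout (the conf directory, client path, and quorum below are placeholders), is to point sqlline.py at the directory holding the updated hbase-site.xml:

```bash
# Sketch: expose the updated hbase-site.xml to sqlline.py. /etc/hbase/conf and
# the phoenix-client path are assumptions for an HDP-style layout; copying
# hbase-site.xml into the Phoenix bin/ directory is another common option.
export HBASE_CONF_DIR=/etc/hbase/conf
/usr/hdp/current/phoenix-client/bin/sqlline.py zk-host1,zk-host2,zk-host3:2181
```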
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
11-07-2016 04:00 PM

@ARUN please keep in mind that by setting this property you are giving the load balancer algorithm limited information about the load on your cluster, which will impact its ability to balance the regions.
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
05-30-2016 06:03 AM

Hi @Rushikesh Deshmukh

This article provides an overview for quickly comparing the backup approaches:

http://blog.cloudera.com/blog/2013/11/approaches-to-backup-and-disaster-recovery-in-hbase/

I used distcp as well, but it did not work for me: the data was copied, but I ran into issues when I ran hbck afterwards. If you want to create a backup on the same cluster, CopyTable and snapshots are very easy; for inter-cluster copies, snapshots work well (a quick sketch follows below). Let me know if you need more details.

This link is also very useful and clear:

http://hbase.apache.org/0.94/book/ops.backup.html
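For example, a same-cluster snapshot backup and a CopyTable run look roughly like this (table, snapshot, and destination names are placeholders):

```bash
# Sketch: snapshot-based backup plus a CopyTable copy; 'my_table' and the
# snapshot/destination names are placeholders.
hbase shell <<'EOF'
snapshot 'my_table', 'my_table_snap'
clone_snapshot 'my_table_snap', 'my_table_backup'
EOF

# CopyTable into a new table on the same cluster; add
# --peer.adr=<dest-zk-quorum>:2181:/hbase to copy to another cluster instead.
hbase org.apache.hadoop.hbase.mapreduce.CopyTable --new.name=my_table_copy my_table
```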