Member since 02-19-2016
158 Posts
69 Kudos Received
24 Solutions
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1864 | 11-29-2017 08:47 PM |
| | 2221 | 10-24-2017 06:37 PM |
| | 20189 | 08-04-2017 06:58 PM |
| | 2292 | 05-15-2017 06:42 PM |
| | 2997 | 03-27-2017 06:36 PM |
11-29-2017 08:47 PM
No, it's not possible. An index may have only one parent table. Keep in mind that indexes are updated whenever you update the parent table. Imagine you had more than one parent table: any update to just one of them would require a lookup of all corresponding records in the other parents to update the index data. That would be far too expensive for a simple operation such as an upsert.
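A minimal sketch of what the single-parent rule looks like in practice, assuming the standard Phoenix CREATE INDEX syntax; the table, column, and index names are hypothetical:

```
phoenix-sqlline localhost <<'SQL'
-- A Phoenix secondary index is always defined against exactly one table;
-- there is no syntax for spanning several parent tables.
CREATE TABLE IF NOT EXISTS orders (id BIGINT PRIMARY KEY, customer VARCHAR, total DECIMAL);
CREATE INDEX IF NOT EXISTS idx_orders_customer ON orders (customer);
SQL
```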
						
					
11-09-2017 10:35 PM
You may use user_permission '<table>' in the HBase shell to get the list of users that can access the table. You may also run scan 'hbase:acl', but that requires superuser privileges in HBase.
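A minimal sketch of both commands run non-interactively; the table name is hypothetical:

```
hbase shell <<'EOF'
user_permission 'my_table'   # lists users with permissions on this table
scan 'hbase:acl'             # full ACL table; requires HBase superuser privileges
EOF
```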
						
					
10-24-2017 06:37 PM
No, there is no ALTER for an index. The problem is that adding one more column to the index would require the same full scan over the table, so the operation would be just as expensive as creating a new index.
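A minimal sketch of the usual workaround, dropping and recreating the index with the extra column; the index, table, and column names are hypothetical:

```
phoenix-sqlline localhost <<'SQL'
-- Since an existing index cannot be extended with a new column,
-- drop it and create a new one that includes the additional column.
DROP INDEX IF EXISTS idx_orders_customer ON orders;
CREATE INDEX idx_orders_customer ON orders (customer, order_date);
SQL
```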
						
					
10-12-2017 08:27 PM
You may want to use the Stargate REST API (see the HBase Stargate wiki page).
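A minimal sketch of calling the REST gateway with curl, assuming the HBase REST (Stargate) server is running on its default port 8080; the host, table, and row key are hypothetical:

```
# Check that the REST server is reachable.
curl -H "Accept: application/json" "http://localhost:8080/version"

# Fetch a single row from a table.
curl -H "Accept: application/json" "http://localhost:8080/my_table/row1"
```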
						
					
09-07-2017 05:16 PM
Check that you have a /user/tcb directory in HDFS. Log in as the hdfs user and run the following commands: hadoop fs -mkdir /user/tcb and hadoop fs -chown tcb /user/tcb
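The same steps as a minimal sketch, run via sudo instead of logging in as hdfs, with a final listing to verify the owner; the user name tcb comes from the question being answered:

```
# Create the user's HDFS home directory and hand it over to that user.
sudo -u hdfs hadoop fs -mkdir -p /user/tcb
sudo -u hdfs hadoop fs -chown tcb /user/tcb

# Verify the directory exists and is owned by tcb.
hadoop fs -ls /user
```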
						
					
08-29-2017 06:26 PM
As @ssattiraju mentioned, you may use a file with commands, providing it as a command-line parameter. One quick note: if one of the commands fails, the script will stop executing. To avoid that, you may use a simple redirect instead: phoenix-sqlline localhost < file.sql
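A minimal sketch of the redirect approach; file.sql and the statements in it are hypothetical:

```
# With a plain redirect, sqlline keeps executing the remaining statements
# even if one of them fails.
cat > file.sql <<'SQL'
SELECT COUNT(*) FROM SYSTEM.CATALOG;
SELECT * FROM DOES_NOT_EXIST;   -- fails, but the next statement still runs
SELECT CURRENT_DATE() FROM SYSTEM.CATALOG LIMIT 1;
SQL

phoenix-sqlline localhost < file.sql
```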
						
					
08-08-2017 12:52 AM
That's an incorrect approach. You don't need to add the xml files to the jars. As I already mentioned, you need to add the directories where those files are located, not the files themselves. That's how the Java classpath works: it accepts only jars and directories. So if you need a resource on the Java classpath, you either have to put it in a jar file (like you did) OR put its parent directory on the classpath. In SQuirreL this can be done in the Extra Class Path tab of the driver configuration.
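A minimal sketch of the same rule outside SQuirreL, just to show the classpath shape; the main class and JDBC URL are hypothetical, while the jar and config paths are the ones discussed in this thread:

```
# hbase-site.xml and core-site.xml become visible as classpath resources
# because their parent directories (not the files themselves) are on the classpath.
java -cp "/usr/hdp/current/phoenix-client/phoenix-client.jar:/etc/hbase/conf:/etc/hadoop/conf" \
     com.example.MyPhoenixApp "jdbc:phoenix:localhost:2181:/hbase"
```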
						
					
08-04-2017 07:21 PM
Actually, that is supposed to be something like 5 minutes by default. So check whether you have any old snapshots that you no longer need.
						
					
08-04-2017 06:58 PM
3 Kudos
Check whether you have the hbase.master.hfilecleaner.ttl configuration property in hbase-site.xml; it defines the TTL for archived files. The archive directory can keep:
1. old WAL files
2. old region files after compaction
3. files for snapshots
I believe that you have some old snapshots, and that is why the archive directory is so big. Delete the snapshots that are no longer required and those files will be deleted automatically. A cleanup sketch follows below.
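A minimal sketch of how to check the property and clean up old snapshots; the config path is the usual HDP location and the snapshot name is hypothetical:

```
# See whether the cleaner TTL is overridden in hbase-site.xml.
grep -A1 'hbase.master.hfilecleaner.ttl' /etc/hbase/conf/hbase-site.xml

# List snapshots, then delete the ones that are no longer needed;
# their files disappear from the archive directory automatically.
hbase shell <<'EOF'
list_snapshots
delete_snapshot 'old_snapshot_name'
EOF
```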
						
					
07-28-2017 07:33 PM
I'm talking about the config directories. Those are /etc/hbase/conf and /etc/hadoop/conf. Some versions of HDP keep a copy of core-site.xml in the hbase conf dir (you can check that manually). The only jar you need to add to the driver configuration is /usr/hdp/current/phoenix-client/phoenix-client.jar. Don't add anything else.
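A minimal sketch to verify those paths on an HDP node before configuring the driver; only standard ls commands are used:

```
# The two config directories to add, and the single client jar.
ls /etc/hbase/conf /etc/hadoop/conf
ls /etc/hbase/conf/core-site.xml 2>/dev/null   # present only on some HDP versions
ls -l /usr/hdp/current/phoenix-client/phoenix-client.jar
```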
						
					