Member since 10-05-2015
Posts: 105
Kudos Received: 83
Solutions: 25

My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
|  | 1738 | 07-13-2017 09:13 AM |
|  | 2051 | 07-11-2017 10:23 AM |
|  | 1168 | 07-10-2017 10:43 AM |
|  | 4633 | 03-23-2017 10:32 AM |
|  | 4120 | 03-23-2017 10:04 AM |
02-25-2020 10:17 PM

From the stack trace, the hbase:meta region itself is not available, which means no operations can proceed. We need the HBase master and regionserver logs to find out why the hbase:meta region is not getting assigned. Would you mind sharing the logs?
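As a quick sanity check (my own sketch, not part of the original reply), a Java client can ask the master whether every region of hbase:meta is currently assigned; connection settings are taken from hbase-site.xml on the classpath and nothing below is specific to any particular cluster.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CheckMetaAssignment {
    public static void main(String[] args) throws Exception {
        // Reads hbase-site.xml from the classpath for the ZooKeeper quorum etc.
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            // Returns true only when every region of hbase:meta is assigned and reachable.
            boolean metaAvailable = admin.isTableAvailable(TableName.META_TABLE_NAME);
            System.out.println("hbase:meta available: " + metaAvailable);
        }
    }
}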
						
					
07-13-2017 09:13 AM

You can enable it by setting "hbase.replication.bulkload.enabled" to true in hbase-site.xml. For more information, check the release notes of https://issues.apache.org/jira/browse/HBASE-13153.
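A small sketch (my addition, not part of the original answer) that checks whether the flag is visible to a client after editing hbase-site.xml; in a real cluster the property has to be set in hbase-site.xml on the server side as well, not only on the client classpath.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CheckBulkloadReplicationFlag {
    public static void main(String[] args) {
        // Loads hbase-default.xml and hbase-site.xml from the classpath.
        Configuration conf = HBaseConfiguration.create();
        boolean enabled = conf.getBoolean("hbase.replication.bulkload.enabled", false);
        System.out.println("hbase.replication.bulkload.enabled = " + enabled);
    }
}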
						
					
07-11-2017 10:23 AM
2 Kudos

Are you creating the table from Phoenix sqlline or from Hive? When you create the table from a Phoenix client or sqlline, you don't need to provide the PhoenixStorageHandler and table properties.
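For illustration, a minimal sketch of creating a table directly through the Phoenix JDBC driver (sqlline does the same under the hood); the JDBC URL, table name, and columns are made-up examples, not taken from the original thread.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreatePhoenixTableExample {
    public static void main(String[] args) throws Exception {
        // Assumes a ZooKeeper quorum at localhost:2181 (adjust for your cluster).
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            // No PhoenixStorageHandler or Hive table properties are involved on this path.
            stmt.execute("CREATE TABLE IF NOT EXISTS EXAMPLE_TABLE ("
                    + "  ID BIGINT NOT NULL PRIMARY KEY,"
                    + "  NAME VARCHAR)");
            conn.commit();
        }
    }
}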
						
					
07-10-2017 10:43 AM
1 Kudo

When you know the row key, use Get. HBase should automatically convert a Scan with the same start and end key into a Get. With a Get, the block reads in an HFile are positional reads (better for random reads) rather than seek + read (better for scanning).
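A short sketch of such a point lookup with Get; the table, column family, and qualifier names are placeholders I've assumed for the example.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PointLookupExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("mytable"))) {
            // Known row key -> Get, which lets HBase use positional block reads.
            Get get = new Get(Bytes.toBytes("row-key-1"));
            get.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q1"));
            Result result = table.get(get);
            byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q1"));
            System.out.println("value = " + (value == null ? "null" : Bytes.toString(value)));
        }
    }
}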
						
					
06-29-2017 06:02 PM
1 Kudo

You need to create a java.sql.Array by calling conn.createArrayOf with a Long array. For example:

Long[] longArr = new Long[2];
longArr[0] = 25L;
longArr[1] = 36L;
java.sql.Array array = conn.createArrayOf("BIGINT", longArr);
ps.setArray(3, array);
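For context, a self-contained sketch around the snippet above; the connection URL, table, and column names are assumptions for illustration (the hypothetical table has a BIGINT ARRAY column bound to the third parameter of the UPSERT).

import java.sql.Array;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class UpsertArrayExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181")) {
            // Hypothetical table: MY_TABLE(ID BIGINT PRIMARY KEY, NAME VARCHAR, VALS BIGINT ARRAY)
            try (PreparedStatement ps =
                         conn.prepareStatement("UPSERT INTO MY_TABLE (ID, NAME, VALS) VALUES (?, ?, ?)")) {
                ps.setLong(1, 1L);
                ps.setString(2, "example");
                Long[] longArr = {25L, 36L};
                Array array = conn.createArrayOf("BIGINT", longArr);  // java.sql.Array
                ps.setArray(3, array);
                ps.executeUpdate();
            }
            conn.commit();  // Phoenix buffers mutations until commit
        }
    }
}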
 
						
					
03-23-2017 10:54 AM
1 Kudo

Yes, got it. Combining that directly from the raw files is not possible. Instead, create two tables, one for the CDR data and one for the CRM data, and write an MR job or a Java client (depending on the data size) with the following steps:
1) Scan the CDR table and get the Bnumber.
2) Call Get on the CRM table to fetch the corresponding details.
3) Prepare Puts from the Get results on the CRM table, add them to the row fetched from CDR, and write them back to the CDR table (see the sketch below).
Alternatively, you can use Apache Phoenix, whose UPSERT SELECT feature simplifies things:
http://phoenix.apache.org/
http://phoenix.apache.org/language/index.html#upsert_select
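A rough single-client sketch of steps 1-3; the table names, the column family cf, and the Bnumber qualifier are assumptions for illustration, and an MR job would apply the same per-row logic.

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class JoinCdrWithCrm {
    public static void main(String[] args) throws Exception {
        byte[] cf = Bytes.toBytes("cf");
        byte[] bNumberCol = Bytes.toBytes("Bnumber");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table cdr = conn.getTable(TableName.valueOf("cdr"));
             Table crm = conn.getTable(TableName.valueOf("crm"));
             ResultScanner scanner = cdr.getScanner(new Scan())) {
            for (Result cdrRow : scanner) {
                // 1) Read the Bnumber from the CDR row.
                byte[] bNumber = cdrRow.getValue(cf, bNumberCol);
                if (bNumber == null) {
                    continue;
                }
                // 2) Fetch the matching CRM details (CRM row key assumed to be the Bnumber).
                Result crmRow = crm.get(new Get(bNumber));
                if (crmRow.isEmpty()) {
                    continue;
                }
                // 3) Copy the CRM cells onto the CDR row and write it back.
                Put put = new Put(cdrRow.getRow());
                for (Cell cell : crmRow.rawCells()) {
                    put.addColumn(CellUtil.cloneFamily(cell),
                                  CellUtil.cloneQualifier(cell),
                                  CellUtil.cloneValue(cell));
                }
                cdr.put(put);
            }
        }
    }
}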
						
					
03-23-2017 10:32 AM
2 Kudos

You can create a table with a column family of interest and run ImportTsv on both files separately, specifying the Anumber field as HBASE_ROW_KEY; columns with the same Anumber will then be combined into a single row. Is this what you are looking for?

Ex:

HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-VERSION.jar importtsv -Dimporttsv.columns=HBASE_ROW_KEY,cf:age,cf:day_c,cf:month_c,cf:year_c,cf:zip_code,cf:offerType,cf:offer,cf:gender -Dimporttsv.bulk.output=hdfs://storefileoutput datatsv hdfs://inputfile

HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-VERSION.jar importtsv -Dimporttsv.columns=HBASE_ROW_KEY,cf:ContractCode,cf:Aoperator,cf:Bnumber,cf:Boperator,cf:Direction,cf:Type,cf:Month,cf:Category,cf:numberOfCalls,cf:duration,cf:longitude,cf:latitude -Dimporttsv.bulk.output=hdfs://storefileoutput datatsv hdfs://inputfile
						
					
03-23-2017 10:04 AM
2 Kudos

You can increase phoenix.query.threadPoolSize to 256 or 512, depending on the number of machines/cores, and also increase phoenix.query.queueSize to 5000; then add hbase-site.xml to the classpath or export HBASE_CONF_DIR. You can refer to http://phoenix.apache.org/tuning.html for more tuning options.
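A small sketch of passing those two properties to a Phoenix JDBC connection; note they are read when the Phoenix client initializes, so the canonical place remains hbase-site.xml on the client classpath (or HBASE_CONF_DIR), and the URL and values below are just examples.

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class PhoenixClientTuning {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Client-side Phoenix query services settings (values are examples; size to your cluster).
        props.setProperty("phoenix.query.threadPoolSize", "256");
        props.setProperty("phoenix.query.queueSize", "5000");
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181", props)) {
            System.out.println("Connected with tuned client thread pool/queue settings");
        }
    }
}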
						
					
03-23-2017 08:50 AM

@Houssem Alayet Is Anumber common to both files, and is it going to be the row key for the table?
						
					
03-23-2017 07:09 AM

@Ashok Kumar BM What's your HFile block size, is it the default 64 KB? If you are writing all 1 million cells of a row at a time, then it would be better to increase the block size. Are you using any data block encoding techniques, which can improve performance? And have you tried ROW or ROWCOL bloom filters?
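A sketch of where those knobs live in the HBase 1.x admin API; the family name, block size, encoding, and bloom type below are example choices, not recommendations for this particular dataset.

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
import org.apache.hadoop.hbase.regionserver.BloomType;

public class WideRowFamilySettings {
    public static void main(String[] args) {
        HColumnDescriptor cf = new HColumnDescriptor("cf");
        cf.setBlocksize(256 * 1024);                          // default HFile block size is 64 KB
        cf.setDataBlockEncoding(DataBlockEncoding.FAST_DIFF); // example data block encoding
        cf.setBloomFilterType(BloomType.ROWCOL);              // ROW or ROWCOL
        HTableDescriptor table = new HTableDescriptor(TableName.valueOf("wide_table"));
        table.addFamily(cf);
        // Pass 'table' to Admin#createTable(...) or modify the family on an existing table.
        System.out.println(table);
    }
}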
						
					