Member since: 07-18-2017

- 20 Posts
- 5 Kudos Received
- 1 Solution

        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 4608 | 03-15-2018 10:18 PM |
09-30-2018 01:44 AM
1 Kudo
Hi,

You are using the wrong connection string. Use:

jdbc:hive2://localhost:10000/

Thanks,
Bhavesh
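A quick way to sanity-check a HiveServer2 connection string like this is Beeline, the JDBC client that ships with Hive. A minimal sketch (the username and password are placeholders for whatever is valid on your cluster):

```bash
# Connect to HiveServer2 with the JDBC URL from the reply above;
# -n/-p are only needed if HiveServer2 requires authentication.
beeline -u "jdbc:hive2://localhost:10000/" -n hive -p hivepassword

# Once connected, a trivial statement confirms the session works:
#   0: jdbc:hive2://localhost:10000/> SHOW DATABASES;
```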
						
					
08-24-2018 06:12 PM
Hi Suresh Babu,

I tried your create table statement, but it didn't resolve my issue. The result was the same: only the ID column got populated.

Thanks,
Bhavesh
						
					
06-26-2018 12:00 PM
@schhabra

Hi Schhabra,

Thanks for your quick response. Yes, the schema mapping property is enabled on both the client and the server side, and I executed the create table statement exactly as provided in your answer, but it didn't resolve my issue; the Phoenix table is still unable to fetch any columns except ID.

Thanks,
Bhavesh
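For reference, the schema/namespace mapping property discussed here is normally `phoenix.schema.isNamespaceMappingEnabled` in hbase-site.xml, set identically on both the Phoenix client and every HBase server. A minimal sketch per the Apache Phoenix namespace-mapping docs (whether the companion system-tables property is required can vary by Phoenix version, and HBase needs a restart after changing it):

```xml
<!-- hbase-site.xml, on both the Phoenix client and all HBase servers -->
<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>
<property>
  <!-- usually enabled together with the property above -->
  <name>phoenix.schema.mapSystemTablesToNamespace</name>
  <value>true</value>
</property>
```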
						
					
06-22-2018 09:03 AM
Hi All,

We are facing an issue mapping an existing HBase table to an Apache Phoenix table.

We have a table called PROD:MYTAB1 in HBase with some records in it:

```
hbase(main):008:0> scan 'PROD:MYTAB1'
ROW    COLUMN+CELL
 1     column=CF1:DEPT, timestamp=1516286699016, value=Information Technology
 1     column=CF1:NAME, timestamp=1516286698951, value=Bhavesh Vadaliya
 1     column=CF1:_0, timestamp=1516286700481, value=
 1     column=CF2:DESIGNATION, timestamp=1516286700481, value=System Admin
 1     column=CF2:SALARY, timestamp=1516286699070, value=1000
```

We mapped the HBase table PROD:MYTAB1 to an Apache Phoenix table using the command below at the Apache Phoenix prompt:

```sql
CREATE TABLE 'PROD'.'MYTAB1' (ID VARCHAR PRIMARY KEY, CF1.NAME VARCHAR, CF1.DEPT VARCHAR, CF2.SALARY VARCHAR, CF2.DESIGNATION VARCHAR);
```

The table was created successfully, but when we ran a select query it displayed only the ID field; all the remaining columns were empty.

Here is the screenshot:

Could you please help me identify the issue? Am I missing something in mapping the table with the namespace and schema?

Regards,
Bhavesh
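One pitfall worth noting with the statement above, if the single quotes are literal rather than an artifact of the forum formatting: in Phoenix SQL, single quotes delimit string literals, while double quotes delimit case-sensitive identifiers (unquoted names are upper-cased). A mapping over an existing HBase table is therefore usually written with double-quoted names so they match the HBase namespace, table, and column-qualifier names exactly. A hedged sketch of that form, not a confirmed fix for this thread (types kept as VARCHAR, as in the original statement, and namespace mapping assumed enabled):

```sql
-- Run at the Phoenix (sqlline) prompt; double quotes preserve the exact
-- HBase namespace, table, and column-qualifier names.
CREATE TABLE "PROD"."MYTAB1" (
    "ID"                VARCHAR PRIMARY KEY,
    "CF1"."NAME"        VARCHAR,
    "CF1"."DEPT"        VARCHAR,
    "CF2"."SALARY"      VARCHAR,
    "CF2"."DESIGNATION" VARCHAR
);
```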
						
					
Labels:
- Apache HBase
- Apache Phoenix
04-08-2018 11:42 PM
							Thanks Nick for the update.
						
					
03-15-2018 10:38 PM
Hi All,

We are running Solr as SolrCloud on 4 Solr server nodes, and most people have access to the Admin Console.

Our Solr developers are supplying the DB password in plain text in the data-config.xml file, which is accessible through the Admin Console, so anybody can see it.

Could you please let me know how we can hide/encrypt the password, or store it in a file somewhere and reference it from data-config.xml?

I would appreciate your help in resolving this issue.

Thanks,
Bhavesh
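One approach worth looking at: the DataImportHandler can read an encrypted password together with an `encryptKeyFile` attribute on the dataSource, per the Solr Reference Guide's steps for encrypting a database password. A minimal sketch, assuming a Solr 5+ DIH setup (all paths and credentials below are placeholders, and the exact openssl flags can vary by Solr version):

```bash
# 1) Store the encryption key in a file readable only by the solr user.
echo "mySecretEncryptionKey" > /var/solr/enc.key
chmod 600 /var/solr/enc.key

# 2) Encrypt the real DB password with that key; the base64 output is
#    what goes into data-config.xml instead of the clear-text password.
echo -n "dbPassword" | openssl enc -aes-128-cbc -a -salt -pass file:/var/solr/enc.key
```

Then data-config.xml references the key file rather than exposing the password:

```xml
<dataSource driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://dbhost:3306/mydb"
            user="solr_user"
            password="U2FsdGVkX1...base64-output-of-step-2..."
            encryptKeyFile="/var/solr/enc.key"/>
```

This keeps the clear-text password out of the file shown in the Admin Console, though the key file itself still needs filesystem-level protection.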
						
					
Labels:
- Apache Solr
03-15-2018 10:18 PM
I resolved this issue long back. I had to copy the hbase-client jars into MapReduce's lib directory, which resolved it.

Thanks,
Bhavesh
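A sketch of what that kind of fix typically looks like on a CDH parcel layout (the paths and glob below are assumptions; adjust them to your installation, repeat on every node that runs MapReduce tasks, and restart the affected services afterwards):

```bash
# Copy the cluster's hbase-client jars into the MapReduce lib directory
# so MapReduce tasks load the same HBase client version as the cluster.
cp /opt/cloudera/parcels/CDH/lib/hbase/hbase-client*.jar \
   /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/lib/
```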
						
					
01-28-2018 11:47 PM
Hi RobertM,

Thank you very much for looking into this issue.

Yes, all of those jars are available in the PROD cluster. It is not a different cluster; I am trying to import/copy within the same cluster, into a different table that has the same structure (the same column families, fields, etc.).

I have tried with both types of tables: 1) a table mapped with Phoenix, and 2) a table not mapped with Phoenix. The result is the same: export works fine, but import/CopyTable throws errors.

Thanks,
Bhavesh Vadaliya
						
					
01-22-2018 06:30 AM
Hi All,

This is my first post here. I need help importing or copying a table in HBase.

We have a table called EMPLOYEE in the default namespace, and I want to copy or import data from the EMPLOYEE table into the PROD:TEST_EMPLOYEE table. I tried the commands below, but they failed with the errors shown.

We are using CDH 5.8.0, and the PROD:TEST_EMPLOYEE table was created using Apache Phoenix.

```bash
sudo -u hbase hbase -Dhbase.import.version=1.2 org.apache.hadoop.hbase.mapreduce.Import PROD:TEST_EMPLOYEE /user/hbase/EMPLOYEE
```

Import error:

```
18/01/22 19:43:47 INFO mapreduce.Job: Running job: job_1516623205824_0008
18/01/22 19:43:56 INFO mapreduce.Job: Job job_1516623205824_0008 running in uber mode : false
18/01/22 19:43:56 INFO mapreduce.Job:  map 0% reduce 0%
18/01/22 19:44:03 INFO mapreduce.Job: Task Id : attempt_1516623205824_0008_m_000000_0, Status : FAILED
Error: org.apache.hadoop.hbase.client.Put.setClusterIds(Ljava/util/List;)V
18/01/22 19:44:09 INFO mapreduce.Job: Task Id : attempt_1516623205824_0008_m_000000_1, Status : FAILED
Error: org.apache.hadoop.hbase.client.Put.setClusterIds(Ljava/util/List;)V
18/01/22 19:44:16 INFO mapreduce.Job: Task Id : attempt_1516623205824_0008_m_000000_2, Status : FAILED
Error: org.apache.hadoop.hbase.client.Put.setClusterIds(Ljava/util/List;)V
18/01/22 19:44:32 INFO mapreduce.Job:  map 100% reduce 0%
18/01/22 19:44:32 INFO mapreduce.Job: Job job_1516623205824_0008 failed with state FAILED due to: Task failed task_1516623205824_0008_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
```

CopyTable error:

```
18/01/22 19:54:24 INFO mapreduce.Job: Job job_1516623205824_0009 running in uber mode : false
18/01/22 19:54:24 INFO mapreduce.Job:  map 0% reduce 0%
18/01/22 19:54:35 INFO mapreduce.Job: Task Id : attempt_1516623205824_0009_m_000000_0, Status : FAILED
Error: org.apache.hadoop.hbase.client.Put.setClusterIds(Ljava/util/List;)V
18/01/22 19:54:46 INFO mapreduce.Job: Task Id : attempt_1516623205824_0009_m_000000_1, Status : FAILED
Error: org.apache.hadoop.hbase.client.Put.setClusterIds(Ljava/util/List;)V
18/01/22 19:54:56 INFO mapreduce.Job: Task Id : attempt_1516623205824_0009_m_000000_2, Status : FAILED
Error: org.apache.hadoop.hbase.client.Put.setClusterIds(Ljava/util/List;)V
18/01/22 19:55:04 INFO mapreduce.Job:  map 100% reduce 0%
18/01/22 19:55:04 INFO mapreduce.Job: Job job_1516623205824_0009 failed with state FAILED due to: Task failed task_1516623205824_0009_m_000000
```

Thanks,
Bhavesh
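A task failure of the form `Error: org.apache.hadoop.hbase.client.Put.setClusterIds(Ljava/util/List;)V` is usually a NoSuchMethodError, i.e. the MapReduce tasks picked up HBase client jars from a different version than the cluster's. A hedged sketch of one common way to pin the task classpath before rerunning the job (not a confirmed fix for this thread; table and path as in the post above):

```bash
# Run as the hbase user with the cluster's own client jars first on the
# job classpath; `hbase mapredcp` prints the client jars HBase
# recommends for MapReduce jobs.
sudo -u hbase sh -c 'HADOOP_CLASSPATH=$(hbase mapredcp) \
  hbase org.apache.hadoop.hbase.mapreduce.Import \
  PROD:TEST_EMPLOYEE /user/hbase/EMPLOYEE'
```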
						
					
Labels:
- Apache HBase
- Apache Phoenix
- MapReduce
08-22-2017 11:02 AM
Hi Team,

I tried to enable the HDFS Ranger plugin, which asked me to restart the NameNode/DataNode/Secondary NameNode, but the NameNode failed to start with the error below. The NameNode works fine if I disable the HDFS Ranger plugin. I appreciate your help in resolving this issue.

I am using Ambari version 2.5.0.3 and HDP stack 2.5.3.0.

```
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 424, in <module>
    NameNode().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 314, in execute
    method(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 762, in restart
    self.start(env, upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 100, in start
    upgrade_suspended=params.upgrade_suspended, env=env)
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 104, in namenode
    setup_ranger_hdfs(upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/setup_ranger_hdfs.py", line 68, in setup_ranger_hdfs
    component_user_keytab=params.nn_keytab if params.security_enabled else None)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/setup_ranger_plugin_xml.py", line 103, in setup_ranger_plugin
    component_user_principal, component_user_keytab)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/ranger_functions_v2.py", line 106, in create_ranger_repository
    response_code = self.check_ranger_login_urllib2(self.base_url)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/decorator.py", line 82, in wrapper
    return function(*args, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/ranger_functions_v2.py", line 208, in check_ranger_login_urllib2
    response = openurl(url, timeout=20)
  File "/usr/lib/python2.6/site-packages/ambari_commons/inet_utils.py", line 41, in openurl
    return urllib2.urlopen(url, timeout=timeout, *args, **kwargs)
  File "/usr/lib64/python2.6/urllib2.py", line 126, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib64/python2.6/urllib2.py", line 383, in open
    protocol = req.get_type()
  File "/usr/lib64/python2.6/urllib2.py", line 244, in get_type
    raise ValueError, "unknown url type: %s" % self.__original
ValueError: unknown url type: {{policymgr_mgr_url}}
```
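The final `ValueError` shows that the Ambari template variable `{{policymgr_mgr_url}}` reached `urlopen` unsubstituted, i.e. the plugin never received a real Ranger Admin URL. A hedged sketch of the configuration usually involved (property names as they appear in the Ambari/Ranger stack configs; the URL is a placeholder):

```properties
# Ambari > Ranger > Configs > Advanced admin-properties
policymgr_external_url=http://ranger-admin.example.com:6080

# Ambari > HDFS > Configs > Advanced ranger-hdfs-security
# Normally left as the template below, which Ambari resolves from the
# admin-properties value above; a literal {{...}} at runtime means the
# substitution never happened (e.g. the URL above is empty or unset).
ranger.plugin.hdfs.policy.rest.url={{policymgr_mgr_url}}
```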
						
					
Labels:
- Apache Hadoop
- Apache Ranger