Member since 07-17-2019

738 Posts · 433 Kudos Received · 111 Solutions

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 3465 | 08-06-2019 07:09 PM |
|  | 3657 | 07-19-2019 01:57 PM |
|  | 5169 | 02-25-2019 04:47 PM |
|  | 4657 | 10-11-2018 02:47 PM |
|  | 1755 | 09-26-2018 02:49 PM |

12-09-2015 08:12 PM · 3 Kudos

It looks like you have the configuration property "hbase.local.dir" set in hbase-site.xml to "/data0/hadoop/hbase/local". The code checks whether this directory exists and, if it does not, creates it. If you are on a Linux system, it is likely that your client does not have permission to write to the root of the filesystem ("/"). The default value for "hbase.local.dir" is "/tmp/hbase-local-dir". You could consider setting this property to a directory within "/tmp", as it should be writable by any user; however, any directory writable by the user running your code should be sufficient.
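For reference, pointing the property back at a user-writable location is a small hbase-site.xml fragment like the following (the "/tmp/hbase-local-dir" value shown is just the documented default; any directory your user can write to works):

```xml
<!-- Point hbase.local.dir at a directory the client user can create and write. -->
<property>
  <name>hbase.local.dir</name>
  <value>/tmp/hbase-local-dir</value>
</property>
```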

12-09-2015 07:22 PM · 1 Kudo

Instead of changing the default ZooKeeper node for HBase, you could (and should) include /etc/hbase/conf/hbase-site.xml on the classpath of your application. This helps the HBase libraries find the correct location in ZooKeeper for your HBase instance.
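As a minimal sketch with a plain `java` launcher (the jar name and main class below are placeholders, not from the original question), putting the configuration directory on the classpath looks like:

```shell
# Putting the directory (not the file) on the classpath lets the HBase client
# libraries load hbase-site.xml as a classpath resource.
java -cp /etc/hbase/conf:my-app.jar com.example.MyHBaseClient
```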

11-06-2015 06:26 PM

							 It should work even without Kerberos. I'm not sure how the Ranger authorization fits into the picture. 

11-03-2015 06:28 PM · 1 Kudo

The sqlline-thin.py script is only relevant when you want to use PQS. The only difference between it and the "normal" sqlline.py script is the JDBC URL. Make sure you invoke the right script for using PQS (thin) or "normal" Phoenix (thick).
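Concretely, the two invocations differ only in what they point at (hostnames below are placeholders; the thick client takes a ZooKeeper quorum, the thin client a PQS HTTP URL):

```shell
# Thick client: talks to HBase directly, locating it via the ZooKeeper quorum.
./sqlline.py zk-host:2181:/hbase-unsecure

# Thin client: talks to the Phoenix Query Server over HTTP.
./sqlline-thin.py http://pqs-host:8765
```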

11-03-2015 05:31 AM · 11 Kudos

The Apache Phoenix QueryServer (PQS) is a relatively new (early 2015) component. Previously, users of Phoenix had at least two things in common: the client was written in Java, and the Phoenix client libraries were included in that client. As Phoenix continues to grow as a dominant player in the "Hadoop OLTP" space, the need to eliminate these two burdens became apparent. PQS was the solution chosen to address them.

PQS specifically refers to an HTTP server which interacts with a custom JDBC driver that translates JDBC API calls into HTTP requests. These components are provided by an upstream Apache project, Apache Calcite, specifically the Calcite component referred to as Avatica. Originally, the HTTP requests were JSON; recent work (mid-2015) has shown that serialization using Protocol Buffers is also feasible.

The excitement behind PQS is two-fold, as alluded to. Java-based clients now have two means of running queries: a typical JDBC client with all of the necessary dependencies and computation local (the phoenix-client jar taking tens of megabytes of space), or the new "thin" client, which offloads the majority of computation to PQS and has a very small deployment footprint (little more than an HTTP client). The bigger excitement is the RPC API implicitly exposed by PQS: with a stable RPC API, clients in any language can implement their own thin client to PQS, enabling "native" access to Phoenix.

It is important to realize that PQS is still very much a work in progress. While efforts are ongoing to make PQS a fully featured, secure, well-performing, and stable system, there has not yet been enough work to guarantee any of these (as of 2015, likely into 2016); however, visible strides are being made in these areas.
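As a sketch of the client-side difference described above (hostnames and ports are placeholders; 8765 is the usual PQS port), the two JDBC URL shapes look like:

```
# Thick driver: embeds the Phoenix client; connects via the ZooKeeper quorum.
jdbc:phoenix:zk-host:2181:/hbase

# Thin driver: little more than an HTTP client; connects to PQS.
jdbc:phoenix:thin:url=http://pqs-host:8765
```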

10-23-2015 03:57 PM

This is what the JSON looks like when I'm running the query locally with sqlline. It is also a bleeding-edge build of Calcite, so there are some differences from what is in HDP 2.3 presently (e.g., executeResults instead of Service$ExecuteResponse). The only difference I see is that you were missing a statementId in the prepareAndExecute request for the upsert.

The create table:

$ curl -XPOST -H 'request: {"request":"createStatement","connectionId":"b27cbc83-a514-49f0-9bbe-f15d8bfb3532"}' http://localhost:8765
{"response":"createStatement","connectionId":"b27cbc83-a514-49f0-9bbe-f15d8bfb3532","statementId":22}
$ curl -XPOST -H 'request: {"request":"prepareAndExecute","connectionId":"b27cbc83-a514-49f0-9bbe-f15d8bfb3532","statementId":22,"sql":"CREATE TABLE teste(id bigint not null, text varchar, constraint pk primary key (id))","maxRowCount":-1}' http://localhost:8765
{"response":"executeResults","results":[{"response":"resultSet","connectionId":"b27cbc83-a514-49f0-9bbe-f15d8bfb3532","statementId":22,"ownStatement":false,"signature":null,"firstFrame":null,"updateCount":0}]}

The upsert:

$ curl -XPOST -H 'request: {"request":"createStatement","connectionId":"b27cbc83-a514-49f0-9bbe-f15d8bfb3532"}' http://localhost:8765
{"response":"createStatement","connectionId":"b27cbc83-a514-49f0-9bbe-f15d8bfb3532","statementId":23}
$ curl -XPOST -H 'request: {"request":"prepareAndExecute","connectionId":"b27cbc83-a514-49f0-9bbe-f15d8bfb3532","statementId":23,"sql":"upsert into teste (id) values (10)","maxRowCount":-1}' http://localhost:8765
{"response":"executeResults","results":[{"response":"resultSet","connectionId":"b27cbc83-a514-49f0-9bbe-f15d8bfb3532","statementId":23,"ownStatement":false,"signature":null,"firstFrame":null,"updateCount":1}]}

The select:

$ curl -XPOST -H 'request: {"request":"createStatement","connectionId":"b27cbc83-a514-49f0-9bbe-f15d8bfb3532"}' http://localhost:8765
{"response":"createStatement","connectionId":"b27cbc83-a514-49f0-9bbe-f15d8bfb3532","statementId":24}
$ curl -XPOST -H 'request: {"request":"prepareAndExecute","connectionId":"b27cbc83-a514-49f0-9bbe-f15d8bfb3532","statementId":24,"sql":"select * from teste","maxRowCount":-1}' http://localhost:8765
{"response":"executeResults","results":[{"response":"resultSet","connectionId":"b27cbc83-a514-49f0-9bbe-f15d8bfb3532","statementId":24,"ownStatement":true,"signature":{"columns":[{"ordinal":0,"autoIncrement":false,"caseSensitive":false,"searchable":true,"currency":false,"nullable":0,"signed":true,"displaySize":40,"label":"ID","columnName":"ID","schemaName":"","precision":0,"scale":0,"tableName":"TESTE","catalogName":"","type":{"type":"scalar","id":-5,"name":"BIGINT","rep":"PRIMITIVE_LONG"},"readOnly":true,"writable":false,"definitelyWritable":false,"columnClassName":"java.lang.Long"},{"ordinal":1,"autoIncrement":false,"caseSensitive":false,"searchable":true,"currency":false,"nullable":1,"signed":false,"displaySize":40,"label":"TEXT","columnName":"TEXT","schemaName":"","precision":0,"scale":0,"tableName":"TESTE","catalogName":"","type":{"type":"scalar","id":12,"name":"VARCHAR","rep":"STRING"},"readOnly":true,"writable":false,"definitelyWritable":false,"columnClassName":"java.lang.String"}],"sql":null,"parameters":[],"cursorFactory":{"style":"LIST","clazz":null,"fieldNames":null},"statementType":null},"firstFrame":{"offset":0,"done":true,"rows":[]},"updateCount":-1}]}

10-23-2015 03:17 PM

							 Thanks, @Guilherme Braccialli, I'll try to get that patched on the Phoenix site. 

10-16-2015 04:34 AM · 4 Kudos

For the evolving documentation, see Calcite's documentation pages on Avatica (the "Avatica Overview"); Avatica is the technology powering PQS. As (hopefully) outlined in any presentation you saw, there is heavy work going into stabilizing PQS presently. There should be some noticeable results of this in the 2.3 maintenance line.

PQS is managed by Ambari in the HDP 2.3 stack; it's located with the rest of the HBase controls. If you didn't install it initially, I'd guess you would use the Add Service workflow to install PQS.

10-07-2015 08:35 PM

Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.protobuf.ProtobufUtil

It looks like you don't have the correct JARs on the classpath. It's hard to say why you might be missing that class without any product version information or the classpath/command for Storm.

09-29-2015 03:27 PM

							 Ambari does push towards installing PQS on each region server. The only "security consideration" I see is a single port opened to clients on each node, so I wouldn't worry too much about this presently.  Some concrete results on stateless queries behind a generic load balancer should be landing within the year (2015). 