Member since: 09-25-2015
Posts: 230
Kudos Received: 276
Solutions: 39
My Accepted Solutions
Views | Posted
---|---
25106 | 07-05-2016 01:19 PM
8470 | 04-01-2016 02:16 PM
2126 | 02-17-2016 11:54 AM
5668 | 02-17-2016 11:50 AM
12687 | 02-16-2016 02:08 AM
10-30-2015
11:57 PM
@khorvath@hortonworks.com For those using the hosted version at https://cloudbreak.sequenceiq.com, I think the only option is a time-based alert, right? Would you have a simple example of a "cron expression" that executes only once and adds a few nodes to the cluster? I was showing Cloudbreak to a partner and this was their first question: "How do we add more nodes?"
10-30-2015
11:56 PM
2 Kudos
With help from @Josh Elser: for HDP 2.3.2 with Phoenix enabled in Ambari, you need to add the Phoenix jar to the Hive aux lib to make it work, then restart the Hive CLI / HiveServer2:
mkdir /usr/hdp/current/hive-server2/auxlib/
cp /usr/hdp/2.3.2.0-2950/phoenix/phoenix-client.jar /usr/hdp/current/hive-server2/auxlib/
Without this jar, you will get the error below, as described by @Randy Gelhausen:
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory
UPDATE: interestingly, with sandbox 2.3.2 only phoenix-server.jar works; phoenix-client.jar does not.
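If you hit the ClassNotFoundException, it helps to check which Phoenix jar actually bundles the missing class before copying it into auxlib. A small Python sketch (the helper name and the commented-out paths are mine, not from the post; a jar is just a zip archive):

```python
import zipfile

def jar_contains_class(jar_path, class_name):
    """Return True if the jar (a zip archive) bundles the given class."""
    entry = class_name.replace(".", "/") + ".class"
    with zipfile.ZipFile(jar_path) as jar:
        return entry in jar.namelist()

# Example (hypothetical paths, as on the HDP 2.3.2 sandbox):
# jar_contains_class(
#     "/usr/hdp/2.3.2.0-2950/phoenix/phoenix-client.jar",
#     "org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory")
```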
10-30-2015
04:25 AM
@rmolina@hortonworks.com Same here on sandbox 2.3.2. It's easy to reproduce; it happens to all applications created by spark-shell, spark-submit, or Zeppelin.
10-30-2015
04:18 AM
1 Kudo
I put the jars into /usr/hdp/current/hive-client/auxlib (on each HiveServer2 / Hive CLI node). Other solutions might work, but in my experience, when one worked for HiveServer2 it did not work for the Hive CLI, or it was good for Hue and bad for SQuirreL, so I gave up and started using auxlib.
10-26-2015
11:09 PM
4 Kudos
Sharing source code I wrote for a prospect that wants to try Phoenix + .NET. It's a simple program that POSTs the query to the Phoenix Query Server, parses the result JSON, and prints the results to the console:

using System;
using System.Net;
using System.Collections.Generic;
using Newtonsoft.Json;
using System.Text;

namespace ConsoleApplication1
{
    class PhoenixSample
    {
        public class FirstFrame
        {
            public String offset;
            public List<List<String>> rows;
        }

        public class Result
        {
            public FirstFrame firstFrame;
            public int updateCount;
        }

        public class PhoenixResult
        {
            public string response { get; set; }
            public List<Result> results { get; set; }
        }

        static void Main(string[] args)
        {
            try
            {
                string url = "http://192.168.56.203:8765";
                string query = "select * from test";
                var syncClient = new WebClient();
                // The query server expects the Avatica request JSON in the "request" header
                syncClient.Headers["request"] = "{\"request\":\"prepareAndExecute\",\"connectionId\":\"b27cbc83-a514-49f0-9bbe-f15d8bfb3532\",\"sql\":\"" + query + "\",\"maxRowCount\":-1}";
                syncClient.Headers["Content-Type"] = "application/json";
                byte[] responseArray = syncClient.UploadData(url, "POST", Encoding.ASCII.GetBytes(""));
                string json = Encoding.ASCII.GetString(responseArray);
                System.Diagnostics.Debug.WriteLine("json=" + json);
                PhoenixResult test = JsonConvert.DeserializeObject<PhoenixResult>(json);
                System.Diagnostics.Debug.WriteLine(test.response);
                System.Diagnostics.Debug.WriteLine(test.results[0].updateCount);
                for (int i = 0; i < test.results[0].firstFrame.rows.Count; i++)
                {
                    for (int j = 0; j < test.results[0].firstFrame.rows[i].Count; j++)
                    {
                        System.Diagnostics.Debug.Write("row # " + i + " col # " + j + " = ");
                        System.Diagnostics.Debug.WriteLine(test.results[0].firstFrame.rows[i][j]);
                    }
                }
            }
            catch (Exception e)
            {
                Console.WriteLine(e.Message);
                Console.Read();
            }
        }
    }
}
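For comparison, the same response parsing can be sketched in Python. This is my own sketch, not part of the original post; the sample payload below is hand-trimmed to the shape the C# classes above deserialize, not a capture from a real server:

```python
import json

def rows_from_execute_response(payload):
    """Extract the data rows from an ExecuteResponse JSON payload.

    Mirrors the C# program above: results[0].firstFrame.rows holds the
    data for a select; for an upsert, firstFrame is null and only
    updateCount is meaningful.
    """
    result = json.loads(payload)["results"][0]
    frame = result.get("firstFrame")
    return frame["rows"] if frame else []

# Trimmed sample payload (same shape as the C# classes above):
SAMPLE = ('{"response":"Service$ExecuteResponse","results":[{"firstFrame":'
          '{"offset":0,"done":true,"rows":[[1,"guilherme"],[2,"isabela"]]},'
          '"updateCount":-1}]}')
```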
10-24-2015
01:17 AM
@Josh Elser statementId only exists in the new version of Phoenix. I built it locally, replaced the Phoenix jars, and tried the commands you sent, but the same issue happened: new rows were not inserted/updated. I ended up finding the solution for both versions (2.3.2 and the new Phoenix version): we just need to add phoenix.connection.autoCommit=true to hbase-site.xml (it can be set via Ambari as well). Is the Phoenix server missing the commit?
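For reference, the same setting expressed as an hbase-site.xml property block (the value can equally be set through the Ambari HBase config screen):

```xml
<property>
  <name>phoenix.connection.autoCommit</name>
  <value>true</value>
</property>
```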
10-23-2015
02:25 PM
2 Kudos
Hi, I'm testing the Phoenix Query Server, working with a prospect that wants to use Phoenix + .NET. I'm not able to get an upsert to succeed through the Phoenix Query Server. The curl response seems to be OK, but the new data WAS NOT COMMITTED. It might be related to these two JIRAs: https://issues.apache.org/jira/browse/PHOENIX-2320 https://issues.apache.org/jira/browse/PHOENIX-234 My create table statement is:
create table teste(
id bigint not null,
text varchar,
constraint pk primary key (id)
) ;
Here is a select statement that works through the Phoenix Query Server: [root@hdp23 ~]# curl -XPOST -H 'request: {"request":"prepareAndExecute","connectionId":"aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa","sql":"select * from teste","maxRowCount":-1}' http://localhost:8765/
{"response":"Service$ExecuteResponse","results":[{"response":"resultSet","connectionId":"aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa","statementId":1955331455,"ownStatement":true,"signature":{"columns":[{"ordinal":0,"autoIncrement":false,"caseSensitive":false,"searchable":true,"currency":false,"nullable":0,"signed":true,"displaySize":40,"label":"ID","columnName":"ID","schemaName":"","precision":0,"scale":0,"tableName":"TESTE","catalogName":"","type":{"type":"scalar","id":-5,"name":"BIGINT","rep":"PRIMITIVE_LONG"},"readOnly":true,"writable":false,"definitelyWritable":false,"columnClassName":"java.lang.Long"},{"ordinal":1,"autoIncrement":false,"caseSensitive":false,"searchable":true,"currency":false,"nullable":1,"signed":false,"displaySize":40,"label":"TEXT","columnName":"TEXT","schemaName":"","precision":0,"scale":0,"tableName":"TESTE","catalogName":"","type":{"type":"scalar","id":12,"name":"VARCHAR","rep":"STRING"},"readOnly":true,"writable":false,"definitelyWritable":false,"columnClassName":"java.lang.String"}],"sql":null,"parameters":[],"cursorFactory":{"style":"LIST","clazz":null,"fieldNames":null}},"firstFrame":{"offset":0,"done":true,"rows":[[1,"guilherme"],[2,"isabela"],[3,"rogerio"],[4,null]]},"updateCount":-1}]}
And here is the upsert statement; it says it worked ("updateCount":1): [root@hdp23 log]# curl -XPOST -H 'request: {"request":"prepareAndExecute","connectionId":"aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa","sql":"upsert into teste (id) values (10)","maxRowCount":-1}' http://localhost:8765/
{"response":"Service$ExecuteResponse","results":[{"response":"resultSet","connectionId":"aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa","statementId":1069768164,"ownStatement":false,"signature":null,"firstFrame":null,"updateCount":1}]}
If I select from table "teste", the new row is not there. It was not committed to the Phoenix table. Additionally, /var/log/hbase/phoenix-hbase-server.log does not show any message after the command above. Does anyone have an idea what is going on and/or how to debug? Thanks. Guilherme
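Hand-writing the 'request' header JSON for each curl call is error-prone (e.g. quoting the SQL). A small Python sketch that builds it instead; the function name is mine, and it only constructs the string the curl examples above pass via -H:

```python
import json

def pqs_request_header(sql, connection_id, max_row_count=-1):
    """Build the JSON string sent in the 'request' HTTP header,
    matching the shape of the curl examples above."""
    return json.dumps({
        "request": "prepareAndExecute",
        "connectionId": connection_id,
        "sql": sql,
        "maxRowCount": max_row_count,
    })
```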
Labels:
- Apache HBase
- Apache Phoenix
10-23-2015
02:20 PM
@Josh Elser please notice that the link to Avatica on http://phoenix.apache.org/server.html is broken (404). Thanks.
10-23-2015
01:16 AM
2 Kudos
@vperiasamy@hortonworks.com Can you explain what "Ranger Admin user's password for Ambari" is? I thought it was the password for the admin user on Ambari, but when I changed it to the new Ambari admin password, it broke the HDFS and Hive services as well. I updated only "admin_password" with the new Ranger admin password, as in Neeraj's screenshot, and it worked. Thank you.
10-22-2015
05:21 PM
@abajwa@hortonworks.com I tried collect-stream-logs, but I got the error below when trying to use the imported template. Do you know which version of NiFi it's supposed to work with?