Member since: 09-24-2015
Posts: 527
Kudos Received: 136
Solutions: 19
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2849 | 06-30-2017 03:15 PM
 | 4245 | 10-14-2016 10:08 AM
 | 9498 | 09-07-2016 06:04 AM
 | 11538 | 08-26-2016 11:27 AM
 | 1883 | 08-23-2016 02:09 PM
06-29-2016
09:17 PM
Hi: I created this table in HBase with Phoenix, with 4 columns in the row key:
CREATE TABLE IF NOT EXISTS journey_oficina_hbase(
FECHAOPRCNF VARCHAR not null,
CODNRBEENF VARCHAR not null,
CODINTERNO VARCHAR not null,
CODTXF VARCHAR not null,
FREQ INTEGER,
IMPORTE DOUBLE,
CONSTRAINT pk PRIMARY KEY (FECHAOPRCNF, CODNRBEENF,CODINTERNO, CODTXF) );
For this query, is it better to put FECHAOPRCNF first (it is filtered by a range) or CODNRBEENF (a string filtered by equality)? The query is built as a string in R Shiny:
"SELECT R.fechaoprcnf as fechaoprcnf, R.codnrbeenf, R.codinterno as codinterno, sum(R.freq) as freq FROM (SELECT fechaoprcnf, codnrbeenf, freq, codinterno, codtxf FROM journey_oficina_hbase
WHERE codnrbeenf in ('3008') and codinterno in ('')
and (fechaoprcnf >= '", input$fecha[1],
" 00:00:00' and fechaoprcnf <= '", input$fecha[2], " 23:00:00')) R GROUP BY R.fechaoprcnf, R.codnrbeenf, R.codinterno";
thanks
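For reference, a sketch of the alternative key order (an editorial addition, assuming the usual Phoenix/HBase rule that only a leading prefix of the row key can bound a scan, so equality-filtered columns should come before the range-filtered one):

-- Hypothetical alternative DDL, not from the thread: the equality-filtered
-- columns (CODNRBEENF, CODINTERNO) lead, the range-filtered date column
-- follows, and the unfiltered CODTXF goes last, so the WHERE clause above
-- can become a tight row-key prefix scan.
CREATE TABLE IF NOT EXISTS journey_oficina_hbase(
CODNRBEENF VARCHAR not null,
CODINTERNO VARCHAR not null,
FECHAOPRCNF VARCHAR not null,
CODTXF VARCHAR not null,
FREQ INTEGER,
IMPORTE DOUBLE,
CONSTRAINT pk PRIMARY KEY (CODNRBEENF, CODINTERNO, FECHAOPRCNF, CODTXF) );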
Labels:
- Apache HBase
- Apache Phoenix
06-22-2016
08:58 PM
1 Kudo
raw_data = LOAD '/tmp/JOURNEY_OFICINA_HBASE.csv' USING PigStorage(',') AS (entidad:chararray, fecha:chararray, oficina:chararray, operacion:chararray, freq:long);
STORE raw_data INTO 'hbase://JOURNEY_OFICINA_HBASE' USING org.apache.phoenix.pig.PhoenixHBaseStorage('lnxbig05','-batchSize 5000');
Many thanks!!
06-22-2016
07:54 PM
The syntax was wrong, so I changed it as follows, but I still get the same error:
raw_data = LOAD '/tmp/JOURNEY_OFICINA_HBASE.csv' USING PigStorage(',') AS (CODNRBEENF:chararray, FECHAOPRCNF:chararray, CODINTERNO:chararray, CODTXF:chararray, FREQ:long);
TaskAttempt 3 failed, info=[Error: Failure while running task:java.lang.RuntimeException: Unable to process column CHAR:"CODNRBEENF", innerMessage=Unknown type java.util.HashMap passed to PhoenixHBaseStorage
at org.apache.phoenix.pig.writable.PhoenixPigDBWritable.write(PhoenixPigDBWritable.java:66)
at org.apache.phoenix.mapreduce.PhoenixRecordWriter.write(PhoenixRecordWriter.java:78)
at org.apache.phoenix.mapreduce.PhoenixRecordWriter.write(PhoenixRecordWriter.java:39)
at org.apache.phoenix.pig.PhoenixHBaseStorage.putNext(PhoenixHBaseStorage.java:184)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:136)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:95)
at org.apache.tez.mapreduce.output.MROutput$1.write(MROutput.java:503)
at org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POStoreTez.getNextTuple(POStoreTez.java:125)
at org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.runPipeline(PigProcessor.java:332)
at org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.run(PigProcessor.java:197)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:344)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:181)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:172)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:172)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:168)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Unknown type java.util.HashMap passed to PhoenixHBaseStorage
at org.apache.phoenix.pig.util.TypeUtil.getType(TypeUtil.java:158)
at org.apache.phoenix.pig.util.TypeUtil.castPigTypeToPhoenix(TypeUtil.java:177)
at org.apache.phoenix.pig.writable.PhoenixPigDBWritable.convertTypeSpecificValue(PhoenixPigDBWritable.java:79)
at org.apache.phoenix.pig.writable.PhoenixPigDBWritable.write(PhoenixPigDBWritable.java:59)
... 22 more
06-22-2016
07:37 PM
And my output data is this:
(3008,2016-06-01 11:00:00,0161,GCA10CON,1)
(3008,2016-06-01 11:00:00,0161,GIN02OOU,14)
(3008,2016-06-01 11:00:00,0161,IBC06MOU,3)
(3008,2016-06-01 11:00:00,0161,RGE62COU,1)
(3008,2016-06-01 11:00:00,0161,STS06CON,6)
(3008,2016-06-01 11:00:00,0161,VPR28COU,2)
(3008,2016-06-01 11:00:00,0162,GAE05COU,1)
(3008,2016-06-01 11:00:00,0162,PGEA8COU,3)
(3008,2016-06-01 11:00:00,0163,DVI41OOU,5)
(3008,2016-06-01 11:00:00,0163,GAC11COU,10)
(3008,2016-06-01 11:00:00,0163,GAC67COU,22)
06-22-2016
07:26 PM
I changed it like this and got the same error:
CREATE TABLE IF NOT EXISTS journey_oficina_hbase(
CODNRBEENF VARCHAR not null,
FECHAOPRCNF VARCHAR not null,
CODINTERNO VARCHAR,
CODTXF VARCHAR,
FREQ BIGINT,
CONSTRAINT pk PRIMARY KEY (CODNRBEENF,FECHAOPRCNF) );
06-22-2016
07:20 PM
Hi, the Phoenix schema:
CREATE TABLE IF NOT EXISTS journey_oficina_hbase(
CODNRBEENF CHAR(4) not null,
FECHAOPRCNF CHAR(21) not null,
CODINTERNO CHAR(4),
CODTXF CHAR(8),
FREQ BIGINT,
CONSTRAINT pk PRIMARY KEY (CODNRBEENF,FECHAOPRCNF) );
The Pig script:
D = FOREACH B GENERATE (chararray) SUBSTRING($0, 0, 13) as fecha, (chararray) $1 as CODNRBE, (chararray) $2 as CODINTERNO, (chararray) $3 as CODTX;
D = FILTER D BY ($1 != '');
F = GROUP D BY (CONCAT(fecha,':00:00'),CODNRBE,CODINTERNO,CODTX);
G = FOREACH F GENERATE
(CHARARRAY) group.$1 as entidad,
(CHARARRAY) group.$0 as fecha,
(CHARARRAY) group.$2 as oficina,
(CHARARRAY) group.$3 as operacion,
(long) COUNT(D) as freq;
thanks
06-22-2016
07:14 PM
Hi: from Pig I can't insert into the Phoenix HBase table:
diagnostics=[Task failed, taskId=task_1464163049638_1419_1_00_000053, diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running task:java.lang.RuntimeException: Unable to process column CHAR:"CODNRBEENF", innerMessage=Unknown type java.util.HashMap passed to PhoenixHBaseStorage
at org.apache.phoenix.pig.writable.PhoenixPigDBWritable.write(PhoenixPigDBWritable.java:66)
at org.apache.phoenix.mapreduce.PhoenixRecordWriter.write(PhoenixRecordWriter.java:78)
at org.apache.phoenix.mapreduce.PhoenixRecordWriter.write(PhoenixRecordWriter.java:39)
at org.apache.phoenix.pig.PhoenixHBaseStorage.putNext(PhoenixHBaseStorage.java:184)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:136)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:95)
at org.apache.tez.mapreduce.output.MROutput$1.write(MROutput.java:503)
at org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POStoreTez.getNextTuple(POStoreTez.java:125)
at org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.runPipeline(PigProcessor.java:332)
at org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.run(PigProcessor.java:197)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:344)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:181)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:172)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:172)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:168)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Unknown type java.util.HashMap passed to PhoenixHBaseStorage
at org.apache.phoenix.pig.util.TypeUtil.getType(TypeUtil.java:158)
at org.apache.phoenix.pig.util.TypeUtil.castPigTypeToPhoenix(TypeUtil.java:177)
at org.apache.phoenix.pig.writable.PhoenixPigDBWritable.convertTypeSpecificValue(PhoenixPigDBWritable.java:79)
at org.apache.phoenix.pig.writable.PhoenixPigDBWritable.write(PhoenixPigDBWritable.java:59)
... 22 more
Labels:
- Apache HBase
- Apache Phoenix
- Apache Pig
06-22-2016
06:54 PM
Hi: from sqlline it works like this:
0: jdbc:phoenix:zzzzzzz> UPSERT INTO JOURNEY_OFICINA_HBASE VALUES('0198','2016-06-01 00:00:00','8002','DVI96COU',8);
But not from the jar. Also, the table I created has a composite row key:
CREATE TABLE IF NOT EXISTS journey_oficina_hbase(
CODNRBEENF CHAR(4) not null,
FECHAOPRCNF CHAR(21) not null,
CODINTERNO CHAR(4),
CODTXF CHAR(8),
FREQ BIGINT,
CONSTRAINT pk PRIMARY KEY (CODNRBEENF,FECHAOPRCNF) );
But now, how can I get the row with this row key? The first 4 digits are the office and the rest is the date: 01982016-06-01 00:00:00
thanks
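A sketch of the point lookup (an editorial addition, assuming standard Phoenix behavior: a composite row key is exposed as separate primary-key columns, so you filter on each part rather than on the concatenated value):

-- Fetch one row by supplying both primary-key columns; Phoenix
-- concatenates them into the underlying HBase row key internally.
SELECT * FROM JOURNEY_OFICINA_HBASE
WHERE CODNRBEENF = '0198' AND FECHAOPRCNF = '2016-06-01 00:00:00';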
06-22-2016
06:43 PM
Hi, many thanks, it is working now using quorum:2181:/hbase-unsecure, like this:
hadoop jar phoenix-4.4.0.2.4.0.0-169-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool --table journey_oficina_hbase --input /tmp/journey_oficina_hbase.csv -z lnxbig04.cajarural.gcr:2181:/hbase-unsecure
But it doesn't finish; I am still waiting:
16/06/22 20:41:52 INFO mapreduce.Job: map 100% reduce 100%
16/06/22 20:41:52 INFO mapreduce.Job: Job job_1464163049638_1416 completed successfully
16/06/22 20:41:52 INFO mapreduce.Job: Counters: 50
16/06/22 20:41:52 INFO mapreduce.CsvBulkLoadTool: Loading HFiles from /tmp/a08d1fda-6bf5-4b47-8bba-0a7fd0e28e47/JOURNEY_OFICINA_HBASE
16/06/22 20:41:53 WARN mapreduce.LoadIncrementalHFiles: managed connection cannot be used for bulkload. Creating unmanaged connection.
16/06/22 20:41:53 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x311fac48 connecting to ZooKeeper ensemble=lnxbig04.cajarural.gcr:2181
16/06/22 20:41:53 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=lnxbig04.cajarural.gcr:2181 sessionTimeout=90000 watcher=hconnection-0x311fac480x0, quorum=lnxbig04.cajarural.gcr:2181, baseZNode=/hbase-unsecure
16/06/22 20:41:53 INFO zookeeper.ClientCnxn: Opening socket connection to server lnxbig04.cajarural.gcr/10.1.246.18:2181. Will not attempt to authenticate using SASL (unknown error)
16/06/22 20:41:53 INFO zookeeper.ClientCnxn: Socket connection established to lnxbig04.cajarural.gcr/10.1.246.18:2181, initiating session
16/06/22 20:41:53 INFO zookeeper.ClientCnxn: Session establishment complete on server lnxbig04.cajarural.gcr/10.1.246.18:2181, sessionid = 0x154b76f380a00f0, negotiated timeout = 40000
16/06/22 20:41:53 WARN mapreduce.LoadIncrementalHFiles: Skipping non-directory hdfs://lnxbig05.cajarural.gcr:8020/tmp/a08d1fda-6bf5-4b47-8bba-0a7fd0e28e47/JOURNEY_OFICINA_HBASE/_SUCCESS
16/06/22 20:41:53 INFO hfile.CacheConfig: CacheConfig:disabled
16/06/22 20:41:53 INFO mapreduce.LoadIncrementalHFiles: Trying to load hfile=hdfs://lnxbig05.cajarural.gcr:8020/tmp/a08d1fda-6bf5-4b47-8bba-0a7fd0e28e47/JOURNEY_OFICINA_HBASE/0/bb208025d42b4b5fb14c3d8143e99878 first=00492016-06-01 18:00:00 last=F0132016-06-01 13:00:00
Any suggestions?
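One quick check (an editorial suggestion, not from the thread): after LoadIncrementalHFiles reports the hfile, you can verify from sqlline whether the rows actually landed:

-- The count should match the input CSV row count if the bulk load completed.
SELECT COUNT(*) FROM JOURNEY_OFICINA_HBASE;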
06-22-2016
05:59 PM
Hi: I am trying to insert from Phoenix into an HBase table and I get this error. I know the error is like the one described here, but I don't want to modify the jar phoenix-4.4.0.2.4.0.0-169-client.jar, so does anyone know how to get the new jar? https://2scompliment.wordpress.com/2013/12/11/running-hbase-java-applications-on-hortonworks-hadoop-sandbox-2-x-with-yarn/
16/06/22 19:44:36 INFO zookeeper.ZooKeeper: Client environment:java.library.path=:/usr/hdp/2.4.0.0-169/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.0.0-169/hadoop/lib/native
16/06/22 19:44:36 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
16/06/22 19:44:36 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
16/06/22 19:44:36 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
16/06/22 19:44:36 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
16/06/22 19:44:36 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-573.8.1.el6.x86_64
16/06/22 19:44:36 INFO zookeeper.ZooKeeper: Client environment:user.name=hdfs
16/06/22 19:44:36 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hdfs
16/06/22 19:44:36 INFO zookeeper.ZooKeeper: Client environment:user.dir=/usr/hdp/2.4.0.0-169/phoenix
16/06/22 19:44:36 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=xxxxxxxx:2181,xxxxxxxx:2181,xxxxxxxxx:2181 sessionTimeout=90000 watcher=hconnection-0x35dc6a890x0, quorum=xxxxxxxx:2181,xxxxxxxx:2181,xxxxxxxxx:2181, baseZNode=/hbase
16/06/22 19:44:36 INFO zookeeper.ClientCnxn: Opening socket connection to server xxxxxxx/xxxxxxxx:2181. Will not attempt to authenticate using SASL (unknown error)
16/06/22 19:44:36 INFO zookeeper.ClientCnxn: Socket connection established to xxxxxxxxx/10.1.246.20:2181, initiating session
16/06/22 19:44:36 INFO zookeeper.ClientCnxn: Session establishment complete on server xxxxxxxx/10.1.246.20:2181, sessionid = 0x354b76f3879008b, negotiated timeout = 40000
16/06/22 19:44:36 INFO client.ZooKeeperRegistry: ClusterId read in ZooKeeper is null
16/06/22 19:44:36 INFO metrics.Metrics: Initializing metrics system: phoenix
16/06/22 19:44:36 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
16/06/22 19:44:36 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 60 second(s).
16/06/22 19:44:36 INFO impl.MetricsSystemImpl: phoenix metrics system started
16/06/22 19:44:37 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3ed32db7 connecting to ZooKeeper ensemble=xxxxxxx:2181,xxxxxxx:2181,xxxxxxxx:2181
16/06/22 19:44:37 INFO client.ZooKeeperRegistry: ClusterId read in ZooKeeper is null
16/06/22 19:44:37 ERROR client.ConnectionManager$HConnectionImplementation: The node /hbase is not in ZooKeeper. It should have been written by the master. Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
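An editorial note on the likely fix, consistent with the command that worked in the 06:43 PM follow-up above: on an unsecured HDP cluster the HBase root znode is /hbase-unsecure rather than the default /hbase, and the znode can be appended to the Phoenix connection string, for example:

sqlline.py lnxbig04.cajarural.gcr:2181:/hbase-unsecure

The same host:port:znode triple works in the JDBC URL used by a client jar, e.g. jdbc:phoenix:lnxbig04.cajarural.gcr:2181:/hbase-unsecure.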
Labels:
- Apache HBase
- Apache Phoenix