Member since: 11-07-2016
Posts: 70
Kudos Received: 40
Solutions: 16
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4101 | 02-22-2018 09:20 PM
 | 7047 | 01-30-2018 03:41 PM
 | 1281 | 10-25-2017 06:09 PM
 | 10949 | 08-15-2017 10:54 PM
 | 3423 | 06-26-2017 05:05 PM
03-10-2017
01:07 AM
@Marcos Da Silva Could you please provide the CREATE TABLE statement and the SELECT query that triggers the full table scan? Also, please run EXPLAIN and post the output here; that will definitely help in understanding the issue.
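For reference, what is being asked for might look roughly like this (the table name, columns, and query below are hypothetical placeholders, not the poster's actual schema):

```sql
-- Hypothetical DDL; replace with your actual CREATE TABLE statement.
CREATE TABLE events (id BIGINT, payload STRING)
PARTITIONED BY (event_date STRING)
STORED AS ORC;

-- Prefix the problematic SELECT with EXPLAIN to see the query plan,
-- including whether partitions are pruned or a full table scan occurs.
EXPLAIN SELECT * FROM events WHERE event_date = '2017-03-01';
```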
02-28-2017
04:06 PM
1 Kudo
Hi all, what is the way to delete a row from an HBase table (given some row key) using NiFi? We are using PutHBaseCell or PutHBaseJSON for inserts and updates, but couldn't find anything for deletes, except for ExecuteScript, which is pretty expensive. Any ideas? Thanks!
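For context, the script-based route mentioned above boils down to calling the HBase client's delete API. A minimal sketch (assuming the HBase client library is on the classpath; the table name and row key are hypothetical):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Table;

public class DeleteRow {
    public static void main(String[] args) throws Exception {
        // Picks up hbase-site.xml from the classpath.
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("my_table"))) {
            // Delete the entire row for the given row key.
            table.delete(new Delete("some-rowkey".getBytes()));
        }
    }
}
```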
Labels:
- Apache HBase
- Apache NiFi
02-06-2017
04:20 PM
@subacini balakrishnan, I'm glad it worked for you. Could you please accept the correct answer, so the question is marked as answered? Thanks!
02-03-2017
03:09 PM
@subacini balakrishnan, Have you tried that?
02-01-2017
03:53 PM
1 Kudo
@subacini balakrishnan, here we go! Please change the partition key type to string; date is not supported as a type for partition columns.
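The suggested change might look like this (a sketch with a hypothetical table name; the `server_date` column name is taken from the log excerpt later in this thread):

```sql
-- Partition column declared as STRING rather than DATE.
CREATE TABLE spark_test (id INT, val STRING)
PARTITIONED BY (server_date STRING)
STORED AS ORC;

-- Partition values are then plain string literals.
ALTER TABLE spark_test ADD PARTITION (server_date = '2016-10-23');
ALTER TABLE spark_test DROP PARTITION (server_date = '2016-10-23');
```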
01-29-2017
05:17 PM
So, DDLs and DMLs run successfully, but SELECT fails. Check whether the user you are running the query as (it looks like the "hive" user) has write permissions on the /tmp directory. The difference between the CLI and the HS2 API is that your code runs on different nodes (edge vs. master), so make sure user "hive" has the same setup on both of them.
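A quick way to check this might be (a sketch; these commands need to run on the cluster, and the permission mode shown is the typical sticky-bit setting for /tmp, not necessarily what your site policy requires):

```shell
# Inspect owner and permissions of HDFS /tmp.
hdfs dfs -ls -d /tmp

# If needed, open it up (run as the HDFS superuser).
sudo -u hdfs hdfs dfs -chmod 1777 /tmp

# Also check the local /tmp on the node where the query runs (edge vs. master).
ls -ld /tmp
```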
01-29-2017
04:50 PM
1 Kudo
@subacini balakrishnan, there is some mess in the logs. In the trace above, in the "HIVE FAILURE OUTPUT" section, you have:

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. AlreadyExistsException(message:Table spark_3_test already exists) ... dbName:default, tableName:spark_3_test ... Partition not found (server_date = 2016-10-23)

I'm not sure how that is related to the query you are running, which targets the "spark_2_test" table and a different partition. I have reproduced all the steps in both Zeppelin and spark-shell, and there shouldn't be any issue with running DROP PARTITION from the Spark shell. Try to give it a clean shot: a new table, new partitions, no locks on data/directories, no two tables with the same location, etc. Just a clean shot. IMO it is related to the specific table configuration/definition.
01-27-2017
07:40 PM
@Arkaprova Saha
If you are running Java code via a JDBC connection to Hive, there is no "LOCAL" directory (except in local mode, which I believe is not the case here). Well, actually, there is a local dir, but it is on the machine that accepts your query (HS2). Most probably you just don't have write permissions for the user you are connecting as. I would suggest: 1. Run the same query without "LOCAL"; it will create the files on HDFS. 2. Pull the files from HDFS (hdfs get, hdfs getmerge, or the Java FileSystem API). Let me know if that works for you.
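The two steps above might look like this (a sketch; the output path `/tmp/query_out` is a hypothetical placeholder):

```shell
# Step 1 (via JDBC/beeline): the same query, just without LOCAL, e.g.
#   INSERT OVERWRITE DIRECTORY '/tmp/query_out' SELECT ... ;

# Step 2: merge the result files from HDFS down to the client machine.
hdfs dfs -getmerge /tmp/query_out ./query_out.txt
```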
01-26-2017
02:38 PM
Hi @Vaibhav Kumar, if you want to create a bag matching the target table's structure, you can do the following: a = load 'file.csv' using PigStorage(',') as (x:int, y:int, w:int);
b = foreach a generate x, y, (int)null as z, w;
describe b;
-- b: {x: int,y: int,z: int,w: int}