Member since: 09-28-2015
Posts: 73
Kudos Received: 26
Solutions: 6
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 6971 | 01-20-2017 01:27 PM |
| | 2227 | 06-01-2016 08:24 AM |
| | 2393 | 05-28-2016 01:33 AM |
| | 1708 | 05-17-2016 03:44 PM |
| | 911 | 12-22-2015 01:50 AM |
06-25-2016
02:28 PM
No, I didn't... It looks to me like a tiny bug in the QueryDatabaseTable processor, but I didn't get a chance to dig into it.
06-13-2016
09:07 AM
This is just a very simple table, created using the below:
SQL> create table logs (log varchar2(255), update_time timestamp);
SQL> insert into logs values ('hello', CURRENT_TIMESTAMP);
SQL> insert into logs values ('beijing', CURRENT_TIMESTAMP);
SQL> insert into logs values ('this is a nice day today', CURRENT_TIMESTAMP);
06-01-2016
09:00 AM
@Simon Elliston Ball The QueryDatabaseTable processor looks like exactly what I need. However, it failed to execute:
08:56:45 UTC ERROR a4ba040c-7a7e-4068-b7f4-f7d1e1f3823c
QueryDatabaseTable[id=a4ba040c-7a7e-4068-b7f4-f7d1e1f3823c] Unable to execute SQL select query SELECT * FROM LOGS WHERE UPDATE_TIME > '2016-05-30 16:48:00.373' due to org.apache.nifi.processor.exception.ProcessException: Error during database query or conversion of records to Avro: org.apache.nifi.processor.exception.ProcessException: Error during database query or conversion of records to Avro
Executing the generated SQL directly in SQL*Plus gave me the same error. Looks like a timestamp format issue.
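For reference, the usual workaround for this kind of Oracle error is to make the literal's format explicit with TO_TIMESTAMP instead of relying on the session's NLS settings. A minimal sketch of building such a query string (the format mask is an assumption that matches the literal shown above, not something taken from the processor's actual output):

```python
# Hedged sketch: wrap the timestamp literal in TO_TIMESTAMP so Oracle parses
# it explicitly instead of depending on NLS_TIMESTAMP_FORMAT.
last_run = "2016-05-30 16:48:00.373"
query = (
    "SELECT * FROM LOGS WHERE UPDATE_TIME > "
    f"TO_TIMESTAMP('{last_run}', 'YYYY-MM-DD HH24:MI:SS.FF3')"
)
print(query)
```

Running the wrapped query in SQL*Plus would confirm whether the bare literal is the culprit.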
06-01-2016
08:24 AM
Thanks, @yzheng. The mirror instance is in a healthy state. After upgrading the target cluster to 2.3.4 and changing the job to run on the source cluster, it works now.
05-31-2016
12:18 AM
Thanks, @Jitendra Yadav. Changing the table name and user name to uppercase solved the issue.
05-30-2016
11:58 PM
Hi, I got the below error using Sqoop on HDP 2.4.2 to import data from Oracle. The generated SQL seems to be wrong: it double-quotes the table name ("logs") and also has a strange WHERE condition (1=0). I am using:
sqoop import --connect jdbc:oracle:thin:@//xxxxxx:1521/ORCL --username admin --password xxxx --table logs --target-dir /user/centos/input/oracle -m 1
16/05/30 23:27:31 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM "logs" t WHERE 1=0
16/05/30 23:27:31 ERROR manager.SqlManager: Error executing statement: java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist
java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist
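For context on the accepted fix (changing the names to uppercase): Oracle folds unquoted identifiers to uppercase when a table is created, while Sqoop double-quotes the value of `--table` verbatim, so a lowercase `logs` never matches. A sketch of the corrected command, keeping the host and password placeholders from the original post; it needs a live cluster and database to run:

```shell
# Sketch: pass the table and user names already uppercased so the quoted
# identifier "LOGS" matches what Oracle actually stored.
sqoop import \
  --connect jdbc:oracle:thin:@//xxxxxx:1521/ORCL \
  --username ADMIN --password xxxx \
  --table LOGS \
  --target-dir /user/centos/input/oracle -m 1
```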
Labels:
- Apache Sqoop
05-30-2016
11:21 PM
@peeyush That is the target cluster running HDP 2.3.4.7-4. The source cluster is HDP 2.4.2.0
05-30-2016
05:41 PM
1 Kudo
Hi, is there a template for using NiFi to incrementally ingest data from an RDBMS into Hadoop? I know Sqoop is the better choice for this, but for some reason I must use NiFi this time. I think the idea is to store the timestamp of the last successful ingestion in an RDB table and compare each row's update time against it when using ExecuteSQL to do the select. The questions I have right now are:
1. How do I store the timestamp in the RDB table using NiFi? PutSQL seems to be the processor, but how do I give the SQL to PutSQL?
2. How do I handle an empty result set from ExecuteSQL? Is there a simple way to do "if the result set is empty, do nothing; else update the last ingestion timestamp and store the result set to HDFS"?
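Outside NiFi, the control logic described above can be sketched in a few lines. This is only an illustration of the "compare against a stored last-run timestamp, skip empty result sets, then advance the watermark" idea, using SQLite and in-memory tables in place of the real RDBMS and HDFS (all table and column names here are assumptions for the sketch):

```python
import sqlite3

# Sketch: a watermark table (ingest_state) plus a source table (logs),
# standing in for the RDB table and the data to be ingested.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (log TEXT, update_time TEXT)")
conn.execute("CREATE TABLE ingest_state (last_run TEXT)")
conn.execute("INSERT INTO ingest_state VALUES ('1970-01-01 00:00:00')")
conn.execute("INSERT INTO logs VALUES ('hello',   '2016-05-30 10:00:00')")
conn.execute("INSERT INTO logs VALUES ('beijing', '2016-05-30 11:00:00')")

def incremental_pull(conn):
    # Read the last successful ingestion timestamp (the watermark).
    (last_run,) = conn.execute("SELECT last_run FROM ingest_state").fetchone()
    rows = conn.execute(
        "SELECT log, update_time FROM logs WHERE update_time > ?", (last_run,)
    ).fetchall()
    if rows:  # empty result set -> do nothing, watermark stays put
        new_high = max(t for _, t in rows)
        conn.execute("UPDATE ingest_state SET last_run = ?", (new_high,))
    return rows

first = incremental_pull(conn)   # picks up both rows, advances watermark
second = incremental_pull(conn)  # nothing newer -> empty, no update
```

In a NiFi flow the same roles would be played by ExecuteSQL (the select), a routing step on result-set size, PutHDFS (the sink), and PutSQL (the watermark update).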
Labels:
- Apache NiFi
05-30-2016
05:31 PM
Hi, I set up two HDP clusters and tried to use Falcon to mirror data from one cluster to the other. I could create the two clusters and configure the mirror settings in the Falcon UI without error. I can also see the mirroring MR job completing successfully. However, files do not get mirrored from source to target. I am attaching my Falcon Mirror settings. Am I missing anything?
Labels:
- Apache Falcon
05-28-2016
01:33 AM
I solved this issue by changing the username to keyadmin@REALM.COM directly in the Ranger KMS repository config UI. Configuring this in the Ambari Ranger KMS UI and restarting the Ranger and Ranger KMS services didn't apply it to the actual KMS repository config property.