Member since: 12-25-2018
Posts: 13
Kudos Received: 0
Solutions: 0
07-19-2019
04:05 AM
Hi @CarlosHS, in order to make cURL/REST API calls against NiFi, you first need an access token. As per the blog, the following command will get you the access token: curl 'https://nifi-host:port/nifi-api/access/token' -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' --data 'username=ldap-username&password=ldap-password' --compressed --insecure Later this token needs to be reused for all operations, for example: curl -k 'https://nifi-host:port/nifi-api/flow/process-groups/root/status' -H "Authorization: Bearer $token" Hope this helps.
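Putting the two calls together as a script, a minimal sketch; the host, port, and LDAP credentials below are placeholders for your own environment:

```shell
# Placeholder base URL; substitute your NiFi host and port.
NIFI_API='https://nifi-host:8443/nifi-api'

# 1. Obtain the token (needs a live, LDAP-secured NiFi, so shown commented):
#    token=$(curl -k "$NIFI_API/access/token" \
#      -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' \
#      --data 'username=ldap-username&password=ldap-password')
token='<jwt-returned-by-the-call-above>'

# 2. Reuse the token as a Bearer credential on every subsequent request:
#    curl -k "$NIFI_API/flow/process-groups/root/status" -H "$auth_header"
auth_header="Authorization: Bearer $token"
echo "$auth_header"
```

The token is a JWT string returned in the response body of the /access/token call, so capturing it in a shell variable and interpolating it into the Authorization header is all that is needed.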
06-27-2019
06:49 PM
@CarlosHS Here is a command I used to get a token: curl 'https://nifi-hostname:nifi-port/nifi-api/access/token' -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' --data 'username=username&password=password' --compressed --insecure
05-13-2019
02:21 AM
@CarlosHS You can use the PutCassandraQL processor: extract the full content of the flowfile using an ExtractText processor, then prepare your insert statement and insert the data into Cassandra. Refer to this link for more details. (or) If you have CSV data, you can configure the CSV reader (in PutCassandraRecord) with a value separator that does not occur in the data, such as ~ or some other character not present in the data. The processor then reads the entire line as one value and writes it to the Cassandra table.
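A quick illustration of the separator trick in plain shell, with awk standing in for the CSV reader (the sample row is made up):

```shell
line='1,Chile,South America'

# With the real delimiter ',' the line splits into three fields...
fields_comma=$(echo "$line" | awk -F',' '{print NF}')

# ...but with '~', which never occurs in the data, the whole line is
# read back as a single field, which is what the record writer would
# then store as one value.
fields_tilde=$(echo "$line" | awk -F'~' '{print NF}')

echo "$fields_comma $fields_tilde"
```

The same idea applies to the Value Separator property of the CSV reader: any character guaranteed absent from the data collapses each line into a single column.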
04-28-2019
05:18 PM
You initially posted this in an inappropriate Track. The
04-24-2019
05:36 PM
Hi @Matt Burgess, I have deleted the "AttributeToJSON" processor I mentioned, and now the PutCassandraRecord processor doesn't return any error, so that problem has disappeared. Without it, anyway, and setting only this structure: in AttributesToJSON I take the field register_date (which is null until that date), because this is the only field I want to put in the database. Then, with UpdateAttribute, I configure register_date as ${now():toNumber()}. Finally, I use PutCassandraRecord. I'm getting the following record in my database: -294294420. What does this number mean? What if I want to obtain the date in "yyyy-MM-dd" format, which is the format I was expecting to see? However, I don't understand the solution you are proposing very well (English is not my first language, so that may have something to do with it). What structure are you proposing, and how could I configure the UpdateRecord processor to achieve the objective stated in the previous paragraph? Many thanks for your help.
03-25-2019
11:19 PM
You can remove fields with any record-based processor, in this case probably ConvertRecord is the most straightforward. You'd use a JsonTreeReader with your incoming schema, and a JsonRecordSetWriter with a schema with the unwanted fields removed. Fields that don't exist in the outgoing schema are not written to the flow file.
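A minimal sketch of the writer-side schema for this setup. Assuming the incoming records carry an extra field named wikipedia_link, the JsonTreeReader's schema would list all fields including it, while the JsonRecordSetWriter's schema omits it (the record and field names here are illustrative):

```json
{
  "type": "record",
  "name": "country",
  "fields": [
    {"name": "id", "type": "int"},
    {"name": "name", "type": "string"},
    {"name": "continent", "type": "string"}
  ]
}
```

Because wikipedia_link is absent from the writer schema, ConvertRecord simply drops it from the output flow file.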
12-26-2018
03:27 AM
@CarlosHS Even when we replace columns in a Hive text table, the underlying data is not changed, i.e. the wikipedia_link data will still be present in the HDFS file. So when we access the table, Hive reads the data with the "," delimiter and returns the wikipedia_link data in place of the keywords column. - Steps to drop the wikipedia_link column along with its data:
hive> set hive.support.quoted.identifiers=none;
hive> create table countries_temp row format delimited fields terminated by ',' stored as textfile as select `(wikipedia_link)?+.+` from countries; //create temp table without the wikipedia_link column
hive> drop table countries; //drop countries table
hive> alter table countries_temp rename to countries; //rename temp table to countries.
(or) Another way would be creating a view without the wikipedia_link column:
hive> create view vw_countries as select id,code,name,continent,keywords from countries;
then accessing data from the view instead of the countries table.