Member since 02-16-2016
89 Posts | 24 Kudos Received | 10 Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
| 10006 | 05-14-2018 01:54 PM
| 1549 | 05-08-2018 05:07 PM
| 1101 | 05-08-2018 04:46 PM
| 2916 | 02-13-2018 08:53 PM
| 3468 | 11-09-2017 04:24 PM
05-31-2018
03:04 PM
All processors run under the context of the user that is running NiFi. Check authentication for the user running the NiFi service.
05-29-2018
02:05 PM
1 Kudo
You will need the machine names (DNs) for both NiFi and NiFi Registry, like "CN=machinename,...dc=example,dc=com", where CN is the server name and the remaining portion comes from whatever wildcard (sub)domain you have. Then enter the full DN manually as a user in both NiFi and NiFi Registry. This is similar to setting up site-to-site policies, as described here: https://community.hortonworks.com/articles/88473/site-to-site-communication-between-secured-https-a.html Finally, check the log files (*-app.log and *-user.log) for errors when pulling buckets, on both the NiFi and the NiFi Registry side; they may also show the full DN NiFi is looking for.
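As an illustration only (the hostnames are placeholders, and your certificate DNs may use a different component layout), the two user entries might look like:

```
# NiFi node certificate DN (hypothetical hostname)
CN=nifi01,dc=example,dc=com
# NiFi Registry certificate DN (hypothetical hostname)
CN=nifiregistry01,dc=example,dc=com
```

The DN you enter must match the certificate's subject exactly, character for character, which is why pulling it out of the *-user.log error messages is usually the safest route.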
05-28-2018
05:24 PM
In client mode, your jar runs on the edge node or local machine, which has SMTP connectivity. In cluster mode, any of the data nodes could run your jar, so you will need to check connectivity to the SMTP server from all nodes.
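A quick way to verify that is a plain TCP connect to the SMTP host/port from each node. A minimal sketch in Python (the SMTP hostname and port below are placeholders for your environment):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical SMTP endpoint; run this from every data node in cluster mode:
# print(can_connect("smtp.example.com", 25))
```

Run it (or an equivalent `nc`/`telnet` check) on each data node; any node that returns False will fail to send mail when the driver or executor lands there.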
05-28-2018
05:10 PM
Have a look at this if you need to bypass Avro for Hive DDL: https://github.com/tspannhw/nifi-convertjsontoddl-processor If you need to convert JSON to ORC (for Hive), Avro will be required. You will need to write/manage Avro schemas (the recommendation is to use Schema Registry for that). Alternatively, you can use InferAvroSchema to detect the incoming schema from the JSON, but it may not be 100% correct all the time.
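For reference, a hand-written Avro schema for a simple JSON record (the record and field names here are hypothetical, not from the original question) looks like this:

```
{
  "type": "record",
  "name": "Event",
  "namespace": "com.example",
  "fields": [
    {"name": "id", "type": "long"},
    {"name": "message", "type": ["null", "string"], "default": null}
  ]
}
```

Writing schemas by hand like this lets you make fields nullable with sensible defaults, which InferAvroSchema cannot always guess correctly from sample data.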
05-28-2018
04:49 PM
If I remember correctly, you will need to add the NiFi Registry server DN (SSL machine name) to NiFi > Access Policies, and the NiFi server DN to NiFi Registry, to be able to read and query buckets. Once they both know each other, the buckets will load.
05-28-2018
04:02 PM
If users and groups have been deleted in the OpenLDAP server, you should use 'existing' mode with the Ambari LDAP sync:

ambari-server sync-ldap --existing

https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-security/content/existing_users_and_groups.html
05-22-2018
06:11 PM
Perhaps this is it: "Reading/writing to an ACID table from a non-ACID session is not allowed. In other words, the Hive transaction manager must be set to org.apache.hadoop.hive.ql.lockmgr.DbTxnManager in order to work with ACID tables."

SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;

https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-Limitations
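Per the Hive Transactions page linked above, a couple of companion settings are typically needed in the same session for ACID tables (these are the documented values, not site-specific tuning):

```
-- Required for ACID reads/writes in the session
SET hive.support.concurrency=true;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
-- Needed when inserting into dynamically partitioned ACID tables
SET hive.exec.dynamic.partition.mode=nonstrict;
```

If only the transaction manager is set but concurrency support is off, the session can still fail with the same "non-ACID session" style errors.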
05-22-2018
06:02 PM
Here is the call we make; if Kafka does not exist, you will get status 404:

curl --user admin -sS -G "http://ambari_server_here/api/v1/clusters/CLUSTERNAME_HERE/services/KAFKA"

API resource: https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/index.md
05-22-2018
05:49 PM
A QueryRecord processor seems like overkill for this problem, and it will require more work if you don't have auto-incrementing fields. You can use just the SplitText processor to do everything. https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.5.0/org.apache.nifi.processors.standard.SplitText/index.html
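As a sketch (the values are illustrative, not taken from the original thread), a SplitText configuration that emits one line per FlowFile while skipping a one-line header could look like:

```
Line Split Count         : 1
Header Line Count        : 1
Remove Trailing Newlines : true
```

Each resulting FlowFile then carries fragment attributes (index, count) that downstream processors can use for ordering, which removes the need for an auto-incrementing field.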