Member since: 04-03-2019
Posts: 38
Kudos Received: 68
Solutions: 5

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1475 | 11-03-2017 06:11 PM
 | 1164 | 02-09-2017 11:24 PM
 | 3048 | 02-06-2017 12:54 PM
 | 3843 | 01-04-2017 02:49 PM
 | 3031 | 02-17-2016 09:49 AM
12-15-2022
03:46 AM
Try the Skivia app. It can sync Grafana and Hive data without coding. Read more here.
12-07-2022
09:46 PM
While converting from JSON to Avro format, how do I get a logicalType into the Avro output? And what do I need to add to the JSON data to get a logicalType in the Avro format?
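For context, Avro logical types are declared in the Avro schema rather than in the JSON data itself. A minimal schema sketch (the record and field names are illustrative, not from the question):
```
{
  "type": "record",
  "name": "Event",
  "fields": [
    {
      "name": "created_at",
      "type": { "type": "long", "logicalType": "timestamp-millis" }
    }
  ]
}
```
With a schema like this, a plain JSON long such as `{"created_at": 1670400000000}` is read as an epoch-millis value and carries the `timestamp-millis` logical type in the resulting Avro.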
06-03-2021
05:46 AM
Hi, please check http://docs.hortonworks.com/HDPDocuments/HDF2/HDF-2.1.1/index.html

Once you enable Ranger for Apache NiFi, you will need to add each user and the node identities in Ranger and apply policies: https://community.hortonworks.com/articles/60001/hdf-20-integrating-secured-nifi-with-secured-range...

You can also check:
- https://community.hortonworks.com/articles/57980/hdf-20-apache-nifi-integration-with-apache-ambarir...
- http://bryanbende.com/development/2016/08/22/apache-nifi-1.0.0-using-the-apache-ranger-authorizer
05-03-2021
12:13 PM
The issue has been resolved, thanks a lot.
12-01-2020
03:47 AM
I'm trying to run a DAG with Airflow 1.10.12 and HDP 3.0.0. When I run the DAG it gets stuck at:
```
Connecting to jdbc:hive2://[Server2_FQDN]:2181,[Server1_FQDN]:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
```
When I run the following from a shell, it connects to Hive with no problem:
```
beeline -u "jdbc:hive2://[Server1_FQDN]:2181,[Server2_FQDN]:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"
```
I've also created a connection like this:
```
Conn Id:        hive_jdbc
Conn Type:
Connection URL: jdbc:hive2://centosserver.son.ir:2181,centosclient.son.ir:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Login:          hive
Password:       ******
Driver Path:    /usr/hdp/3.0.0.0-1634/hive/jdbc/hive-jdbc-3.1.0.3.0.0.0-1634-standalone.jar
Driver Class:   org.apache.hive.jdbc.HiveDriver
```
I'm not using Kerberos. I've also added `hive.security.authorization.sqlstd.confwhitelist.append` to `Custom hive-site` in Ambari with this value:
```
radoop\.operation\.id|mapred\.job\.name||airflow\.ctx\.dag_id|airflow\.ctx\.task_id|airflow\.ctx\.execution_date|airflow\.ctx\.dag_run_id|airflow\.ctx\.dag_owner|airflow\.ctx\.dag_email|hive\.warehouse\.subdir\.inherit\.perms|hive\.exec\.max\.dynamic\.partitions|hive\.exec\.max\.dynamic\.partitions\.pernode|spark\.app\.name
```
Any suggestions? I'm desperate; I've tried every way I know but still nothing. @nsabharwal @agillan @msumbul1 @deepesh1
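Since the URL above uses serviceDiscoveryMode=zooKeeper, one generic first check for this kind of hang (not from the original post; the client path follows HDP conventions and the hostname is a placeholder) is to confirm that HiveServer2 has actually registered itself in ZooKeeper:
```
# Open a ZooKeeper shell against one quorum node (hostname is a placeholder)
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server Server1_FQDN:2181

# Inside zkCli, list the HiveServer2 namespace used by the JDBC URL;
# if no entries come back, clients discover nothing and appear to hang
ls /hiveserver2
```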
01-13-2020
01:53 AM
Agree with @MattWho. You can use the NiFi Expression Language to choose certain files from your source; based on the date or file name, you can filter the files using the FetchSFTP processor. In the latest version of NiFi, Expression Language support is disabled on the relevant ListSFTP properties, so to handle your case you have to do it in two steps: ListSFTP ==> FetchSFTP (if you want to put dynamic date filters on your source directories). For example, you can set the Remote File property of FetchSFTP to `0000000001_${now():toNumber():minus(86400000):format('yyyyMMdd')}235959_filename`, which gives the output `0000000001_20200112235959_filename`. Regards, Nitin
... View more
10-04-2019
04:19 PM
In case you get the below error, make sure you use the NiFi host FQDN in the API call and NOT the IP address. Also, make sure DNS is configured correctly.

HTTP ERROR 401
Problem accessing /nifi-api/access/kerberos. Reason: Unauthorized
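For reference, a minimal sketch of the kind of call this applies to, using the endpoint from the error above (the hostname, port, and SPNEGO flags are assumptions, not from the original post):
```
# Request a Kerberos-backed access token from NiFi via SPNEGO;
# use the host's FQDN here, not the IP, or the 401 above can occur
curl -k -X POST --negotiate -u : "https://nifi-node.example.com:9443/nifi-api/access/kerberos"
```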
07-03-2017
10:40 PM
Thanks for sharing. Let's say I want to store the output returned by the script in a database table. How can I do that?
06-30-2017
11:09 PM
6 Kudos
While designing your flow, you will want to think about handling failures as well. Some may ignore (auto-terminate) failures, while others may want to analyze their failed flow files.
You can view/download any flow file in a connection via the NiFi UI. However, if there are multiple flow files this becomes a little tedious, since you have to do it for each flow file.
In that case, I used the flow below to put all failed flow files into a temp location on my NiFi node and then access them from there for further analysis. In this example I am generating a random 2 B text file and passing it to an FTP server. If the FTP server is unreachable, all flow files go to the failure/reject queue. You can list all files in this queue; I have viewed one such file in the image below. Then all these files are passed to a PutFile processor, which is configured to write every file it receives to the /tmp/test location on the NiFi node.
Please note: there could be multiple ways to do this; this is the approach I used to quickly download all failed flow files in my flow. (Attached images: download-failed-flowfiles-2.png, download-failed-flowfiles-3.png, download-failed-flowfiles-4.png, download-failed-flowfiles-5.png)
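For reference, a sketch of the PutFile configuration described above (the property names are PutFile's standard ones; the Directory value mirrors the post, while the other two values are assumptions):
```
PutFile processor properties:
  Directory                    = /tmp/test
  Conflict Resolution Strategy = replace   # assumption: overwrite duplicates instead of failing
  Create Missing Directories   = true      # assumption: create /tmp/test if absent
```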
06-30-2017
10:03 PM
7 Kudos
This is a known issue (https://issues.apache.org/jira/browse/NIFI-3800) which is fixed in Apache NiFi 1.2.0 (HDF 2.1.4 and later).
Workaround: downgrade your Java version to openjdk-1.8.0_121 if you are using something higher than that.
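A quick way to confirm which JDK your NiFi host is actually running (a generic check, not part of the original workaround; the install path is illustrative for a typical HDF layout):
```
# Print the active JDK version; per the workaround, stay at 1.8.0_121 or lower
java -version

# NiFi reads JAVA_HOME from bin/nifi-env.sh, so verify which JDK it points at
grep JAVA_HOME /usr/hdf/current/nifi/bin/nifi-env.sh
```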