Member since: 04-03-2019
Posts: 35
Kudos Received: 68
Solutions: 5
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 852 | 11-03-2017 06:11 PM
 | 557 | 02-09-2017 11:24 PM
 | 1865 | 02-06-2017 12:54 PM
 | 2461 | 01-04-2017 02:49 PM
 | 1554 | 02-17-2016 09:49 AM
06-26-2018
07:43 AM
@Bhushan Kandalkar - what do you see in the Ranger logs for the same timeframe?
06-25-2018
05:39 PM
@Bhushan Kandalkar Is the Test Connection for the Ranger NiFi repo working fine?
03-09-2018
09:48 PM
Hi @Josh Nicholson, on the Ambari server node, can you check what is present in the below file?
/var/lib/ambari-server/resources/mpacks/hdf-ambari-mpack-3.0.1.0-43/stacks/HDF/3.0/services/NIFI/metainfo.xml
Also, in the Ambari DB you may need to check the output of the below query:
select version_xml from repo_version;
If both point to NiFi version 1.2.0, then we will have to update metainfo.xml and also the repo_version table.
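A hedged sketch of those two checks, assuming the default embedded PostgreSQL Ambari database (DB and user both "ambari"); adjust for MySQL/Oracle setups:
# Check the NiFi version recorded in the mpack definition
grep -i '<version>' /var/lib/ambari-server/resources/mpacks/hdf-ambari-mpack-3.0.1.0-43/stacks/HDF/3.0/services/NIFI/metainfo.xml
# Query the repo_version table (psql will prompt for the ambari DB password)
psql -U ambari ambari -c "select version_xml from repo_version;"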
11-03-2017
06:11 PM
Hi @Sammy Gold, do you have NameNode HA enabled in your HDP cluster? It looks like it is looking for the NameNode HA namespace. Can you please pass hdfs-site.xml and core-site.xml as well, along with hive-site.xml, in the PutHiveStreaming "Hive Configuration Resources" property?
10-05-2017
03:32 PM
Hi @Arun A K, this is a known issue where the datatypes are not preserved: https://issues.apache.org/jira/browse/NIFI-2624 talks about Oracle/SQL datatypes not being preserved. You should also check out https://gist.github.com/ijokarumawak/69b29fa7b11c2ada656823db614af373 As mentioned by @Karthik Narayanan, the best approach would be to use the record-oriented processors.
06-30-2017
11:41 PM
8 Kudos
You can execute any shell script using the ExecuteProcess processor in NiFi. For example, below is a very simple shell script which writes "Hello world" to a file.
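A minimal sketch of such a script (the output path /tmp/hello.txt and the script name are assumptions, not the original post's exact script):
#!/bin/bash
# Write "Hello world" to a file; in ExecuteProcess this would typically be
# wired up as Command = /bin/bash and Command Arguments = /path/to/hello.sh
echo "Hello world" >> /tmp/hello.txt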
06-30-2017
11:32 PM
8 Kudos
For a non-SSL-enabled NiFi, the below should work:
curl --tlsv1.2 -i -H 'Content-Type: application/json' -XPUT -d '{"id":"cdb54c9a-0158-1000-5566-c45ca9692f85","state":"RUNNING"}' localhost:8080/nifi-api/flow/process-groups/cdb54c9a-0158-1000-5566-c45ca9692f85
Start a process group for an SSL-enabled NiFi:
Generate an access token:
[root@nifi-ambari-01 ~]# curl --tlsv1.2 https://000.00.00.000:9091/nifi-api/access/token --data 'username=awadhwani&password=password' -k
O/P:
eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJjbj1BcnRpIFdhZGh3YW5pLG91PVBlb3BsZSxkYz1zbWUsZGM9aHd4IiwiaXNzIjoiTGRhcFByb3ZpZGVyIiwiYXVkIjoiTGRhcFByb3ZpZGVyIiwicHJlZmVycmVkX3VzZXJuYW1lIjoiQXJ0aSBXYWRod2FuaSIsImtpZCI6MywiZXhwIjoxNDg5MjMyNDQyLCJpYXQiOjE0ODkxODkyNDJ9.17iHL3XX7Bw6dXv5lCByimu_asQOaSwW11o2IQEFO0s
Start the PG by passing the above access token:
[root@nifi-ambari-01 ~]# curl --tlsv1.2 -ik -H 'Content-Type: application/json' -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJjbj1BcnRpIFdhZGh3YW5pLG91PVBlb3BsZSxkYz1zbWUsZGM9aHd4IiwiaXNzIjoiTGRhcFByb3ZpZGVyIiwiYXVkIjoiTGRhcFByb3ZpZGVyIiwicHJlZmVycmVkX3VzZXJuYW1lIjoiQXJ0aSBXYWRod2FuaSIsImtpZCI6MywiZXhwIjoxNDg5MjMyNDQyLCJpYXQiOjE0ODkxODkyNDJ9.17iHL3XX7Bw6dXv5lCByimu_asQOaSwW11o2IQEFO0s' -XPUT -d '{"id":"2f092b07-0157-1000-0000-00005f526fbc","state":"RUNNING"}' https://000.00.00.000:9091/nifi-api/flow/process-groups/2f092b07-0157-1000-0000-00005f526fbc
O/P:
HTTP/1.1 200 OK
Date: Fri, 10 Mar 2017 23:42:14 GMT
Server: Jetty(9.3.9.v20160517)
Cache-Control: private, no-cache, no-store, no-transform
X-ProxiedEntitiesAccepted: true
Date: Fri, 10 Mar 2017 23:42:14 GMT
Content-Type: application/json
Content-Length: 63

{"id":"2f092b07-0157-1000-0000-00005f526fbc","state":"RUNNING"}
[root@nifi-ambari-01 ~]#
(Screenshots of the process group before and after starting are omitted.)
Stop a process group for an SSL-enabled NiFi (use the same token generated above):
[root@nifi-ambari-01 ~]# curl --tlsv1.2 -ik -H 'Content-Type: application/json' -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJjbj1BcnRpIFdhZGh3YW5pLG91PVBlb3BsZSxkYz1zbWUsZGM9aHd4IiwiaXNzIjoiTGRhcFByb3ZpZGVyIiwiYXVkIjoiTGRhcFByb3ZpZGVyIiwicHJlZmVycmVkX3VzZXJuYW1lIjoiQXJ0aSBXYWRod2FuaSIsImtpZCI6MywiZXhwIjoxNDg5MjMyNDQyLCJpYXQiOjE0ODkxODkyNDJ9.17iHL3XX7Bw6dXv5lCByimu_asQOaSwW11o2IQEFO0s' -XPUT -d '{"id":"2f092b07-0157-1000-0000-00005f526fbc","state":"STOPPED"}' https://000.00.00.000:9091/nifi-api/flow/process-groups/2f092b07-0157-1000-0000-00005f526fbc
HTTP/1.1 200 OK
Date: Fri, 10 Mar 2017 23:42:46 GMT
Server: Jetty(9.3.9.v20160517)
Cache-Control: private, no-cache, no-store, no-transform
X-ProxiedEntitiesAccepted: true
Date: Fri, 10 Mar 2017 23:42:46 GMT
Content-Type: application/json
Content-Length: 63

{"id":"2f092b07-0157-1000-0000-00005f526fbc","state":"STOPPED"}
[root@nifi-ambari-01 ~]#
(Screenshots of the process group before and after stopping are omitted.)
06-30-2017
11:09 PM
6 Kudos
While designing your flow, you will want to handle failures as well. Some may ignore (terminate) failures, while others may want to analyze their failed flow files.
You can view/download any flowfile in a connection via the NiFi UI. However, if there are multiple flowfiles, this becomes a little tedious, since you have to do it for each flowfile.
In that case, I used the below flow to put all failed flow files in a temp location on my NiFi node and then access them from there for further analysis. In this flow, I am generating a random 2 B text file and passing it to an FTP server. If the FTP server is unreachable, it will put all flow files into the failure/reject queue. You can list all files in this queue; I have viewed one such file in the image below. Then all these files are passed to a PutFile processor which is configured to write all the files it receives to the /tmp/test location on the NiFi node.
Please note: there could be multiple ways to do this. This is the approach I used to quickly download all failed flow files in my flow. (Flow screenshots: download-failed-flowfiles-2.png through download-failed-flowfiles-5.png.)
06-30-2017
10:27 PM
11 Kudos
Using the NiFi REST API on an unsecured cluster is straightforward, like below:
[root@<nifi-host> ~]# curl -v -X GET http://<nifi-host>:<port>/nifi-api/flow/current-user
* About to connect() to <nifi-host> port <port> (#0)
* Trying <IP address>...
* Connected to <nifi-host> (<IP address>) port <port> (#0)
> GET /nifi-api/flow/current-user HTTP/1.1
> User-Agent: curl/7.29.0
> Host: <nifi-host>:<port>
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Fri, 30 Jun 2017 22:15:09 GMT
< X-Frame-Options: SAMEORIGIN
< X-Frame-Options: SAMEORIGIN
< X-Frame-Options: SAMEORIGIN
< Cache-Control: private, no-cache, no-store, no-transform
< Server: Jetty(9.4.3.v20170317)
< Vary: Accept-Encoding, User-Agent
< Date: Fri, 30 Jun 2017 22:15:09 GMT
< Date: Fri, 30 Jun 2017 22:15:09 GMT
< Content-Type: application/json
< Content-Length: 439
<
* Connection #0 to host <nifi-host> left intact
{"identity":"anonymous","anonymous":true,"provenancePermissions":{"canRead":true,"canWrite":true},"countersPermissions":{"canRead":true,"canWrite":true},"tenantsPermissions":{"canRead":true,"canWrite":true},"controllerPermissions":{"canRead":true,"canWrite":true},"policiesPermissions":{"canRead":true,"canWrite":true},"systemPermissions":{"canRead":true,"canWrite":true},"restrictedComponentsPermissions":{"canRead":true,"canWrite":true}} However if this cluster is using Kerberos for authentication then the curl call will need a Kerberos authentication token as below: First do a kinit (using appropriate keytab/principal) on the nifi node you are logged into. Now get a token using below API call: token=`curl -k -X POST --negotiate -u : https://<nifi-node>:<port>/nifi-api/access/kerberos` Second you need to pass above generated token to the actual API call: curl -k --header "Authorization: Bearer $token" https://<nifi-host>:<port>/nifi-api/flow/cluster/summary
06-30-2017
10:03 PM
7 Kudos
This is a known issue, https://issues.apache.org/jira/browse/NIFI-3800, which is fixed in Apache NiFi 1.2.0 (HDF 2.1.4 and later).
Workaround: downgrade your Java version to openjdk-1.8.0_121 if you are using something higher than that.
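A quick way to check which Java build is in use (a sketch; on RHEL/CentOS, alternatives lets you switch between the installed JDKs):
java -version
# list and switch registered JDKs
alternatives --config java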
06-08-2017
08:14 PM
We can change the URL in the quicklinks.json file and it should redirect. I tried changing it to the below, and the NiFi UI quick link redirects to Google (for testing):
"url":"http://www.google.com"
However, we should not edit this file unless it is important, since it is difficult to maintain manually.
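For reference, a sketch of locating the file (the path below is an assumption and varies by stack/mpack version):
# find the NIFI quicklinks.json under the Ambari resources directory
find /var/lib/ambari-server/resources -name quicklinks.json -path '*NIFI*'
# inside it, the link's "url" field is the value to change, e.g.:
#   "url":"http://www.google.com"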
04-04-2017
08:40 PM
1 Kudo
@Anandakrishnan Ramakrishnan -- I believe this will be available once https://issues.apache.org/jira/browse/NIFI-3426 is fixed.
02-11-2017
12:14 AM
Hi @marksf, can you try in a different browser? Also, can you verify that everything was done as per the "Generate Client certificate" section of the below article: https://community.hortonworks.com/articles/58009/hdf-20-enable-ssl-for-apache-nifi-from-ambari.html Importing the certificate into Firefox: https://blog.rosander.ninja/nifi/toolkit/tls/2016/09/19/tls-toolkit-intro.html
02-09-2017
11:24 PM
2 Kudos
@marksf
With HDP 2.4, what version of HDF did you install? Yes, we cannot have HDP and HDF managed by the same Ambari yet. You can install the NiFi-only tar on your HDP 2.5 cluster, but it won't be managed by the HDP Ambari. HDF can be a standalone instance or a cluster as well. It can connect to an HDP cluster using the client libraries available in the NiFi lib (NARs). The HDF cluster can have NiFi, ZooKeeper, Kafka, Storm, Ranger, Ambari Metrics, Ambari Infra, and Log Search. However, you can choose the services you would like, similar to HDP, and install only NiFi and ZooKeeper to start with. Hope this helps.
02-06-2017
12:54 PM
5 Kudos
Hi @chennuri gouri shankar, did you try installing HDF instead? Please check http://docs.hortonworks.com/HDPDocuments/HDF2/HDF-2.1.1/index.html Once you enable Ranger, you will need to add each user and the node identities in Ranger and apply policies: https://community.hortonworks.com/articles/60001/hdf-20-integrating-secured-nifi-with-secured-range.html You can also check: https://community.hortonworks.com/articles/57980/hdf-20-apache-nifi-integration-with-apache-ambarir.html http://bryanbende.com/development/2016/08/22/apache-nifi-1.0.0-using-the-apache-ranger-authorizer
01-04-2017
02:49 PM
Hi @Aman Jain, can you check the "State management" section of https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.standard.ListSFTP/ ? Is this what you are looking for?
01-01-2017
04:30 AM
2 Kudos
How to enable SSL for the Storm UI on an unsecured cluster:
1. Generate a keystore and certificate:
[root@beautiful-storm2 ~]# /usr/jdk64/jdk1.8.0_77/bin/keytool -genkeypair -alias certificatekey -keyalg RSA -validity 7 -keystore keystore.jks
Enter keystore password:
Re-enter new password:
What is your first and last name? [Unknown]: storm
What is the name of your organizational unit? [Unknown]: storm
What is the name of your organization? [Unknown]: storm
What is the name of your City or Locality? [Unknown]: storm
What is the name of your State or Province? [Unknown]: storm
What is the two-letter country code for this unit? [Unknown]: storm
Is CN=storm, OU=storm, O=storm, L=storm, ST=storm, C=storm correct? [no]: yes
Enter key password for <certificatekey> (RETURN if same as keystore password):
Re-enter new password:
2. Add the below properties via Ambari to custom storm-site:
ui.https.key.password=bigdata
ui.https.keystore.password=bigdata
ui.https.keystore.path=/keystore.jks (this is the path to the keystore.jks generated in the above step)
ui.https.keystore.type=jks
ui.https.port=8740
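If you prefer the command line over the Ambari UI, a sketch using Ambari's bundled configs.sh helper (assuming Ambari on localhost, a cluster named c1, and default admin credentials):
# usage: configs.sh set <ambari-host> <cluster> <config-type> <key> <value>
/var/lib/ambari-server/resources/scripts/configs.sh set localhost c1 storm-site ui.https.port 8740
/var/lib/ambari-server/resources/scripts/configs.sh set localhost c1 storm-site ui.https.keystore.path /keystore.jks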
3. Sanity check: list your keystore:
[root@beautiful-storm2 ~]# /usr/jdk64/jdk1.8.0_77/bin/keytool -list -keystore keystore.jks
Enter keystore password:
Keystore type: JKS
Keystore provider: SUN
Your keystore contains 1 entry
certificatekey, Dec 13, 2016, PrivateKeyEntry,
Certificate fingerprint (SHA1): 4D:8A:C1:0E:8F:4A:4B:26:0C:27:4C:DD:39:96:00:83:CE:F4:B3:6E
4. Now hit the Storm HTTPS UI: https://<storm nimbus IP address>:8740/index.html (plain HTTP does not work now)
5. You will see the below in the Storm ui.log:
2016-12-13 18:47:20.011 o.a.s.j.s.Server [INFO] jetty-7.x.y-SNAPSHOT
2016-12-13 18:47:20.036 o.a.s.j.s.h.ContextHandler [INFO] started o.a.s.j.s.ServletContextHandler{/,null}
2016-12-13 18:47:20.481 o.a.s.j.u.s.SslContextFactory [INFO] Enabled Protocols [SSLv2Hello, TLSv1, TLSv1.1, TLSv1.2] of [SSLv2Hello, SSLv3, TLSv1, TLSv1.1, TLSv1.2]
2016-12-13 18:47:20.493 o.a.s.j.s.AbstractConnector [INFO] Started SslSocketConnector@0.0.0.0:8740
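As an extra command-line check (a sketch; substitute your own host), you can confirm the UI now answers over TLS:
# -k skips certificate verification, since the certificate above is self-signed
curl -vk https://<storm nimbus IP address>:8740/index.html
openssl s_client -connect <storm nimbus IP address>:8740 </dev/null | head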
11-08-2016
08:19 PM
Hi @Aravindan Vijayan
- The Ambari Metrics Monitor logs on all hosts throw "Error sending metrics to the server.. Connection refused" errors.
- The Ambari Metrics Collector logs have no errors.
- Other services' dashboards in Grafana work fine.
11-08-2016
08:12 PM
1 Kudo
The Grafana Hive dashboard shows no data points. Since the Hive service in Ambari has no metrics displayed, where does Grafana read these metrics from?
Labels:
- Apache Ambari
- Apache Hive
10-11-2016
05:31 PM
Hey @swagle, do you have any link for setting up a custom Kafka dashboard in Grafana for Ambari 2.2.2?
03-23-2016
06:55 PM
1 Kudo
I also faced the same exception, and it worked after I added hbase-site.xml to the Oozie Hive sharelib path (/user/oozie/share/lib/lib_<>/hive).
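For reference, a sketch of the steps involved (lib_<> is left as-is since the timestamped directory name varies per install; run as a user with write access to the sharelib):
# copy hbase-site.xml into the Oozie Hive sharelib on HDFS
hdfs dfs -put /etc/hbase/conf/hbase-site.xml /user/oozie/share/lib/lib_<>/hive/
# tell the Oozie server to pick up the sharelib change
oozie admin -oozie http://<oozie-host>:11000/oozie -sharelibupdate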
03-16-2016
08:59 AM
Can we specify the queue to be used instead of the default queue, e.g., by appending "tez.queue.name=xyz" to the command? Will that work?
03-10-2016
12:08 AM
3 Kudos
I think there are some global policies created whenever we enable any Ranger plugin in the Sandbox. Such a global policy blocks access for everyone by default. So for the other policies to work, or for access control to fall back on the other authorization method, we need to disable this global policy. For example, in this case, review whether any global policy exists under the HDFS repo in Ranger; if yes, disable it. Access checks will not fall back to HDFS ACLs while this global policy exists.
02-17-2016
09:49 AM
1 Kudo
OK, we need to use POST instead of PUT in the curl call, and then it works fine:
curl --negotiate -ikv -u: -X POST 'http://<yarn host>:8088/ws/v1/cluster/apps/new-application'
02-17-2016
09:44 AM
1 Kudo
To add more: GET operations work fine on the Kerberos cluster.
02-17-2016
09:29 AM
1 Kudo
Are there any extra settings needed for using the YARN REST API in a Kerberos-enabled environment? Trying to create a new application using the below curl call gives HTTP/1.1 500 Internal Server Error. The same cluster without Kerberos works fine. Command used:
curl --negotiate -ikv -u: -X PUT 'http://<yarn host>:8088/ws/v1/cluster/apps/new-application'
Labels:
- Apache YARN
01-28-2016
10:49 AM
2 Kudos
Need to integrate the Ranger UI login with LDAP. Does the below look correct?
1. Log in to the Ambari UI
2. Go to the Ranger service --> Configs
3. Expand "Ranger Settings"
4. Under "Authentication method", select the appropriate one.
5. Upon selecting LDAP/ACTIVE_DIRECTORY, the settings for it will appear in the next section below. Please fill in the details accordingly.
6. Save the changes and restart the Ranger service.
Labels:
- Apache Ranger
12-02-2015
12:24 PM
3 Kudos
Thanks @Neeraj Sabharwal and @Jonas Straub. I already asked the customer to try this option, but it did not work via the command line. Did I miss something? I tried it and it didn't recognise the option:
solr create_collection -c universe_counts -d /home/xyz/solrhorton/solr_universe_counts/universe_counts/ -n universe_counts -shards 1 -replicationFactor 3 -createNodeSet xyz01:8983_solr,xyz03:8983_solr,xyz05:8983_solr
ERROR: Unrecognized or misplaced argument: -createNodeSet!
Usage: solr create_collection [-c collection] [-d confdir] [-n configName] [-shards #] [-replicationFactor #] [-p port]
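One possible alternative (a sketch reusing the hosts and config name from the command above): the SolrCloud Collections API does accept createNodeSet directly, so the same create can be issued over HTTP:
curl 'http://xyz01:8983/solr/admin/collections?action=CREATE&name=universe_counts&collection.configName=universe_counts&numShards=1&replicationFactor=3&createNodeSet=xyz01:8983_solr,xyz03:8983_solr,xyz05:8983_solr'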
12-02-2015
12:01 PM
3 Kudos
Does anyone know if there is an option for the solr create_collection command to specify the nodes the index should be created on?
Labels:
- Apache Solr