Member since
11-30-2017
44
Posts
6
Kudos Received
2
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1405 | 04-16-2018 07:49 PM
 | 1713 | 01-05-2018 02:31 PM
02-06-2020
09:12 AM
@josh_nicholson NOTE: On a Kerberized cluster, the value of "zookeeper.znode.parent" may be "/ams-hbase-secure", so we can connect to it as follows: /usr/hdp/2.5.0.0-1245/phoenix/bin/sqlline.py c6403.ambari.apache.org:61181:/ams-hbase-secure
10-08-2018
03:39 PM
Hey Josh: you could run SHOW CREATE TABLE mytable; and then look for the keyword LOCATION in the output. When I run that in my SQL client, the HDFS path is on the next line. You can also look for a line that starts with 'hdfs://'. If you want to combine this with the information @Aditya Sirna provided, you could have a file with multiple statements like: SHOW CREATE TABLE mytable; SHOW CREATE TABLE mytabl1; SHOW CREATE TABLE mytabl2; and then filter for lines that start with hdfs. I haven't found a way to get JUST the location of a table. Hope that helps. Thanks! Regards
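The filtering step described above can be sketched like this (a minimal sketch: the beeline connection string, the script file name, and the sample SHOW CREATE TABLE output are all assumptions, not from this thread):

```shell
# Hedged sketch: run the statements through beeline and keep only the
# hdfs:// location lines (URL and tables.sql file are hypothetical):
# beeline -u jdbc:hive2://localhost:10000 --silent=true -f tables.sql | grep 'hdfs://'

# Runnable demo of the filtering step on sample SHOW CREATE TABLE output:
printf "LOCATION\n  'hdfs://nn:8020/warehouse/mytable'\n" \
  | grep -o "hdfs://[^']*"
```

This only post-processes text, so it works with any client that can dump SHOW CREATE TABLE output to stdout.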
04-16-2019
02:15 PM
We already have a flow which monitors our cluster and sends users an email every time a node is disconnected. I am trying to create a process in NiFi so that, before it sends us the email saying a node is disconnected, it first tries to start or reconnect the disconnected node(s).
05-07-2019
03:00 PM
@Josh Nicholson When using the kerberos-provider via the login-identity-providers.xml file, the user's full Kerberos principal is used every time. You can ignore the "Default Realm" property in the kerberos-provider (NiFi's code does not actually use it right now --> https://jira.apache.org/jira/browse/NIFI-6224 ). So when a user enters a username that does not include the "@<realm>" portion, the default realm from the krb5.conf file configured in nifi.properties is used.

That full identity is then passed through your configured identity mapping patterns. This means you need a pattern that matches on: ^(.*?)@(.*?)$ with a resulting value of: $1 so that only the username portion is passed on to your configured authorizer.

As for some users coming in with just a username and others with full principal names: the users coming in with just usernames are likely not being authenticated via the login provider at all. Even with a login provider configured, the default TLS/SSL authentication is attempted first. So if those users have a trusted client certificate loaded in their browser, it will be presented to your NiFi for authentication and those users will never see the login window. With a client certificate, the full DN is used to identify the user, and that full DN is likely matching your existing mapping pattern, resulting in just the username you are seeing. So it is important that you not remove the existing mapping pattern, but instead add a second one:

nifi.security.identity.mapping.pattern.<any string>
nifi.security.identity.mapping.value.<any string>

Patterns are evaluated in alphanumeric order; the first matching regex is applied.

Thank you, Matt
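The second mapping pair described above could look like this in nifi.properties (a sketch only; the ".kerb" suffix is a hypothetical stand-in for the "<any string>" part, and the regex and value are the ones quoted in the answer):

```properties
# Hedged sketch: second identity mapping pair; ".kerb" is a hypothetical suffix.
nifi.security.identity.mapping.pattern.kerb=^(.*?)@(.*?)$
nifi.security.identity.mapping.value.kerb=$1
```

Because patterns are evaluated in alphanumeric order, pick a suffix that sorts after your existing DN pattern if you want the DN pattern tried first.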
11-13-2018
07:08 AM
Go to Ambari --> HDFS --> Quick Links --> Master (Active). A new page will open; search for "Number of Blocks Pending Deletion" and you will find the pending-deletion block count. Refresh the page after 30 seconds and the count will have changed. Enjoy!
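The same figure should also be available from the NameNode's JMX endpoint as the FSNamesystem PendingDeletionBlocks metric (a hedged sketch: the hostname is a placeholder, and 50070 is only the usual HDP 2.x NameNode HTTP port, so adjust for your cluster):

```shell
# Hedged sketch: query the NameNode JMX endpoint for the metric
# (host and port are assumptions, not from this thread):
# curl -s 'http://<namenode-host>:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem' \
#   | grep PendingDeletionBlocks

# Runnable demo of the filtering step on a sample JMX response:
printf '{"beans":[{"PendingDeletionBlocks":42}]}\n' \
  | grep -o '"PendingDeletionBlocks"[^,}]*'
```

This is handy when you want to script the check instead of refreshing the NameNode UI by hand.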
08-23-2018
05:44 PM
This worked, thanks!
07-31-2018
04:43 PM
The response of the POST should be the process group entity with the id populated, and in addition there should be a header that has the URI of the created process group.
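A sketch of what checking that response might look like (hedged: the NiFi host, port, parent group id, and request body here are assumptions for illustration, not from this thread):

```shell
# Hedged sketch: create a child process group and inspect the response;
# with -i, curl also prints the Location header holding the new group's URI.
# curl -si -X POST 'https://nifi-host:8443/nifi-api/process-groups/root/process-groups' \
#   -H 'Content-Type: application/json' \
#   -d '{"revision":{"version":0},"component":{"name":"demo"}}'

# Runnable demo: pull the populated id out of a sample response body.
printf '{"id":"abc-123","component":{"name":"demo"}}\n' \
  | grep -o '"id":"[^"]*"'
```

Either the body's id field or the Location header can be used to address the new process group in follow-up calls.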
05-07-2018
01:20 PM
@Matt Burgess Thanks for opening the JIRA, Matt. As a workaround in the meantime, I discovered I can use ${s2s.host} to get the host name; I just need an UpdateAttribute processor to add this as an actual attribute on the flowfile.
11-16-2018
08:51 PM
Another option could be to use Ambari Log Search.
04-16-2018
07:49 PM
HDF 3.1.1 is not compatible with HDP 2.6.1. I was told by Hortonworks that HDP 3.0 will be compatible and I must wait until its release.