Member since 01-04-2021
29 Posts · 4 Kudos Received · 1 Solution

My Accepted Solutions

Title | Views | Posted
---|---|---
 | 898 | 09-17-2023 03:09 AM
09-17-2023 09:07 AM
I have been trying to integrate NiFi with Apache Ranger. When I manually configure policies in Ranger for NiFi, the policies are fetched by NiFi and authorization works fine. But when I define the service definition and test the connection, it gives the following error. The configuration in the service definition is shown below. Authentication in NiFi was set up by following this article. Note: only NiFi is set up in SSL mode; Apache Ranger does not have SSL enabled. What could be the possible reason this is happening?
Labels:
- Apache NiFi
- Apache Ranger
09-17-2023 03:09 AM
1 Kudo
After some more time debugging, it looks like all the configurations were correct. The password for the configured user was wrong in LDAP, which was causing the issue.
09-16-2023 02:34 PM
Here is my login-identity-providers.xml:
<provider>
<identifier>ldap-provider</identifier>
<class>org.apache.nifi.ldap.LdapProvider</class>
<property name="Authentication Strategy">SIMPLE</property>
<property name="Manager DN">cn=admin,dc=example,dc=com</property>
<property name="Manager Password">secret</property>
<property name="TLS - Keystore"></property>
<property name="TLS - Keystore Password"></property>
<property name="TLS - Keystore Type"></property>
<property name="TLS - Truststore"></property>
<property name="TLS - Truststore Password"></property>
<property name="TLS - Truststore Type"></property>
<property name="TLS - Client Auth"></property>
<property name="TLS - Protocol"></property>
<property name="TLS - Shutdown Gracefully"></property>
<property name="Referral Strategy">FOLLOW</property>
<property name="Connect Timeout">10 secs</property>
<property name="Read Timeout">10 secs</property>
<property name="Url">ldap://localhost:389</property>
<property name="User Search Base">cn=vishnu,cn=admin,dc=example,dc=com</property>
<property name="User Search Filter">(objectClass=*)</property>
<property name="Identity Strategy">USE_USERNAME</property>
<property name="Authentication Expiration">12 hours</property>
<property name="User Object Class">person</property>
<property name="User Search Scope">ONE_LEVEL</property>
<property name="User Identity Attribute">cn</property>
</provider>
The authorizers.xml is shown below.
<userGroupProvider>
<identifier>file-user-group-provider</identifier>
<class>org.apache.nifi.authorization.FileUserGroupProvider</class>
<property name="Users File">./conf/users.xml</property>
<property name="Legacy Authorized Users File"></property>
<property name="Initial User Identity 1">cn=vishnu,cn=admin,dc=example,dc=com</property>
</userGroupProvider>
<accessPolicyProvider>
<identifier>file-access-policy-provider</identifier>
<class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
<property name="User Group Provider">file-user-group-provider</property>
<property name="Authorizations File">./conf/authorizations.xml</property>
<property name="Initial Admin Identity">cn=vishnu,cn=admin,dc=example,dc=com</property>
<property name="Legacy Authorized Users File"></property>
<property name="Node Identity 1"></property>
<property name="Node Group"></property>
</accessPolicyProvider>
<authorizer>
<identifier>managed-authorizer</identifier>
<class>org.apache.nifi.authorization.StandardManagedAuthorizer</class>
<property name="User Group Provider">ldap-user-group-provider</property>
<property name="Access Policy Provider">file-access-policy-provider</property>
<property name="Initial Admin Identity">cn=vishnu,cn=admin,dc=example,dc=com</property>
<property name="Legacy Authorized Users File"></property>
<property name="Node Identity 1"></property>
</authorizer>
The following property value was updated:
nifi.login.identity.provider.configuration.file=./conf/login-identity-providers.xml
Below is the view of LDAP from Apache Directory Studio. Currently there is only one user in that search base. Can someone help identify why the authentication is failing? I referred to other articles within the Cloudera community and outside, but none seem to work.
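To make the moving parts of the configuration above concrete, here is an assumption-level sketch (plain Python, not NiFi source code) of how the configured properties — User Search Base, User Object Class, User Identity Attribute, and a SIMPLE bind — combine to resolve a login name and check a password. The directory contents and passwords are invented for illustration.

```python
# Illustrative sketch only, not NiFi internals. The directory entries
# and passwords below are assumptions for demonstration.

SEARCH_BASE = "cn=vishnu,cn=admin,dc=example,dc=com"
USER_OBJECT_CLASS = "person"   # "User Object Class" property
IDENTITY_ATTRIBUTE = "cn"      # "User Identity Attribute" property

# Pretend directory: DN -> attributes.
DIRECTORY = {
    "cn=vishnu,cn=admin,dc=example,dc=com": {
        "objectClass": ["person"],
        "cn": "vishnu",
        "userPassword": "correct-password",
    },
}

def find_user(login_name):
    """Search at/under SEARCH_BASE for an entry whose identity
    attribute matches the login name (scope simplified here)."""
    for dn, attrs in DIRECTORY.items():
        if not dn.endswith(SEARCH_BASE):
            continue
        if USER_OBJECT_CLASS not in attrs.get("objectClass", []):
            continue
        if attrs.get(IDENTITY_ATTRIBUTE) == login_name:
            return dn
    return None

def simple_bind(dn, password):
    """A SIMPLE bind succeeds only if the password matches the entry's."""
    entry = DIRECTORY.get(dn)
    return entry is not None and entry.get("userPassword") == password

dn = find_user("vishnu")
print(dn, simple_bind(dn, "wrong-password"), simple_bind(dn, "correct-password"))
```

One subtlety the sketch deliberately glosses over: a real ONE_LEVEL search matches only the direct children of the search base, not the base entry itself, which is worth double-checking when the base DN is the user's own entry.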
Labels:
- Apache NiFi
12-19-2022 03:51 AM
1 Kudo
I have tried several scenarios to generate a cache miss in HBase with HDP 2.6.5. The different steps I followed include: 1) putting a value in HBase using the put command and fetching it using the get command; 2) putting a value with the put command, flushing, and then trying to fetch the data. None of these creates cache misses. In fact, the hits and hitsCaching counters keep increasing by multiple counts during flushing. The misses and missesCaching counters always remain zero. Why is this behaviour occurring? What is the difference between misses and missesCaching, and between hits and hitsCaching? I will attach the screenshots of the region server logs.
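One plausible explanation (an assumption, not a confirmed diagnosis) is that blocks are being placed into the blockcache at write time — e.g. via cache-on-write or prefetch-on-open behavior — so the first read after a flush is already a hit. The toy model below, which is not HBase code, shows how the hit/miss counters would behave under that assumption: a get served from the memstore touches no cache counters at all, and when flushing also populates the cache, misses stay at zero.

```python
# Toy model of blockcache hit/miss accounting -- NOT HBase internals.
# The cache_on_write flag models the assumption that flushing also
# populates the blockcache (e.g. a cacheblocksonwrite-style setting).

class ToyRegionServer:
    def __init__(self, cache_on_write=True):
        self.memstore = {}
        self.hfile = {}       # flushed key/values
        self.blockcache = {}
        self.hits = 0
        self.misses = 0
        self.cache_on_write = cache_on_write

    def put(self, key, value):
        self.memstore[key] = value

    def flush(self):
        for key, value in self.memstore.items():
            self.hfile[key] = value
            if self.cache_on_write:
                self.blockcache[key] = value  # cached at write time
        self.memstore.clear()

    def get(self, key):
        # A value still in the memstore needs no block read at all,
        # so neither hits nor misses move.
        if key in self.memstore:
            return self.memstore[key]
        if key in self.blockcache:
            self.hits += 1
            return self.blockcache[key]
        self.misses += 1               # only a cold block read is a miss
        value = self.hfile[key]
        self.blockcache[key] = value
        return value

rs = ToyRegionServer(cache_on_write=True)
rs.put("row1", "v1")
print(rs.get("row1"), rs.hits, rs.misses)  # from memstore: counters untouched
rs.flush()
print(rs.get("row1"), rs.hits, rs.misses)  # cached during flush: a hit, no miss
```

Under this model, the only way to see a miss is to read a flushed block that was never cached (set `cache_on_write=False`, or in real HBase evict the block / restart the region server before reading).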
Labels:
- Apache Hadoop
- Apache HBase
- HDFS
12-13-2022 09:34 PM
@smdas Adding another scenario to the picture. Let's say a row key is already in the blockcache, and an update for it was just made. The blockcache holds an existing value for that row key, which is no longer the latest, while the updated value is in the memstore (or, after a flush, in an HFile). When a read occurs for the same row key, we look for the data first in the blockcache and find the row key with the old value. How does HBase know that the value the blockcache currently holds is not the latest, and that the latest has to be fetched from the memstore or an HFile?
12-10-2022 10:04 AM
@smdas So even if a key is updated in the memstore and not updated in the blockcache, does the read merge apply the memstore values over the blockcache directly, without updating the HFile? Because for the HFile to be updated, a flush has to happen, right? Or are the memstore and blockcache checks done simultaneously?
12-08-2022 02:38 AM
1 Kudo
Consider a scenario where data is written to an HFile in HBase. A read occurs, and the result is saved in the blockcache. Then an update occurs for the data that is in the blockcache, and the update is saved in the memstore.
Now, if the same data is read again, HBase looks for it first in the blockcache, and if the cached block has not been evicted, the result is found there. If that data were returned to the client, it would be an inconsistent read.
Is it possible for the above scenario to occur, or is my understanding wrong?
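The scenario above can be sketched as a small merge-read model. This is an assumption-level illustration, not HBase source: the blockcache stores immutable HFile blocks rather than "the latest value", and every read merges the memstore on top of those blocks, with the newest timestamp winning — so a newer memstore cell shadows an older cached block and the stale read never reaches the client.

```python
# Assumption-level sketch of a merge read -- not HBase internals.
# Cells are (timestamp, value) pairs; the highest timestamp wins.

def merge_read(key, memstore, cached_blocks):
    """Collect candidate cells from the memstore and from cached
    HFile blocks, then return the value with the newest timestamp."""
    candidates = []
    if key in memstore:
        candidates.append(memstore[key])
    for block in cached_blocks:
        if key in block:
            candidates.append(block[key])
    return max(candidates)[1] if candidates else None

# An old value was flushed into an HFile block that now sits in the
# blockcache; a newer update for the same row lives in the memstore.
cached_block = {"row1": (100, "old")}
memstore = {"row1": (200, "new")}

print(merge_read("row1", memstore, [cached_block]))  # "new", not "old"
```

The key point of the sketch: the blockcache is consulted, but never by itself — its cells always compete with the memstore's on timestamp, which is why the cached old value cannot win.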
Labels:
- Apache HBase
- HDFS
12-08-2022 02:35 AM
In HBase, as per my reading, a read happens by first checking the blockcache, then the memstore if missed, then using bloom filters to check for the record, and finally using the index on the HFile to read the data. But what if all the data is compressed? How can HBase find the index and read the data from a compressed HFile? And even if it can read it, where does the decompression occur? On the client?
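As a rough illustration of where decompression happens, here is an assumption-level sketch (plain Python with zlib, not HBase code): compression in HBase is applied per data block when an HFile is written, and the region server decompresses blocks as it reads them, so the client never receives compressed bytes; the block index and other metadata live alongside the data blocks and are not modeled here.

```python
import json
import zlib

# Sketch of block-level compression -- an illustration, not HBase code.
# Real HFiles compress each data block; the block index that locates a
# block is stored separately and is not modeled in this toy.

def write_hfile_block(cells):
    """'Flush': serialize a data block's cells and compress the block."""
    return zlib.compress(json.dumps(cells).encode())

def region_server_read(compressed_block, key):
    """The region server decompresses the whole block first, then
    searches it -- decompression happens server-side, not on the client."""
    block = json.loads(zlib.decompress(compressed_block).decode())
    return block.get(key)

compressed = write_hfile_block({"row1": "v1", "row2": "v2"})
print(region_server_read(compressed, "row1"))  # v1
```

By default the blockcache then holds the decompressed form of the block, so subsequent hits skip the decompression step entirely.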
Labels:
- Apache HBase
- HDFS
03-03-2021 11:45 PM
I have set up user authentication using LDAP for NiFi. I am now able to add new users and restrict access for different users, etc. But I am still not able to understand how to truly achieve multi-tenancy with NiFi. For example, when I add new users, I grant them the policy to view the interface so they can log in. But such a user can still see the positions and connections of all components, even though they cannot access them. As I understand it, this is not true multi-tenancy, because users should not be able to see things that are not accessible to them. The same problem occurs with the controller services: users can view them but not access or edit them. It is also not possible to create an admin for just one tenant. Is there any way we can truly achieve this in NiFi? @MattWho @bbende @pam1
Labels:
- Apache NiFi