Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 839 | 06-04-2025 11:36 PM |
| | 1414 | 03-23-2025 05:23 AM |
| | 705 | 03-17-2025 10:18 AM |
| | 2521 | 03-05-2025 01:34 PM |
| | 1650 | 03-03-2025 01:09 PM |
11-02-2017
12:29 PM
@Zhao Chaofeng Is this thread still open, i.e., has this problem not been resolved yet? Please revert.
11-01-2017
10:26 AM
@Nilesh This document will walk you through it; you should be able to access NiFi using your AD account.
10-31-2017
08:24 PM
@Chaitanya D On HDP 2.6.x I just tried the same command on my kerberized cluster and it was successful; see the attached sqoop.jpg.

$ sqoop import --connect jdbc:mysql://localhost/oozie --username oozie --password oozie --table BUNDLE_ACTIONS -m 1 --target-dir /user/oozie/sqoop_import/BUNDLE_ACTIONS/

Does the user root have a home directory in HDFS? Check using the below:

$ hdfs dfs -ls /user

Otherwise, create a dummy directory or change the permissions on the existing one:

$ hdfs dfs -chmod 777 /user/hduser

Then rerun your Sqoop job. I added --class-name (to force generating the customers_part class) and customers3 as a new target directory, since you had already created customers2:

$ sqoop import --connect jdbc:mysql://localhost/localdb --username root --password mnbv@1234 --table customers -m 1 --class-name customers_part --target-dir /user/hduser/sqoop_import/customers3/

Please let me know.
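The home-directory check above can be sketched as a small helper. This is only a sketch: the `ensure_hdfs_home` function name and the `FS` override hook are made up here (the override exists solely so the logic can be exercised without a live cluster); the `/user/<name>` layout and the `hdfs dfs` subcommands are the ones used in this thread.

```shell
#!/usr/bin/env bash
# Sketch: ensure a user has a home directory in HDFS before running a
# Sqoop import. FS defaults to the real HDFS client; override FS to
# dry-run the logic without a cluster (hypothetical hook, not a real flag).
FS="${FS:-hdfs dfs}"

ensure_hdfs_home() {
  local user="$1"
  if $FS -test -d "/user/$user"; then
    echo "/user/$user already exists"
  else
    $FS -mkdir -p "/user/$user"
    $FS -chown "$user" "/user/$user"
    echo "created /user/$user"
  fi
}

# e.g. ensure_hdfs_home hduser   # then run the sqoop import
```

`hdfs dfs -test -d` returns 0 only if the path exists and is a directory, which is why the create branch only runs on a missing home.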
10-30-2017
12:54 PM
1 Kudo
@Florin Miron This error can arise if you do not start the NameNode as the hdfs user. You are trying to run it as root, but the file /hadoop/hdfs/namenode/in_use.lock is owned by the hdfs user. Do not use sudo to start Hadoop processes; start it as hdfs instead:

$ su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode"

Can you try that and revert?
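That diagnosis can be sketched as a quick pre-flight check: compare the lock file's owner with the service account you intend to start the daemon as. The `check_owner` helper name is made up for illustration; the lock path and the hdfs user are the ones from this post, and GNU `stat` is assumed.

```shell
#!/usr/bin/env bash
# Sketch: warn if a daemon lock file is owned by a different user than the
# service account that will start the daemon (assumes GNU stat, -c '%U').
check_owner() {
  local file="$1" expected="$2" actual
  actual=$(stat -c '%U' "$file") || return 2
  if [ "$actual" = "$expected" ]; then
    echo "OK: $file is owned by $expected"
  else
    echo "WARN: $file is owned by $actual, expected $expected"
    return 1
  fi
}

# e.g. check_owner /hadoop/hdfs/namenode/in_use.lock hdfs
# then: su -l hdfs -c "... hadoop-daemon.sh start namenode"
```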
10-30-2017
11:44 AM
@suraj l Can you upload this log: /data/log/nifi/nifi-user.log? Meanwhile, let me spin up a single-node NiFi cluster and try to reproduce your settings. Did you follow a specific document or set of steps? Please revert.
10-30-2017
09:43 AM
@suraj l I see the below error in the logs:

INFO [NiFi Web Server-22] o.a.n.w.a.c.IllegalStateExceptionMapper java.lang.IllegalStateException: Kerberos ticket login not supported by this NiFi. Returning Conflict response.

Your setup looks kerberized. Did you configure a Login Identity Provider that integrates with Kerberos?

<provider>
    <identifier>kerberos-provider</identifier>
    <class>org.apache.nifi.kerberos.KerberosProvider</class>
    <property name="Default Realm">NIFI.APACHE.ORG</property>
    <property name="Kerberos Config File">/etc/krb5.conf</property>
    <property name="Authentication Expiration">12 hours</property>
</provider>

With the above configuration, username/password authentication can be enabled by referencing this provider in nifi.properties:

nifi.security.user.login.identity.provider=kerberos-provider

In the Ranger UI, for the NiFi Resource Identifier parameter you have put *, which grants open permissions. The controlled valid options are:

- /flow (allows users to view the user interface)
- /controller
- /tenants
- /site-to-site
- /system
- /proxy
- /counters

The guide linked correctly states that your NiFi must be configured to run securely (HTTPS) and have an authentication mechanism (user certificates, LDAP, or Kerberos) in place. Without a secure setup, all users who access the NiFi UI get in with anonymous access, which gives all of them full access to all aspects of the NiFi UI. Remove the HTTP configuration in nifi.properties to prevent uncontrolled anonymous access.
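As a sketch of that last step, removing the HTTP listener in nifi.properties means clearing the HTTP properties and leaving only the HTTPS ones set (the hostname and port below are illustrative, not from this thread):

```properties
# nifi.properties sketch: clear HTTP so only HTTPS is served
nifi.web.http.host=
nifi.web.http.port=
nifi.web.https.host=nifi01.example.com
nifi.web.https.port=9443
# reference the Kerberos login identity provider configured above
nifi.security.user.login.identity.provider=kerberos-provider
```

With both http properties empty, NiFi no longer opens the plain-HTTP port that allowed anonymous access.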
10-30-2017
08:01 AM
@suraj l Have you succeeded in giving your NiFi users UI access through Ranger? If so, well and good. I don't quite understand the next part of your question: "How to remove the anonymous user by getting default login. I have not given anonymous user in ranger policy." Can you elaborate?
10-29-2017
09:40 PM
1 Kudo
@Florin Miron Run the command below to check whether JCE is unlimited. To avoid several Kerberos issues, the Java Cryptography Extension (JCE) should be unlimited. If it is, local_policy.jar will contain the keyword CryptoAllPermission:

$ zipgrep CryptoAllPermission $JAVA_HOME/jre/lib/security/local_policy.jar

In my case the output is as below, which is expected:

$ zipgrep CryptoAllPermission /usr/jdk64/jdk1.8.0_112/jre/lib/security/local_policy.jar
default_local.policy: permission javax.crypto.CryptoAllPermission;

How to install JCE:

1. Go to the Oracle Java SE download page: http://www.oracle.com/technetwork/java/javase/downloads/index.html
2. Scroll down to the "Additional Resources" section, where you will find the "Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files".
3. Download the version that matches your installed JVM, e.g. UnlimitedJCEPolicyJDK7.zip.
4. Unzip the downloaded archive.
5. Copy local_policy.jar and US_export_policy.jar to $JAVA_HOME/jre/lib/security (these jars will already be there, so you have to overwrite them).
6. Restart your application to get rid of the exception, then try to restart the DataNode.
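An alternative way to check, without unzipping the jar, is to ask the JVM itself for the maximum allowed AES key length: `javax.crypto.Cipher.getMaxAllowedKeyLength("AES")` returns Integer.MAX_VALUE when the unlimited-strength policy files are installed. A sketch (assumes JDK 8's jrunscript is on the PATH; the threshold logic is factored into a function so it can be checked on its own):

```shell
#!/usr/bin/env bash
# Interpret Cipher.getMaxAllowedKeyLength("AES"): Integer.MAX_VALUE
# (2147483647) means the unlimited-strength policy files are installed.
jce_status() {
  if [ "$1" -ge 2147483647 ]; then
    echo "unlimited"
  else
    echo "limited"
  fi
}

# Query the JVM (assumes JDK 8's jrunscript is available):
# max_len=$(jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))')
# jce_status "$max_len"
jce_status 2147483647   # prints "unlimited" (patched JDK)
jce_status 128          # prints "limited" (default export-restricted JDK)
```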
10-29-2017
07:33 PM
@Florin Miron Yeah, that's quite disappointing; the files you have attached are not exhaustive. Can you upload the files below?
/var/log/hadoop/hdfs/hadoop-hdfs-datanode-node.bigdata.com.log
/var/log/ambari-server/ambari-server.log
This could help with the diagnostics.
10-29-2017
05:41 PM
@Fawze AbuJaber Don't give up just yet; just imagine it was a production cluster 🙂 Do you have documentation you followed? I could compare it to my MIT Kerberos HDP / AD integration setups and maybe find the discrepancy. Please revert.