08-19-2019
03:23 PM
This article describes how to configure authorizers.xml to use the Composite User Group Provider with both LDAP- and file-based users and groups in Cloudera Manager for CDF 1.0. It assumes TLS has already been configured for NiFi, using either the NiFi CA or your own certificates.

Here is the Active Directory structure that I will use for my sync:

Here are the groups in the Groups OU:

Here are the users in the Users OU:

Here are the service users in the ServiceUsers OU:

First, I will import my Active Directory root certificate into the NiFi and Java truststores:

keytool -import -file adcert.pem -alias ad -keystore /var/lib/nifi/cert/truststore.jks
keytool -import -file adcert.pem -alias ad -keystore /etc/pki/java/cacerts

Now log in to Cloudera Manager and go to the NiFi configuration. Update the configuration to use the composite-user-group-provider as follows:

xml.authorizers.accessPolicyProvider.file-access-policy-provider.property.User Group Provider=composite-user-group-provider

Next, use the "NiFi Node Advanced Configuration Snippet (Safety Valve) for staging/authorizers.xml" and add the following configuration using the XML view:

<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.class</name>
<value>org.apache.nifi.ldap.tenants.LdapUserGroupProvider</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.Manager DN</name>
<value>CN=ldapbind-svc,OU=ServiceUsers,OU=CLDR,DC=nismaily,DC=com</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.Manager Password</name>
<value>hadoop</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.Referral Strategy</name>
<value>FOLLOW</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.Connect Timeout</name>
<value>10 secs</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.Read Timeout</name>
<value>10 secs</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.Url</name>
<value>ldaps://win-ltfjo4jgo4r.nismaily.com:636</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.Sync Interval</name>
<value>15 mins</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.User Search Base</name>
<value>OU=Users,OU=CLDR,DC=nismaily,DC=com</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.User Object Class</name>
<value>user</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.User Search Scope</name>
<value>SUBTREE</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.User Identity Attribute</name>
<value>sAMAccountName</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.User Group Name Attribute</name>
<value>memberof</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.Group Search Base</name>
<value>OU=Groups,OU=CLDR,DC=nismaily,DC=com</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.Group Object Class</name>
<value>group</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.Group Search Scope</name>
<value>SUBTREE</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.Group Name Attribute</name>
<value>cn</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.Group Member Attribute</name>
<value>member</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.composite-user-group-provider.class</name>
<value>org.apache.nifi.authorization.CompositeConfigurableUserGroupProvider</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.composite-user-group-provider.property.Configurable User Group Provider</name>
<value>file-user-group-provider</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.composite-user-group-provider.property.User Group Provider 1</name>
<value>ldap-user-group-provider</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.Authentication Strategy</name>
<value>LDAPS</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.TLS - Keystore</name>
<value>/var/lib/nifi/cert/keystore.jks</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.TLS - Keystore Password</name>
<value>hadoop</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.TLS - Keystore Type</name>
<value>jks</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.TLS - Truststore</name>
<value>/var/lib/nifi/cert/truststore.jks</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.TLS - Truststore Password</name>
<value>hadoop</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.TLS - Truststore Type</name>
<value>jks</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.TLS - Client Auth</name>
<value>WANT</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.TLS - Protocol</name>
<value>TLSv1.2</value>
</property>
<property>
<name>xml.authorizers.userGroupProvider.ldap-user-group-provider.property.TLS - Shutdown Gracefully</name>
<value>false</value>
</property>

Restart NiFi and log in to the NiFi UI. Go to the Users tab, and you will see the users synced from Active Directory.
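If the synced users do not appear, a quick sanity check of the LDAP settings from a NiFi node is OpenLDAP's ldapsearch. This is a minimal sketch using the bind DN and user search base configured above (the LDAPTLS_CACERT path is an assumption; point it at the PEM you imported into the truststores):

# Bind as the service account and list users with their group memberships.
# LDAPTLS_CACERT must point at the AD root certificate in PEM form (assumption: adcert.pem).
LDAPTLS_CACERT=adcert.pem ldapsearch \
  -H ldaps://win-ltfjo4jgo4r.nismaily.com:636 \
  -D "CN=ldapbind-svc,OU=ServiceUsers,OU=CLDR,DC=nismaily,DC=com" \
  -W \
  -b "OU=Users,OU=CLDR,DC=nismaily,DC=com" \
  "(objectClass=user)" sAMAccountName memberOf

The -W flag prompts for the bind password instead of putting it on the command line.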
06-24-2017
06:01 PM
1 Kudo
Try this:

sqoop import \
--username username \
-P \
--connect jdbc:sap://hostname:portnumber/ \
--driver com.sap.db.jdbc.Driver \
--query "select * from \"schema\".\"/path/to/hana/calculation\" where \$CONDITIONS" \
--hcatalog-database hivedatabase \
--hcatalog-table hivetable \
--create-hcatalog-table \
--hcatalog-storage-stanza "stored as orc"
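Once the import completes, a quick way to confirm the rows landed is a count through beeline (a sketch; the HiveServer2 host and port are assumptions for your environment):

beeline -u "jdbc:hive2://hiveserver2-host:10000" \
  -e "SELECT COUNT(*) FROM hivedatabase.hivetable;"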
05-26-2017
05:41 AM
This is amazing, thanks for posting this!
03-25-2017
03:09 AM
1 Kudo
Did you copy the ojdbc JAR to the NiFi lib directory and restart NiFi?
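For example (a sketch; the driver file name and the HDF NiFi install path are assumptions for your environment):

cp ojdbc7.jar /usr/hdf/current/nifi/lib/
chown nifi:nifi /usr/hdf/current/nifi/lib/ojdbc7.jar
# Then restart NiFi (e.g., from Ambari) so the driver is loaded.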
12-21-2016
09:32 PM
What is the correct process for moving the JournalNode directories to a different disk?

Let's say I have 3 servers, each running a JournalNode service: server1, server2, server3. The JournalNodes are currently writing to /var/lib/hadoop/hdfs/journal, and I want to change this to /data1/hadoop/hdfs/journal on each server.

I've tried:
1) Stopping the NameNodes (HA)
2) Copying the data from /var/lib/hadoop/hdfs/journal to /data1/hadoop/hdfs/journal
3) Changing the dir property for the JournalNodes in Ambari
4) Starting the NameNodes

This did not seem to work.
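For reference, a sketch of the copy step in 2), assuming the journal data is owned by hdfs:hadoop; ownership and permissions on the new directory must match the old one:

# Run on each JournalNode host while its JournalNode is stopped.
rsync -a /var/lib/hadoop/hdfs/journal/ /data1/hadoop/hdfs/journal/
# -a preserves ownership, permissions, and timestamps; verify the result:
ls -lR /data1/hadoop/hdfs/journal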
Labels: Apache Hadoop
12-21-2016
09:27 PM
4 Kudos
Edit the following file: /etc/ambari-server/conf/ambari.properties

Increase the following parameters:
views.request.connect.timeout.millis
views.request.read.timeout.millis

Then restart Ambari Server.
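For example (the timeout values are illustrative assumptions, in milliseconds):

# /etc/ambari-server/conf/ambari.properties
views.request.connect.timeout.millis=120000
views.request.read.timeout.millis=120000

Then:

ambari-server restart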
11-08-2016
08:36 PM
I increased the capacity and it worked, thanks!
11-03-2016
09:21 PM
I'm having trouble getting Hive Interactive (LLAP) to work on a Kerberized HDP 2.5 cluster. It seems like only 1 daemon starts. Here's the error:

2016-11-03 15:50:50,084 - Marker index for start of JSON data for 'llapsrtatus' comamnd : 0
2016-11-03 15:50:50,085 - LLAP app 'llap0' in 'RUNNING_PARTIAL' state. Live Instances : '1'. Desired Instances : '4' after 221.781697989 secs.
2016-11-03 15:50:50,085 - LLAP app 'llap0' did not come up after a wait of 221.782001972 seconds.
2016-11-03 15:50:50,087 - LLAP app 'llap0' deployment unsuccessful.

What is the cause of this?
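In case it helps anyone hitting the same error, a first diagnostic step is pulling the YARN logs for the LLAP application (a sketch; the application id below is a placeholder you would look up first):

# Find the YARN application id for the LLAP app (named llap0 above).
yarn application -list -appStates ALL
# Fetch its aggregated logs (placeholder id).
yarn logs -applicationId application_1478000000000_0001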
Tags: Hadoop Core, Hive, llap
Labels: Apache Hive
06-01-2016
12:48 AM
2 Kudos
Thanks Ravi, I had to:
1) Copy the Spark shuffle JARs to the NodeManager classpaths on all nodes
2) Add spark_shuffle to yarn.nodemanager.aux-services and set yarn.nodemanager.aux-services.spark_shuffle.class to org.apache.spark.network.yarn.YarnShuffleService in yarn-site.xml (via Ambari)
3) Restart all NodeManagers
4) Add the following to spark-defaults.conf:
spark.dynamicAllocation.enabled true
spark.shuffle.service.enabled true
5) Set these parameters on a per-job basis (as shown below):
spark.dynamicAllocation.initialExecutors=#
spark.dynamicAllocation.minExecutors=#
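For example, the per-job values in 5) can be passed with --conf at submit time (a sketch; the class name, JAR, and executor counts are placeholders):

spark-submit \
  --class com.example.MyApp \
  --master yarn \
  --conf spark.dynamicAllocation.initialExecutors=4 \
  --conf spark.dynamicAllocation.minExecutors=2 \
  myapp.jar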
05-25-2016
09:19 PM
Thanks Ravi, this is very close to what I need. One question: spark.dynamicAllocation.minExecutors seems to be a global property in spark-defaults. Is there a way to set this property on a job-by-job basis?

Spark job1 -> min executors 8
Spark job2 -> min executors 5
05-25-2016
07:09 PM
1 Kudo
Can we configure the Capacity Scheduler in such a way that a Spark job only runs when it can procure enough resources? In the current FIFO setup, a Spark job will start running if it can get only a few of the required executors, but then fail because it couldn't get enough resources. I would like the Spark job to start only when it can procure all the required resources.
Labels: Apache Spark
05-25-2016
07:06 PM
Is there any documentation available on integrating FreeIPA with HDP?
05-19-2016
12:01 AM
Thanks Sridhar, but what if job3 runs after job1, and then job4 runs, and then job2 runs?
05-18-2016
11:48 PM
2 Kudos
Has anyone come across the following scenario? I launch 5 YARN jobs (requiring variable resources) in the same queue, in this order: job1, job2, job3, job4, job5. I've configured a Capacity Scheduler with FIFO ordering.

Observed behavior:
- job1 runs
- job2 - job5 are in the waiting state
- once job1 completes, job2 - job5 run in random order (job4, job2, job3, job5)

Is this expected?
Tags: Capacity Scheduler, capacity scheduler queue, capacity-scheduler, Cloud & Operations, YARN, yarn-scheduler
Labels: Apache YARN
05-12-2016
03:54 PM
Alon, what are you attempting to do? Is fs.hdfs.impl the correct property you are looking to change?
05-11-2016
10:30 PM
Can you attach your Hive logs? Also, change the log level to debug:

$HIVE_HOME/bin/hive --hiveconf hive.root.logger=DEBUG,DRFA
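With the DRFA appender and the stock hive-log4j settings, the log is written to hive.log under the user's temp directory (an assumption if hive.log.dir has been customized):

tail -f /tmp/$(whoami)/hive.log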
05-11-2016
10:17 PM
Are there any available metrics for the overhead associated with enabling the various types of encryption (RPC, HTTPS, etc...) across the cluster?
Labels: Cloudera Navigator Encrypt
05-11-2016
09:47 PM
2 Kudos
If we look at the configuration for an older version of Hadoop:

name | value | description
fs.hdfs.impl | org.apache.hadoop.dfs.DistributedFileSystem | The FileSystem for hdfs: uris.

If we look at the configuration of Hadoop 2.7.1:

name | value | description
fs.AbstractFileSystem.hdfs.impl | org.apache.hadoop.fs.Hdfs | The FileSystem for hdfs: uris.

For the second part of your question: core-default.xml just gives you the default implementation for core-site.xml. You can add the properties to core-site.xml to overwrite the default values.
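For example, an override in core-site.xml looks like this (a sketch, shown with the 2.7.1 default value; replace it with your own implementation class):

<property>
  <name>fs.AbstractFileSystem.hdfs.impl</name>
  <value>org.apache.hadoop.fs.Hdfs</value>
</property>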
05-11-2016
07:51 PM
1 Kudo
HDP 2.3 uses Apache Hadoop 2.7.1. The available configuration (in core-site.xml) for this version of Hadoop can be found here: https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml

It looks like it may have been changed to fs.AbstractFileSystem.hdfs.impl.
05-10-2016
04:17 AM
4 Kudos
The error you are getting has to do with the logic in your execute method, particularly:

String str = tuple.getStringByField("word");

The tick tuple your bolt receives is mixed in with the other normal tuples you are processing, and you are attempting to get the field named "word" before checking whether the tuple is a tick tuple. The tick tuple has one field, "rate_secs", which is equal to the value set for TOPOLOGY_TICK_TUPLE_FREQ_SECS in your conf. When you receive this tick tuple, you attempt to get a field that does not exist, "word", and assign it to the String str. This is the reason you get the IllegalArgumentException: word does not exist.

Move

String str = tuple.getStringByField("word");

after your check for the tick tuple (see the sketch below).
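A minimal sketch of the corrected execute method; the tick-tuple check is the standard comparison of source component and stream id (Constants is org.apache.storm.Constants, or backtype.storm.Constants on older Storm versions), and the processing after it is a placeholder for your own logic:

private static boolean isTickTuple(Tuple tuple) {
    // System tick tuples come from the __system component on the __tick stream.
    return Constants.SYSTEM_COMPONENT_ID.equals(tuple.getSourceComponent())
        && Constants.SYSTEM_TICK_STREAM_ID.equals(tuple.getSourceStreamId());
}

@Override
public void execute(Tuple tuple) {
    if (isTickTuple(tuple)) {
        // Do periodic work here; this tuple has no "word" field.
        return;
    }
    // Safe: only normal tuples reach this point.
    String str = tuple.getStringByField("word");
    // ... process str ...
}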