Member since: 05-30-2018
Posts: 1322
Kudos Received: 715
Solutions: 148

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4130 | 08-20-2018 08:26 PM |
| | 2003 | 08-15-2018 01:59 PM |
| | 2431 | 08-13-2018 02:20 PM |
| | 4232 | 07-23-2018 04:37 PM |
| | 5111 | 07-19-2018 12:52 PM |
03-07-2017
05:34 PM
1 Kudo
@Sunile Manjee Even if you were to use NiFi's file-based authorizer instead of Ranger, the same limitation exists with maintaining authorizations when moving templates from one NiFi environment to another. Templates were never intended or designed to be the answer to the SDLC, although they represent the closest thing to it for now. A template is nothing more than a snippet of NiFi components that can be reused within the same NiFi instance or downloaded and shared with users of other NiFi instances. Templates cannot be hardcoded to use specific component UUIDs, nor would we want them to be, because that would hinder their reusability within the same NiFi instance. We also can't include any authorizations with a template, since there is no way of knowing that other NiFi instances into which the template is loaded will contain the same set of users. Nor can we set authorizations based on process group (PG) names. What if another PG is created with that same name in another process group? What if a user happens to use a PG name that already has policies associated with it? The result could present a security issue. There is ongoing work towards a better SDLC model for NiFi. That being said, the default behavior when adding a template to the graph is that all components inherit the policies of the parent process group. So if at the root level you create several process groups, each with a specific set of authorizations, instantiating your templates in a given process group will establish a controlled set of authorizations. Not the ideal solution, but it helps somewhat until future work improves the SDLC story. Thanks, Matt
03-04-2017
04:45 AM
1 Kudo
Check your livy.superusers setting under Spark -> Livy in Ambari; it should match your Zeppelin principal, in this case zeppelin-lake. You are using a custom Zeppelin service name, and there may also be some bugs in that case. Also check the Zeppelin principal stored in the keytab referenced by zeppelin.server.kerberos.keytab. Many other issues are covered in this post and in this blog, but note that the Zeppelin service name and the principal in those posts is "zeppelin".
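As a quick sanity check, you can compare the principal in the keytab with the Livy superuser list. This is only a sketch; the keytab path, config location, and the zeppelin-lake value are assumptions to adjust for your cluster:

```
# List the principals actually stored in the Zeppelin keytab
# (path below is a common Ambari default; yours may differ)
klist -kt /etc/security/keytabs/zeppelin.server.kerberos.keytab

# See which users Livy currently allows to impersonate others
grep -r "livy.superusers" /etc/livy/conf/

# For this case the setting should contain the custom service name, e.g.
#   livy.superusers=zeppelin-lake
```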
08-07-2018
05:09 PM
Thanks for sharing this worthwhile information here. Just a great article. Hope you will share new articles or posts in the future. Good blog post. I want to thank you for the interesting Register LLp In India and helpful information, and I like your point of view. Thank you! I love to read this type of material; I take good and useful information from it. Thank you for posting such a good post.
03-02-2017
03:23 PM
Hi Sunile, As we discussed yesterday, I found this while installing HDP 2.5.3 using Ambari 2.4.2. Looking further into this, RHEL 7.3 comes with snappy 1.1.0-3.el7 installed, while HDP 2.5.3 needs snappy 1.0.5-1.el6.x86_64. I spun up a RHEL 7.3 instance and ran the following command, which showed that snappy 1.1.0-3.el7 came pre-installed. As Jay posted, looking at the latest documentation for Ambari 2.4.2, I found this problem covered in "Resolving Cluster Deployment Problems" - there should be a bug fix that goes into RHEL 7 (so we don't rely on a RHEL 6 dependency): https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-troubleshooting/content/resolving_cluster_install_and_configuration_problems.html - What do you think?
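The command and its output didn't survive in this post; a check along these lines (a sketch, assuming a stock RHEL 7.3 image with rpm available) shows the pre-installed package:

```
# Query the snappy package that ships with RHEL 7.3
rpm -qa | grep -i snappy
# reports snappy-1.1.0-3.el7.x86_64 on a stock instance,
# while HDP 2.5.3 expects snappy-1.0.5-1.el6.x86_64
```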
02-27-2017
05:52 PM
@Muhammad Touseef You can use the instructions below to stop Hadoop services before the upgrade. One thing you absolutely want to make sure of is that no jobs are running; let any running jobs complete. The right way to do it is to stop the queues, so that running jobs finish but no new jobs can be submitted (a minimal queue-stop sketch follows below). Once all jobs are complete and nothing is running, you can use the "Stop All" command from Ambari or simply follow these instructions:
https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html --> see how to stop queues
https://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.0/bk_upgrading_Ambari/content/_stop_cluster_and_checkpoint_HDFS_mamiu.html --> stop all
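For reference, stopping a Capacity Scheduler queue means setting its state to STOPPED and refreshing the queues. A minimal sketch for the root queue (queue name and file are the usual defaults; adjust to your layout):

```
<!-- capacity-scheduler.xml: stop the root queue so no new applications are accepted -->
<property>
  <name>yarn.scheduler.capacity.root.state</name>
  <value>STOPPED</value>
</property>
```

After saving the change, `yarn rmadmin -refreshQueues` applies it without a restart; running applications finish while new submissions are rejected.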
02-27-2017
10:15 AM
Hi Sunile, Thanks for your answer. We think we will store our initial model and then all alter scripts, but the alter scripts will also be folded back into the initial model in case a complete re-deployment is wanted. To view a logical model, we will use a tool that can reverse engineer the DDL. We are trying to establish a workflow along those lines and hope it works.
03-15-2018
05:13 PM
Any idea if there are plans to include the Kafka REST proxy in HDF in the near future? I noticed the JIRA was closed because it didn't pass the community vote, but it has been added to the Confluent Platform, and I was wondering if the same will happen for HDF.
03-15-2019
03:04 PM
Use the ExecuteStreamCommand processor with a sed command, something like the sketch below. This worked for me.
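The exact command from this reply wasn't preserved; purely as an illustration (the s/foo/bar/g pattern and the sed path are placeholders), an ExecuteStreamCommand configuration that pipes the FlowFile content through sed could look like this:

```
Command Path:        /bin/sed
Command Arguments:   -e;s/foo/bar/g
Argument Delimiter:  ;
```

With Output Destination Attribute left empty, the edited text replaces the FlowFile content and is routed to the output stream relationship.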
02-27-2018
09:34 PM
Yes, by default. You can change the ports as well in Ambari; see:
https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html
https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.1.0/bk_installing-nifi/content/ch02s04.html
Or it could be 8443.
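The port settings live in nifi.properties (exposed through the NiFi configs in Ambari). A sketch of typical values; these are common defaults and not necessarily what your install uses, with 8443 being the HTTPS counterpart:

```
# nifi.properties - web listener ports (common defaults, adjust to your install)
nifi.web.http.port=9090      # plain HTTP (Apache NiFi standalone defaults to 8080)
nifi.web.https.port=8443     # takes over when the instance is secured with TLS
```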