Member since 07-25-2016 | 15 Posts | 0 Kudos Received | 0 Solutions
06-25-2020
10:44 PM
Issue
Unable to install NiFi (HDF 3.5.1) on an HDP 3.1.5 cluster.
Background
In a twelve-node HDP 3.1.5 cluster, all services are installed and running successfully. The cluster has Kerberos and full SSL enabled. The correct mpack for HDF NiFi has been installed, and the Ambari server and agents have been restarted. However, NiFi fails to install.
Error
The following is the error displayed during the installation:
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi.py", line 312, in <module>
    Master().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi.py", line 66, in install
    self.install_packages(env)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 853, in install_packages
    retry_count=agent_stack_retry_count)
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/packaging.py", line 30, in action_install
    self._pkg_manager.install_package(package_name, self.__create_context())
  File "/usr/lib/ambari-agent/lib/ambari_commons/repo_manager/yum_manager.py", line 219, in install_package
    shell.repository_manager_executor(cmd, self.properties, context)
  File "/usr/lib/ambari-agent/lib/ambari_commons/shell.py", line 753, in repository_manager_executor
    raise RuntimeError(message)
RuntimeError: Failed to execute command '/usr/bin/yum -y install nifi_3_5_*', exited with code '1', message: 'https://<username>:<password>@archive.cloudera.com/p/HDF/centos7/3.x/updates/3.5.1.0/repodata/repomd.xml: [Errno 14] HTTPS Error 403 - Forbidden
Troubleshooting
In this case, a version.xml file was used when installing the HDP cluster. Under Managed Versions > Versions > 3.1.5, the listings for HDF, HDP, and HDP-Utils were the same; each displayed a field like the following:
https://****:****@archive.cloudera.com/p/HDF/centos7/3.x/updates/3.5.1.0
The Cloudera documentation says to edit the ambari-hdp-1.repo file on the NiFi nodes. However, the installation overwrites this file on every attempt: even after replacing <username>:<password> with the license ID and password, rerunning the installation reverts the file to its original configuration.
File ambari-hdp-1.repo
[HDF-3.5-repo-1]
name=HDF-3.5-repo-1
baseurl=https://<username>:<password>@archive.cloudera.com/p/HDF/centos7/3.x/updates/3.5.1.0
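Before changing the repo file again, it helps to confirm whether the paywall credentials are accepted at all. A minimal check from one of the NiFi nodes, reusing the repodata URL from the error above (substitute the real license ID and password for the placeholders):
# Request only the headers of the repo metadata using the paywall credentials.
curl -I -u '<username>:<password>' \
  'https://archive.cloudera.com/p/HDF/centos7/3.x/updates/3.5.1.0/repodata/repomd.xml'
# HTTP 200 means the credentials work; HTTP 403 reproduces the yum failure above.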
Resolution
In the original version.xml file used to install HDP, there were no entries for HDF. However, the new listing in Ambari clearly showed the services for NiFi, NiFi Registry, etc. being added. This may have been an issue because the <username>:<license> was not updated during the installation of the mpack.
To resolve this issue, manually insert the username:license into Managed Versions > HDP 3.1.5 > HDF 3.5.0.
Essentially, manually building a combined HDP/HDF version.xml should let you install NiFi without any issues.
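For illustration, here is a sketch of what the added HDF repository entry could look like inside the combined version.xml, following Ambari's version definition file (VDF) layout; the repoid and reponame values below are assumptions to adjust to your Managed Versions listing:
<!-- Fragment of a combined version.xml (Ambari VDF format); a sketch, not the full file. -->
<repository-info>
  <os family="redhat7">
    <!-- ... existing HDP and HDP-UTILS <repo> entries stay unchanged ... -->
    <repo>
      <!-- Assumed repoid/reponame; embed the license ID and password in the baseurl. -->
      <baseurl>https://<username>:<password>@archive.cloudera.com/p/HDF/centos7/3.x/updates/3.5.1.0</baseurl>
      <repoid>HDF-3.5</repoid>
      <reponame>HDF</reponame>
    </repo>
  </os>
</repository-info>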
11-06-2019
05:11 AM
Error: Unable to instantiate class [org.apache.zeppelin.server.LdapRealm] for object named 'ldapRealm'. The same is true for 'ldapGroupRealm'.
Environment: Zeppelin 0.8 on HDP 3.1.0.0
Configuration:
# authentication settings
ldapRealm = org.apache.zeppelin.server.LdapRealm
ldapRealm.contextFactory.environment[ldap.searchBase] = DC=tantor,DC=net
ldapRealm.userDnTemplate = uid={0},OU=users,DC=tantor,DC=net
ldapRealm.contextFactory.url = ldap://infra01.tantor.net:389
ldapRealm.contextFactory.authenticationMechanism = SIMPLE
# general settings
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.cacheManager = $cacheManager
securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login
Exception:
ERROR [2019-11-06 12:49:24,565] ({main} EnvironmentLoader.java[initEnvironment]:146) - Shiro environment initialization failed
org.apache.shiro.config.ConfigurationException: Unable to instantiate class [org.apache.zeppelin.server.LdapRealm] for object named 'ldapRealm'. Please ensure you've specified the fully qualified class name correctly.
at org.apache.shiro.config.ReflectionBuilder.createNewInstance(ReflectionBuilder.java:309)
NOTE: I have tried both LdapRealm and LdapGroupRealm with the same results.
Additional Information: http://mail-archives.apache.org/mod_mbox/zeppelin-users/201611.mbox/%3CCAPyZXSRUtGx9A_QB9BZN6H8mkk1gU88nAEJra38JhP44uhM3sw@mail.gmail.com%3E
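One thing worth checking, and this is an assumption on my part rather than a confirmed fix: in Zeppelin 0.8 the Shiro realm classes appear to live under the org.apache.zeppelin.realm package rather than org.apache.zeppelin.server, so the realm declarations in shiro.ini would become:
# shiro.ini realm declarations, assuming the Zeppelin 0.8 package layout
ldapRealm = org.apache.zeppelin.realm.LdapRealm
ldapGroupRealm = org.apache.zeppelin.realm.LdapGroupRealm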
Labels: Apache Zeppelin
10-13-2017
01:04 AM
We should build this into the Ambari web UI, perhaps as a check box at the bottom of the add-user form. This would allow the Ambari admin to decide whether the Ambari user account should get a working directory in HDFS at the time of user creation.
10-13-2017
12:59 AM
Kartik, thank you very much. This is an important change to my presentations. I will incorporate it into my next class.
09-26-2017
09:41 AM
I have been repeatedly asked why there is no option to create an HDFS working directory during the creation of an Ambari user. I have explained that creating a working directory requires access to the hdfs user. But it does seem this could be handled programmatically; after all, we are creating the user while acting as the Ambari admin, and generally as the cluster admin as well. Is there a practical reason for this situation?
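For context, the manual steps an admin runs today after adding the Ambari user look roughly like this (a sketch; newuser is a placeholder, and the group assignment is an assumption):
# Create the working directory as the hdfs superuser, then hand it to the new user.
sudo -u hdfs hdfs dfs -mkdir /user/newuser
sudo -u hdfs hdfs dfs -chown newuser:hdfs /user/newuser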
Labels: Apache Ambari
09-26-2017
09:34 AM
When installing the software for Ambari, ambari-server setup-ldap takes a complex configuration to declare the LDAP connection. There is a parameter --ldap-url, which we set to an AD host. Can this parameter take a comma-separated list? The actual question is: how do we point the Ambari server at more than one AD host to support AD failover?
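For what it is worth, here is a sketch of what I would expect the non-interactive form to look like, assuming the Ambari release supports a secondary LDAP URL (the --ldap-secondary-url option and the host names are assumptions to verify):
# Point Ambari at a primary and a secondary AD host for failover (sketch).
ambari-server setup-ldap \
  --ldap-url=ad01.tantor.net:389 \
  --ldap-secondary-url=ad02.tantor.net:389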
Labels: Apache Ambari
02-28-2017
02:52 AM
I was hoping to find a parameter to set a session time limit for Ambari users, resulting in an automatic logout. Does such a parameter exist? If not, we should file a request.
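For reference, this is the kind of setting I am looking for; some Ambari releases appear to expose inactivity-timeout properties in ambari.properties (the property name and unit below are my assumptions to verify against the release notes):
# /etc/ambari-server/conf/ambari.properties (sketch)
# Log users out automatically after 30 minutes of inactivity (value in seconds).
user.inactivity.timeout.default=1800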
Labels: Apache Ambari
02-28-2017
12:42 AM
When using the heat map in Ambari, we can select a metric for DataNode JVM Heap Memory Used and another for DataNode JVM Heap Memory Committed. What is the difference, and why is this difference important to us?
Labels: Apache Hadoop
10-07-2016
07:09 PM
In the Fair Scheduler there are parameters controlling the algorithms that run preemption. These allow a window for containers to become available before they are killed off; a typical value is 300 seconds. I searched for a similar preemption parameter in the Capacity Scheduler but did not find one. Is there a back-off period, and if so, what is it? Is there a parameter to configure this window?
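For reference, here is a sketch of the yarn-site.xml settings I would expect to control this window, assuming the Capacity Scheduler's preemption monitor exposes them (the property names and the default values shown are assumptions to verify):
<!-- yarn-site.xml: Capacity Scheduler preemption monitor (sketch). -->
<property>
  <!-- Grace period a container marked for preemption gets before it is killed (ms). -->
  <name>yarn.resourcemanager.monitor.capacity.preemption.max_wait_before_kill</name>
  <value>15000</value>
</property>
<property>
  <!-- How often the preemption policy re-evaluates queue usage (ms). -->
  <name>yarn.resourcemanager.monitor.capacity.preemption.monitoring_interval</name>
  <value>3000</value>
</property>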
Labels: Apache YARN