
Installation of HUE using Ambari for HDP 2.6.5

Expert Contributor

Team,

I am following the link below to install HUE managed by Ambari.

HDP version: 2.6.5

Ambari version:

https://github.com/EsharEditor/ambari-hue-service

After following a few steps, when I try to install the HUE service from Ambari I get the error below.

 

2020-03-31 16:43:07,626 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2020-03-31 16:43:07,630 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2020-03-31 16:43:07,631 - Group['kms'] {}
2020-03-31 16:43:07,634 - Group['livy'] {}
2020-03-31 16:43:07,634 - Group['spark'] {}
2020-03-31 16:43:07,634 - Group['ranger'] {}
2020-03-31 16:43:07,635 - Group['hue'] {}
2020-03-31 16:43:07,642 - Adding group Group['hue']
2020-03-31 16:43:07,670 - Group['hdfs'] {}
2020-03-31 16:43:07,671 - Group['zeppelin'] {}
2020-03-31 16:43:07,672 - Group['hadoop'] {}
2020-03-31 16:43:07,673 - Group['users'] {}
2020-03-31 16:43:07,673 - Group['knox'] {}
2020-03-31 16:43:07,675 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2020-03-31 16:43:07,679 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2020-03-31 16:43:07,682 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2020-03-31 16:43:07,685 - User['superset'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2020-03-31 16:43:07,687 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2020-03-31 16:43:07,688 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2020-03-31 16:43:07,690 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'ranger'], 'uid': None}
2020-03-31 16:43:07,691 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2020-03-31 16:43:07,693 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'zeppelin', u'hadoop'], 'uid': None}
2020-03-31 16:43:07,694 - User['kms'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2020-03-31 16:43:07,696 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2020-03-31 16:43:07,697 - User['druid'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2020-03-31 16:43:07,699 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2020-03-31 16:43:07,700 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2020-03-31 16:43:07,701 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2020-03-31 16:43:07,703 - User['hue'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2020-03-31 16:43:07,721 - Adding user User['hue']
2020-03-31 16:43:08,285 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2020-03-31 16:43:08,288 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2020-03-31 16:43:08,292 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2020-03-31 16:43:08,295 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2020-03-31 16:43:08,297 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2020-03-31 16:43:08,298 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2020-03-31 16:43:08,301 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2020-03-31 16:43:08,303 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2020-03-31 16:43:08,307 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2020-03-31 16:43:08,317 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2020-03-31 16:43:08,317 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2020-03-31 16:43:08,318 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2020-03-31 16:43:08,319 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2020-03-31 16:43:08,320 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2020-03-31 16:43:08,333 - call returned (0, '57467')
2020-03-31 16:43:08,333 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 57467'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2020-03-31 16:43:08,340 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 57467'] due to not_if
2020-03-31 16:43:08,340 - Group['hdfs'] {}
2020-03-31 16:43:08,342 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}
2020-03-31 16:43:08,343 - User['admin'] {'fetch_nonlocal_groups': True}
2020-03-31 16:43:08,345 - FS Type: 
2020-03-31 16:43:08,346 - Directory['/etc/hadoop'] {'mode': 0755}
2020-03-31 16:43:08,362 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'root', 'group': 'hadoop'}
2020-03-31 16:43:08,363 - Writing File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] because contents don't match
2020-03-31 16:43:08,363 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2020-03-31 16:43:08,377 - Repository['HDP-2.6-repo-301'] {'append_to_file': False, 'base_url': 'http://private-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.5.128-2', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-301', 'mirror_list': None}
2020-03-31 16:43:08,384 - File['/etc/yum.repos.d/ambari-hdp-301.repo'] {'content': '[HDP-2.6-repo-301]\nname=HDP-2.6-repo-301\nbaseurl=http://private-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.5.128-2\n\npath=/\nenabled=1\ngpgcheck=0'}
2020-03-31 16:43:08,385 - Writing File['/etc/yum.repos.d/ambari-hdp-301.repo'] because contents don't match
2020-03-31 16:43:08,385 - Repository with url http://public-repo-1.hortonworks.com/HDP-GPL/centos7/2.x/updates/2.6.5.1050 is not created due to its tags: set([u'GPL'])
2020-03-31 16:43:08,385 - Repository['HDP-UTILS-1.1.0.22-repo-301'] {'append_to_file': True, 'base_url': 'http://private-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-301', 'mirror_list': None}
2020-03-31 16:43:08,388 - File['/etc/yum.repos.d/ambari-hdp-301.repo'] {'content': '[HDP-2.6-repo-301]\nname=HDP-2.6-repo-301\nbaseurl=http://private-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.5.128-2\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.22-repo-301]\nname=HDP-UTILS-1.1.0.22-repo-301\nbaseurl=http://private-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2020-03-31 16:43:08,388 - Writing File['/etc/yum.repos.d/ambari-hdp-301.repo'] because contents don't match
2020-03-31 16:43:08,388 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2020-03-31 16:43:08,836 - Skipping installation of existing package unzip
2020-03-31 16:43:08,836 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2020-03-31 16:43:08,919 - Skipping installation of existing package curl
2020-03-31 16:43:08,919 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2020-03-31 16:43:09,003 - Skipping installation of existing package hdp-select
2020-03-31 16:43:09,007 - The repository with version 2.6.5.128-2 for this command has been marked as resolved. It will be used to report the version of the component which was installed
2020-03-31 16:43:09,011 - Skipping stack-select on HUE because it does not exist in the stack-select package structure.
2020-03-31 16:43:09,257 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2020-03-31 16:43:09,260 - Package['wget'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2020-03-31 16:43:09,428 - Skipping installation of existing package wget
2020-03-31 16:43:09,429 - Package['tar'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2020-03-31 16:43:09,561 - Skipping installation of existing package tar
2020-03-31 16:43:09,562 - Package['asciidoc'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2020-03-31 16:43:09,646 - Installing package asciidoc ('/usr/bin/yum -d 0 -e 0 -y install asciidoc')
2020-03-31 16:43:10,410 - Execution of '/usr/bin/yum -d 0 -e 0 -y install asciidoc' returned 1. Error: Nothing to do
Loaded plugins: product-id
Cannot upload enabled repos report, is this client registered?
2020-03-31 16:43:10,410 - Failed to install package asciidoc. Executing '/usr/bin/yum clean metadata'
2020-03-31 16:43:10,623 - Retrying to install package asciidoc after 30 seconds
2020-03-31 16:43:41,768 - The repository with version 2.6.5.128-2 for this command has been marked as resolved. It will be used to report the version of the component which was installed
2020-03-31 16:43:41,773 - Skipping stack-select on HUE because it does not exist in the stack-select package structure.

 

 

Any advice or solution is highly appreciated.

 

Regards

Bharad

 


Expert Contributor

@stevenmatison

I have changed the password; it did not work.

I have updated lines 92-94 in common.py; it did not work.

I made changes to params.py, and it gave the error below.

 

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HUE/4.6.0/package/scripts/hue_server.py", line 76, in <module>
    HueServer().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HUE/4.6.0/package/scripts/hue_server.py", line 26, in start
    import params
  File "/var/lib/ambari-agent/cache/common-services/HUE/4.6.0/package/scripts/params.py", line 194, in <module>
    webhdfs_url = format('http://' + dfs_namenode_http_address + '/webhdfs/v1')
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/config_dictionary.py", line 73, in __getattr__
    raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'dfs.namenode.http-address' was not found in configurations dictionary!

I see that dfs.namenode.http-address is configured under the HDFS configuration. Do I have to do anything separately for HUE?

 

Regards

Bharad


Super Guru
I will check again in the morning. Some of what you are seeing now is that the original Ambari Hue service wasn't finished for high availability. This is a custom service, so...


The last changes I suggest are to get around those settings manually.
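
For example (a sketch only, not the repo's actual fix): params.py line 194 fails because, with NameNode HA, hdfs-site carries per-NameNode keys such as dfs.namenode.http-address.<nameservice>.<nn-id> instead of the plain dfs.namenode.http-address, and attribute-style access on the config dictionary raises Fail when a key is missing. One way to get around that setting manually is to look the value up with a default; 'mycluster' and 'nn1' below are placeholders for your actual nameservice and NameNode ID.

# Sketch of a manual workaround in params.py; the HA key name is a
# placeholder and must match your cluster's nameservice/NameNode IDs.
from resource_management.libraries.functions.default import default

# Try the non-HA key first; returns None instead of raising Fail.
dfs_namenode_http_address = default(
    '/configurations/hdfs-site/dfs.namenode.http-address', None)
if dfs_namenode_http_address is None:
    # Fall back to an HA-style key; replace 'mycluster' and 'nn1'.
    dfs_namenode_http_address = default(
        '/configurations/hdfs-site/dfs.namenode.http-address.mycluster.nn1',
        'localhost:50070')

webhdfs_url = 'http://' + dfs_namenode_http_address + '/webhdfs/v1'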


High Availability and SSL are on my list of things to do, but so far I have been very busy just getting the base of this all working with a single-node cluster.

I will run a multi-node cluster in the morning and update the repo.

Expert Contributor

@stevenmatison Thank you, sir.

Super Guru

@bhara This morning I set up a 3-node HDP 2.6.5 cluster with NameNode HA (the point where you hit the conflict). The install worked from the management pack without issues. Hue installed and started. No problems.

 

In my early-morning testing I had installed Hue before doing NameNode HA. So for the HA test, I stopped Hue, executed the two rm commands below, deleted the service from Ambari, and did a re-install from Ambari.

 

   # Remove the previous Hue install before re-adding the service in Ambari.
   rm -rf /usr/local/hue
   rm -rf /usr/hdp/current/hue-server

 

If you continue to have issues, you will need to debug using the method I showed you before: edit the files that are throwing errors. Looking at the conflicts and commenting/debugging the Python is how I got this to work at all. In my original article I discuss some important information about debugging the service files directly and using the retry button during the install or start section of the Ambari Add Service Wizard.
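
As a minimal sketch of that kind of direct debugging (assuming the resource_management Logger that the Ambari agent ships with), you can drop temporary trace lines into package/scripts/params.py or hue_server.py and hit retry to see the resolved values in the task log:

# Temporary trace lines for debugging the service scripts; remove when done.
from resource_management.core.logger import Logger

# Log the value params.py resolved so it appears in the Ambari task output.
Logger.info("HUE params: webhdfs_url = " + str(webhdfs_url))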

 

If you do find a solution that needs to be changed, please report it back so that I can get the repo updated. I am doing an update now to include the rm commands above, making an install/reinstall from Ambari easier.

 

Expert Contributor

@stevenmatison Sure. Let me reinstall and see how it goes. I will reach out to you if I need any further help. I appreciate all your support.

 

Regards

Bharad

Super Guru

@bhara No problem. Glad I could help, and I appreciate your feedback too.

 

When you do get it started, the setup and configuration are going to need similar attention.

 

Please keep me updated and/or reach out in Private Messages. You can always open new Questions here and tag me in them too.