Error Ranger KMS server install on Ambari 2.7.0.0 and HDP 3.0

Solved


New Contributor

I am trying to install HDP 3.0 on a 6-node cluster, where 5 nodes run the ambari-agent and 1 node is the Ambari server.

Here is the log file:

stderr: 
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER_KMS/package/scripts/kms_server.py", line 137, in <module>
    KmsServer().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER_KMS/package/scripts/kms_server.py", line 51, in install
    kms.setup_kms_db()
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER_KMS/package/scripts/kms.py", line 68, in setup_kms_db
    copy_jdbc_connector(kms_home)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER_KMS/package/scripts/kms.py", line 359, in copy_jdbc_connector
    Please run 'ambari-server setup --jdbc-db={db_name} --jdbc-driver={path_to_jdbc} on server host.'".format(params.db_flavor, params.jdk_location)
KeyError: 'db_name'
 stdout:
2019-02-02 04:43:22,791 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=None -> 3.0
2019-02-02 04:43:22,810 - Using hadoop conf dir: /usr/hdp/3.0.1.0-187/hadoop/conf
2019-02-02 04:43:22,813 - Group['kms'] {}
2019-02-02 04:43:22,815 - Group['livy'] {}
2019-02-02 04:43:22,815 - Group['spark'] {}
2019-02-02 04:43:22,819 - Group['ranger'] {}
2019-02-02 04:43:22,820 - Group['hdfs'] {}
2019-02-02 04:43:22,820 - Group['hadoop'] {}
2019-02-02 04:43:22,820 - Group['users'] {}
2019-02-02 04:43:22,821 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-02-02 04:43:22,824 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-02-02 04:43:22,829 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-02-02 04:43:22,831 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-02-02 04:43:22,833 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-02-02 04:43:22,835 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger', 'hadoop'], 'uid': None}
2019-02-02 04:43:22,840 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-02-02 04:43:22,842 - User['kms'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['kms', 'hadoop'], 'uid': None}
2019-02-02 04:43:22,844 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None}
2019-02-02 04:43:22,849 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None}
2019-02-02 04:43:22,851 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-02-02 04:43:22,853 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-02-02 04:43:22,858 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2019-02-02 04:43:22,860 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-02-02 04:43:22,862 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-02-02 04:43:22,867 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-02-02 04:43:22,868 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-02-02 04:43:22,871 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2019-02-02 04:43:22,884 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2019-02-02 04:43:22,885 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2019-02-02 04:43:22,887 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-02-02 04:43:22,889 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-02-02 04:43:22,891 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2019-02-02 04:43:22,905 - call returned (0, '1017')
2019-02-02 04:43:22,906 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1017'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2019-02-02 04:43:22,914 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1017'] due to not_if
2019-02-02 04:43:22,915 - Group['hdfs'] {}
2019-02-02 04:43:22,916 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
2019-02-02 04:43:22,917 - FS Type: HDFS
2019-02-02 04:43:22,918 - Directory['/etc/hadoop'] {'mode': 0755}
2019-02-02 04:43:22,964 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-02-02 04:43:22,966 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2019-02-02 04:43:23,007 - Repository['HDP-3.0-repo-1'] {'append_to_file': False, 'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/3.x/updates/3.0.1.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2019-02-02 04:43:23,032 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-3.0-repo-1]\nname=HDP-3.0-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/3.x/updates/3.0.1.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2019-02-02 04:43:23,034 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2019-02-02 04:43:23,039 - Repository['HDP-3.0-GPL-repo-1'] {'append_to_file': True, 'base_url': 'http://public-repo-1.hortonworks.com/HDP-GPL/centos7/3.x/updates/3.0.1.0', 'action': ['create'], 'components': [u'HDP-GPL', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2019-02-02 04:43:23,049 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-3.0-repo-1]\nname=HDP-3.0-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/3.x/updates/3.0.1.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-3.0-GPL-repo-1]\nname=HDP-3.0-GPL-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP-GPL/centos7/3.x/updates/3.0.1.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2019-02-02 04:43:23,050 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2019-02-02 04:43:23,050 - Repository['HDP-UTILS-1.1.0.22-repo-1'] {'append_to_file': True, 'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2019-02-02 04:43:23,060 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-3.0-repo-1]\nname=HDP-3.0-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/3.x/updates/3.0.1.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-3.0-GPL-repo-1]\nname=HDP-3.0-GPL-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP-GPL/centos7/3.x/updates/3.0.1.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.22-repo-1]\nname=HDP-UTILS-1.1.0.22-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2019-02-02 04:43:23,060 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2019-02-02 04:43:23,061 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2019-02-02 04:43:23,519 - Skipping installation of existing package unzip
2019-02-02 04:43:23,519 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2019-02-02 04:43:23,570 - Skipping installation of existing package curl
2019-02-02 04:43:23,570 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2019-02-02 04:43:23,626 - Skipping installation of existing package hdp-select
2019-02-02 04:43:23,639 - The repository with version 3.0.1.0-187 for this command has been marked as resolved. It will be used to report the version of the component which was installed
2019-02-02 04:43:24,364 - Package['ranger_3_0_1_0_187-kms'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2019-02-02 04:43:24,819 - Skipping installation of existing package ranger_3_0_1_0_187-kms
2019-02-02 04:43:24,822 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=None -> 3.0
2019-02-02 04:43:24,892 - Using hadoop conf dir: /usr/hdp/3.0.1.0-187/hadoop/conf
2019-02-02 04:43:24,910 - Execute[('cp', '-f', u'/usr/hdp/current/ranger-kms/install.properties', u'/usr/hdp/current/ranger-kms/install-backup.properties')] {'not_if': 'ls /usr/hdp/current/ranger-kms/install-backup.properties', 'sudo': True, 'only_if': 'ls /usr/hdp/current/ranger-kms/install.properties'}
2019-02-02 04:43:24,923 - Skipping Execute[('cp', '-f', u'/usr/hdp/current/ranger-kms/install.properties', u'/usr/hdp/current/ranger-kms/install-backup.properties')] due to not_if
2019-02-02 04:43:24,924 - Password validated
2019-02-02 04:43:24,930 - The repository with version 3.0.1.0-187 for this command has been marked as resolved. It will be used to report the version of the component which was installed


Command failed after 1 tries
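For what it's worth, the `KeyError: 'db_name'` at the bottom of the traceback is a formatting bug in the stack script that hides the intended error message: line 359 of kms.py (shown in the traceback) calls `str.format` with positional arguments while the template uses named placeholders. A minimal reproduction (the jar path here is illustrative):

```python
# The template in kms.py uses named placeholders ({db_name}, {path_to_jdbc}) but is
# formatted with positional arguments, so Python raises KeyError: 'db_name' instead
# of printing the intended "Please run 'ambari-server setup ...'" message.
template = ("Please run 'ambari-server setup --jdbc-db={db_name} "
            "--jdbc-driver={path_to_jdbc}' on the server host.")
try:
    # Positional arguments cannot satisfy a named placeholder.
    print(template.format("MYSQL", "/usr/share/java/mysql-connector-java.jar"))
except KeyError as exc:
    print("KeyError:", exc)  # prints: KeyError: 'db_name'
```

So the message the script was actually trying to print suggests registering the JDBC driver via `ambari-server setup --jdbc-db=<db> --jdbc-driver=<path>` on the Ambari server host, which is worth checking alongside the database setup.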


1 ACCEPTED SOLUTION


Re: Error Ranger KMS server install on Ambari 2.7.0.0 and HDP 3.0

Mentor

@Shraddha Singh

This is a database connection issue; it seems you haven't set up the database for Ranger KMS. If your other databases are running on MySQL or MariaDB, run the following as the root user; otherwise, use the equivalent syntax for your database.

Usually all the databases (Hive, Oozie, Ambari, etc.) are co-hosted on the same node.

mysql -uroot -p{root_password} 
create database rangerkms; 
create user 'rangerkms'@'localhost' identified by '{rangerkms_password}'; 
grant all privileges on rangerkms.* to 'rangerkms'@'localhost'; 
grant all privileges on rangerkms.* to 'rangerkms'@'%'; 
grant all privileges on rangerkms.* to 'rangerkms'@'{DB_HOST}' identified by '{rangerkms_password}'; 
grant all privileges on rangerkms.* to 'rangerkms'@'{DB_HOST}' with grant option; 
grant all privileges on rangerkms.* to 'rangerkms'@'%' with grant option; 
flush privileges; 
quit;

After the above statements have run successfully, use that user/password to reconfigure Ranger KMS in Ambari; it should then install and start up.

HTH

4 REPLIES


Re: Error Ranger KMS server install on Ambari 2.7.0.0 and HDP 3.0

New Contributor

Thanks @Geoffrey Shelton Okot!

It worked for me!

Re: Error Ranger KMS server install on Ambari 2.7.0.0 and HDP 3.0

New Contributor

Hi @Geoffrey Shelton Okot

I am getting this error even after running the above commands in MySQL.

stderr: 
Traceback (most recent call last):
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
    raise ExecutionFailed(err_msg, code, out, err)
ExecutionFailed: Execution of 'ambari-python-wrap /usr/hdp/current/ranger-kms/dba_script.py -q' returned 1. 2019-02-07 12:52:26,435  [I] Running DBA setup script. QuiteMode:True
2019-02-07 12:52:26,435  [I] Using Java:/usr/jdk64/jdk1.8.0_112/bin/java
2019-02-07 12:52:26,435  [I] DB FLAVOR:MYSQL
2019-02-07 12:52:26,435  [I] DB Host:machine
2019-02-07 12:52:26,435  [I] ---------- Verifing DB root password ---------- 
2019-02-07 12:52:26,436  [I] DBA root user password validated
2019-02-07 12:52:26,436  [I] ---------- Verifing Ranger KMS db user password ---------- 
2019-02-07 12:52:26,436  [I] KMS user password validated
2019-02-07 12:52:26,436  [I] ---------- Creating Ranger KMS db user ---------- 
2019-02-07 12:52:26,436  [JISQL] /usr/jdk64/jdk1.8.0_112/bin/java  -cp /usr/hdp/current/ranger-kms/ews/webapp/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-kms/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://machine/mysql -u rangerkms -p '********' -noheader -trim -c \; -query "SELECT version();"
SQLException : SQL state: 28000 java.sql.SQLException: Access denied for user 'rangerkms'@'machine' (using password: YES) ErrorCode: 1045
2019-02-07 12:52:27,212  [E] Can't establish db connection.. Exiting..

Re: Error Ranger KMS server install on Ambari 2.7.0.0 and HDP 3.0

Mentor

@Shraddha Singh

Here, machine is the FQDN of the database host, and {rangerkms_password} is the rangerkms user's password.

The FQDN is the output of

$hostname -f 

Re-run the commands below:

grant all privileges on rangerkms.* to 'rangerkms'@'machine' identified by '{rangerkms_password}'; 
grant all privileges on rangerkms.* to 'rangerkms'@'machine' with grant option; 

And let me know
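To avoid typos when substituting the FQDN, here is a small sketch (mine, not part of the original answer) that generates the two GRANT statements with this host's actual FQDN filled in; `{rangerkms_password}` is deliberately left as a placeholder for you to replace:

```python
# Generate the GRANT statements with this host's FQDN (the same value as
# `hostname -f`) substituted for 'machine'. {rangerkms_password} remains a
# placeholder to fill in before running the statements in the mysql shell.
import socket

fqdn = socket.getfqdn()  # equivalent to `hostname -f`
statements = [
    "grant all privileges on rangerkms.* to 'rangerkms'@'%s' "
    "identified by '{rangerkms_password}';" % fqdn,
    "grant all privileges on rangerkms.* to 'rangerkms'@'%s' "
    "with grant option;" % fqdn,
    "flush privileges;",
]
print("\n".join(statements))
```

Paste the printed statements into the mysql shell on the database host; the grant only takes effect if the host string matches exactly what the MySQL server sees for the connecting client.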