
HDF 3.0.2 - Install Schema Registry Errors using PostgreSQL


Building a dev playground for HDF. Installed Ambari 2.6.0 and PostgreSQL 9.6, and did all the prerequisite setup steps such as creating the Ambari DB, Registry DB, SAM DB, etc. Configured PostgreSQL for remote access per all the steps listed here:

I've also registered the PostgreSQL JDBC driver with the Ambari server as noted here:

Installed ZooKeeper and Ambari Metrics, all green. Added Kafka, also green. Note: this is a single-node system, just a playground. Went through the install steps for Schema Registry, switched the DB type to postgres, set the password to registry (matching what I set in PostgreSQL), reset the FQDN, and set the configuration lines as shown below. Clicked Next; it runs for a few seconds and then bombs.

I have also gone through the steps in the Best Answer for this issue to ensure I have the right Registry version packages:


Does anyone have any idea what I'm missing or have misconfigured?


Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/REGISTRY/0.3.0/package/scripts/", line 129, in <module>
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/", line 367, in execute
  File "/var/lib/ambari-agent/cache/common-services/REGISTRY/0.3.0/package/scripts/", line 57, in install
    import params
  File "/var/lib/ambari-agent/cache/common-services/REGISTRY/0.3.0/package/scripts/", line 173, in <module>
    connector_curl_source = format("{jdk_location}/{jdbc_driver_jar}")
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/", line 95, in format
    return ConfigurationFormatter().format(format_string, args, **result)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/", line 59, in format
    result_protected = self.vformat(format_string, args, all_params)
  File "/usr/lib64/python2.7/", line 549, in vformat
    result = self._vformat(format_string, args, kwargs, used_args, 2)
  File "/usr/lib64/python2.7/", line 571, in _vformat
    obj, arg_used = self.get_field(field_name, args, kwargs)
  File "/usr/lib64/python2.7/", line 632, in get_field
    obj = self.get_value(first, args, kwargs)
  File "/usr/lib64/python2.7/", line 591, in get_value
    return kwargs[key]
  File "/usr/lib/python2.6/site-packages/resource_management/core/", line 63, in __getitem__
    return self._convert_value(self._dict[name])
KeyError: 'jdbc_driver_jar'
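The KeyError at the bottom is just Python's string.Formatter failing a dictionary lookup: the Registry params script formats "{jdk_location}/{jdbc_driver_jar}", but no jdbc_driver_jar key was ever put into the parameter dictionary for a postgres install. A minimal stand-alone reproduction (the params dict below is hypothetical, not Ambari's real one):

```python
from string import Formatter

# Hypothetical parameter dict: jdk_location is present, jdbc_driver_jar is not,
# mirroring the postgres case in the Registry params script.
params = {"jdk_location": "http://ambari-host:8080/resources"}

try:
    Formatter().vformat("{jdk_location}/{jdbc_driver_jar}", (), params)
except KeyError as e:
    # get_value() does kwargs[key], so a missing key surfaces as KeyError,
    # exactly as in the traceback above.
    print("missing parameter:", e)  # → missing parameter: 'jdbc_driver_jar'
```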


2018-01-25 16:18:45,596 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=None -> 3.0
User Group mapping (user_group) is missing in the hostLevelParams
2018-01-25 16:18:45,600 - Group['hadoop'] {}
2018-01-25 16:18:45,601 - File['/var/lib/ambari-agent/tmp/'] {'content': StaticFile(''), 'mode': 0555}
2018-01-25 16:18:45,602 - call['/var/lib/ambari-agent/tmp/ registry'] {}
2018-01-25 16:18:45,609 - call returned (0, '1004')
2018-01-25 16:18:45,609 - User['registry'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1004}
2018-01-25 16:18:45,611 - File['/var/lib/ambari-agent/tmp/'] {'content': StaticFile(''), 'mode': 0555}
2018-01-25 16:18:45,611 - call['/var/lib/ambari-agent/tmp/ zookeeper'] {}
2018-01-25 16:18:45,619 - call returned (0, '1001')
2018-01-25 16:18:45,620 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1001}
2018-01-25 16:18:45,620 - File['/var/lib/ambari-agent/tmp/'] {'content': StaticFile(''), 'mode': 0555}
2018-01-25 16:18:45,621 - call['/var/lib/ambari-agent/tmp/ ams'] {}
2018-01-25 16:18:45,628 - call returned (0, '1002')
2018-01-25 16:18:45,629 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1002}
2018-01-25 16:18:45,630 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users'], 'uid': None}
2018-01-25 16:18:45,630 - File['/var/lib/ambari-agent/tmp/'] {'content': StaticFile(''), 'mode': 0555}
2018-01-25 16:18:45,631 - call['/var/lib/ambari-agent/tmp/ kafka'] {}
2018-01-25 16:18:45,638 - call returned (0, '1005')
2018-01-25 16:18:45,638 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1005}
2018-01-25 16:18:45,639 - File['/var/lib/ambari-agent/tmp/'] {'content': StaticFile(''), 'mode': 0555}
2018-01-25 16:18:45,640 - Execute['/var/lib/ambari-agent/tmp/ ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-01-25 16:18:45,646 - Skipping Execute['/var/lib/ambari-agent/tmp/ ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-01-25 16:18:45,659 - Repository['HDF-3.0-repo-1'] {'append_to_file': False, 'base_url': '', 'action': ['create'], 'components': [u'HDF', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdf-1', 'mirror_list': None}
2018-01-25 16:18:45,665 - File['/etc/yum.repos.d/ambari-hdf-1.repo'] {'content': '[HDF-3.0-repo-1]\nname=HDF-3.0-repo-1\nbaseurl=\n\npath=/\nenabled=1\ngpgcheck=...'}
2018-01-25 16:18:45,666 - Writing File['/etc/yum.repos.d/ambari-hdf-1.repo'] because contents don't match
2018-01-25 16:18:45,667 - Repository['HDP-UTILS-'] {'append_to_file': True, 'base_url': '', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdf-1', 'mirror_list': None}
2018-01-25 16:18:45,669 - File['/etc/yum.repos.d/ambari-hdf-1.repo'] {'content': '[HDF-3.0-repo-1]\nname=HDF-3.0-repo-1\nbaseurl=\n\npath=/\nenabled=1\ngpgcheck=...'}
2018-01-25 16:18:45,669 - Writing File['/etc/yum.repos.d/ambari-hdf-1.repo'] because contents don't match
2018-01-25 16:18:45,670 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-01-25 16:18:45,741 - Skipping installation of existing package unzip
2018-01-25 16:18:45,741 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-01-25 16:18:45,752 - Skipping installation of existing package curl



Interestingly enough, I was able to get the Schema Registry installed: while the failure message was up, I went into the script and set the parameters to point to the JDBC driver on my box.


I set the lines:

connector_curl_source = format("/usr/share/java/postgresql-jdbc.jar")

downloaded_custom_connector = format("/usr/share/java/postgresql-jdbc.jar")

Then, after saving my changes, I clicked Retry on the install and it went through just fine. Any idea why the variable substitution that I think should have been happening didn't occur?

Super Mentor

@Alexander Yamashita

Confirm that the Postgres JDBC driver is present at the "/usr/share/java/postgresql-jdbc.jar" location. If not, then please download the JDBC driver from:

Then perform these steps:

# ls /usr/share/java/postgresql-jdbc.jar
# chmod 644 /usr/share/java/postgresql-jdbc.jar
# ambari-server setup --jdbc-db=postgres --jdbc-driver=/usr/share/java/postgresql-jdbc.jar
# ambari-server restart
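For background: the setup command copies the JAR into the Ambari server's resources directory and records its name in ambari.properties, which is what lets agents download it via {jdk_location}. A quick sanity check can be sketched like this (run against a throwaway copy here so it is self-contained; on a real host you would point it at /etc/ambari-server/conf/ambari.properties, and the custom.postgres.jdbc.name key is my understanding of what Ambari 2.6 writes, so verify it on your box):

```shell
# Simulated check against a scratch copy of ambari.properties.
props=$(mktemp)
printf 'custom.postgres.jdbc.name=postgresql-jdbc.jar\n' > "$props"

if grep -q '^custom.postgres.jdbc.name=' "$props"; then
    echo "driver registered"
else
    echo "driver NOT registered - rerun ambari-server setup"
fi
rm -f "$props"
```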




Yeah, that's what I did, except for the chmod command. I went and checked the permissions on the file and they are already set to that, so I'm not sure why it didn't work until I hard-coded the path.


Hey Alex,

I had the same problem 🙂

It seems /var/lib/ambari-agent/cache/common-services/REGISTRY/0.3.0/package/scripts/ sets the JDBC driver JAR for Oracle and MySQL, but not for PostgreSQL.

This is missing:

if 'postgresql' == registry_storage_type:
	jdbc_driver_jar = default("/hostLevelParams/custom_postgres_jdbc_name", None)
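Sketched out in full, the storage-type dispatch with the missing branch might look like the snippet below. This is a stand-alone approximation, not the real script: the function name resolve_jdbc_driver_jar and the stubbed default() are hypothetical stand-ins, since Ambari's resource_management library (which provides the real default()) only exists on an agent host.

```python
def default(config_path, default_value):
    # Stand-in for resource_management's default(): look the path up in a
    # hypothetical hostLevelParams mapping, falling back to default_value.
    host_level_params = {
        "/hostLevelParams/custom_postgres_jdbc_name": "postgresql-jdbc.jar",
    }
    return host_level_params.get(config_path, default_value)

def resolve_jdbc_driver_jar(registry_storage_type):
    if registry_storage_type == 'mysql':
        return default("/hostLevelParams/custom_mysql_jdbc_name", None)
    if registry_storage_type == 'oracle':
        return default("/hostLevelParams/custom_oracle_jdbc_name", None)
    if registry_storage_type == 'postgresql':
        # The branch the stock script lacks: without it, jdbc_driver_jar is
        # never defined for postgres, and the later
        # format("{jdk_location}/{jdbc_driver_jar}") raises KeyError.
        return default("/hostLevelParams/custom_postgres_jdbc_name", None)
    return None

print(resolve_jdbc_driver_jar('postgresql'))  # → postgresql-jdbc.jar
```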


It is the same when you set up Postgres for the Streaming Analytics Manager.

The path is /var/lib/ambari-agent/cache/common-services/STREAMLINE/0.5.0/package/scripts/