
Cloudbreak & Atlas installation error when using blueprint

Expert Contributor

I created a cluster using Cloudbreak, installed Atlas, exported the blueprint, and then imported it via cloudbreak-shell. The installation fails with the following error:

Atlas Metadata Client Install:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/hook.py", line 37, in <module>
    BeforeInstallHook().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 314, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/hook.py", line 34, in hook
    install_packages()
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/shared_initialization.py", line 37, in install_packages
    retry_count=params.agent_stack_retry_count)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
    self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 51, in install_package
    self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 86, in checked_call_with_retries
    return self._call_with_retries(cmd, is_checked=True, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 98, in _call_with_retries
    code, out = func(cmd, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install hdp-select' returned 1. File contains no section headers.
file: file:///etc/yum.repos.d/HDP-UTILS.repo, line: 1
'[]\n'

Any ideas would be appreciated.

Blueprint attached if it helps.

17 Replies

Super Mentor

@Matt Andruff

The error you are getting indicates that there is an invalid "section header" (an empty []) inside your file "/etc/yum.repos.d/HDP-UTILS.repo".

resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install hdp-select' returned 1. File contains no section headers.
file: file:///etc/yum.repos.d/HDP-UTILS.repo, line: 1
'[]\n'


The first line of the HDP-UTILS repo (or of any repo file) should contain a valid section header such as [HDP-UTILS-1.1.0.21], as in the following:

# cat /etc/yum.repos.d/HDP-UTILS.repo 
[HDP-UTILS-1.1.0.21]
name=HDP-UTILS-1.1.0.21
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7
path=/
enabled=1


I suspect that your "/etc/yum.repos.d/HDP-UTILS.repo" file has an empty [] on its first line, which is not valid.
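As a quick sanity check on one of the failing nodes (a sketch, assuming SSH access; the repo contents below simply mirror the example above):

head -1 /etc/yum.repos.d/HDP-UTILS.repo   # should print a section header like [HDP-UTILS-1.1.0.21], not []

# Hypothetical manual repair: overwrite the broken file with a valid definition, then retry
sudo tee /etc/yum.repos.d/HDP-UTILS.repo > /dev/null <<'EOF'
[HDP-UTILS-1.1.0.21]
name=HDP-UTILS-1.1.0.21
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7
path=/
enabled=1
EOF
sudo yum clean all && sudo yum -y install hdp-select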


Expert Contributor

@Jay SenSharma Totally makes sense, but this is Cloudbreak, so shouldn't this repo be set up correctly automatically? If this were a manual install I'd make the fix you suggest, but I was hoping I could use Cloudbreak to script the installation.

Expert Contributor

Yeah, that totally makes sense... and I kind of thought the same. I'm just wondering how to fix it when it's a Cloudbreak installation. I can fix it manually, but Cloudbreak is supposed to be automated...

@Matt Andruff

What were the exact steps in CB shell for creating that cluster?

Have you perhaps missed setting the "utilsRepoId" for "cluster create"?

--utilsRepoId <string>   Stack utils repoId (e.g. HDP-UTILS-1.1.0.21)

Here is the documentation: https://github.com/hortonworks/cloudbreak/tree/master/shell
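For example, the final "cluster create" of your script could pass the utils repo explicitly (a sketch only; the flag names are from the shell docs above, and the URL is the public Hortonworks default, so verify both against your environment):

cluster create --description "Haoop Pilot" --password password --utilsRepoId HDP-UTILS-1.1.0.21 --utilsBaseURL http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7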

Hope this helps!

Expert Contributor

I'm happy to try specifying that manually to see if it works. Normally, if I use a 'clean' blueprint (one that doesn't contain configuration), I don't have to do that. But this really smells like a bug to me. I was able to reproduce it by creating a cluster with "Data Science: Apache Spark 2.1, Apache Zeppelin 0.7.0" via the UI, exporting the blueprint, and importing it again. I'm going to do that right now and post the blueprint so that anyone can reproduce the issue.

Expert Contributor

Here's the script I used to create the cluster:

blueprint create --name "template" --file exported-blueprint.json
credential select --name cloudbreakcredential
blueprint select --name "template"
availabilityset create --name hadoop-pilot-as --platformFaultDomainCount TWO
instancegroup configure --AZURE --instanceGroup host_group_1 --nodecount 1 --templateName default-infrastructure-template-d4 --securityGroupName internal-ports-and-ssh --ambariServer false
instancegroup configure --AZURE --instanceGroup host_group_2 --nodecount 1 --templateName default-infrastructure-template-d4 --securityGroupName internal-ports-and-ssh --ambariServer false
instancegroup configure --AZURE --instanceGroup host_group_3 --nodecount 1 --templateName default-infrastructure-template-d4 --securityGroupName internal-ports-and-ssh --ambariServer false
instancegroup configure --AZURE --instanceGroup host_group_4 --nodecount 1 --templateName default-infrastructure-template-d4 --securityGroupName internal-ports-and-ssh --ambariServer false
instancegroup configure --AZURE --instanceGroup host_group_6 --nodecount 1 --templateName default-infrastructure-template --securityGroupName internal-ports-and-ssh --ambariServer true
instancegroup configure --AZURE --instanceGroup host_group_7 --nodecount 1 --templateName default-infrastructure-template --securityGroupName internal-ports-and-ssh --ambariServer false
instancegroup configure --AZURE --instanceGroup host_group_5 --nodecount 4 --templateName default-infrastructure-template-3drives --securityGroupName internal-ports-and-ssh --ambariServer false  --availabilitySetName hadoop-pilot-as
network select --name default-azure-network
stack create --AZURE --name hadoop-pilot-bugtest-rg  --region "Canada Central" --attachedStorageType PER_VM
cluster create --description "Haoop Pilot" --password password --enableKnoxGateway



Expert Contributor

I reproduced the issue simply by creating a "Data Science: Apache Spark 2.1, Apache Zeppelin 0.7.0" cluster, exporting the blueprint, and adding the blueprint back.

Then I used the UI with default options. (The scripted method failed due to 'null' configurations in the blueprint.)

Same issue happens:

File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install hdp-select' returned 1. File contains no section headers.
file: file:///etc/yum.repos.d/HDP-UTILS.repo, line: 1
'[]\n'

Blueprint attached: datasciencejson.txt

Expert Contributor

@pdarvasi

This issue only happens with exported blueprints that contain lots of configuration. If I use a hand-built blueprint with minimal configuration, I do not run into it. I suspect the size of the file is the problem, since the version is at the bottom of the file.
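For reference, the version lives in the Blueprints section of the export; a quick way to inspect it (a sketch, assuming jq is installed and the export follows the usual Ambari blueprint layout):

jq '.Blueprints' exported-blueprint.json   # prints something like {"stack_name": "HDP", "stack_version": "2.6"}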

@Matt Andruff

This might not be the root cause, as the HDP-UTILS.repo file is generated by Cloudbreak before the blueprint is posted. The default value is specified in application.yml (under hdp.entries.2.6.repo.util, its value is HDP-UTILS-1.1.0.21). Are you sure you have not accidentally overwritten this parameter with a custom value under [cloudbreak-deployment]/etc?
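A quick way to look for such an override (a sketch, using the deployment directory mentioned above):

grep -R "HDP-UTILS" [cloudbreak-deployment]/etc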

You can check in the Cloudbreak database; you should see the value there:

docker exec -it cbreak_commondb_1 psql -U postgres
select * from clustercomponent where componenttype = 'HDP_REPO_DETAILS' and cluster_id = [your-cluster-id];

You should see something like this in the attributes text:

"util":{"repoid":"HDP-UTILS-1.1.0.21"...

In the meantime, can you tell us the exact version of Cloudbreak?

Hope this helps!

Expert Contributor

@pdarvasi I know I haven't overwritten the default value, because I can correctly install other blueprints afterwards without issue... How would one overwrite this value? Or how is this value calculated? Are there specific logs I could look at to see what value is being selected?

Here's the cloudbreak-shell script I used to create the cluster that reproduces the issue

blueprint create --name "blueprintImportIssue" --file DataScience.json
credential select --name cloudbreakcredential
blueprint select --name "blueprintImportIssue"
availabilityset create --name hadoop-pilot-as --platformFaultDomainCount TWO
instancegroup configure --AZURE --instanceGroup host_group_1 --nodecount 1 --templateName default-infrastructure-template-d4 --securityGroupName internal-ports-and-ssh --ambariServer false
instancegroup configure --AZURE --instanceGroup host_group_2 --nodecount 1 --templateName default-infrastructure-template-d4 --securityGroupName internal-ports-and-ssh --ambariServer false
instancegroup configure --AZURE --instanceGroup host_group_3 --nodecount 1 --templateName default-infrastructure-template-d4 --securityGroupName internal-ports-and-ssh --ambariServer false
instancegroup configure --AZURE --instanceGroup host_group_4 --nodecount 1 --templateName default-infrastructure-template-d4 --securityGroupName internal-ports-and-ssh --ambariServer false
instancegroup configure --AZURE --instanceGroup host_group_6 --nodecount 1 --templateName default-infrastructure-template --securityGroupName internal-ports-and-ssh --ambariServer true
instancegroup configure --AZURE --instanceGroup host_group_7 --nodecount 1 --templateName default-infrastructure-template --securityGroupName internal-ports-and-ssh --ambariServer false
instancegroup configure --AZURE --instanceGroup host_group_5 --nodecount 4 --templateName default-infrastructure-template-3drives --securityGroupName internal-ports-and-ssh --ambariServer false  --availabilitySetName hadoop-pilot-as
network select --name default-azure-network
stack create --AZURE --name hadoop-pilot-import-issue-rg  --region "Canada Central" --attachedStorageType PER_VM
cluster create --description "Haoop Pilot" --password password

Does anything look fishy that might cause the overwrite you are talking about? (I'm just waiting for the build to finish so I can run the command you asked me to run.)

I will afterwards try to figure out how to use the --utilsRepoId option to see if that helps.

Expert Contributor

@pdarvasi

I actually tried this but I'm clearly missing something:

cloudbreak-shell>cluster create --description "Haoop Pilot" --password password --enableKnoxGateway true --stackRepoId HDP-2.6 --stackBaseURL  http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.0.3 --stack HDP --utilsBaseURL  http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7 --utilsRepoId HDP-UTILS-1.1.0.21
Command failed java.lang.RuntimeException: may not be null

Expert Contributor
cloudbreak-shell>script --file ecmake-cloudbreak-pilot-cluster.sh
credential select --name cloudbreakcredential
Credential selected, name: cloudbreakcredential
blueprint select --name "template-name"
Blueprint has been selected, name: template-name
availabilityset create --name hadoop-pilot-as --platformFaultDomainCount TWO
  Availability sets  name             faultDomainCount
  -----------------  ---------------  ----------------
  hadoop-pilot-as    hadoop-pilot-as  TWO


instancegroup configure --AZURE --instanceGroup host_group_1 --nodecount 1 --templateName default-infrastructure-template-d4 --securityGroupName internal-ports-and-ssh --ambariServer false 
  instanceGroup  templateId  nodeCount  type  securityGroupId  attributes
  -------------  ----------  ---------  ----  ---------------  ----------
  host_group_1   5           1          CORE  5                {}


instancegroup configure --AZURE --instanceGroup host_group_2 --nodecount 1 --templateName default-infrastructure-template-d4 --securityGroupName internal-ports-and-ssh --ambariServer false 
  instanceGroup  templateId  nodeCount  type  securityGroupId  attributes
  -------------  ----------  ---------  ----  ---------------  ----------
  host_group_2   5           1          CORE  5                {}
  host_group_1   5           1          CORE  5                {}


instancegroup configure --AZURE --instanceGroup host_group_3 --nodecount 1 --templateName default-infrastructure-template-d4 --securityGroupName internal-ports-and-ssh --ambariServer false 
  instanceGroup  templateId  nodeCount  type  securityGroupId  attributes
  -------------  ----------  ---------  ----  ---------------  ----------
  host_group_2   5           1          CORE  5                {}
  host_group_1   5           1          CORE  5                {}
  host_group_3   5           1          CORE  5                {}


instancegroup configure --AZURE --instanceGroup host_group_4 --nodecount 1 --templateName default-infrastructure-template-d4 --securityGroupName internal-ports-and-ssh --ambariServer false
  instanceGroup  templateId  nodeCount  type  securityGroupId  attributes
  -------------  ----------  ---------  ----  ---------------  ----------
  host_group_2   5           1          CORE  5                {}
  host_group_1   5           1          CORE  5                {}
  host_group_4   5           1          CORE  5                {}
  host_group_3   5           1          CORE  5                {}


instancegroup configure --AZURE --instanceGroup host_group_6 --nodecount 1 --templateName default-infrastructure-template --securityGroupName internal-ports-and-ssh --ambariServer true 
  instanceGroup  templateId  nodeCount  type     securityGroupId  attributes
  -------------  ----------  ---------  -------  ---------------  ----------
  host_group_2   5           1          CORE     5                {}
  host_group_1   5           1          CORE     5                {}
  host_group_4   5           1          CORE     5                {}
  host_group_3   5           1          CORE     5                {}
  host_group_6   4           1          GATEWAY  5                {}


instancegroup configure --AZURE --instanceGroup host_group_7 --nodecount 1 --templateName default-infrastructure-template --securityGroupName internal-ports-and-ssh --ambariServer false
  instanceGroup  templateId  nodeCount  type     securityGroupId  attributes
  -------------  ----------  ---------  -------  ---------------  ----------
  host_group_7   4           1          CORE     5                {}
  host_group_2   5           1          CORE     5                {}
  host_group_1   5           1          CORE     5                {}
  host_group_4   5           1          CORE     5                {}
  host_group_3   5           1          CORE     5                {}
  host_group_6   4           1          GATEWAY  5                {}


instancegroup configure --AZURE --instanceGroup host_group_5 --nodecount 4 --templateName default-infrastructure-template-3drives --securityGroupName internal-ports-and-ssh --ambariServer false  --availabilitySetName hadoop-pilot-as 
  instanceGroup  templateId  nodeCount  type     securityGroupId  attributes
  -------------  ----------  ---------  -------  ---------------  ------------------------------------------------------------
  host_group_7   4           1          CORE     5                {}
  host_group_2   5           1          CORE     5                {}
  host_group_1   5           1          CORE     5                {}
  host_group_4   5           1          CORE     5                {}
  host_group_3   5           1          CORE     5                {}
  host_group_6   4           1          GATEWAY  5                {}
  host_group_5   7           4          CORE     5                {availabilitySet={faultDomainCount=2, name=hadoop-pilot-as}}


network select --name default-azure-network
Network is selected with name: default-azure-network
stack create --AZURE --name hadoop-pilot-import-issue-rg  --region "Canada Central" --attachedStorageType PER_VM  
Stack creation started with id: '130' and name: 'hadoop-pilot-import-issue-rg'
cluster create --description "Haoop Pilot" --password password 
Cluster creation started
Script required 3.309 seconds to execute



Then I did as you said... With some digging I got this (skip to the end: there is no entry in that table).

docker exec -it cbreak_commondb_1 psql -U postgres

\l

 \c cbdb


cbdb=# select * from clustercomponent where cluster_id = 130;
 id | componenttype | name | cluster_id | attributes 
----+---------------+------+------------+------------
(0 rows)
cbdb=# select * from cluster where name = 'hadoop-pilot-import-issue-rg';
-[ RECORD 1 ]-------------+------------------------------------------------------------
id                        | 97
account                   | seq1234567
creationfinished          | 
creationstarted           | 1506124194009
description               | Haoop Pilot
emailneeded               | f
name                      | hadoop-pilot-import-issue-rg
owner                     | 8b8b515a-4092-44a6-9347-375c40fbd584
secure                    | f
status                    | CREATE_FAILED
statusreason              | Cluster installation failed to complete, please check the Ambari UI for more details. You can try to reinstall the cluster with a different blueprint or fix the failures in Ambari and sync the cluster with Cloudbreak later.
upsince                   | 
blueprint_id              | 1360
username                  | ITY78VsJUx3VoCqAPPrEGg==
password                  | 02qK4MHBrRbZXbzFtoBITR+vuFih29e/
ambariip                  | 52.233.43.237
stack_id                  | 130
filesystem_id             | 
configstrategy            | ALWAYS_APPLY_DONT_OVERRIDE_CUSTOM_VALUES
ldaprequired              | f
sssdconfig_id             | 
enableshipyard            | f
emailto                   | 
ldapconfig_id             | 
attributes                | LZk3WV7IrzH4pRkiwQ+Cfj85eG/ZqxWC44tfRy54VuijBNiyPpjRaYMf3MYAc3BQaki5V10HOKLkiSZ1uaAUevQY4L4Hs67O7sFwB9DtGtVR5lUIhf+iVJEPaAGf/GMd5KPY0Zh0U6ldtPHpKFxzIcQWNSCfN+XBjOnzoOD/Jd/fxhBRVEmArcb2YdPC62y9yWt+cfwdNqPPw/ZFAv87YcG3J+1P80Su9v/D+L+fTPjY6bMU5R68U0l1xt8HN7pmb5YML31GSoNlAssrINMKNts4Zas8TbbSIwraRL+ljSFqY6fqE0/UkzQgIhfcR1vgIYRN6Y9oabNm9MnYHAvwhBG6G/rRi/Li2CYiRR8nxy/5Hjb/ytMMPA==
blueprintinputs           | 4RSE6ooK4YB8uKcVYVVu7Q==
cloudbreakambariuser      | cloudbreak
cloudbreakambaripassword  | H/89H2rul1DhODpedIvc8tgkU7Of65YTjzqGGccQDCbJHUEo6Rt7pQ==
blueprintcustomproperties | 9kh4l+L8qlywJkeJLbE4kg==
kerberosconfig_id         | 97
topologyvalidation        | f
customcontainerdefinition | {}


cbdb=# select * from clustercomponent where id = 97
cbdb-# ;
 id |    componenttype    |        name         | cluster_id |                                                                                                            attributes                                                                                                            
----+---------------------+---------------------+------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 97 | AMBARI_REPO_DETAILS | AMBARI_REPO_DETAILS |         16 | {"predefined":false,"version":"2.5.0.3-7","baseUrl":"http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.5.0.3","gpgKeyUrl":"http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins"}
(1 row)



So there is a cluster but there isn't an entry in the database for this.

@Matt Andruff

This

cbdb=# select * from clustercomponent where id = 97

should be like this instead:

cbdb=# select * from clustercomponent where cluster_id = 97

There should be 3 rows.

And please tell us the exact CB version as well! Thanks!

Expert Contributor

@pdarvasi

Here is the corrected query:

postgres=# \c cbdb
You are now connected to database "cbdb" as user "postgres".
cbdb=# select * from clustercomponent where cluster_id = 97
cbdb-# ;
 id  |      componenttype      |          name           | cluster_id |                                                                                                                                                                                                                    attributes                                                                                                                                                                                                                    
-----+-------------------------+-------------------------+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 607 | AMBARI_REPO_DETAILS     | AMBARI_REPO_DETAILS     |         97 | {"predefined":false,"version":"2.5.0.3-7","baseUrl":"http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.5.0.3","gpgKeyUrl":"http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins"}
 608 | HDP_REPO_DETAILS        | HDP_REPO_DETAILS        |         97 | {"stack":{"repoid":"HDP-2.6","redhat6":"http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.0.3","redhat7":"http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.0.3"},"util":{"repoid":"HDP-UTILS-1.1.0.21","redhat6":"http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6","redhat7":"http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7"},"verify":true,"hdpVersion":"2.6.0.3"}
 609 | AMBARI_DATABASE_DETAILS | AMBARI_DATABASE_DETAILS |         97 | {"vendor":"embedded","fancyName":"","name":"postgres","host":"localhost","port":5432,"userName":"ambari","password":"password"}
(3 rows)

Expert Contributor

@pdarvasi cloudbreak-shell version: 1.16.4

Expert Contributor

I just checked your blueprint file, and the blueprint_name is missing from the blueprint config; it should be set as in this example:

https://github.com/hortonworks/cloudbreak/blob/master/integration-test/src/main/resources/blueprint/...
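If it helps, a one-liner along these lines can inject the missing name into the raw export before importing it (a sketch, assuming jq is available; "datascience-import-test" is just a placeholder name):

jq '.Blueprints.blueprint_name = "datascience-import-test"' exported-blueprint.json > DataScience.json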

Expert Contributor

I was just being really honest about what was exported. I know it needs a name added, and I do add one; that is not part of the issue in this case. I should have called out that the file was 'raw' and needed a name added.
