
Cloudbreak deploy fails due to Unsuccessful address resolving: The Service identity.service.consul cannot be resolved

New Contributor

I followed the installation instructions for deploying Cloudbreak to AWS using the AMI.

After running cbd start and checking the logs with cbd logs cloudbreak, I see the logs filled with:

cloudbreak_1 | 2016-02-03 19:40:31,207 [localhost-startStop-1] handleException:56 WARN c.s.c.s.r.RetryingServiceAddressResolver - [owner:spring] [type:springLog] [id:] [name:] Unsuccessful address resolving: The Service identity.service.consul cannot be resolved, retrying in 2000millis

After a while, it crashes with stack traces.

I ran cbd update and retried, and instead got:

Cannot link to a non running container: /cbreak_consul_1 AS /cbreak_registrator_1/cbreak_consul_1

I checked the consul container's logs and found:

Error starting dns server: Invalid recursor address: lookup -recursor: no such host

Any idea how to resolve this?

1 ACCEPTED SOLUTION

Expert Contributor

Hi,

The problem is that cbd tries to parse the DNS nameservers from the /etc/resolv.conf file on the host VM. That resolv.conf probably contains comment lines that mention the word "nameserver"; cbd parses those comment lines as well and ends up generating an empty -recursor option for consul.
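
For illustration only (this is not the actual cbd code), here is a hypothetical resolv.conf and a naive extraction of whatever follows the word "nameserver", showing how a comment that mentions it can yield an empty value:

# hypothetical /etc/resolv.conf on the host VM
cat > /tmp/resolv.conf <<'EOF'
# the following line sets the default nameserver
nameserver 10.0.0.2
EOF

# naive extraction (illustration only): print whatever follows "nameserver" on each matching line
sed -n 's/.*nameserver[[:space:]]*//p' /tmp/resolv.conf
# the comment line yields an empty string, the real entry yields 10.0.0.2;
# consul then presumably gets launched with something like "-recursor -recursor 10.0.0.2",
# so it treats the literal string "-recursor" as a recursor address, which matches the
# "Invalid recursor address: lookup -recursor" error above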

This is a bug in cbd 1.1.0 and needs to be fixed; in the meantime, as a workaround, please delete the comment lines from the /etc/resolv.conf file.
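
One way to do that (just a sketch; back the file up first, and note that the DHCP client or NetworkManager may regenerate resolv.conf later):

sudo cp /etc/resolv.conf /etc/resolv.conf.bak
sudo sed -i '/^[[:space:]]*[#;]/d' /etc/resolv.conf   # drop lines starting with # or ;
cbd start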

Attila


2 REPLIES


In my case, it was the PRIVATE_IP variable in the Profile file. Once I removed it, Cloudbreak started working.
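
For example, to check whether it is set (run from the directory that contains your Profile; this is just a sketch of a typical deployer setup):

grep -n PRIVATE_IP Profile
# if it is present, back up the Profile, remove (or comment out) that line, and run cbd start again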