<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: cdsw init fails with kube-dns issues in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/cdsw-init-fails-with-kube-dns-issues/m-p/66169#M76617</link>
    <description>Archived Cloudera Community support thread: cdsw init fails with kube-dns issues.</description>
    <pubDate>Mon, 09 Apr 2018 20:21:53 GMT</pubDate>
    <dc:creator>Joey_Krabacher</dc:creator>
    <dc:date>2018-04-09T20:21:53Z</dc:date>
    <item>
      <title>cdsw init fails with kube-dns issues</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/cdsw-init-fails-with-kube-dns-issues/m-p/65898#M76616</link>
      <description>&lt;P&gt;kubedns and dnsmasq both appear to be failing:&lt;/P&gt;&lt;PRE&gt;$ sudo /usr/bin/cdsw init
...
Waiting for kube-system cluster to come up. This could take a few minutes...
ERROR:: Unable to bring up kube-system cluster.: 1
ERROR:: Unable to start kubernetes system pods.: 1
...&lt;/PRE&gt;&lt;PRE&gt;$ sudo kubectl --namespace=kube-system get pods
NAME                                READY     STATUS             RESTARTS   AGE
etcd-udodapp05                      1/1       Running            0          16m
kube-apiserver-udodapp05            1/1       Running            0          16m
kube-controller-manager-udodapp05   1/1       Running            0          16m
kube-dns-3911048160-99klb           2/3       CrashLoopBackOff   13         15m
kube-proxy-02z9b                    1/1       Running            0          15m
kube-scheduler-udodapp05            1/1       Running            0          15m
weave-net-4fzw6                     2/2       Running            0          15m&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;$ cat cdsw.conf
JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera/
MASTER_IP=[redacted]
DOMAIN=[redacted]
DOCKER_BLOCK_DEVICES=/dev/mapper/imgvg-imglv
APPLICATION_BLOCK_DEVICE=/dev/mapper/appvg-applv
NO_PROXY="127.0.0.1,localhost,[redacted],100.66.0.1,100.66.0.2,100.66.0.3,100.66.0.4,100.66.0.5,100.66.0.6,100.66.0.7,100.66.0.8,100.66.0.9,100.66.0.10,100.66.0.11,100.66.0.12,100.66.0.13,100.66.0.14,100.66.0.15,100.66.0.16,100.66.0.17,100.66.0.18,100.66.0.19,100.66.0.20,100.66.0.21,100.66.0.22,100.66.0.23,100.66.0.24,100.66.0.25,100.66.0.26,100.66.0.27,100.66.0.28,100.66.0.29,100.66.0.30,100.66.0.31,100.66.0.32,100.66.0.33,100.66.0.34,100.66.0.35,100.66.0.36,100.66.0.37,100.66.0.38,100.66.0.39,100.66.0.40,100.66.0.41,100.66.0.42,100.66.0.43,100.66.0.44,100.66.0.45,100.66.0.46,100.66.0.47,100.66.0.48,100.66.0.49,100.66.0.50,100.77.0.129,100.77.0.130,100.77.0.1,100.77.0.10"&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;$ sudo kubectl logs -f --since=1h po/kube-dns-3911048160-99klb dnsmasq --namespace=kube-system
I0320 22:03:25.264188       1 main.go:76] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] true} /etc/k8s/dns/dnsmasq-nanny 10000000000}
I0320 22:03:25.265432       1 nanny.go:86] Starting dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053]
I0320 22:03:25.298956       1 nanny.go:111]
I0320 22:03:25.298956       1 nanny.go:108] dnsmasq[25]: started, version 2.78-security-prerelease cachesize 1000
W0320 22:03:25.299025       1 nanny.go:112] Got EOF from stdout
I0320 22:03:25.299031       1 nanny.go:108] dnsmasq[25]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify
I0320 22:03:25.299044       1 nanny.go:108] dnsmasq[25]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
I0320 22:03:25.299052       1 nanny.go:108] dnsmasq[25]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0320 22:03:25.299055       1 nanny.go:108] dnsmasq[25]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0320 22:03:25.299065       1 nanny.go:108] dnsmasq[25]: reading /etc/resolv.conf
I0320 22:03:25.299068       1 nanny.go:108] dnsmasq[25]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
I0320 22:03:25.299072       1 nanny.go:108] dnsmasq[25]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0320 22:03:25.299076       1 nanny.go:108] dnsmasq[25]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0320 22:03:25.299079       1 nanny.go:108] dnsmasq[25]: using nameserver [redacted]#53
I0320 22:03:25.299082       1 nanny.go:108] dnsmasq[25]: using nameserver [redacted]#53
I0320 22:03:25.299085       1 nanny.go:108] dnsmasq[25]: using nameserver [redacted]#53
I0320 22:03:25.299089       1 nanny.go:108] dnsmasq[25]: using nameserver [redacted]#53
I0320 22:03:25.299092       1 nanny.go:108] dnsmasq[25]: read /etc/hosts - 7 addresses&lt;/PRE&gt;&lt;PRE&gt;$ sudo kubectl logs -f --since=1h po/kube-dns-3911048160-99klb kubedns --namespace=kube-system
I0320 21:58:22.617903       1 dns.go:48] version: 1.14.4-2-g5584e04
I0320 21:58:22.619053       1 server.go:70] Using configuration read from directory: /kube-dns-config with period 10s
I0320 21:58:22.619096       1 server.go:113] FLAG: --alsologtostderr="false"
I0320 21:58:22.619108       1 server.go:113] FLAG: --config-dir="/kube-dns-config"
I0320 21:58:22.619114       1 server.go:113] FLAG: --config-map=""
I0320 21:58:22.619118       1 server.go:113] FLAG: --config-map-namespace="kube-system"
I0320 21:58:22.619121       1 server.go:113] FLAG: --config-period="10s"
I0320 21:58:22.619129       1 server.go:113] FLAG: --dns-bind-address="0.0.0.0"
I0320 21:58:22.619132       1 server.go:113] FLAG: --dns-port="10053"
I0320 21:58:22.619137       1 server.go:113] FLAG: --domain="cluster.local."
I0320 21:58:22.619142       1 server.go:113] FLAG: --federations=""
I0320 21:58:22.619148       1 server.go:113] FLAG: --healthz-port="8081"
I0320 21:58:22.619151       1 server.go:113] FLAG: --initial-sync-timeout="1m0s"
I0320 21:58:22.619155       1 server.go:113] FLAG: --kube-master-url=""
I0320 21:58:22.619162       1 server.go:113] FLAG: --kubecfg-file=""
I0320 21:58:22.619165       1 server.go:113] FLAG: --log-backtrace-at=":0"
I0320 21:58:22.619171       1 server.go:113] FLAG: --log-dir=""
I0320 21:58:22.619175       1 server.go:113] FLAG: --log-flush-frequency="5s"
I0320 21:58:22.619180       1 server.go:113] FLAG: --logtostderr="true"
I0320 21:58:22.619183       1 server.go:113] FLAG: --nameservers=""
I0320 21:58:22.619186       1 server.go:113] FLAG: --stderrthreshold="2"
I0320 21:58:22.619189       1 server.go:113] FLAG: --v="2"
I0320 21:58:22.619192       1 server.go:113] FLAG: --version="false"
I0320 21:58:22.619202       1 server.go:113] FLAG: --vmodule=""
I0320 21:58:22.619292       1 server.go:176] Starting SkyDNS server (0.0.0.0:10053)
I0320 21:58:22.619587       1 server.go:198] Skydns metrics enabled (/metrics:10055)
I0320 21:58:22.619599       1 dns.go:147] Starting endpointsController
I0320 21:58:22.619603       1 dns.go:150] Starting serviceController
I0320 21:58:22.619713       1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0320 21:58:22.619737       1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0320 21:58:23.119838       1 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...
I0320 21:58:23.619844       1 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...
E0320 21:58:23.623059       1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://100.77.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 100.77.0.1:443: getsockopt: connection refused
E0320 21:58:23.623077       1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://100.77.0.1:443/api/v1/services?resourceVersion=0: dial tcp 100.77.0.1:443: getsockopt: connection refused
I0320 21:58:24.119875       1 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...
I0320 21:58:24.619805       1 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...
I0320 21:58:25.119883       1 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...
I0320 21:58:25.619870       1 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...
..............
I0320 21:59:22.119836       1 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...
F0320 21:59:22.619832       1 dns.go:168] Timeout waiting for initialization&lt;/PRE&gt;</description>
      <pubDate>Tue, 21 Apr 2026 13:28:06 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/cdsw-init-fails-with-kube-dns-issues/m-p/65898#M76616</guid>
      <dc:creator>Joey_Krabacher</dc:creator>
      <dc:date>2026-04-21T13:28:06Z</dc:date>
    </item>
    <item>
      <title>Re: cdsw init fails with kube-dns issues</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/cdsw-init-fails-with-kube-dns-issues/m-p/66169#M76617</link>
      <description>&lt;P&gt;All,&lt;/P&gt;&lt;P&gt;I'm still facing the same issue.&lt;/P&gt;&lt;P&gt;If any of you have the kube-dns pod running with all three containers up (kubedns, dnsmasq, and sidecar), could you run the following and reply with the output? It would be greatly appreciated.&lt;/P&gt;&lt;P&gt;Get the pod names from the output of this command:&lt;/P&gt;&lt;PRE&gt;kubectl get pods --all-namespaces&lt;/PRE&gt;&lt;P&gt;Then get the CLUSTER-IP from this command:&lt;/P&gt;&lt;PRE&gt;kubectl get services --sort-by=.metadata.name&lt;/PRE&gt;&lt;P&gt;Then run nslookup in each container of the running pod:&lt;/P&gt;&lt;PRE&gt;kubectl exec &amp;lt;kube-dns-pod-name&amp;gt; -c sidecar --namespace=kube-system -- nslookup &amp;lt;CLUSTER-IP&amp;gt;
kubectl exec &amp;lt;kube-dns-pod-name&amp;gt; -c dnsmasq --namespace=kube-system -- nslookup &amp;lt;CLUSTER-IP&amp;gt;
kubectl exec &amp;lt;kube-dns-pod-name&amp;gt; -c kubedns --namespace=kube-system -- nslookup &amp;lt;CLUSTER-IP&amp;gt;

e.g.
kubectl exec kube-dns-3911048160-lhtvm -c kubedns --namespace=kube-system -- nslookup 100.77.0.1&lt;/PRE&gt;&lt;P&gt;I may be barking up the wrong tree, but I'm trying to figure out why my containers time out when trying to connect to https://100.77.0.1:443.&lt;/P&gt;&lt;P&gt;Also, if you could post a copy of your /etc/cdsw/config/cdsw.conf (with sensitive information redacted or masked), that would be great.&lt;/P&gt;</description>
      <pubDate>Mon, 09 Apr 2018 20:21:53 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/cdsw-init-fails-with-kube-dns-issues/m-p/66169#M76617</guid>
      <dc:creator>Joey_Krabacher</dc:creator>
      <dc:date>2018-04-09T20:21:53Z</dc:date>
    </item>
    <item>
      <title>Re: cdsw init fails with kube-dns issues</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/cdsw-init-fails-with-kube-dns-issues/m-p/66209#M76618</link>
      <description>&lt;P&gt;I now have CDSW up and running.&lt;/P&gt;&lt;P&gt;I'm not sure which one of these did the trick, or if there was some other force at play.&lt;/P&gt;&lt;P&gt;We found a bug in ip6tables.service (RHEL 7.4) that was producing error messages like this:&lt;/P&gt;&lt;PRE&gt;Apr 10 10:06:56 [redacted] systemd[1]: [/usr/lib/systemd/system/ip6tables.service:3] Failed to add dependency on syslog.target,iptables.service, ignoring: Invalid argument&lt;/PRE&gt;&lt;P&gt;So we changed the After= parameter in the unit file from comma-delimited to space-delimited.&lt;/P&gt;&lt;P&gt;Before the change:&lt;/P&gt;&lt;PRE&gt;After=syslog.target,iptables.service&lt;/PRE&gt;&lt;P&gt;After the change:&lt;/P&gt;&lt;PRE&gt;After=syslog.target iptables.service&lt;/PRE&gt;&lt;P&gt;Bug link: &lt;A href="https://bugzilla.redhat.com/show_bug.cgi?id=1499367" target="_blank"&gt;https://bugzilla.redhat.com/show_bug.cgi?id=1499367&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Here are the commands that were run:&lt;/P&gt;&lt;PRE&gt;edit /usr/lib/systemd/system/ip6tables.service
systemctl stop iptables
systemctl disable iptables
systemctl stop ip6tables
systemctl disable ip6tables
/usr/bin/cdsw reset
/usr/bin/cdsw init&lt;/PRE&gt;</description>
      <pubDate>Tue, 10 Apr 2018 17:38:51 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/cdsw-init-fails-with-kube-dns-issues/m-p/66209#M76618</guid>
      <dc:creator>Joey_Krabacher</dc:creator>
      <dc:date>2018-04-10T17:38:51Z</dc:date>
    </item>
    <item>
      <title>Re: cdsw init fails with kube-dns issues</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/cdsw-init-fails-with-kube-dns-issues/m-p/93130#M76619</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;We are facing the same kind of issue. Were you able to resolve it?&lt;/P&gt;&lt;P&gt;Please find the logs below for reference.&lt;/P&gt;&lt;PRE&gt;cdsw status
Sending detailed logs to [/tmp/cdsw_status_HOe8Jj.log] ...
CDSW Version: [1.5.0.849870:4b1d6ac]
OK: Application running as root check
OK: NFS service check
OK: System process check for CSD install
OK: Sysctl params check
OK: Kernel memory slabs check
---------------------------------------------------------------------------------------------------
| NAME | STATUS | CREATED-AT | VERSION | EXTERNAL-IP | OS-IMAGE | KERNEL-VERSION | GPU | STATEFUL |
---------------------------------------------------------------------------------------------------
| dvwuaspnhad03.ams.com | True | 2019-07-23 15:22:18+00:00 | v1.8.12-1+44f60fa9b27304-dirty | None | Red Hat Enterprise Linux | 3.10.0-693.2.2.el7.x86_64 | 0 | True |
---------------------------------------------------------------------------------------------------
1/1 nodes are ready.
---------------------------------------------------------------------------------------------------
| NAME | READY | STATUS | RESTARTS | CREATED-AT | POD-IP | HOST-IP | ROLE |
---------------------------------------------------------------------------------------------------
| etcd-dvwuaspnhad03.ams.com | 1/1 | Running | 0 | 2019-07-23 15:23:22+00:00 | 159.127.45.148 | 159.127.45.148 | None |
| kube-apiserver-dvwuaspnhad03.ams.com | 1/1 | Running | 0 | 2019-07-23 15:23:39+00:00 | 159.127.45.148 | 159.127.45.148 | None |
| kube-controller-manager-dvwuaspnhad03.ams.com | 1/1 | Running | 0 | 2019-07-23 15:23:37+00:00 | 159.127.45.148 | 159.127.45.148 | None |
| kube-dns-78dcf4b9d9-4qlmt | 3/3 | Running | 0 | 2019-07-23 15:23:49+00:00 | 100.66.0.4 | 159.127.45.148 | None |
| kube-proxy-72npf | 1/1 | Running | 0 | 2019-07-23 15:23:52+00:00 | 159.127.45.148 | 159.127.45.148 | None |
| kube-scheduler-dvwuaspnhad03.ams.com | 1/1 | Running | 0 | 2019-07-23 15:23:30+00:00 | 159.127.45.148 | 159.127.45.148 | None |
| tiller-deploy-775556c68-ntgxs | 1/1 | Running | 0 | 2019-07-23 15:22:36+00:00 | 100.66.0.2 | 159.127.45.148 | None |
| weave-net-6w4cc | 2/2 | Running | 1 | 2019-07-23 15:22:36+00:00 | 159.127.45.148 | 159.127.45.148 | None |
---------------------------------------------------------------------------------------------------
All required pods are ready in cluster kube-system.
---------------------------------------------------------------------------------------------------
| NAME | READY | STATUS | RESTARTS | CREATED-AT | POD-IP | HOST-IP | ROLE |
---------------------------------------------------------------------------------------------------
| cron-5df865cd67-8v9gq | 1/1 | Running | 0 | 2019-07-23 15:24:07+00:00 | 100.66.0.5 | 159.127.45.148 | cron |
| db-586cf7d4b6-kgrgs | 1/1 | Running | 0 | 2019-07-23 15:24:07+00:00 | 100.66.0.8 | 159.127.45.148 | db |
| db-migrate-4b1d6ac-757lc | 0/1 | Succeeded | 0 | 2019-07-23 15:24:07+00:00 | 100.66.0.6 | 159.127.45.148 | db-migrate |
| ds-cdh-client-b948b4b8b-qvltp | 1/1 | Running | 0 | 2019-07-23 15:24:09+00:00 | 100.66.0.19 | 159.127.45.148 | ds-cdh-client |
| ds-operator-84d49b8786-mvssl | 2/2 | Running | 2 | 2019-07-23 15:24:09+00:00 | 100.66.0.13 | 159.127.45.148 | ds-operator |
| ds-vfs-7c85df495f-2xbcj | 1/1 | Running | 0 | 2019-07-23 15:24:09+00:00 | 100.66.0.21 | 159.127.45.148 | ds-vfs |
| ingress-controller-ff89786db-cbmpj | 0/1 | CrashLoopBackOff | 243 | 2019-07-23 15:24:07+00:00 | 159.127.45.148 | 159.127.45.148 | ingress-controller |
| livelog-66f5b7986c-ctzsp | 1/1 | Running | 0 | 2019-07-23 15:24:07+00:00 | 100.66.0.7 | 159.127.45.148 | livelog |
| s2i-builder-5b7c868b6d-4lslx | 1/1 | Running | 2 | 2019-07-23 15:24:09+00:00 | 100.66.0.22 | 159.127.45.148 | s2i-builder |
| s2i-builder-5b7c868b6d-m8r28 | 1/1 | Running | 2 | 2019-07-23 15:24:10+00:00 | 100.66.0.18 | 159.127.45.148 | s2i-builder |
| s2i-builder-5b7c868b6d-t56q2 | 1/1 | Running | 2 | 2019-07-23 15:24:09+00:00 | 100.66.0.23 | 159.127.45.148 | s2i-builder |
| s2i-client-77d575bcc8-s98nf | 1/1 | Running | 0 | 2019-07-23 15:24:09+00:00 | 100.66.0.20 | 159.127.45.148 | s2i-client |
| s2i-git-server-7855bcbcc5-prmgc | 1/1 | Running | 0 | 2019-07-23 15:24:07+00:00 | 100.66.0.9 | 159.127.45.148 | s2i-git-server |
| s2i-queue-76fc7f5f88-jwrwf | 1/1 | Running | 0 | 2019-07-23 15:24:07+00:00 | 100.66.0.3 | 159.127.45.148 | s2i-queue |
| s2i-registry-74496d54dc-jkjp4 | 1/1 | Running | 0 | 2019-07-23 15:24:07+00:00 | 100.66.0.15 | 159.127.45.148 | s2i-registry |
| s2i-registry-auth-6f6f658947-8dgp9 | 1/1 | Running | 0 | 2019-07-23 15:24:07+00:00 | 100.66.0.11 | 159.127.45.148 | s2i-registry-auth |
| s2i-server-5b778bcb8d-n92rk | 1/1 | Running | 2 | 2019-07-23 15:24:08+00:00 | 100.66.0.12 | 159.127.45.148 | s2i-server |
| secret-generator-77d7b98444-wwjgt | 1/1 | Running | 0 | 2019-07-23 15:24:08+00:00 | 100.66.0.10 | 159.127.45.148 | secret-generator |
| spark-port-forwarder-q6r9t | 1/1 | Running | 0 | 2019-07-23 15:24:09+00:00 | 159.127.45.148 | 159.127.45.148 | spark-port-forwarder |
| web-75bbb7d4ff-6ngdl | 1/1 | Running | 0 | 2019-07-23 15:24:08+00:00 | 100.66.0.17 | 159.127.45.148 | web |
| web-75bbb7d4ff-g7hf9 | 1/1 | Running | 0 | 2019-07-23 15:24:08+00:00 | 100.66.0.14 | 159.127.45.148 | web |
| web-75bbb7d4ff-jtf8b | 1/1 | Running | 0 | 2019-07-23 15:24:08+00:00 | 100.66.0.16 | 159.127.45.148 | web |
---------------------------------------------------------------------------------------------------
Pods not ready in cluster default ['role/ingress-controller'].
All required Application services are configured.
All required secrets are available.
Persistent volumes are ready.
Persistent volume claims are ready.
Ingresses are ready.
Checking web at url: http://cdsw.ams.com
Web is not yet up.
Cloudera Data Science Workbench is not ready yet&lt;/PRE&gt;</description>
      <pubDate>Wed, 24 Jul 2019 11:47:18 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/cdsw-init-fails-with-kube-dns-issues/m-p/93130#M76619</guid>
      <dc:creator>Shiva_Kondu</dc:creator>
      <dc:date>2019-07-24T11:47:18Z</dc:date>
    </item>
  </channel>
</rss>

