CDSW 1.2.0 parcel installation is stuck on startup after upgrade from 1.1.1 package installation

Hi, I tried to upgrade my CDSW 1.1.1 package installation to a 1.2.0 parcel installation, but the Workbench gets stuck on startup in the following state:

2017-10-23 11:11:26,408 INFO cdsw.status:OK: Application running as root check
2017-10-23 11:11:26,428 ERROR cdsw.status:Status check failed for services: [docker, kubelet, cdsw-app, cdsw-host-controller]
2017-10-23 11:11:26,451 INFO cdsw.status:OK: Sysctl params check
2017-10-23 11:11:26,461 INFO cdsw.status:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|         NAME         |   STATUS   |           CREATED-AT          |   VERSION   |   EXTERNAL-IP   |                     OS-IMAGE                    |       KERNEL-VERSION      |   GPU   |   STATEFUL   |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   dn184.pf4h.local   |    True    |   2017-10-23 09:03:02+00:00   |   v1.6.11   |       None      |   Red Hat Enterprise Linux Server 7.2 (Maipo)   |   3.10.0-327.el7.x86_64   |    0    |     True     |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2017-10-23 11:11:26,462 INFO cdsw.status:1/1 nodes are ready.
2017-10-23 11:11:26,492 INFO cdsw.status:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|                     NAME                     |   READY   |    STATUS   |   RESTARTS   |           CREATED-AT          |        POD-IP       |       HOST-IP       |   ROLE   |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|            etcd-dn184.pf4h.local             |    1/1    |   Running   |      0       |   2017-10-23 09:04:08+00:00   |   192.168.239.184   |   192.168.239.184   |   None   |
|       kube-apiserver-dn184.pf4h.local        |    1/1    |   Running   |      0       |   2017-10-23 09:04:05+00:00   |   192.168.239.184   |   192.168.239.184   |   None   |
|   kube-controller-manager-dn184.pf4h.local   |    1/1    |   Running   |      0       |   2017-10-23 09:03:58+00:00   |   192.168.239.184   |   192.168.239.184   |   None   |
|          kube-dns-3911048160-8klgz           |    3/3    |   Running   |      0       |   2017-10-23 09:03:15+00:00   |      100.66.0.4     |   192.168.239.184   |   None   |
|               kube-proxy-7bdr8               |    1/1    |   Running   |      0       |   2017-10-23 09:03:15+00:00   |   192.168.239.184   |   192.168.239.184   |   None   |
|       kube-scheduler-dn184.pf4h.local        |    1/1    |   Running   |      0       |   2017-10-23 09:03:49+00:00   |   192.168.239.184   |   192.168.239.184   |   None   |
|       node-problem-detector-v0.1-4g2j6       |    1/1    |   Running   |      0       |   2017-10-23 09:04:31+00:00   |   192.168.239.184   |   192.168.239.184   |   None   |
|               weave-net-70vj1                |    2/2    |   Running   |      0       |   2017-10-23 09:03:15+00:00   |   192.168.239.184   |   192.168.239.184   |   None   |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2017-10-23 11:11:26,492 INFO cdsw.status:All required pods are ready in cluster kube-system.
2017-10-23 11:11:26,541 INFO cdsw.status:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|                  NAME                  |   READY   |     STATUS    |   RESTARTS   |           CREATED-AT          |        POD-IP       |       HOST-IP       |           ROLE           |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|          cron-962987953-6wcdb          |    1/1    |    Running    |      0       |   2017-10-23 09:04:31+00:00   |      100.66.0.6     |   192.168.239.184   |           cron           |
|           db-875553086-jt2lr           |    1/1    |    Running    |      0       |   2017-10-23 09:04:31+00:00   |      100.66.0.8     |   192.168.239.184   |            db            |
|        db-migrate-d573dd7-twxxk        |    0/1    |   Succeeded   |      0       |   2017-10-23 09:04:31+00:00   |     100.66.0.10     |   192.168.239.184   |        db-migrate        |
|           engine-deps-t9c86            |    1/1    |    Running    |      0       |   2017-10-23 09:04:30+00:00   |      100.66.0.5     |   192.168.239.184   |       engine-deps        |
|   ingress-controller-506514573-k6tk7   |    1/1    |    Running    |      0       |   2017-10-23 09:04:30+00:00   |   192.168.239.184   |   192.168.239.184   |    ingress-controller    |
|        livelog-1589742313-rcpv1        |    1/1    |    Running    |      0       |   2017-10-23 09:04:31+00:00   |      100.66.0.7     |   192.168.239.184   |         livelog          |
|      reconciler-1584998901-z17nv       |    1/1    |    Running    |      0       |   2017-10-23 09:04:31+00:00   |      100.66.0.9     |   192.168.239.184   |        reconciler        |
|       spark-port-forwarder-tql0q       |    1/1    |    Running    |      0       |   2017-10-23 09:04:31+00:00   |   192.168.239.184   |   192.168.239.184   |   spark-port-forwarder   |
|           web-53233289-419hq           |    1/1    |    Running    |      0       |   2017-10-23 09:04:31+00:00   |     100.66.0.12     |   192.168.239.184   |           web            |
|           web-53233289-nhc60           |    1/1    |    Running    |      0       |   2017-10-23 09:04:31+00:00   |     100.66.0.13     |   192.168.239.184   |           web            |
|           web-53233289-sn3cx           |    1/1    |    Running    |      0       |   2017-10-23 09:04:31+00:00   |     100.66.0.11     |   192.168.239.184   |           web            |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2017-10-23 11:11:26,542 INFO cdsw.status:All required pods are ready in cluster default.
2017-10-23 11:11:26,548 INFO cdsw.status:All required Application services are configured.
2017-10-23 11:11:26,553 INFO cdsw.status:All required config maps are ready.
2017-10-23 11:11:26,573 INFO cdsw.status:All required secrets are available.
2017-10-23 11:11:26,577 INFO cdsw.status:Persistent volumes are ready.
2017-10-23 11:11:26,580 INFO cdsw.status:Persistent volume claims are ready.
2017-10-23 11:11:26,585 INFO cdsw.status:Ingresses are ready.
2017-10-23 11:11:26,731 ERROR cdsw.status:Web is not yet up.
2017-10-23 11:11:26,731 INFO cdsw.monitor:{'metric': 'cdsw_status', 'timestampMs': 1508749886731, 'type': 'FAILURE', 'message': 'Web is not yet up.', 'entity': {'type': 'SERVICE', 'name': 'cdsw'}}
2017-10-23 11:11:26,731 INFO cdsw.monitor:Sending status to CM: {'statusRecords': [{'metric': 'cdsw_status', 'timestampMs': 1508749886731, 'type': 'FAILURE', 'message': 'Web is not yet up.', 'entity': {'type': 'SERVICE', 'name': 'cdsw'}}]}
2017-10-23 11:11:26,739 INFO cdsw.monitor:Successfully sent status to CM

However, I can log in to the Workbench. Only sessions cannot be started; they all fail with status 1.
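
In case it helps anyone troubleshooting the same symptom, the failing pieces can be inspected directly with kubectl on the CDSW master node. The pod names below are taken from the status output above and will differ on other clusters; the cdsw CLI command applies only if the CLI is on the PATH (as on a package install):

# List the CDSW application pods and see which ones are not ready.
kubectl get pods -n default -o wide

# Show events, container state, and logs for one of the web pods named above.
kubectl describe pod web-53233289-419hq -n default
kubectl logs web-53233289-419hq -n default

# Collect a diagnostic bundle on a package install; on a parcel install the
# equivalent is available through the CDSW service in Cloudera Manager.
cdsw logs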

 

1 REPLY

Re: CDSW 1.2.0 parcel installation is stuck on startup after upgrade from 1.1.1 package installation

After some configuration changes I found the reason: the proxy settings were causing the problem. I started CDSW without proxies, and now it comes up and works properly. I had used the recommended NO_PROXY settings from the installation guide, which worked with 1.1.1, but they no longer work with 1.2.0. Is there an updated recommendation for this setting? The shape of the configuration I mean is sketched below.
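
For anyone comparing their own setup, the proxy settings in question look roughly like the snippet below. On a package install they live in /etc/cdsw/config/cdsw.conf; on a parcel install the equivalent fields are set through the CDSW service configuration in Cloudera Manager. The proxy host and the NO_PROXY entries here are purely illustrative (the IPs are the master host and the internal pod addresses visible in the status output above), so take the authoritative list from the installation guide for your CDSW version:

# Illustrative values only -- replace the proxy host with your own, and take
# the required NO_PROXY entries from the CDSW installation guide.
HTTP_PROXY="http://proxy.example.com:3128"
HTTPS_PROXY="http://proxy.example.com:3128"
# Internal addresses (localhost, the master host, and the pod network seen in
# the status output) are the ones that need to bypass the proxy.
NO_PROXY="127.0.0.1,localhost,dn184.pf4h.local,192.168.239.184,100.66.0.1,100.66.0.2,100.66.0.3,100.66.0.4,100.66.0.5"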
