Member since: 05-16-2019
Posts: 8
Kudos Received: 1
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1555 | 11-07-2019 09:54 AM |
11-07-2019 09:54 AM
1 Kudo
Was able to resolve this myself while encountering more and more familiar problems. These are the important take-aways:

- update the rest of CDH to the same version as CM (6.3.1)
- enable IPv6 on all hosts
- prepare the nodes using the CM
- restart CDSW

I probably forgot a couple of steps here, but these are the ones I remember. At least the goal of resolving the issue without a rollback was achieved.
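The IPv6 step in particular can be sanity-checked per host. This is my own minimal probe (the path is the standard Linux sysctl location), not something from the CDSW docs:

```python
# Check that IPv6 is not disabled, which the fix above required on all
# hosts. Reads the standard Linux sysctl file directly.
from pathlib import Path

def ipv6_enabled() -> bool:
    """True if IPv6 is enabled system-wide on this host."""
    p = Path("/proc/sys/net/ipv6/conf/all/disable_ipv6")
    # A missing file means the ipv6 kernel module is not loaded at all.
    return p.exists() and p.read_text().strip() == "0"

print(ipv6_enabled())
```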
11-07-2019 01:06 AM
After upgrading the Cloudera Manager from 6.2.0 to 6.3.1, the CDSW won't come online anymore. In the health logs, this line stands out:
MaxRetryError: HTTPSConnectionPool(host='xxx.xxx.xxx.xxx', port=6443): Max retries exceeded with url: /api/v1/secrets (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f364478d210>: Failed to establish a new connection: [Errno 111] Connection refused',))
The port is closed on the CDSW master node, and I couldn't find any service in the docs that depends on it. Is it Kubernetes?
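For what it's worth, 6443 is the standard Kubernetes API server port, so that guess fits. A quick way to probe it from another node (the hostname below is a placeholder, not the actual host) is:

```python
# Probe a TCP port to confirm whether it accepts connections; 6443 is
# the standard kube-apiserver port that CDSW's Kubernetes layer uses.
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("cdsw-master.example.com", 6443)  # placeholder hostname
```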
Further, from cdsw validate it seems that some chains are missing from iptables:
The following chains are missing from iptables: [KUBE-EXTERNAL-SERVICES, WEAVE-NPC-EGRESS, WEAVE-NPC, WEAVE-NPC-EGRESS-ACCEPT, KUBE-SERVICES, WEAVE-NPC-INGRESS, WEAVE-NPC-EGRESS-DEFAULT, WEAVE-NPC-DEFAULT, WEAVE-NPC-EGRESS-CUSTOM]
However, I cannot remember any step in the installation process that required setting such rules, so I assume this was automated.
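The presence check can also be approximated by hand. This is an illustrative sketch of my own (not cdsw validate's actual code) that diffs the expected chain names against iptables-save output:

```python
# Given the text of `iptables-save`, report which of the chains listed
# in the validation error above are absent from the ruleset.
EXPECTED = {
    "KUBE-SERVICES", "KUBE-EXTERNAL-SERVICES",
    "WEAVE-NPC", "WEAVE-NPC-DEFAULT", "WEAVE-NPC-INGRESS",
    "WEAVE-NPC-EGRESS", "WEAVE-NPC-EGRESS-ACCEPT",
    "WEAVE-NPC-EGRESS-DEFAULT", "WEAVE-NPC-EGRESS-CUSTOM",
}

def missing_chains(iptables_save_output: str) -> set:
    """Chains in EXPECTED that do not appear as ':CHAIN ...' lines."""
    present = {
        line.split()[0].lstrip(":")
        for line in iptables_save_output.splitlines()
        if line.startswith(":")
    }
    return EXPECTED - present

# Usage (requires root):
#   import subprocess
#   out = subprocess.run(["iptables-save"],
#                        capture_output=True, text=True).stdout
#   print(missing_chains(out))
```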
Is there a way to resolve the issue without rolling back the version?
11-04-2019 01:58 AM
Hi, I can confirm that setting the env variable HADOOP_CONF_DIR to only /etc/hadoop/conf in the project settings resolves the issue with CDH 1.5 and CDSW 1.6.1. However, as @Finn points out, this works only on a project basis, and fixing this for the entire workbench seems preferable.
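For anyone scripting the same workaround, here is a trivial in-session check; the single-path value is the one that worked for us, and the function name is just my own:

```python
# Confirm that HADOOP_CONF_DIR holds exactly the single path that
# resolved the issue, rather than a longer colon-separated list.
import os

def hadoop_conf_ok(env=os.environ) -> bool:
    """True if HADOOP_CONF_DIR is set to only /etc/hadoop/conf."""
    return env.get("HADOOP_CONF_DIR") == "/etc/hadoop/conf"

# In a CDSW session:
#   print(hadoop_conf_ok())
```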
10-28-2019 05:35 AM
After deploying an instance of the CDSW and starting to play with the sample project, we've encountered an error while trying to interact with the underlying HDFS as described in the docs: We know it is possible to set and save the environment variables on a project basis, but the same issue occurred nonetheless. Has anyone encountered this before, or does anyone know how to resolve it?
10-24-2019 02:41 AM
This fixed my issue. Thanks!
10-23-2019 07:08 AM
Unfortunately not: under Hadoop Authentication, the only field shown is the one for setting the HADOOP_USER_NAME env variable, as shown in the screenshot: I believe we started to kerberize the cluster some time ago but stopped relatively early in the process due to time constraints. Maybe a setting somewhere in the cluster is to blame?
08-08-2019 05:24 AM
We are running a non-kerberized cluster with around ten nodes. However, the workbench was displaying Kerberos configuration for Hadoop Authentication, because the host of the workbench had a krb5.conf file. As described in the docs (https://www.cloudera.com/documentation/data-science-workbench/1-5-x/topics/cdsw_kerberos.html), we stopped the workbench, deleted the file, and restarted the service. However, now we're encountering this problem when starting new sessions on the workbench. Which configuration has to be reset to get rid of this?
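After the restart, a quick check that the stale Kerberos config is really gone from the workbench host; the path list here is illustrative, not from the docs:

```python
# Report any Kerberos config files still present on this host after
# the cleanup described above.
import os

def krb5_leftovers(paths=("/etc/krb5.conf",)) -> list:
    """Return the krb5 config paths that still exist on this host."""
    return [p for p in paths if os.path.exists(p)]

print(krb5_leftovers())
```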
05-16-2019 07:07 AM
Could you share what your /var/log/cloudera-scm-agent/certmanager.log looked like after the successful installation, for comparison?