Member since 08-12-2014
19 Posts
3 Kudos Received
1 Solution

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 9857 | 04-10-2018 10:38 AM
07-06-2020
10:17 AM
After testing this, we are now seeing these messages for any type of Spark/Scala session in CDSW. The Spark history logs show:

java.io.FileNotFoundException: /tmp/spark-driver.log (Permission denied)

Removed the overrides for spark-env.sh, and now those failing sessions are working again. More testing needed.
07-04-2020
10:43 PM
HADOOP_CONF_DIR=$SPARK_CONF_DIR/yarn-conf/*
HIVE_CONF_DIR=${HIVE_CONF_DIR:-/etc/hive/conf}
if [ -d "$HIVE_CONF_DIR" ]; then
  HADOOP_CONF_DIR="$HADOOP_CONF_DIR:$HIVE_CONF_DIR"
fi

I'm currently testing the above setting. It is essentially the same as what was already in the original spark-env.sh; I just modified the first line to not use the :- default for HADOOP_CONF_DIR. This approach is an alternative to using static values.
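For reference (a minimal sketch, not from the original post, with made-up paths), the difference between the two forms is shell parameter expansion: ${VAR:-default} keeps an already-set value and only falls back to the default when the variable is unset or empty.

```shell
# ${VAR:-default}: use the existing value if set and non-empty,
# otherwise substitute the default after ":-".
unset HIVE_CONF_DIR
HIVE_CONF_DIR=${HIVE_CONF_DIR:-/etc/hive/conf}
echo "$HIVE_CONF_DIR"    # falls back to /etc/hive/conf

# With a value already set, the default is ignored.
HIVE_CONF_DIR=/opt/custom/hive-conf
HIVE_CONF_DIR=${HIVE_CONF_DIR:-/etc/hive/conf}
echo "$HIVE_CONF_DIR"    # keeps /opt/custom/hive-conf
```

Dropping the :- fallback, as in the first line above, forces the path to be derived from $SPARK_CONF_DIR instead of whatever happens to be exported in the environment.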
04-22-2020
10:38 PM
Good article, and it is still relevant for me. I would also add the following for Kudu tables:

create table hive.TABLE_PARAMS_BKUP as select * from hive.TABLE_PARAMS;

UPDATE hive.TABLE_PARAMS
SET PARAM_VALUE = 'masterhostname01.fqdn,masterhostname02.fqdn,masterhostname03.fqdn'
WHERE PARAM_KEY = 'kudu.master_addresses'
AND PARAM_VALUE like '%oldmasterhostname%';

And also for Sentry URIs that contain the HDFS namespace:

create table sentry.SENTRY_DB_PRIVILEGE_BKUP as select * from sentry.SENTRY_DB_PRIVILEGE;

UPDATE sentry.SENTRY_DB_PRIVILEGE
SET URI = REPLACE(URI, 'hdfs://oldcluster-ns', 'hdfs://newcluster-ns')
WHERE URI like '%oldcluster-ns%';
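As a quick sanity check of what that REPLACE does (a sketch with a made-up URI, run outside the metastore database):

```shell
# Simulate REPLACE(URI, 'hdfs://oldcluster-ns', 'hdfs://newcluster-ns')
# on a sample privilege URI; the path after the nameservice is untouched.
uri='hdfs://oldcluster-ns/data/warehouse/sales'
new_uri=$(printf '%s' "$uri" | sed 's|hdfs://oldcluster-ns|hdfs://newcluster-ns|')
echo "$new_uri"    # hdfs://newcluster-ns/data/warehouse/sales
```

Note the replacement string carries no trailing slash; adding one would produce a double slash in every rewritten URI, since the stored paths already begin with "/".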
07-11-2018
08:44 AM
Thank you for the response. Any idea on roadmap dates or targeted quarters?
06-29-2018
12:47 PM
1 Kudo
Considering that Hive is more mature than Impala, and that many of your customers (including me) are most likely considering switching from Hive to Impala, wouldn't it make sense for you to support this interoperability? We have over 7,000 tables in our data lake, so just coming back with "well, you'll just have to convert those tables to use strings or timestamps instead" is not a solution that works for us. Your sales force and product managers are pushing Impala as a solution for fast BI and analytics needs. With this hurdle in place, it's pretty much a non-starter. Folks will just continue to pump their data into a traditional database for BI and analytics. With 3.0 just released, how in the world do you not support the DATE data type?
Labels:
- Apache Impala
04-10-2018
10:38 AM
I now have CDSW up and running. I'm not sure which one of these did the trick, or if there was some other force at play. We found a bug in ip6tables.service (RHEL 7.4) that was producing error messages like this:

Apr 10 10:06:56 [redacted] systemd[1]: [/usr/lib/systemd/system/ip6tables.service:3] Failed to add dependency on syslog.target,iptables.service, ignoring: Invalid argument

so we changed the After parameter from comma-delimited to space-delimited.

before change: After=syslog.target,iptables.service
after change: After=syslog.target iptables.service

Bug link: https://bugzilla.redhat.com/show_bug.cgi?id=1499367

Here are the commands that were run:

edit /usr/lib/systemd/system/ip6tables.service
systemctl stop iptables
systemctl disable iptables
systemctl stop ip6tables
systemctl disable ip6tables
/usr/bin/cdsw reset
/usr/bin/cdsw init
04-09-2018
01:21 PM
All, I'm still facing the same issue. If any of you have the kube-dns pod running with all 3 containers running successfully (kubedns, dnsmasq, and sidecar), can you run the following and reply back with the output? It would be greatly appreciated.

Get the pod names from the output of this command:

kubectl get pods --all-namespaces

then get the CLUSTER-IP from this command:

kubectl get services --sort-by=.metadata.name

then execute nslookup commands on the running pods, e.g.:

kubectl exec <kube-dns-pod-name> -c sidecar --namespace=kube-system -- nslookup <CLUSTER-IP>
kubectl exec <kube-dns-pod-name> -c dnsmasq --namespace=kube-system -- nslookup <CLUSTER-IP>
kubectl exec <kube-dns-pod-name> -c kubedns --namespace=kube-system -- nslookup <CLUSTER-IP>

e.g.

kubectl exec kube-dns-3911048160-lhtvm -c kubedns --namespace=kube-system -- nslookup 100.77.0.1

I may be barking up the wrong tree, but I'm trying to figure out why my containers time out when trying to connect to https://100.77.0.1:443. Also, if you could post a copy of your /etc/cdsw/config/cdsw.conf (with sensitive information redacted or masked), that would be great.
03-29-2018
09:05 AM
kubedns and dnsmasq both appear to be failing.

sudo /usr/bin/cdsw init
...
Waiting for kube-system cluster to come up. This could take a few minutes...
ERROR:: Unable to bring up kube-system cluster.: 1
ERROR:: Unable to start kubernetes system pods.: 1
...

$ sudo kubectl --namespace=kube-system get pods
NAME READY STATUS RESTARTS AGE
etcd-udodapp05 1/1 Running 0 16m
kube-apiserver-udodapp05 1/1 Running 0 16m
kube-controller-manager-udodapp05 1/1 Running 0 16m
kube-dns-3911048160-99klb 2/3 CrashLoopBackOff 13 15m
kube-proxy-02z9b 1/1 Running 0 15m
kube-scheduler-udodapp05 1/1 Running 0 15m
weave-net-4fzw6 2/2 Running 0 15m

$ cat cdsw.conf
JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera/
MASTER_IP=[redacted]
DOMAIN=[redacted]
DOCKER_BLOCK_DEVICES=/dev/mapper/imgvg-imglv
APPLICATION_BLOCK_DEVICE=/dev/mapper/appvg-applv
NO_PROXY="127.0.0.1,localhost,[redacted],100.66.0.1,100.66.0.2,100.66.0.3,100.66.0.4,100.66.0.5,100.66.0.6,100.66.0.7,100.66.0.8,100.66.0.9,100.66.0.10,100.66.0.11,100.66.0.12,100.66.0.13,100.66.0.14,100.66.0.15,100.66.0.16,100.66.0.17,100.66.0.18,100.66.0.19,100.66.0.20,100.66.0.21,100.66.0.22,100.66.0.23,100.66.0.24,100.66.0.25,100.66.0.26,100.66.0.27,100.66.0.28,100.66.0.29,100.66.0.30,100.66.0.31,100.66.0.32,100.66.0.33,100.66.0.34,100.66.0.35,100.66.0.36,100.66.0.37,100.66.0.38,100.66.0.39,100.66.0.40,100.66.0.41,100.66.0.42,100.66.0.43,100.66.0.44,100.66.0.45,100.66.0.46,100.66.0.47,100.66.0.48,100.66.0.49,100.66.0.50,100.77.0.129,100.77.0.130,100.77.0.1,100.77.0.10"

$ sudo kubectl logs -f --since=1h po/kube-dns-3911048160-99klb dnsmasq --namespace=kube-system
I0320 22:03:25.264188 1 main.go:76] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] true} /etc/k8s/dns/dnsmasq-nanny 10000000000}
I0320 22:03:25.265432 1 nanny.go:86] Starting dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053]
I0320 22:03:25.298956 1 nanny.go:111]
I0320 22:03:25.298956 1 nanny.go:108] dnsmasq[25]: started, version 2.78-security-prerelease cachesize 1000
W0320 22:03:25.299025 1 nanny.go:112] Got EOF from stdout
I0320 22:03:25.299031 1 nanny.go:108] dnsmasq[25]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify
I0320 22:03:25.299044 1 nanny.go:108] dnsmasq[25]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
I0320 22:03:25.299052 1 nanny.go:108] dnsmasq[25]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0320 22:03:25.299055 1 nanny.go:108] dnsmasq[25]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0320 22:03:25.299065 1 nanny.go:108] dnsmasq[25]: reading /etc/resolv.conf
I0320 22:03:25.299068 1 nanny.go:108] dnsmasq[25]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
I0320 22:03:25.299072 1 nanny.go:108] dnsmasq[25]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0320 22:03:25.299076 1 nanny.go:108] dnsmasq[25]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0320 22:03:25.299079 1 nanny.go:108] dnsmasq[25]: using nameserver [redacted]#53
I0320 22:03:25.299082 1 nanny.go:108] dnsmasq[25]: using nameserver [redacted]#53
I0320 22:03:25.299085 1 nanny.go:108] dnsmasq[25]: using nameserver [redacted]#53
I0320 22:03:25.299089 1 nanny.go:108] dnsmasq[25]: using nameserver [redacted]#53
I0320 22:03:25.299092 1 nanny.go:108] dnsmasq[25]: read /etc/hosts - 7 addresses

$ sudo kubectl logs -f --since=1h po/kube-dns-3911048160-99klb kubedns --namespace=kube-system
I0320 21:58:22.617903 1 dns.go:48] version: 1.14.4-2-g5584e04
I0320 21:58:22.619053 1 server.go:70] Using configuration read from directory: /kube-dns-config with period 10s
I0320 21:58:22.619096 1 server.go:113] FLAG: --alsologtostderr="false"
I0320 21:58:22.619108 1 server.go:113] FLAG: --config-dir="/kube-dns-config"
I0320 21:58:22.619114 1 server.go:113] FLAG: --config-map=""
I0320 21:58:22.619118 1 server.go:113] FLAG: --config-map-namespace="kube-system"
I0320 21:58:22.619121 1 server.go:113] FLAG: --config-period="10s"
I0320 21:58:22.619129 1 server.go:113] FLAG: --dns-bind-address="0.0.0.0"
I0320 21:58:22.619132 1 server.go:113] FLAG: --dns-port="10053"
I0320 21:58:22.619137 1 server.go:113] FLAG: --domain="cluster.local."
I0320 21:58:22.619142 1 server.go:113] FLAG: --federations=""
I0320 21:58:22.619148 1 server.go:113] FLAG: --healthz-port="8081"
I0320 21:58:22.619151 1 server.go:113] FLAG: --initial-sync-timeout="1m0s"
I0320 21:58:22.619155 1 server.go:113] FLAG: --kube-master-url=""
I0320 21:58:22.619162 1 server.go:113] FLAG: --kubecfg-file=""
I0320 21:58:22.619165 1 server.go:113] FLAG: --log-backtrace-at=":0"
I0320 21:58:22.619171 1 server.go:113] FLAG: --log-dir=""
I0320 21:58:22.619175 1 server.go:113] FLAG: --log-flush-frequency="5s"
I0320 21:58:22.619180 1 server.go:113] FLAG: --logtostderr="true"
I0320 21:58:22.619183 1 server.go:113] FLAG: --nameservers=""
I0320 21:58:22.619186 1 server.go:113] FLAG: --stderrthreshold="2"
I0320 21:58:22.619189 1 server.go:113] FLAG: --v="2"
I0320 21:58:22.619192 1 server.go:113] FLAG: --version="false"
I0320 21:58:22.619202 1 server.go:113] FLAG: --vmodule=""
I0320 21:58:22.619292 1 server.go:176] Starting SkyDNS server (0.0.0.0:10053)
I0320 21:58:22.619587 1 server.go:198] Skydns metrics enabled (/metrics:10055)
I0320 21:58:22.619599 1 dns.go:147] Starting endpointsController
I0320 21:58:22.619603 1 dns.go:150] Starting serviceController
I0320 21:58:22.619713 1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0320 21:58:22.619737 1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0320 21:58:23.119838 1 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...
I0320 21:58:23.619844 1 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...
E0320 21:58:23.623059 1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://100.77.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 100.77.0.1:443: getsockopt: connection refused
E0320 21:58:23.623077 1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://100.77.0.1:443/api/v1/services?resourceVersion=0: dial tcp 100.77.0.1:443: getsockopt: connection refused
I0320 21:58:24.119875 1 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...
I0320 21:58:24.619805 1 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...
I0320 21:58:25.119883 1 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...
I0320 21:58:25.619870 1 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...
..............
I0320 21:59:22.119836 1 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...
F0320 21:59:22.619832 1 dns.go:168] Timeout waiting for initialization
08-24-2016
10:51 AM
Thank you for the quick response, Ben. We will be upgrading soon, so it is good to know that this will be fixed then. For now, I will just keep using native mysql commands for syncing. Thanks, /* Joey */
08-24-2016
10:33 AM
Could not find anything related to this, or others that are experiencing the same issue.

java.lang.IllegalArgumentException: Pathname /user/hdfs/.cm/hive/2016-08-24-16-52-30+00:00-8590 from /user/hdfs/.cm/hive/2016-08-24-16-52-30+00:00-8590 is not a valid DFS filename.

I am using CDH 5.4.7. I can't find a way to change the settings for how this task generates the log name to remove the "+" from the name. For now I am just going to use mysql functions to back up and restore the Hive Metastore database on another cluster, to sync them. Thanks, /* Joey */
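Since the "+" comes from the timezone offset in the generated timestamp, a DFS-safe variant of the same name can be produced with tr (a sketch of the renaming idea only; it is not a CDH setting, and the name below is taken from the error above):

```shell
# Map the '+' and ':' characters in the generated name to '-',
# yielding a path component HDFS will accept.
name='2016-08-24-16-52-30+00:00-8590'
safe_name=$(printf '%s' "$name" | tr '+:' '--')
echo "$safe_name"    # 2016-08-24-16-52-30-00-00-8590
```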