Member since
01-27-2019
52
Posts
0
Kudos Received
0
Solutions
01-01-2021
07:46 PM
Cert details. [root@azure-r01wn01 ~]# openssl s_client -connect $(grep "server_host" /etc/cloudera-scm-agent/config.ini | sed s/server_host=//):7182 </dev/null | openssl x509 -text -noout
depth=0 C = US, ST = California, L = Los Angeles, O = MDS, OU = MDS, CN = srv-c01.mws.mds.xyz
verify error:num=18:self signed certificate
verify return:1
depth=0 C = US, ST = California, L = Los Angeles, O = MDS, OU = MDS, CN = srv-c01.mws.mds.xyz
verify return:1
140441195849616:error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate:s3_pkt.c:1493:SSL alert number 42
140441195849616:error:140790E5:SSL routines:ssl23_write:ssl handshake failure:s23_lib.c:177:
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 1594172762 (0x5f05255a)
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=US, ST=California, L=Los Angeles, O=MDS, OU=MDS, CN=srv-c01.mws.mds.xyz
Validity
Not Before: Jul 19 02:46:18 2019 GMT
Not After : Jul 16 02:46:18 2029 GMT
Subject: C=US, ST=California, L=Los Angeles, O=MDS, OU=MDS, CN=srv-c01.mws.mds.xyz
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:c5:a9:00:83:12:9e:02:86:32:4e:2b:a7:c6:1a:
6b:9d:e3:56:00:53:22:01:d8:db:83:cd:14:79:6a:
85:27:20:f6:5d:86:0e:0b:af:df:46:dd:c3:23:72:
f0:bf:38:3e:cd:9f:92:e6:65:81:7b:26:32:50:fc:
81:0e:7b:dd:b4:61:6f:a7:56:ec:c8:fe:89:72:ec:
e5:e0:63:61:92:77:0b:36:41:98:93:14:6d:53:a0:
24:fb:fb:77:40:98:5b:2f:d2:3c:65:4f:8b:65:33:
e5:db:14:ce:01:d2:4f:9f:e4:c6:c8:35:50:09:a2:
f3:48:0a:ac:06:fd:66:42:30:10:a4:e7:fa:a8:2b:
0b:2b:ef:ce:83:82:4e:0d:86:34:ce:0c:8d:0c:a2:
f5:88:4d:38:9f:3b:dd:2e:6e:e3:8c:60:69:da:8d:
a4:d4:db:d5:cd:26:91:95:ca:a2:47:de:3c:f3:8f:
52:b8:e5:b0:09:26:af:77:fb:a3:5b:40:f6:e8:1b:
66:d7:b7:1b:da:2c:6c:34:99:76:de:c4:9b:80:69:
25:d5:12:2f:cb:9b:c5:d2:7e:15:a7:50:5f:54:5c:
9d:6b:8c:c0:9c:03:3f:96:f3:8a:2c:a6:05:ec:a4:
d3:83:84:61:13:da:57:6d:e8:8c:93:d9:40:38:24:
96:c9
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Extended Key Usage:
TLS Web Server Authentication, TLS Web Client Authentication, Code Signing, E-mail Protection, Time Stamping, OCSP Signing
X509v3 Subject Alternative Name:
DNS:srv-c01.mws.mds.xyz, DNS:cm-r01nn01.mws.mds.xyz, DNS:cm-r01nn02.mws.mds.xyz
X509v3 Subject Key Identifier:
F6:EA:97:6F:82:20:84:75:E9:63:71:2F:16:D6:41:8B:64:05:07:0D
Signature Algorithm: sha256WithRSAEncryption
4f:35:6d:18:dc:5c:4a:65:db:8c:62:75:0b:f8:da:2b:14:72:
22:f7:3a:ba:15:17:58:41:46:3b:6b:6e:40:db:6b:be:e5:07:
82:d1:37:0a:d6:4e:96:14:f6:87:ca:ff:d3:5f:a9:94:de:81:
e7:a1:28:94:0a:19:0b:f4:dc:ed:0a:a5:77:78:20:53:3f:3f:
03:54:67:a0:c4:a1:de:49:7d:e8:fc:2d:76:bd:7b:a5:98:cd:
45:7e:ba:21:79:e2:91:7d:f3:e9:d6:5d:b7:91:34:30:3a:e4:
3a:38:e9:33:9b:26:2e:3e:6c:c9:3d:5d:48:81:cb:35:2f:ff:
7a:ff:22:c2:f8:b5:a2:01:d0:54:7f:f2:08:33:89:78:80:af:
72:2d:d7:df:61:f0:4a:7f:d2:19:0d:c6:0c:51:ee:4e:c1:ed:
8d:8b:4f:82:17:47:6b:03:1a:f2:8b:00:cc:17:8a:75:ca:72:
c0:a4:a7:12:87:32:16:89:15:2c:80:d1:07:fd:37:e8:bf:f5:
87:6b:a2:dd:9d:a4:c4:2c:68:f8:d9:15:dd:3c:40:6d:8b:e0:
6d:c4:87:6d:39:a9:6b:91:f6:0a:bc:7c:63:e7:f0:37:cb:7a:
5f:35:6c:5c:f9:bb:cb:58:1a:b9:9c:49:ab:24:ac:2a:c9:2d:
3f:b2:2f:68
[root@azure-r01wn01 ~]#
[root@azure-r01wn01 ~]#
[root@azure-r01wn01 ~]#
[root@azure-r01wn01 ~]#
[root@azure-r01wn01 ~]# openssl s_client -connect $(grep -v '^#' /etc/cloudera-scm-agent/config.ini | grep "server_host=" | sed s/server_host=//):7182 -CAfile $(grep -v '^#' /etc/cloudera-scm-agent/config.ini | grep "verify_cert_file=" |sed s/verify_cert_file=//) -verify_hostname $(grep -v '^#' /etc/cloudera-scm-agent/config.ini | grep "server_host=" | sed s/server_host=//)</dev/null
CONNECTED(00000003)
depth=0 C = US, ST = California, L = Los Angeles, O = MDS, OU = MDS, CN = srv-c01.mws.mds.xyz
verify return:1
140276232329104:error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate:s3_pkt.c:1493:SSL alert number 42
140276232329104:error:140790E5:SSL routines:ssl23_write:ssl handshake failure:s23_lib.c:177:
---
Certificate chain
0 s:/C=US/ST=California/L=Los Angeles/O=MDS/OU=MDS/CN=srv-c01.mws.mds.xyz
i:/C=US/ST=California/L=Los Angeles/O=MDS/OU=MDS/CN=srv-c01.mws.mds.xyz
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIEHDCCAwSgAwIBAgIEXwUlWjANBgkqhkiG9w0BAQsFADByMQswCQYDVQQGEwJV
UzETMBEGA1UECBMKQ2FsaWZvcm5pYTEUMBIGA1UEBxMLTG9zIEFuZ2VsZXMxDDAK
BgNVBAoTA01EUzEMMAoGA1UECxMDTURTMRwwGgYDVQQDExNzcnYtYzAxLm13cy5t
ZHMueHl6MB4XDTE5MDcxOTAyNDYxOFoXDTI5MDcxNjAyNDYxOFowcjELMAkGA1UE
BhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFDASBgNVBAcTC0xvcyBBbmdlbGVz
MQwwCgYDVQQKEwNNRFMxDDAKBgNVBAsTA01EUzEcMBoGA1UEAxMTc3J2LWMwMS5t
d3MubWRzLnh5ejCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMWpAIMS
ngKGMk4rp8Yaa53jVgBTIgHY24PNFHlqhScg9l2GDguv30bdwyNy8L84Ps2fkuZl
gXsmMlD8gQ573bRhb6dW7Mj+iXLs5eBjYZJ3CzZBmJMUbVOgJPv7d0CYWy/SPGVP
i2Uz5dsUzgHST5/kxsg1UAmi80gKrAb9ZkIwEKTn+qgrCyvvzoOCTg2GNM4MjQyi
9YhNOJ873S5u44xgadqNpNTb1c0mkZXKokfePPOPUrjlsAkmr3f7o1tA9ugbZte3
G9osbDSZdt7Em4BpJdUSL8ubxdJ+FadQX1RcnWuMwJwDP5bziiymBeyk04OEYRPa
V23ojJPZQDgklskCAwEAAaOBuTCBtjBFBgNVHSUEPjA8BggrBgEFBQcDAQYIKwYB
BQUHAwIGCCsGAQUFBwMDBggrBgEFBQcDBAYIKwYBBQUHAwgGCCsGAQUFBwMJME4G
A1UdEQRHMEWCE3Nydi1jMDEubXdzLm1kcy54eXqCFmNtLXIwMW5uMDEubXdzLm1k
cy54eXqCFmNtLXIwMW5uMDIubXdzLm1kcy54eXowHQYDVR0OBBYEFPbql2+CIIR1
6WNxLxbWQYtkBQcNMA0GCSqGSIb3DQEBCwUAA4IBAQBPNW0Y3FxKZduMYnUL+Nor
FHIi9zq6FRdYQUY7a25A22u+5QeC0TcK1k6WFPaHyv/TX6mU3oHnoSiUChkL9Nzt
CqV3eCBTPz8DVGegxKHeSX3o/C12vXulmM1FfroheeKRffPp1l23kTQwOuQ6OOkz
myYuPmzJPV1Igcs1L/96/yLC+LWiAdBUf/IIM4l4gK9yLdffYfBKf9IZDcYMUe5O
we2Ni0+CF0drAxryiwDMF4p1ynLApKcShzIWiRUsgNEH/Tfov/WHa6LdnaTELGj4
2RXdPEBti+BtxIdtOalrkfYKvHxj5/A3y3pfNWxc+bvLWBq5nEmrJKwqyS0/si9o
-----END CERTIFICATE-----
.
.
.
.
.
.
.
---
SSL handshake has read 18243 bytes and written 138 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-GCM-SHA384
Session-ID: 5FEFEAC965EF94EEEA66EA13E233E18323258810C92903D96B3A57571739DEB4
Session-ID-ctx:
Master-Key: 6F693441CEDC0AF262F25FC41236CBE03B59BF78CF3FBD13A574C5BCD3095680985C7F5D2BFBDFA67AC932359C519E37
Key-Arg : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
Start Time: 1609558729
Timeout : 300 (sec)
Verify return code: 0 (ok)
---
[root@azure-r01wn01 ~]# # grep -Ei srv /etc/cloudera-scm-agent/config.ini
server_host=srv-c01.mws.mds.xyz
... View more
12-28-2020
11:46 PM
Hello, How is Cloudera determining the 'host' part in this message? Running CDH 6.3 and receiving the following: ERROR Failed fetching torrent: Peer certificate subjectAltName does not match host, expected 10.3.0.134, got DNS:srv-c01.mws.mds.xyz, DNS:cm-r01nn01.mws.mds.xyz, DNS:cm-r01nn02.mws.mds.xyz Yet the reverse and forward lookup work just fine. Why is it receiving an IP for the host? Not able to make heads or tails out of the code yet. vi /opt/cloudera/cm-agent/lib/python2.7/site-packages/M2Crypto/httpslib.py +69
vi /opt/cloudera/cm-agent/lib/python2.7/site-packages/M2Crypto/SSL/Connection.py +313
vi /opt/cloudera/cm-agent/lib/python2.7/site-packages/M2Crypto/SSL/Checker.py +125 [29/Dec/2020 02:24:42 +0000] 20442 Thread-13 downloader ERROR Failed fetching torrent: Peer certificate subjectAltName does not match host, expected 10.3.0.134, got DNS:srv-c01.mws.mds.xyz, DNS:cm-r01nn01.mws.mds.xyz, DNS:cm-r01nn02.mws.mds.xyz
Traceback (most recent call last):
File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/cmf/downloader.py", line 264, in download
cmf.https.ssl_url_opener.fetch_to_file(torrent_url, torrent_file)
File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/cmf/https.py", line 193, in fetch_to_file
resp = self.open(req_url)
File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/cmf/https.py", line 188, in open
return self.opener(url, *pargs, **kwargs)
File "/usr/lib64/python2.7/urllib2.py", line 431, in open
response = self._open(req, data)
File "/usr/lib64/python2.7/urllib2.py", line 449, in _open
'_open', req)
File "/usr/lib64/python2.7/urllib2.py", line 409, in _call_chain
result = func(*args)
File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/cmf/https.py", line 179, in https_open
return self.do_open(opener, req)
File "/usr/lib64/python2.7/urllib2.py", line 1211, in do_open
h.request(req.get_method(), req.get_selector(), req.data, headers)
File "/usr/lib64/python2.7/httplib.py", line 1041, in request
self._send_request(method, url, body, headers)
File "/usr/lib64/python2.7/httplib.py", line 1075, in _send_request
self.endheaders(body)
File "/usr/lib64/python2.7/httplib.py", line 1037, in endheaders
self._send_output(message_body)
File "/usr/lib64/python2.7/httplib.py", line 881, in _send_output
self.send(msg)
File "/usr/lib64/python2.7/httplib.py", line 843, in send
self.connect()
File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/M2Crypto/httpslib.py", line 69, in connect
sock.connect((self.host, self.port))
File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/M2Crypto/SSL/Connection.py", line 313, in connect
if not check(self.get_peer_cert(), self.addr[0]):
File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/M2Crypto/SSL/Checker.py", line 125, in __call__
fieldName='subjectAltName')
WrongHost: Peer certificate subjectAltName does not match host, expected 10.3.0.134, got DNS:srv-c01.mws.mds.xyz, DNS:cm-r01nn01.mws.mds.xyz, DNS:cm-r01nn02.mws.mds.xyz
^C
root / var log cloudera-scm-agent dig -x 10.3.0.134
; <<>> DiG 9.9.4-RedHat-9.9.4-73.el7_6 <<>> -x 10.3.0.134
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 48590
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 3
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;134.0.3.10.in-addr.arpa. IN PTR
;; ANSWER SECTION:
134.0.3.10.in-addr.arpa. 86400 IN PTR cm-r01nn01.mws.mds.xyz.
;; AUTHORITY SECTION:
0.3.10.in-addr.arpa. 86400 IN NS idmipa03.mws.mds.xyz.
0.3.10.in-addr.arpa. 86400 IN NS idmipa04.mws.mds.xyz.
;; ADDITIONAL SECTION:
idmipa03.mws.mds.xyz. 1200 IN A 192.168.0.154
idmipa04.mws.mds.xyz. 1200 IN A 192.168.0.155
;; Query time: 1 msec
;; SERVER: 192.168.0.51#53(192.168.0.51)
;; WHEN: Tue Dec 29 02:24:57 EST 2020
;; MSG SIZE rcvd: 166
root / var log cloudera-scm-agent dig cm-r01nn01
; <<>> DiG 9.9.4-RedHat-9.9.4-73.el7_6 <<>> cm-r01nn01
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 37372
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;cm-r01nn01. IN A
;; AUTHORITY SECTION:
. 9788 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2020122900 1800 900 604800 86400
;; Query time: 1 msec
;; SERVER: 192.168.0.51#53(192.168.0.51)
;; WHEN: Tue Dec 29 02:25:04 EST 2020
;; MSG SIZE rcvd: 114
root / var log cloudera-scm-agent dig cm-r01nn01.mws.mds.xyz
; <<>> DiG 9.9.4-RedHat-9.9.4-73.el7_6 <<>> cm-r01nn01.mws.mds.xyz
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20538
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 3
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;cm-r01nn01.mws.mds.xyz. IN A
;; ANSWER SECTION:
cm-r01nn01.mws.mds.xyz. 1200 IN A 10.3.0.134
;; AUTHORITY SECTION:
mws.mds.xyz. 86400 IN NS idmipa03.mws.mds.xyz.
mws.mds.xyz. 86400 IN NS idmipa04.mws.mds.xyz.
;; ADDITIONAL SECTION:
idmipa03.mws.mds.xyz. 1200 IN A 192.168.0.154
idmipa04.mws.mds.xyz. 1200 IN A 192.168.0.155
;; Query time: 1 msec
;; SERVER: 192.168.0.51#53(192.168.0.51)
;; WHEN: Tue Dec 29 02:25:08 EST 2020
;; MSG SIZE rcvd: 145
root / var log cloudera-scm-agent nslookup cm-r01nn01
Server: 192.168.0.51
Address: 192.168.0.51#53
Name: cm-r01nn01.mws.mds.xyz
Address: 10.3.0.134
root / var log cloudera-scm-agent nslookup 10.3.0.134
Server: 192.168.0.51
Address: 192.168.0.51#53
134.0.3.10.in-addr.arpa name = cm-r01nn01.mws.mds.xyz.
root / var log cloudera-scm-agent cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.3.0.134 cm-r01nn01.mws.mds.xyz cm-r01nn01
root / var log cloudera-scm-agent
... View more
Labels:
- Labels:
-
Cloudera Manager
10-28-2020
06:58 PM
Hey All, Also, where is the /blockScannerReport? Cloudera lists ports 9865 for dfs.datanode.https.address but no process is running on that port. Nor is there any service on the standard port of 50075. The only thing listed is port 1006 for dfs.datanode.http.address. However, that asks for a password and the default Cloudera password isn't working. Need to get some stats on block scans however not able too at the moment. Thx, TK
... View more
10-19-2020
06:46 PM
Hello, Is there a way to validate HDFS data directories to ensure missing blocks won't be reported before HDFS or the rest of the CDH services get started up? Let's say I have 10 racks and 10 workers per rack. I want to reboot each worker but before I do, I would like to run data disk integrity checks before and after. If they are different after a reboot, then I know filesystem corruption occurred and I really shouldn't begin restarting new workers. Assuming rack awareness is enabled, what commands could I run offline to give me 100% certainty that no missing blocks will be reported on a given node? What will give me that 100% certainty? Running storage commands to determine drive failures? Smartctl? File checksums (sha256sum)? Filesystem integrity checks (fsck)? What would be the equivalent of the HDFS missing blocks check but without HDFS or any other CDH services running? I would also like to know how the Cloudera missing blocks check works? Does the Cloudera missing blocks check not only verify that the file block exists on the correct data disk and directory but also that the checksum of the file and it's integrity match what is expected? Thx,
... View more
Labels:
- Labels:
-
HDFS
05-27-2020
07:24 PM
Hello All, I would like to be able to get the following metrics via Cloudera API or DB SQL query? Is this possible? If so, how could I do this via: 1) API Calls 2) SQL Query (if stored in a DB) 3) Or retrieve it from the Host Monitor Timeseries files? Ex. [root@cm-r01en02 stream_2020-05-27T16:30:29.974Z]# pwd /var/lib/cloudera-host-monitor/ts/stream/partitions/stream_2020-05-27T16:30:29.974Z I'm looking for data such as this or even just the tmpfs usage for all hosts: File Systems Disk Mount Point Usage nas-c01.unix.dom.com:/n/mnt /n/mnt 10.2 GiB/127.9 GiB /dev/mapper/centos-home /home 32.2 MiB/19.5 GiB /dev/sda1 /boot 255.0 MiB/496.7 MiB cm_processes /run/cloudera-scm-agent/process 52.3 MiB/5.8 GiB /dev/mapper/centos-root / 31.7 GiB/60.0 GiB tmpfs /run 231.1 MiB/5.8 GiB tmpfs /dev/shm 0 B/5.8 GiB Thx, TK
... View more
Labels:
- Labels:
-
Cloudera Manager
05-18-2020
08:40 PM
Realizing I didn't close this off. The suggestions in this post worked perfectly to move me along and eventually setup full TLS encryption. Thanks very much guy's for the help. Very much appreciated!
... View more
03-14-2020
10:56 AM
How to set zookeeper.security.auth_to_local in the zookeeper configuration?
Thx,
... View more
- Tags:
- cdh
- configuration
- zk
Labels:
- Labels:
-
Apache Zookeeper
03-14-2020
10:53 AM
Given these hostnames exist for this one server: 1) host01.dom1.com 2) althost01.dom2.com 3) althost01.dom3.com I added the entries into /etc/hosts like this: 127.0.0.1localhost localhost.localdomain localhost4 localhost.localdomain4 1.2.3.4 host01.dom1.com althost01.dom2.com althost01 . (rest of servers) . . . Reverse lookup on 1.2.3.4 still returns: althost01.dom3.com wich is not one of the entries in /etc/hosts . So thinking there must be something else that directly sends it over to the DNS, if not /etc/resolv.conf entries as per my earlier suggestion. Thx
... View more
03-11-2020
09:32 AM
That still didn't work. I tried to add various entries into /etc/hosts but no luck, even with nsswitch.conf hosts set to files, when doing reverse lookups, the request is still sent to the /etc/resolv.conf nameservers which is then sent up to be resolved by the DNS. Do you have specific /etc/hosts example that will work and properly return reverse PTR records? Thx,
... View more
03-09-2020
07:04 PM
Hi
I have an example scenario where a given VM has 3 hostnames/aliases:
1) host01.dom.com
2) althost01.dom.com
3) host-h01.dom.com
All these point to the same IP, 123.123.123.123 . When doing a reverse DNS lookup, only the third one is returned. This causes issues, of course, if the hosts have been added to CM using the 1) name.
I've no access to the DNS server.
Is there a way to change the default CDP / CDH / CM behaviour or lookup so a reverse lookup by the above IP returns hostname 1) instead of 3)?
In other ways, is there a way to 'trick' Cloudera into thinking that the hostname is 1)? Or am I locked into using the 3) option?
The reason for this is if a scenario arises where 3) is not a very friendly name to work with and we need something friendlier but also where we have no control of the DNS from our side.
Thx,
... View more
Labels:
09-19-2019
05:50 PM
Not the best approach to getting rid of these messages but it gave me what I wanted. I set highest logging level to ERROR instead so everything else is not printed: tom@mds.xyz@cm-r01en01:~] 🙂 $ cat /etc/spark/conf/log4j.properties
log4j.rootLogger=${root.logger}
root.logger=ERROR,console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
shell.log.level=ERROR
log4j.logger.org.spark-project.jetty=WARN
log4j.logger.org.spark-project.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=ERROR
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=ERROR
log4j.logger.org.apache.parquet=ERROR
log4j.logger.org.apache.hadoop.hive.metastore.RetryingHMSHandler=FATAL
log4j.logger.org.apache.hadoop.hive.ql.exec.FunctionRegistry=ERROR
log4j.logger.org.apache.spark.repl.Main=${shell.log.level}
log4j.logger.org.apache.spark.api.python.PythonGatewayServer=${shell.log.level}
tom@mds.xyz@cm-r01en01:~] 🙂 $
tom@mds.xyz@cm-r01en01:~] 🙂 $
tom@mds.xyz@cm-r01en01:~] 🙂 $ digg /etc/spark/conf/log4j.properties /etc/spark/conf/log4j.properties-original
-sh: digg: command not found
tom@mds.xyz@cm-r01en01:~] 😞 $ diff /etc/spark/conf/log4j.properties /etc/spark/conf/log4j.properties-original
2c2
< root.logger=ERROR,console
---
> root.logger=DEBUG,console
10,11c10,11
< log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=ERROR
< log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=ERROR
---
> log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
> log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
tom@mds.xyz@cm-r01en01:~] 😞 $ Now I get my spark-shell without the INFO, DEBUG or WARNING messages all over it. Still interested in a final solution if possible. I only see it fixed in Spark 3.0 . Cheers, TK
... View more
09-18-2019
04:49 AM
Including the log file of the shell session as a link: https://tinyurl.com/y2kfmke8 Cheers, TK
... View more
09-16-2019
10:20 PM
Moving back to the original question: "INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all" My configration is as follows: [root@cm-r01en01 conf]# cat log4j.properties
log4j.rootLogger=${root.logger}
root.logger=INFO,console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
shell.log.level=INFO
log4j.logger.org.spark-project.jetty=WARN
log4j.logger.org.spark-project.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
log4j.logger.org.apache.parquet=ERROR
log4j.logger.org.apache.hadoop.hive.metastore.RetryingHMSHandler=FATAL
log4j.logger.org.apache.hadoop.hive.ql.exec.FunctionRegistry=ERROR
log4j.logger.org.apache.spark.repl.Main=${shell.log.level}
log4j.logger.org.apache.spark.api.python.PythonGatewayServer=${shell.log.level}
[root@cm-r01en01 conf]#
[root@cm-r01en01 conf]#
[root@cm-r01en01 conf]# cat spark-defaults.conf
spark.authenticate=false
spark.driver.log.dfsDir=/user/spark/driverLogs
spark.driver.log.persistToDfs.enabled=true
spark.dynamicAllocation.enabled=true
spark.dynamicAllocation.executorIdleTimeout=60
spark.dynamicAllocation.minExecutors=0
spark.dynamicAllocation.schedulerBacklogTimeout=1
spark.eventLog.enabled=true
spark.io.encryption.enabled=false
spark.network.crypto.enabled=false
spark.serializer=org.apache.spark.serializer.KryoSerializer
spark.shuffle.service.enabled=true
spark.shuffle.service.port=7337
spark.ui.enabled=true
spark.ui.killEnabled=true
spark.lineage.log.dir=/var/log/spark/lineage
spark.lineage.enabled=true
spark.master=yarn
spark.submit.deployMode=client
spark.eventLog.dir=hdfs://cm-r01nn02.mws.mds.xyz:8020/user/spark/applicationHistory
spark.yarn.historyServer.address=<a href="http://cm-r01en01.mws.mds.xyz:18088" target="_blank">http://cm-r01en01.mws.mds.xyz:18088</a>
spark.yarn.jars=local:/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/spark/jars/*,local:/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/spark/hive/*
spark.driver.extraLibraryPath=/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hadoop/lib/native
spark.executor.extraLibraryPath=/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hadoop/lib/native
spark.yarn.am.extraLibraryPath=/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hadoop/lib/native
spark.yarn.config.gatewayPath=/opt/cloudera/parcels
spark.yarn.config.replacementPath={{HADOOP_COMMON_HOME}}/../../..
spark.yarn.historyServer.allowTracking=true
spark.yarn.appMasterEnv.MKL_NUM_THREADS=1
spark.executorEnv.MKL_NUM_THREADS=1
spark.yarn.appMasterEnv.OPENBLAS_NUM_THREADS=1
spark.executorEnv.OPENBLAS_NUM_THREADS=1
spark.extraListeners=com.cloudera.spark.lineage.NavigatorAppListener
spark.sql.queryExecutionListeners=com.cloudera.spark.lineage.NavigatorQueryListener
[root@cm-r01en01 conf]# I can get into the shell however the console is overwhelmed with the above error messages preventing me from doing anything useful with it. Aiming to run a few spark commands to get started learning it. Cheers, TK
... View more
09-15-2019
04:52 PM
I restarted things before capturing the second reply above to free space on the cluster. PID 13778 would be the same thing as the original PID 23190 I listed. The process always takes up 1.8+ GB. After a short while it climbs to 2.8GB.
... View more
09-12-2019
11:10 PM
Apologies. You're right. mysql connector running through the Cloudera user: [root@cm-r01nn01 ~]# top
top - 02:06:24 up 19 days, 4:04, 1 user, load average: 0.26, 0.28, 0.29
Tasks: 197 total, 1 running, 196 sleeping, 0 stopped, 0 zombie
%Cpu(s): 1.0 us, 0.5 sy, 0.0 ni, 98.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 8008640 total, 286564 free, 5060212 used, 2661864 buff/cache
KiB Swap: 4063228 total, 4063228 free, 0 used. 2205572 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3919 root 20 0 1260148 47188 8428 S 3.8 0.6 2:13.33 python2
13778 clouder+ 20 0 8263556 2.8g 17472 S 3.8 37.1 2208:20 java
3877 root 20 0 2680196 63480 9772 S 1.2 0.8 1:46.97 cmagent
4160 root 20 0 619212 28772 9028 S 1.2 0.4 0:32.63 python2
6281 httpfs 20 0 3860244 213008 29404 S 1.2 2.7 0:29.09 java
7367 hive 20 0 2118308 312892 41972 S 1.2 3.9 1:09.55 java
14318 root 20 0 172308 2404 1628 R 1.2 0.0 0:00.49 top
1 root 20 0 54952 7272 4268 S 0.0 0.1 6:44.58 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:01.41 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 1:27.50 ksoftirqd/0
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0H
7 root rt 0 0 0 0 S 0.0 0.0 0:04.12 migration/0
8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_bh
9 root 20 0 0 0 0 S 0.0 0.0 29:36.83 rcu_sched
10 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 lru-add-drain
11 root rt 0 0 0 0 S 0.0 0.0 0:12.34 watchdog/0
12 root rt 0 0 0 0 S 0.0 0.0 0:08.09 watchdog/1
13 root rt 0 0 0 0 S 0.0 0.0 0:02.97 migration/1
14 root 20 0 0 0 0 S 0.0 0.0 1:21.07 ksoftirqd/1
16 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/1:0H
17 root rt 0 0 0 0 S 0.0 0.0 0:07.84 watchdog/2
18 root rt 0 0 0 0 S 0.0 0.0 0:00.64 migration/2
19 root 20 0 0 0 0 S 0.0 0.0 1:20.44 ksoftirqd/2
21 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/2:0H
22 root rt 0 0 0 0 S 0.0 0.0 0:08.05 watchdog/3
23 root rt 0 0 0 0 S 0.0 0.0 0:00.39 migration/3
24 root 20 0 0 0 0 S 0.0 0.0 1:20.37 ksoftirqd/3
26 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/3:0H
27 root rt 0 0 0 0 S 0.0 0.0 0:08.38 watchdog/4
28 root rt 0 0 0 0 S 0.0 0.0 0:00.51 migration/4
[root@cm-r01nn01 ~]# ps -ef|grep -Ei 13778
clouder+ 13778 1 8 Aug25 ? 1-12:48:21 /usr/java/jdk1.8.0_181-cloudera/bin/java -cp .:/usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar:/usr/share/java/postgresql-connector-java.jar:lib/* -server -Dlog4j.configuration=file:/etc/cloudera-scm-server/log4j.properties -Dfile.encoding=UTF-8 -Dcmf.root.logger=INFO,LOGFILE -Dcmf.log.dir=/var/log/cloudera-scm-server -Dcmf.log.file=cloudera-scm-server.log -Dcmf.jetty.threshhold=WARN -Dcmf.schema.dir=/opt/cloudera/cm/schema -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Dpython.home=/opt/cloudera/cm/python -XX:+HeapDumpOnOutOfMemoryError -Xmx2G -XX:MaxPermSize=256m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:OnOutOfMemoryError=kill -9 %p com.cloudera.server.cmf.Main
root 14404 4844 0 02:06 pts/0 00:00:00 grep --color=auto -Ei 13778
[root@cm-r01nn01 ~]# [root@cm-r01nn01 ~]# top
top - 02:06:24 up 19 days, 4:04, 1 user, load average: 0.26, 0.28, 0.29
Tasks: 197 total, 1 running, 196 sleeping, 0 stopped, 0 zombie
%Cpu(s): 1.0 us, 0.5 sy, 0.0 ni, 98.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 8008640 total, 286564 free, 5060212 used, 2661864 buff/cache
KiB Swap: 4063228 total, 4063228 free, 0 used. 2205572 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3919 root 20 0 1260148 47188 8428 S 3.8 0.6 2:13.33 python2
13778 clouder+ 20 0 8263556 2.8g 17472 S 3.8 37.1 2208:20 java
3877 root 20 0 2680196 63480 9772 S 1.2 0.8 1:46.97 cmagent
4160 root 20 0 619212 28772 9028 S 1.2 0.4 0:32.63 python2
6281 httpfs 20 0 3860244 213008 29404 S 1.2 2.7 0:29.09 java
7367 hive 20 0 2118308 312892 41972 S 1.2 3.9 1:09.55 java
14318 root 20 0 172308 2404 1628 R 1.2 0.0 0:00.49 top
1 root 20 0 54952 7272 4268 S 0.0 0.1 6:44.58 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:01.41 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 1:27.50 ksoftirqd/0
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0H
7 root rt 0 0 0 0 S 0.0 0.0 0:04.12 migration/0
8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_bh
9 root 20 0 0 0 0 S 0.0 0.0 29:36.83 rcu_sched
10 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 lru-add-drain
11 root rt 0 0 0 0 S 0.0 0.0 0:12.34 watchdog/0
12 root rt 0 0 0 0 S 0.0 0.0 0:08.09 watchdog/1
13 root rt 0 0 0 0 S 0.0 0.0 0:02.97 migration/1
14 root 20 0 0 0 0 S 0.0 0.0 1:21.07 ksoftirqd/1
16 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/1:0H
17 root rt 0 0 0 0 S 0.0 0.0 0:07.84 watchdog/2
18 root rt 0 0 0 0 S 0.0 0.0 0:00.64 migration/2
19 root 20 0 0 0 0 S 0.0 0.0 1:20.44 ksoftirqd/2
21 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/2:0H
22 root rt 0 0 0 0 S 0.0 0.0 0:08.05 watchdog/3
23 root rt 0 0 0 0 S 0.0 0.0 0:00.39 migration/3
24 root 20 0 0 0 0 S 0.0 0.0 1:20.37 ksoftirqd/3
26 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/3:0H
27 root rt 0 0 0 0 S 0.0 0.0 0:08.38 watchdog/4
28 root rt 0 0 0 0 S 0.0 0.0 0:00.51 migration/4
[root@cm-r01nn01 ~]# ps -ef|grep -Ei 13778
clouder+ 13778 1 8 Aug25 ? 1-12:48:21 /usr/java/jdk1.8.0_181-cloudera/bin/java -cp .:/usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar:/usr/share/java/postgresql-connector-java.jar:lib/* -server -Dlog4j.configuration=file:/etc/cloudera-scm-server/log4j.properties -Dfile.encoding=UTF-8 -Dcmf.root.logger=INFO,LOGFILE -Dcmf.log.dir=/var/log/cloudera-scm-server -Dcmf.log.file=cloudera-scm-server.log -Dcmf.jetty.threshhold=WARN -Dcmf.schema.dir=/opt/cloudera/cm/schema -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Dpython.home=/opt/cloudera/cm/python -XX:+HeapDumpOnOutOfMemoryError -Xmx2G -XX:MaxPermSize=256m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:OnOutOfMemoryError=kill -9 %p com.cloudera.server.cmf.Main
root 14404 4844 0 02:06 pts/0 00:00:00 grep --color=auto -Ei 13778
[root@cm-r01nn01 ~]# systemctl status cloudera-scm-agent
â cloudera-scm-agent.service - Cloudera Manager Agent Service
Loaded: loaded (/usr/lib/systemd/system/cloudera-scm-agent.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2019-09-13 00:55:44 EDT; 1h 11min ago
Main PID: 3877 (cmagent)
CGroup: /system.slice/cloudera-scm-agent.service
ââ3877 /usr/bin/python2 /opt/cloudera/cm-agent/bin/cm agent
Sep 13 00:55:55 cm-r01nn01.mws.mds.xyz cm[3877]: warnings.warn(msg, RuntimeWarning)
Sep 13 01:10:58 cm-r01nn01.mws.mds.xyz cm[3877]: 2591-kms-KMS: added process group
Sep 13 01:10:58 cm-r01nn01.mws.mds.xyz cm[3877]: 2592-zookeeper-server: added process group
Sep 13 01:12:01 cm-r01nn01.mws.mds.xyz cm[3877]: 2598-hdfs-HTTPFS: added process group
Sep 13 01:12:01 cm-r01nn01.mws.mds.xyz cm[3877]: 2600-hdfs-SECONDARYNAMENODE: added process group
Sep 13 01:17:05 cm-r01nn01.mws.mds.xyz cm[3877]: [2019-09-13 01:17:05,746 pyinotify ERROR] add_watch: cannot watch /var/log/hive/audit WD=-1, Er...(ENOENT)
Sep 13 01:18:01 cm-r01nn01.mws.mds.xyz cm[3877]: 2649-hive-HIVESERVER2: added process group
Sep 13 01:18:01 cm-r01nn01.mws.mds.xyz cm[3877]: 2650-hive-WEBHCAT: added process group
Sep 13 01:54:01 cm-r01nn01.mws.mds.xyz cm[3877]: 2649-hive-HIVESERVER2: stopped
Sep 13 01:54:01 cm-r01nn01.mws.mds.xyz cm[3877]: 2649-hive-HIVESERVER2: removed process group
Hint: Some lines were ellipsized, use -l to show in full.
[root@cm-r01nn01 ~]# systemctl status cloudera-scm-agent -l
â cloudera-scm-agent.service - Cloudera Manager Agent Service
Loaded: loaded (/usr/lib/systemd/system/cloudera-scm-agent.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2019-09-13 00:55:44 EDT; 1h 12min ago
Main PID: 3877 (cmagent)
CGroup: /system.slice/cloudera-scm-agent.service
ââ3877 /usr/bin/python2 /opt/cloudera/cm-agent/bin/cm agent
Sep 13 00:55:55 cm-r01nn01.mws.mds.xyz cm[3877]: warnings.warn(msg, RuntimeWarning)
Sep 13 01:10:58 cm-r01nn01.mws.mds.xyz cm[3877]: 2591-kms-KMS: added process group
Sep 13 01:10:58 cm-r01nn01.mws.mds.xyz cm[3877]: 2592-zookeeper-server: added process group
Sep 13 01:12:01 cm-r01nn01.mws.mds.xyz cm[3877]: 2598-hdfs-HTTPFS: added process group
Sep 13 01:12:01 cm-r01nn01.mws.mds.xyz cm[3877]: 2600-hdfs-SECONDARYNAMENODE: added process group
Sep 13 01:17:05 cm-r01nn01.mws.mds.xyz cm[3877]: [2019-09-13 01:17:05,746 pyinotify ERROR] add_watch: cannot watch /var/log/hive/audit WD=-1, Errno=No such file or directory (ENOENT)
Sep 13 01:18:01 cm-r01nn01.mws.mds.xyz cm[3877]: 2649-hive-HIVESERVER2: added process group
Sep 13 01:18:01 cm-r01nn01.mws.mds.xyz cm[3877]: 2650-hive-WEBHCAT: added process group
Sep 13 01:54:01 cm-r01nn01.mws.mds.xyz cm[3877]: 2649-hive-HIVESERVER2: stopped
Sep 13 01:54:01 cm-r01nn01.mws.mds.xyz cm[3877]: 2649-hive-HIVESERVER2: removed process group
[root@cm-r01nn01 ~]#
... View more
09-12-2019
09:53 PM
Is there anywhere to reduce the cloudera-scm-agent memory usage? Currently it's consuming 1.9G:
top - 00:49:46 up 10 days, 11:06, 2 users, load average: 0.62, 0.43, 0.50
Tasks: 195 total, 1 running, 194 sleeping, 0 stopped, 0 zombie
%Cpu(s): 2.5 us, 0.2 sy, 0.0 ni, 97.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 8008644 total, 3828140 free, 3228256 used, 952248 buff/cache
KiB Swap: 4063228 total, 3974644 free, 88584 used. 4180168 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
24743 clouder+ 20 0 7107092 278440 5388 S 16.6 3.5 1329:59 java
13611 root 20 0 1260404 41488 2252 S 2.7 0.5 1:27.75 python2
23190 clouder+ 20 0 8526144 1.9g 47492 S 1.3 25.5 2470:33 java
Even with a stopped cluster.
Anyway to tweak this on smaller POC clusters?
Would anyone be able to ballpark memory settings for small Cloudera clusters and services?
Need to restart the agent and clear caches to free up memory. Wondering if that could be reduced instead.
Cheers, TK
... View more
Labels:
- Labels:
-
Apache Hadoop
-
Cloudera Manager
09-03-2019
09:50 PM
I've adjusted the auth_to_local rules as follows: RULE:[2:$1@$0](HTTP@\QMWS.MDS.XYZ\E$)s/@\QMWS.MDS.XYZ\E$//
RULE:[1:$1@$0](.*@\QMWS.MDS.XYZ\E$)s/@\QMWS.MDS.XYZ\E$///L
RULE:[2:$1@$0](.*@\QMWS.MDS.XYZ\E$)s/@\QMWS.MDS.XYZ\E$///L
RULE:[2:$1@$0](HTTP@\Qmws.mds.xyz\E$)s/@\Qmws.mds.xyz\E$//
RULE:[1:$1@$0](.*@\Qmws.mds.xyz\E$)s/@\Qmws.mds.xyz\E$///L
RULE:[2:$1@$0](.*@\Qmws.mds.xyz\E$)s/@\Qmws.mds.xyz\E$///L
RULE:[2:$1@$0](HTTP@\QMDS.XYZ\E$)s/@\QMDS.XYZ\E$//
RULE:[1:$1@$0](.*@\QMDS.XYZ\E$)s/@\QMDS.XYZ\E$///L
RULE:[2:$1@$0](.*@\QMDS.XYZ\E$)s/@\QMDS.XYZ\E$///L
RULE:[2:$1@$0](HTTP@\Qmds.xyz\E$)s/@\Qmds.xyz\E$//
RULE:[1:$1@$0](.*@\Qmds.xyz\E$)s/@\Qmds.xyz\E$///L
RULE:[2:$1@$0](.*@\Qmds.xyz\E$)s/@\Qmds.xyz\E$///L
DEFAULT And now when I create the folder in this manner, spark-shell starts using the below folder: drwxr-xr-x - tom tom 0 2019-09-04 00:45 /user/tom Since I have multiple users from multiple domains, collisions can occur if same user exists in two different domains. So I would be curious if you know how to create the folders in this manner: /user/<DOMAIN>/<USER> to avoid potential conflict with same named user but in different domains. Cheers, TK
... View more
09-03-2019
08:48 PM
Odd then. This format worked: drwxr-xr-x - tom@MDS.XYZ tom@MDS.XYZ 0 2019-09-03 21:54 /user/tom@MDS.XYZ Perhaps I should be expecting something in the settings?
... View more
09-03-2019
08:19 PM
What I'm getting now is the below and wondering what the solution might be, having tried the ones on the community so far without success: INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all This message floods the spark-shell console rendering it unusable. What I did so far to try and check it: 1) Reverse lookups work. 2) Forward lookups work. 3) UID of CM Agent are unique. 4) RHEL 7 UID's are unique. Looks like it might be related to this bug so I may just have to wait it out or grab a copy of the latest Spark somehow to fix it? https://issues.apache.org/jira/browse/SPARK-28005 Cheers, TK
... View more
09-03-2019
07:02 PM
I had the directory /user/tom and tried the following permissions: tom:tom
tom@mds.xyz:tom@mds.xyz
tom@MDS.XYZ:tom@MDS.XYZ No luck. Till I saw this message: 19/09/03 21:51:53 WARN fs.TrashPolicyDefault: Can't create trash directory: hdfs://cm-r01nn02.mws.mds.xyz:8020/user/tom@MDS.XYZ/.Trash/Current/user/mds.xyz/tom
org.apache.hadoop.security.AccessControlException: Permission denied: user=tom@MDS.XYZ, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x Telling me the right folder was supposed to be /user/tom@MDS.XYZ. So that's what I set and spark-shell now works. It really has to do with this issue: 19/09/03 21:51:33 INFO util.KerberosName: No auth_to_local rules applied to tom@MDS.XYZ And I really need to define auth_to_local to create the folders in this manner: /user/domain/user But I'm not sure how just yet. Cheers, TK
... View more
09-02-2019
07:37 PM
Just to clarify, did you mean you "can" or "can't" stop client side gateway roles?
... View more
09-02-2019
07:35 PM
I've upped the memory to get over the issue stated above. Now I get: Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=tom@MDS.XYZ, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x How do I configured Spark to write to individual user folders such as /user/tom? Cheers, TK
... View more
09-02-2019
07:18 AM
Hey Eric, Thanks! Yes, I've added the Hive and Spark gateway role to the host. The Spark gateways are distributed to the same nodes. However, since your comment, I notice both Hive and Spark gateways are offline. Can't start them as of this writing. Getting: Command Start is not currently available for execution. whenever I try to start the role. So definitely an issue there. Kerberos credentials appear ok. I can regenerate them without an issue. Running kinit using the hdfs.keytab works as expected. On a closer look, I do get this error which I'll try to fix up after this comment. 19/09/02 09:56:42 ERROR repl.Main: Failed to initialize Spark session.
java.lang.IllegalArgumentException: Required executor memory (1024), overhead (384 MB), and PySpark memory (0 MB) is above the max threshold (256 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'. This is a fairly small cluster for POC type of work so I would rather tweak Spark memory requirements rather than increase max memory per container. Not able to figure that out yet. Cheers, TK
... View more
08-31-2019
08:38 AM
Hey All,
I'm trying to run spark-shell for the first time on a CM / CDH 6.3 installation. But getting the below instead.
19/08/31 11:05:24 DEBUG ipc.Client: The ping interval is 60000 ms.
19/08/31 11:05:24 DEBUG ipc.Client: Connecting to cm-r01nn02.mws.mds.xyz/192.168.0.133:8032
19/08/31 11:05:24 DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:795)
19/08/31 11:05:24 DEBUG security.SaslRpcClient: Sending sasl message state: NEGOTIATE
19/08/31 11:05:24 DEBUG security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB info:org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo$2@238c63df
19/08/31 11:05:24 DEBUG client.RMDelegationTokenSelector: Looking for a token with service 192.168.0.133:8032
19/08/31 11:05:24 DEBUG security.SaslRpcClient: tokens aren't supported for this protocol or user doesn't have one
19/08/31 11:05:24 DEBUG security.SaslRpcClient: client isn't using kerberos
19/08/31 11:05:24 DEBUG security.UserGroupInformation: PrivilegedActionException as:root (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/31 11:05:24 DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:719)
19/08/31 11:05:24 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/31 11:05:24 DEBUG security.UserGroupInformation: PrivilegedActionException as:root (auth:SIMPLE) cause:java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/31 11:05:24 DEBUG ipc.Client: closing ipc connection to cm-r01nn02.mws.mds.xyz/192.168.0.133:8032: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:756)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:719)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:812)
at org.apache.hadoop.ipc.Client$Connection.access$3600(Client.java:410)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1560)
at org.apache.hadoop.ipc.Client.call(Client.java:1391)
at org.apache.hadoop.ipc.Client.call(Client.java:1355)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy16.getClusterMetrics(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getClusterMetrics(ApplicationClientProtocolPBClientImpl.java:251)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy17.getClusterMetrics(Unknown Source)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getYarnClusterMetrics(YarnClientImpl.java:604)
at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:169)
at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:169)
at org.apache.spark.internal.Logging$class.logInfo(Logging.scala:57)
at org.apache.spark.deploy.yarn.Client.logInfo(Client.scala:62)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:168)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:60)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:186)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:511)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2549)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:944)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:935)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:935)
at org.apache.spark.repl.Main$.createSparkSession(Main.scala:106)
at $line3.$read$$iw$$iw.<init>(<console>:15)
at $line3.$read$$iw.<init>(<console>:43)
at $line3.$read.<init>(<console>:45)
at $line3.$read$.<init>(<console>:49)
at $line3.$read$.<clinit>(<console>)
at $line3.$eval$.$print$lzycompute(<console>:7)
at $line3.$eval$.$print(<console>:6)
at $line3.$eval.$print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:793)
at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1054)
at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:645)
at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:644)
at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:644)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:576)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:572)
at scala.tools.nsc.interpreter.IMain$$anonfun$quietRun$1.apply(IMain.scala:231)
at scala.tools.nsc.interpreter.IMain$$anonfun$quietRun$1.apply(IMain.scala:231)
at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:221)
at scala.tools.nsc.interpreter.IMain.quietRun(IMain.scala:231)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:109)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:109)
at scala.collection.immutable.List.foreach(List.scala:392)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcV$sp(SparkILoop.scala:109)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:109)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:109)
at scala.tools.nsc.interpreter.ILoop.savingReplayStack(ILoop.scala:91)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:108)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1$1.apply$mcV$sp(SparkILoop.scala:211)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1$1.apply(SparkILoop.scala:199)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1$1.apply(SparkILoop.scala:199)
at scala.tools.nsc.interpreter.ILoop$$anonfun$mumly$1.apply(ILoop.scala:189)
at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:221)
at scala.tools.nsc.interpreter.ILoop.mumly(ILoop.scala:186)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1.org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1(SparkILoop.scala:199)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$startup$1$1.apply(SparkILoop.scala:267)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$startup$1$1.apply(SparkILoop.scala:247)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1.withSuppressedSettings$1(SparkILoop.scala:235)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1.startup$1(SparkILoop.scala:247)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:282)
at org.apache.spark.repl.SparkILoop.runClosure(SparkILoop.scala:159)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:182)
at org.apache.spark.repl.Main$.doMain(Main.scala:78)
at org.apache.spark.repl.Main$.main(Main.scala:58)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:851)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:926)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:935)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:173)
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:390)
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:614)
at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:410)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:799)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:795)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:795)
... 95 more
19/08/31 11:05:24 DEBUG ipc.Client: IPC Client (483582792) connection to cm-r01nn02.mws.mds.xyz/192.168.0.133:8032 from root: closed
19/08/31 11:05:24 INFO retry.RetryInvocationHandler: java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "cm-r01en01.mws.mds.xyz/192.168.0.140"; destination host is: "cm-r01nn02.mws.mds.xyz":8032; , while invoking ApplicationClientProtocolPBClientImpl.getClusterMetrics over null after 6 failover attempts. Trying to failover after sleeping for 19516ms.
19/08/31 11:05:24 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:25 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:26 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:27 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:28 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:29 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:30 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:31 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:32 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:33 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:34 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:35 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:36 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:37 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:38 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:39 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:40 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:41 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:42 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:43 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:43 DEBUG ipc.Client: The ping interval is 60000 ms.
19/08/31 11:05:43 DEBUG ipc.Client: Connecting to cm-r01nn02.mws.mds.xyz/192.168.0.133:8032
19/08/31 11:05:43 DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:795)
19/08/31 11:05:43 DEBUG security.SaslRpcClient: Sending sasl message state: NEGOTIATE
19/08/31 11:05:43 DEBUG security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB info:org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo$2@321558f8
19/08/31 11:05:43 DEBUG client.RMDelegationTokenSelector: Looking for a token with service 192.168.0.133:8032
19/08/31 11:05:43 DEBUG security.SaslRpcClient: tokens aren't supported for this protocol or user doesn't have one
19/08/31 11:05:43 DEBUG security.SaslRpcClient: client isn't using kerberos
19/08/31 11:05:43 DEBUG security.UserGroupInformation: PrivilegedActionException as:root (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/31 11:05:43 DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:719)
19/08/31 11:05:43 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/31 11:05:43 DEBUG security.UserGroupInformation: PrivilegedActionException as:root (auth:SIMPLE) cause:java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/31 11:05:43 DEBUG ipc.Client: closing ipc connection to cm-r01nn02.mws.mds.xyz/192.168.0.133:8032: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:756)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:719)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:812)
at org.apache.hadoop.ipc.Client$Connection.access$3600(Client.java:410)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1560)
at org.apache.hadoop.ipc.Client.call(Client.java:1391)
at org.apache.hadoop.ipc.Client.call(Client.java:1355)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy16.getClusterMetrics(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getClusterMetrics(ApplicationClientProtocolPBClientImpl.java:251)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy17.getClusterMetrics(Unknown Source)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getYarnClusterMetrics(YarnClientImpl.java:604)
at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:169)
at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:169)
at org.apache.spark.internal.Logging$class.logInfo(Logging.scala:57)
at org.apache.spark.deploy.yarn.Client.logInfo(Client.scala:62)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:168)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:60)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:186)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:511)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2549)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:944)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:935)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:935)
at org.apache.spark.repl.Main$.createSparkSession(Main.scala:106)
at $line3.$read$$iw$$iw.<init>(<console>:15)
at $line3.$read$$iw.<init>(<console>:43)
at $line3.$read.<init>(<console>:45)
at $line3.$read$.<init>(<console>:49)
at $line3.$read$.<clinit>(<console>)
at $line3.$eval$.$print$lzycompute(<console>:7)
at $line3.$eval$.$print(<console>:6)
at $line3.$eval.$print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:793)
at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1054)
at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:645)
at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:644)
at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:644)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:576)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:572)
at scala.tools.nsc.interpreter.IMain$$anonfun$quietRun$1.apply(IMain.scala:231)
at scala.tools.nsc.interpreter.IMain$$anonfun$quietRun$1.apply(IMain.scala:231)
at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:221)
at scala.tools.nsc.interpreter.IMain.quietRun(IMain.scala:231)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:109)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:109)
at scala.collection.immutable.List.foreach(List.scala:392)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcV$sp(SparkILoop.scala:109)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:109)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:109)
at scala.tools.nsc.interpreter.ILoop.savingReplayStack(ILoop.scala:91)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:108)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1$1.apply$mcV$sp(SparkILoop.scala:211)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1$1.apply(SparkILoop.scala:199)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1$1.apply(SparkILoop.scala:199)
at scala.tools.nsc.interpreter.ILoop$$anonfun$mumly$1.apply(ILoop.scala:189)
at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:221)
at scala.tools.nsc.interpreter.ILoop.mumly(ILoop.scala:186)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1.org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1(SparkILoop.scala:199)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$startup$1$1.apply(SparkILoop.scala:267)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$startup$1$1.apply(SparkILoop.scala:247)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1.withSuppressedSettings$1(SparkILoop.scala:235)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1.startup$1(SparkILoop.scala:247)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:282)
at org.apache.spark.repl.SparkILoop.runClosure(SparkILoop.scala:159)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:182)
at org.apache.spark.repl.Main$.doMain(Main.scala:78)
at org.apache.spark.repl.Main$.main(Main.scala:58)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:851)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:926)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:935)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:173)
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:390)
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:614)
at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:410)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:799)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:795)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:795)
... 95 more
19/08/31 11:05:43 DEBUG ipc.Client: IPC Client (483582792) connection to cm-r01nn02.mws.mds.xyz/192.168.0.133:8032 from root: closed
19/08/31 11:05:43 INFO retry.RetryInvocationHandler: java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "cm-r01en01.mws.mds.xyz/192.168.0.140"; destination host is: "cm-r01nn02.mws.mds.xyz":8032; , while invoking ApplicationClientProtocolPBClientImpl.getClusterMetrics over null after 7 failover attempts. Trying to failover after sleeping for 33704ms.
19/08/31 11:05:44 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:45 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:46 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:47 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:48 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:49 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:50 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:51 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:52 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:53 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:54 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:55 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:56 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:57 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:58 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:05:59 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:06:00 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:06:01 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:06:02 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:06:03 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:06:04 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:06:05 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:06:06 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:06:07 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:06:08 INFO storage.DiskBlockManager: Shutdown hook called
19/08/31 11:06:08 INFO util.ShutdownHookManager: Shutdown hook called
19/08/31 11:06:08 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-9a9338cb-f16b-48e0-b0cd-7ddfcc148a13/repl-52ba4c53-3478-4ead-93e7-d20ecbd2e866
19/08/31 11:06:08 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-9a9338cb-f16b-48e0-b0cd-7ddfcc148a13/userFiles-5f218430-30bb-4a7e-87df-7ee235183578
19/08/31 11:06:08 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-9a9338cb-f16b-48e0-b0cd-7ddfcc148a13
19/08/31 11:06:08 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-79f3eecb-69d4-4b21-85dc-6746fc33f65c
19/08/31 11:06:08 DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@19656e21
19/08/31 11:06:08 DEBUG util.ShutdownHookManager: Completed shutdown in 0.062 seconds; Timeouts: 0
19/08/31 11:06:08 DEBUG util.ShutdownHookManager: ShutdownHookManger completed shutdown.
[root@cm-r01en01 process]# dig -x 192.168.0.140
; <<>> DiG 9.9.4-RedHat-9.9.4-73.el7_6 <<>> -x 192.168.0.140
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 39821
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 3
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;140.0.168.192.in-addr.arpa. IN PTR
;; ANSWER SECTION:
140.0.168.192.in-addr.arpa. 1200 IN PTR cm-r01en01.mws.mds.xyz.
;; AUTHORITY SECTION:
0.168.192.in-addr.arpa. 86400 IN NS idmipa03.mws.mds.xyz.
0.168.192.in-addr.arpa. 86400 IN NS idmipa04.mws.mds.xyz.
;; ADDITIONAL SECTION:
idmipa03.mws.mds.xyz. 1200 IN A 192.168.0.154
idmipa04.mws.mds.xyz. 1200 IN A 192.168.0.155
;; Query time: 1 msec
;; SERVER: 192.168.0.154#53(192.168.0.154)
;; WHEN: Sat Aug 31 11:06:18 EDT 2019
;; MSG SIZE rcvd: 169
[root@cm-r01en01 process]# dig -x 192.168.0.133
; <<>> DiG 9.9.4-RedHat-9.9.4-73.el7_6 <<>> -x 192.168.0.133
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 11817
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 3
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;133.0.168.192.in-addr.arpa. IN PTR
;; ANSWER SECTION:
133.0.168.192.in-addr.arpa. 1200 IN PTR cm-r01nn02.mws.mds.xyz.
;; AUTHORITY SECTION:
0.168.192.in-addr.arpa. 86400 IN NS idmipa04.mws.mds.xyz.
0.168.192.in-addr.arpa. 86400 IN NS idmipa03.mws.mds.xyz.
;; ADDITIONAL SECTION:
idmipa03.mws.mds.xyz. 1200 IN A 192.168.0.154
idmipa04.mws.mds.xyz. 1200 IN A 192.168.0.155
;; Query time: 1 msec
;; SERVER: 192.168.0.154#53(192.168.0.154)
;; WHEN: Sat Aug 31 11:26:10 EDT 2019
;; MSG SIZE rcvd: 169
[root@cm-r01en01 process]#
I try the same as a non previlidged AD / FreeIPA user but with same results:
19/08/31 11:33:07 DEBUG ipc.Client: The ping interval is 60000 ms.
19/08/31 11:33:07 DEBUG ipc.Client: Connecting to cm-r01nn02.mws.mds.xyz/192.168.0.133:8032
19/08/31 11:33:07 DEBUG security.UserGroupInformation: PrivilegedAction as:tom@mds.xyz (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:795)
19/08/31 11:33:07 DEBUG security.SaslRpcClient: Sending sasl message state: NEGOTIATE
19/08/31 11:33:07 DEBUG security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB info:org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo$2@6d4df1d2
19/08/31 11:33:07 DEBUG client.RMDelegationTokenSelector: Looking for a token with service 192.168.0.133:8032
19/08/31 11:33:07 DEBUG security.SaslRpcClient: tokens aren't supported for this protocol or user doesn't have one
19/08/31 11:33:07 DEBUG security.SaslRpcClient: client isn't using kerberos
19/08/31 11:33:07 DEBUG security.UserGroupInformation: PrivilegedActionException as:tom@mds.xyz (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/31 11:33:07 DEBUG security.UserGroupInformation: PrivilegedAction as:tom@mds.xyz (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:719)
19/08/31 11:33:07 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/31 11:33:07 DEBUG security.UserGroupInformation: PrivilegedActionException as:tom@mds.xyz (auth:SIMPLE) cause:java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/31 11:33:07 DEBUG ipc.Client: closing ipc connection to cm-r01nn02.mws.mds.xyz/192.168.0.133:8032: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:756)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:719)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:812)
at org.apache.hadoop.ipc.Client$Connection.access$3600(Client.java:410)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1560)
at org.apache.hadoop.ipc.Client.call(Client.java:1391)
at org.apache.hadoop.ipc.Client.call(Client.java:1355)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy16.getClusterMetrics(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getClusterMetrics(ApplicationClientProtocolPBClientImpl.java:251)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy17.getClusterMetrics(Unknown Source)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getYarnClusterMetrics(YarnClientImpl.java:604)
at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:169)
at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:169)
at org.apache.spark.internal.Logging$class.logInfo(Logging.scala:57)
at org.apache.spark.deploy.yarn.Client.logInfo(Client.scala:62)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:168)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:60)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:186)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:511)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2549)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:944)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:935)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:935)
at org.apache.spark.repl.Main$.createSparkSession(Main.scala:106)
at $line3.$read$$iw$$iw.<init>(<console>:15)
at $line3.$read$$iw.<init>(<console>:43)
at $line3.$read.<init>(<console>:45)
at $line3.$read$.<init>(<console>:49)
at $line3.$read$.<clinit>(<console>)
at $line3.$eval$.$print$lzycompute(<console>:7)
at $line3.$eval$.$print(<console>:6)
at $line3.$eval.$print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:793)
at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1054)
at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:645)
at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:644)
at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:644)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:576)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:572)
at scala.tools.nsc.interpreter.IMain$$anonfun$quietRun$1.apply(IMain.scala:231)
at scala.tools.nsc.interpreter.IMain$$anonfun$quietRun$1.apply(IMain.scala:231)
at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:221)
at scala.tools.nsc.interpreter.IMain.quietRun(IMain.scala:231)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:109)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:109)
at scala.collection.immutable.List.foreach(List.scala:392)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcV$sp(SparkILoop.scala:109)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:109)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:109)
at scala.tools.nsc.interpreter.ILoop.savingReplayStack(ILoop.scala:91)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:108)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1$1.apply$mcV$sp(SparkILoop.scala:211)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1$1.apply(SparkILoop.scala:199)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1$1.apply(SparkILoop.scala:199)
at scala.tools.nsc.interpreter.ILoop$$anonfun$mumly$1.apply(ILoop.scala:189)
at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:221)
at scala.tools.nsc.interpreter.ILoop.mumly(ILoop.scala:186)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1.org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1(SparkILoop.scala:199)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$startup$1$1.apply(SparkILoop.scala:267)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$startup$1$1.apply(SparkILoop.scala:247)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1.withSuppressedSettings$1(SparkILoop.scala:235)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1.startup$1(SparkILoop.scala:247)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:282)
at org.apache.spark.repl.SparkILoop.runClosure(SparkILoop.scala:159)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:182)
at org.apache.spark.repl.Main$.doMain(Main.scala:78)
at org.apache.spark.repl.Main$.main(Main.scala:58)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:851)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:926)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:935)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:173)
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:390)
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:614)
at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:410)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:799)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:795)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:795)
... 95 more
19/08/31 11:33:07 DEBUG ipc.Client: IPC Client (1263257405) connection to cm-r01nn02.mws.mds.xyz/192.168.0.133:8032 from tom@mds.xyz: closed
19/08/31 11:33:07 INFO retry.RetryInvocationHandler: java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "cm-r01en01.mws.mds.xyz/192.168.0.140"; destination host is: "cm-r01nn02.mws.mds.xyz":8032; , while invoking ApplicationClientProtocolPBClientImpl.getClusterMetrics over null after 1 failover attempts. Trying to failover after sleeping for 17516ms.
19/08/31 11:33:07 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:33:08 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:33:09 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:33:10 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:33:11 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:33:12 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:33:13 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:33:14 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:33:15 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:33:16 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:33:17 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:33:18 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:33:19 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:33:20 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:33:21 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:33:22 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:33:23 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:33:24 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
19/08/31 11:33:25 DEBUG ipc.Client: The ping interval is 60000 ms.
19/08/31 11:33:25 DEBUG ipc.Client: Connecting to cm-r01nn02.mws.mds.xyz/192.168.0.133:8032
19/08/31 11:33:25 DEBUG security.UserGroupInformation: PrivilegedAction as:tom@mds.xyz (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:795)
19/08/31 11:33:25 DEBUG security.SaslRpcClient: Sending sasl message state: NEGOTIATE
Has anyone seen the same and could suggest what to do to move forward with this?
A few points: 1) Reverse and forward lookups work fine from the OS side. 2) Kerberos credentials generate without issue. Cheers, TK
... View more
Labels:
- Labels:
-
Apache Spark
-
Cloudera Manager
08-17-2019
07:07 PM
I'm receiving the following message from my CDH 6.2 installation. I do see some older posts explaining the issue however as they are old posts, I don't believe they are relevent to this version.
Would anyone know the specifics of what this check does or looks for on RHEL 7 hosts that's specific to Cloudera versions?
Mismatched CDH versions: host has NONE but role expects 6
Cheers, TK
... View more
Labels:
- Labels:
-
Cloudera Manager
08-14-2019
08:39 PM
Currently stuck on this error: Run a set of services for the first time
Completed only 8/9 steps. First failure: Failed to execute command Install Oozie ShareLib on service Oozie
Aug 14, 11:32:18 PM 4.9m
Execute 8 steps in sequence
Completed only 8/9 steps. First failure: Failed to execute command Install Oozie ShareLib on service Oozie
Aug 14, 11:32:23 PM 4.8m
Execute 9 steps in parallel
Completed only 8/9 steps. First failure: Failed to execute command Install Oozie ShareLib on service Oozie
Aug 14, 11:32:23 PM 4.8m
Installing the Oozie ShareLib in HDFS
Failed to execute command Install Oozie ShareLib on service Oozie
Oozie
Aug 14, 11:32:23 PM 4.8m
Install Oozie ShareLib
Failed to install Oozie ShareLib.
Oozie
Aug 14, 11:32:23 PM 4.8m
Upload Oozie ShareLib
Command aborted because of exception: Command timed-out after 270 seconds
Oozie Server (cm-r01en01)
Aug 14, 11:32:37 PM 4.6m
Upload Oozie ShareLib
Command aborted because of exception: Command timed-out after 270 seconds
Oozie Server (cm-r01en01)
Aug 14, 11:32:37 PM 4.6m
$> oozie/oozie.sh ["install-sharelib","oozie-sharelib-yarn","hdfs://cm-r01nn02.mws.mds.xyz:8020","775","/user/oozie","8"]
stdout
stderr
Role Log
Wed Aug 14 23:32:58 EDT 2019
JAVA_HOME=/usr/java/latest
using 6 as CDH_VERSION
CONF_DIR=/var/run/cloudera-scm-agent/process/166-oozie-OOZIE-SERVER-upload-sharelib
CMF_CONF_DIR=
Found Hadoop that supports Erasure Coding. Trying to disable Erasure Coding for path: /user/oozie/share/lib
Done
the destination path for sharelib is: /user/oozie/share/lib/lib_20190814233305
Running 1738 copy tasks on 8 threads
Full log file
$> oozie/oozie.sh ["install-sharelib","oozie-sharelib-yarn","hdfs://cm-r01nn02.mws.mds.xyz:8020","775","/user/oozie","8"]
stdout
stderr
Role Log
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/jars/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/jars/slf4j-simple-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See <a href="http://www.slf4j.org/codes.html#multiple_bindings" target="_blank">http://www.slf4j.org/codes.html#multiple_bindings</a> for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show Log4j2 internal initialization logging.
Full log file Same effect as the earlier error: nothing else installs after. Not sure if it's significant however originaly had RPM's installed not parcels. Reverted to parcels since the installer failed even detecting a PostgreSQL DB Cluster when RPM's were used (Had a separate post on that here). Cheers, TK
... View more
08-14-2019
06:59 PM
This affects the Oozie setup and configuration. The other services are not installed since this error breaks the installation chain. This is the list of the steps and where it get's stuck on: Completed 1 of 1 step(s).
Show All Steps Show Only Failed Steps Show Only Running Steps
Run a set of services for the first time
Completed only 8/9 steps. First failure: Failed to execute command Install Oozie ShareLib on service Oozie
Aug 14, 12:30:25 AM 5m
Execute 8 steps in sequence
Completed only 8/9 steps. First failure: Failed to execute command Install Oozie ShareLib on service Oozie
Aug 14, 12:30:30 AM 4.9m
Execute 9 steps in parallel
Completed only 8/9 steps. First failure: Failed to execute command Install Oozie ShareLib on service Oozie
Aug 14, 12:30:30 AM 4.9m
Installing the Oozie ShareLib in HDFS
Failed to execute command Install Oozie ShareLib on service Oozie
Oozie
Aug 14, 12:30:30 AM 4.9m
Install Oozie ShareLib
Failed to install Oozie ShareLib.
Oozie
Aug 14, 12:30:31 AM 4.9m
Upload Oozie ShareLib
Command aborted because of exception: Command timed-out after 270 seconds
Oozie Server (cm-r01en01)
Aug 14, 12:30:50 AM 4.6m
Upload Oozie ShareLib
Command aborted because of exception: Command timed-out after 270 seconds
Oozie Server (cm-r01en01)
Aug 14, 12:30:50 AM 4.6m
$>
oozie/oozie.sh ["install-sharelib","oozie-sharelib-yarn","hdfs://cm-r01nn02.mws.mds.xyz:8020","775","/user/oozie","8"]
stdout
stderr
Role Log
log4j:ERROR Could not instantiate class [org.cloudera.log4j.redactor.RedactorAppender].
java.lang.ClassNotFoundException: org.cloudera.log4j.redactor.RedactorAppender
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
at org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
at org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:395)
at org.apache.log4j.PropertyWatchdog.doOnChange(PropertyConfigurator.java:955)
at org.apache.log4j.helpers.FileWatchdog.checkAndConfigure(FileWatchdog.java:89)
at org.apache.log4j.helpers.FileWatchdog.<init>(FileWatchdog.java:58)
at org.apache.log4j.PropertyWatchdog.<init>(PropertyConfigurator.java:947)
at org.apache.log4j.PropertyConfigurator.configureAndWatch(PropertyConfigurator.java:473)
at org.apache.oozie.service.XLogService.init(XLogService.java:178)
at org.apache.oozie.service.Services.setServiceInternal(Services.java:386)
at org.apache.oozie.service.Services.<init>(Services.java:111)
at org.apache.oozie.tools.OozieSharelibCLI.run(OozieSharelibCLI.java:194)
at org.apache.oozie.tools.OozieSharelibCLI.main(OozieSharelibCLI.java:97)
log4j:ERROR Could not instantiate appender named "redactor1".
... View more
08-14-2019
04:56 AM
Hey All,
Getting the following on a brand new 6.2 installation and wondering what is the 'proper' fix for this.
I can start symlinking to get things working but perhaps I haven't deployed a package or taken another step I should have taken earlier?
+ /usr/java/latest/bin/java -Xms52428800 -Xmx52428800 -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80 -XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/oozie_oozie-OOZIE_SERVER-901d5713a53510380392378fa81b483d_pid18817.hprof -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh -Doozie.home.dir=/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie -Doozie.config.dir=/run/cloudera-scm-agent/process/152-oozie-OOZIE-SERVER-upload-sharelib -Doozie.log.dir=/var/log/oozie -Doozie.log.file=oozie-cmf-oozie-OOZIE_SERVER-cm-r01en01.mws.mds.xyz.log.out -Doozie.config.file=oozie-site.xml -Doozie.log4j.file=log4j.properties -Doozie.log4j.reload=10 -Doozie.http.hostname=cm-r01en01.mws.mds.xyz -Doozie.http.port=11000 -Djava.net.preferIPv4Stack=true -Doozie.admin.port= -Dderby.stream.error.file=/var/log/oozie/derby.log -Doozie.instance.id=cm-r01en01.mws.mds.xyz -Djava.library.path=/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/hadoop/lib/native -cp ':/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/accessors-smart-1.2.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/activation-1.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/aggdesigner-algorithm-6.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/ant-1.6.5.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/antlr-2.7.7.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/antlr-runtime-3.4.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/aopalliance-1.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/aopalliance-repackaged-2.5.0-b32.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/apache-jsp-9.3.25.v20180904.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/apache-jstl-9.3.25.v20180904.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/apache-log4j-extras-1.2.17.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/asm-5.0.4.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/asm-commons-6.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/asm-tree-6.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/audience-annotations-0.5.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/automaton-1.11-8.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/avro.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/batik-anim-1.10.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/batik-awt-util-1.10.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/batik-bridge-1.10.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/batik-codec-1.10.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/batik-constants-1.10.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/batik-css-1.10.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/batik-dom-1.10.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/batik-ext-1.10.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/batik-gvt-1.10.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/batik-i18n-1.10.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/batik-parser-1.10.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/batik-rasterizer-1.10.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/batik-script-1.10.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/batik-svg-dom-1.10.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/batik-svggen-1.10.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/batik-svgrasterizer-1.10.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/batik-transcoder-1.10.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/batik-util-1.10.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/batik-xml-1.10.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/bcpkix-jdk15on-1.60.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/bcprov-jdk15on-1.60.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/bonecp-0.8.0.RELEASE.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/c3p0-0.9.1.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/commons-beanutils-1.9.3.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/commons-cli-1.4.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/commons-codec-1.9.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/commons-collections-3.2.2.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/commons-configuration2-2.1.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/commons-crypto-1.0.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/commons-daemon-1.0.13.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/commons-dbcp-1.4.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/commons-exec-1.3.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/commons-httpclient-3.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/commons-io-2.6.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/commons-lang3-3.7.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/commons-math3-3.1.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/commons-net-3.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/commons-pool-1.5.4.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/curator-client-2.7.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/curator-framework-2.7.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/curator-recipes-2.7.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/curator-x-discovery-2.7.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/datanucleus-api-jdo-4.2.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/datanucleus-core-4.1.6.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/datanucleus-rdbms-4.1.7.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/disruptor-3.3.6.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/dozer-5.5.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/ecj-4.4.2.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/ehcache-3.3.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/ehcache-core-2.6.3.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/fastutil-6.5.6.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/findbugs-annotations-1.3.9-1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/fst-2.50.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/geronimo-jms_1.1_spec-1.1.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/geronimo-jpa_2.0_spec-1.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/geronimo-jta_1.1_spec-1.1.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/gmetric4j-1.0.7.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/graphviz-java-0.7.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/gson-2.7.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/guice-3.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/guice-assistedinject-3.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/guice-servlet-4.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hadoop-annotations.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hadoop-auth.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hadoop-client-3.0.0-cdh6.2.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hadoop-common.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hadoop-hdfs-client.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hadoop-hdfs.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hadoop-mapreduce-client-common.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hadoop-mapreduce-client-core.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hadoop-mapreduce-client-jobclient.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hadoop-yarn-api.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hadoop-yarn-client.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hadoop-yarn-common.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hadoop-yarn-registry.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hadoop-yarn-server-applicationhistoryservice.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hadoop-yarn-server-common.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hadoop-yarn-server-resourcemanager.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hadoop-yarn-server-web-proxy.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hbase-client-2.1.0-cdh6.2.0-tests.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hbase-client.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hbase-common.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hbase-hadoop2-compat.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hbase-hadoop-compat.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hbase-http.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hbase-mapreduce.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hbase-metrics-api.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hbase-metrics.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hbase-procedure.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hbase-protocol.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hbase-protocol-shaded.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hbase-replication.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hbase-server.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hbase-shaded-miscellaneous.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hbase-shaded-netty.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hbase-shaded-protobuf.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hbase-zookeeper.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/HikariCP-2.6.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/HikariCP-java7-2.4.12.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hive-classification.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hive-hcatalog-pig-adapter.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hive-jdbc.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hive-llap-client.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hive-llap-common.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hive-llap-server.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hive-llap-tez.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hive-metastore.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hive-serde.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hive-service.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hive-service-rpc.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hive-shims-0.23.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hive-shims-common.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hive-shims.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hive-shims-scheduler.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hk2-api-2.5.0-b32.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hk2-locator-2.5.0-b32.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hk2-utils-2.5.0-b32.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/hsqldb-1.8.0.10.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/htrace-core4-4.1.0-incubating.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/httpclient-4.5.3.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/httpcore-4.4.6.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/j2v8_linux_x86_64-4.6.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/j2v8_macosx_x86_64-4.6.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/j2v8_win32_x86-4.6.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/j2v8_win32_x86_64-4.6.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jackson-annotations-2.9.8.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jackson-core-2.9.8.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jackson-core-asl-1.9.13.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jackson-databind-2.9.8.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jackson-jaxrs-1.9.2.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jackson-jaxrs-base-2.9.8.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jackson-jaxrs-json-provider-2.9.8.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jackson-mapper-asl-1.9.13.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jackson-module-jaxb-annotations-2.9.8.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jackson-xc-1.9.2.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jamon-runtime-2.3.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jansi-1.9.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jasper-compiler-5.5.23.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/JavaEWAH-1.1.6.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/javaparser-1.0.11.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/javassist-3.20.0-GA.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/java-util-1.9.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/javax.annotation-api-1.2.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/javax.el-3.0.1-b11.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/javax.inject-1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/javax.inject-2.5.0-b32.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/javax.jdo-3.2.0-m3.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/javax.servlet-api-3.1.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/javax.servlet.jsp-2.3.2.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/javax.servlet.jsp-api-2.3.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/javax.ws.rs-api-2.0.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/javolution-5.5.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jaxb2-basics-1.11.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jaxb2-basics-runtime-1.11.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jaxb2-basics-tools-1.11.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jaxb-api-2.2.11.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jcip-annotations-1.0-1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jcl-over-slf4j-1.7.25.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jcodings-1.0.18.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jcommander-1.30.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jdo-api-3.0.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jdom-1.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jersey-client-1.19.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jersey-client-2.25.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jersey-common-2.25.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jersey-container-servlet-core-2.25.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jersey-core-1.19.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jersey-guava-2.25.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jersey-guice-1.19.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jersey-json-1.19.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jersey-media-jaxb-2.25.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jersey-server-1.19.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jersey-server-2.25.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jersey-servlet-1.19.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jettison-1.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jetty-annotations-9.3.25.v20180904.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jetty-http-9.3.25.v20180904.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jetty-io-9.3.25.v20180904.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jetty-jaas-9.3.25.v20180904.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jetty-jndi-9.3.25.v20180904.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jetty-plus-9.3.25.v20180904.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jetty-runner-9.3.25.v20180904.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jetty-schemas-3.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jetty-security-9.3.25.v20180904.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jetty-server-9.3.25.v20180904.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jetty-servlet-9.3.25.v20180904.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jetty-util-9.3.25.v20180904.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jetty-util-ajax-9.3.25.v20180904.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jetty-webapp-9.3.25.v20180904.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jetty-xml-9.3.25.v20180904.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jline-2.12.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jms-1.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/joda-time-2.9.9.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/joni-2.1.11.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jpam-1.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jsch-0.1.54.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/json-io-2.5.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/json-simple-1.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/json-smart-2.3.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jsp-api-2.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jsr311-api-1.1.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jta-1.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jul-to-slf4j-1.7.25.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jython-standalone-2.7.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/jzlib-1.1.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/kerb-admin-1.0.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/kerb-client-1.0.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/kerb-common-1.0.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/kerb-core-1.0.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/kerb-crypto-1.0.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/kerb-identity-1.0.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/kerb-server-1.0.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/kerb-simplekdc-1.0.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/kerb-util-1.0.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/kerby-asn1-1.0.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/kerby-config-1.0.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/kerby-pkix-1.0.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/kerby-util-1.0.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/kerby-xdr-1.0.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/kryo-2.22.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/libfb303-0.9.3.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/libthrift-0.9.3-1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/log4j-api-2.8.2.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/log4j-slf4j-impl-2.8.2.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/mail-1.4.7.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/metrics-core-3.1.2.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/metrics-ganglia-3.1.2.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/metrics-graphite-3.1.2.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/metrics-json-3.1.2.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/metrics-jvm-3.1.2.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/mssql-jdbc-6.2.1.jre7.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/nashorn-promise-0.1.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/netty-3.10.6.Final.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/netty-all-4.1.17.Final.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/nimbus-jose-jwt-4.41.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/objenesis-1.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/okhttp-2.7.5.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/okio-1.6.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/oozie-client-5.1.0-cdh6.2.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/oozie-client.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/oozie-core-5.1.0-cdh6.2.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/oozie-core.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/oozie-fluent-job-api-5.1.0-cdh6.2.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/oozie-fluent-job-api.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/oozie-sharelib-hcatalog-5.1.0-cdh6.2.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/oozie-sharelib-hcatalog.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/oozie-sharelib-oozie-5.1.0-cdh6.2.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/oozie-sharelib-oozie.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/oozie-tools-5.1.0-cdh6.2.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/oozie-tools.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/opencsv-2.3.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/openjpa-jdbc-2.4.2.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/openjpa-kernel-2.4.2.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/openjpa-lib-2.4.2.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/openjpa-persistence-2.4.2.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/openjpa-persistence-jdbc-2.4.2.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/org.eclipse.jgit-5.0.1.201806211838-r.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/osgi-resource-locator-1.0.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/paranamer-2.8.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/parquet-hadoop-bundle.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/pig.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/postgresql-9.0-801.jdbc4.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/quartz-2.1.7.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/re2j-1.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/serializer-2.7.2.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/serp-1.15.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/slf4j-api-1.7.25.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/slf4j-log4j12-1.7.25.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/slf4j-simple-1.7.25.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/slider-core-0.90.2-incubating.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/snappy-java-1.1.4.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/stax2-api-3.1.4.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/stringtemplate-3.2.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/taglibs-standard-impl-1.2.5.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/taglibs-standard-spec-1.2.5.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/tephra-api-0.6.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/tephra-core-0.6.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/tephra-hbase-compat-1.0-0.6.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/transaction-api-1.1.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/twill-api-0.6.0-incubating.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/twill-common-0.6.0-incubating.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/twill-core-0.6.0-incubating.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/twill-discovery-api-0.6.0-incubating.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/twill-discovery-core-0.6.0-incubating.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/twill-zookeeper-0.6.0-incubating.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/validation-api-1.1.0.Final.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/websocket-api-9.3.25.v20180904.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/websocket-client-9.3.25.v20180904.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/websocket-common-9.3.25.v20180904.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/websocket-server-9.3.25.v20180904.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/websocket-servlet-9.3.25.v20180904.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/woodstox-core-5.0.3.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/xalan-2.7.2.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/xbean-asm5-shaded-3.17.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/xercesImpl-2.11.0.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/xml-apis-1.4.01.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/xml-apis-ext-1.3.04.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/xmlgraphics-commons-2.3.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/xz-1.6.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libtools/zookeeper.jar:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/libext/*.jar' org.apache.oozie.tools.OozieSharelibCLI create -fs hdfs://cm-r01nn02.mws.mds.xyz:8020 -locallib /opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/oozie/oozie-sharelib-yarn -concurrency 8
log4j:ERROR Could not instantiate class [org.cloudera.log4j.redactor.RedactorAppender].
java.lang.ClassNotFoundException: org.cloudera.log4j.redactor.RedactorAppender
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
at org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
at org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.apache.oozie.service.XLogService.init(XLogService.java:149)
at org.apache.oozie.service.Services.setServiceInternal(Services.java:386)
at org.apache.oozie.service.Services.<init>(Services.java:111)
at org.apache.oozie.tools.OozieSharelibCLI.run(OozieSharelibCLI.java:194)
at org.apache.oozie.tools.OozieSharelibCLI.main(OozieSharelibCLI.java:97)
log4j:ERROR Could not instantiate appender named "redactorForRootLogger".
log4j:ERROR Could not instantiate class [org.cloudera.log4j.redactor.RedactorAppender].
java.lang.ClassNotFoundException: org.cloudera.log4j.redactor.RedactorAppender
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
at org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
at org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.apache.oozie.service.XLogService.init(XLogService.java:149)
at org.apache.oozie.service.Services.setServiceInternal(Services.java:386)
at org.apache.oozie.service.Services.<init>(Services.java:111)
at org.apache.oozie.tools.OozieSharelibCLI.run(OozieSharelibCLI.java:194)
at org.apache.oozie.tools.OozieSharelibCLI.main(OozieSharelibCLI.java:97)
log4j:ERROR Could not instantiate appender named "redactor2".
log4j:ERROR Could not instantiate class [org.cloudera.log4j.redactor.RedactorAppender].
java.lang.ClassNotFoundException: org.cloudera.log4j.redactor.RedactorAppender
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
at org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
at org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.apache.oozie.service.XLogService.init(XLogService.java:149)
at org.apache.oozie.service.Services.setServiceInternal(Services.java:386)
at org.apache.oozie.service.Services.<init>(Services.java:111)
at org.apache.oozie.tools.OozieSharelibCLI.run(OozieSharelibCLI.java:194)
at org.apache.oozie.tools.OozieSharelibCLI.main(OozieSharelibCLI.java:97)
log4j:ERROR Could not instantiate appender named "redactor1".
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/jars/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/jars/slf4j-simple-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See <a href="http://www.slf4j.org/codes.html#multiple_bindings" target="_blank">http://www.slf4j.org/codes.html#multiple_bindings</a> for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
ERROR StatusLogger Log4j2 could not find a logging implementation. Please add log4j-core to the classpath. Using SimpleLogger to log to the console...
log4j:ERROR Could not instantiate class [org.cloudera.log4j.redactor.RedactorAppender].
java.lang.ClassNotFoundException: org.cloudera.log4j.redactor.RedactorAppender
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
at org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
at org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:395)
at org.apache.log4j.PropertyWatchdog.doOnChange(PropertyConfigurator.java:955)
at org.apache.log4j.helpers.FileWatchdog.checkAndConfigure(FileWatchdog.java:89)
at org.apache.log4j.helpers.FileWatchdog.<init>(FileWatchdog.java:58)
at org.apache.log4j.PropertyWatchdog.<init>(PropertyConfigurator.java:947)
at org.apache.log4j.PropertyConfigurator.configureAndWatch(PropertyConfigurator.java:473)
at org.apache.oozie.service.XLogService.init(XLogService.java:178)
at org.apache.oozie.service.Services.setServiceInternal(Services.java:386)
at org.apache.oozie.service.Services.<init>(Services.java:111)
at org.apache.oozie.tools.OozieSharelibCLI.run(OozieSharelibCLI.java:194)
at org.apache.oozie.tools.OozieSharelibCLI.main(OozieSharelibCLI.java:97)
log4j:ERROR Could not instantiate appender named "redactorForRootLogger".
log4j:ERROR Could not instantiate class [org.cloudera.log4j.redactor.RedactorAppender].
java.lang.ClassNotFoundException: org.cloudera.log4j.redactor.RedactorAppender
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
at org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
at org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:395)
at org.apache.log4j.PropertyWatchdog.doOnChange(PropertyConfigurator.java:955)
at org.apache.log4j.helpers.FileWatchdog.checkAndConfigure(FileWatchdog.java:89)
at org.apache.log4j.helpers.FileWatchdog.<init>(FileWatchdog.java:58)
at org.apache.log4j.PropertyWatchdog.<init>(PropertyConfigurator.java:947)
at org.apache.log4j.PropertyConfigurator.configureAndWatch(PropertyConfigurator.java:473)
at org.apache.oozie.service.XLogService.init(XLogService.java:178)
at org.apache.oozie.service.Services.setServiceInternal(Services.java:386)
at org.apache.oozie.service.Services.<init>(Services.java:111)
at org.apache.oozie.tools.OozieSharelibCLI.run(OozieSharelibCLI.java:194)
at org.apache.oozie.tools.OozieSharelibCLI.main(OozieSharelibCLI.java:97)
log4j:ERROR Could not instantiate appender named "redactor2".
log4j:ERROR Could not instantiate class [org.cloudera.log4j.redactor.RedactorAppender].
java.lang.ClassNotFoundException: org.cloudera.log4j.redactor.RedactorAppender
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
at org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
at org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:395)
at org.apache.log4j.PropertyWatchdog.doOnChange(PropertyConfigurator.java:955)
at org.apache.log4j.helpers.FileWatchdog.checkAndConfigure(FileWatchdog.java:89)
at org.apache.log4j.helpers.FileWatchdog.<init>(FileWatchdog.java:58)
at org.apache.log4j.PropertyWatchdog.<init>(PropertyConfigurator.java:947)
at org.apache.log4j.PropertyConfigurator.configureAndWatch(PropertyConfigurator.java:473)
at org.apache.oozie.service.XLogService.init(XLogService.java:178)
at org.apache.oozie.service.Services.setServiceInternal(Services.java:386)
at org.apache.oozie.service.Services.<init>(Services.java:111)
at org.apache.oozie.tools.OozieSharelibCLI.run(OozieSharelibCLI.java:194)
at org.apache.oozie.tools.OozieSharelibCLI.main(OozieSharelibCLI.java:97)
log4j:ERROR Could not instantiate appender named "redactor1".
[14/Aug/2019 00:35:33 +0000] 18806 MainThread redactor INFO Killing with SIGTERM
Cheers, TK
... View more
Labels:
- Labels:
-
Apache Oozie
-
Cloudera Manager
08-14-2019
04:49 AM
Switching to parcels worked. I could have just deployed things manually with rpm's but parcels make this much easier and so I switched to them. Posted a new issue however with Oozie. (Separate thread)
... View more
08-11-2019
09:49 PM
Spoke too soon. Manual install didn't do the trick. I'm actually uncertain which nodes the error message is coming from so unsure where to install Hue now. Will pick it up tomorrow. Cheers, TK
... View more