Member since: 01-12-2022
Posts: 25
Kudos Received: 3
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 855 | 04-03-2024 12:17 AM |
04-03-2024
12:17 AM
I think I've found the cause of the problem, and it's not related to the Spark version. I used the Java process analysis tool Arthas to investigate and found that the AM startup process was blocked at the creation of the Timeline client. The problem seems to come from our Timeline Service using an embedded HBase service: once we pointed the Timeline Service at our production environment's HBase, the problem disappeared.
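For anyone who hits the same symptom, a plain thread dump should also show the blockage; a minimal sketch, run on the NodeManager host where the AM container lives (the jps filter pattern is illustrative and depends on your AM main class):

# Find the AM JVM and dump its threads
AM_PID=$(jps -lm | grep -i applicationmaster | awk '{print $1}')
jstack "$AM_PID" | grep -i -B 2 -A 20 timeline   # look for threads blocked creating the TimelineClient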
03-15-2024
03:35 AM
1 Kudo
Hi everyone, our Hadoop cluster has recently encountered a strange issue. After starting the cluster, Spark jobs run normally, but after running for more than a week, job submissions start timing out. Specifically, the jobs all sit in the ACCEPTED state and then fail after a 2-minute timeout. The error log shows:

Caused by: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/134.1.1.3:45866 remote=hdp144019.bigdata.com/134.1.1.19:45454]

I took an application ID and checked the corresponding NodeManager log for the attempt to start the AM, and found that the AM was never started at all. Around the time the job submission timed out, the NodeManager logged the following:

2024-03-15 00:04:17,668 INFO containermanager.ContainerManagerImpl (ContainerManagerImpl.java:handle(1607)) - couldn't find application application_1709910180593_38018 while processing FINISH_APPS event. The ResourceManager allocated resources for this application to the NodeManager but no active containers were found to process.

This was accompanied by IPC-related errors:

2024-03-15 00:04:17,082 WARN ipc.Server (Server.java:processResponse(1523)) - IPC Server handler 11 on 45454, call Call#31 Retry#0 org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.startContainers from 134.1.1.8:32766: output error
2024-03-15 00:04:17,083 INFO ipc.Server (Server.java:run(2695)) - IPC Server handler 11 on 45454 caught an exception
java.nio.channels.ClosedChannelException
at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:268)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:459)
at org.apache.hadoop.ipc.Server.channelWrite(Server.java:3250)
at org.apache.hadoop.ipc.Server.access$1700(Server.java:137)
at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:1473)
at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1543)
at org.apache.hadoop.ipc.Server$Connection.sendResponse(Server.java:2593)
at org.apache.hadoop.ipc.Server$Connection.access$300(Server.java:1615)
at org.apache.hadoop.ipc.Server$RpcCall.doResponse(Server.java:940)
at org.apache.hadoop.ipc.Server$Call.sendResponse(Server.java:774)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:885)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)

Since I can't see anything conclusive in the logs, I'm unsure how to troubleshoot further. Does anyone have any suggestions?

Spark version: 3.3.2
Hadoop version: HDP-3.1.5.0-152
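For reference, the checks I can think of running next, based on the host and port in the error above (the pgrep pattern is illustrative):

# Is the NodeManager IPC port that timed out still accepting connections?
ss -tnp | grep 45454
# Does the ResourceManager still consider the NodeManagers healthy?
yarn node -list -all
# Thread-dump the NodeManager to look for blocked IPC handler threads
jstack $(pgrep -f NodeManager) > nm-threads.txt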
Labels:
- Apache Spark
- Apache YARN
01-18-2024
12:48 AM
Thank you very much for your answer. I will try adjusting the heap memory allocation. I would also like to ask how the rule of 1 GB of memory per 1 million blocks was derived. Is there a more precise calculation method behind that conclusion?
01-16-2024
12:52 AM
Hi, I have a Hadoop 3.1.1 cluster, and recently I found that the NameNode heap memory usage in the cluster is very high. The Web UI shows 629,973,631 file system objects in the cluster, so by my calculation (sketched below) it should occupy no more than 90 GB of memory, right? Why is the current memory usage consistently above 140 GB? Is this related to my enabling erasure coding?
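For reference, my back-of-envelope calculation assumes roughly 150 bytes of heap per file system object, which is the commonly cited estimate:

# ~630M objects * ~150 bytes each, expressed in GiB
awk 'BEGIN { printf "%.1f GiB\n", 629973631 * 150 / 1024^3 }'   # about 88.0 GiB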
Labels:
- Apache Ambari
- HDFS
01-05-2024
12:34 AM
I think your intention is to retrieve this data for your own monitoring or reporting tasks. If so, you can try querying JMX for the relevant metrics, for example via http://namenode:port/jmx.
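A minimal sketch of such a request (the host and port are placeholders, and whether you need https plus --negotiate depends on your security setup; FSNamesystemState is the standard NameNode bean that holds file and block counts):

# Pull FSNamesystem metrics (FilesTotal, BlocksTotal, ...) from the NameNode
curl -s 'http://namenode:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState'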
08-10-2023
11:05 PM
Hi Scharan, this is the command's output: [root@hdp002 ~]# openssl s_client -connect hdp002.datasw.com:50470 -showcerts
CONNECTED(00000003)
depth=1 C = CN, ST = ShenZhen, L = GuangDong, O = DATASW, OU = PlatformTeam, CN = datsw
verify error:num=19:self signed certificate in certificate chain
---
Certificate chain
0 s:/C=CN/ST=GuangDong/L=ShenZhen/O=DATASW/OU=PlatformTeam/CN=hdp002.datasw
i:/C=CN/ST=ShenZhen/L=GuangDong/O=DATASW/OU=PlatformTeam/CN=datsw
-----BEGIN CERTIFICATE-----
MIIDXDCCAkQCCQDNmxfKgcxaOjANBgkqhkiG9w0BAQsFADBsMQswCQYDVQQGEwJD
TjERMA8GA1UECAwIU2hlblpoZW4xEjAQBgNVBAcMCUd1YW5nRG9uZzEPMA0GA1UE
CgwGREFUQVNXMRUwEwYDVQQLDAxQbGF0Zm9ybVRlYW0xDjAMBgNVBAMMBWRhdHN3
MB4XDTIxMDkxNDE3MjcwM1oXDTMxMDkxMjE3MjcwM1owdDELMAkGA1UEBhMCQ04x
EjAQBgNVBAgTCUd1YW5nRG9uZzERMA8GA1UEBxMIU2hlblpoZW4xDzANBgNVBAoT
BkRBVEFTVzEVMBMGA1UECxMMUGxhdGZvcm1UZWFtMRYwFAYDVQQDEw1oZHAwMDIu
ZGF0YXN3MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAgUquEQOPlyZs
vqfNZCB6qXh+wDoQlOMTPWfoHAtk5c041LwDrtkjF3I+uXntqcV/aB0ufSLA3M/j
2iiPt+8sosjp0cOakWXpOGx3Vv7MXteF/c8HMav+9yheY8qcoHfrRwPqpq9v0Ysz
31v1l6zJiuubF9JJpYMMmBkfwd0RUyLD079VkKAHtq6zAlpBm+zKVc6B0Xiddvw6
La2PB/c+vqVGVKsI+0RqiMdM1IXCuV76CT47riQ8G0PPs1OSu4HVI+J1j6P3R6He
rpIG1GstmcGjbR10qo/MDg1svwxiGOJxw1sN+65LQPLB3cTw4JmUMIJq8F1REiAZ
azP8I1oTsQIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQA4B1NwbQ0jBBccOfX0zCoQ
mHiIQvpMiiUYJRTAfUqueIFMwRF5aw85u2PAMnkgxqUmRG87RTgOg5dx3hPswS83
FQOeG2kUPXIR9HRaJZirRZ0vZByzsHl77N4Bb0y0Z0m1svbSczxH/RCEY5L7NvMR
fUSZjYW8LFk8nl3tH60u4f7cDJ+bTiDA37B7tUqvwrRHsvSJY9Ndp52QRUHxST3o
iutBSEOWfhBvPkv7U/B3WAnT9Dp0aSur2jIr5BP99qs13cy4qK0h5OAB+lgCEBt5
L4JKqWOv3W33j4vtNeYOSzfHgoi2JgKDP+imoVgnGeK1GNJsHMxecw4ef0Eik9sW
-----END CERTIFICATE-----
1 s:/C=CN/ST=ShenZhen/L=GuangDong/O=DATASW/OU=PlatformTeam/CN=datsw
i:/C=CN/ST=ShenZhen/L=GuangDong/O=DATASW/OU=PlatformTeam/CN=datsw
-----BEGIN CERTIFICATE-----
MIIDqzCCApOgAwIBAgIJAMmdxf5CbH3BMA0GCSqGSIb3DQEBCwUAMGwxCzAJBgNV
BAYTAkNOMREwDwYDVQQIDAhTaGVuWmhlbjESMBAGA1UEBwwJR3VhbmdEb25nMQ8w
DQYDVQQKDAZEQVRBU1cxFTATBgNVBAsMDFBsYXRmb3JtVGVhbTEOMAwGA1UEAwwF
ZGF0c3cwHhcNMjEwOTE0MDE1MTQwWhcNMzEwOTEyMDE1MTQwWjBsMQswCQYDVQQG
EwJDTjERMA8GA1UECAwIU2hlblpoZW4xEjAQBgNVBAcMCUd1YW5nRG9uZzEPMA0G
A1UECgwGREFUQVNXMRUwEwYDVQQLDAxQbGF0Zm9ybVRlYW0xDjAMBgNVBAMMBWRh
dHN3MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuDsAyDscHIGXRmTz
EKzWjenR8c2f6hpLEyRTqvlI9AzTd5gZRMMWg7ax4erC7BPwva+RHZoeug9kE2HC
UHoGCP4YIZdEux5phPqv1vP/CBvbXnYZ4olMRSJDuf57TpAZjMTy5FHgs7QDzpCk
9Ez7CQWeXaaAaqnGo8SUWLATLadudSgkPDLSJL/h2IGjhxKPMyaHODxXQRxUIRbr
tI+8+9+siRLi+3EIhMXLT1oEOnsB/BQmawbNjyLtuZZoH8pyGJ3ByoM06zLWWMGI
eujFCOlSRMMzpEr/xLhJQQDFBLEFJKYOD0Z5QbqgrFkQtLOFpEnDbTXasQvlk4on
nLVemQIDAQABo1AwTjAdBgNVHQ4EFgQUUkBUUrsXSVF7MQDnB0hKjtpiOt0wHwYD
VR0jBBgwFoAUUkBUUrsXSVF7MQDnB0hKjtpiOt0wDAYDVR0TBAUwAwEB/zANBgkq
hkiG9w0BAQsFAAOCAQEAdU7iAr/F5lCLvPMfo9LA7JhI7IQdic/EhxELLuUELF7c
UBOOlJbWFxLYaZ6SwZ9lGa4d+wjNWoX+QvLt02PGZV3h0aB6O8E0827jjgI61r0C
UNSD3N3KadbK52st5W34sIssXqBNIga1w9knfWouiqNcHBixyZdYfWOwGLPSAbpC
K4os4yi5QU4YSvNwLO9GAYgem0p0Uel9By3m0cFmyFr+GcA+VAWltk7xBKOsCxam
nnQJE+djbMekmXW6cmujbqh02Q6LF0/6wNDMRnRFkDvF5WnT1XxQ7O+HFkeQXPED
qCkcKcHLqMxhK72iVlLgCq6n+oYLDxeODfHEjvo3sg==
-----END CERTIFICATE-----
---
Server certificate
subject=/C=CN/ST=GuangDong/L=ShenZhen/O=DATASW/OU=PlatformTeam/CN=hdp002.datasw
issuer=/C=CN/ST=ShenZhen/L=GuangDong/O=DATASW/OU=PlatformTeam/CN=datsw
---
No client certificate CA names sent
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 2350 bytes and written 483 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-SHA384
Session-ID: 64D5CE575ACD383C1B9BED92D5F2FDC1C63308098FD241173411E62C2E5E0395
Session-ID-ctx:
Master-Key: D73511C7D981C1A2F7813E02F102BD23057A5A79C5E9E75C3BCE870AA40D7CE4F02E41115F28510CE7AF85C6F6675BE6
Key-Arg : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
Start Time: 1691733590
Timeout : 300 (sec)
Verify return code: 19 (self signed certificate in certificate chain)
---

KIT is not installed on my Windows machine, so I used curl on the Linux server to send the HTTP request to the NameNode:

curl -i --insecure --negotiate -u: "https://hdp002.datasw.com:50470/jmx?qry=Hadoop:service=NameNode,name=RpcActivityForPort8020"
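As a quick follow-up check, this sketch decodes the leaf certificate to see whether it carries any subjectAltName entries at all (same host and port as above):

openssl s_client -connect hdp002.datasw.com:50470 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A 1 'Subject Alternative Name'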
08-07-2023
08:18 PM
I have a secure Hadoop cluster on HDP 3.1, and I recently tried to integrate this cluster with the Knox component to implement a secure proxy. The cluster has Kerberos, LDAP, and HTTPS enabled. I created a topology configuration like this:

<topology>
<gateway>
<provider>
<role>authentication</role>
<name>ShiroProvider</name>
<enabled>true</enabled>
<param>
<name>main.ldapRealm</name>
<value>org.apache.knox.gateway.shirorealm.KnoxLdapRealm</value>
</param>
<param>
<name>main.ldapContextFactory</name>
<value>org.apache.knox.gateway.shirorealm.KnoxLdapContextFactory</value>
</param>
<param>
<name>main.ldapRealm.contextFactory</name>
<value>$ldapContextFactory</value>
</param>
<param>
<name>main.ldapRealm.userDnTemplate</name>
<value>cn=admin,dc=datasw,dc=com</value>
</param>
<param>
<name>main.ldapRealm.contextFactory.url</name>
<value>ldap://hdp001.datasw.com:389</value>
</param>
<param>
<name>main.ldapRealm.contextFactory.authenticationMechanism</name>
<value>simple</value>
</param>
<param>
<name>urls./**</name>
<value>authcBasic</value>
</param>
</provider>
<provider>
<role>authentication</role>
<name>HadoopAuth</name>
<enabled>true</enabled>
<param>
<name>config.prefix</name>
<value>hadoop.auth.config</value>
</param>
<param>
<name>hadoop.auth.config.type</name>
<value>kerberos</value>
</param>
<param>
<name>hadoop.auth.config.simple.anonymous.allowed</name>
<value>false</value>
</param>
<param>
<name>hadoop.auth.config.token.validity</name>
<value>1800</value>
</param>
<param>
<name>hadoop.auth.config.cookie.domain</name>
<value>datasw.com</value>
</param>
<param>
<name>hadoop.auth.config.cookie.path</name>
<value>gateway/default</value>
</param>
<param>
<name>hadoop.auth.config.kerberos.principal</name>
<value>HTTP/hdp003.datasw@DATASW.COM</value>
</param>
<param>
<name>hadoop.auth.config.kerberos.keytab</name>
<value>/etc/security/keytabs/spnego.service.keytab</value>
</param>
<param>
<name>hadoop.auth.config.kerberos.name.rules</name>
<value>DEFAULT</value>
</param>
<param>
<name>fs.defaultFS</name>
<value>hdfs://hdfsCluster</value>
</param>
<param>
<name>dfs.internal.nameservices</name>
<value>hdfsCluster</value>
</param>
<param>
<name>dfs.ha.namenodes.hdfsCluster</name>
<value>nn1,nn2</value>
</param>
<param>
<name>dfs.nameservices</name>
<value>hdfsCluster</value>
</param>
<param>
<name>dfs.namenode.https-address</name>
<value>hdp001.datasw:50470</value>
</param>
<param>
<name>dfs.namenode.https-address.hdfsCluster.nn1</name>
<value>hdp001.datasw:50470</value>
</param>
<param>
<name>dfs.namenode.https-address.hdfsCluster.nn2</name>
<value>hdp002.datasw:50470</value>
</param>
</provider>
</gateway>
<service>
<role>HDFSUI</role>
<url>https://hdp002.datasw.com:50470</url>
</service>
</topology>

I also copied the Hadoop cluster's truststore.jks file to $GATEWAY_HOME/data/security/keystores/ and set the gateway.httpclient.truststore.path parameter in gateway-site.xml:

<property>
<name>gateway.httpclient.truststore.path</name>
<value>/usr/local/knox/data/security/keystores/truststore.jks</value>
</property>
<property>
<name>gateway.httpclient.truststore.type</name>
<value>JKS</value>
</property>
<property>
<name>gateway.httpclient.truststore.password.alias</name>
<value>pthdp</value>
</property>

Then I restarted the Knox gateway, but when I access the NameNode Web UI, I receive the following error message:

2023-08-08 11:14:38,050 58fc3dbf-4c6e-4684-860d-0a4e443f85d2 WARN knox.gateway (DefaultDispatch.java:executeOutboundRequest(183)) - Connection exception dispatching request: https://hdp002.datasw.com:50470/?user.name=admin javax.net.ssl.SSLPeerUnverifiedException: Certificate for <hdp002.datasw.com> doesn't match any of the subject alternative names: []
javax.net.ssl.SSLPeerUnverifiedException: Certificate for <hdp002.datasw.com> doesn't match any of the subject alternative names: []
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.verifyHostname(SSLConnectionSocketFactory.java:507) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:437) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:384) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56) ~[httpclient-4.5.13.jar:4.5.13]
at org.apache.knox.gateway.dispatch.DefaultDispatch.executeOutboundRequest(DefaultDispatch.java:166) ~[gateway-spi-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.dispatch.DefaultDispatch.executeRequest(DefaultDispatch.java:152) ~[gateway-spi-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.dispatch.DefaultDispatch.executeRequestWrapper(DefaultDispatch.java:135) ~[gateway-spi-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.dispatch.DefaultDispatch.doGet(DefaultDispatch.java:300) ~[gateway-spi-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.dispatch.GatewayDispatchFilter$GetAdapter.doMethod(GatewayDispatchFilter.java:183) ~[gateway-spi-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.dispatch.GatewayDispatchFilter.doFilter(GatewayDispatchFilter.java:127) ~[gateway-spi-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.filter.AbstractGatewayFilter.doFilter(AbstractGatewayFilter.java:58) ~[gateway-spi-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.GatewayFilter$Holder.doFilter(GatewayFilter.java:377) ~[gateway-server-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.GatewayFilter$Chain.doFilter(GatewayFilter.java:291) ~[gateway-server-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.identityasserter.common.filter.AbstractIdentityAssertionFilter.doFilterInternal(AbstractIdentityAssertionFilter.java:193) ~[gateway-provider-identity-assertion-common-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.identityasserter.common.filter.AbstractIdentityAssertionFilter.access$000(AbstractIdentityAssertionFilter.java:55) ~[gateway-provider-identity-assertion-common-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.identityasserter.common.filter.AbstractIdentityAssertionFilter$1.run(AbstractIdentityAssertionFilter.java:161) ~[gateway-provider-identity-assertion-common-2.0.0.jar:2.0.0]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_291]
at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_291]
at org.apache.knox.gateway.identityasserter.common.filter.AbstractIdentityAssertionFilter.doAs(AbstractIdentityAssertionFilter.java:156) ~[gateway-provider-identity-assertion-common-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.identityasserter.common.filter.AbstractIdentityAssertionFilter.continueChainAsPrincipal(AbstractIdentityAssertionFilter.java:146) ~[gateway-provider-identity-assertion-common-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.identityasserter.common.filter.CommonIdentityAssertionFilter.doFilter(CommonIdentityAssertionFilter.java:241) ~[gateway-provider-identity-assertion-common-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.GatewayFilter$Holder.doFilter(GatewayFilter.java:377) ~[gateway-server-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.GatewayFilter$Chain.doFilter(GatewayFilter.java:291) ~[gateway-server-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.filter.rewrite.api.UrlRewriteServletFilter.doFilter(UrlRewriteServletFilter.java:57) ~[gateway-provider-rewrite-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.filter.AbstractGatewayFilter.doFilter(AbstractGatewayFilter.java:58) ~[gateway-spi-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.GatewayFilter$Holder.doFilter(GatewayFilter.java:377) ~[gateway-server-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.GatewayFilter$Chain.doFilter(GatewayFilter.java:291) ~[gateway-server-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.filter.ShiroSubjectIdentityAdapter$CallableChain$1.run(ShiroSubjectIdentityAdapter.java:93) ~[gateway-provider-security-shiro-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.filter.ShiroSubjectIdentityAdapter$CallableChain$1.run(ShiroSubjectIdentityAdapter.java:90) ~[gateway-provider-security-shiro-2.0.0.jar:2.0.0]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_291]
at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_291]
at org.apache.knox.gateway.filter.ShiroSubjectIdentityAdapter$CallableChain.call(ShiroSubjectIdentityAdapter.java:146) ~[gateway-provider-security-shiro-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.filter.ShiroSubjectIdentityAdapter$CallableChain.call(ShiroSubjectIdentityAdapter.java:76) ~[gateway-provider-security-shiro-2.0.0.jar:2.0.0]
at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90) ~[shiro-core-1.10.0.jar:1.10.0]
at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83) ~[shiro-core-1.10.0.jar:1.10.0]
at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:387) ~[shiro-core-1.10.0.jar:1.10.0]
at org.apache.knox.gateway.filter.ShiroSubjectIdentityAdapter.doFilter(ShiroSubjectIdentityAdapter.java:73) ~[gateway-provider-security-shiro-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.GatewayFilter$Holder.doFilter(GatewayFilter.java:377) ~[gateway-server-2.0.0.jar:2.0.0]
at org.apache.knox.gateway.GatewayFilter$Chain.doFilter(GatewayFilter.java:291) ~[gateway-server-2.0.0.jar:2.0.0]
at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:61) ~[shiro-web-1.10.0.jar:1.10.0]
at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108) ~[shiro-web-1.10.0.jar:1.10.0]
at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137) ~[shiro-web-1.10.0.jar:1.10.0]
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:154) ~[shiro-web-1.10.0.jar:1.10.0]
at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66) ~[shiro-web-1.10.0.jar:1.10.0]
at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108) ~[shiro-web-1.10.0.jar:1.10.0]
at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137) ~[shiro-web-1.10.0.jar:1.10.0]
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:154) ~[shiro-web-1.10.0.jar:1.10.0]
at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66) ~[shiro-web-1.10.0.jar:1.10.0]
at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:458) ~[shiro-web-1.10.0.jar:1.10.0]
at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:373) ~[shiro-web-1.10.0.jar:1.10.0]

What else do I need to do to get the Knox proxy working?
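The exception above reports an empty subject alternative names list ([]). If the server certificates end up having to be regenerated so that httpclient hostname verification can pass, a hedged keytool sketch that adds a SAN matching the host (alias, keystore path, and validity are illustrative, not taken from my setup):

keytool -genkeypair -alias hdp002 -keyalg RSA -keysize 2048 \
  -dname "CN=hdp002.datasw.com,OU=PlatformTeam,O=DATASW,L=ShenZhen,ST=GuangDong,C=CN" \
  -ext "SAN=dns:hdp002.datasw.com" \
  -keystore keystore.jks -validity 3650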
Labels:
- Apache Knox
- Kerberos
07-27-2023
01:01 AM
Hi, my Hadoop cluster runs HDP 3.1.5-152.0. I started HttpFS from the command line, and WebHDFS works fine. But when I send a request to the HttpFS service, I get a 500 error like this:

15:50:39,064 WARN ServletHandler:632 - /webhdfs/v1/user
java.lang.IllegalArgumentException: Empty key
at javax.crypto.spec.SecretKeySpec.<init>(SecretKeySpec.java:96)
at org.apache.hadoop.security.authentication.util.Signer.computeSignature(Signer.java:93)
at org.apache.hadoop.security.authentication.util.Signer.sign(Signer.java:59)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:587)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1619)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:539)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:251)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:745)

This error occurs regardless of whether Kerberos and SSL are enabled on my cluster. Please give me some help, thanks!
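From the trace, SecretKeySpec is rejecting an empty key, i.e. the authentication filter is signing its cookie with an empty secret. A hedged sketch of one possible check, assuming the secret is read from a file referenced by the httpfs.authentication.signature.secret.file property (the path below is illustrative):

# Generate a non-empty signature secret file and protect it
dd if=/dev/urandom of=/etc/security/httpfs-signature.secret bs=1024 count=1
chmod 400 /etc/security/httpfs-signature.secret
# then point httpfs.authentication.signature.secret.file in httpfs-site.xml at this file and restart HttpFS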
Labels:
- HDFS
12-06-2022
07:34 PM
Hi, recently I have been trying to track my Spark applications using a SparkListener, and I have run into a problem. There is an application whose result shows as SUCCEEDED on the YARN Web UI, but it is actually a failed application. My listener class cannot get the error message; the application log only shows the start events for the failed jobs and tasks. Can someone help me?
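As a first cross-check, it may help to compare what YARN recorded with the aggregated application log; a sketch (the application ID is illustrative):

# Final-State and diagnostics as recorded by YARN
yarn application -status application_1670000000000_0001
# Search the aggregated logs for the real failure
yarn logs -applicationId application_1670000000000_0001 | grep -i -E 'exception|error|jobfailed'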
Labels:
- Apache Spark