Member since: 02-21-2018
Posts: 42
Kudos Received: 2
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1860 | 08-12-2021 07:54 AM
 | 4737 | 07-22-2021 02:34 AM
 | 3716 | 07-09-2021 08:25 AM
 | 1901 | 10-26-2018 08:38 AM
11-28-2023
09:17 AM
1 Kudo
Thanks @Majeti. Indeed, with krb5 debug enabled it showed an error when connecting to port 88, even though that port was open over TCP. I opened port 88 over UDP as well and got it working.
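For anyone hitting the same symptom, a minimal sketch of the two pieces involved: turning on Kerberos debug output for the HDFS CLI, and (as an alternative to opening UDP 88) forcing the Kerberos client to use TCP via the udp_preference_limit setting in krb5.conf:

# Enable JDK Kerberos debug output for the next hdfs command
export HADOOP_OPTS="$HADOOP_OPTS -Dsun.security.krb5.debug=true"
hdfs dfs -ls /user

# Alternative to opening UDP 88: force Kerberos over TCP by adding
# the following to the [libdefaults] section of /etc/krb5.conf
#   udp_preference_limit = 1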
11-21-2023
09:01 AM
I have an HDP 2.6.5 cluster (Kerberized) on a public cloud, and I need to access HDFS from outside (public access) through the HDFS CLI (not WebHDFS). As the external hadoop-client host can't be enrolled through Ambari, I just downloaded the Hadoop 2.7.3 package and configured core-site.xml and hdfs-site.xml as below.

core-site.xml

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://datalake-cstest9</value>
</property>
<property>
<name>hadoop.security.authentication</name>
<value>kerberos</value>
</property>
</configuration>

hdfs-site.xml

<configuration>
<property>
<name>dfs.nameservices</name>
<value>datalake-cstest9</value>
</property>
<property>
<name>dfs.ha.namenodes.datalake-cstest9</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.datalake-cstest9.nn1</name>
<value>mnode0.7458907e-6f32-4f4a-b33e-6820be708ad4.datalake.com:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.datalake-cstest9.nn2</name>
<value>mnode1.7458907e-6f32-4f4a-b33e-6820be708ad4.datalake.com:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.datalake-cstest9.nn1</name>
<value>mnode0.7458907e-6f32-4f4a-b33e-6820be708ad4.datalake.com:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.datalake-cstest9.nn2</name>
<value>mnode1.7458907e-6f32-4f4a-b33e-6820be708ad4.datalake.com:50070</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.datalake-cstest9</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>shell(/bin/true)</value>
</property>
</configuration>

I checked that the ports are configured correctly. Ping is OK:

ping mnode1.7458907e-6f32-4f4a-b33e-6820be708ad4.datalake.com
PING mnode1.7458907e-6f32-4f4a-b33e-6820be708ad4.datalake.com (51.*.*.*) 56(84) bytes of data.
64 bytes from mnode1.7458907e-6f32-4f4a-b33e-6820be708ad4.datalake.com (51.*.*.*): icmp_seq=1 ttl=49 time=89.8 ms
64 bytes from mnode1.7458907e-6f32-4f4a-b33e-6820be708ad4.datalake.com (51.*.*.*): icmp_seq=2 ttl=49 time=86.9 ms
64 bytes from mnode1.7458907e-6f32-4f4a-b33e-6820be708ad4.datalake.com (51.*.*.*): icmp_seq=3 ttl=49 time=86.8 ms
64 bytes from mnode1.7458907e-6f32-4f4a-b33e-6820be708ad4.datalake.com (51.*.*.*): icmp_seq=4 ttl=49 time=87.8 ms
64 bytes from mnode1.7458907e-6f32-4f4a-b33e-6820be708ad4.datalake.com (51.*.*.*): icmp_seq=5 ttl=49 time=86.9 ms

Ports 8020 & 50070 are open. When I try to list HDFS folders, I get this error:

hdfs dfs -ls /user
2023-11-20 16:04:27,389 WARN [main] security.UserGroupInformation (UserGroupInformation.java:hasSufficientTimeElapsed(1193)) - Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
2023-11-20 16:04:27,979 WARN [main] security.UserGroupInformation (UserGroupInformation.java:hasSufficientTimeElapsed(1193)) - Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
2023-11-20 16:04:30,240 WARN [main] security.UserGroupInformation (UserGroupInformation.java:hasSufficientTimeElapsed(1193)) - Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
2023-11-20 16:04:32,641 WARN [main] security.UserGroupInformation (UserGroupInformation.java:hasSufficientTimeElapsed(1193)) - Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
2023-11-20 16:04:33,900 WARN [main] ipc.Client (Client.java:run(678)) - Couldn't setup connection for <USER>@7458907E-6F32-4F4A-B33E-6820BE708AD4.DATALAKE.COM to mnode1.7458907e-6f32-4f4a-b33e-6820be708ad4.datalake.com/51.*.*.*:8020
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: ICMP Port Unreachable)]
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:413)
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:560)
at org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:375)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:729)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:725)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:724)
at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
at org.apache.hadoop.ipc.Client.call(Client.java:1451)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
at org.apache.hadoop.fs.Globber.glob(Globber.java:252)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1657)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
Caused by: GSSException: No valid credentials provided (Mechanism level: ICMP Port Unreachable)
at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:777)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
... 40 more
Caused by: java.net.PortUnreachableException: ICMP Port Unreachable
at java.net.PlainDatagramSocketImpl.receive0(Native Method)
at java.net.AbstractPlainDatagramSocketImpl.receive(AbstractPlainDatagramSocketImpl.java:143)
at java.net.DatagramSocket.receive(DatagramSocket.java:812)
at sun.security.krb5.internal.UDPClient.receive(NetClient.java:206)
at sun.security.krb5.KdcComm$KdcCommunication.run(KdcComm.java:404)
at sun.security.krb5.KdcComm$KdcCommunication.run(KdcComm.java:364)
at java.security.AccessController.doPrivileged(Native Method)
at sun.security.krb5.KdcComm.send(KdcComm.java:348)
at sun.security.krb5.KdcComm.sendIfPossible(KdcComm.java:253)
at sun.security.krb5.KdcComm.send(KdcComm.java:229)
at sun.security.krb5.KdcComm.send(KdcComm.java:200)
at sun.security.krb5.KrbTgsReq.send(KrbTgsReq.java:221)
at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:236)
at sun.security.krb5.internal.CredentialsUtil.serviceCredsSingle(CredentialsUtil.java:477)
at sun.security.krb5.internal.CredentialsUtil.serviceCredsReferrals(CredentialsUtil.java:369)
at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:333)
at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:314)
at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:169)
at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:490)
at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:695)
2023-11-20 16:07:56,779 WARN [main] retry.RetryInvocationHandler (RetryInvocationHandler.java:invoke(122)) - Exception while invoking class org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo over mnode1.7458907e-6f32-4f4a-b33e-6820be708ad4.datalake.com/51.*.*.*:8020. Not retrying because failovers (15) exceeded maximum allowed (15)
java.io.IOException: Failed on local exception: java.io.IOException: Couldn't setup connection for <USER>@7458907E-6F32-4F4A-B33E-6820BE708AD4.DATALAKE.COM to mnode1.7458907e-6f32-4f4a-b33e-6820be708ad4.datalake.com/51.*.*.*:8020; Host Details : local host is: "vm-ubuntu.7458907e-6f32-4f4a-b33e-6820be708ad4.datalake.com/10.0.2.15"; destination host is: "mnode1.7458907e-6f32-4f4a-b33e-6820be708ad4.datalake.com":8020;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:773)
at org.apache.hadoop.ipc.Client.call(Client.java:1479)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
at org.apache.hadoop.fs.Globber.glob(Globber.java:252)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1657)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
Caused by: java.io.IOException: Couldn't setup connection for <USER>@7458907E-6F32-4F4A-B33E-6820BE708AD4.DATALAKE.COM to mnode1.7458907e-6f32-4f4a-b33e-6820be708ad4.datalake.com/51.*.*.*:8020
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:679)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:650)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:737)
at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
at org.apache.hadoop.ipc.Client.call(Client.java:1451)
... 28 more
Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: ICMP Port Unreachable)]
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:413)
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:560)
at org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:375)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:729)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:725)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:724)
... 31 more
Caused by: GSSException: No valid credentials provided (Mechanism level: ICMP Port Unreachable)
at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:777)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
... 40 more
Caused by: java.net.PortUnreachableException: ICMP Port Unreachable
at java.net.PlainDatagramSocketImpl.receive0(Native Method)
at java.net.AbstractPlainDatagramSocketImpl.receive(AbstractPlainDatagramSocketImpl.java:143)
at java.net.DatagramSocket.receive(DatagramSocket.java:812)
at sun.security.krb5.internal.UDPClient.receive(NetClient.java:206)
at sun.security.krb5.KdcComm$KdcCommunication.run(KdcComm.java:404)
at sun.security.krb5.KdcComm$KdcCommunication.run(KdcComm.java:364)
at java.security.AccessController.doPrivileged(Native Method)
at sun.security.krb5.KdcComm.send(KdcComm.java:348)
at sun.security.krb5.KdcComm.sendIfPossible(KdcComm.java:253)
at sun.security.krb5.KdcComm.send(KdcComm.java:229)
at sun.security.krb5.KdcComm.send(KdcComm.java:200)
at sun.security.krb5.KrbTgsReq.send(KrbTgsReq.java:221)
at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:236)
at sun.security.krb5.internal.CredentialsUtil.serviceCredsSingle(CredentialsUtil.java:477)
at sun.security.krb5.internal.CredentialsUtil.serviceCredsReferrals(CredentialsUtil.java:369)
at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:333)
at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:314)
at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:169)
at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:490)
at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:695)
... 43 more
ls: Failed on local exception: java.io.IOException: Couldn't setup connection for <USER>@7458907E-6F32-4F4A-B33E-6820BE708AD4.DATALAKE.COM to mnode1.7458907e-6f32-4f4a-b33e-6820be708ad4.datalake.com/51.*.*.*:8020; Host Details : local host is: "vm-ubuntu.7458907e-6f32-4f4a-b33e-6820be708ad4.datalake.com/10.0.2.15"; destination host is: "mnode1.7458907e-6f32-4f4a-b33e-6820be708ad4.datalake.com":8020;

The Kerberos ticket was obtained successfully:

Ticket cache: FILE:/tmp/krb5cc_1001
Default principal: <USER>@7458907E-6F32-4F4A-B33E-6820BE708AD4.DATALAKE.COM
Valid starting Expires Service principal
11/20/2023 15:54:26 11/21/2023 15:54:15 krbtgt/7458907E-6F32-4F4A-B33E-6820BE708AD4.DATALAKE.COM@7458907E-6F32-4F4A-B33E-6820BE708AD4.DATALAKE.COM

Any ideas why I can't access HDFS? Is any information missing?
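Note that ping only proves ICMP reachability; it says nothing about the NameNode RPC/HTTP ports or the KDC. A minimal sketch of checks that test the actual ports (assuming nc from the netcat package is installed; kdc-host is a hypothetical placeholder for the KDC named in /etc/krb5.conf):

# Check NameNode RPC and HTTP ports over TCP
nc -vz mnode0.7458907e-6f32-4f4a-b33e-6820be708ad4.datalake.com 8020
nc -vz mnode1.7458907e-6f32-4f4a-b33e-6820be708ad4.datalake.com 8020
nc -vz mnode1.7458907e-6f32-4f4a-b33e-6820be708ad4.datalake.com 50070

# Check the KDC port over TCP and (best-effort, since UDP gives no
# positive confirmation) over UDP
nc -vz kdc-host 88
nc -vzu kdc-host 88

# Confirm the client picked up the HA NameNodes from hdfs-site.xml
hdfs getconf -namenodes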
Labels:
- HDFS
- Hortonworks Data Platform (HDP)
06-09-2022
05:59 AM
Hi @rki_ Indeed, the DNS records were not created during the enrollment process. Creating the required records solved my issue. Thanks a lot 😉
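For reference, a minimal sketch of adding such records with the FreeIPA CLI (zone, host, and IP below are taken from this thread but should be treated as placeholders for your own environment):

# Forward A record for the new node
ipa dnsrecord-add 2b87d4bc-6cf3-4350-aaf7-eff7227d1aef.datalake.com cnode5 --a-rec=10.1.2.169

# Matching PTR record in the reverse zone (assuming a 10.1.2.0/24 zone exists)
ipa dnsrecord-add 2.1.10.in-addr.arpa. 169 --ptr-rec=cnode5.2b87d4bc-6cf3-4350-aaf7-eff7227d1aef.datalake.com.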
06-08-2022
01:39 AM
Hi @rki_ Yes, I confirm it's a DNS problem. After adding the two nodes to /etc/hosts it works fine, but since I'm using FreeIPA, how can I achieve that without editing the /etc/hosts file?
06-06-2022
02:07 AM
We recently added two nodes to our cluster through the Ambari wizard; we installed DataNode, NodeManager, Metrics Monitor, and LogFeeder. The DataNode/NodeManager start correctly but are not live. topology_mappings.data was updated on both the mnode and the cnodes:

cat /etc/hadoop/conf/topology_mappings.data
[network_topology]
cnode2.2b87d4bc-6cf3-4350-aaf7-eff7227d1aef.datalake.com=/default-rack
10.1.2.172=/default-rack
cnode5.2b87d4bc-6cf3-4350-aaf7-eff7227d1aef.datalake.com=/default-rack
10.1.2.169=/default-rack
cnode4.2b87d4bc-6cf3-4350-aaf7-eff7227d1aef.datalake.com=/default-rack
10.1.2.175=/default-rack
cnode3.2b87d4bc-6cf3-4350-aaf7-eff7227d1aef.datalake.com=/default-rack
10.1.2.67=/default-rack
cnode1.2b87d4bc-6cf3-4350-aaf7-eff7227d1aef.datalake.com=/default-rack
10.1.2.188=/default-rack
cnode6.2b87d4bc-6cf3-4350-aaf7-eff7227d1aef.datalake.com=/default-rack
10.1.2.9=/default-rack

The datanodes have 2 external disks to store HDFS data:

[root@node6 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vdb 200G 33M 200G 1% /grid/disk0
/dev/vdc 200G 33M 200G 1% /grid/disk

We are using HDP 2.6.5 with FreeIPA as LDAP. We checked that everything was created successfully (principals, keytabs ...), but the logs show some Kerberos warnings/errors.

Datanode logs:

2022-06-06 10:45:39,357 WARN datanode.DataNode (BPServiceActor.java:retrieveNamespaceInfo(227)) - Problem connecting to server: mnode0.2b87d4bc-6cf3-4350-aaf7-eff7227d1aef.datalake.com/10.1.2.145:8020
2022-06-06 10:45:39,641 WARN datanode.DataNode (BPServiceActor.java:retrieveNamespaceInfo(227)) - Problem connecting to server: mnode1.2b87d4bc-6cf3-4350-aaf7-eff7227d1aef.datalake.com/10.1.2.106:8020

mnode logs:

2022-06-06 10:47:55,038 INFO ipc.Server (Server.java:doRead(1006)) - Socket Reader #1 for port 8020: readAndProcess from client 10.1.2.169 threw exception [org.apache.hadoop.security.authorize.AuthorizationException: User dn/cnode5.2b87d4bc-6cf3-4350-aaf7-eff7227d1aef.datalake.com@2B87D4BC-6CF3-4350-AAF7-EFF7227D1AEF.DATALAKE.COM (auth:KERBEROS) is not authorized for protocol interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol: this service is only accessible by dn/10.1.2.169@2B87D4BC-6CF3-4350-AAF7-EFF7227D1AEF.DATALAKE.COM]
Labels:
- HDFS
- Hortonworks Data Platform (HDP)
10-04-2021
07:03 AM
Hi @VidyaSargur No, my issue is not resolved yet. I'm testing @smruti's recommendations and waiting for his feedback.
09-30-2021
02:32 AM
Hi @asish Thanks for your advice, very useful. I'll try that and give you feedback.
09-30-2021
02:29 AM
Hi @smruti Thanks for your reply. Below are the Hive heap size values:

HS2 Heap Size = 44201 MB
MS Heap Size = 14733 MB
Hive Client heap size = 1024 MB
hive.server2.thrift.max.worker.threads = 500

Is there any recommendation/documentation from Cloudera on how to calculate the right values? Your last comment is very interesting; how can I check whether my workload is distributed?
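A minimal sketch of how one might cross-check this on each HiveServer2 host: confirm the heap flags the running JVMs actually received, and count established client connections per instance to get a rough sense of how the load is spread (assumes standard ps/ss tooling; 10000 is the default HS2 Thrift port and an assumption here):

# Show the -Xms/-Xmx flags of the running HiveServer2 JVMs
ps -eo pid,cmd | grep -i '[h]iveserver2' | grep -oE -- '-Xm[sx][^ ]+'

# Count established client connections to the HS2 Thrift port
# (run on each HS2 host; skip the header line before counting)
ss -tn state established '( sport = :10000 )' | tail -n +2 | wc -l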
09-28-2021
02:43 AM
We have an HDP 2.6.5 cluster; the Hive service is installed with HA:
- 3 HiveServer2
- 3 Metastores
- 1 HiveServer2 Interactive
- 1 WebHCat Server

We are receiving "memory high usage" alerts from our monitoring tool. When I check the memory consumption on those nodes, I can see that Hive is consuming more than 80% of the node's memory. When memory usage reaches 98%, the Hive server crashes with the following error message:

[root@mnode4 hive]# head -n 20 hs_err_pid27508.log
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 1732247552 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2627), pid=27508, tid=0x00007f43152a3700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_112-b15) (build 1.8.0_112-b15)

htop gives the view below. Why are all these processes created? How can I reduce memory usage?
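On the "why all these processes" question: htop lists threads by default, so those rows are typically threads of a few Hive JVMs rather than separate processes. A minimal sketch of listing the actual processes and their resident memory (standard ps; RSS is in KB):

# One line per process (not per thread), sorted by resident memory
ps -eo pid,rss,args --sort=-rss | grep -i '[h]ive' | head -n 10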
08-12-2021
07:54 AM
This issue occurs when Kerberos authentication is enabled. There is a bug open in the Ambari Jira: https://issues.apache.org/jira/browse/AMBARI-25127 To fix my problem, I just disabled Kerberos authentication:

authentication.kerberos.enabled=false
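A minimal sketch of applying that workaround, assuming the property lives in the Ambari server's ambari.properties (the usual home of the authentication.* settings; verify the path and property against your installation before editing):

# Assumed location of the Ambari server configuration file
sudo sed -i 's/^authentication.kerberos.enabled=.*/authentication.kerberos.enabled=false/' /etc/ambari-server/conf/ambari.properties
sudo ambari-server restart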