Member since: 06-29-2022
Posts: 9 | Kudos Received: 0 | Solutions: 0
07-14-2022
02:07 AM
Hello, could anyone help with this warning? What does it mean? HDFS is not usable at the moment:
2022-07-13 13:54:24,140 WARN org.apache.hadoop.hdfs.DFSClient: Connection failure: Failed to connect to /X.X.X.223:9866 for file /tmp/.cloudera_health_monitoring_canary_files/.canary_file_2022_07_13-13_54_24.7b7b1aba16dd0018 for block BP-1398826736-X.X.X.220-1656342421752:blk_1073760105_19281:com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: seqno, lastPacketInBlock, dataLen
com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: seqno, lastPacketInBlock, dataLen
at com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79)
at com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:191)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:203)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:208)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48)
at org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos$PacketHeaderProto.parseFrom(DataTransferProtos.java:20951)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketHeader.setFieldsFromData(PacketHeader.java:130)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:179)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.readTrailingEmptyPacket(BlockReaderRemote.java:268)
at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.readNextPacket(BlockReaderRemote.java:233)
at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.read(BlockReaderRemote.java:169)
at org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1072)
at org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:1014)
at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1373)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1337)
at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:124)
at org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:125)
at com.cloudera.cmf.cdh7client.hdfs.FSDataInputStreamImpl.readFully(FSDataInputStreamImpl.java:24)
at com.cloudera.cmon.firehose.polling.hdfs.HdfsCanary.readFile(HdfsCanary.java:205)
at com.cloudera.cmon.firehose.polling.hdfs.HdfsCanary.doWork(HdfsCanary.java:105)
at com.cloudera.cmon.firehose.polling.hdfs.HdfsCanary.doWork(HdfsCanary.java:47)
at com.cloudera.cmon.firehose.polling.AbstractFileSystemClientTask.doWorkWithClientConfig(AbstractFileSystemClientTask.java:55)
at com.cloudera.cmon.firehose.polling.AbstractCdhWorkUsingClientConfigs.doWork(AbstractCdhWorkUsingClientConfigs.java:45)
at com.cloudera.cmon.firehose.polling.CdhTask$InstrumentedWork.doWork(CdhTask.java:231)
at com.cloudera.cmf.cdhclient.util.ImpersonatingTaskWrapper.runTask(ImpersonatingTaskWrapper.java:72)
at com.cloudera.cmf.cdhclient.util.ImpersonatingTaskWrapper.access$000(ImpersonatingTaskWrapper.java:21)
at com.cloudera.cmf.cdhclient.util.ImpersonatingTaskWrapper$1.run(ImpersonatingTaskWrapper.java:107)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
at com.cloudera.cmf.cdh7client.security.UserGroupInformationImpl.doAs(UserGroupInformationImpl.java:42)
at com.cloudera.cmf.cdhclient.util.ImpersonatingTaskWrapper.doWork(ImpersonatingTaskWrapper.java:104)
at com.cloudera.cmf.cdhclient.CdhExecutor$SecurityWrapper$1.run(CdhExecutor.java:189)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
at com.cloudera.cmf.cdh7client.security.UserGroupInformationImpl.doAs(UserGroupInformationImpl.java:42)
at com.cloudera.cmf.cdhclient.CdhExecutor$SecurityWrapper.doWork(CdhExecutor.java:186)
at com.cloudera.cmf.cdhclient.CdhExecutor$1.call(CdhExecutor.java:125)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
2022-07-13 13:54:24,142 WARN org.apache.hadoop.hdfs.DFSClient: Connection failure: Failed to connect to /X.X.X.225:9866 for file /tmp/.cloudera_health_monitoring_canary_files/.canary_file_2022_07_13-13_54_24.7b7b1aba16dd0018 for block BP-1398826736-X.X.X.220-1656342421752:blk_1073760105_19281:com.google.protobuf.InvalidProtocolBufferException$InvalidWireTypeException: Protocol message tag had invalid wire type.
com.google.protobuf.InvalidProtocolBufferException$InvalidWireTypeException: Protocol message tag had invalid wire type.
at com.google.protobuf.InvalidProtocolBufferException.invalidWireType(InvalidProtocolBufferException.java:111)
at com.google.protobuf.UnknownFieldSet$Builder.mergeFieldFrom(UnknownFieldSet.java:557)
at com.google.protobuf.GeneratedMessage.parseUnknownField(GeneratedMessage.java:275)
at org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos$PacketHeaderProto.<init>(DataTransferProtos.java:20614)
at org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos$PacketHeaderProto.<init>(DataTransferProtos.java:20572)
at org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos$PacketHeaderProto$1.parsePartialFrom(DataTransferProtos.java:20675)
at org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos$PacketHeaderProto$1.parsePartialFrom(DataTransferProtos.java:20670)
at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:158)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:191)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:203)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:208)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48)
at org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos$PacketHeaderProto.parseFrom(DataTransferProtos.java:20951)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketHeader.setFieldsFromData(PacketHeader.java:130)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:179)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.readTrailingEmptyPacket(BlockReaderRemote.java:268)
at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.readNextPacket(BlockReaderRemote.java:233)
at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.read(BlockReaderRemote.java:169)
at org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1072)
at org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:1014)
at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1373)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1337)
at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:124)
at org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:125)
at com.cloudera.cmf.cdh7client.hdfs.FSDataInputStreamImpl.readFully(FSDataInputStreamImpl.java:24)
at com.cloudera.cmon.firehose.polling.hdfs.HdfsCanary.readFile(HdfsCanary.java:205)
at com.cloudera.cmon.firehose.polling.hdfs.HdfsCanary.doWork(HdfsCanary.java:105)
at com.cloudera.cmon.firehose.polling.hdfs.HdfsCanary.doWork(HdfsCanary.java:47)
at com.cloudera.cmon.firehose.polling.AbstractFileSystemClientTask.doWorkWithClientConfig(AbstractFileSystemClientTask.java:55)
at com.cloudera.cmon.firehose.polling.AbstractCdhWorkUsingClientConfigs.doWork(AbstractCdhWorkUsingClientConfigs.java:45)
at com.cloudera.cmon.firehose.polling.CdhTask$InstrumentedWork.doWork(CdhTask.java:231)
at com.cloudera.cmf.cdhclient.util.ImpersonatingTaskWrapper.runTask(ImpersonatingTaskWrapper.java:72)
at com.cloudera.cmf.cdhclient.util.ImpersonatingTaskWrapper.access$000(ImpersonatingTaskWrapper.java:21)
at com.cloudera.cmf.cdhclient.util.ImpersonatingTaskWrapper$1.run(ImpersonatingTaskWrapper.java:107)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
at com.cloudera.cmf.cdh7client.security.UserGroupInformationImpl.doAs(UserGroupInformationImpl.java:42)
at com.cloudera.cmf.cdhclient.util.ImpersonatingTaskWrapper.doWork(ImpersonatingTaskWrapper.java:104)
at com.cloudera.cmf.cdhclient.CdhExecutor$SecurityWrapper$1.run(CdhExecutor.java:189)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
at com.cloudera.cmf.cdh7client.security.UserGroupInformationImpl.doAs(UserGroupInformationImpl.java:42)
at com.cloudera.cmf.cdhclient.CdhExecutor$SecurityWrapper.doWork(CdhExecutor.java:186)
at com.cloudera.cmf.cdhclient.CdhExecutor$1.call(CdhExecutor.java:125)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
2022-07-13 13:54:24,143 WARN org.apache.hadoop.hdfs.DFSClient: Connection failure: Failed to connect to /X.X.X.228:9866 for file /tmp/.cloudera_health_monitoring_canary_files/.canary_file_2022_07_13-13_54_24.7b7b1aba16dd0018 for block BP-1398826736-X.X.X.220-1656342421752:blk_1073760105_19281:com.google.protobuf.InvalidProtocolBufferException: Protocol message contained an invalid tag (zero).
com.google.protobuf.InvalidProtocolBufferException: Protocol message contained an invalid tag (zero).
at com.google.protobuf.InvalidProtocolBufferException.invalidTag(InvalidProtocolBufferException.java:102)
at com.google.protobuf.CodedInputStream$ArrayDecoder.readTag(CodedInputStream.java:627)
at org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos$PacketHeaderProto.<init>(DataTransferProtos.java:20608)
at org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos$PacketHeaderProto.<init>(DataTransferProtos.java:20572)
at org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos$PacketHeaderProto$1.parsePartialFrom(DataTransferProtos.java:20675)
at org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos$PacketHeaderProto$1.parsePartialFrom(DataTransferProtos.java:20670)
at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:158)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:191)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:203)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:208)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48)
at org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos$PacketHeaderProto.parseFrom(DataTransferProtos.java:20951)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketHeader.setFieldsFromData(PacketHeader.java:130)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:179)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.readTrailingEmptyPacket(BlockReaderRemote.java:268)
at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.readNextPacket(BlockReaderRemote.java:233)
at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.read(BlockReaderRemote.java:169)
at org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1072)
at org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:1014)
at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1373)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1337)
at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:124)
at org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:125)
at com.cloudera.cmf.cdh7client.hdfs.FSDataInputStreamImpl.readFully(FSDataInputStreamImpl.java:24)
at com.cloudera.cmon.firehose.polling.hdfs.HdfsCanary.readFile(HdfsCanary.java:205)
at com.cloudera.cmon.firehose.polling.hdfs.HdfsCanary.doWork(HdfsCanary.java:105)
at com.cloudera.cmon.firehose.polling.hdfs.HdfsCanary.doWork(HdfsCanary.java:47)
at com.cloudera.cmon.firehose.polling.AbstractFileSystemClientTask.doWorkWithClientConfig(AbstractFileSystemClientTask.java:55)
at com.cloudera.cmon.firehose.polling.AbstractCdhWorkUsingClientConfigs.doWork(AbstractCdhWorkUsingClientConfigs.java:45)
at com.cloudera.cmon.firehose.polling.CdhTask$InstrumentedWork.doWork(CdhTask.java:231)
at com.cloudera.cmf.cdhclient.util.ImpersonatingTaskWrapper.runTask(ImpersonatingTaskWrapper.java:72)
at com.cloudera.cmf.cdhclient.util.ImpersonatingTaskWrapper.access$000(ImpersonatingTaskWrapper.java:21)
at com.cloudera.cmf.cdhclient.util.ImpersonatingTaskWrapper$1.run(ImpersonatingTaskWrapper.java:107)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
at com.cloudera.cmf.cdh7client.security.UserGroupInformationImpl.doAs(UserGroupInformationImpl.java:42)
at com.cloudera.cmf.cdhclient.util.ImpersonatingTaskWrapper.doWork(ImpersonatingTaskWrapper.java:104)
at com.cloudera.cmf.cdhclient.CdhExecutor$SecurityWrapper$1.run(CdhExecutor.java:189)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
at com.cloudera.cmf.cdh7client.security.UserGroupInformationImpl.doAs(UserGroupInformationImpl.java:42)
at com.cloudera.cmf.cdhclient.CdhExecutor$SecurityWrapper.doWork(CdhExecutor.java:186)
at com.cloudera.cmf.cdhclient.CdhExecutor$1.call(CdhExecutor.java:125)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
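For context, one thing worth checking with protobuf parse failures on the DataNode transfer port (9866) is whether the client and the DataNodes agree on data transfer protection; that is only an assumption from the symptom, not something the log states. The keys below are standard HDFS configuration names:
hdfs getconf -confKey dfs.data.transfer.protection
hdfs getconf -confKey dfs.encrypt.data.transfer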
Labels:
- Cloudera Data Platform (CDP)
- HDFS
07-12-2022
06:16 AM
@Scharan Java version: java-11-openjdk-11.0.15.0.9-2.el8_4.x86_64. Debug logs from hdfs dfs -ls:
[UnixLoginModule]: succeeded importing info:
uid = 1012
gid = 491
supp gid = 491
Debug is true storeKey false useTicketCache true useKeyTab false doNotPrompt true ticketCache is null isInitiator true KeyTab is null refreshKrb5Config is true principal is null tryFirstPass is false useFirstPass is false storePass is false clearPass is false
Refreshing Kerberos configuration
Acquire TGT from Cache
Principal is null
null credentials from Ticket Cache
[Krb5LoginModule] authentication failed
Unable to obtain Principal Name for authentication
[UnixLoginModule]: added UnixPrincipal,
UnixNumericUserPrincipal,
UnixNumericGroupPrincipal(s),
to Subject
But klist output does show a principal:
klist
Ticket cache: KCM:1012
Default principal: hdfssu@DOMAIN.COM
Valid starting Expires Service principal
07/12/2022 13:20:29 07/12/2022 23:20:29 krbtgt/DOMAIN.COM@DOMAIN.COM
renew until 07/19/2022 13:20:29
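One guess, labeled as such: klist above reads a KCM: ticket cache, while the Java debug output finds null credentials, so the JVM may only be looking at a FILE: cache. A minimal check, with the principal taken from the klist above and an example cache path:
kinit -c FILE:/tmp/krb5cc_$(id -u) hdfssu@DOMAIN.COM
export KRB5CCNAME=FILE:/tmp/krb5cc_$(id -u)
klist          # should now show the FILE: cache
hdfs dfs -ls /
# persistent alternative in /etc/krb5.conf under [libdefaults]:
#   default_ccache_name = FILE:/tmp/krb5cc_%{uid}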
07-12-2022
05:22 AM
Hello, after enabling Kerberos I created a user in AD and on the host machine; the user is a member of a group configured as superusers in CM (screenshot). Then I set permissions in Ranger accordingly (screenshot). After doing kinit, hdfs dfs -ls / fails with this error:
WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
ls: DestHost:destPort FQDN:8020 , LocalHost:localPort FQDN/X.X.X.220:0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
Could someone help, please?
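A sketch of how to capture more detail on this failure, assuming a standard Hadoop client; the flags below are generic JDK/Hadoop debug knobs, nothing cluster-specific:
klist          # confirm a valid TGT exists first
export HADOOP_OPTS="-Dsun.security.krb5.debug=true"
export HADOOP_ROOT_LOGGER=DEBUG,console
hdfs dfs -ls /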
07-05-2022
06:56 AM
@araujo Thank you for the fast response. What could the solution be, in your opinion?
07-05-2022
05:53 AM
Hi @araujo, there was a mismatch between the Kerberos and AD encryption types. Service Monitor log:
2022-07-05 14:35:17,917 WARN org.apache.hadoop.hdfs.DFSClient: Connection failure: Failed to connect to /X.X.X.225:9866 for file /tmp/.cloudera_health_monitoring_canary_files/.canary_file_2022_07_05-14_34_59.8565a95826ef54f9 for block BP-1398826736-X.X.X.220-1656342421752:blk_1073752440_11616:com.google.protobuf.InvalidProtocolBufferException$InvalidWireTypeException: Protocol message tag had invalid wire type.
com.google.protobuf.InvalidProtocolBufferException$InvalidWireTypeException: Protocol message tag had invalid wire type.
at com.google.protobuf.InvalidProtocolBufferException.invalidWireType(InvalidProtocolBufferException.java:111)
at com.google.protobuf.UnknownFieldSet$Builder.mergeFieldFrom(UnknownFieldSet.java:557)
at com.google.protobuf.GeneratedMessage.parseUnknownField(GeneratedMessage.java:275)
at org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos$PacketHeaderProto.<init>(DataTransferProtos.java:20614)
at org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos$PacketHeaderProto.<init>(DataTransferProtos.java:20572)
at org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos$PacketHeaderProto$1.parsePartialFrom(DataTransferProtos.java:20675)
at org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos$PacketHeaderProto$1.parsePartialFrom(DataTransferProtos.java:20670)
at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:158)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:191)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:203)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:208)
at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48)
at org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos$PacketHeaderProto.parseFrom(DataTransferProtos.java:20951)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketHeader.setFieldsFromData(PacketHeader.java:130)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:179)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.readTrailingEmptyPacket(BlockReaderRemote.java:268)
at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.readNextPacket(BlockReaderRemote.java:233)
at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.read(BlockReaderRemote.java:169)
at org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1072)
at org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:1014)
at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1373)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1337)
at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:124)
at org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:125)
at com.cloudera.cmf.cdh7client.hdfs.FSDataInputStreamImpl.readFully(FSDataInputStreamImpl.java:24)
at com.cloudera.cmon.firehose.polling.hdfs.HdfsCanary.readFile(HdfsCanary.java:205)
at com.cloudera.cmon.firehose.polling.hdfs.HdfsCanary.doWork(HdfsCanary.java:105)
at com.cloudera.cmon.firehose.polling.hdfs.HdfsCanary.doWork(HdfsCanary.java:47)
at com.cloudera.cmon.firehose.polling.AbstractFileSystemClientTask.doWorkWithClientConfig(AbstractFileSystemClientTask.java:55)
at com.cloudera.cmon.firehose.polling.AbstractCdhWorkUsingClientConfigs.doWork(AbstractCdhWorkUsingClientConfigs.java:45)
at com.cloudera.cmon.firehose.polling.CdhTask$InstrumentedWork.doWork(CdhTask.java:231)
at com.cloudera.cmf.cdhclient.util.ImpersonatingTaskWrapper.runTask(ImpersonatingTaskWrapper.java:72)
at com.cloudera.cmf.cdhclient.util.ImpersonatingTaskWrapper.access$000(ImpersonatingTaskWrapper.java:21)
at com.cloudera.cmf.cdhclient.util.ImpersonatingTaskWrapper$1.run(ImpersonatingTaskWrapper.java:107)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
at com.cloudera.cmf.cdh7client.security.UserGroupInformationImpl.doAs(UserGroupInformationImpl.java:42)
at com.cloudera.cmf.cdhclient.util.ImpersonatingTaskWrapper.doWork(ImpersonatingTaskWrapper.java:104)
at com.cloudera.cmf.cdhclient.CdhExecutor$SecurityWrapper$1.run(CdhExecutor.java:189)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
at com.cloudera.cmf.cdh7client.security.UserGroupInformationImpl.doAs(UserGroupInformationImpl.java:42)
at com.cloudera.cmf.cdhclient.CdhExecutor$SecurityWrapper.doWork(CdhExecutor.java:186)
at com.cloudera.cmf.cdhclient.CdhExecutor$1.call(CdhExecutor.java:125)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
2022-07-05 14:35:17,917 WARN org.apache.hadoop.hdfs.DFSClient: No live nodes contain block BP-1398826736-X.X.X.220-1656342421752:blk_1073752440_11616 after checking nodes = [DatanodeInfoWithStorage[X.X.X.226:9866,DS-13ee530f-1bf7-4752-8e4b-c7dfc8d760c7,DISK], DatanodeInfoWithStorage[X.X.X.228:9866,DS-de389cd6-5b67-4e37-b6d5-40b945699832,DISK], DatanodeInfoWithStorage[X.X.X.225:9866,DS-0e7334d6-8fcd-4ee6-b554-fd2287465e02,DISK]], ignoredNodes = null
2022-07-05 14:35:17,917 WARN org.apache.hadoop.hdfs.DFSClient: Could not obtain block: BP-1398826736-X.X.X.220-1656342421752:blk_1073752440_11616 file=/tmp/.cloudera_health_monitoring_canary_files/.canary_file_2022_07_05-14_34_59.8565a95826ef54f9 No live nodes contain current block Block locations: DatanodeInfoWithStorage[X.X.X.226:9866,DS-13ee530f-1bf7-4752-8e4b-c7dfc8d760c7,DISK] DatanodeInfoWithStorage[X.X.X.228:9866,DS-de389cd6-5b67-4e37-b6d5-40b945699832,DISK] DatanodeInfoWithStorage[X.X.X.225:9866,DS-0e7334d6-8fcd-4ee6-b554-fd2287465e02,DISK] Dead nodes: DatanodeInfoWithStorage[X.X.X.226:9866,DS-13ee530f-1bf7-4752-8e4b-c7dfc8d760c7,DISK] DatanodeInfoWithStorage[X.X.X.228:9866,DS-de389cd6-5b67-4e37-b6d5-40b945699832,DISK] DatanodeInfoWithStorage[X.X.X.225:9866,DS-0e7334d6-8fcd-4ee6-b554-fd2287465e02,DISK]. Throwing a BlockMissingException
Also, how can I use the command line now that Kerberos is enabled? hdfs dfs -ls / is no longer possible:
22/07/05 14:34:38 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
ls: DestHost:destPort FQDN_02:8020 , LocalHost:localPort FQDN_01/X.X.X.220:0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
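For reference, a minimal way to compare encryption types on both sides; the keytab path is illustrative, not the real one:
klist -e                              # enctypes of the current ticket
klist -kte /path/to/service.keytab    # enctypes stored in a service keytab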
07-05-2022
12:21 AM
New update: the cluster is fully Kerberized, but the problem still exists... Health status flips from bad to good every minute. Any hint on this?
07-04-2022
02:53 AM
Hello, the cluster has been Kerberized (LDAP / AD / Kerberos) and I get errors when I try to start it. The ZooKeeper service starts with the following error:
2022-07-01 13:24:14,341 ERROR org.apache.zookeeper.server.quorum.auth.SaslQuorumAuthServer: Failed to authenticate using SASL
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: Failure unspecified at GSS-API level (Mechanism level: Invalid argument (400) - Cannot find key of appropriate type to decrypt AP-REQ - RC4 with HMAC)]
at jdk.security.jgss/com.sun.security.sasl.gsskerb.GssKrb5Server.evaluateResponse(GssKrb5Server.java:199)
at org.apache.zookeeper.server.quorum.auth.SaslQuorumAuthServer.authenticate(SaslQuorumAuthServer.java:99)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.handleConnection(QuorumCnxManager.java:563)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.receiveConnection(QuorumCnxManager.java:487)
at org.apache.zookeeper.server.quorum.QuorumCnxManager$QuorumConnectionReceiverThread.run(QuorumCnxManager.java:523)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: GSSException: Failure unspecified at GSS-API level (Mechanism level: Invalid argument (400) - Cannot find key of appropriate type to decrypt AP-REQ - RC4 with HMAC)
at java.security.jgss/sun.security.jgss.krb5.Krb5Context.acceptSecContext(Krb5Context.java:859)
at java.security.jgss/sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:361)
at java.security.jgss/sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:303)
at jdk.security.jgss/com.sun.security.sasl.gsskerb.GssKrb5Server.evaluateResponse(GssKrb5Server.java:167)
... 7 more
Caused by: KrbException: Invalid argument (400) - Cannot find key of appropriate type to decrypt AP-REQ - RC4 with HMAC
at java.security.jgss/sun.security.krb5.KrbApReq.authenticate(KrbApReq.java:278)
at java.security.jgss/sun.security.krb5.KrbApReq.<init>(KrbApReq.java:149)
at java.security.jgss/sun.security.jgss.krb5.InitSecContextToken.<init>(InitSecContextToken.java:139)
at java.security.jgss/sun.security.jgss.krb5.Krb5Context.acceptSecContext(Krb5Context.java:832)
... 10 more
2022-07-01 13:24:14,341 ERROR org.apache.zookeeper.server.quorum.QuorumCnxManager: Exception handling connection, addr: /x.x.x.222:35604, closing server connection
2022-07-01 13:24:14,476 INFO org.apache.zookeeper.server.quorum.auth.SaslQuorumAuthLearner: QuorumLearner will use GSSAPI as SASL mechanism.
2022-07-01 13:24:14,476 INFO org.apache.zookeeper.server.quorum.auth.SaslQuorumAuthLearner: QuorumLearner will use GSSAPI as SASL mechanism.
2022-07-01 13:24:14,477 ERROR org.apache.zookeeper.server.quorum.QuorumCnxManager: Exception while connecting, id: [2, FQDN/x.x.x.221:4181], addr: {}, closing learner connection
javax.security.sasl.SaslException: Authentication failed against server addr: FQDN/x.x.x.221:4181
at org.apache.zookeeper.server.quorum.auth.SaslQuorumAuthLearner.authenticate(SaslQuorumAuthLearner.java:126)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.startConnection(QuorumCnxManager.java:442)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.initiateConnection(QuorumCnxManager.java:353)
at org.apache.zookeeper.server.quorum.QuorumCnxManager$QuorumConnectionReqThread.run(QuorumCnxManager.java:402)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
2022-07-01 13:24:14,478 ERROR org.apache.zookeeper.server.quorum.QuorumCnxManager: Exception while connecting, id: [3, FQDN/x.x.x.222:4181], addr: {}, closing learner connection
javax.security.sasl.SaslException: Authentication failed against server addr: FQDN/x.x.x.222:4181
at org.apache.zookeeper.server.quorum.auth.SaslQuorumAuthLearner.authenticate(SaslQuorumAuthLearner.java:126)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.startConnection(QuorumCnxManager.java:442)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.initiateConnection(QuorumCnxManager.java:353)
at org.apache.zookeeper.server.quorum.QuorumCnxManager$QuorumConnectionReqThread.run(QuorumCnxManager.java:402)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
2022-07-01 13:24:14,906 WARN org.apache.zookeeper.server.NettyServerCnxn: Closing connection to /x.x.x.220:60416
java.io.IOException: ZK down
at org.apache.zookeeper.server.NettyServerCnxn.receiveMessage(NettyServerCnxn.java:474)
at org.apache.zookeeper.server.NettyServerCnxn.processMessage(NettyServerCnxn.java:360)
at org.apache.zookeeper.server.NettyServerCnxnFactory$CnxnChannelHandler.channelRead(NettyServerCnxnFactory.java:266)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:829)
2022-07-01 13:24:18,478 WARN org.apache.zookeeper.server.NettyServerCnxn: Closing connection to /x.x.x.220:60456
java.io.IOException: ZK down
at org.apache.zookeeper.server.NettyServerCnxn.receiveMessage(NettyServerCnxn.java:474)
at org.apache.zookeeper.server.NettyServerCnxn.processMessage(NettyServerCnxn.java:360)
at org.apache.zookeeper.server.NettyServerCnxnFactory$CnxnChannelHandler.channelRead(NettyServerCnxnFactory.java:266)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:829)
This line confuses me, because I'm using a different encryption type (aes256-cts-hmac-sha1-96):
Caused by: GSSException: Failure unspecified at GSS-API level (Mechanism level: Invalid argument (400) - Cannot find key of appropriate type to decrypt AP-REQ - RC4 with HMAC)
HDFS and the other services fail to start. Any advice would be appreciated.
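The RC4 in the message suggests some peer or key still negotiates arcfour-hmac while the rest of the setup is AES-only; that is an assumption from the symptom. A sketch for verifying it (the keytab path under the Cloudera agent process directory is an example placeholder):
klist -kte /var/run/cloudera-scm-agent/process/<latest-zookeeper-dir>/zookeeper.keytab
# client-side negotiation can be pinned in /etc/krb5.conf under [libdefaults], e.g.:
#   default_tkt_enctypes = aes256-cts-hmac-sha1-96
#   default_tgs_enctypes = aes256-cts-hmac-sha1-96
#   permitted_enctypes   = aes256-cts-hmac-sha1-96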
06-30-2022
02:35 AM
I forgot to mention that the Kerberization failed, and I then disabled it. But when I go to Add Cluster there is a message: "KDC is already setup..."
06-29-2022
10:36 AM
Hello, I'm facing a problem: HDFS is in bad health because the Canary test failed.
ERROR com.cloudera.cmon.firehose.polling.hdfs.HdfsCanary: (9 skipped) com.cloudera.cmon.firehose.polling.hdfs.HdfsCanary@70164e31 for hdfs://nameservice1: Failed to write to /tmp/.cloudera_health_monitoring_canary_files/.canary_file_2022_06_29-15_20_26.3f6b5657894eb2c0. Error: {}
java.io.IOException: Could not get block locations. Source file "/tmp/.cloudera_health_monitoring_canary_files/.canary_file_2022_06_29-15_20_26.3f6b5657894eb2c0" - Aborting...block==null
at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1491)
at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1271)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:667)
WARN org.apache.hadoop.hdfs.DataStreamer: Could not get block locations. Source file "/tmp/.cloudera_health_monitoring_canary_files/.canary_file_2022_06_29-15_24_31.ba376573face8227" - Aborting...block==null
Canary settings are attached (screenshot). But when I run:
hdfs dfs -ls /tmp/
the output is:
d--------- - hdfs supergroup 0 2022-06-29 15:24 /tmp/.cloudera_health_monitoring_canary_files
so no permissions are set. If I set the right permissions manually, it still doesn't work... When I disable the Canary health check, remove .cloudera_health_monitoring_canary_files, and re-enable the Canary, HDFS recreates the folder with no permissions, although the right permissions are set in the HDFS configuration. And the strange thing is that I can find some files written despite the wrong permissions:
/tmp/.cloudera_health_monitoring_canary_files/.canary_file_2022_06_29-15_24_31.ba376573face8227
Help please 🙂
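A sketch of one way to reset the canary directory by hand, assuming you can run commands as the hdfs superuser; the path is taken from the output above:
sudo -u hdfs hdfs dfs -rm -r -skipTrash /tmp/.cloudera_health_monitoring_canary_files
# or fix the permissions in place and let the monitor reuse the directory:
sudo -u hdfs hdfs dfs -chmod 755 /tmp/.cloudera_health_monitoring_canary_files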