Member since: 12-10-2015
Posts: 76
Kudos Received: 30
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2140 | 03-10-2021 08:35 AM
 | 1556 | 07-25-2019 06:34 AM
 | 3438 | 04-20-2016 10:03 AM
 | 2624 | 04-11-2016 03:07 PM
09-15-2022
12:57 AM
Hi @rki_, thanks for the explanation. I had hoped to find the reason for the Solr "index writer closed" error. Thank you anyway.
09-13-2022
12:42 AM
Hello all, I have many connections in TIME_WAIT on IPC port 1019 of the datanode: more than 600 in TIME_WAIT and about 250 ESTABLISHED. Is that normal? I'm afraid this is the cause of the "index writer closed" errors on Solr (the index is on HDFS). The servers are not under heavy load and the datanode does not saturate its JVM heap. I couldn't find any max-connection configuration for port 1019. Any ideas? Environment: HDP 3.1.5.0-152 with HDFS 3.1.1. Thanks in advance.
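For reference, a minimal sketch of how the connections on that port can be counted per TCP state (assuming the `ss` utility is available on the datanode host; `netstat -an | grep 1019` gives a similar view on older systems):

```bash
# Group all TCP connections involving port 1019 by state (TIME-WAIT, ESTAB, ...)
ss -tan '( sport = :1019 or dport = :1019 )' | awk 'NR>1 {print $1}' | sort | uniq -c
```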
Labels:
- HDFS
06-14-2022
01:58 AM
Hi all, I have an issue with compaction of a Hive ACID table. Environment: HDP 3.1.5.0-152 with Hive 3.1.0. All compaction jobs fail with this stack trace:
2022-06-14 10:46:02,236 INFO [IPC Server handler 2 on 40882] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1653525342115_29428_m_157230162771970 asked for a task
2022-06-14 10:46:02,236 INFO [IPC Server handler 2 on 40882] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1653525342115_29428_m_157230162771970 given task: attempt_1653525342115_29428_m_000000_0
2022-06-14 10:46:03,989 INFO [IPC Server handler 2 on 40882] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1653525342115_29428_m_000000_0 is : 0.0
2022-06-14 10:46:03,994 ERROR [IPC Server handler 5 on 40882] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1653525342115_29428_m_000000_0 - exited : java.lang.NullPointerException
at java.lang.System.arraycopy(Native Method)
at org.apache.hadoop.io.Text.set(Text.java:225)
at org.apache.orc.impl.StringRedBlackTree.add(StringRedBlackTree.java:59)
at org.apache.orc.impl.writer.StringTreeWriter.writeBatch(StringTreeWriter.java:70)
at org.apache.orc.impl.writer.StructTreeWriter.writeFields(StructTreeWriter.java:64)
at org.apache.orc.impl.writer.StructTreeWriter.writeBatch(StructTreeWriter.java:78)
at org.apache.orc.impl.writer.StructTreeWriter.writeRootBatch(StructTreeWriter.java:56)
at org.apache.orc.impl.WriterImpl.addRowBatch(WriterImpl.java:557)
at org.apache.hadoop.hive.ql.io.orc.WriterImpl.flushInternalBatch(WriterImpl.java:297)
at org.apache.hadoop.hive.ql.io.orc.WriterImpl.close(WriterImpl.java:334)
at org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat$1.close(OrcOutputFormat.java:316)
at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.close(CompactorMR.java:1002)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
Further down in the log file I see this error:
2022-06-14 10:46:08,699 INFO [IPC Server handler 2 on 40882] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1653525342115_29428_m_000000_1 is : 0.0
2022-06-14 10:46:08,702 ERROR [IPC Server handler 5 on 40882] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1653525342115_29428_m_000000_1 - exited : org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to CREATE_FILE /<hdfs>/<path>/<database_name>.db/<tablename>/_tmp_5b5a4f18-76ef-42c3-acb0-64b175679d54/base_0000005/bucket_00000 for DFSClient_attempt_1653525342115_29428_m_000000_1_-740576932_1 on 10.102.190.206 because this file lease is currently owned by DFSClient_attempt_1653525342115_29428_m_000000_0_-14754452_1 on 10.102.xxx.xxx
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2604)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.startFile(FSDirWriteFileOp.java:378)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2453)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2351)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:774)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:462)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1498)
at org.apache.hadoop.ipc.Client.call(Client.java:1444)
at org.apache.hadoop.ipc.Client.call(Client.java:1354)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy13.create(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:362)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy14.create(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:273)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1211)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1190)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1128)
at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:531)
at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:528)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:542)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:469)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1118)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098)
at org.apache.orc.impl.PhysicalFsWriter.<init>(PhysicalFsWriter.java:95)
at org.apache.orc.impl.WriterImpl.<init>(WriterImpl.java:177)
at org.apache.hadoop.hive.ql.io.orc.WriterImpl.<init>(WriterImpl.java:94)
at org.apache.hadoop.hive.ql.io.orc.OrcFile.createWriter(OrcFile.java:378)
at org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat.getRawRecordWriter(OrcOutputFormat.java:299)
at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.getWriter(CompactorMR.java:1029)
at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:966)
at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:939)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
But if I try to list the file, it does not exist on HDFS (I obfuscated the path in the logs). Any idea how to fix this issue? It's critical for me.
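For reference, a minimal sketch of diagnostics that can be run here (the `<hiveserver2-host>` placeholder and the obfuscated path are assumptions to adapt; `hdfs debug recoverLease` is a generic tool for releasing a stuck HDFS file lease, shown as an option rather than a confirmed fix for this error):

```bash
# Check the state of the compaction queue (initiated / working / failed)
beeline -u "jdbc:hive2://<hiveserver2-host>:10000/" -e "SHOW COMPACTIONS;"

# Verify whether the temporary bucket file from the error actually exists
hdfs dfs -ls /<hdfs>/<path>/<database_name>.db/<tablename>/_tmp_5b5a4f18-76ef-42c3-acb0-64b175679d54/base_0000005/

# If a lease is stuck on an existing file, it can be force-recovered
hdfs debug recoverLease -path <file-path> -retries 3
```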
Labels:
- Apache Hive
04-07-2021
06:04 AM
1 Kudo
Hi, you can try disabling the network adapter in the VM configuration.
03-23-2021
03:52 AM
Hi, could there be network problems?
03-19-2021
06:49 AM
1 Kudo
Hi, yes, that's right. I don't recommend using the sandbox for testing any upgrade of a production environment. The sandbox environment has often been quite different from standard installed environments. The sandbox is designed for learning the technology; in your case I recommend having a test environment at scale with the production one, with the same versions and topology.
03-19-2021
01:03 AM
Hi, I don't know which version of Hive you are running, but the Hive CLI has been deprecated. In HDP 3.0 and later, Hive does not support the following features (a Beeline example follows the list):
- Apache Hadoop Distributed Copy (DistCp)
- WebHCat
- HCat CLI
- Hive CLI (replaced by Beeline)
- SQL Standard Authorization
- MapReduce execution engine (replaced by Tez)
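For illustration, a minimal Beeline invocation that replaces a typical Hive CLI session (host, port, and user below are placeholders, not values from this thread):

```bash
# Interactive session against HiveServer2 (replaces: hive)
beeline -u "jdbc:hive2://<hiveserver2-host>:10000/default" -n <user>

# Run a single statement non-interactively (replaces: hive -e "...")
beeline -u "jdbc:hive2://<hiveserver2-host>:10000/default" -n <user> -e "SHOW DATABASES;"
```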
03-19-2021
12:52 AM
1 Kudo
Hi, at https://www.cloudera.com/downloads/cdp-private-cloud-trial.html you can find the form to fill in to download the CDP Private Cloud trial edition. If you have a subscription, you can proceed from your personal area on https://my.cloudera.com; under your applications you should have Downloads.
03-18-2021
02:49 AM
1 Kudo
Correct approach.
03-17-2021
03:40 AM
2 Kudos
Hi, I guess you can't edit the extract query and maybe use a join, correct? The problem is that, for NiFi, the two JSONs in the array are potentially two representations of the same entity, so it is difficult to find a reliable method to achieve the goal. I would restart from the data extraction ...