Member since: 10-24-2015
Posts: 207
Kudos Received: 18
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4436 | 03-04-2018 08:18 PM |
| | 4330 | 09-19-2017 04:01 PM |
| | 1809 | 01-28-2017 10:31 PM |
| | 976 | 12-08-2016 03:04 PM |
02-14-2018 04:03 PM
Hi, I dropped a huge table yesterday, and today I made some config changes, so I had to restart all HDFS and YARN services. Soon after the restart I see a "pending block deletion" alert. It has been more than half an hour, the alert is still there, and the number of blocks pending deletion keeps growing every few minutes. Any advice?
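One rough way to tell whether the backlog is actually draining is to watch the NameNode's PendingDeletionBlocks metric over JMX; a minimal sketch, where the hostname is a placeholder and 50070 is the default NameNode HTTP port on HDP 2.x:

```bash
# Poll the NameNode's FSNamesystem bean and watch PendingDeletionBlocks trend down.
curl -s 'http://<active-namenode>:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem' \
  | grep -i pendingdeletionblocks
```

Block invalidation is asynchronous and rate-limited, so if the count falls steadily between polls, the alert should clear on its own.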
Labels:
- Apache Hadoop
02-14-2018 03:58 PM
@kskp @aengineer Hi, I wanted to know how this got resolved. Did it clear by itself? I dropped a huge table yesterday, and this morning I made some config changes and restarted services. After the restart I get an alert in Ambari saying HDFS block deletion is pending. It has been more than 30 minutes, and the alert is still there on one of the NameNodes in HA.
02-09-2018 04:33 PM
Hi, I see that you can sync users from a text file in Ranger. I wanted to make sure: even though it is a text file, the users listed in it still have to be local unix users, right? And if they are not, the sync will not work, correct?
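For context, this refers to Ranger's file-based usersync source (ranger.usersync.source.impl.class set to org.apache.ranger.unixusersync.process.FileSourceUserGroupBuilder). A hedged sketch of the JSON-style user-to-group mapping file that source can read, with made-up user and group names; the exact format accepted varies by Ranger version, so check the docs for your release:

```json
{
  "alice": ["analysts", "etl"],
  "bob":   ["analysts"]
}
```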
Labels:
- Apache Hive
- Apache Ranger
02-08-2018 11:14 PM
Hi, how can I restrict user access to specific queues, especially when one user can use multiple queues but other users can use only one? I have seen how to grant access to groups, but is it the same for user IDs? What I mean is: User1 can access QueueA, QueueB, and QueueC; User2 can only access QueueA; User3 can only access QueueC.
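For what it's worth, the Capacity Scheduler's submit ACLs take users as well as groups; the value format is "user1,user2 group1,group2", with users before the space and groups after. A sketch in capacity-scheduler.xml using the queue names from the question (assumed here to be direct children of root); note that ACLs are combined down the queue hierarchy, so the root queue's ACL must be locked down or everyone inherits access:

```xml
<!-- Illustrative only: queue names from the question, users listed before the space. -->
<property>
  <name>yarn.scheduler.capacity.root.acl_submit_applications</name>
  <value> </value> <!-- a single space means no one at root; child ACLs add to this -->
</property>
<property>
  <name>yarn.scheduler.capacity.root.QueueA.acl_submit_applications</name>
  <value>User1,User2</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.QueueB.acl_submit_applications</name>
  <value>User1</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.QueueC.acl_submit_applications</name>
  <value>User1,User3</value>
</property>
```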
02-06-2018 04:50 PM
Hi all, I have some large tables in our Hadoop cluster that are stored as text, and I would like to convert them all to ORC. Is there anything I need to worry about if all tables are ORC? Under what circumstances would you not use ORC? Thanks.
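For the conversion step itself, a minimal HiveQL sketch (table names are placeholders):

```sql
-- Rewrite the data into a new ORC table. Note that ALTER TABLE ... SET FILEFORMAT ORC
-- only changes metadata and leaves the existing text files unreadable as ORC, so a
-- CTAS (or INSERT OVERWRITE into an ORC table) is the usual route.
CREATE TABLE my_table_orc STORED AS ORC AS
SELECT * FROM my_table_text;
```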
Labels:
- Apache Hadoop
02-06-2018 03:55 PM
I have a Spark job using HiveContext that runs a query and writes the result to an ORC table. I see a lot of Hive staging directories inside the table's HDFS location, like this: /.hive-staging_hive_2017-04-26_13-33-45_342_4121326216613322007-1. How can I avoid this, and what is the best way to delete them?
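One commonly suggested workaround is to point Hive's staging directory somewhere outside the table path via the standard hive.exec.stagingdir property. Whether an absolute path is honored, and whether passing it through spark.hadoop.* reaches your HiveContext, depends on the Hive and Spark versions, so treat this as a sketch and test on a scratch table first:

```bash
# Relocate staging dirs out of the table directory (path and job name are illustrative).
spark-submit \
  --conf spark.hadoop.hive.exec.stagingdir=/tmp/hive/.hive-staging \
  your_job.py

# Orphaned staging dirs (left by killed or failed jobs) are safe to remove once
# nothing is writing to the table:
hdfs dfs -rm -r -skipTrash '/path/to/table/.hive-staging_hive_*'
```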
Labels:
- Apache Hive
- Apache Spark
02-05-2018 09:32 PM
@Rob K Increase the Tez container size and Java opts, something like this:
set hive.tez.container.size=2048;
set hive.tez.java.opts=-Xmx1700m;
(The -Xmx value above is roughly 0.8 × the Tez container size.)
01-15-2018 02:37 AM
@Benoit Rousseau @Geoffrey Shelton Okot Yes, I have 3 working JournalNodes, but this issue keeps recurring; it never happened before. I read about the IPC epoch earlier and checked the link too, but I am not really sure how to resolve the issue. I do not see anything in the GC logs either, and there seems to be no network issue.
01-14-2018 12:59 PM
Hi, this happened yesterday and again today. Yesterday there was an error saying the Hive Metastore connection had failed, but it was actually the standby NameNode that had shut down, which I could see after restarting Hive. I restarted the standby and everything was fine, but the same thing happened again today. The error in the HDFS NameNode log looks like this:
2018-01-14 07:07:00,039 WARN client.QuorumJournalManager (IPCLoggerChannel.java:call(388)) - Remote journal x.x.x.x:8485 failed to write txns 641590896-641590896. Will try to write to this JN again after the next log roll.
org.apache.hadoop.ipc.RemoteException(java.io.IOException): IPC's epoch 52 is less than the last promised epoch 53
at org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:428)
at org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:456)
at org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:351)
at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:152)
at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:158)
at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25421)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
at org.apache.hadoop.ipc.Client.call(Client.java:1496)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy11.journal(Unknown Source)
at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolTranslatorPB.journal(QJournalProtocolTranslatorPB.java:167)
at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$7.call(IPCLoggerChannel.java:385)
at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$7.call(IPCLoggerChannel.java:378)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2018-01-14 07:07:00,039 WARN client.QuorumJournalManager (IPCLoggerChannel.java:call(388)) - Remote journal x.x.x.x:8485 failed to write txns 641590896-641590896. Will try to write to this JN again after the next log roll.
org.apache.hadoop.ipc.RemoteException(java.io.IOException): IPC's epoch 52 is less than the last promised epoch 53
at org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:428)
...
2018-01-14 07:07:00,041 FATAL namenode.FSEditLog (JournalSet.java:mapJournalsAndReportErrors(398)) - Error: flush failed for required journal (JournalAndStream(mgr=QJM to [x.x.x.x:8485, x.x.x.x:8485, x.x.x.x:8485], stream=QuorumOutputStream starting at txid 641590896))
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown:
x.x.x.x:8485: IPC's epoch 52 is less than the last promised epoch 53
...
2018-01-14 07:07:10,063 WARN util.ShutdownHookManager (ShutdownHookManager.java:run(70)) - ShutdownHook 'ClientFinalizer' timeout, java.util.concurrent.TimeoutException
java.util.concurrent.TimeoutException
at java.util.concurrent.FutureTask.get(FutureTask.java:205)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:67)
2018-01-14 07:07:10,063 ERROR hdfs.DFSClient (DFSClient.java:closeAllFilesBeingWritten(950)) - Failed to close inode 119707511
java.io.IOException: Failed to shutdown streamer
2018-01-14 07:07:10,070 INFO provider.BaseAuditHandler (BaseAuditHandler.java:logStatus(312)) - Audit Status Log: name=hdfs.async.batch.hdfs, interval=01:28.634 minutes, events=95, succcessCount=95, totalEvents=2279779, totalSuccessCount=2279779
2018-01-14 07:07:10,077 INFO queue.AuditFileSpool (AuditFileSpool.java:stop(321)) - Stop called, queueName=hdfs.async.batch, consumer=hdfs.async.batch.hdfs
2018-01-14 07:07:10,091 INFO queue.AuditBatchQueue (AuditBatchQueue.java:runLogAudit(362)) - Exiting consumerThread.run() method. name=hdfs.async.batch
2018-01-14 07:07:10,091 INFO queue.AuditFileSpool (AuditFileSpool.java:runLogAudit(867)) - Caught exception in consumer thread. Shutdown might be in progress
2018-01-14 07:07:10,091 INFO namenode.NameNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at str1.bi.grn.hcvlny.cv.net/x.x.x.x
Labels:
- Apache Hadoop
01-09-2018 08:25 PM
@vgarg Thanks for the answer. I am on HDP 2.5.3. When I run CONCATENATE once on a partition with many files, it only merges one or a few files each time, so I have to run it multiple times to get everything into one large file. Was this a known issue as well? Could you please point me to the concatenate issues in earlier versions?
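For reference, the statement in question, with placeholder table and partition names:

```sql
-- Merge the small ORC files in one partition; on some older Hive versions this
-- may need repeated runs, as described above.
ALTER TABLE my_table PARTITION (dt='2018-01-01') CONCATENATE;
```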