Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1920 | 06-15-2020 05:23 AM |
| | 15466 | 01-30-2020 08:04 PM |
| | 2072 | 07-07-2019 09:06 PM |
| | 8118 | 01-27-2018 10:17 PM |
| | 4571 | 12-31-2017 10:12 PM |
07-02-2019
02:25 AM
The above question and the entire response thread below were originally posted in the Community Help track. On Tue Jul 2 02:23 UTC 2019, a member of the HCC moderation staff moved it to the Data Ingestion & Streaming track. The Community Help track is intended for questions about using the HCC site itself, not technical questions about Kafka's dependencies on ZooKeeper.
07-01-2019
09:10 AM
@Michael Bronson The doc I shared describes the standard best practice. It does not say that running ZK on a Kafka host will not work, but as a best practice you should keep them on separate hosts due to load constraints. Ultimately it comes down to testing both scenarios in your pre-prod environment and analyzing the metrics, then proceeding with whatever suits your requirements.
07-02-2019
02:17 AM
The above question and the entire response thread below were originally posted in the Community Help track. On Tue Jul 2 01:01 UTC 2019, a member of the HCC moderation staff moved it to the Hadoop Core track. The Community Help track is intended for questions about using the HCC site itself, not technical questions about custom installations of ZooKeeper.
06-27-2019
03:22 AM
The above question and the entire response thread below were originally posted in the Community Help track. On Thu Jun 27 03:00 UTC 2019, a member of the HCC moderation staff moved it to the Cloud & Operations track. The Community Help track is intended for questions about using the HCC site itself, not technical questions.
06-17-2019
05:59 PM
Limit the `ls` to a few entries, e.g. `hdfs dfs -ls /tmp/hive/hive/14*`. The directories underneath are zero bytes:

drwx------ - hive hdfs 0 2017-09-04 17:10 /tmp/hive/hive/149e8d6a-ad2a-433e-87be-6cb5b27e2b7b/_tmp_space.db

Find the older ones and start purging them manually until you get a breather. After that, get permission to implement an automated approach.
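A minimal sketch of that manual purge, assuming the HDFS client is on the PATH and GNU date is available; the 15-day cutoff and the `-skipTrash` choice are assumptions, not part of the original advice:

```bash
#!/usr/bin/env bash
# Hypothetical cleanup sketch: remove Hive scratch dirs under /tmp/hive/hive
# whose modification date (column 6 of `hdfs dfs -ls`) is older than a cutoff.
cutoff=$(date -d '15 days ago' +%Y-%m-%d)   # requires GNU date

hdfs dfs -ls /tmp/hive/hive 2>/dev/null \
  | awk -v c="$cutoff" 'NF >= 8 && $6 < c {print $8}' \
  | while read -r dir; do
      # -skipTrash frees the space immediately instead of moving data to .Trash
      hdfs dfs -rm -r -skipTrash "$dir"
    done
```

On a live cluster, first replace the delete with a plain `echo "$dir"` to verify the candidate list.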
06-26-2019
01:14 PM
@Michael Bronson Yes, you can delete /tmp/hive/hive if it is eating up HDFS space. It's better to schedule a script every 15 days to clean up the directory, and to enable e-mail notifications so you get the alerts/warnings accordingly. I did the same in my organization during a storage crisis. Thank you.
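A minimal sketch of such a schedule, assuming the cleanup logic above is saved as a script; the script path, the 1st/15th-of-month timing, and the recipient address are assumptions:

```bash
# Hypothetical crontab entries for the hdfs user. MAILTO makes cron e-mail
# each run's output, giving the alerts/warnings mentioned above.
MAILTO=hdfs-admin@example.com
# Run at 02:00 on the 1st and 15th of every month (~every 15 days).
0 2 1,15 * * /opt/scripts/clean_tmp_hive.sh
```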
06-16-2019
05:32 AM
Also, I can't start the JournalNode (on the bad NameNode):

2019-06-16 05:29:39,734 WARN namenode.FSImage (EditLogFileInputStream.java:scanEditLog(359)) - Caught exception after scanning through 0 ops from /hadoop/hdfs/journal/hdfsha/current/edits_inprogress_0000000000018783114 while determining its valid length. Position was 1032192
java.io.IOException: Can't scan a pre-transactional edit log.
at org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$LegacyReader.scanOp(FSEditLogOp.java:4974)
at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.scanNextOp(EditLogFileInputStream.java:245)
at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.scanEditLog(EditLogFileInputStream.java:355)
at org.apache.hadoop.hdfs.server.namenode.FileJournalManager$EditLogFile.scanLog(FileJournalManager.java:551)
at org.apache.hadoop.hdfs.qjournal.server.Journal.scanStorageForLatestEdits(Journal.java:192)
at org.apache.hadoop.hdfs.qjournal.server.Journal.<init>(Journal.java:152)
at org.apache.hadoop.hdfs.qjournal.server.JournalNode.getOrCreateJournal(JournalNode.java:90)
at org.apache.hadoop.hdfs.qjournal.server.JournalNode.getOrCreateJournal(JournalNode.java:99)
at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.getJournalState(JournalNodeRpcServer.java:127)
at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.getJournalState(QJournalProtocolServerSideTranslatorPB.java:118)
at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25415)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
2019-06-16 05:29:39,734 WARN namenode.FSImage (EditLogFileInputStream.java:scanEditLog(364)) - After resync, position is 1032192
05-13-2019
07:32 AM
1 Kudo
You should copy the directory structure as-is: create the hbase folder in /data_metrics/lib/ambari-metrics-collector/ and copy the contents from the other location exactly as they are.
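A minimal sketch of that copy, assuming the existing data sits under the default /var/lib/ambari-metrics-collector and that the collector runs as ams:hadoop; both are assumptions to verify first:

```bash
# Hypothetical sketch: copy the hbase directory into the new collector
# location with permissions, timestamps, and structure preserved (-a).
cp -a /var/lib/ambari-metrics-collector/hbase /data_metrics/lib/ambari-metrics-collector/

# Re-assert ownership for the Metrics Collector service account (assumed ams:hadoop).
chown -R ams:hadoop /data_metrics/lib/ambari-metrics-collector/hbase
```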
05-13-2019
07:36 AM
1 Kudo
It's not advisable to delete the intermediate files of the hbase directory directly. If you are OK with erasing the data, follow this: https://cwiki.apache.org/confluence/display/AMBARI/Cleaning+up+Ambari+Metrics+System+Data
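For context, a heavily hedged sketch of the wipe that page describes, for an embedded-mode AMS; the paths below are common defaults, not confirmed values, so read hbase.rootdir and hbase.tmp.dir from ams-hbase-site before deleting anything:

```bash
# Hypothetical sketch: stop the Metrics Collector first (e.g. via Ambari),
# then clear the data directories so AMS recreates them on restart.
rm -rf /var/lib/ambari-metrics-collector/hbase/*       # assumed hbase.rootdir
rm -rf /var/lib/ambari-metrics-collector/hbase-tmp/*   # assumed hbase.tmp.dir
# Restart the Metrics Collector afterwards; all historical metrics are lost.
```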
04-30-2019
03:16 PM
@Michael Bronson Any updates?