Member since: 10-12-2016
Posts: 16
Kudos Received: 0
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 2589 | 10-13-2016 12:57 PM |
07-23-2018 08:25 PM
Rats. I am using 1.7.0. It is already uncommented.
07-23-2018 07:32 PM
We have tried to change these settings, but no joy. I "ruined" the logback XML and NiFi did not start, so the file is definitely being read. However, any setting change I make does not take effect (nifi-app.log). The stock setup should rotate the logs every hour, but we have not seen this behavior. I also changed the <maxFileSize> to 2MB, but that doesn't help either. Anyone have the same experience? Seems odd. We are on Windows 10.
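For comparison, here is a minimal sketch of the kind of appender block we are editing in conf/logback.xml (paths and values are illustrative, not copied from our install). With SizeAndTimeBasedRollingPolicy, the hourly roll comes from the %d{yyyy-MM-dd_HH} token in fileNamePattern, and maxFileSize splits files within the hour:

```xml
<appender name="APP_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>logs/nifi-app.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <!-- %d{yyyy-MM-dd_HH} rolls hourly; %i splits oversized files within the hour -->
        <fileNamePattern>logs/nifi-app_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
        <maxFileSize>2MB</maxFileSize>
        <maxHistory>30</maxHistory>
    </rollingPolicy>
    <encoder>
        <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
    </encoder>
</appender>
```

One thing to rule out: logback only re-reads the file while running if the top-level <configuration> element has scan="true"; otherwise changes need a NiFi restart to take effect.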
07-16-2018 08:04 PM
Thanks Matt! I was wondering if something like that was possible. I think, given the way the Azure Event Hub client software works, it would be impossible to change this. It manages the threads, and they are long-running, so, as you say, they never release the first thread that starts them up. It would take a rewrite of the EHub listener logic to fix it. Thanks for clearing this up!
07-16-2018 07:45 PM
Thanks for the reply, Matt. However, I am not sure you read my post entirely. ConsumeAzureEventHub does not seem to EVER stop pumping more messages into the queue. Could it be written incorrectly? We have it set to 10,000 and have never seen it stop rising.
07-16-2018 07:15 PM
We are seeing the same thing, specifically with the ConsumeAzureEventHub processor. It seems to completely ignore both the size and count settings. A simple PutFile works, but not this one. We have seen the queue go above 1,000,000 objects and over 1 GB in size.
08-22-2017 05:34 PM
The Microsoft EHub integration has been fixed in Storm 2.0.0. Yes!
06-22-2017 08:33 PM
Awesome. Thanks for sharing your knowledge!
06-22-2017 12:42 PM
So you said: "this is actually a really idea if you've somehow done this, by the way..." I think you left an adjective out; I suppose you mean "bad"? We have been told by experts at Azure and Hashmap that we don't need to do major compaction, and it is currently shut off.

1. We don't do any deletions from our system. Would this be the reason they say this?
2. We have been told that major compaction will block any writes to our tables (we can't have this). I was told this is untrue at PhoenixCon, but when I asked Hashmap, they said that HDInsight has rewritten major compaction and that it does block writes.
3. We want to start using TTL. If minor compaction deletes these records (that is what I took from the above), is major compaction required?
4. Why is there so much confusion about this?! Everyone seems to think TTL requires major compaction.
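For anyone who does end up needing one: with automatic major compaction disabled, a compaction can still be triggered by hand from the HBase shell. A minimal sketch, with an illustrative table name:

```
# Trigger a major compaction on one table (name is illustrative)
major_compact 'my_table'

# Or limit it to a single column family
major_compact 'my_table', 'cf'
```

Running it by hand during an off-peak window would sidestep most of the write-impact worry, whatever the answer on HDInsight turns out to be.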
06-21-2017 07:19 PM
Yup. That's me! We are still trying to figure this out. We have gotten 4 different answers from 4 different people. Hope things are going well!
06-21-2017 05:53 PM
The docs say:

> When an explicit deletion occurs in HBase, the data is not actually deleted. Instead, a tombstone marker is written. The tombstone marker prevents the data from being returned with queries. During a major compaction, the data is actually deleted, and the tombstone marker is removed from the StoreFile. If the deletion happens because of an expired TTL, no tombstone is created. Instead, the expired data is filtered out and is not written back to the compacted StoreFile.

What does "expired data is filtered out and is not written back to the compacted StoreFile" mean? I have done some testing with TTL. I put 1 million records in a database and checked the file size. Then I set the TTL to 1 minute and all the data disappeared (the actual file got much smaller). Our database has major compaction shut off. Does a major compaction have to happen with TTL?
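For anyone who wants to reproduce the test, this is roughly what it looks like from the HBase shell (table and family names are illustrative):

```
# Create a table whose column family expires cells after 60 seconds
create 'ttl_test', {NAME => 'cf', TTL => 60}

# Write a cell and read it back immediately
put 'ttl_test', 'row1', 'cf:q', 'value'
get 'ttl_test', 'row1'    # returns the cell

# Wait past the TTL; expired cells are filtered out of reads
# even before any compaction has rewritten the StoreFile
get 'ttl_test', 'row1'    # returns nothing
```

As I understand it, the on-disk size only shrinks once a compaction (minor or major) rewrites the StoreFile without the expired cells, which would explain the file getting smaller in the test above.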
Labels:
- Apache HBase
- Apache Phoenix