Member since: 01-04-2021
Posts: 16
Kudos Received: 3
Solutions: 0
12-19-2022
03:51 AM
1 Kudo
I have tried several scenarios to generate a cache miss in HBase on HDP 2.6.5. The steps I followed include: 1) putting a value in HBase using the put command and fetching it using the get command; 2) putting a value in HBase with the put command, flushing, and then trying to fetch the data. Neither of these produces a cache miss. In fact, the "hits" and "hits caching" counters keep increasing by multiple counts during flushing, while "misses" and "misses caching" always remain zero. Why does this happen? Also, what is the difference between "misses" and "misses caching", and between "hits" and "hits caching"? I will attach screenshots of the region server logs.
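For reference, the shell commands I ran looked roughly like this (the table name 't1' and column family 'cf' stand in for my actual names):

put 't1', 'r1', 'cf:a', 'v1'    # write goes to the MemStore
get 't1', 'r1'                  # read the value back
flush 't1'                      # force the MemStore out to an HFile
get 't1', 'r1'                  # read again; should now be served from the HFile/BlockCache

Even after the flush, the misses counters stay at zero.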
Labels:
- Apache Hadoop
- Apache HBase
- HDFS
12-14-2022
09:57 PM
@krishnas_suresh @mqureshi Is there any specific configuration that would let us store files in a specific location in HDFS and have that URL stored automatically in HBase, or do we have to do this manually?
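For context, the manual approach I have in mind is roughly the following (the paths, table name 'files', and column 'cf:url' are just examples):

hdfs dfs -put report.pdf /data/files/report.pdf
# then, in the HBase shell, store the location as a plain string value:
put 'files', 'report.pdf', 'cf:url', 'hdfs://<namenode>:8020/data/files/report.pdf'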
12-13-2022
09:34 PM
@smdas Adding another scenario to the picture. Let's say a row key is already in the BlockCache and an update for it was just made. The BlockCache holds an existing value that is no longer the latest, while the updated value is in the MemStore (or in an HFile once flushed). When a read occurs for the same row key, we look for the data first in the BlockCache and find the row key with the old value. How does HBase know that the value the BlockCache currently holds is not the latest and that the latest has to be fetched from the MemStore or an HFile?
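A minimal way to reproduce what I mean in the HBase shell (all names are placeholders; the family keeps two versions so both cells stay visible):

create 't1', {NAME => 'cf', VERSIONS => 2}
put 't1', 'r1', 'cf:a', 'v1'
flush 't1'                      # v1 is now in an HFile
get 't1', 'r1'                  # this read pulls the block holding v1 into the BlockCache
put 't1', 'r1', 'cf:a', 'v2'    # v2 sits only in the MemStore
get 't1', 'r1'                  # returns v2, so the stale cached block alone is clearly not trusted
get 't1', 'r1', {COLUMN => 'cf:a', VERSIONS => 2}    # shows both cells with their timestamps, newest first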
12-10-2022
10:04 AM
@smdas So even if a key is updated in the MemStore and not in the BlockCache, the read merge updates the BlockCache value from the MemStore directly, without updating the HFile? Because for the HFile to be updated, a flush has to happen, right? Or are the MemStore and BlockCache checks done simultaneously?
12-08-2022
02:38 AM
1 Kudo
Consider a scenario where data is written to an HFile in HBase. Now a read occurs and the result is saved in the BlockCache. Then an update occurs for the data that is in the BlockCache, and the update is saved in the MemStore.
Now if the same data is read again, HBase looks for it first in the BlockCache, and if the cached block has not yet expired, the result is found there. If that data is returned to the client, it would be an inconsistent read.
Is it possible for the above scenario to occur, or is my understanding wrong?
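The sequence I have in mind, expressed as HBase shell commands (table and column names are made up):

put 't1', 'r1', 'cf:a', 'v1'
flush 't1'                      # v1 is now in an HFile
get 't1', 'r1'                  # this read caches the block containing v1
put 't1', 'r1', 'cf:a', 'v2'    # the update lands in the MemStore
get 't1', 'r1'                  # could this ever return the cached v1 instead of v2?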
Labels:
- Apache HBase
- HDFS
12-08-2022
02:35 AM
In HBase, as per my reading, a read happens by first checking the BlockCache; if that misses, the MemStore; if that misses, bloom filters are used to check for the record; and finally the index on the HFile is used to read the data. But what if all the data is compressed? How can HBase find the index and read the data from a compressed HFile? And once the data is read, where does the decompression occur? On the client?
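By "compressed" I mean a table created with a compression codec on the column family, for example (SNAPPY here is just an example codec):

create 't1', {NAME => 'cf', COMPRESSION => 'SNAPPY'}
describe 't1'    # shows COMPRESSION => 'SNAPPY' on the family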
Labels:
- Apache HBase
- HDFS
03-03-2021
11:45 PM
I have set up user authentication using LDAP for NiFi. I am now able to add new users and apply access restrictions to different users, etc. But I still cannot understand how to truly achieve multi-tenancy with NiFi. For example, when I add new users and want to let them view the interface, I grant them the policy to view the user interface. But the user can still see the positions and connections of all components, even though they cannot access them. As I understand it, this is not true multi-tenancy, because users should not be able to see things that are not accessible to them. The same problem exists with the controller services: users can view them but cannot access or edit them. It is also not possible to create an admin just for one tenant. Is there any way we can truly achieve this in NiFi? @MattWho @bbende @pam1
Labels:
- Apache NiFi
03-02-2021
09:33 PM
@wikulinme Were you able to solve this?
02-23-2021
07:31 PM
I used this solution with NiFi 1.12.1 and it works correctly. I had messed up a part while following the tutorial.