Member since: 12-28-2015
Posts: 47
Kudos Received: 2
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 7088 | 05-24-2017 02:14 PM
 | 2989 | 05-01-2017 06:53 AM
 | 5842 | 05-02-2016 01:11 PM
 | 6425 | 02-09-2016 01:40 PM
05-02-2016 09:06 AM
How can I list HDFS files by timestamp, just like ls -lt in Unix?
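One common approach, sketched under the assumption that hdfs dfs -ls prints the modification date and time in columns 6 and 7 (the path /user/data is a placeholder):

```shell
# Sort an HDFS listing by modification time, newest first.
# Columns 6-7 of `hdfs dfs -ls` output are the date and time.
hdfs dfs -ls /user/data | sort -k6,7 -r
```

Drop the -r flag to get oldest-first ordering instead.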
Labels:
- HDFS
02-09-2016 01:40 PM
Vmshah, do both users belong to the same group? With permissions d(rwx)(r-x)(r-x), i.e. drwxr-xr-x, the owner has full access while the group and others can read and execute the data. If you want only user1 to read, write, and execute it, set the permissions accordingly (e.g. hadoop fs -chmod 700 /tmp/user1zone1/helloWorld.txt).
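The effect of mode 700 can be checked with ordinary POSIX tools; this local sketch (using a mktemp path rather than an HDFS one) mirrors what hadoop fs -chmod 700 does to a file in HDFS:

```shell
# Mode 700: owner gets rwx, group and others get no access at all.
f=$(mktemp)
chmod 700 "$f"
ls -l "$f" | cut -c1-10   # prints -rwx------
rm -f "$f"
```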
02-08-2016 08:58 PM
First of all, both users can access the file because you may not have set each user's permissions on that file accordingly. Don't confuse encryption with permissions: the question you asked is about file-level permissions, and encryption has many more use cases than permissions do.

When a new file is created in an encryption zone, the NameNode asks the KMS to generate a new EDEK encrypted with the encryption zone's key. The EDEK is then stored persistently as part of the file's metadata on the NameNode.

When a file within an encryption zone is read, the NameNode provides the client with the file's EDEK and the encryption zone key version used to encrypt the EDEK. The client then asks the KMS to decrypt the EDEK, which involves checking that the client has permission to access the encryption zone key version. Assuming that is successful, the client uses the DEK to decrypt the file's contents.

Hope this clears up your question!
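The flow above presupposes an encryption zone backed by a KMS key; a minimal setup sketch, where the key name mykey and the zone path /zone are placeholders and a running KMS plus HDFS superuser privileges are assumed:

```shell
# Create a key in the KMS, then mark an empty directory as an
# encryption zone backed by that key.
hadoop key create mykey
hdfs dfs -mkdir /zone
hdfs crypto -createZone -keyName mykey -path /zone
```

Files written under /zone are then encrypted transparently with per-file EDEKs as described above.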
01-28-2016 05:39 PM
Thank you all for your time; the logical workaround sounds good to me.
01-25-2016 06:01 AM
I didn't mean to use -skipTrash; it was suggested to me because I couldn't delete a file from the encryption zone. If there is any way to use trash for an encryption zone, please let me know.
01-24-2016 07:18 PM
I am using -skipTrash to delete an HDFS file from an encryption zone. Is there any way I can use trash to recover a deleted file from an encryption zone?
Labels:
- HDFS
01-15-2016 11:27 AM
So even though I archive these files, I won't be saving any disk space; is that right?
01-15-2016 11:19 AM
1) I have 194945 files that are each smaller than 50 MB, and together they occupy 884 GB. How do I calculate the space these files will occupy if I create a Hadoop archive of them?
2) Am I using my HDFS efficiently with all these small files, or am I wasting memory here?
3) Does archiving really save disk space, or does it just reduce the namespace overhead?
Harsh, can you give me a detailed picture of this?
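On the namespace-overhead point, a rough back-of-the-envelope sketch, assuming the commonly cited heuristic of about 150 bytes of NameNode heap per namespace object (file or block), and assuming each sub-50 MB file maps to a single block:

```python
# Estimate NameNode heap consumed by the small files.
# Both the ~150 B/object figure and the one-block-per-file mapping
# are assumptions, not measured values.
files = 194945          # from the post
blocks = files          # each file < 50 MB -> one block apiece
bytes_per_object = 150
heap_mb = (files + blocks) * bytes_per_object / 1e6
print(round(heap_mb, 1))   # ~58.5 MB of NameNode heap
```

A HAR packs many small files into a few large ones, which shrinks this object count; the file data itself is copied, not compressed, so raw disk usage does not go down.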
01-15-2016 09:53 AM
I got the details below through hadoop fsck /:

Total size: 41514639144544 B (Total open files size: 581 B)
Total dirs: 40524
Total files: 124348
Total symlinks: 0 (Files currently being written: 7)
Total blocks (validated): 340802 (avg. block size 121814540 B) (Total open file blocks (not validated): 7)
Minimally replicated blocks: 340802 (100.0 %)

I am using a 256 MB block size, so 340802 blocks * 256 MB = 83.2 TB, times 3 replicas = 249.6 TB, but Cloudera Manager shows 110 TB of disk used. How is that possible? Does this mean that even though the block size is 256 MB, a small file doesn't use the whole block for itself?
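The discrepancy disappears if blocks are charged only for the bytes they actually hold; a quick check, taking the fsck total at face value:

```python
# fsck's "Total size" counts logical bytes; HDFS blocks are not padded
# out to the configured block size, so a file consumes only its own
# length on disk (times the replication factor).
total_size = 41514639144544   # bytes, from the fsck report
replication = 3
used_tib = total_size * replication / 2**40
print(round(used_tib, 1))     # ~113.3 TiB, near the ~110 TB CM reports
```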
Labels:
- Apache Hadoop
- Cloudera Manager
- HDFS
12-28-2015 09:51 AM
My fsck output contains lines like this:

BP-929597290-192.0.0.2-1439573305237:blk_1074084574_344316 len=2 repl=3 [DatanodeInfoWithStorage[192.0.0.9:1000,DS-730a75d3-046c-4254-990a-4eee9520424f,DISK], DatanodeInfoWithStorage[192.0.0.1:1000,DS-fc6ee5c7-e76b-4faa-b663-58a60240de4c,DISK], DatanodeInfoWithStorage[192.0.0.3:1000,DS-8ab81b26-309e-42d6-ae14-26eb88387cad,DISK]]

What do the BP and blk parts store, and why are they displayed for my fsck command?
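For reference, the line breaks down mechanically: BP-... is the block pool ID (one pool per namespace, introduced with HDFS federation) and blk_<id>_<genstamp> names the block and its generation stamp. A parsing sketch, where the regex shape is an assumption based on the one line shown:

```python
import re

# fsck prints one such line per block of a file: the block pool ID,
# the block ID with its generation stamp, the block's length in bytes,
# and its replica count, followed by the datanodes holding replicas.
line = ("BP-929597290-192.0.0.2-1439573305237:"
        "blk_1074084574_344316 len=2 repl=3")
m = re.match(r"(BP-\S+?):(blk_\d+)_(\d+) len=(\d+) repl=(\d+)", line)
pool, block, genstamp, length, repl = m.groups()
print(block, length, repl)   # blk_1074084574 2 3
```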
Labels:
- HDFS