Member since: 10-13-2015
Posts: 26
Kudos Received: 15
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2336 | 10-23-2015 10:05 PM |
11-24-2015
02:28 AM
1 Kudo
@Neeraj Sabharwal Yes, I had to create a local ambari-qa user to get the HBase service check to work.
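For reference, a minimal sketch of that workaround on a Linux host; the exact options (home directory, comment) are assumptions and will vary by environment:

```bash
# Create a local ambari-qa service account so the Ambari HBase service check
# can run as that user (home dir and comment below are illustrative assumptions).
sudo useradd -m -c "Ambari smoke test user" ambari-qa
id ambari-qa   # verify the account now exists
```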
11-06-2015
06:37 PM
As of HDP 2.3.2, the above components are supported by Ranger. For Storm, @hfaouaz@hortonworks.com, yes, both Storm and Kafka need to be Kerberized.
08-30-2016
03:42 AM
1 Kudo
If the above steps don't work, please add or update the values of the properties 'ranger.truststore.file' and 'ranger.truststore.password' in the ranger-admin module according to your environment. Based on the steps mentioned above, sample values would be:
ranger.truststore.file=/usr/hdp/current/ranger-admin/cacertswithknox
ranger.truststore.password=changeit
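For reference, a hedged sketch of how the truststore referenced above might be populated before restarting Ranger admin; the certificate path (/tmp/knox.crt) and the alias are illustrative assumptions for your environment:

```bash
# Import the Knox gateway certificate into the truststore pointed to by
# ranger.truststore.file (cert path and alias below are illustrative assumptions).
keytool -import -trustcacerts \
  -alias knoxsso \
  -file /tmp/knox.crt \
  -keystore /usr/hdp/current/ranger-admin/cacertswithknox \
  -storepass changeit -noprompt
```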
11-04-2015
01:04 AM
@hfaouaz@hortonworks.com Could you share the errors or log entries, please? I am sure you have already seen this link.
10-29-2015
08:21 PM
Thank you. In addition, this tool assumes you have manually created the store and are pointing to it; it does not create one for you.
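Purely as an illustration (the actual store type, path, and tool depend on your setup), one way to create such a store by hand with keytool before pointing the tool at it:

```bash
# Manually create a JKS store first; the tool only references an existing one.
# Path, alias, passwords, and DN below are illustrative assumptions.
keytool -genkeypair \
  -alias mykey -keyalg RSA -keysize 2048 \
  -keystore /etc/security/serverKeys/mystore.jks \
  -storepass changeit -keypass changeit \
  -dname "CN=example.host.com,OU=IT,O=Example,L=City,ST=CA,C=US"
```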
09-12-2016
03:57 AM
1 Kudo
I ran into a similar issue and did what Miraj said; it works!
10-22-2015
07:18 PM
Knox provides a solution for perimeter security and, like any security component (encryption or authorization), it does add overhead to the processing time. That said, performance should not be the deciding factor when determining whether a security tool is needed. We can load balance the traffic across multiple Knox instances to distribute the load and avoid too much degradation in performance.
11-28-2016
01:47 PM
I resolved the issue by using the answer from @Hajime.
10-21-2015
03:38 AM
1 Kudo
@hfaouaz@hortonworks.com - each HDFS block occupies ~250 bytes of RAM on the NameNode (NN), plus an additional ~250 bytes is required for each file and directory. Block size is 128 MB by default, so you can calculate how much RAM will support how many files.

To guarantee persistence of the filesystem metadata, the NN also has to keep a copy of its in-memory structures on disk: the NN dirs you mentioned, which hold the fsimage and edit logs. The edit logs capture all changes happening to HDFS (such as new files and directories); think of the redo logs most RDBMSs use. The fsimage is a full snapshot of the metadata state. The fsimage file will not grow beyond the allocated NN memory, and the edit logs get rotated once they hit a specific size.

It is always safest to allocate significantly more capacity for the NN directory than needed, say 4 times what is configured for NN memory; but if disk capacity isn't an issue, allocate 500 GB+ if you can spare it (more capacity is very common, especially when setting up a 3+3 or 4+4 RAID 10 mirrored set). Setting up RAID at the disk level, like RAID 1 or RAID 1/0, makes sense, and with RAID in place a single NN directory is just fine.
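To make that calculation concrete, here is a rough back-of-the-envelope sketch; the file, directory, and block counts are illustrative assumptions, and the ~250 bytes per object figure is the estimate quoted above:

```bash
# Back-of-the-envelope NameNode heap estimate: ~250 bytes per block plus
# ~250 bytes per file and per directory. Counts below are illustrative assumptions.
FILES=10000000          # 10 million files
DIRS=1000000            # 1 million directories
BLOCKS_PER_FILE=1       # small files: one 128 MB block each
BYTES_PER_OBJECT=250

OBJECTS=$(( FILES + DIRS + FILES * BLOCKS_PER_FILE ))
HEAP_MB=$(( OBJECTS * BYTES_PER_OBJECT / 1024 / 1024 ))
echo "Estimated NameNode heap for metadata: ${HEAP_MB} MB"   # roughly 5 GB here
```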