Member since: 10-01-2015
Posts: 52
Kudos Received: 25
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2283 | 09-29-2016 02:09 PM |
| | 885 | 09-28-2016 12:32 AM |
| | 3916 | 08-30-2016 09:56 PM |
05-17-2017
06:52 AM
Thanks Mayank. I just wanted to add one thing to clarify for others who might hit this problem, because I wasted some time on it myself: to solve the "SSLContext must not be null" error, you correctly stated "distribute keystore and truststore file to all machines". I had only distributed them to the HBase Master nodes, but it's important to also deploy the same keystores to all region server machines.
09-13-2016
02:58 PM
This will allow you to store column values larger than 2465 characters in the future (if required); the Ranger patch does the same. Let me know if it still fails after increasing the column length to 4000.
10-13-2018
11:37 AM
@mkataria Did you find a solution? Can you share the steps you performed with the wildcard cert?
08-30-2016
09:56 PM
Found it. I would have left it like that, but since it's in Production, sometimes you just need to know. For replication, Falcon needs to be aware of the local and remote cluster IDs via the property below.
dfs.nameservices: mine has a value like "prodID,devID". What the balancer does is try to reach both nameservices. When I run the command in the prod cluster as hdfs with a proper ticket, it throws an error for "hdfs-prod" (which is my principal without the REALM), yet it still balances the prod cluster. So the error, although not clear, is actually a permission denied on the remote nameservice (which makes sense): the user is still "hdfs", but the principal is different ("hdfs-dev" in my case). I ran the same command in Dev and that cluster was rebalanced, but I got the same error, this time "Access denied for user hdfs-dev. Superuser privilege is required." Thanks for the support @emaxwell, @mqureshi, @Kuldeep Kulkarni. I hope the above answer helps others. (A few other hdfs commands show similar no-effect/error behavior.) Thanks, Mayank
07-20-2016
08:50 PM
@mkataria With HDFS Snapshots there is no actual data copying up front for a new snapshot. A snapshot is simply a pointer to a record at a point in time. So when you first take a snapshot, your HDFS storage usage stays the same. Data is only copied/written when you modify it, following the Copy-on-Write (COW) concept. Please take a look at the JIRA below. It contains the discussion that led to the design and is quite informative. https://issues.apache.org/jira/browse/HDFS-2802
06-08-2016
02:58 PM
Thanks @emaxwell. I hope this will help most of us, especially the ones using an MS AD KDC. Regards,
Mayank
03-22-2016
09:41 PM
1 Kudo
@mkataria You can use Apache Falcon (http://hortonworks.com/hadoop/falcon/), or see this article: https://community.hortonworks.com/articles/9933/apache-nifi-aka-hdf-data-flow-across-data-center.html
01-27-2016
05:50 PM
1 Kudo
Hi @mkataria, sure, I'll try my best. First, click on the 'HDFS' service in Ambari. In the next dialog, create one config group per NodeManager, provide a corresponding name, and assign that node to the config group. Then go back to the "general" HDFS config page (picture 1), select a config group, and adjust the log destination for that particular NodeManager node (= config group). ...and restart HDFS 😉 Regards, Gerd