Member since: 09-29-2015
Posts: 123
Kudos Received: 216
Solutions: 47
My Accepted Solutions
Views | Posted
---|---
9018 | 06-23-2016 06:29 PM
3076 | 06-22-2016 09:16 PM
6134 | 06-17-2016 06:07 PM
2800 | 06-16-2016 08:27 PM
6559 | 06-15-2016 06:44 PM
06-13-2016
06:38 PM
@Tom Ellis, you mentioned finding the SaslRpcClient class. That's a very important piece. This is the class that handles SASL authentication for any client-server interaction that uses Hadoop's common RPC framework. The core Hadoop daemons in HDFS and YARN, such as NameNode and ResourceManager, make use of this RPC framework, and many other services throughout the Hadoop ecosystem do too. Clients of those servers use the SaslRpcClient class as the entry point for SASL negotiation.

This is typically performed on connection establishment to a server, such as the first time a Hadoop process attempts an RPC to the NameNode or the ResourceManager. The exact service to use is negotiated between client and server at the beginning of connection establishment, in the negotiation code that you mentioned finding. The service value will be different per Hadoop daemon, driven by the shortened principal name, e.g. "nn".

However, you won't find anything in the Hadoop source code that explicitly references the TGS. Instead, the Hadoop code delegates to the GSS API provided by the JDK for the low-level implementation of the Kerberos protocol, including handling of the TGS. If you're interested in digging into that, the code is visible in the OpenJDK project. Here is a link to the relevant Java package in the OpenJDK 7 tree: http://hg.openjdk.java.net/jdk7u/jdk7u/jdk/file/f51368baecd9/src/share/classes/sun/security/jgss/krb5 Some of the most relevant classes there would be Krb5InitCredential and Krb5Context.
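To illustrate the "shortened principal name" idea above, here is a minimal Java sketch (this is not Hadoop's actual implementation, and the helper name is hypothetical) of deriving the short service name from a full Kerberos principal of the form service/host@REALM:

```java
public class PrincipalServiceName {
    // Hypothetical helper: extract the short service name (e.g. "nn" for the
    // NameNode) from a Kerberos principal like "nn/namenode.example.com@EXAMPLE.COM".
    // The service part is everything before the first '/' (or '@' if no host part).
    static String serviceFromPrincipal(String principal) {
        int slash = principal.indexOf('/');
        int at = principal.indexOf('@');
        int end = slash >= 0 ? slash : (at >= 0 ? at : principal.length());
        return principal.substring(0, end);
    }

    public static void main(String[] args) {
        // Prints the short name used to select the SASL service value.
        System.out.println(serviceFromPrincipal("nn/namenode.example.com@EXAMPLE.COM"));
    }
}
```

The real negotiation in SaslRpcClient is of course richer (it exchanges supported auth methods with the server), but the per-daemon service value follows this shortened-principal pattern.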
03-10-2016
05:47 AM
1 Kudo
Thanks for your reply @Chris Nauroth
04-16-2018
10:44 AM
Which of these statements is true: "The DataNode stores a single ".meta" file corresponding to each block replica", or "For each block replica hosted by a DataNode, there is a corresponding metadata file"?
02-17-2016
06:42 PM
3 Kudos
+1 Another consideration is upgrades. Sharing the same set of JournalNodes across multiple clusters would complicate upgrade plans, because an upgrade of software on those JournalNodes potentially impacts every cluster served by those JournalNodes.
02-05-2017
11:23 PM
Photos are missing. Can somebody fix this?
08-16-2017
12:28 AM
In our case, the problem was that some Hive partitions pointed to a non-HA location on HDFS. We fixed it using the Hive metatool (the optional -dryRun flag previews the changes first): hive --config /etc/hive/conf/conf.server --service metatool [-dryRun] -updateLocation hdfs://h2cluster hdfs://h2namenode:8020
02-03-2016
07:59 PM
3 Kudos
I'm most familiar with GC tuning for HDFS, so I'll answer from that perspective. As you expected, our recommendation for the HDFS daemons is CMS. In practice, we have found that some of the default settings for CMS are sub-optimal for the NameNode's heap usage pattern. In addition to enabling CMS, we recommend tuning a few of those settings. I agree that G1 would be good to evaluate as the future direction. As of right now, we have not tested and certified with G1, so I can't recommend using it. For more details, please refer to the NameNode garbage collection deep dive article that I just posted. https://community.hortonworks.com/articles/14170/namenode-garbage-collection-configuration-best-pra.html
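For a concrete picture of what "enabling CMS plus tuning a few settings" can look like, here is an illustrative hadoop-env.sh fragment. The heap sizes and thresholds are example values only, not certified recommendations; see the linked article for the actual guidance.

```shell
# Illustrative NameNode JVM options enabling CMS (example values only;
# tune heap and occupancy settings for your own workload):
export HADOOP_NAMENODE_OPTS="-Xms16g -Xmx16g \
  -XX:NewSize=2g -XX:MaxNewSize=2g \
  -XX:+UseConcMarkSweepGC \
  -XX:+UseParNewGC \
  -XX:+CMSParallelRemarkEnabled \
  -XX:CMSInitiatingOccupancyFraction=70 \
  -XX:+UseCMSInitiatingOccupancyOnly \
  ${HADOOP_NAMENODE_OPTS}"
```

Setting CMSInitiatingOccupancyFraction explicitly (with UseCMSInitiatingOccupancyOnly) makes the concurrent collection start at a predictable heap occupancy rather than relying on the JVM's adaptive heuristics, which matters for the NameNode's large, long-lived heap.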
10-30-2015
09:35 PM
1 Kudo
If you are asking about iptables: either keep iptables on with exceptions opened for the required ports, or let Knox handle perimeter access.