Member since: 12-09-2015
42 Posts
16 Kudos Received
1 Solution
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3698 | 06-15-2016 07:26 PM |
06-20-2016 01:55 PM
Great, @Matjaz Skerjanec, glad to help! How about selecting one of the answers as "Accepted" so that other folks know this question is closed? Also, I'm curious which version of OneFS your cluster is running. Is it 7.2.something or 8.0.something?
06-17-2016 06:01 PM
1 Kudo
Are you also the administrator of the Isilon cluster? If not, ask the administrator to grant you access to the web administration UI or SSH access to the cluster, or ask to be added to email alert notifications for your zone and to an HDFS traffic report from InsightIQ.

From the HDP side, the only direct reassurance you can currently get is seeing that the HDFS service is green. We recognize that doesn't match the expectations of an Ambari administrator, so we hope to provide more data directly to Ambari in the near future. Otherwise, within Ambari you will be looking at the cluster from an application perspective. For example, you can check that services are running correctly -- the YARN service check is a great test; if OneFS is not configured correctly, it won't pass.

Within OneFS, the first page of the administrator dashboard will tell you the status of the cluster. From the console, run isi status for the same data. InsightIQ, our cluster analytics dashboard package, is now offered as a free option by Isilon. With it you can watch HDFS protocol traffic and inspect cluster capacity and other metrics over time. Give that a spin, or ask your Isilon administrator to set up a report for you!
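If you'd rather script that service check than click through the UI, here's a minimal sketch of the request body you can POST to Ambari's REST endpoint at /api/v1/clusters/YOUR_CLUSTER/requests (for example with curl -u admin -H "X-Requested-By: ambari"); the cluster name and credentials are placeholders for your environment:

{
  "RequestInfo" : {
    "context" : "YARN Service Check (manual)",
    "command" : "YARN_SERVICE_CHECK"
  },
  "Requests/resource_filters" : [
    { "service_name" : "YARN" }
  ]
}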
06-17-2016 05:43 PM
Here's what I used after guidance from @Robert Levas:

"security" : {
"type" : "KERBEROS",
"kerberos_descriptor": {
"properties": {
"realm" : "YOURREALMHERE",
"keytab_dir" : "/etc/security/keytabs"
},
"identities" : [
{
"principal" : {
"type" : "user",
"value" : "ambari-qa@${realm}"
},
"name" : "smokeuser"
}
],
"services" : [
{
"name" : "HDFS",
"components" : [
{
"name" : "NAMENODE",
"identities" : [
{
"principal" : {
"configuration" : "hadoop-env/hdfs_principal_name",
"type" : "user",
"local_username" : "hdfs",
"value" : "hdfs@${realm}"
},
"name" : "hdfs"
}
]
}
]
},
{
"name" : "MAPREDUCE2",
"components" : [
{
"name" : "HISTORYSERVER",
"identities" : [
{
"principal" : {
"configuration" : "mapred-site/mapreduce.jobhistory.principal",
"type" : "service",
"local_username" : "mapred",
"value" : "mapred/_HOST@${realm}"
},
"name" : "history_server_jhs"
}
]
}
]
},
{
"name" : "SPARK",
"identities" : [
{
"principal" : {
"configuration" : "spark-defaults/spark.history.kerberos.principal",
"type" : "user",
"local_username" : "spark",
"value" : "spark@${realm}"
},
"name" : "sparkuser"
}
]
},
{
"name" : "ACCUMULO",
"identities" : [
{
"principal" : {
"configuration" : "accumulo-env/accumulo_principal_name",
"type" : "user",
"local_username" : "accumulo",
"value" : "accumulo@${realm}"
},
"name" : "accumulo"
},
{
"principal" : {
"configuration" : "accumulo-site/trace.user",
"type" : "user",
"local_username" : "accumulo",
"value" : "accumulo@${realm}"
},
"name" : "accumulo_tracer"
}
]
},
{
"name" : "HBASE",
"identities" : [
{
"principal" : {
"configuration" : "hbase-env/hbase_principal_name",
"type" : "user",
"local_username" : "hbase",
"value" : "hbase@${realm}"
},
"name" : "hbase"
}
]
},
{
"name" : "YARN",
"components" : [
{
"name" : "NODEMANAGER",
"identities" : [
{
"principal" : {
"configuration" : "yarn-site/yarn.nodemanager.principal",
"type" : "service",
"local_username" : "yarn",
"value" : "yarn/_HOST@${realm}"
},
"name" : "nodemanager_nm"
}
]
},
{
"name" : "RESOURCEMANAGER",
"identities" : [
{
"principal" : {
"configuration" : "yarn-site/yarn.resourcemanager.principal",
"type" : "service",
"local_username" : "yarn",
"value" : "yarn/_HOST@${realm}"
},
"name" : "resource_manager_rm"
}
]
}
]
},
{
"name" : "STORM",
"identities" : [
{
"principal" : {
"configuration" : "storm-env/storm_principal_name",
"type" : "user",
"value" : "storm@${realm}"
},
"name" : "storm_components"
}
]
},
{
"name" : "FALCON",
"configurations" : [
{
"falcon-startup.properties" : {
"*.dfs.namenode.kerberos.principal" : "hdfs/_HOST@${realm}"
}
}
]
}
]
}
},
"configurations" : [
{
"kerberos-env": {
"properties_attributes" : { },
"properties" : {
"realm" : "YOURREALM",
"kdc_type" : "mit-kdc",
"kdc_host" : "my-real-kdc.emc.com",
"admin_server_host" : "my-real-ambari-server.emc.com"
}
}
},
{
"krb5-conf": {
"properties_attributes" : { },
"properties" : {
"domains" : "YOURREALM",
"manage_krb5_conf" : "true"
}
}
}
]
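For orientation: the "security" block and the "configurations" array above both sit at the top level of the cluster creation template. Here's a rough sketch of the overall shape -- the blueprint name, password, and KDC admin values below are placeholders, and the "credentials" entry is how I recall supplying the KDC admin principal to Ambari, so verify it against your Ambari version's blueprint documentation:

{
  "blueprint" : "my-kerberized-blueprint",
  "default_password" : "changeme",
  "credentials" : [
    {
      "alias" : "kdc.admin.credential",
      "principal" : "admin/admin@YOURREALMHERE",
      "key" : "the-kdc-admin-password",
      "type" : "TEMPORARY"
    }
  ],
  "host_groups" : [ "...host group to FQDN mappings..." ],
  "security" : { "...the block above..." },
  "configurations" : [ "...kerberos-env and krb5-conf above..." ]
}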
06-17-2016 05:41 PM
@Robert Levas Fantastic, thanks for your help! That small sample was the reassurance I needed on which path to follow. A bit of background: OneFS isn't actually a separate service in Ambari. The OneFS cluster presents itself as a single FQDN, which is simply added as an additional host within Ambari Server. For clarity, here's what the blueprint block looks like for OneFS:

"host_groups" : [
{
"name" : "onefs_group",
"components" : [
{"name" : "DATANODE"},
{"name" : "NAMENODE"},
{"name" : "SECONDARY_NAMENODE"},
{"name" : "KERBEROS_CLIENT"}
],
"cardinality" : "1"
},
I'll update the original question with the security block I used. Robert, this post you did in April was also really helpful as I tracked down my last typo. https://community.hortonworks.com/articles/28734/obtaining-a-listing-of-expected-kerberos-identitie.html Thanks again!
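For completeness, the matching entry in the cluster template simply maps that host group to the OneFS SmartConnect name. A minimal sketch, with a placeholder FQDN:

"host_groups" : [
  {
    "name" : "onefs_group",
    "hosts" : [
      { "fqdn" : "my-onefs-cluster.example.com" }
    ]
  }
]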
06-15-2016 07:26 PM
2 Kudos
@Matjaz Skerjanec: As @emaxwell said, this stack trace is clunky, but it's actually a good thing. OneFS is unable to respond successfully to "hdfs fsck" because fsck is not how Isilon protects, manages, or alerts on data integrity. Here are a few quotes from the Isilon documentation.

From the OneFS Technical Overview (https://www.emc.com/collateral/hardware/white-papers/h10719-isilon-onefs-technical-overview-wp.pdf): "No expensive 'fsck' or 'disk-check' processes are ever required. No drawn-out resynchronization ever needs to take place. Writes are never blocked due to a failure. The patented transaction system is one of the ways that OneFS eliminates single -- and even multiple -- points of failure."

From OneFS Hardware Fault Tolerance (https://community.emc.com/community/products/isilon/blog/2015/03/25/onefs-hardware-fault-tolerance): "In the event that the recomputed checksum does not match the stored checksum, OneFS will generate a system alert, log the event, retrieve and return the corresponding error correcting code (ECC) block to the client and attempt to repair the suspect data block."

If you'd like to learn more, I suggest a web search for "Isilon fsck" or "Isilon data integrity" and looking through the material that comes up. It's a window into the value of OneFS as the storage host in your Hadoop cluster.
06-14-2016 09:42 PM
It's not clear which version of Ambari this is for -- maybe 2.0? The current steps (Ambari 2.1/2.2, HDP 2.3/2.4, and OneFS 7.2.1.2/8.0.0.1) for completing Kerberos configuration most easily are linked in the answer to this question: https://community.hortonworks.com/questions/38583/configure-kerberos-using-ambari-with-emc-isilon.html
06-10-2016 11:48 PM
1 Kudo
@Timothy Spann, I think the easiest way to track OneFS support for new versions of Ambari and HDP is to visit the ECN and follow this page: https://community.emc.com/docs/DOC-37101 . One thing to note is that this guide is written for 7.2.x. In 8.0.0.0 the command-line structure changed, so refer to the new CLI guide; Hadoop starts on page 997: http://www.emc.com/collateral/TechnicalDocument/docu65065.pdf .
06-10-2016 10:59 PM
Update on this article -- recent OneFS releases do support Rolling Upgrade of HDP. Here's the improved procedure: https://community.emc.com/community/products/isilon/blog/2016/01/07/rolling-upgrade-of-hdp-23-with-onefs
06-09-2016 11:40 PM
I guess I wasn't clear enough in the title; it should be "How do I create custom Kerberos principals during cluster creation?" As I said in the body, I'm able to Kerberize just fine, and Ambari is successfully creating principals in my KDC. However, I need to customize a number of principals, and I've only been successful with smokeuser (aka ambari-qa). Specifically I need to change:

- UPNs: HDFS, Accumulo Tracer, Accumulo, HBase, Storm, Spark
- SPNs: HDFS NameNode, YARN Resource Manager, YARN Node Manager, MapReduce2 Job History Server

If I could get an example of one non-smokeuser UPN and one SPN, and assurance that it works in both a blueprint and a cluster template, I think I can do the rest.
06-09-2016 10:06 PM
1 Kudo
Using Ambari 2.2.2 to install HDP 2.4.2, I want to use a blueprint and cluster template to create a new Kerberized cluster. I have tried a number of different approaches (submit a Kerberos descriptor JSON separately and invoke it with kerberos_descriptor_reference in the blueprint; embed the kerberos_descriptor JSON in the blueprint; embed the kerberos_descriptor JSON in the cluster template), but the principals in my Kerberos descriptor are not being created, and stack/service/component defaults are used instead. I assume I've had typos or logical errors in my attempts. Since I am deploying with OneFS, I need to modify a handful of principals per the recent guide linked from this question. The most direct mention of principals I could find was in AMBARI-14516 by @Robert Levas, so I tried to mimic it exactly in one of my attempts. I did not include a kerberos_descriptor field in my blueprint; I just used this:

"Blueprints" : {
"stack_name" : "HDP",
"stack_version" : "2.4",
"security" : {
"type" : "KERBEROS"
}
}
Then in my cluster template I included this:

"security" : {
"type" : "KERBEROS",
"kerberos_descriptor": {
"properties": {
"realm" : "WORKINGREALM.COM",
"keytab_dir" : "/etc/security/keytabs"
},
"identities" : [
{
"principal" : {
"type" : "user",
"value" : "ambari-qa@WORKINGREALM.COM"
},
"name" : "smokeuser"
},
{
"principal" : {
"type" : "user",
"value" : "hdfs@WORKINGREALM.COM"
},
"name" : "hdfs"
}
]
}
},
"configurations" : [
{
"kerberos-env": {
"properties_attributes" : { },
"properties" : {
"realm" : "WORKINGREALM.COM",
"kdc_type" : "mit-kdc",
"kdc_host" : "therealhost.fqdn.com",
"admin_server_host" : "therealhost.fqdn.com"
}
}
},
{
"krb5-conf": {
"properties_attributes" : { },
"properties" : {
"domains" : "WORKINGREALM.COM",
"manage_krb5_conf" : "true"
}
}
}
]
The ambari-qa UPN is created properly per my definition here, but the hdfs UPN still uses the default structure of hdfs-${cluster_name}@${realm}. How do I override that? I tried nesting the hdfs UPN in the "services" : [ { "components" : {} } ] structure, but didn't have success with that either (see the excerpt at the end of this post for the nesting that eventually worked). If you can tell me a way that definitely works with Ambari 2.2.2, then I can figure out what I was doing wrong when I made that type of attempt.

Another question to consider: how do I debug why a Kerberos descriptor JSON block is not having my anticipated result? There is nothing in the ambari-server logs whether it succeeds or fails -- simply "Processing principal, xyz" and "Creating keytab file for xyz" with no indication of what led to that.

(By the way -- and I'm getting off track here -- the Automated+Kerberization guide is fantastic. I am casting my vote for a quick update to the Kerberos Descriptor section to cover user-submitted KDs, blueprints with an internal KD, and cluster templates with a KD, and how those overlay with stack/service/component KDs. Here are some of the questions I have that go unanswered because only stack/service/component is discussed: Can my cluster template JSON reference values in a Kerberos descriptor JSON and in a stack JSON, or do they need to be declared at that level? Which overrides which? Why is there an escaped text blob in the KD JSON but an artifact blob in the KD JSON extracted from a blueprint?)
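Update for readers landing on this question first: the nesting that eventually worked for me scopes the identity override under the owning service and component. This excerpt is taken from the full descriptor in my accepted answer above (the hdfs UPN case):

"services" : [
  {
    "name" : "HDFS",
    "components" : [
      {
        "name" : "NAMENODE",
        "identities" : [
          {
            "principal" : {
              "configuration" : "hadoop-env/hdfs_principal_name",
              "type" : "user",
              "local_username" : "hdfs",
              "value" : "hdfs@${realm}"
            },
            "name" : "hdfs"
          }
        ]
      }
    ]
  }
]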
Labels:
- Apache Ambari