Member since: 09-15-2015
Posts: 457
Kudos Received: 507
Solutions: 90
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 15545 | 11-01-2016 08:16 AM |
| | 10964 | 11-01-2016 07:45 AM |
| | 8334 | 10-25-2016 09:50 AM |
| | 1882 | 10-21-2016 03:50 AM |
| | 3698 | 10-14-2016 03:12 PM |
11-02-2015
12:04 PM
Thanks! Sure, we could post it on our blog.
11-02-2015
12:03 PM
4 Kudos
You're probably running into the latest JDK/Kerberos issue. Which JDK version do you have? 1.8.0_40? JDK 1.8.0_40 has a bug and throws the following error:

GSSException: Defective token detected (Mechanism level: GSSHeader did not find the right tag)
The token supplied by the client is not accepted by the server.

This happens because Windows sends an NTLM-based ticket rather than a Kerberos-based ticket. I hit this issue a couple of weeks ago; it is fixed in 1.8.0_60, so the easiest fix is to upgrade the JDK on your nodes.
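A quick way to confirm which JDK a node is actually running; the hostnames in the loop below are placeholders, not from the original question:

```bash
# Print the JDK version on this node; 1.8.0_40 is affected, 1.8.0_60 contains the fix
java -version 2>&1 | head -n 1

# Check several nodes at once (hostnames are placeholders)
for host in node1.example.com node2.example.com; do
  ssh "$host" "java -version 2>&1 | head -n 1"
done
```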
10-30-2015
01:26 PM
He did use SolrCloud mode for the PutSolrContentStream processor. I used Solr standalone, so it should work either way 🙂
10-30-2015
05:54 AM
1 Kudo
Actually, we have an unofficial Ambari service for IPython => https://github.com/randerzander/jupyter-service
10-29-2015
10:24 PM
3 Kudos
+ Security
+ Requirements (Python libs, OS, ...)
+ Export/Import/Save options
+ Data connectors (file, DB, HDFS, ...)
10-29-2015
09:26 PM
3 Kudos
Regarding Option 1: I was under the impression that you can use xxxx-env blocks without defining the whole block within blueprints; e.g., I recently used a blueprint to add the existing MySQL configuration to "hive-env". I didn't check hive-env after the blueprint installation via the Ambari API, so I am not sure whether hive-env was messed up or not (I'll run another test in a week or so); Hive did start, though.

Regarding Option 2: You can use a simple script; here are the necessary steps (shown below with curl).

1. Find the current config version ("tag" field), e.g. for cluster-env:

curl -H "X-Requested-By:ambari" -u admin:your-pw -X GET "http://c6601.ambari.apache.org:8080/api/v1/clusters/bigdata?fields=Clusters/desired_configs/cluster-env"

Response:

{
"href" : "http://c6601.ambari.apache.org:8080/api/v1/clusters/bigdata?fields=Clusters/desired_configs/cluster-env",
"Clusters" : {
"cluster_name" : "bigdata",
"version" : "HDP-2.3",
"desired_configs" : {
"cluster-env" : {
"tag" : "version1434441386979",
"user" : "admin",
"version" : 7
}
}
}
}

The "tag" field is the important part => version1434441386979

2. Export the current config (example result below):

curl -H "X-Requested-By:ambari" -u admin:your-pw -X GET "http://c6601.ambari.apache.org:8080/api/v1/clusters/bigdata/configurations?type=cluster-env&tag=version1434441386979"

Response (truncated):

{
"href":"http://c6601.ambari.apache.org:8080/api/v1/clusters/bigdata/configurations?type=cluster-env&tag=version1434441386979",
"items":[
{
"href":"http://c6601.ambari.apache.org:8080/api/v1/clusters/bigdata/configurations?type=cluster-env&tag=version1434441386979",
"tag":"version1434441386979",
"type":"cluster-env",
"version":7,
"Config":{
"cluster_name":"bigdata"
},
"properties":{
...
"smokeuser_keytab":"/etc/security/keytabs/smokeuser.headless.keytab",
"smokeuser_principal_name":"ambari-qa@EXAMPLE.COM",
"sqoop_tar_destination_folder":"hdfs:///hdp/apps/{{ hdp_stack_version }}/sqoop/",
"sqoop_tar_source":"/usr/hdp/current/sqoop-client/sqoop.tar.gz",
"tez_tar_destination_folder":"hdfs:///hdp/apps/{{ hdp_stack_version }}/tez/",
"tez_tar_source":"/usr/hdp/current/tez-client/lib/tez.tar.gz",
"user_group":"hadoop"
}
}
]
}

3. Prepare the new configuration by copying all properties from above into the following template.

Template:

{
"Clusters" : {
"desired_configs" : {
"type" : "cluster-env",
"tag" : "version<INSERT_CURRENT_TIMESTAMP_IN_MILLISECONDS>",
"properties" : {
<INSERT_EXPORTED_CONFIGURATION_PROPERTIES_FROM_ABOVE>
}
}
}
}

For example, let's say my user_group is no longer hadoop; from now on it's horton:

{
"Clusters" : {
"desired_configs" : {
"type" : "cluster-env",
"tag" : "version<INSERT_CURRENT_TIMESTAMP_IN_MILLISECONDS>",
"properties" : {
...
"smokeuser_keytab" : "/etc/security/keytabs/smokeuser.headless.keytab",
"smokeuser_principal_name" : "ambari-qa@EXAMPLE.COM",
"sqoop_tar_destination_folder" : "hdfs:///hdp/apps/{{ hdp_stack_version }}/sqoop/",
"sqoop_tar_source" : "/usr/hdp/current/sqoop-client/sqoop.tar.gz",
"tez_tar_destination_folder" : "hdfs:///hdp/apps/{{ hdp_stack_version }}/tez/",
"tez_tar_source" : "/usr/hdp/current/tez-client/lib/tez.tar.gz",
"user_group" : "horton"
}
}
}
}

4. Final step: upload the new configuration to the cluster (a short sketch of this step follows below):

curl -H "X-Requested-By:ambari" -u admin:your-pw -X PUT "http://c6601.ambari.apache.org:8080/api/v1/clusters/bigdata" -d @<JSON_PAYLOAD_FROM_ABOVE>

DONE 🙂 Hope it helps

Jonas
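A minimal bash sketch of steps 3 and 4, assuming GNU date (for millisecond timestamps) and a file new-cluster-env.json that already contains the filled-in template from step 3; the file name and the placeholder substitution are my assumptions, not part of the original steps:

```bash
# Generate the tag suffix as the current timestamp in milliseconds (requires GNU date).
TAG_MS="$(date +%s%3N)"

# Fill the timestamp placeholder from the template (step 3) into a ready-to-upload payload.
# new-cluster-env.json is an assumed file holding the template with the real properties.
sed "s/<INSERT_CURRENT_TIMESTAMP_IN_MILLISECONDS>/${TAG_MS}/" new-cluster-env.json > payload.json

# Upload the new desired configuration to the cluster (step 4).
curl -H "X-Requested-By:ambari" -u admin:your-pw -X PUT \
  "http://c6601.ambari.apache.org:8080/api/v1/clusters/bigdata" \
  -d @payload.json
```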
10-29-2015
08:38 PM
3 Kudos
Hi Terry, in a secured cluster you have two types of keytabs/principals: headless and service principals.

Headless principals are not bound to a specific host or node; they have the syntax: <service_name>-<clustername>@EXAMPLE.COM

Service principals are bound to a specific service and host or node; they have the syntax: <service-name>/<hostname>@EXAMPLE.COM

For example:
Headless: hdfs-mycluster@EXAMPLE.COM
Service: nn/c6601.ambari.apache.org@EXAMPLE.COM

Here is some more info: https://docs.oracle.com/cd/E21455_01/common/tutorials/kerberos_principal.html

Make sure you use the right principal when you run kinit; you can see the principals in a keytab with klist -k <keytab file>
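For example, a quick way to inspect a keytab and authenticate with the matching principal; the keytab path below is only an illustration of a typical HDP default:

```bash
# List the principals stored in a keytab (shows whether they are headless or service principals)
klist -kt /etc/security/keytabs/hdfs.headless.keytab

# kinit with exactly the principal shown by klist
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-mycluster@EXAMPLE.COM

# Verify the ticket was obtained
klist
```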
10-29-2015
09:17 AM
3 Kudos
You should not remove old packages by deleting those folders via "rm -rf ..."; there are many locations where HDP stores files, packages, etc. You can uninstall old versions, for example, via:

yum erase -y \*2_3_0_0_2557\*

Make sure you are not removing any current packages. The command above removes the packages for the specified version; however, you will still find some old log and config files (I usually keep them as a reference, but I think you can remove them without any problems).
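To be on the safe side, it can help to preview what matches the old version string before erasing anything; a small sketch using the same version as above:

```bash
# Preview the installed packages that match the old HDP version
yum list installed | grep 2_3_0_0_2557

# If nothing from the current version shows up, remove the old packages
yum erase -y \*2_3_0_0_2557\*
```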
10-29-2015
08:21 AM
Awesome tutorial, thanks for sharing 🙂
10-28-2015
11:16 AM
1 Kudo
Thanks @Neeraj. I also found these two books: "Pro Apache Hadoop" and "Hadoop: The Definitive Guide". Both basically say that mapreduce.input.fileinputformat.split.minsize < dfs.blocksize < mapreduce.input.fileinputformat.split.maxsize. SmartSense recommended 105 MB (minsize) and 270 MB (maxsize). Our current block size setting is 64 MB, but SmartSense also recommended a 128 MB block size, which roughly fits the min/max recommendations as well as the descriptions from the books.
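For illustration only, here is one way those recommendations could be applied to a single job via generic options; the byte values correspond to 105 MB and 270 MB, and the example jar, program, and HDFS paths are placeholders, not from the original thread:

```bash
# Per-job override of the recommended split sizes (values in bytes: 105 MB and 270 MB).
# The examples jar, the wordcount program, and the HDFS paths are placeholders.
hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar wordcount \
  -Dmapreduce.input.fileinputformat.split.minsize=110100480 \
  -Dmapreduce.input.fileinputformat.split.maxsize=283115520 \
  /tmp/input /tmp/output
```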