Member since: 09-23-2013
Posts: 238
Kudos Received: 72
Solutions: 28
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1853 | 08-13-2019 09:10 AM
 | 3275 | 07-22-2015 08:51 AM
 | 7184 | 06-29-2015 06:43 AM
 | 5039 | 06-02-2015 05:17 PM
 | 21084 | 05-23-2015 04:48 AM
06-26-2014
11:10 PM
For the [domain_realm] section, focus on the domain-to-realm mapping:

[domain_realm]
.yeahmobi.com = HADOOP.COM
yeahmobi.com = HADOOP.COM

Reading the above:

.yeahmobi.com = HADOOP.COM handles any host under the domain (any_hostname.yeahmobi.com) being mapped to the realm HADOOP.COM.
yeahmobi.com = HADOOP.COM handles the exact hostname yeahmobi.com being mapped to the realm HADOOP.COM.

The hostname-only references in your [domain_realm] section are not valid.

Also make sure you have deployed the JCE policy files for the version of the JDK you are using in the cluster. The error indicates your Kerberos configuration is using AES-256 keys, which are a strong form of encryption key, and the default JDK does not ship with ciphers of that strength enabled. The jar files get copied into your /usr/java/jdk1.*/jre/lib/security path, replacing the existing ones. Restart services so the JVM comes up ready to use the strong-encryption (AES-256) ciphers.

You can obtain the proper JDK version's JCE policy files here:
JDK 1.6: http://www.oracle.com/technetwork/java/javase/downloads/jce-6-download-429243.html
JDK 1.7: http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html
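The matching rules above can be sketched as a small lookup function. This is purely an illustration of standard MIT krb5 [domain_realm] behavior (longest matching suffix wins); the function and variable names are hypothetical, not part of any krb5 library:

```python
# Illustrative sketch of krb5 [domain_realm] matching; the real logic
# lives inside the MIT krb5 libraries.

def realm_for_host(hostname, domain_realm):
    """Map a hostname to a realm using [domain_realm]-style rules."""
    host = hostname.lower().rstrip(".")
    # An exact entry (no leading dot) matches that host name itself.
    if host in domain_realm:
        return domain_realm[host]
    # A leading-dot entry matches any host under that domain; krb5
    # tries the longest matching suffix first.
    parts = host.split(".")
    for i in range(1, len(parts)):
        suffix = "." + ".".join(parts[i:])
        if suffix in domain_realm:
            return domain_realm[suffix]
    return None

mapping = {
    ".yeahmobi.com": "HADOOP.COM",
    "yeahmobi.com": "HADOOP.COM",
}
print(realm_for_host("worker01.yeahmobi.com", mapping))  # HADOOP.COM
print(realm_for_host("yeahmobi.com", mapping))           # HADOOP.COM
```

A hostname with no matching entry (for example, a bare short name) falls through to None, which is why hostname-only entries in the section do not behave the way you might expect.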
06-26-2014
07:37 AM
Did you deploy the client configuration from Cloudera Manager? What is in your krb5.conf? What does klist -ef show after you kinit as your HDFS user? What OS distro and version are you on?
02-24-2014
08:22 AM
2 Kudos
Consider the following examples. First, the /etc/krb5.conf. In this example a second realm (Active Directory) is configured for cross-realm authentication, with AES-256 encryption being used by AD. Using AES-256 means that one must install the JCE Policy Files for JDK 6 or the JCE Policy Files for JDK 7 to use strong encryption like AES-256. Note the items called out; they should be set in that specific file (krb5.conf).

[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
default_realm = TEST.LAB
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
[realms]
TEST.ORG.LAB = {
kdc = Win2k8x64-AD4.test.org.lab:88
kdc = Win2k8x64-AD2.test.org.lab:88
admin_server = Win2k8x64-AD4.test.org.lab:749
admin_server = Win2k8x64-AD2.test.org.lab:749
default_domain = test.org.lab
}
TEST.LAB = {
kdc = kdc1.test.lab:88
admin_server = kdc1.test.lab:749
default_domain = test.lab
}
[domain_realm]
.test.lab = TEST.LAB
test.lab = TEST.LAB
.test.org.lab = TEST.ORG.LAB
test.org.lab = TEST.ORG.LAB

Next, consider the following for /var/kerberos/krb5kdc/kdc.conf, again calling out the items to set in this file:

[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88
[realms]
TEST.LAB = {
#master_key_type = aes256-cts
max_renewable_life = 7d 0h 0m 0s
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
default_principal_flags = +renewable
}
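The supported_enctypes value above packs several enctype:salttype pairs into one space-separated setting. A small Python sketch (a hypothetical helper for illustration, not part of any krb5 tooling) shows how the value breaks apart:

```python
# Split a kdc.conf supported_enctypes value into (enctype, salttype)
# pairs; krb5 treats a missing salttype as "normal".

def parse_supported_enctypes(value):
    pairs = []
    for token in value.split():
        enctype, _, salt = token.partition(":")
        pairs.append((enctype, salt or "normal"))
    return pairs

line = ("aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal "
        "arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal "
        "des-cbc-crc:normal")
for enc, salt in parse_supported_enctypes(line):
    print(enc, salt)
```

Listing aes256-cts first is what requires the JCE policy files mentioned earlier; keys are generated for every enctype in the list.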
02-19-2014
07:23 AM
In addition to the blog link, the complete catalog of ports supporting the integration is here: http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Cloudera-Manager-Installation-Guide/cmig_config_ports.html It breaks the ports down by CDH component, CM, and so on.
01-03-2014
01:28 PM
Can you describe what your network configuration is within the cluster? More specifically, consider the following questions you should be verifying within your deployment (don't post hostnames or IPs, please).

I believe EC2 nodes are multi-homed. Validate for yourself what the host naming is resolving to across those interfaces, and look at what forward and reverse lookups are returning as well. Some of the network configurations for components have a "wildcard" name that can be found when you search within a service's configuration settings; this is so the service listens on "all" interfaces.

From both the EC2 cluster nodes you are trying to connect to and your VM, please evaluate what comes back for this command line, in comparison to the naming you are using between your VM and the EC2 environment:

# python -c "import socket; print socket.getfqdn(); print socket.gethostbyname(socket.getfqdn())"
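For reference, here is a Python 3 sketch of the same check (the one-liner above is Python 2 syntax) that also compares the reverse lookup; it is a diagnostic illustration, not a Cloudera tool:

```python
import socket

# Report the FQDN, the address it resolves to, and what a reverse
# lookup of that address returns; on a correctly configured node the
# forward and reverse names should agree.
fqdn = socket.getfqdn()
try:
    addr = socket.gethostbyname(fqdn)
except socket.gaierror:
    addr = None

reverse = None
if addr is not None:
    try:
        reverse = socket.gethostbyaddr(addr)[0]
    except (socket.herror, socket.gaierror):
        pass

print("forward:", fqdn, "->", addr)
print("reverse:", addr, "->", reverse)
if addr is None or reverse != fqdn:
    print("WARNING: forward/reverse lookups are missing or disagree")
```

Run it on each node and compare what it prints against the hostnames your services are actually configured with.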
12-02-2013
09:31 PM
yep you got it!
12-02-2013
09:30 PM
1 Kudo
Your hosts file should look like this on all nodes, where cehd3.test.lab is the name of the cluster node and its IP:

[root@cehd3 conf]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.100.101.43 cehd3.test.lab cehd3

Once that is in place (regardless of DNS config), confirm with the following:

python -c "import socket; print socket.getfqdn(); print socket.gethostbyname(socket.getfqdn())"
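The important detail is the field order on the cluster-node line: IP first, then the fully qualified name, then the short name. A small sketch (hypothetical helper, just to illustrate the rule) checks that shape:

```python
# Validate that a cluster /etc/hosts entry has the expected
# "IP  fqdn  shortname" shape, with the FQDN before the short name.

def check_hosts_line(line):
    fields = line.split()
    if len(fields) < 3:
        return False
    ip, fqdn, short = fields[0], fields[1], fields[2]
    # The second field must be fully qualified, and the short name
    # should be the first label of that FQDN.
    return "." in fqdn and fqdn.split(".")[0] == short

print(check_hosts_line("10.100.101.43 cehd3.test.lab cehd3"))  # True
print(check_hosts_line("10.100.101.43 cehd3 cehd3.test.lab"))  # False
```

Putting the short name before the FQDN (the second example) is a common mistake that makes getfqdn() return the bare hostname.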
11-13-2013
10:23 AM
2 Kudos
(Yes, the reboot after changing the SELinux config is necessary.)
11-13-2013
10:22 AM
1 Kudo
Set SELinux to disabled and reboot. You can set it back to permissive after the install.
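To confirm what the node will come up with after the reboot, you can read the mode straight out of /etc/selinux/config. This sketch parses that setting from the file's text (an illustrative helper; point it at the real file on your node):

```python
# Pull the SELINUX= mode out of /etc/selinux/config-style text.

def selinux_mode(config_text):
    for line in config_text.splitlines():
        line = line.strip()
        if line.startswith("SELINUX="):
            return line.split("=", 1)[1]
    return None

sample = """\
# This file controls the state of SELinux on the system.
SELINUX=disabled
SELINUXTYPE=targeted
"""
print(selinux_mode(sample))  # disabled
```

Note this reports the configured boot-time mode; the currently running mode is what `getenforce` shows, and the two differ until you reboot.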
11-05-2013
10:32 AM
1 Kudo
We're sorry this is happening, but realize we have a large install base that uses CM for managing and deploying clusters without issue. Generally the problems people run into with installation are related to:

- Mixing installation steps: the documentation provides multiple paths of approach to installation, and mixing steps between them can cause issues.
- Name resolution: forward and reverse lookup must work for all nodes.
- Attempting to use DHCP.
- Under-sized environment (VMs with minimal memory).
- /etc/hosts files not containing, as the first entry after the localhost/IPv6 entries (note: Hadoop does not use IPv6 yet): [IP address] host.fqdn.name hostname
- SELinux or firewalls mistakenly still enabled on nodes.
- Attempting to install as a non-root user / passwordless sudoers configuration mistakes.
- EXT3 filesystem instead of EXT4 filesystem.
- Using RAID or LVM disk groups rather than presenting raw disk.

The "host inspector" function examines hosts and reports the most common flaws, but that generally does not come up until you have finished cluster role assignments.

/var/log/cloudera-scm-server provides logging of CM server issues.
/var/log/cloudera-scm-agent provides logging seen through the agent configuration.
/var/run/cloudera-scm-agent/process/[###]-SERVICE-Instance/ provides current runtime information for deployed parcel services started by CM.

Todd