Member since: 03-25-2017
Posts: 47
Kudos Received: 0
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4278 | 10-08-2018 06:21 PM |
| | 5092 | 09-17-2018 11:33 PM |
10-14-2018 09:01 AM
Hi, you may try the workarounds below:

1) Generally, operations teams create a client (edge) system and allow access to the production cluster from there, rather than giving access to a datanode. So if it's just a client, you can use the previous solution.
2) If you really want to read data from cluster 1 while on cluster 2, you can try using the namenode IP rather than the nameservice: `hdfs dfs -ls hdfs://namenode-ip:port/`
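A minimal sketch of option 2, assuming the active NameNode of cluster 1 is reachable as nn1.cluster1.example.com on the default RPC port 8020 (both are placeholders; check dfs.namenode.rpc-address in cluster 1's hdfs-site.xml):

```bash
# List a cluster 1 directory from a cluster 2 client by addressing the
# NameNode directly instead of going through the local nameservice
hdfs dfs -ls hdfs://nn1.cluster1.example.com:8020/user/produser/
```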
10-08-2018 06:21 PM
Thanks @bgooley. I solved this by upgrading the OS and Kerberos version. It works fine for me now. Thanks for your help.
10-05-2018 12:57 PM
For me too, kinit works and ZooKeeper and the namenode start, but the datanode fails to connect to the namenode, and then the complete cluster comes down.
10-05-2018 07:09 AM
After enabling Kerberos, the datanode started failing to connect to the namenode.

Errors in the datanode log:

```
WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hdfs/hdp-3.com@CDH.HDP (auth:KERBEROS) cause:java.io.IOException: Couldn't setup connection for hdfs/hdp-3.com@CDH.HDP to hdp-1.com/192.1.1.1:8022
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: hdp-1.com/192.1.1.1:8022
WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hdfs/hdp-3.com@CDH.HDP (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Ticket expired (32) - PROCESS_TGS)]
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool ID needed, but service not yet registered with NN, trace: java.lang.Exception
```

krb5.conf (cat /etc/krb5.conf):

```
[libdefaults]
 default_realm = CDH.HDP
 dns_lookup_kdc = false
 dns_lookup_realm = false
 ticket_lifetime = 86400
 renew_lifetime = 604800
 forwardable = true
 default_tgs_enctypes = des-cbc-crc aes des-cbc-md5 arcfour-hmac rc4
 default_tkt_enctypes = des-cbc-crc aes des-cbc-md5 arcfour-hmac rc4
 permitted_enctypes = des-cbc-crc aes des-cbc-md5 arcfour-hmac rc4
 udp_preference_limit = 1
 kdc_timeout = 10000

[realms]
 CDH.HDP = {
  kdc = hdp-2.com
  admin_server = hdp-2.com
  default_domain = cdh.hdp
 }

[domain_realm]
 cdh.hdp = CDH.HDP
```

kdc.conf:

```
[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 CDH.HDP = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }
```

Please help to resolve this.
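A hypothetical sanity check you can run on the datanode host to confirm its credentials outside Hadoop (the keytab path is a placeholder; Cloudera Manager generates per-process keytabs under /var/run/cloudera-scm-agent/process/, so locate the hdfs.keytab for your DataNode role there):

```bash
# Obtain a TGT directly from the datanode's keytab, then inspect it;
# klist -e prints the encryption type of each ticket, which helps spot
# enctype mismatches between krb5.conf and the KDC's supported_enctypes
kinit -kt /path/to/hdfs.keytab hdfs/hdp-3.com@CDH.HDP
klist -e
```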
09-18-2018 12:04 AM
I see the below error in the log:

java.lang.OutOfMemoryError: Java heap space

So I would like to know how much heap memory you have allocated right now. Can you try increasing the heap size of the datanode, as sketched below?
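In Cloudera Manager this is the DataNode Java heap size setting; outside CM, one way is via hadoop-env.sh (a sketch only; the 4 GB value is illustrative, size it to your hosts):

```bash
# In hadoop-env.sh: raise the DataNode JVM heap (example value)
export HADOOP_DATANODE_OPTS="-Xmx4g -Xms4g $HADOOP_DATANODE_OPTS"
```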
09-17-2018 11:49 PM
I suggest you create two Linux user accounts, for cluster1 and cluster2 respectively, and configure each user's .bashrc. For example:

1. Create two user accounts, produser (prod) and druser (dr).
2. Create two HDFS config directories, "/mnt/hadoopprod/conf" and "/mnt/hadoopdr/conf".
3. Configure the Hadoop config directory for each user in its ~/.bashrc file, as sketched below.
4. Switch user and use the corresponding cluster 🙂
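A minimal sketch of the ~/.bashrc entry, shown for produser (HADOOP_CONF_DIR is what the hadoop/hdfs CLIs read to find client configs; druser would point at /mnt/hadoopdr/conf instead):

```bash
# produser's ~/.bashrc: make all hadoop/hdfs commands talk to the prod cluster
export HADOOP_CONF_DIR=/mnt/hadoopprod/conf
```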
09-17-2018 11:33 PM
Hi, thanks for your response and help. Every time I make changes in configs, it re-deploys the configurations, which was deleting my topology script. So I pushed my script to the /mnt/topology/ directory and also tweaked the script a bit. It looks like the below now:

topology.sh

```bash
#!/bin/bash
while [ $# -gt 0 ]; do
  nodearg=$1                                          # get the first argument
  for line in `cat /mnt/topology/topology.data`; do   # read lines from the topology.data file
    node=$(echo $line|awk -F ',' '{print $1}')        # parse the line to get the hostname to compare
    result=""
    if [ $nodearg = $node ]; then                     # compare the hostname in the file with the argument
      result=$(echo $line|awk -F ',' '{print $2}')    # parse the line again to retrieve the rack for the host
      break;
    else
      result="/default/rack-0"
    fi
  done
  shift
  echo $result
done
```
09-17-2018 09:23 AM
So, in that case it will satisfy the first if condition. Do you know how Hadoop invokes the topology script? I mean, the parameters it passes along with the script file.
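For context, a sketch of how Hadoop's script-based rack mapping generally invokes the script: it batches up to net.topology.script.number.args arguments per call (100 by default) and expects one rack path per argument on stdout.

```bash
# Hadoop runs roughly this, passing datanode IPs/hostnames as arguments:
/etc/hadoop/conf/topology.sh 19.1.0.13 19.1.0.14
# and expects stdout like:
#   /default/rack-1
#   /default/rack-2
```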
09-17-2018 05:30 AM
Hi,
I have written my own topology script and made the required configuration in Cloudera Manager > HDFS > Configuration > net.topology.script.file.name. But the rack topology is not updated, and I can see this ERROR in the namenode log: "script /etc/hadoop/conf/topology.sh returned 0 values when 1 were expected.". Please help to resolve the issue.
topology.sh

```bash
#!/bin/bash
nodearg=$1                                        # get the first argument
while [ $# -gt 0 ]; do
  for line in `cat topology.data`; do             # read lines from the topology.data file
    node=$(echo $line|awk -F ',' '{print $1}')    # parse the line to get the hostname to compare
    result=""
    if [ $nodearg = $node ]; then                 # compare the hostname in the file with the argument
      result=$(echo $line|awk -F ',' '{print $2}') # parse the line again to retrieve the rack for the host
      break;
    else
      result="/default/rack-0"
    fi
  done
  shift
  echo $result
done
```
topology.data

```
hdp-1.hdp.com,/default/rack-1
hdp-2.hdp.com,/default/rack-2
hdp-3.hdp.com,/default/rack-3
19.1.0.13,/default/rack-1
19.1.0.14,/default/rack-2
19.1.0.15,/default/rack-3
```
Output:

```
$ ./topology.sh hdp-1.hdp.com
/default/rack-1
$ ./topology.sh 19.1.0.14
/default/rack-2
```
Thanks and regards
Sidharth
Labels:
- Cloudera Manager
- HDFS
07-04-2017 12:54 PM
I have enabled Kerberos. I have installed Informatica on AIX. Now, I am trying to connect Informatica to Impala using a JDBC connection, but it's not able to read the ticket from the cache.
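If it helps, a hedged sketch of a Kerberos connection URL for the Cloudera Impala JDBC driver (host, port, and realm are placeholders; AuthMech=1 selects Kerberos in that driver, but verify the property names against your driver version's documentation):

```bash
# Hypothetical Kerberos JDBC URL for the Cloudera Impala driver
JDBC_URL="jdbc:impala://impalad-host.example.com:21050;AuthMech=1;KrbRealm=CDH.HDP;KrbHostFQDN=impalad-host.example.com;KrbServiceName=impala"
```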