Member since
01-19-2017
3676
Posts
632
Kudos Received
372
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 618 | 06-04-2025 11:36 PM |
| | 1185 | 03-23-2025 05:23 AM |
| | 585 | 03-17-2025 10:18 AM |
| | 2195 | 03-05-2025 01:34 PM |
| | 1383 | 03-03-2025 01:09 PM |
03-07-2018
11:40 PM
@Jalender Here is an example for YARN_CLIENT. Getting the state of the component:

```shell
curl -k -u admin:admin -H "X-Requested-By:ambari" -i -X GET \
  http://<HOST>:8080/api/v1/clusters/<CLUSTER_NAME>/hosts/<HOST_FQDN>/host_components/YARN_CLIENT
```

Hope that helps.
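If you script several of these calls, a small helper keeps the long URL readable. This is a hypothetical shell function (the host, cluster, and node names in the example are placeholders, not from your cluster):

```shell
# ambari_component_url AMBARI_HOST CLUSTER HOST_FQDN COMPONENT
# Builds the host-component REST URL used in the curl call above.
ambari_component_url() {
  echo "http://$1:8080/api/v1/clusters/$2/hosts/$3/host_components/$4"
}

# Example (placeholder names):
# curl -k -u admin:admin -H "X-Requested-By:ambari" -i -X GET \
#   "$(ambari_component_url ambari.example.com MyCluster node1.example.com YARN_CLIENT)"
```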
03-07-2018
01:34 PM
1 Kudo
@hema moger There are a couple of errors I can see from your output:

```properties
advertised.listeners=PLAINTEXT://{kafka_server}:9092
log.dirs=/tmp/kafka-logs          # you have 2 entries; this is the correct value
broker.id=1
num.partitions=3                  # at least 1
default.replication.factor=1      # 3 is best
zookeeper.connect=localhost:2181  # make sure your ZooKeeper is up
```

Please share your server.properties.
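Duplicate entries like the second log.dirs are easy to miss by eye. As a sketch (check_dup_keys is a hypothetical helper, not a Kafka tool), the keys defined more than once in a properties file can be listed like this:

```shell
# check_dup_keys FILE
# Print property keys that appear more than once in a
# server.properties-style file; with duplicate keys only one value
# takes effect, which is a common source of confusion.
check_dup_keys() {
  grep -v '^[[:space:]]*#' "$1" |   # drop comment lines
    grep '=' |                      # keep key=value lines
    cut -d'=' -f1 |                 # take the key part
    sed 's/[[:space:]]//g' |        # trim stray spaces around keys
    sort | uniq -d                  # keys seen more than once
}
```

Usage: `check_dup_keys /path/to/server.properties` (adjust the path to your install).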
03-07-2018
01:18 PM
@hema moger Can you attach your server.properties? Apart from that, can you give a brief description of your setup: number of ZooKeepers, brokers, versions, OS, etc.?
03-07-2018
11:42 AM
@Rohit Khose To be able to help, can you describe your setup (OS/HDP/Ambari versions)? Can you attach your /etc/krb5.conf and /var/kerberos/krb5kdc/kadm5.acl? Did you install JCE? Where is the realm FIELD.HORTONWORKS.COM below coming from?

```
"hbase.regionserver.kerberos.principal", "hbase/_HOST@FIELD.HORTONWORKS.COM"
```

Can you also attach the logs /var/log/kadmind.log and /var/log/krb5kdc.log? Did the Ambari Kerberos wizard run successfully?
03-07-2018
08:56 AM
1 Kudo
@Rohit Khose Can you share how you installed your Kerberos packages? On the KDC server, you MUST have run:

```shell
# yum install krb5-server krb5-libs
```

Created the Kerberos database:

```shell
# kdb5_util create -s
```

Then started the KDC and kadmin processes on the KDC (assuming you are on CentOS/RHEL 7):

```shell
$ systemctl enable krb5kdc
$ systemctl start krb5kdc
$ systemctl enable kadmin
$ systemctl start kadmin
```

Create a Kerberos admin: on the KDC server, create a KDC admin by creating an admin principal:

```shell
# kadmin.local -q "addprinc admin/admin"
```

And on all the clients you MUST have run:

```shell
# yum install krb5-libs krb5-workstation
```

Your Kerberos config is wrong, starting with /etc/krb5.conf; it should be copied to all clients (assuming you ran the Kerberos client installation):

```ini
[libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = $YOUR_REALM
ticket_lifetime = 24h
dns_lookup_realm = false
dns_lookup_kdc = false
udp_preference_limit = 1

[domain_realm]
your_domain = $YOUR_REALM
.your_domain = $YOUR_REALM

[logging]
default = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
kdc = FILE:/var/log/krb5kdc.log

[realms]
$YOUR_REALM = {
  admin_server = your_kdc.server_FQDN
  kdc = your_kdc.server_FQDN
}
```

Contents of /var/kerberos/krb5kdc/kadm5.acl:

```
*/admin@$YOUR_REALM *
```

After these steps, run the Ambari Kerberos wizard, which will generate the correct keytabs in the /etc/security/keytabs/ directory. If you want the full documentation, let me know. Hope that helps.
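To avoid hand-editing mistakes when copying the config to every client, the template above can be rendered from two variables. This is a minimal sketch (render_krb5_conf is a hypothetical helper; review its output before installing it as /etc/krb5.conf):

```shell
# render_krb5_conf REALM KDC_FQDN OUTFILE
# Writes a minimal krb5.conf from the realm and KDC host,
# following the template above.
render_krb5_conf() {
  realm=$1; kdc=$2; out=$3
  cat > "$out" <<EOF
[libdefaults]
  default_realm = $realm
  ticket_lifetime = 24h
  renew_lifetime = 7d
  forwardable = true
  dns_lookup_realm = false
  dns_lookup_kdc = false
  udp_preference_limit = 1

[realms]
  $realm = {
    kdc = $kdc
    admin_server = $kdc
  }
EOF
}

# Example: render_krb5_conf EXAMPLE.COM kdc.example.com ./krb5.conf
```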
03-06-2018
11:28 PM
@ajay vembu One of the prerequisites for an HDP cluster setup is to disable the firewall; see the Hortonworks official documentation. You can temporarily clear all iptables rules so that you can troubleshoot the problem. If you are using Red Hat or Fedora Linux, type:

```shell
# /etc/init.d/iptables save
# /etc/init.d/iptables stop
```

If you are using another Linux distribution, type the following commands:

```shell
# iptables -F
# iptables -X
# iptables -t nat -F
# iptables -t nat -X
# iptables -t mangle -F
# iptables -t mangle -X
```

Please revert.
03-06-2018
11:02 PM
@ajay vembu ZooKeeper is not running on these 2 hosts:

```
Cannot open channel to 2 at election address Host2/10.23.152.247:3888 java.net.ConnectException: Connection refused
Cannot open channel to 3 at election address Host2/10.23.152.159:3888 java.net.ConnectException: Connection refused
```

Can you start it manually by running the command below on all the ZooKeeper hosts?

```shell
su - zookeeper -c "/usr/hdp/current/zookeeper-server/bin/zookeeper-server start"
```

Once the ZooKeepers are up, start the other components.
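Those "Connection refused" lines mean the election port is not listening. A quick way to probe the ZooKeeper ports (2181 client, 3888 election) from another host is bash's /dev/tcp pseudo-device; port_open is a hypothetical helper, and it requires bash rather than plain sh:

```shell
# port_open HOST PORT
# Prints "open" if a TCP connection succeeds, "closed" otherwise.
# Uses bash's /dev/tcp pseudo-device, so run it under bash.
port_open() {
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo open
  else
    echo closed
  fi
}

# Example: port_open 10.23.152.247 3888
```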
03-01-2018
06:50 PM
@hema moger Great. If it's a Linux server, then create a passwordless login between the remote server and the edge node. First, update your /etc/hosts so that the remote server is pingable from your edge node, check the firewall rules, and make sure you don't have a DENY. Here is the walkthrough (see the attached pic1.jpg). In my case I have a CentOS server GULU and a Cloudera Quickstart VM running in Oracle VM VirtualBox; because they are on the same network, it's easy.

GULU (remote server): I want to copy the file test.txt, which is located in /home/sheltong/Downloads.

```shell
[root@gulu ~]# cd /home/sheltong/Downloads
[root@gulu Downloads]# ls
test.txt
```

Edge node or localhost:

```shell
[root@quickstart home]# scp root@192.168.0.80:/home/sheltong/Downloads/test.txt .
The authenticity of host '192.168.0.80 (192.168.0.80)' can't be established.
RSA key fingerprint is 93:8a:6c:02:9d:1f:e1:b5:0a:05:68:06:3b:7d:a3:d3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.0.80' (RSA) to the list of known hosts.
root@192.168.0.80's password: xxxxxremote_server_root_passwordxxx
test.txt                                 100%  136   0.1KB/s   00:00
```

Validate that the file was copied:

```shell
[root@quickstart home]# ls
cloudera  test.txt
```

There you are. I hope that helped.
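After an scp copy it is worth confirming the file arrived intact, especially over a flaky link. Here is a sketch using sha256sum (verify_copy is a hypothetical helper; run sha256sum on the remote side too and compare the sums, or fetch both over ssh):

```shell
# verify_copy FILE_A FILE_B
# Prints "match" when both files have the same SHA-256 checksum,
# "MISMATCH" otherwise.
verify_copy() {
  a=$(sha256sum "$1" | cut -d' ' -f1)
  b=$(sha256sum "$2" | cut -d' ' -f1)
  if [ "$a" = "$b" ]; then echo match; else echo MISMATCH; fi
}

# Example: verify_copy /home/sheltong/Downloads/test.txt ./test.txt
```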
03-01-2018
09:27 AM
1 Kudo
@hema moger 1. Is the remote server a Linux box or Windows? If it's the latter, you will need WinSCP to transfer the file to a Linux box.

2. If you set up your cluster according to the recommended architecture, you should have edge node(s), master nodes and data nodes. Typically your edge node will be used to receive the csv file. You will need to ensure there is connectivity between your edge node and the remote Linux box where your CSV file is. Assuming you have root access to both the remote and the edge node, you can copy the CSV file to the edge node; it is better to set up a passwordless connection between the edge node and the remote Linux server.

If you are on the computer from which you want to send the file to a remote computer:

```shell
# scp /file/to/send username@remote:/where/to/put
```

Here "remote" can be an FQDN or an IP address. On the other hand, if you are on the computer wanting to receive the file from a remote computer:

```shell
# scp username@remote:/file/to/send /where/to/put
```

Then on the edge node you can invoke the hdfs command, assuming the csv file is in /home/transfer/test.csv:

```shell
# su - hdfs
$ hdfs dfs -put /home/transfer/test.csv /user/your_hdfs_directory
```

Validate the success of the hdfs command:

```shell
$ hdfs dfs -ls /user/your_hdfs_directory/
```

You should be able to see your test.csv there.
02-28-2018
02:10 PM
1 Kudo
@Ravikanth Pratti The error below is typical of a firewall issue. Make sure the firewall is not blocking your access; iptables is the default firewall on Linux. Run the following command to see what iptables rules are set up:

```shell
# /sbin/iptables -L -n
```

Firewall error:

```
0:0:0:2181:QuorumCnxManager@588] - Cannot open channel to 3 at election address jn3/15.34.71.187:3888
java.net.NoRouteToHostException: No route to host (Host unreachable)
```

You can temporarily clear all iptables rules so that you can troubleshoot the problem. If you are using Red Hat or Fedora Linux, type:

```shell
# /etc/init.d/iptables save
# /etc/init.d/iptables stop
```

If you are using another Linux distribution, type the following commands:

```shell
# iptables -F
# iptables -X
# iptables -t nat -F
# iptables -t nat -X
# iptables -t mangle -F
# iptables -t mangle -X
```

Hope that helps.