Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Views | Posted |
|---|---|
| 855 | 06-04-2025 11:36 PM |
| 1430 | 03-23-2025 05:23 AM |
| 718 | 03-17-2025 10:18 AM |
| 2576 | 03-05-2025 01:34 PM |
| 1689 | 03-03-2025 01:09 PM |
03-08-2018
10:03 AM
@Rohit Khose Precisely, the error is because HBase is picking up the wrong principal. Look at this line:

//configuration.set("hbase.regionserver.kerberos.principal", "hbase/_HOST@FIELD.HORTONWORKS.COM");

Can you run kadmin.local on the KDC server:

# kadmin.local
kadmin.local: listprincs

and check the output for the hbase principals? Also check the /etc/hosts entries on your cluster; it could be some DNS issue.
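A minimal sketch of those checks (assuming a MIT KDC and the FIELD.HORTONWORKS.COM realm from the snippet above; adjust the realm and hostnames to your environment):

# On the KDC host, list only the hbase principals
kadmin.local -q "listprincs" | grep hbase
# Expect one hbase/<regionserver_fqdn>@FIELD.HORTONWORKS.COM entry per RegionServer

# On each RegionServer, verify the FQDN that _HOST expands to resolves consistently
hostname -f
getent hosts "$(hostname -f)"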
03-08-2018
09:40 AM
In this example I am using a Cloudera VirtualBox VM, but the commands are the same on HDP.
I didn't create a local user mudassar; I used the hdfs user and an HDFS directory.

#################################
# Whoami
#################################
# id
uid=0(root) gid=0(root) groups=0(root)

#################################
# Switch to the hdfs user
#################################
# su - hdfs

#################################
# Change directory to /tmp to create test files
#################################
$ cd /tmp

#################################
# Create the first file
#################################
$ echo "This is the first text file for mudassar to test snapshot" > text1.txt

#################################
# Create the second file
#################################
$ echo "Apache Kafka comes with a pluggable authorizer known as Kafka Authorization Command Line (ACL) Interface, which is used for defining users and allowing or denying them to access its various APIs. The default behavior is that only a superuser is allowed to access all the resources of the Kafka cluster, and no other user can access those resources if no proper ACL is defined for those users. The general format in which Kafka ACL is defined is as follows" > text2.txt

#################################
# Here is the output
#################################
$ ls -lrt
-rw-rw-r-- 1 hdfs     hdfs  58 Mar 8 00:34 text1.txt
-rw-rw-r-- 1 cloudera hdfs 456 Mar 8 00:37 text2.txt

#################################
# Above I changed the owner of text2.txt to cloudera, as the root user, on the local filesystem
#################################
# chown cloudera:hdfs /tmp/text2.txt

#################################
# First create a target directory in HDFS
#################################
hdfs dfs -mkdir /user/mudassar

#################################
# Make it a snapshottable directory
#################################
hdfs dfsadmin -allowSnapshot /user/mudassar

Output:
Allowing snaphot on /user/mudassar succeeded

#################################
# Check the snapshottable dir in HDFS
#################################
hdfs lsSnapshottableDir

Output:
drwxr-xr-x 0 hdfs supergroup 0 2018-03-08 00:54 0 65536 /user/mudassar

#################################
# Copy a file from local to the HDFS snapshottable directory
#################################
hdfs dfs -put /tmp/text1.txt /user/mudassar

#################################
# Validate the file was copied
#################################
hdfs dfs -ls /user/mudassar

Found 1 items
-rw-r--r-- 1 hdfs supergroup 58 2018-03-08 00:55 /user/mudassar/text1.txt

#################################
# Create a snapshot
#################################
hdfs dfs -createSnapshot /user/mudassar

Output:
Created snapshot /user/mudassar/.snapshot/s20180308-005619.181

#################################
# Check to see the snapshot
#################################
hdfs dfs -ls /user/mudassar/.snapshot

Output:
Found 1 items
drwxr-xr-x - hdfs supergroup 0 2018-03-08 00:56 /user/mudassar/.snapshot/s20180308-005619.181

#################################
# Copy another file to the directory
#################################
hdfs dfs -put /tmp/text2.txt /user/mudassar

#################################
# Check the files exist in /user/mudassar; notice the timestamps, permissions etc.
#################################
hdfs dfs -ls /user/mudassar

Output:
Found 2 items
-rw-r--r-- 1 hdfs supergroup  58 2018-03-08 00:55 /user/mudassar/text1.txt
-rw-r--r-- 1 hdfs supergroup 456 2018-03-08 00:58 /user/mudassar/text2.txt

#################################
# Change ownership of one of the files
#################################
hdfs dfs -chown cloudera:supergroup /user/mudassar/text2.txt

#################################
# Create a second snapshot
#################################
hdfs dfs -createSnapshot /user/mudassar

#################################
# Check the directory: notice we now have 2 snapshots, one contains ONLY text1.txt and the other contains both files
#################################
$ hdfs dfs -ls /user/mudassar/.snapshot

Output:
Found 2 items
drwxr-xr-x - hdfs supergroup 0 2018-03-08 00:56 /user/mudassar/.snapshot/s20180308-005619.181
drwxr-xr-x - hdfs supergroup 0 2018-03-08 01:01 /user/mudassar/.snapshot/s20180308-010152.924

#################################
# Simulate accidental deletion of the files
#################################
hdfs dfs -rm /user/mudassar/*

Output:
18/03/08 01:06:26 INFO fs.TrashPolicyDefault: Moved: 'hdfs://quickstart.cloudera:8020/user/mudassar/text1.txt' to trash at: hdfs://quickstart.cloudera:8020/user/hdfs/.Trash/Current/user/mudassar/text1.txt
18/03/08 01:06:26 INFO fs.TrashPolicyDefault: Moved: 'hdfs://quickstart.cloudera:8020/user/mudassar/text2.txt' to trash at: hdfs://quickstart.cloudera:8020/user/hdfs/.Trash/Current/user/mudassar/text2.txt

#################################
# Validate the files were deleted
#################################
hdfs dfs -ls /user/mudassar

This returns nothing, but the directory still exists; run hdfs dfs -ls /user/ to confirm.

#################################
# Check the contents of the latest snapshot, xxxxx.924 (see above)
#################################
hdfs dfs -ls -R /user/mudassar/.snapshot/s20180308-010152.924

Output:
-rw-r--r-- 1 hdfs     supergroup  58 2018-03-08 00:55 /user/mudassar/.snapshot/s20180308-010152.924/text1.txt
-rw-r--r-- 1 cloudera supergroup 456 2018-03-08 00:58 /user/mudassar/.snapshot/s20180308-010152.924/text2.txt

#################################
# Recover the 2 files from the snapshot using the -ptopax option
#################################
hdfs dfs -cp -ptopax /user/mudassar/.snapshot/s20180308-010152.924/* /user/mudassar/

#################################
# Validate the files were restored with the original timestamps, ownership, permissions, ACLs and XAttrs
#################################
hdfs dfs -ls /user/mudassar

Output:
-rw-r--r-- 1 hdfs     supergroup  58 2018-03-08 00:55 /user/mudassar/text1.txt
-rw-r--r-- 1 cloudera supergroup 456 2018-03-08 00:58 /user/mudassar/text2.txt

There you are: the 2 accidentally deleted files are recovered. Please let me know if that worked out for you.
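As an aside, not part of the walkthrough above: you can also compare two snapshots to see exactly what changed between them. A minimal sketch using the snapshot names from this example:

# Show the differences between the first and second snapshot
hdfs snapshotDiff /user/mudassar s20180308-005619.181 s20180308-010152.924
# text2.txt should show up as created (+) between the two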
03-07-2018
11:40 PM
@Jalender Here is an example for YARN_CLIENT:

# Get the state of the component
curl -k -u admin:admin -H "X-Requested-By:ambari" -i -X GET http://<HOST>:8080/api/v1/clusters/<CLUSTER_NAME>/hosts/<HOST_FQDN>/host_components/YARN_CLIENT

Hope that helps
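If you also need to (re)install the client through the API, the usual Ambari pattern is a PUT against the same endpoint. A sketch, assuming the same admin/admin credentials and placeholders as above (client components only support the INSTALLED state, not STARTED):

curl -k -u admin:admin -H "X-Requested-By:ambari" -i -X PUT -d '{"HostRoles": {"state": "INSTALLED"}}' http://<HOST>:8080/api/v1/clusters/<CLUSTER_NAME>/hosts/<HOST_FQDN>/host_components/YARN_CLIENT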
03-07-2018
01:34 PM
1 Kudo
@hema moger There are a couple of errors I can see from your output:

advertised.listeners=PLAINTEXT://{kafka_server}:9092
log.dirs=/tmp/kafka-logs   # (there were 2 entries; this is the correct value)
broker.id=1
num.partitions=3   # At least 1
default.replication.factor=1   # best 3
zookeeper.connect=localhost:2181   # Make sure your zookeeper is up

Please share your server.properties
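For reference, a minimal working server.properties sketch for a single PLAINTEXT broker (the hostnames below are placeholders, not values from your cluster):

broker.id=1
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://your.broker.fqdn:9092
log.dirs=/tmp/kafka-logs
num.partitions=3
default.replication.factor=1
zookeeper.connect=your.zk.fqdn:2181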
03-07-2018
01:18 PM
@hema moger Can you attach your server.properties? Apart from that, can you give a brief description of your setup: number of ZooKeepers, brokers, versions, OS, etc.?
03-07-2018
11:42 AM
@Rohit Khose To be able to help you, can you describe your setup (OS/HDP/Ambari versions)? Can you attach your /etc/krb5.conf and /var/kerberos/krb5kdc/kadm5.acl? Did you install JCE? Where is the FIELD.HORTONWORKS.COM below coming from?

configuration.set("hbase.regionserver.kerberos.principal", "hbase/_HOST@FIELD.HORTONWORKS.COM")

Can you also attach the below logs:

/var/log/kadmind.log
/var/log/krb5kdc.log

Did the Ambari Kerberos wizard run successfully?
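While you gather that, a quick way to test whether the hbase keytab and principal actually work. A sketch; the keytab path is the Ambari default and may differ on your cluster:

# List the principals stored in the hbase service keytab
klist -kt /etc/security/keytabs/hbase.service.keytab

# Try to authenticate with it (replace the realm with yours)
kinit -kt /etc/security/keytabs/hbase.service.keytab hbase/$(hostname -f)@FIELD.HORTONWORKS.COM
klist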
03-07-2018
08:56 AM
1 Kudo
@Rohit Khose Can you share how you installed your Kerberos packages?

On the KDC server, you MUST have run:

# yum install krb5-server krb5-libs

Then created the Kerberos database:

# kdb5_util create -s

Then started the KDC and kadmin processes on the KDC (assuming you are on CentOS/Red Hat 7):

$ systemctl enable krb5kdc
$ systemctl start krb5kdc
$ systemctl enable kadmin
$ systemctl start kadmin

Create a Kerberos admin: on the KDC server, create a KDC admin by creating an admin principal:

# kadmin.local -q "addprinc admin/admin"

And on all the clients you MUST have run:

# yum install krb5-libs krb5-workstation

Your Kerberos config is wrong, starting with /etc/krb5.conf, which should be copied to all clients (assuming you ran the Kerberos client installation):

[libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = $YOUR_REALM
ticket_lifetime = 24h
dns_lookup_realm = false
dns_lookup_kdc = false
udp_preference_limit = 1

[domain_realm]
your_domain = $YOUR_REALM
.your_domain = $YOUR_REALM

[logging]
default = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
kdc = FILE:/var/log/krb5kdc.log

[realms]
$YOUR_REALM = {
admin_server = your_kdc.server_FQDN
kdc = your_kdc.server_FQDN
}

Contents of /var/kerberos/krb5kdc/kadm5.acl:

*/admin@$YOUR_REALM *

After these steps, run the Ambari Kerberos wizard, which will generate the correct keytabs in the /etc/security/keytabs/* directory. If you want full documentation, let me know. Hope that helps
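Before running the wizard, a quick sanity check that the KDC and admin principal work (a suggested verification, not part of the original steps):

# From any host with krb5-workstation installed
kinit admin/admin
klist

# On the KDC, confirm the admin principal exists
kadmin.local -q "listprincs" | grep admin/admin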
03-06-2018
11:28 PM
@ajay vembu One of the prerequisites for an HDP cluster setup is to disable the firewall; see the Hortonworks official documentation. You can temporarily clear all iptables rules so that you can troubleshoot the problem. If you are using Red Hat or Fedora Linux, type:

# /etc/init.d/iptables save
# /etc/init.d/iptables stop

If you are using another Linux distribution, type the following commands:

# iptables -F
# iptables -X
# iptables -t nat -F
# iptables -t nat -X
# iptables -t mangle -F
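On CentOS/RHEL 7 and other systemd-based systems the firewall is usually firewalld rather than the iptables init script; the equivalent steps would be (my assumption, not from the original reply):

# systemctl stop firewalld
# systemctl disable firewalld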
Please revert
03-06-2018
11:02 PM
@ajay vembu ZooKeeper is not running on these 2 hosts:

Cannot open channel to 2 at election address Host2/10.23.152.247:3888 java.net.ConnectException: Connection refused
Cannot open channel to 3 at election address Host2/10.23.152.159:3888 java.net.ConnectException: Connection refused

Can you start it manually by running the below command on all the ZooKeeper hosts:

su - zookeeper -c "/usr/hdp/current/zookeeper-server/bin/zookeeper-server start"

Once the ZooKeepers are up, start the other components.
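Once started, a quick health check on each ensemble member (a sketch; nc may need to be installed, and 2181 assumes the default client port):

# A healthy ZooKeeper answers 'imok' to the 'ruok' four-letter command
echo ruok | nc <zookeeper_host> 2181

# Or check whether this node is a leader/follower
/usr/hdp/current/zookeeper-server/bin/zookeeper-server status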