Member since 01-19-2017 · 3679 Posts · 632 Kudos Received · 372 Solutions
07-26-2021
02:53 PM
1 Kudo
@sipocootap2 Here is a walkthrough on how to delete a snapshot.

Create a directory:

```
$ hdfs dfs -mkdir -p /app/tomtest
```

Change the owner:

```
$ hdfs dfs -chown -R tom:developer /app/tomtest
```

To be able to create a snapshot, the directory has to be snapshottable:

```
$ hdfs dfsadmin -allowSnapshot /app/tomtest
Allowing snapshot on /app/tomtest succeeded
```

Now I created 3 snapshots:

```
$ hdfs dfs -createSnapshot /app/tomtest sipo
Created snapshot /app/tomtest/.snapshot/sipo
$ hdfs dfs -createSnapshot /app/tomtest coo
Created snapshot /app/tomtest/.snapshot/coo
$ hdfs dfs -createSnapshot /app/tomtest tap2
Created snapshot /app/tomtest/.snapshot/tap2
```

Confirm the directory is snapshottable:

```
$ hdfs lsSnapshottableDir
drwxr-xr-x 0 tom developer 0 2021-07-26 23:14 3 65536 /app/tomtest
```

List all the snapshots in the directory:

```
$ hdfs dfs -ls /app/tomtest/.snapshot
Found 3 items
drwxr-xr-x - tom developer 0 2021-07-26 23:14 /app/tomtest/.snapshot/coo
drwxr-xr-x - tom developer 0 2021-07-26 23:14 /app/tomtest/.snapshot/sipo
drwxr-xr-x - tom developer 0 2021-07-26 23:14 /app/tomtest/.snapshot/tap2
```

Now I need to delete the snapshot coo:

```
$ hdfs dfs -deleteSnapshot /app/tomtest/ coo
```

Confirm the snapshot is gone:

```
$ hdfs dfs -ls /app/tomtest/.snapshot
Found 2 items
drwxr-xr-x - tom developer 0 2021-07-26 23:14 /app/tomtest/.snapshot/sipo
drwxr-xr-x - tom developer 0 2021-07-26 23:14 /app/tomtest/.snapshot/tap2
```

Voila! The format for deleting a snapshot is:

```
hdfs dfs -deleteSnapshot <path> <snapshotName>
```

e.g. `hdfs dfs -deleteSnapshot /app/tomtest/ coo`. Notice the space between the two arguments and the omission of `.snapshot`: like all dot files, the snapshot directory is not visible to normal HDFS commands. A plain listing gives 0 results:

```
$ hdfs dfs -ls /app/tomtest/
```

The special path shows the 2 remaining snapshots:

```
$ hdfs dfs -ls /app/tomtest/.snapshot
Found 2 items
drwxr-xr-x - tom developer 0 2021-07-26 23:14 /app/tomtest/.snapshot/sipo
drwxr-xr-x - tom developer 0 2021-07-26 23:14 /app/tomtest/.snapshot/tap2
```

Is there a command to disallow snapshots for all the subdirectories? Yes, but only after you have deleted all the snapshots therein (or, better, disallow snapshots at directory creation time):

```
$ hdfs dfsadmin -disallowSnapshot /app/tomtest/
disallowSnapshot: The directory /app/tomtest has snapshot(s). Please redo the operation after removing all the snapshots.
```

The only way I have found that works for me (and permits me a cup of coffee) is to first list all the snapshots, then copy-paste one delete per snapshot. Even with 60 snapshots it works; I only come back when the snapshots are gone, or better still do something else while the deletion is going on. It is not automated, though. The deletes below would run one after the other:

```
hdfs dfs -deleteSnapshot /app/tomtest/ sipo
...
hdfs dfs -deleteSnapshot /app/tomtest/ tap2
```

Note: -deleteSnapshot skips the trash by default! Happy hadooping
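The copy-paste deletion described above can be automated with a short shell sketch. This is a sketch under assumptions, not something run on a live cluster: it assumes the standard `hdfs dfs -ls` listing format, and it demonstrates only the parsing step on the sample listing from this thread; the commented-out pipeline shows how it would be wired to real `hdfs` commands.

```shell
#!/usr/bin/env bash
# Sketch: delete every snapshot under a snapshottable directory in one pass,
# instead of copy-pasting one -deleteSnapshot per snapshot.
set -euo pipefail

SNAPDIR=/app/tomtest

# Parse the snapshot names out of an 'hdfs dfs -ls <dir>/.snapshot' listing:
# each snapshot line ends with the snapshot path; basename yields its name.
snapshot_names() {
  awk -v d="$1" '$NF ~ ("^" d "/.snapshot/") {print $NF}' | xargs -r -n1 basename
}

# On a real cluster you would pipe the live listing in:
#   hdfs dfs -ls "$SNAPDIR/.snapshot" | snapshot_names "$SNAPDIR" \
#     | xargs -r -n1 hdfs dfs -deleteSnapshot "$SNAPDIR"
# Demonstrated here on the listing from the walkthrough:
snapshot_names "$SNAPDIR" <<'EOF'
Found 2 items
drwxr-xr-x   - tom developer 0 2021-07-26 23:14 /app/tomtest/.snapshot/sipo
drwxr-xr-x   - tom developer 0 2021-07-26 23:14 /app/tomtest/.snapshot/tap2
EOF
```

The parsing keys off the last field of each listing line, so it is unaffected by the permission, owner, and timestamp columns.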
07-26-2021
01:20 PM
@enirys As suggested, we need more details; there is no silver bullet. A piece of advice from experience: it's better to open a new thread and give as many details as possible:

- OS
- HDP version
- Ambari version
- MIT or AD Kerberos
- Documented steps or official document reference
- Your Kerberos config: krb5.conf, kdc.conf, kadm5.acl
- Hosts files
- Node count [single or multi node]

Just any information that reduces the back-and-forth of posts and gives members the info needed to help. Cheers
07-26-2021
02:27 AM
@ambari275 Great! Please accept the answer so the thread can be closed and referenced by other users. Happy hadooping !!!
07-26-2021
01:24 AM
@ambari275 These are the steps to follow, see below.

Assumptions:

- Logged in as root
- Cluster name = test
- REALM = DOMAIN.COM
- Hostname = host1

Switch to hdfs, the HDFS superuser:

```
[root@host1]# su - hdfs
```

Check the HDFS keytabs that were generated:

```
[hdfs@host1 ~]$ cd /etc/security/keytabs/
[hdfs@host1 keytabs]$ ls
```

Sample output:

```
atlas.service.keytab  hdfs.headless.keytab  knox.service.keytab  oozie.service.keytab
```

Now use hdfs.headless.keytab to get the associated principal:

```
[hdfs@host1 keytabs]$ klist -kt /etc/security/keytabs/hdfs.headless.keytab
```

Expected output:

```
Keytab name: FILE:/etc/security/keytabs/hdfs.headless.keytab
KVNO Timestamp           Principal
---- ------------------- ------------------------------------------------------
   1 07/26/2021 00:34:03 hdfs-test@DOMAIN.COM
   1 07/26/2021 00:34:03 hdfs-test@DOMAIN.COM
   1 07/26/2021 00:34:03 hdfs-test@DOMAIN.COM
   1 07/26/2021 00:34:03 hdfs-test@DOMAIN.COM
   1 07/26/2021 00:34:03 hdfs-test@DOMAIN.COM
```

Grab a Kerberos ticket by using the keytab + principal, like a username/password, to authenticate to the KDC:

```
[hdfs@host1 keytabs]$ kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-test@DOMAIN.COM
```

Check you now have a valid Kerberos ticket:

```
[hdfs@host1 keytabs]$ klist
```

Sample output:

```
Ticket cache: FILE:/tmp/krb5cc_1013
Default principal: hdfs-test@DOMAIN.COM

Valid starting       Expires              Service principal
07/26/2021 10:03:17  07/27/2021 10:03:17  krbtgt/DOMAIN.COM@DOMAIN.COM
```

Now you can list the HDFS directories successfully. Remember the -ls; it seems you forgot it in your earlier command:

```
[hdfs@host1 keytabs]$ hdfs dfs -ls /
Found 9 items
drwxrwxrwx - yarn   hadoop 0 2018-09-24 00:31 /app-logs
drwxr-xr-x - hdfs   hdfs   0 2018-09-24 00:22 /apps
drwxr-xr-x - yarn   hadoop 0 2018-09-24 00:12 /ats
drwxr-xr-x - hdfs   hdfs   0 2018-09-24 00:12 /hdp
drwxr-xr-x - mapred hdfs   0 2018-09-24 00:12 /mapred
drwxrwxrwx - mapred hadoop 0 2018-09-24 00:12 /mr-history
drwxrwxrwx - spark  hadoop 0 2021-07-26 10:04 /spark2-history
drwxrwxrwx - hdfs   hdfs   0 2021-07-26 00:57 /tmp
drwxr-xr-x - hdfs   hdfs   0 2018-09-24 00:23 /user
```

Voila! Happy hadooping, and remember to accept the best response so other users can reference it.
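As a small convenience, the principal can be pulled out of the `klist -kt` output automatically rather than read by eye. This is a sketch assuming the standard `klist -kt` output layout; the keytab path and principal are the ones from the example above, and the parsing is demonstrated on that sample output rather than on a live host.

```shell
#!/usr/bin/env bash
# Sketch: extract the first principal from 'klist -kt' output so it can be
# fed straight to kinit.
set -euo pipefail

# klist -kt prints three header lines (keytab name, column titles, dashes),
# then one row per key; the principal is the last field of each row.
principal_from_klist() {
  awk 'NR > 3 && NF {print $NF; exit}'
}

# On a live host you would run:
#   PRINC="$(klist -kt /etc/security/keytabs/hdfs.headless.keytab | principal_from_klist)"
#   kinit -kt /etc/security/keytabs/hdfs.headless.keytab "$PRINC"
# Demonstrated here on the sample output from the walkthrough:
principal_from_klist <<'EOF'
Keytab name: FILE:/etc/security/keytabs/hdfs.headless.keytab
KVNO Timestamp           Principal
---- ------------------- ------------------------------------------------------
   1 07/26/2021 00:34:03 hdfs-test@DOMAIN.COM
   1 07/26/2021 00:34:03 hdfs-test@DOMAIN.COM
EOF
```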
07-25-2021
02:15 PM
@ambari275 I have gone through the logs and here are my observations.

Error:

```
WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.ExtensionsService.getExtensionVersions(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String), should not consume any entity.
```

Solution: first check the current value:

```
# grep client.threadpool.size.max /etc/ambari-server/conf/ambari.properties
client.threadpool.size.max=25
```

The client.threadpool.size.max property sets the number of parallel threads servicing client requests. To find the number of cores on the server, issue the Linux command nproc:

```
# nproc
25
```

1) Edit /etc/ambari-server/conf/ambari.properties and set client.threadpool.size.max to the number of cores on your machine, e.g. client.threadpool.size.max=25
2) Restart ambari-server:

```
# ambari-server restart
```

Error:

```
2021-07-23 12:43:42,673 WARN [Stack Version Loading Thread] RepoVdfCallable:142 - Could not load version definition for HDP-3.0 identified by https://archive.cloudera.com/p/HDP/centos7/3.x/3.0.1.0/HDP-3.0.1.0-187.xml. Server returned HTTP response code: 401 for URL: https://archive.cloudera.com/p/HDP/centos7/3.x/3.0.1.0/HDP-3.0.1.0-187.xml
java.io.IOException: Server returned HTTP response code: 401 for URL: https://archive.cloudera.com/p/HDP/centos7/3.x/3.0.1.0/HDP-3.0.1.0-187.xml
```

Reason: 401 means "Unauthorized", so there must be something wrong with your credentials; this is purely an authorization issue. It seems your access to the HDP repos is the problem.

Your krb5.conf should look something like this:

```
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
dns_lookup_realm = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
rdns = false
default_realm = DOMAIN.COM
default_ccache_name = KEYRING:persistent:%{uid}

[realms]
DOMAIN.COM = {
  kdc = [FQDN 10.1.1.150]
  admin_server = [FQDN 10.1.1.150]
}

[domain_realm]
.domain.com = DOMAIN.COM
domain.com = DOMAIN.COM
```

Your /etc/hosts: I think I remember once having issues with hostnames containing "-", so try using host1 instead of ESXI-host2, etc. And please don't comment out the IPv6 entry, that can cause network connectivity issues, so remove the # on the second line:

```
x.x.x.x localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
x.x.x.x FQDN server
x.x.x.x host1
x.x.x.x host2
x.x.x.x host3
```

Kerberos uses DNS to resolve hostnames, so DNS must be enabled on all hosts. With DNS, the principal must contain the fully qualified domain name (FQDN) of each host. For example, if the hostname is host1, the DNS domain name is domain.com, and the realm name is DOMAIN.COM, then the principal name for the host would be host/host1.domain.com@DOMAIN.COM. The examples in this guide require that DNS is configured and that the FQDN is used for each host.

Also, ensure the Ambari agent is installed on all hosts, including the Ambari server itself! On every host, ensure the agent's hostname setting points to the Ambari server:

```
[server]
hostname=<FQDN_oF_Ambari_server>
url_port=8440
secured_url_port=8441
connect_retry_delay=10
max_reconnect_retry_delay=30
```

Please revert
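The threadpool adjustment above can be scripted. This is a sketch demonstrated on a scratch copy of the file so it can be run anywhere; on a real Ambari host you would point PROPS at /etc/ambari-server/conf/ambari.properties and restart ambari-server afterwards.

```shell
#!/usr/bin/env bash
# Sketch: set client.threadpool.size.max to the core count reported by nproc.
# Demonstrated on a scratch file; point PROPS at the real
# /etc/ambari-server/conf/ambari.properties on an Ambari host.
set -euo pipefail

PROPS="$(mktemp)"
echo 'client.threadpool.size.max=25' > "$PROPS"

CORES="$(nproc)"   # number of cores on this machine

# Replace the existing value in place; -i.bak keeps a backup copy.
sed -i.bak "s/^client\.threadpool\.size\.max=.*/client.threadpool.size.max=${CORES}/" "$PROPS"
grep '^client.threadpool.size.max=' "$PROPS"

# Afterwards, on a real host:  ambari-server restart
```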
07-25-2021
01:34 PM
@USMAN_HAIDER There is this step below, did you perform it? Kerberos must be specified as the security mechanism for the Hadoop infrastructure, starting with the HDFS service. Enable Cloudera Manager Server security for the cluster on an HDFS service. After you do so, the Cloudera Manager Server automatically enables Hadoop security on the MapReduce and YARN services associated with that HDFS service.

In the Cloudera Manager Admin Console:

1. Select Clusters > HDFS-n.
2. Click the Configuration tab.
3. Select HDFS-n for the Scope filter.
4. Select Security for the Category filter.
5. Scroll (or search) to find the Hadoop Secure Authentication property.
6. Click the Kerberos button to select Kerberos.

Please revert
07-23-2021
10:57 AM
@ambari275 You can set up the Kerberos server anywhere on the network, provided it can be accessed by the hosts in your cluster. I suspect there is something wrong with your Ambari server. Can you share your /var/log/ambari-server/ambari-server.log? I asked for a couple of files but you only shared the krb5.conf; I will need the rest of the files to understand and determine what the issue could be. Can you describe your setup? Number of nodes, network, OS, etc.
07-22-2021
12:38 PM
@ambari275 From the onset, I see you left the defaults, and I doubt whether those really map to your cluster. Here is a list of outputs I need to validate:

- `hostname -f` [where you installed the Kerberos server]
- /etc/hosts
- /var/kerberos/krb5kdc/kadm5.acl
- /var/kerberos/krb5kdc/kdc.conf

On the Kerberos server, run:

```
# kadmin.local
kadmin.local: list_principals
kadmin.local: q
```

(`q` quits.) The `hostname -f` output on the Kerberos server should replace kdc and admin_server in krb5.conf. Here is an example:

- OS: CentOS 7
- Cluster realm: HOTEL.COM

My hosts entry is for a class C network, so yours could be different, but your hostname must be resolvable by DNS:

```
[root@test ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.153 test.hotel.com test

[root@test ~]# hostname -f
test.hotel.com

[root@test ~]# cat /var/kerberos/krb5kdc/kadm5.acl
*/admin@HOTEL.COM *
```

```
[root@test ~]# cat /etc/krb5.conf
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
dns_lookup_realm = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
rdns = false
default_realm = HOTEL.COM
default_ccache_name = KEYRING:persistent:%{uid}

[realms]
HOTEL.COM = {
  kdc = test.hotel.com
  admin_server = test.hotel.com
}

[domain_realm]
.hotel.com = HOTEL.COM
hotel.com = HOTEL.COM
```

```
[root@test ~]# cat /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88

[realms]
HOTEL.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}
```

Once you share the above, I can figure out where the issue could be. Happy hadooping
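Since Kerberos depends on working name resolution, a quick self-check is to confirm that every kdc/admin_server named in krb5.conf actually resolves. This is a sketch demonstrated on an inline sample config with localhost as a stand-in; on a real host you would point KRB5_CONF at /etc/krb5.conf.

```shell
#!/usr/bin/env bash
# Sketch: verify that the kdc/admin_server entries in a krb5.conf resolve,
# rather than being left-over placeholder defaults.
set -euo pipefail

KRB5_CONF="$(mktemp)"
cat > "$KRB5_CONF" <<'EOF'
[realms]
HOTEL.COM = {
  kdc = localhost
  admin_server = localhost
}
EOF

# Pull the kdc and admin_server values out of the config, then try to
# resolve each one via the same NSS path (files + DNS) the libraries use.
awk -F'=' '/^[[:space:]]*(kdc|admin_server)[[:space:]]*=/ {gsub(/[[:space:]]/,"",$2); print $2}' "$KRB5_CONF" \
| sort -u \
| while read -r host; do
    if getent hosts "$host" >/dev/null; then
        echo "OK: $host resolves"
    else
        echo "FAIL: $host does not resolve"
    fi
done
```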
07-22-2021
10:46 AM
@USMAN_HAIDER When you create a new principal on the master KDC, you should also have a crontab that propagates the database to the slave KDC(s):

```
#!/bin/sh
# /var/kerberos/kdc-master-propagate.sh
kdclist="slave-kdc.customer.com"

/sbin/kdb5_util dump /usr/local/var/krb5kdc/master_datatrans
for kdc in $kdclist
do
    /sbin/kprop -f /usr/local/var/krb5kdc/master_datatrans $kdc
done
```

This way the principals will stay in sync.
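The script is typically driven from cron on the master KDC. A hypothetical crontab entry might look like the following (the path, schedule, and log file are assumptions, adjust to taste):

```
# crontab -e on the master KDC: dump and propagate at the top of every hour.
0 * * * * /var/kerberos/kdc-master-propagate.sh >> /var/log/kprop.log 2>&1
```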
07-18-2021
02:10 PM
@mike_bronson7 Are you using the default Capacity Scheduler settings? No queues/leaf queues created? Is what you shared the current setting?