Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2831 | 04-27-2020 03:48 AM |
|  | 5504 | 04-26-2020 06:18 PM |
|  | 4683 | 04-26-2020 06:05 PM |
|  | 3716 | 04-13-2020 08:53 PM |
|  | 5624 | 03-31-2020 02:10 AM |
08-19-2019
09:42 PM
@irfangk1 You can find more details about headless and service principals/keytabs in the following doc: https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/authentication-with-kerberos/content/kerberos_principals.html
08-19-2019
09:34 PM
@irfangk1 From a standard Kerberos perspective there is no single command that tells you whether a keytab is headless or a service keytab; you differentiate them by the principals they contain. You can find a detailed discussion about it in the following thread: https://community.cloudera.com/t5/Support-Questions/Headless-Keytab-Vs-User-Keytab-Vs-Service-Keytab/m-p/175276 Try running the following command on your keytab.
Headless keytab: headless principals are not bound to a specific host or node; they have the syntax name@EXAMPLE.COM (no host component).
# klist -kte /etc/security/keytabs/hdfs.headless.keytab
Keytab name: FILE:/etc/security/keytabs/hdfs.headless.keytab
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
2 08/11/2019 01:58:27 hdfs-ker1latest@EXAMPLE.COM (des-cbc-md5)
2 08/11/2019 01:58:27 hdfs-ker1latest@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
2 08/11/2019 01:58:27 hdfs-ker1latest@EXAMPLE.COM (des3-cbc-sha1)
2 08/11/2019 01:58:27 hdfs-ker1latest@EXAMPLE.COM (arcfour-hmac)
2 08/11/2019 01:58:27 hdfs-ker1latest@EXAMPLE.COM (aes128-cts-hmac-sha1-96)
If it is truly a headless keytab then it will not have a principal specific to a host.
Service keytab: a service principal does not need to be a POSIX user; these are mostly applications that have their own arrangement for how they run at the OS level and need to interact with the Kerberized cluster. Notice that its principal name has the hostname included. Example:
# klist -kte /etc/security/keytabs/nn.service.keytab
Keytab name: FILE:/etc/security/keytabs/nn.service.keytab
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
2 08/11/2019 01:58:40 nn/ker1latest1.example.com@EXAMPLE.COM (des-cbc-md5)
2 08/11/2019 01:58:40 nn/ker1latest1.example.com@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
2 08/11/2019 01:58:40 nn/ker1latest1.example.com@EXAMPLE.COM (des3-cbc-sha1)
2 08/11/2019 01:58:40 nn/ker1latest1.example.com@EXAMPLE.COM (arcfour-hmac)
2 08/11/2019 01:58:40 nn/ker1latest1.example.com@EXAMPLE.COM (aes128-cts-hmac-sha1-96)
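As a quick check (just a sketch built around the example keytab paths above, not an official Kerberos tool), you can list only the principal column and look for a "/", which appears only in host-bound service principals:
# klist -kt /etc/security/keytabs/hdfs.headless.keytab | tail -n +4 | awk '{print $NF}' | grep '/'
If the command prints nothing, the keytab contains only headless principals; any line it does print is a host-bound service principal.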
08-19-2019
05:26 PM
@lvic4594_ Great to know that the issue is resolved after making the recommended change to pass the producer config explicitly with --producer.config /etc/kafka/conf/producer.properties. As the issue is resolved, it would be great to mark this thread as Solved, so that other users can quickly find the resolved threads/answers.
08-19-2019
01:28 AM
@lvic4594_ As you keep getting "RecordTooLargeException" even after increasing the properties you listed in your previous comment, can you please let us know exactly where you are noticing those exceptions: broker side, producer side, or consumer side? Also, can you please specify the complete path of the "producer.properties" file on the "kafka-console-producer.sh" command line, just to ensure that the correct producer properties file is being used? Example:
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list <broker-ip>:6667 --producer.config /etc/kafka/conf/producer.properties --topic test < ./big.txt
Also please verify that this file has the correct value:
# grep 'max.request.size' /etc/kafka/conf/producer.properties
Reference article: https://community.cloudera.com/t5/Community-Articles/Kafka-producer-running-into-multiple-org-apache-kafka-common/ta-p/248636
Broker side: "message.max.bytes" is the largest message size the broker will accept from a producer, and "replica.fetch.max.bytes" is the number of bytes of messages to attempt to fetch for each partition.
Producer side: "max.request.size" is the limit on the size of the requests the producer sends.
Consumer side: increasing "max.partition.fetch.bytes" helps you consume big messages. It is the maximum number of bytes per partition returned by the server and should be larger than "message.max.bytes" so the consumer can read the largest message the broker accepts.
For the consumer side, can you please let us know if you have also increased "max.partition.fetch.bytes"?
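To double-check all three sides at once, here is a rough sketch (the file paths are examples from a typical HDP layout and may differ on your cluster; on an Ambari-managed cluster the broker settings are normally changed through Ambari rather than by editing the file directly):
# grep 'max.request.size' /etc/kafka/conf/producer.properties
# grep -E 'message.max.bytes|replica.fetch.max.bytes' /etc/kafka/conf/server.properties
# grep 'max.partition.fetch.bytes' /etc/kafka/conf/consumer.properties
All three limits need to be at least as large as the biggest record you intend to send.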
08-19-2019
12:42 AM
1 Kudo
@sean1 As you mentioned: "This is a new installation and since Ambari was never upgraded," .... "there are no database back up files"
1. If this is a new installation, then what is the reason for downgrading Ambari? If Ambari was never upgraded (as it is a new installation), then what is the need to downgrade it?
2. If you do not have the Ambari DB backup, then you cannot recover the cluster installation even if you install Ambari 2.6.2 (so a backup is a must).
07-30-2019
12:52 AM
@Reed Villanueva Regarding your query:
1. What is the point of these Ambari users / groups? Ambari-level administrators can assign user and group access to Ambari-, Cluster-, Host-, Service-, and User- (view-only) level permissions. Access levels allow administrators to categorize cluster users and groups based on the permissions that each level includes. The permissions that an Ambari-level administrator assigns to each user or group define that user's or group's role. These roles, and what each role holder can do, are described in the table in the following doc: https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.3.0/administering-ambari/content/amb_roles_and_authorizations.html
2. What is the context they are intended to be used in? When a user logs in to the Ambari UI, or to a specific view such as the Files View or Hive View, the users created in the Ambari DB (listed in the "users" table) can perform actions according to the roles defined for them. For local users, Ambari authenticates them against the password stored in the "users" table; for LDAP users, the authentication is done at the LDAP level (because Ambari does not store LDAP-synced users' passwords in its DB).
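If you want to see which role each user or group currently holds on the cluster, a rough sketch via the Ambari REST API (the endpoint and field names here are assumptions from memory, and the host, cluster name, and credentials are placeholders):
# curl -u admin:admin -H "X-Requested-By: ambari" "http://<ambari-host>:8080/api/v1/clusters/<cluster-name>/privileges?fields=PrivilegeInfo/principal_name,PrivilegeInfo/principal_type,PrivilegeInfo/permission_name"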
07-29-2019
11:09 PM
@Reed Villanueva The users created inside the Ambari UI can be of two types, "LOCAL" users and "LDAP" users. You can find this detail inside the "users" table of the Ambari DB. Ambari is in no case responsible for creating OS users/groups on any node for those Ambari UI users. For example, you will see the "admin" user in Ambari, but you won't see any such user on the Ambari server host or on any other node. If you have integrated Ambari with a user base like LDAP/AD, then you can run the ldap-sync command to sync the users and groups present in LDAP into the Ambari database "users" table, so that those users can log in to the Ambari UI with their LDAP credentials. But if you want these same users to be created on every physical host, so that you can log in to those hosts using the mentioned user accounts, then you will need to set up the SSSD service to sync those LDAP users to OS users: https://github.com/HortonworksUniversity/Security_Labs/blob/master/HDP-2.6-AD.md#setup-ados-integration-via-sssd
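For reference, the sync is run on the Ambari server host (a quick sketch; it assumes "ambari-server setup-ldap" has already been completed, and the file names are just examples):
# ambari-server sync-ldap --all
or, to sync only specific users and groups listed in text files:
# ambari-server sync-ldap --users users.txt --groups groups.txt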
07-26-2019
02:28 AM
1 Kudo
@Michael Bronson After deleting a service from the Ambari UI or via Ambari API calls, you do not need to restart the Ambari server. Additionally, when you delete a service using the Ambari UI, it internally makes the same API calls.
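For reference, the API call looks roughly like the following sketch (the host, cluster name, service name, and credentials are placeholders, and the service should be stopped before you delete it):
# curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE "http://<ambari-host>:8080/api/v1/clusters/<cluster-name>/services/<SERVICE_NAME>"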
07-25-2019
02:50 AM
@Reed Villanueva A few things:
1. Ambari never adds any host entry to the "/etc/hosts" file. It is the responsibility of the cluster admin to make sure all the hostname entries are added either to a DNS server or to the "/etc/hosts" file of each node of the cluster (irrespective of whether it is a new node or an old one).
2. Every cluster node should return a Fully Qualified Domain Name (FQDN) when you run the following command on it:
# hostname -f
3. Each and every node in your cluster should be able to resolve every other node using its FQDN (not using the alias hostname), so "ping hw04" and "ping hw04.ucera.local" are not the same.
To fix the issue, please perform the above checks and make sure that the "/etc/hosts" entries on the Ambari server host and all cluster nodes are identical. Then verify that ping and telnet work fine from all cluster nodes and that they are able to reach "hw04.ucera.local" correctly:
# cat /etc/hosts
# ping hw04.ucera.local
# telnet hw04.ucera.local 50075
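For reference, each node's "/etc/hosts" should carry one line per cluster host in the usual "IP FQDN alias" format, with the FQDN before the short alias (the IP address below is just a placeholder):
172.16.0.14   hw04.ucera.local   hw04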
07-25-2019
01:30 AM
@Reed Villanueva Please check whether you can access the hostname "hw04.ucera.local" from all your cluster nodes, without any hostname/firewall issue. Please run the same commands from all cluster nodes:
# ping hw04.ucera.local
# telnet hw04.ucera.local 50075
# hostname -f