Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 742 | 06-04-2025 11:36 PM |
| | 1313 | 03-23-2025 05:23 AM |
| | 647 | 03-17-2025 10:18 AM |
| | 2375 | 03-05-2025 01:34 PM |
| | 1541 | 03-03-2025 01:09 PM |
05-16-2018
11:37 PM
Hey Geoffrey, even though it worked, I kept monitoring it for a while and the metrics went away again, but this time with a different message:

2018-05-16 22:53:52,754 INFO TimelineMetricHostAggregatorHourly: End aggregation cycle @ Wed May 16 22:53:52 UTC 2018
2018-05-16 22:54:10,428 INFO org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 58080 actions to finish
2018-05-16 22:54:20,432 INFO org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 58080 actions to finish
2018-05-16 22:54:30,437 INFO org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 58080 actions to finish
2018-05-16 22:54:40,446 INFO org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 58080 actions to finish
2018-05-16 22:54:45,499 INFO TimelineClusterAggregatorSecond: Started Timeline aggregator thread @ Wed May 16 22:54:45 UTC 2018
2018-05-16 22:54:45,501 INFO TimelineClusterAggregatorSecond: Last Checkpoint read : Wed May 16 22:52:00 UTC 2018
2018-05-16 22:54:45,501 INFO TimelineClusterAggregatorSecond: Rounded off checkpoint : Wed May 16 22:52:00 UTC 2018
2018-05-16 22:54:45,501 INFO TimelineClusterAggregatorSecond: Last check point time: 1526511120000, lagBy: 165 seconds.
2018-05-16 22:54:45,501 INFO TimelineClusterAggregatorSecond: Start aggregation cycle @ Wed May 16 22:54:45 UTC 2018, startTime = Wed May 16 22:52:00 UTC 2018, endTime = Wed May 16 22:54:00 UTC 2018
2018-05-16 22:54:45,501 INFO TimelineClusterAggregatorSecond: Skipping aggregation for metric patterns : sdisk_%,boottime
2018-05-16 22:54:50,453 INFO org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 58080 actions to finish
2018-05-16 22:55:00,460 INFO org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 58080 actions to finish
2018-05-16 22:55:10,462 INFO org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 58080 actions to finish
2018-05-16 22:55:20,463 INFO org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 58080 actions to finish
2018-05-16 22:55:30,473 INFO org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 58080 actions to finish
2018-05-16 22:55:40,476 INFO org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 58080 actions to finish
2018-05-16 22:55:50,487 INFO org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 58080 actions to finish
2018-05-16 22:56:00,490 INFO org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 58080 actions to finish
2018-05-16 22:56:10,494 INFO org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 58080 actions to finish
and it keeps showing the message above all the time. Any idea? I mean, the message is pretty clear and it looks like my heap size is not enough for the amount of data the service is getting. This is what I have configured for my Metrics Collector heap size: metrics_collector_heapsize = 6144. If I have a cluster with 126 nodes, 106 of them with 899.50 GB configured capacity and 20 of them with 399.75 GB, what would be a fair amount of heap to assign to this service? Does a formula exist for this? Regards!
05-23-2018
01:57 PM
@Matthias Tewordt Hey, what is the latest error? Can you share the stack trace?
05-17-2018
05:50 AM
@Vaughn Shideler Great that your issue has been resolved! If you find one of the answers addressed your question, please take a moment to log in and click the "Accept" link on that answer. This will ensure other members who encounter the same issue can use that solution 🙂 Happy Hadooping
05-16-2018
04:41 PM
@Mokkan Mok The HDP and Ambari upgrades will only impact the related binaries, but you should also test their compatibility against bespoke/third-party tools that are plugged into the Hadoop cluster, e.g. Presto, Jupyter, Tableau, etc.
05-11-2018
04:55 AM
2 Kudos
If you have deployed and secured your multi-node cluster with an MIT KDC running on a Linux box (dedicated or not), the following also applies to a single-node cluster. Below is a step-by-step procedure to grant a group of users on the edge node access to services in the cluster.

Assumptions:
- KDC is running
- KDC database is created
- KDC admin user and master password are available
- REALM: DEV.COM
- Users: user1 to user5
- Edge node: for users
- Kerberos admin user is root or a sudoer

A good solution, security-wise, is to copy the generated keytabs to the users' home directories. If these are local Unix users, NOT Active Directory users, then create the keytabs in e.g. /tmp and later copy them to the respective home directories, making sure to set the correct permissions on the keytabs. A good practice is to dedicate a node to users, usually called an EDGE NODE; all client software is installed there and not on the data or name nodes!

Change directory to /tmp:

# cd /tmp

If you have root access there is no need for sudo. Specify the password for user1:

# sudo kadmin.local
Authenticating as principal root/admin@DEV.COM with password.
kadmin.local: addprinc user1@DEV.COM
WARNING: no policy specified for user1@DEV.COM; defaulting to no policy
Enter password for principal "user1@DEV.COM":
Re-enter password for principal "user1@DEV.COM":
Principal "user1@DEV.COM" created. Do the above step for all the new users addprinc user2@DEV.COM
addprinc user3@DEV.COM
addprinc user4@DEV.COM
addprinc user5@DEV.COM

Generate the keytab for user1. The keytab will be generated in the current directory:

# sudo ktutil
ktutil: addent -password -p user1@DEV.COM -k 1 -e RC4-HMAC
Password for user1@DEV.COM:
ktutil: wkt user1.keytab
ktutil: q

You MUST repeat the above for all 5 users (a scripted version follows below). Copy the newly created keytab to the user's home directory; in this example I have copied the keytab to /etc/security/keytabs:

# cp user1.keytab /etc/security/keytabs

Change ownership & permissions; here user1 belongs to the hadmin group:

# chown user1:hadmin user1.keytab

Again, do the above for all the other users. A good technical and security best practice is to copy the keytabs from the KDC to the respective home directories on the edge node and change the ownership of the keytabs.

Validate the principals; in this example the keytabs are in /etc/security/keytabs:

# klist -kt /etc/security/keytabs/user1.keytab
Keytab name: FILE:/etc/security/keytabs/user1.keytab
KVNO Timestamp           Principal
---- ------------------- ------------------------------------------------------
   1 05/10/2018 10:46:27 user1@DEV.COM
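Since the addprinc / ktutil / copy / chown sequence has to be repeated for user1 through user5, it can be scripted. A minimal sketch, assuming kadmin.local runs locally on the KDC, the hadmin group exists, and throw-away passwords are acceptable (the RC4-HMAC enctype mirrors the example above; adjust it to match your KDC policy):

#!/bin/bash
# Create a principal, a keytab, and a permissioned copy for each user.
for u in user1 user2 user3 user4 user5; do
  pw=$(openssl rand -base64 12)                        # throw-away password
  kadmin.local -q "addprinc -pw ${pw} ${u}@DEV.COM"    # create the principal
  # Feed ktutil its interactive commands (and the password) on stdin:
  printf 'addent -password -p %s@DEV.COM -k 1 -e RC4-HMAC\n%s\nwkt /tmp/%s.keytab\nq\n' \
    "${u}" "${pw}" "${u}" | ktutil
  cp "/tmp/${u}.keytab" /etc/security/keytabs/
  chown "${u}:hadmin" "/etc/security/keytabs/${u}.keytab"
  chmod 400 "/etc/security/keytabs/${u}.keytab"        # keytabs must not be world-readable
done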
To ensure successful ticket attribution, the newly created user should validate the principal as above and use it to grab a ticket; the principal is passed together with the keytab when running kinit.

Test: the new user1 should try grabbing a Kerberos ticket (keytab + principal):

# kinit -kt /etc/security/keytabs/user1.keytab user1@DEV.COM

The command below should show the validity of the Kerberos ticket:

# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: user1@DEV.COM
Valid starting Expires Service principal
05/10/2018 10:53:48  05/11/2018 10:53:48  krbtgt/DEV.COM@DEV.COM

You should now be able to access and successfully run jobs on the cluster. See the example below of accessing the Hive CLI with a Kerberos ticket:

$ hive
2018-05-10 23:18:57 WARN [main] conf.HiveConf: HiveConf of name hive.custom-extensions.root does not exist
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
Logging initialized using configuration in file:/etc/hive/2.6.2.0-205/0/hive-log4j.properties
hive> show databases;
OK
default
Time taken: 8.525 seconds, Fetched: 1 row(s)

Success!!

Accessing Hive without a Kerberos ticket. Destroy the Kerberos ticket:

$ kdestroy

Validate the existence or absence of a Kerberos ticket:

$ klist
klist: No credentials cache found (ticket cache FILE:/tmp/krb5cc_1001)

Accessing the Hive CLI should now fail.
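The same ticket also works for HiveServer2 clients like Beeline. A minimal sketch, assuming HiveServer2 listens on hs2-host.dev.com:10000 with the service principal hive/_HOST@DEV.COM (both hypothetical; adjust to your cluster):

$ beeline -u "jdbc:hive2://hs2-host.dev.com:10000/default;principal=hive/_HOST@DEV.COM" -e "show databases;"

With a valid ticket in the cache this should list the databases; after kdestroy it should fail to authenticate.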
05-10-2018
09:20 PM
1 Kudo
@Mudit Kumar You have deployed and secured your multi-node cluster with an MIT KDC running on a Linux box (dedicated or not); this also applies to a single-node cluster. Below is a step-by-step procedure.

Assumptions:
- KDC is running
- KDC database is created
- KDC admin user and master password are available
- REALM: DEV.COM
- Users: user1 to user5
- Edge node: for users
- Kerberos admin user is root or a sudoer

A good solution, security-wise, is to copy the generated keytabs to the users' home directories. If these are local Unix users, NOT Active Directory users, then create the keytabs in e.g. /tmp and later copy them to the respective home directories, making sure to set the correct permissions on the keytabs. You will notice a node dedicated to users, the EDGE NODE; all client software is installed there and not on the data or name nodes!

Change directory to /tmp:

# cd /tmp

With root access there is no need for sudo. Specify the password for user1:

# sudo kadmin.local
Authenticating as principal root/admin@DEV.COM with password.
kadmin.local: addprinc user1@DEV.COM
WARNING: no policy specified for user1@DEV.COM; defaulting to no policy
Enter password for principal "user1@DEV.COM":
Re-enter password for principal "user1@DEV.COM":
Principal "user1@DEV.COM" created. Do the above step for for all the other users too addprinc user2@DEV.COM
addprinc user3@DEV.COM
addprinc user4@DEV.COM
addprinc user5@DEV.COM

Generate the keytab for user1. The keytab will be generated in the current directory:

# sudo ktutil
ktutil: addent -password -p user1@DEV.COM -k 1 -e RC4-HMAC
Password for user1@DEV.COM:
ktutil: wkt user1.keytab
ktutil: q

You MUST repeat the above for all 5 users. Copy the newly created keytab to the user's home directory; in this example I have copied the keytab to /etc/security/keytabs:

# cp user1.keytab /etc/security/keytabs

Change ownership & permissions; here user1 belongs to the hadmin group:

# chown user1:hadmin user1.keytab

Again, do the above for all the other users. A good technical and security best practice is to copy the keytabs from the KDC to the respective home directories on the edge node and change the ownership of the keytabs.

Validate the principals; in this example the keytabs are in /etc/security/keytabs:

# klist -kt /etc/security/keytabs/user1.keytab
Keytab name: FILE:/etc/security/keytabs/user1.keytab
KVNO Timestamp           Principal
---- ------------------- ------------------------------------------------------
   1 05/10/2018 10:46:27 user1@DEV.COM

To ensure successful ticket attribution, the user should validate the principal as above and use it to grab a ticket; the principal is passed together with the keytab when running kinit.
Test: the new user1 should try grabbing a Kerberos ticket (keytab + principal):

# kinit -kt /etc/security/keytabs/user1.keytab user1@DEV.COM

The command below should show the validity of the Kerberos ticket:

# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: user1@DEV.COM
Valid starting Expires Service principal
05/10/2018 10:53:48  05/11/2018 10:53:48  krbtgt/DEV.COM@DEV.COM

You should now be able to access and successfully run jobs on the cluster.
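A quick way to confirm cluster access with the new ticket is a SPNEGO-authenticated WebHDFS call. A minimal sketch, assuming the NameNode web UI runs on namenode-host:50070 (hypothetical hostname; the default HTTP port on HDP 2.x):

# kinit -kt /etc/security/keytabs/user1.keytab user1@DEV.COM
# curl --negotiate -u : "http://namenode-host:50070/webhdfs/v1/tmp?op=LISTSTATUS"

The --negotiate flag tells curl to authenticate with the Kerberos ticket from the cache; without a valid ticket the call should come back with HTTP 401.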
05-10-2018
11:56 AM
@Erkan ŞİRİN Seeing your error above, "kinit: Clock skew too great while getting initial credentials". Correct me if I am wrong, but the date output on your Sandbox translates to the date 09/05/2018 and the time 09:44:

# date
Wed May 9 09:44:22 +03 2018

But the screenshot of your Windows time you attached translates to the date 02/05/2018 and the time 09:44; that is a 7-day difference. Please set your Windows 2012R2's date to the same date as the Sandbox and it should work!! Please let me know.
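To keep the two clocks from drifting apart again, both sides can be synced to a time source. A minimal sketch, assuming ntpdate is installed on the Sandbox and you have an elevated prompt on Windows:

On the Sandbox (Linux), a one-off sync against a public NTP pool:
# ntpdate -u pool.ntp.org

On Windows 2012R2, force a resync against the configured time source:
> w32tm /resync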
05-10-2018
07:05 AM
I tried and verified this in my 10-node cluster. It worked perfectly.
05-09-2018
10:19 AM
@Geoffrey Shelton Okot

ERROR: Exiting with exit code 1. REASON: Caught exception running LDAP sync. [LDAP: error code 49 - Invalid Credentials]; nested exception is javax.naming.AuthenticationException: [LDAP: error code 49 - Invalid Credentials]
06-23-2018
03:31 PM
@Geoffrey Shelton Okot: Now I need to access my HDP cluster from my laptop using curl/the REST API, but I am not able to do so. My laptop is in a different AD domain. I tried enabling SPNEGO/HTTP as well, but no luck. The curl call works inside the cluster but not from outside. Any documentation help on that?
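For reference, reaching a Kerberized cluster from a machine in a different AD domain usually also requires that machine's Kerberos client to know the cluster realm. A minimal client-side krb5.conf sketch, assuming the realm DEV.COM with a KDC at kdc-host.dev.com (hypothetical hostname):

[libdefaults]
  default_realm = DEV.COM

[realms]
  DEV.COM = {
    kdc = kdc-host.dev.com
    admin_server = kdc-host.dev.com
  }

After a kinit against DEV.COM, a curl --negotiate -u : call against the service URL should authenticate, provided SPNEGO is enabled on the service side.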