Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 609 | 06-04-2025 11:36 PM |
| | 1177 | 03-23-2025 05:23 AM |
| | 584 | 03-17-2025 10:18 AM |
| | 2186 | 03-05-2025 01:34 PM |
| | 1375 | 03-03-2025 01:09 PM |
10-04-2017
06:53 PM
@Andres Urrego On some laptops/desktops you may need to enable virtualization in the firmware. Please go through these steps:

1. Power on the machine and open the BIOS (as per Step 1).
2. Open the Processor submenu. The processor settings menu may be hidden under Chipset, Advanced CPU Configuration, or Northbridge.
3. Enable Intel Virtualization Technology (also known as Intel VT) or AMD-V, depending on the brand of the processor.
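Before rebooting into the BIOS, you can check from a running Linux system whether the CPU even advertises hardware virtualization. A minimal sketch, where `count_vt_flags` is an illustrative helper of mine, not a standard tool:

```shell
# Count /proc/cpuinfo lines advertising hardware virtualization:
# vmx = Intel VT-x, svm = AMD-V.
count_vt_flags() {
  grep -cE 'vmx|svm' || true   # grep -c exits 1 on zero matches; keep going
}

# Real usage on Linux:  count_vt_flags < /proc/cpuinfo
# A result of 0 means the CPU lacks the feature or it is disabled in the BIOS.
printf 'flags\t: fpu vme vmx est tm2\n' | count_vt_flags   # prints 1
```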
10-04-2017
06:47 PM
@D Giri Did you by chance download the CSV file with the keytabs for manual creation? There is an option to regenerate keytabs ONLY for missing hosts and components! Did you key in the correct user/password in the Ambari Kerberos wizard? Could you briefly describe your cluster setup: master/slave layout, and where the KDC is installed? Make sure the [realms] and [domain_realm] entries in /etc/krb5.conf are correct.
Also validate the contents of these two files: /var/kerberos/krb5kdc/kdc.conf and /var/kerberos/krb5kdc/kadm5.acl. Can you share the contents of the above files? Don't forget to scramble site-specific information.
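For reference, a minimal sketch of how the [realms] and [domain_realm] sections of /etc/krb5.conf usually look; EXAMPLE.COM and the KDC hostname are placeholders for your site's values:

```ini
[realms]
  EXAMPLE.COM = {
    kdc = kdc.example.com
    admin_server = kdc.example.com
  }

[domain_realm]
  .example.com = EXAMPLE.COM
  example.com = EXAMPLE.COM
```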
10-04-2017
05:56 PM
@arjun more If you have the KDC and AD integrated, this simply means the account the keytab belongs to has been disabled, locked, expired, or deleted. The AD service account should NEVER expire. If that is not the case, could you validate the steps below?

Make sure the [realms] and [domain_realm] entries in /etc/krb5.conf are correct, and validate the contents of these two files: /var/kerberos/krb5kdc/kdc.conf and /var/kerberos/krb5kdc/kadm5.acl.

Check the hdfs principal:

```
# kadmin.local
Authenticating as principal hdfs-uktehdpprod/admin@EUROPE.ODCORP.NET with password.
kadmin.local: listprincs hdfs*
hdfs-uktehdpprod@EUROPE.ODCORP.NET
kadmin.local:
```

Get the correct principal for hdfs:

```
# klist -kt /etc/security/keytabs/hdfs.headless.keytab
Keytab name: FILE:/etc/security/keytabs/hdfs.headless.keytab
KVNO Timestamp           Principal
---- ------------------- ------------------------------------------------------
   1 08/24/2017 15:42:23 hdfs-uktehdpprod@EUROPE.ODCORP.NET
   1 08/24/2017 15:42:23 hdfs-uktehdpprod@EUROPE.ODCORP.NET
   1 08/24/2017 15:42:23 hdfs-uktehdpprod@EUROPE.ODCORP.NET
```

Try grabbing a valid Kerberos ticket:

```
# kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-uktehdpprod@EUROPE.ODCORP.NET
```

Validate the validity period:

```
# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hdfs-uktehdpprod@EUROPE.ODCORP.NET
Valid starting       Expires              Service principal
10/04/2017 19:36:12  10/05/2017 19:36:12  krbtgt/EUROPE.ODCORP.NET@EUROPE.ODCORP.NET
```

Please revert.
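The keytab check and kinit above can be chained. A hedged sketch, where `first_principal` is a small helper of mine (not a Kerberos tool) that pulls the first principal name out of `klist -kt` output:

```shell
# Print the principal of the first keytab entry from `klist -kt` output.
# klist prints 3 header lines (keytab name, column titles, dashes); entries
# follow, with the principal in the 4th whitespace-separated field.
first_principal() {
  awk 'NR>3 && NF>=4 {print $4; exit}'
}

# Real usage (requires Kerberos client tools and the keytab path from above):
#   KT=/etc/security/keytabs/hdfs.headless.keytab
#   kinit -kt "$KT" "$(klist -kt "$KT" | first_principal)" && klist
```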
10-03-2017
10:53 AM
@D Giri HDP 2.6 has a new feature called Service Auto Start (see Ambari UI --> Admin --> Service Auto Start). Can you validate the component status? The Auto Start Services status should be either enabled or disabled. Can you also check in the KDC whether the principals were created:

```
# kadmin.local
kadmin.local: listprincs
```

Are you running Ambari as root? If not, that user MUST have authorization to write to /var/lib/ambari-server/tmp. Please revert.
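For the write-permission point, a tiny sketch; `can_write` is an illustrative helper of mine, and the path in the usage comment is the one from the post:

```shell
# Report whether the current user can write to the directory given as $1.
can_write() {
  if [ -w "$1" ]; then echo "writable"; else echo "not writable"; fi
}

# Real usage, run as the user that starts ambari-server:
#   can_write /var/lib/ambari-server/tmp
d=$(mktemp -d) && can_write "$d"   # a fresh temp dir prints "writable"
rm -rf "$d"
```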
09-30-2017
10:48 PM
@Prakash Punj Can you check that the ambari-agent is running on your Ambari host? Did you also upgrade the Ambari agents? The ambari-agent version should match the ambari-server version 🙂 Run this command on all the nodes and make sure the outputs match:

```
# yum list installed | grep ambari
```

Once you have the same versions, restart all the ambari-agents:

```
# ambari-agent restart
```

Hope this resolves your lost-heartbeat problem.
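To compare versions across nodes without eyeballing, you can collect the `yum list installed | grep ambari` output from every host into one stream and count distinct versions. `distinct_ambari_versions` is an illustrative helper, not an Ambari command:

```shell
# Input: concatenated `yum list installed | grep ambari` lines from all hosts,
# e.g. "ambari-agent.x86_64  2.5.2.0-298  @ambari". Prints the number of
# distinct versions seen; anything other than 1 means the nodes are out of sync.
distinct_ambari_versions() {
  awk '{print $2}' | sort -u | wc -l | tr -d ' '
}

printf 'ambari-agent.x86_64 2.5.2.0-298 @ambari\nambari-server.x86_64 2.5.2.0-298 @ambari\n' | distinct_ambari_versions   # prints 1
```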
09-30-2017
04:58 PM
@ashim sinha Here is a good document that you can use as a reference. Let me know how it goes.
09-30-2017
11:22 AM
@Sree Kupp I see you are attempting something that is taken care of by a real HA setup with an active and a standby NameNode. You can use the command below to force a failover (the two service IDs come from dfs.ha.namenodes.&lt;nameservice&gt; in hdfs-site.xml):

```
$ hdfs haadmin -failover <serviceId-from> <serviceId-to>
```

Let me know whether that helps.
09-30-2017
08:55 AM
@ashim sinha The Hive CLI is now a legacy tool. HiveServer1 is already deprecated, and the Hive community has for a while been recommending the Beeline + HiveServer2 (HS2) configuration. Because of the Hive CLI deprecation, use the example below to achieve the same goal. Assuming you have the correct hive principal, replace the values below as appropriate. This is how it works on my cluster:

```
###########################################################
# Access HiveServer2 in a kerberized cluster
# the hive CLI has been deprecated in favor of HS2
###########################################################
# su - hive
[hive@london ~]$ beeline
Beeline version 1.2.1000.2.5.3.0-37 by Apache Hive
beeline> !connect jdbc:hive2://london.uk.com:10000/;principal=hive/london.uk.com@TEST.COM
Connecting to jdbc:hive2://london.uk.com:10000/;principal=hive/london.uk.com@TEST.COM
Enter username for jdbc:hive2://london.uk.com:10000/;principal=hive/london.uk.com@TEST.COM:
Enter password for jdbc:hive2://london.uk.com:10000/;principal=hive/london.uk.com@TEST.COM:
Connected to: Apache Hive (version 1.2.1000.2.5.3.0-37)
Driver: Hive JDBC (version 1.2.1000.2.5.3.0-37)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://london.uk.com:10000/> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| default        |
| moroto         |
| jair           |
+----------------+--+
3 rows selected (2.863 seconds)
0: jdbc:hive2://london.uk.com:10000/> use jair;
No rows affected (0.097 seconds)
0: jdbc:hive2://london.uk.com:10000/> show tables;
+-----------+--+
| tab_name  |
+-----------+--+
| employee  |
+-----------+--+
1 row selected (0.162 seconds)
0: jdbc:hive2://london.uk.com:10000/> select * from employee;
+--------------+----------------+------------------+-----------------------+--+
| employee.id  | employee.name  | employee.salary  | employee.destination  |
+--------------+----------------+------------------+-----------------------+--+
| 1201         | Gopal          | 45000            | Technical manager     |
| 1202         | Manisha        | 45000            | Proof reader          |
| 1203         | Masthanvali    | 40000            | Technical writer      |
| 1204         | Kiran          | 40000            | Hr Admin              |
| 1205         | Kranthi        | 30000            | Op Admin              |
| 1206         | Geoffrey       | 45000            | DevOPS                |
| 1207         | Salvatore      | 30000            | IT Lead               |
| 1208         | Dave           | 40000            | Cleaner               |
| 1206         | Geoffrey       | 45000            | DevOPS                |
| 1207         | Shelly         | 30000            | IT Lead               |
| 1208         | Dave           | 40000            | Cleaner               |
| 1209         | fid            | 40000            | Cleaner               |
| 1210         | Grid           | 40000            | Clerk                 |
| 1211         | Dred           | 40000            | Masseuse              |
| 1212         | stad           | 40000            | boss                  |
| 1213         | jair           | 40000            | cook                  |
| 1214         | Jenelle        | 40000            | handyman              |
+--------------+----------------+------------------+-----------------------+--+
17 rows selected (7.172 seconds)
```

Please let me know if you have any issues; I will gladly help.
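The same session can also be scripted rather than typed at the beeline prompt. The host and principal below are the example values from the transcript, so substitute your own:

```shell
# Build the JDBC URL once and reuse it; the ;principal= part tells the Hive
# JDBC driver which Kerberos service principal HiveServer2 runs as.
JDBC_URL='jdbc:hive2://london.uk.com:10000/;principal=hive/london.uk.com@TEST.COM'
echo "$JDBC_URL"

# Real usage (requires beeline on PATH and a valid ticket from kinit):
#   beeline -u "$JDBC_URL" -e 'show databases;'
```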
09-30-2017
08:36 AM
@Benjamin Hopp Can you validate by doing the steps below? First destroy any existing ticket, then grab a fresh ticket as the nifi user:

```
$ kdestroy
[root@host ~]# sudo su - nifi_user
$ kinit
Password for nifi_user@NAM.xxxxxxxx.COM:
$ klist
Ticket cache: FILE:/tmp/krb5cc_49393
Default principal: nifi_user@NAM.xxxxxxxx.COM
Valid starting     Expires            Service principal
09/28/17 17:06:44  09/29/17 03:06:44  krbtgt/NAM.xxxxxxxx.COM@NAM.xxxxxxxx.COM
```

Tell me if that works.
09-24-2017
01:08 PM
@Anish Gupta Here is an extract of an HCC solution: https://nicholasmaillard.wordpress.com/2015/07/20/formatting-hdfs/ The article describes the right steps to format HDFS on HA-enabled clusters, noting that "the initial steps are very close":

1. Stop the HDFS service.
2. Start only the JournalNodes (as they will need to be made aware of the formatting).
3. On the first NameNode (as user hdfs):
   - hadoop namenode -format
   - hdfs namenode -initializeSharedEdits -force (for the JournalNodes)
   - hdfs zkfc -formatZK -force (to force ZooKeeper to reinitialize)
   - restart that first NameNode
4. On the second NameNode:
   - hdfs namenode -bootstrapStandby -force (force sync with the first NameNode)
5. On every DataNode, clear the data directory.
6. Restart the HDFS service.

These steps should help you overcome the issue.