Member since: 05-20-2016
Posts: 155
Kudos Received: 220
Solutions: 30
My Accepted Solutions
| Views | Posted |
|---|---|
| 7063 | 03-23-2018 04:54 AM |
| 2592 | 10-05-2017 02:34 PM |
| 1429 | 10-03-2017 02:02 PM |
| 8320 | 08-23-2017 06:33 AM |
| 3129 | 07-27-2017 10:20 AM |
06-19-2023
03:19 AM
I got the same problem. When I reinstalled Hive, everything worked.
10-06-2018
03:17 PM
Thanks @Santhosh B Gowda. We have around 15 nodes in the cluster, running Ambari 2.6.1.5 and HDP 2.6.3. We are preparing the OS upgrade plan for production and are currently testing it in our test environment. I have the following questions:
a. Can you confirm whether we have to upgrade Ambari 2.6.1.5 to Ambari 2.7 before starting the OS migration? This would let us use the 'Recover Host' option introduced in Ambari 2.7.
b. Or should we start with Ambari 2.6.1.5, upgrade all servers to RHEL 7 while keeping the Ambari server and agents at 2.6.1.5, and only upgrade Ambari to 2.7 once every server is on RHEL 7?
What would be your approach to this?
08-24-2018
09:55 AM
1 Kudo
At times we need to move the Kerberos database to a different node or upgrade the OS of the KDC node (for example, CentOS 6 to CentOS 7). Obviously you would not want to lose the KDC users, especially if your HDP cluster is configured to use this KDC. Follow the steps below to back up and restore the Kerberos database.

Prerequisites
* Back up the keytabs from the HDP cluster under /etc/security/keytabs on all nodes.
* Note down your KDC admin principal and password.
* Back up /etc/krb5.conf.
* Back up the /var/kerberos directory.

Backup
* Take a dump of the Kerberos database using the command below (to be executed on the node running the KDC):
kdb5_util dump kdb5_dump.txt
* Safely back up the kdb5_dump.txt file (a consolidated sketch of the backup commands follows after the restore steps).

Restore
* Restore the Kerberos database with the command below:
kdb5_util load kdb5_dump.txt
* Restore /etc/krb5.conf from backup.
* Restore /var/kerberos/krb5kdc/kdc.conf from backup.
* Restore /var/kerberos/krb5kdc/kadm5.acl from backup.
* Run the command below to store the master key in the stash file (the KDC master password is required):
kdb5_util stash
* Start the KDC server:
service krb5kdc start
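To put the backup steps together, here is a minimal sketch run on the KDC host, assuming the default CentOS/RHEL paths and an illustrative /root/kdc-backup directory (adjust names and paths to your environment):

# Run on the node hosting the KDC
mkdir -p /root/kdc-backup

# Dump the Kerberos principal database to a flat file
kdb5_util dump /root/kdc-backup/kdb5_dump.txt

# Keep copies of the client config and the KDC configuration/ACLs
cp /etc/krb5.conf /root/kdc-backup/
tar czf /root/kdc-backup/var-kerberos.tar.gz /var/kerberos

# Archive everything so it can be copied to the new KDC node
tar czf /root/kdc-backup.tar.gz -C /root kdc-backup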
10-30-2018
08:40 PM
I have updated the "Advanced nifi-bootstrap-env" config in Ambari as below and restarted the NiFi service, but I still don't see any metrics coming up on http://nifi1:7071/metrics. Am I missing anything?
08-09-2018
11:51 PM
We have NiFi secure site-to-site (S2S) enabled. How do we get the {{ token }}? NiFi is not enabled for username/password authentication; we only use certs / PEM files.
10-03-2017
02:02 PM
1 Kudo
Can you please share the logs of one of the alerts shown in the Ambari UI?
08-23-2017
08:04 AM
@Santosh Thank you, the problem is solved. HBase is working fine.
06-30-2017
09:57 AM
4 Kudos
While this article provides a mechanism to set up Spark with HiveContext, there are some limitations when using Spark with HiveContext. For example, Hive supports writing a query result to HDFS using "INSERT OVERWRITE DIRECTORY", i.e.:

INSERT OVERWRITE DIRECTORY 'hdfs://cl1/tmp/query'
SELECT * FROM REGION

The command above writes the result of the query to HDFS. However, if the same query is passed to Spark with HiveContext, it will fail, since "INSERT OVERWRITE DIRECTORY" is not a supported feature in Spark. This is tracked via this jira. If the same needs to be achieved via Spark, it can be done using the Spark CSV library (required in the case of Spark 1). Below is the code snippet showing how to achieve this:

// Run the Hive query through HiveContext and write the result to HDFS as delimited text
DataFrame df = hiveContext.sql("SELECT * FROM REGION");
df.write()
  .format("com.databricks.spark.csv")
  .option("delimiter", "\u0001")
  .save("hdfs://cl1/tmp/query");

The code above saves the result in HDFS under the directory /tmp/query. Note the delimiter used; it matches the default field delimiter that Hive currently uses. Also, the dependency below needs to be added to pom.xml:

<dependency>
  <groupId>com.databricks</groupId>
  <artifactId>spark-csv_2.10</artifactId>
  <version>1.5.0</version>
</dependency>
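If editing pom.xml is not convenient (for example when submitting an already-built job), the same library can also be pulled in at launch time with spark-submit's --packages flag; a minimal sketch, where the class name and jar file are placeholder assumptions:

spark-submit \
  --packages com.databricks:spark-csv_2.10:1.5.0 \
  --class com.example.RegionExport \
  my-spark-job.jar

The --packages flag resolves the library and its dependencies from Maven Central at launch, so no change to the project's pom.xml is needed.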
06-27-2017
12:57 PM
4 Kudos
@Bhushan Rokade Yes, beeline expects the HQL file to be on the local file system.
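For example, a local file can be passed with beeline's -f option; a minimal sketch, where the HiveServer2 host, port, database, and file path are assumptions:

beeline -u "jdbc:hive2://hiveserver2-host:10000/default" -f /home/user/query.hql

Here /home/user/query.hql is read from the local file system of the machine where beeline is invoked, not from HDFS.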