Member since
01-19-2017
3679
Posts
632
Kudos Received
372
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1567 | 06-04-2025 11:36 PM |
| | 2046 | 03-23-2025 05:23 AM |
| | 961 | 03-17-2025 10:18 AM |
| | 3658 | 03-05-2025 01:34 PM |
| | 2535 | 03-03-2025 01:09 PM |
09-12-2019
03:47 AM
Thanks a lot! I did the same with the MySQL settings you mentioned, and it works fine.
09-10-2019
02:31 PM
1 Kudo
@ranger There are three modes of Hive Metastore deployment:

1. Embedded Metastore: not recommended for production.
2. Local Metastore: allows many Hive sessions, i.e. many users can use the metastore at the same time. This is achieved by using any JDBC-compliant database, such as MySQL. In this case, the javax.jdo.option.ConnectionURL property is set to jdbc:mysql://host/dbname?createDatabaseIfNotExist=true, and javax.jdo.option.ConnectionDriverName is set to com.mysql.jdbc.Driver. The JDBC driver JAR file for MySQL must be on Hive's classpath.
3. Remote Metastore: the metastore runs in its own separate JVM, not in the Hive service JVM. Other processes communicate with the metastore server via the Thrift network API, and you can run one or more metastore servers to provide high availability.

Having said that, it seems you are trying to use the embedded metastore. What I advise you to do is create one as root through the Ambari UI (it will ask you for the DB name and host, which would be where you installed the MySQL database), or else pre-create the metastore. The Hive database must be created before loading the Hive database schema, which explains why you are getting the startup error.

Using Hive with MySQL:

1. On the Ambari Server host, download the MySQL Connector/JDBC driver from MySQL.
2. Confirm that mysql-connector-java.jar is in the Java share directory: ls /usr/share/java/mysql-connector-java.jar
3. Make sure the .jar file has the appropriate permissions (644).
4. Register the driver with Ambari: ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
5. Create a user for Hive and grant it permissions, using the MySQL database admin utility: # mysql -u root -p
CREATE USER '[HIVE_USER]'@'localhost' IDENTIFIED BY '[HIVE_PASSWORD]';
GRANT ALL PRIVILEGES ON *.* TO '[HIVE_USER]'@'localhost';
CREATE USER '[HIVE_USER]'@'%' IDENTIFIED BY '[HIVE_PASSWORD]';
GRANT ALL PRIVILEGES ON *.* TO '[HIVE_USER]'@'%';
CREATE USER '[HIVE_USER]'@'[HIVE_METASTORE_FQDN]' IDENTIFIED BY '[HIVE_PASSWORD]';
GRANT ALL PRIVILEGES ON *.* TO '[HIVE_USER]'@'[HIVE_METASTORE_FQDN]';
FLUSH PRIVILEGES;
Here [HIVE_USER] is your desired Hive user name, [HIVE_PASSWORD] is your desired Hive user password, and [HIVE_METASTORE_FQDN] is the fully qualified domain name of the Hive Metastore host. Next, create the Hive database. The Hive database must be created before loading the Hive database schema. # mysql -u root -p
CREATE DATABASE [HIVE_DATABASE];
Here [HIVE_DATABASE] is your desired Hive database name. After the above step, when you reach the Hive Metastore configuration stage in the Ambari UI, use the same credentials and the "test" should succeed; the service should then come up when you start all the HDP components. Hope that helps.
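For reference, the two JDBC properties described above would look roughly like this in hive-site.xml. This is a sketch only: the host name mysqlhost and database name hivedb are placeholders, not values from your cluster.

```xml
<!-- Sketch: local-metastore JDBC settings (placeholder host/db names) -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://mysqlhost/hivedb?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
```

In an Ambari-managed cluster you would normally set these through the Ambari UI rather than editing hive-site.xml by hand.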
08-30-2019
01:49 AM
@irfangk1 If it's an HDP cluster, then I assume you are using Ambari to manage it. You will need to first prepare the two new hosts (see the "Prepare the Environment" Cloudera document), then add the hosts to the cluster (see "Add host to cluster"). Thereafter, add HDF to these two new nodes; it follows the same procedure as adding HDF services on an existing HDP cluster. HTH
08-25-2019
12:20 PM
1 Kudo
@Manoj690 Going through your logs, I can see that the NameNode is in safe mode; in this state it won't allow you to change the status of any file in the cluster, including the logs.

2019-08-22 12:31:01,376 [server.Accumulo] INFO : Attempting to talk to zookeeper
2019-08-22 12:31:01,681 [server.Accumulo] INFO : ZooKeeper connected and initialized, attempting to talk to HDFS
2019-08-22 12:31:01,946 [server.Accumulo] WARN : Waiting for the NameNode to leave safemode
2019-08-22 12:31:01,946 [server.Accumulo] INFO : Backing off due to failure; current sleep period is 1.0 seconds
2019-08-22 12:31:02,950 [server.Accumulo] WARN : Waiting for the NameNode to leave safemode
2019-08-22 12:31:02,950 [server.Accumulo] INFO : Backing off due to failure; current sleep period is 2.0 seconds
2019-08-22 12:31:04,954 [server.Accumulo] WARN : Waiting for the NameNode to leave safemode

To resolve the issue, do the following as the hdfs user:

$ hdfs dfsadmin -safemode get
Safe mode is OFF

The above is the desired output, but if you get ON, proceed as follows. First, back up your FSImage and edits:

$ hdfs dfsadmin -saveNamespace

Then exit safe mode:

$ hdfs dfsadmin -safemode leave

Once successful, revalidate:

$ hdfs dfsadmin -safemode get

This time it should be OFF, and you can now restart the failed services from Ambari; everything should succeed. HTH
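The check/back-up/leave sequence above can be sketched as a small shell script. This is illustrative only: the function names are hypothetical, and it assumes the `hdfs` CLI is on the PATH and that you run it as the hdfs user.

```shell
# Sketch: automate the safe-mode check/exit sequence (assumes `hdfs` CLI).

parse_safemode() {
  # `hdfs dfsadmin -safemode get` prints a line like "Safe mode is ON";
  # keep only the last word (ON or OFF).
  printf '%s\n' "$1" | awk '{print $NF}'
}

ensure_safemode_off() {
  status=$(parse_safemode "$(hdfs dfsadmin -safemode get)")
  if [ "$status" = "ON" ]; then
    # Back up the FSImage and edits before leaving safe mode.
    hdfs dfsadmin -saveNamespace
    hdfs dfsadmin -safemode leave
  fi
  # Revalidate: this should now print OFF.
  parse_safemode "$(hdfs dfsadmin -safemode get)"
}
```

Note that the NameNode normally leaves safe mode on its own once enough blocks are reported; forcing it out is only appropriate after you have confirmed the cluster is healthy.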
08-18-2019
01:27 AM
@ray_teruya If you found that this answer addressed your question, please take a moment to log in and click the "kudos" link on the answer. That would be a great help to Community users looking for the solution to these kinds of errors.
07-31-2019
04:17 PM
@Geoffrey Shelton Okot That is exactly what I missed. Thank you very much for the prompt and right-to-the-point response.
07-25-2019
06:29 AM
Hi @jessica moore, see this: https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/index.md
06-27-2019
03:22 AM
The above question and the entire response thread below were originally posted in the Community Help track. On Thu Jun 27 03:00 UTC 2019, a member of the HCC moderation staff moved it to the Cloud & Operations track. The Community Help track is intended for questions about using the HCC site itself, not technical questions.