Member since: 01-19-2017
Posts: 3681
Kudos Received: 633
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1610 | 06-04-2025 11:36 PM |
| | 2071 | 03-23-2025 05:23 AM |
| | 984 | 03-17-2025 10:18 AM |
| | 3741 | 03-05-2025 01:34 PM |
| | 2573 | 03-03-2025 01:09 PM |
09-11-2019
12:17 PM
@ranger Can you set ATLAS_HOME_DIR in atlas-env.sh and apply the settings below?

Set up the Atlas hook in the hive-site.xml of your Hive configuration:

<property>
  <name>hive.exec.post.hooks</name>
  <value>org.apache.atlas.hive.hook.HiveHook</value>
</property>

and

<property>
  <name>atlas.cluster.name</name>
  <value>primary</value>
</property>

Add 'export HIVE_AUX_JARS_PATH=<atlas package>/hook/hive' in the hive-env.sh of your Hive configuration.

Copy <atlas-conf>/atlas-application.properties to the Hive conf directory:

cp /usr/hdp/<VERSION>/atlas/conf/atlas-application.properties /usr/hdp/<VERSION>/hive/conf

Then run the import script located at /usr/hdp/<VERSION>/atlas/hook-bin/import-hive.sh (a consolidated sketch is below).

Please let me know the outcome.
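Putting those steps together, here is a minimal shell sketch of the manual hook setup; the HDP version string and install paths are placeholders, and it assumes hive-site.xml has already been updated with the hook properties above:

```bash
# Minimal sketch of the manual Atlas hive-hook setup.
# HDP_VERSION and paths are placeholders - adjust to your installation.
export HDP_VERSION=2.6.5.0-292                      # hypothetical version string
export ATLAS_HOME=/usr/hdp/${HDP_VERSION}/atlas
export HIVE_CONF_DIR=/usr/hdp/${HDP_VERSION}/hive/conf

# 1. Make the Atlas hive hook jars visible to Hive
#    (normally added as HIVE_AUX_JARS_PATH in hive-env.sh)
export HIVE_AUX_JARS_PATH=${ATLAS_HOME}/hook/hive

# 2. Copy the Atlas client properties next to the Hive configuration
cp ${ATLAS_HOME}/conf/atlas-application.properties ${HIVE_CONF_DIR}/

# 3. Import existing Hive metadata into Atlas
#    (assumes hive.exec.post.hooks and atlas.cluster.name are already set in hive-site.xml)
${ATLAS_HOME}/hook-bin/import-hive.sh
```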
09-10-2019
02:31 PM
1 Kudo
@ranger There are three modes for Hive Metastore deployment:

Embedded Metastore - not recommended for production.

Local Metastore - this mode allows us to have many Hive sessions, i.e. many users can use the metastore at the same time. It is achieved by using any JDBC-compliant database such as MySQL. In this case the javax.jdo.option.ConnectionURL property is set to jdbc:mysql://host/dbname?createDatabaseIfNotExist=true and javax.jdo.option.ConnectionDriverName is set to com.mysql.jdbc.Driver. The JDBC driver JAR file for MySQL must be on Hive's classpath.

Remote Metastore - in this mode the metastore runs in its own separate JVM, not in the Hive service JVM. Other processes communicate with the metastore server using the Thrift network API, and you can run more than one metastore server to provide high availability.

Having said that, it seems you are trying to use the embedded metastore. What I advise you to do is create one as root through the Ambari UI; it will ask you for the DB name and host, which would be where you installed the MySQL database. Otherwise, pre-create the metastore. The Hive database must be created before loading the Hive database schema, which explains why you are getting the startup error.

Using Hive with MySQL:

1. On the Ambari Server host, stage the appropriate MySQL Connector/JDBC driver for later deployment: download it from MySQL, then run
   ambari-server setup --jdbc-db=mysql --jdbc-driver=/path/to/mysql/mysql-connector-java.jar
2. Confirm that mysql-connector-java.jar is in the Java share directory:
   ls /usr/share/java/mysql-connector-java.jar
   Make sure the .jar file has the appropriate permissions (644), then execute:
   ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
3. Create a user for Hive and grant it permissions using the MySQL database admin utility:
   # mysql -u root -p
CREATE USER '[HIVE_USER]'@'localhost' IDENTIFIED BY '[HIVE_PASSWORD]';
GRANT ALL PRIVILEGES ON *.* TO '[HIVE_USER]'@'localhost';
CREATE USER '[HIVE_USER]'@'%' IDENTIFIED BY '[HIVE_PASSWORD]';
GRANT ALL PRIVILEGES ON *.* TO '[HIVE_USER]'@'%';
CREATE USER '[HIVE_USER]'@'[HIVE_METASTORE_FQDN]' IDENTIFIED BY '[HIVE_PASSWORD]';
GRANT ALL PRIVILEGES ON *.* TO '[HIVE_USER]'@'[HIVE_METASTORE_FQDN]';
FLUSH PRIVILEGES;
   Where [HIVE_USER] is your desired Hive user name, [HIVE_PASSWORD] is your desired Hive user password and [HIVE_METASTORE_FQDN] is the fully qualified domain name of the Hive Metastore host.
4. Create the Hive database. The Hive database must be created before loading the Hive database schema.
   # mysql -u root -p
CREATE DATABASE [HIVE_DATABASE];
   Where [HIVE_DATABASE] is your desired Hive database name.

After the above steps, when you reach the Hive Metastore configuration stage in the Ambari UI, use the same credentials; the "test" should succeed and everything should fire up when you start all the HDP components (a consolidated sketch of the commands is below). Hope that helps
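A minimal, consolidated sketch of steps 1, 3 and 4; the user name, password, metastore FQDN and database name are placeholders to substitute with your own values:

```bash
# Placeholders - substitute your own values
HIVE_USER=hive                              # hypothetical Hive user name
HIVE_PASSWORD='StrongHivePassw0rd'          # hypothetical password
HIVE_METASTORE_FQDN=metastore.example.com   # hypothetical metastore host
HIVE_DATABASE=hive                          # hypothetical Hive database name

# Stage the MySQL JDBC driver for Ambari (run on the Ambari Server host)
sudo ambari-server setup --jdbc-db=mysql \
    --jdbc-driver=/usr/share/java/mysql-connector-java.jar

# Create the Hive user and database before the metastore schema is loaded
mysql -u root -p <<SQL
CREATE USER '${HIVE_USER}'@'localhost' IDENTIFIED BY '${HIVE_PASSWORD}';
GRANT ALL PRIVILEGES ON *.* TO '${HIVE_USER}'@'localhost';
CREATE USER '${HIVE_USER}'@'%' IDENTIFIED BY '${HIVE_PASSWORD}';
GRANT ALL PRIVILEGES ON *.* TO '${HIVE_USER}'@'%';
CREATE USER '${HIVE_USER}'@'${HIVE_METASTORE_FQDN}' IDENTIFIED BY '${HIVE_PASSWORD}';
GRANT ALL PRIVILEGES ON *.* TO '${HIVE_USER}'@'${HIVE_METASTORE_FQDN}';
FLUSH PRIVILEGES;
CREATE DATABASE ${HIVE_DATABASE};
SQL
```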
09-08-2019
05:08 AM
1 Kudo
@shashank_naresh The compatibility matrix states that the KNIME Big Data Connectors are certified by Cloudera for CDH 5.x and by Hortonworks for HDP 2.1 to 2.4 (with Hive 0.13), but they should also be valid for later versions, as well as by MapR for MapR 4.1.

It seems you are trying to access the KNIME UI from within HDP; that is not possible. The point is to connect to and work with the data, so the most appropriate access is to the data warehouse on top of HDFS, which in your case is Hive.

Question: how are you trying to connect from KNIME? You should use JDBC (see the sketch below). Note that the Big Data Extensions can be purchased at http://www.knime.org/knime-big-data-extensions

Refs:
https://www.knime.com/knime-big-data-connectors
https://hortonworks.com/wp-content/uploads/2014/12/Knime-Hortonworks-Solutions-Brief.pdf

Hope that helps
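To illustrate the JDBC route: KNIME connects to Hive through a standard HiveServer2 JDBC URL, and a quick way to verify that URL from the cluster side is a beeline check like the sketch below (the host name, the default port 10000 and the user are assumptions to adapt to your cluster):

```bash
# Hypothetical HiveServer2 host; 10000 is the default HiveServer2 port
HS2_HOST=hiveserver2.example.com
HS2_PORT=10000

# Verify that the JDBC URL KNIME will use actually accepts connections
beeline -u "jdbc:hive2://${HS2_HOST}:${HS2_PORT}/default" -n hive -e "SHOW DATABASES;"
```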
09-01-2019
10:05 AM
@kvinod You can use MySQL for the component. Please follow https://www.cloudera.com/documentation/enterprise/latest/topics/cm_ig_mysql.html
08-30-2019
12:52 PM
@kvinod Is it a production cluster or not? If it's the latter, is there a reason for not using some other database like MySQL/MariaDB/Oracle? In my implementations I tend to favour these three, hence I have more hands-on experience with them. In your case you could try out MariaDB/MySQL, as they are open source.
08-30-2019
01:49 AM
@irfangk1 If it's an HDP cluster then I assume you are using Ambari to manage the HDF cluster. You will need to first prepare the 2 new hosts (see the "Prepare the Environment" Cloudera document), then add the hosts to the cluster (see "Add host to cluster"). Thereafter add HDF to these 2 new nodes; it follows the same procedure as adding HDF services on an existing HDP cluster (a rough host-preparation sketch is below). HTH
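As a rough sketch of the host preparation, this is the kind of thing usually done on the Ambari Server host before registering new hosts; the host names are placeholders, it assumes root SSH is used for automatic agent install, and the linked document remains the authoritative checklist:

```bash
# Hypothetical new host names - replace with your own FQDNs
NEW_HOSTS="hdf-node3.example.com hdf-node4.example.com"

# Run on the Ambari Server host: set up passwordless root SSH to each new host
# so Ambari can install the agent automatically during "Add Host"
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for h in ${NEW_HOSTS}; do
  ssh-copy-id root@"${h}"
  # Each host must resolve and report its fully qualified domain name
  ssh root@"${h}" "hostname -f"
done
```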
08-29-2019
02:40 PM
@irfangk1 It would be good to clarify whether it's a CDH or HDP cluster, whether it is managed by Ambari or not, and any other relevant information.
08-29-2019
12:24 PM
@kvinod For sure you have a double entry in your db.mgmt.properties. One simple way is to split the file as I have done below.

Create a backup of the current file:

cp /etc/cloudera-scm-server/db.mgmt.properties /etc/cloudera-scm-server/db.mgmt.properties.ORIG

Then overwrite db.mgmt.properties with the first block below; if nothing works, try the second block.

$ sudo vi /etc/cloudera-scm-server/db.mgmt.properties

First block:

# The source of truth for these settings
# is the Cloudera Manager databases and
# changes made here will not be reflected
# there automatically.
#
com.cloudera.cmf.ACTIVITYMONITOR.db.type=postgresql
com.cloudera.cmf.ACTIVITYMONITOR.db.host=hostname:7432
com.cloudera.cmf.ACTIVITYMONITOR.db.name=amon
com.cloudera.cmf.ACTIVITYMONITOR.db.user=amon
com.cloudera.cmf.ACTIVITYMONITOR.db.password=4WB4R5yxnp
com.cloudera.cmf.REPORTSMANAGER.db.type=postgresql
com.cloudera.cmf.REPORTSMANAGER.db.host=hostname:7432
com.cloudera.cmf.REPORTSMANAGER.db.name=rman
com.cloudera.cmf.REPORTSMANAGER.db.user=rman
com.cloudera.cmf.REPORTSMANAGER.db.password=WceGeruLNG
com.cloudera.cmf.NAVIGATOR.db.type=postgresql
com.cloudera.cmf.NAVIGATOR.db.host=hostname:7432
com.cloudera.cmf.NAVIGATOR.db.name=nav
com.cloudera.cmf.NAVIGATOR.db.user=nav
com.cloudera.cmf.NAVIGATOR.db.password=D2tjw5xjoE
com.cloudera.cmf.NAVIGATORMETASERVER.db.type=postgresql
com.cloudera.cmf.NAVIGATORMETASERVER.db.host=hostname:7432
com.cloudera.cmf.NAVIGATORMETASERVER.db.name=navms
com.cloudera.cmf.NAVIGATORMETASERVER.db.user=navms
com.cloudera.cmf.NAVIGATORMETASERVER.db.password=elJRINTAth

Second block:

# The source of truth for these settings
# is the Cloudera Manager databases and
# changes made here will not be reflected
# there automatically.
#
com.cloudera.cmf.ACTIVITYMONITOR.db.type=postgresql
com.cloudera.cmf.ACTIVITYMONITOR.db.host=hostname:7432
com.cloudera.cmf.ACTIVITYMONITOR.db.name=amon
com.cloudera.cmf.ACTIVITYMONITOR.db.user=amon
com.cloudera.cmf.ACTIVITYMONITOR.db.password=O31A60K5SN
com.cloudera.cmf.REPORTSMANAGER.db.type=postgresql
com.cloudera.cmf.REPORTSMANAGER.db.host=hostname:7432
com.cloudera.cmf.REPORTSMANAGER.db.name=rman
com.cloudera.cmf.REPORTSMANAGER.db.user=rman
com.cloudera.cmf.REPORTSMANAGER.db.password=BPPShP0O9k
com.cloudera.cmf.NAVIGATOR.db.type=postgresql
com.cloudera.cmf.NAVIGATOR.db.host=hostname:7432
com.cloudera.cmf.NAVIGATOR.db.name=nav
com.cloudera.cmf.NAVIGATOR.db.user=nav
com.cloudera.cmf.NAVIGATOR.db.password=QHYL7zUSQe
com.cloudera.cmf.NAVIGATORMETASERVER.db.type=postgresql
com.cloudera.cmf.NAVIGATORMETASERVER.db.host=hostname:7432
com.cloudera.cmf.NAVIGATORMETASERVER.db.name=navms
com.cloudera.cmf.NAVIGATORMETASERVER.db.user=navms
com.cloudera.cmf.NAVIGATORMETASERVER.db.password=elJRINTAth

After testing the two options, depending on success or failure, we could look at another option. Make sure you have also followed the steps in this link:
https://www.cloudera.com/documentation/enterprise/5-14-x/topics/cm_ig_extrnl_pstgrs.html#cmig_topic_5_6_1

Please revert
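After replacing the file, the Cloudera Manager Server has to re-read it before the change takes effect; a minimal sketch, assuming a systemd-based Cloudera Manager install and the default log location:

```bash
# Restart Cloudera Manager Server so the edited db.mgmt.properties is re-read
sudo systemctl restart cloudera-scm-server

# Watch the server log for database connection errors (default log location)
sudo tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log

# Finally, restart the Cloudera Management Service roles
# (Activity Monitor, Reports Manager, Navigator, ...) from the CM UI
```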
08-28-2019
03:20 AM
@kvinod In such a case, you should attach the logs because they are the only source we can use to investigate and get hints about what may have happened. How long had your cluster been running before going down? Did you ever purge the CMS database? On large deployments, it's a good strategy to put the monitoring roles on their own hosts, in isolation. Now that you have deleted the CMS and hence lost the data therein, you can safely create a new backend database and point the new config to that instance.

DBs to recreate:
- Reports Manager
- Activity Monitor

A sketch of recreating these on PostgreSQL is below. Please revert
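As a sketch of what the recreation could look like on an external PostgreSQL host: the role names mirror the amon/rman entries used by the management services, the passwords are placeholders, and the Cloudera external-PostgreSQL documentation remains the authoritative reference:

```bash
# Run on the PostgreSQL host used by the Cloudera Management Service.
# Creates fresh backend databases for Activity Monitor (amon) and
# Reports Manager (rman); passwords are placeholders.
sudo -u postgres psql <<'SQL'
CREATE ROLE amon LOGIN PASSWORD 'amon_password';
CREATE ROLE rman LOGIN PASSWORD 'rman_password';
CREATE DATABASE amon OWNER amon ENCODING 'UTF8';
CREATE DATABASE rman OWNER rman ENCODING 'UTF8';
SQL
```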
08-26-2019
05:03 AM
@iamabug Are you now comfortable proceeding? If you need some help, don't hesitate to ask.