Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 547 | 06-04-2025 11:36 PM |
| | 1092 | 03-23-2025 05:23 AM |
| | 560 | 03-17-2025 10:18 AM |
| | 2099 | 03-05-2025 01:34 PM |
| | 1316 | 03-03-2025 01:09 PM |
10-21-2019
10:33 PM
@Axe Can you share your CM logs? The usual checklist is to check the file system status with $ df -h, then try restarting the CM server and the agents; most importantly, check and share the agent logs too.
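As a sketch of that checklist, assuming a systemd-based host and the default Cloudera Manager log locations:

```bash
# Check free space on every mounted file system (a full /var often breaks CM)
df -h

# Restart the Cloudera Manager server and the agent (assumes systemd service names)
sudo systemctl restart cloudera-scm-server
sudo systemctl restart cloudera-scm-agent

# Default log locations to inspect and share
tail -n 100 /var/log/cloudera-scm-server/cloudera-scm-server.log
tail -n 100 /var/log/cloudera-scm-agent/cloudera-scm-agent.log
```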
10-21-2019
12:18 PM
@soumya From your property files, your backend database is Oracle! There is something inconsistent in the information you are sharing with members, which is why we can't resolve your problem. Can you get the value of "javax.jdo.option.ConnectionURL" from your hive-site.xml? That should confirm Oracle. All along you claimed you were using a MySQL database for your metastore, but the connect string for an Oracle database is different. The syntax is "jdbc:oracle:thin:@<hostname>:<port>:<db_name>", so according to your shared hive-site.xml the correct connect string is "jdbc:oracle:thin:@DFJHNNJHJUUI:4355:KKJH0033" (hostname DFJHNNJHJUUI, port 4355, db_name KKJH0033).

Otherwise, connect directly in binary transport mode, for example:

beeline -n hive -p hive -u "jdbc:hive2://osaka.com:10000/sparktest"

(See attached screenshots: Ranger_using_Mysql_db, HS2 mode ports, Connecting to my Sparktest database, Hive_using_Mysql_metastore.)

Please clarify.
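For reference, a quick way to confirm the metastore backend from the client config (assuming the standard HDP path /etc/hive/conf/hive-site.xml) is a sketch like this:

```bash
# Print the metastore JDBC URL; a jdbc:oracle:... value confirms Oracle,
# a jdbc:mysql:... value confirms MySQL (path assumes a standard HDP layout)
grep -A 1 "javax.jdo.option.ConnectionURL" /etc/hive/conf/hive-site.xml
```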
10-20-2019
06:53 AM
1 Kudo
@soumya Spark SQL is Spark's interface for working with structured and semi-structured data, so that is not what I asked. The Hive Metastore uses MySQL, MariaDB, SQL Server, Oracle, etc. as its backend database. If you can't provide the information requested, it will be difficult to help you out!
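If the backend really were MySQL, one way to double-check is to query the metastore schema version directly; this is only a sketch and assumes a metastore database and user both named hive:

```bash
# Connect to the assumed MySQL metastore and print the schema version;
# if this fails but the hive-site.xml URL is jdbc:oracle:..., the real backend is Oracle
mysql -u hive -p -D hive -e "SELECT SCHEMA_VERSION, VERSION_COMMENT FROM VERSION;"
```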
10-20-2019
02:22 AM
@soumya When I ask you to share output there is a valid reason for that, but when you keep sending me errors I can't figure out what the issue could be, because the errors are different each time. In all my previous posts I requested outputs or screenshots, such as the HS2 connect string from the Ambari UI, /etc/hosts, the output of $ hostname -f, etc. What is the backend database you are using for Hive?
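To make gathering those details easier, here is a rough set of commands, assuming a standard HDP node layout:

```bash
# Collect the basics requested above
hostname -f        # FQDN that HiveServer2 should be reachable on
cat /etc/hosts     # confirm the FQDN maps to the right IP

# Transport mode and thrift port from the client config (path is an assumption)
grep -E -A 1 "hive.server2.thrift.port|hive.server2.transport.mode" /etc/hive/conf/hive-site.xml
```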
10-19-2019
07:08 AM
@soumya Can you share the output of the below?

Ambari UI --> Hive --> Summary (tab) --> "HiveServer2 JDBC URL" --> click the copy icon at the right side of the URL to copy it.

I see you are using port 10001? What is the output of $ hostname -f from your Linux console? You should use that derived value, i.e. soumya.com, in the connect string.
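A small sketch of building and testing that connect string from the FQDN, assuming HS2 runs in binary mode on the default port 10000 (10001 is normally the HTTP transport port) and that hive/hive are valid credentials:

```bash
# Derive the FQDN and plug it into a binary-mode HiveServer2 URL
HS2_HOST=$(hostname -f)
beeline -n hive -p hive -u "jdbc:hive2://${HS2_HOST}:10000/default"
```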
10-19-2019
01:51 AM
@soumya Following on from my previous responses, can you get the connection string as below?

Ambari UI --> Hive --> Summary (tab) --> "HiveServer2 JDBC URL" --> click the copy icon at the right side of the URL to copy it, and then try this URL with beeline once. At the prompt, prefix the copied URL with beeline:

# beeline jdbc:hive2://osaka.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://osaka.com:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Enter username for jdbc:hive2://osaka.com:2181/default: hive
Enter password for jdbc:hive2://osaka.com:2181/default: ****
19/10/19 09:40:50 [main]: INFO jdbc.HiveConnection: Connected to osaka.com:10000
Connected to: Apache Hive (version 3.1.0.3.1.0.0-78)
Driver: Hive JDBC (version 3.1.0.3.1.0.0-78)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 3.1.0.3.1.0.0-78 by Apache Hive
0: jdbc:hive2://osaka.com:2181/default> show tables;
INFO : Compiling command(queryId=hive_20191019094333_357978cc-2f7d-433f-badc-6d8f20178361): show tables
..........
INFO : Completed executing command(queryId=hive_20191019094333_357978cc-2f7d-433f-badc-6d8f20178361); Time taken: 11.262 seconds
INFO : OK
+-------------+
|  tab_name   |
+-------------+
| hello_acid  |
+-------------+
1 row selected (77.143 seconds)
0: jdbc:hive2://osaka.com:2181/default>

Specifying a database in the connect string:

# beeline -u "jdbc:hive2://osaka.com:2181/sparktest;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"
SLF4J: Class path contains multiple SLF4J bindings.
...........
Connecting to jdbc:hive2://osaka.com:2181/sparktest;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
19/10/19 10:01:21 [main]: INFO jdbc.HiveConnection: Connected to osaka.com:10000
Connected to: Apache Hive (version 3.1.0.3.1.0.0-78)
Driver: Hive JDBC (version 3.1.0.3.1.0.0-78)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 3.1.0.3.1.0.0-78 by Apache Hive
0: jdbc:hive2://osaka.com:2181/sparktest> show tables;

Methods of HS2 connections

HiveServer2 with a JDBC client (such as Beeline) is the primary way to access Hive. It uses SQL standard-based authorization or Ranger-based authorization. However, if you wish to access Hive data from other applications such as Pig, use the Hive CLI and storage-based authorization for those use cases.

HS2 using binary transport mode:
beeline jdbc:hive2://<ZOOKEEPER QUORUM>/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2

HS2 using HTTP transport mode:
beeline jdbc:hive2://<ZOOKEEPER QUORUM>/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;transportMode=http;httpPath=cliservice

HS2 Interactive (note zooKeeperNamespace=hiveserver2-hive2 in the URL below):
beeline jdbc:hive2://<ZOOKEEPER QUORUM>/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-hive2

HS2 scripts with Beeline: you can make use of the -f option.
beeline -u "jdbc:hive2://master01:2181,master02:2181,master03:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2" -f file.hql

The above methods should succeed.
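To illustrate the -f option, a throwaway script could look like the sketch below; the file name file.hql and the ZooKeeper quorum come from the example above, while the queries inside are just an assumption for demonstration:

```bash
# Write a small HQL script and run it through ZooKeeper service discovery
cat > file.hql <<'EOF'
-- list databases, then the tables in sparktest
SHOW DATABASES;
USE sparktest;
SHOW TABLES;
EOF

beeline -u "jdbc:hive2://master01:2181,master02:2181,master03:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2" -f file.hql
```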
10-18-2019
02:08 PM
1 Kudo
@ituni Can you share your /etc/hosts from the 2 servers? Many a time members forget very trivial things. Your /etc/hosts on uvmu02.uvmu0x.com should look like this (don't comment these entries out):

127.0.0.1       localhost
127.0.1.1       uvmu02.uvmu0x.com   # This one

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

# uVMU0X
192.168.56.101  uvmu01.uvmu0x.com   uvmu01
192.168.56.102  uvmu02.uvmu0x.com   uvmu02
192.168.56.103  uvmu03.uvmu0x.com   uvmu03

Assuming the Ambari server is uvmu01.uvmu0x.com, firstly I would like you to check that the Ambari server and agent are of the same version:

# rpm -qa | grep ambari

On server uvmu02.uvmu0x.com, ensure you have disabled the firewall:

$ sudo ufw disable

Confirm the firewall is stopped and disabled on system startup:

$ sudo ufw status

$ sudo apt-get install ambari-agent

Share your ambari-agent.ini; it should look like this on all nodes, even the one hosting Ambari. I am wondering why your second server has hostname=uvmu02.uvmu0x.com in its ambari-agent.ini? The [server] hostname must point at the Ambari server:

[server]
hostname=uvmu01.uvmu0x.com
url_port=8440
secured_url_port=8441
connect_retry_delay=10
max_reconnect_retry_delay=30

$ ambari-agent start

Now go back to the Ambari UI and choose manual registration; your uvmu02.uvmu0x.com should then register successfully. Please revert.
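As a rough sketch of that fix, assuming the agent config lives at the default path /etc/ambari-agent/conf/ambari-agent.ini:

```bash
# Point the [server] hostname at the Ambari server (uvmu01), not at the local node
sudo sed -i 's/^hostname=.*/hostname=uvmu01.uvmu0x.com/' /etc/ambari-agent/conf/ambari-agent.ini

# Restart the agent and watch the registration attempt in its log
sudo ambari-agent restart
sudo tail -f /var/log/ambari-agent/ambari-agent.log
```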
10-18-2019
01:11 PM
@Jena If your problem was resolved with the solution proposed, please take some time to accept the answer so other members can reference it for similar issues, and to reward the member who spent her/his time responding to your question. This ensures that all questions get attention.
10-18-2019
01:04 PM
@soumya Have you tried the below method?

$ beeline -u jdbc:hive2://osaka.com:10000 -n hive -p hive
.........
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://osaka.com:10000
Connected to: Apache Hive (version 3.1.0.3.1.0.0-78)
Driver: Hive JDBC (version 3.1.0.3.1.0.0-78)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 3.1.0.3.1.0.0-78 by Apache Hive
0: jdbc:hive2://osaka.com:10000> show databases;
INFO : Compiling command(queryId=hive_20191018215530_3f94b050-d36c-46c9-9582-40a0fef9b6e2): show databases
...........
INFO : Completed executing command(queryId=hive_20191018215530_3f94b050-d36c-46c9-9582-40a0fef9b6e2); Time taken: 0.037 seconds
INFO : OK
+---------------------+
|    database_name    |
+---------------------+
| default             |
| information_schema  |
| sparktest           |
| sys                 |
+---------------------+
4 rows selected (0.392 seconds)
0: jdbc:hive2://osaka.com:10000>
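If that direct connection is refused, a quick sanity check (assuming the default binary port 10000) is to confirm HiveServer2 is actually listening on the host:

```bash
# Run on the HiveServer2 host; either command shows whether port 10000 is bound
ss -tnlp | grep 10000
# or: netstat -tnlp | grep 10000
```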
10-17-2019
09:08 PM
@irfangk1 It's NOT a requirement but a best practice, so that you have better control and can filter who has access to your cluster; it sits on the edge, not inside your cluster. Firewall your cluster by deploying Knox, like a DMZ in a classic network. 2 masters and 6 data nodes is fine, so one of the 3 ZK servers will sit on a data node, right? Here is a document that should inspire you: setup of edge node in HDP cluster.
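For illustration only, once Knox fronts the cluster, clients hit the gateway instead of the service ports directly; a hedged example against a default topology, where the host, credentials, and topology name are placeholders:

```bash
# List an HDFS directory through the Knox gateway rather than talking to the NameNode directly
curl -k -u guest:guest-password \
  "https://knox-host.example.com:8443/gateway/default/webhdfs/v1/tmp?op=LISTSTATUS"
```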