Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1017 | 06-04-2025 11:36 PM |
| | 1569 | 03-23-2025 05:23 AM |
| | 787 | 03-17-2025 10:18 AM |
| | 2844 | 03-05-2025 01:34 PM |
| | 1865 | 03-03-2025 01:09 PM |
10-18-2019
01:11 PM
@Jena If the proposed solution resolved your problem, please take a moment to accept the answer so other members can reference it for similar issues, and to reward the member who spent their time responding to your question. This ensures that all questions get attention.
10-18-2019
01:04 PM
@soumya Have you tried the method below?

```
$ beeline -u jdbc:hive2://osaka.com:10000 -n hive -p hive
.........
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://osaka.com:10000
Connected to: Apache Hive (version 3.1.0.3.1.0.0-78)
Driver: Hive JDBC (version 3.1.0.3.1.0.0-78)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 3.1.0.3.1.0.0-78 by Apache Hive
0: jdbc:hive2://osaka.com:10000> show databases;
INFO : Compiling command(queryId=hive_20191018215530_3f94b050-d36c-46c9-9582-40a0fef9b6e2): show databases
...........
INFO : Completed executing command(queryId=hive_20191018215530_3f94b050-d36c-46c9-9582-40a0fef9b6e2); Time taken: 0.037 seconds
INFO : OK
+---------------------+
|    database_name    |
+---------------------+
| default             |
| information_schema  |
| sparktest           |
| sys                 |
+---------------------+
4 rows selected (0.392 seconds)
0: jdbc:hive2://osaka.com:10000>
```
10-17-2019
09:08 PM
@irfangk1 It's NOT a requirement, but it is a best practice that gives you better control over, and filtering of, who has access to your cluster. The edge node, not your firewall, shields the cluster: deploying Knox there works like a DMZ in a classic network. 2 masters and 6 data nodes is fine, so one of the 3 ZK masters will sit on a data node, right? Here is a document that should inspire your setup of an edge node in an HDP cluster.
10-17-2019
12:42 PM
@irfangk1 Edge nodes are the interface between the Hadoop cluster and the outside network. They're also often used as staging areas for data being transferred into the Hadoop cluster. Installing an edge node is as easy as adding a node to the cluster; the only difference is that on the edge node you deploy ONLY client software, e.g. Sqoop, Pig, HDFS, YARN, HBase, Spark, ZooKeeper, Hive, or Hue clients, so that you can, for example, run HDFS commands from the edge node.

To enable communication between the outside network and the Hadoop cluster, edge nodes need to be multi-homed: connected both to the private subnet of the Hadoop cluster and to the corporate network. A multi-homed computer is one that has dedicated connections to multiple networks, which is a practical illustration of why edge nodes are perfectly suited for interaction with the world outside the Hadoop cluster. Keeping your Hadoop cluster in its own private subnet is an excellent practice, so these edge nodes serve as a controlled window into the cluster.

If you're using Knox for perimeter security, then all client software should reside on a dedicated Knox gateway machine to which end users can submit their requests.

It's good practice to divide the cluster into master nodes, worker nodes, edge node(s), and a management node. Services such as the NameNode, ZooKeeper, YARN ResourceManager, and Secondary NameNode usually run on the master node machines. Worker nodes (aka DataNodes) can be further divided into two categories: those running HDFS and YARN, and those running Storm, Kafka, and other components. A minimum best practice is to have 3-5 master nodes and >5 data nodes. HTH
10-17-2019
08:03 AM
@Jena Great patience, never give up 😁😂. I think your problem is now resolved. I suspect an incompatibility problem with IE 11; I need to check the documentation later today. Please take some time to accept my response as a valid answer so other members can use it to resolve the same issue. Happy hadooping
10-17-2019
07:57 AM
1 Kudo
@ThanhP There you go, a happy member 😁😂. Please take some time and accept my solution so other members can use it to solve the same problem. Happy hadooping !!!
10-16-2019
09:25 PM
1 Kudo
@ThanhP As reiterated, only Adapter1 should be active, set to a bridged adapter, with Adapter2, Adapter3, and Adapter4 deactivated. If so, restart your sandbox, and on the splash UI [black window] you should see a class C IP address, a 192.168.x.x. Use that: http://192.168.x.x:9995 and let me know.
10-16-2019
12:42 PM
@saivenkatg55 Did you set these parameters? Configure the following environment properties for MIT Kerberos:

- KRB5_CONFIG: path to the Kerberos ini file.
- KRB5CCNAME: path to the Kerberos credential cache file.

Please revert.
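A minimal sketch of setting those two variables in a shell session or profile; the paths below are assumptions, so substitute the locations actually used in your environment:

```shell
# Hypothetical paths -- adjust to where your krb5.conf and ticket cache really live.
export KRB5_CONFIG=/etc/krb5.conf        # Kerberos configuration (ini) file
export KRB5CCNAME=/tmp/krb5cc_sandbox    # Kerberos credential cache file

# Verify the variables are visible to child processes such as kinit/beeline.
echo "KRB5_CONFIG=$KRB5_CONFIG"
echo "KRB5CCNAME=$KRB5CCNAME"
```

Any process started from this shell (kinit, beeline, etc.) will then pick up the same Kerberos configuration and credential cache.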
10-16-2019
12:34 PM
1 Kudo
@ThanhP The log says the opposite: "Started ServerConnector@7692d9cc{HTTP/1.1,[http/1.1]}{0.0.0.0:9995}". I think you must be hitting the wrong port or IP. How have you set up your network, Bridged or NAT? Can you share the output of $ ifconfig? If you are launching Zeppelin through Ambari, log in to Ambari (operations console) with the amy_ds/amy_ds username/password combination at http://sandbox-hdp.hortonworks.com:9995. Please share your feedback.
10-15-2019
10:32 PM
@Jena I think you are on the right track but some steps away. You need to log in using the Web CLI as root/hadoop and first change the root password. After successfully doing that, while still logged in as root, run # ambari-admin-password-reset. This launches a series of scripts in the background and triggers a restart of the Ambari server and some components (some are in maintenance mode by default). ONLY after that can you access the DAS UI. Please let me know if that's clear enough. HTH