Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 549 | 06-04-2025 11:36 PM |
| | 1095 | 03-23-2025 05:23 AM |
| | 560 | 03-17-2025 10:18 AM |
| | 2100 | 03-05-2025 01:34 PM |
| | 1316 | 03-03-2025 01:09 PM |
10-17-2019
12:42 PM
@irfangk1 Edge nodes are the interface between the Hadoop cluster and the outside network, and they are often used as staging areas for data being transferred into the cluster. Installing an edge node is as easy as adding a node to the cluster; the only difference is that on the edge node you deploy client software only (e.g., Sqoop, Pig, HDFS, YARN, HBase, Spark, ZooKeeper, Hive, or Hue), which lets you, for example, run HDFS commands from the edge node.

To enable communication between the outside network and the Hadoop cluster, edge nodes need to be multi-homed into the private subnet of the Hadoop cluster as well as into the corporate network. A multi-homed computer is one that has dedicated connections to multiple networks, which is why edge nodes are perfectly suited for interaction with the world outside the Hadoop cluster. Keeping your Hadoop cluster in its own private subnet is an excellent practice, so these edge nodes serve as a controlled window into the cluster. If you're using Knox for perimeter security, then all client software should reside on a dedicated Knox gateway machine to which end users can submit their requests.

It's good practice to divide the cluster into master nodes, worker nodes, edge node(s), and a management node. Services such as the NameNode, ZooKeeper, YARN ResourceManager, and Secondary NameNode usually run on the master node machines. Worker nodes (aka DataNodes) should be further divided into two categories: those running HDFS and YARN, and those running Storm, Kafka, and other components. A minimum best practice is to have 3-5 master nodes and more than 5 data nodes. HTH
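As a minimal sketch of what "client software only" means in practice (the file names and paths below are illustrative assumptions, not from this thread; an HDP-style layout is assumed), the edge node carries just client binaries and configs and forwards all work to the cluster:

```bash
# On the edge node only client packages and configs are installed
ls /usr/hdp/current/hadoop-client/conf    # core-site.xml / hdfs-site.xml point at the cluster

# Stage a local file into HDFS from the edge node; no Hadoop daemons run here,
# the client simply talks to the NameNode/DataNodes inside the private subnet
hdfs dfs -mkdir -p /landing/raw
hdfs dfs -put /data/staging/events.csv /landing/raw/

# Jobs are submitted from the edge node the same way
yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar pi 10 100
```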
10-17-2019
08:03 AM
@Jena Great patience, never give up 😁😂. I think your problem is now resolved. I suspect an incompatibility problem with IE 11; I need to check the documentation later today. Please take some time to accept my response as the valid answer so other members can use it to resolve the same issue. Happy hadooping!
10-17-2019
07:57 AM
1 Kudo
@ThanhP There you go, a happy member 😁😂 Please take some time and accept my solution so other members can use it to solve the same problem. Happy hadooping!!!
10-16-2019
09:25 PM
1 Kudo
@ThanhP As reiterated, only Adapter 1 should be active (attached to Bridged Adapter), with Adapter 2, Adapter 3, and Adapter 4 not activated. If so, restart your sandbox, and on the splash UI [black window] you should see a class C IP address, i.e. 192.168.x.x. Use http://192.168.x.x:9995 and let me know.
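If you want to double-check the adapter setup from the host side, here is a sketch, assuming the sandbox runs in VirtualBox (the VM name below is a placeholder):

```bash
# Replace the placeholder name with the exact VM name from your VirtualBox manager
VBoxManage showvminfo "Hortonworks Sandbox HDP" | grep -i "^NIC"
# Expected: NIC 1 attached to a Bridged Interface, NIC 2-4 reported as disabled
```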
10-16-2019
12:42 PM
@saivenkatg55 Did you set these parameters? Configure the following environment properties for MIT Kerberos:

- KRB5_CONFIG: path to the Kerberos configuration (ini) file.
- KRB5CCNAME: path to the Kerberos credential cache file.

Please revert.
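For example (a sketch; the paths below are common defaults and my assumption, so use the locations from your own installation):

```bash
# Point MIT Kerberos at the client config and a writable credential cache
export KRB5_CONFIG=/etc/krb5.conf
export KRB5CCNAME=/tmp/krb5cc_$(id -u)

# Sanity check: obtain and list a ticket
kinit your_principal@YOUR.REALM
klist
```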
10-16-2019
12:34 PM
1 Kudo
@ThanhP The log says the opposite: "Started ServerConnector@7692d9cc{HTTP/1.1,[http/1.1]}{0.0.0.0:9995}". I think you may be hitting the wrong port or IP. How have you set up your network, Bridged or NAT? Can you share the output of $ ifconfig? If you are launching Zeppelin through Ambari, log in to Ambari (the operations console) with the amy_ds/amy_ds username/password combination and use http://sandbox-hdp.hortonworks.com:9995. Please share your feedback.
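As a quick sanity check (a sketch; run the first command inside the sandbox and the second from your host, substituting the IP reported by ifconfig):

```bash
# Inside the sandbox: confirm something is actually listening on 9995
netstat -tlnp | grep 9995     # or: ss -tlnp | grep 9995

# From the host: confirm the port answers over the network
curl -I http://192.168.x.x:9995/    # replace 192.168.x.x with the sandbox IP
```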
10-15-2019
10:32 PM
@Jena I think you are on the right track, but some steps away. You need to log in using the Web CLI as root/hadoop first and change the root password. After successfully doing that, while still logged in as root, run # ambari-admin-password-reset. This will launch a series of scripts in the background and trigger a restart of the Ambari server and some components (some are in maintenance mode by default). Only after that can you access the DAS UI. Please let me know if that's clear enough. HTH
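Roughly, the sequence looks like this (a sketch; the SSH port is the usual sandbox default and is my assumption, and the Web CLI works just as well):

```bash
# SSH (or use the Web CLI) into the sandbox as root; the initial password is hadoop
ssh -p 2222 root@sandbox-hdp.hortonworks.com

# Change the root password when prompted (forced on first login)
passwd

# Reset the Ambari admin password; this restarts ambari-server in the background
ambari-admin-password-reset
```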
10-15-2019
11:56 AM
@Vij Can you share your zookeeper_client_jaas.conf and zookeeper_jaas.conf? They should look like the following:

zookeeper_client_jaas.conf

```
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=false
  useTicketCache=true;
};
```

zookeeper_jaas.conf

```
Server {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/zk.service.keytab"
  principal="zookeeper/<host>@[REALM]";
};
```

Please compare and let me know.
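While you are at it, it is worth verifying the keytab itself (a quick check, assuming the keytab path from the config above):

```bash
# List the principals stored in the ZooKeeper service keytab
klist -kt /etc/security/keytabs/zk.service.keytab

# Optionally confirm the keytab can actually authenticate
kinit -kt /etc/security/keytabs/zk.service.keytab zookeeper/<host>@<REALM>
```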
10-15-2019
09:36 AM
@saivenkatg55 I can see your NameNode is in safe mode. Can you do the following as root:

```
# su - hdfs

# Validate what I saw in the log
$ hdfs dfsadmin -safemode get

# Resolve the lockout
$ hdfs dfsadmin -safemode leave

# Validate safe mode is off
$ hdfs dfsadmin -safemode get
```

That should resolve the issue! Then also share /var/log/kadmind.log.
10-14-2019
09:15 PM
@saivenkatg55 Share /var/log/kadmind.log; you can use Big File Transfer.