Member since
04-03-2019
962
Posts
1743
Kudos Received
146
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 11543 | 03-08-2019 06:33 PM |
| | 4917 | 02-15-2019 08:47 PM |
| | 4173 | 09-26-2018 06:02 PM |
| | 10605 | 09-07-2018 10:33 PM |
| | 5681 | 04-25-2018 01:55 AM |
06-24-2016
06:04 PM
2 Kudos
@Mohana Murali Gurunathan I can see that during startup it is trying to copy /usr/hdp/2.3.4.7-4/hadoop/mapreduce.tar.gz to /hdp/apps/2.3.4.7-4/mapreduce/mapreduce.tar.gz on HDFS, and the write is failing with a connection refused error. If you look at the logs carefully, you can see that the datanode is trying to connect to localhost:8020 instead of the namenode hostname, which fails as expected:

"exception": "ConnectException", "javaClassName": "java.net.ConnectException", "message": "Call From datanode.sample.com/10.250.98.101 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused"

Can you please check the /etc/hosts file on all the datanodes just to ensure that you have added the correct entries for the namenode?
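As a sketch, a correct /etc/hosts entry on each datanode would look something like the fragment below. The hostnames and IPs here are made up for illustration (only datanode.sample.com/10.250.98.101 appears in the log above); substitute your actual namenode address:

```
# /etc/hosts on each datanode -- hypothetical addresses for illustration
10.250.98.50    namenode.sample.com    namenode
10.250.98.101   datanode.sample.com    datanode
```

The key point is that the namenode hostname used in your HDFS configuration must resolve to the namenode's real IP on every datanode, not to localhost.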
06-24-2016
05:30 PM
4 Kudos
@Vishal Jain It should be okay as long as your hostnames resolve to the new IP addresses. Please run service checks from the Ambari UI for the important HDP components to verify that everything is working fine.
06-24-2016
06:36 AM
8 Kudos
This tutorial has been successfully tried on HDP-2.4.0.0 and Ambari 2.2.1.0. I have my HDP cluster Kerberized, and Ambari has been configured for SSL. Note - The steps are the same for Ambari with or without SSL.

Please follow the steps below for configuring the Pig View on a Kerberized HDP cluster.

Step 1 - Configure your Ambari Server for Kerberos using steps 1 to 5 of the article below.
https://community.hortonworks.com/articles/40635/configure-tez-view-for-kerberized-hdp-cluster.html

Step 2 - Add the properties below to core-site.xml via the Ambari UI and restart the required services.

Note - If you are running Ambari Server as the root user, then add:
hadoop.proxyuser.root.groups=*
hadoop.proxyuser.root.hosts=*

If you are running Ambari Server as a non-root user, then add:
hadoop.proxyuser.<ambari-server-user>.groups=*
hadoop.proxyuser.<ambari-server-user>.hosts=*
Please replace <ambari-server-user> with the user running Ambari Server.

I'm assuming that your Ambari server principal is ambari-server@REALM.COM; if not, replace 'ambari-server' with your principal's user part:
hadoop.proxyuser.ambari-server.groups=*
hadoop.proxyuser.ambari-server.hosts=*

Step 3 - Create a user directory on HDFS for the user accessing the Pig View. For example, in my case I'm using the admin user to access the Pig View:

sudo -u hdfs hadoop fs -mkdir /user/admin
sudo -u hdfs hadoop fs -chown admin:hdfs /user/admin
sudo -u hdfs hadoop fs -chmod 755 /user/admin

Step 4 - Go to the Admin tab --> Click on Manage Ambari --> Views --> Edit the Pig View (create a new one if it doesn't exist already) and configure its settings.

Note - You may need to modify values as per your environment settings!

After the above steps, you should be able to access your Pig View without any issues. If you receive any error(s), please check /var/log/ambari-server/ambari-server.log for more details and troubleshooting.
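For reference, the proxyuser properties from Step 2 would appear in core-site.xml roughly like this (a sketch assuming Ambari Server runs as root; adjust the user part to match your setup):

```xml
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
```

In Ambari these are added as custom core-site properties rather than edited in the file directly.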
06-23-2016
05:27 AM
3 Kudos
@milind pandit
I believe this feature is on the roadmap for YARN. Currently we don't have any report to keep track of historical performance metrics. Note - You can create your own script to get the required values from the YARN REST API. Please have a look at https://community.hortonworks.com/articles/16151/yarn-que-utilization-ambari-widget.html to add a queue utilization widget in Ambari. [ Not an exact solution to the question, but it lets you track queue utilization ]
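As a rough sketch of such a script, the snippet below extracts per-queue capacity and usage from the JSON that the ResourceManager scheduler endpoint (GET http://&lt;rm-host&gt;:8088/ws/v1/cluster/scheduler) returns for the capacity scheduler. The embedded sample response is a trimmed, made-up example of that payload's shape; in a real script you would fetch it over HTTP instead:

```python
import json

# Trimmed, made-up sample of a ResourceManager scheduler response
# (shape follows the capacity-scheduler REST payload; in practice
# fetch it with urllib.request.urlopen on the RM endpoint).
sample = '''
{"scheduler": {"schedulerInfo": {"type": "capacityScheduler",
  "queues": {"queue": [
    {"queueName": "default", "capacity": 70.0, "usedCapacity": 25.0},
    {"queueName": "llap",    "capacity": 30.0, "usedCapacity": 5.0}
  ]}}}}
'''

def queue_utilization(payload):
    """Return {queue_name: (capacity, usedCapacity)} from a scheduler response."""
    info = json.loads(payload)["scheduler"]["schedulerInfo"]
    return {q["queueName"]: (q["capacity"], q["usedCapacity"])
            for q in info["queues"]["queue"]}

for name, (cap, used) in queue_utilization(sample).items():
    print(f"{name}: {used}% used of {cap}% capacity")
```

Polling this endpoint on a schedule and storing the values is one way to build the historical record that YARN itself does not keep.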
06-23-2016
12:55 AM
5 Kudos
@Xiaobing Zhou There are two methods for fencing: shell and sshfence. In your example, shell fencing is used; the shell command in your configuration always returns true, so fencing will succeed whenever there is an issue with the current active NN. For sshfence, you need to set up passwordless SSH from the active NN to the standby and vice versa. Please read more about fencing at the link below (refer to dfs.ha.fencing.methods): https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
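For reference, the two methods are configured in hdfs-site.xml roughly like this (a sketch following the Apache HA documentation linked above; the key path is an example, and listing shell(/bin/true) after sshfence makes fencing fall through to an always-true shell command if SSH fencing fails):

```xml
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence
shell(/bin/true)</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/hdfs/.ssh/id_rsa</value>
</property>
```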
06-23-2016
12:42 AM
7 Kudos
@Xiaobing Zhou
One major reason could be this: suppose you are getting these errors on NameNode X, which was active. It became unresponsive for some reason (maybe network connectivity, maybe it was busy processing datanode reports, or something else), could not communicate with ZKFC, and fencing happened, so Y is now your active NN. When X becomes responsive again, it still assumes it is the active NN and tries to send write requests to the journal nodes. Since Y is already active, the last promised epoch value has been increased, and the journal nodes will simply reject the write requests from X. Please read detailed information about this at the link below. https://community.hortonworks.com/articles/27225/how-qjm-works-in-namenode-ha.html Hope this information helps. Happy Hadooping!! 🙂
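The epoch mechanism can be sketched in a few lines of Python (a toy model for intuition, not Hadoop code): each journal node remembers the highest epoch it has promised, and rejects writes stamped with an older epoch.

```python
class JournalNode:
    """Toy model of a QJM journal node's epoch check (not real Hadoop code)."""

    def __init__(self):
        self.last_promised_epoch = 0

    def new_epoch(self, epoch):
        # A namenode becoming active proposes a higher epoch than any seen so far.
        if epoch <= self.last_promised_epoch:
            raise IOError(f"epoch {epoch} <= last promised {self.last_promised_epoch}")
        self.last_promised_epoch = epoch

    def journal(self, epoch, record):
        # Writes stamped with a stale epoch come from a fenced namenode: reject.
        if epoch < self.last_promised_epoch:
            raise IOError(f"stale writer: epoch {epoch} < {self.last_promised_epoch}")
        return f"wrote {record!r} at epoch {epoch}"

jn = JournalNode()
jn.new_epoch(1)              # X becomes active with epoch 1
jn.new_epoch(2)              # failover: Y takes over, epoch bumped to 2
print(jn.journal(2, "tx"))   # Y's write succeeds
try:
    jn.journal(1, "tx")      # X wakes up and writes with its old epoch
except IOError as e:
    print("rejected:", e)    # journal node rejects the stale writer
```

This is exactly why X's writes fail once Y has been promoted: the promise is monotonic, so the old active can never sneak a write in after a failover.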
06-23-2016
12:06 AM
4 Kudos
Stop order:
1. Stop Datanodes
2. Stop Namenodes
3. Stop ZKs
4. Stop Journal nodes

Start order:
1. Start ZKs
2. Start JNs
3. Start Datanodes
4. Start NNs
06-22-2016
06:37 AM
3 Kudos
@sirisha A work-preserving ResourceManager restart ensures that applications continue to function during a ResourceManager restart with minimal impact on end users. The overall concept is that the ResourceManager preserves application and queue state in a pluggable state store, and reloads that state on restart. While the ResourceManager is down, ApplicationMasters and NodeManagers continuously poll the ResourceManager until it comes back up. If you have automatic failover enabled, this polling time is reduced and your jobs resume in a short amount of time, so I would suggest setting both options to true in the configuration. Hope this information helps.
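For reference, the relevant yarn-site.xml properties look roughly like this (a sketch; property names follow the Apache YARN ResourceManager restart/HA documentation, and your store class and other HA settings will vary by cluster):

```xml
<property>
  <name>yarn.resourcemanager.recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.work-preserving-recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
```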
06-22-2016
12:09 AM
4 Kudos
@Rich Raposa - In addition to what @Takahiko Saito said, you can verify this by running the Hive CLI as:

hive --hiveconf hive.execution.engine=mr

Then press Ctrl+C. It should not take long to exit, since we are using the "mr" execution engine. Hope this information helps.
06-20-2016
04:43 AM
5 Kudos
This tutorial has been successfully tried on HDP-2.4.0.0 and Ambari 2.2.1.0. I have my HDP cluster Kerberized, and Ambari has been configured for SSL. Note - The steps are the same for Ambari with or without SSL.

Please follow the steps below for configuring the Hive View on a Kerberized HDP cluster.

Step 1 - Please configure your Ambari Server for Kerberos using steps 1 to 5 of the article below.
https://community.hortonworks.com/articles/40635/configure-tez-view-for-kerberized-hdp-cluster.html

Step 2 - Add the properties below to core-site.xml via the Ambari UI and restart the required services.

Note - If you are running Ambari Server as the root user, then add:
hadoop.proxyuser.root.groups=*
hadoop.proxyuser.root.hosts=*

If you are running Ambari Server as a non-root user, then add:
hadoop.proxyuser.<ambari-server-user>.groups=*
hadoop.proxyuser.<ambari-server-user>.hosts=*
Please replace <ambari-server-user> with the user running Ambari Server.

I'm assuming that your Ambari server principal is ambari-server@REALM.COM; if not, replace 'ambari-server' with your principal's user part:
hadoop.proxyuser.ambari-server.groups=*
hadoop.proxyuser.ambari-server.hosts=*

Step 3 - Create a user directory on HDFS for the user accessing the Hive View. For example, in my case I'm using the admin user to access the Hive View:

sudo -u hdfs hadoop fs -mkdir /user/admin
sudo -u hdfs hadoop fs -chown admin:hdfs /user/admin
sudo -u hdfs hadoop fs -chmod 755 /user/admin

Step 4 - Go to the Admin tab --> Click on Manage Ambari --> Views --> Edit the Hive View (create a new one if it doesn't exist already) and configure its settings.

Note - You may need to modify values as per your environment settings!

After the above steps, you should be able to access your Hive View without any issues. If you receive any error(s), please check /var/log/ambari-server/ambari-server.log for more details and troubleshooting.

Please comment if you have any feedback/questions/suggestions. Happy Hadooping!!