Member since: 09-25-2015
Posts: 46
Kudos Received: 139
Solutions: 16

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5394 | 11-27-2017 07:37 PM |
| | 4981 | 09-18-2017 06:28 PM |
| | 2757 | 09-08-2017 06:40 PM |
| | 1381 | 07-17-2017 07:13 PM |
| | 1258 | 06-29-2017 06:18 PM |
04-09-2018
08:43 PM
Hi @Alexander Schätzle, we had a JIRA tracking the same issue: https://issues.apache.org/jira/browse/YARN-7269. The fix went into HDP 2.6.3.0, but I am not sure about the exact build number. Maybe you can try with the HDP-2.6.4.0 repo.
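A quick way to confirm which Hadoop/HDP build your cluster is actually running (these commands assume an HDP install):

```bash
# Print the Hadoop build YARN is running against; on HDP the output
# includes the stack version string (e.g. 2.6.3.0-<build>).
yarn version

# On HDP nodes, hdp-select also lists the installed stack versions.
hdp-select versions
```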
04-09-2018
06:37 PM
Hi @Venkata Sudheer Kumar M, you can fetch the total containers allocated to an application using the YARN CLI:
1. Fetch the application attempt for the application: yarn applicationattempt -list <applicationID>
2. Fetch all containers for that application attempt: yarn container -list <appattemptID>
Step 2 also shows the NodeManager on which each container was launched. Given that NodeManager, you can obtain the vcores and memory allocated to a container via the REST API: curl http://<NodeManager address>:<port>/ws/v1/node/containers/<containerID>
Hope this is helpful to you! A combined sketch follows below.
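Putting the steps together, here is a minimal shell sketch; the application ID, the NodeManager host, and the 8042 webapp port are placeholders to adapt to your cluster:

```bash
#!/usr/bin/env bash
# Sketch: list the containers of a YARN application, then query one
# container's allocated vcores/memory from its NodeManager's REST API.
APP_ID="application_1234567890123_0001"   # placeholder application ID

# 1. Latest application attempt for the application.
ATTEMPT_ID=$(yarn applicationattempt -list "$APP_ID" \
  | awk '/appattempt_/{print $1}' | tail -n 1)

# 2. Containers of that attempt; the output includes the NodeManager
#    host and HTTP address for each container.
yarn container -list "$ATTEMPT_ID"

# 3. Allocated resources for one container, from its NodeManager's
#    REST API (8042 is the default NM webapp port).
curl -s "http://nm-host.example.com:8042/ws/v1/node/containers/container_1234567890123_0001_01_000002"
```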
11-27-2017
07:37 PM
1 Kudo
Hi @Michael Bronson, all service logs can be found at the location set in yarn-env.sh (under the Hadoop conf files, usually /etc/hadoop/conf/) on the respective nodes. So please check the location set in yarn-env.sh on the master02 machine; once we see the logs, it will be easier to figure out the exact reason for the failure. Example from yarn-env.sh: export YARN_LOG_DIR=/grid/0/log/yarn/
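For a quick check on the node itself, something like this (the conf path and log directory are the usual defaults; yours may differ):

```bash
# Find where YARN writes its logs on this node...
grep YARN_LOG_DIR /etc/hadoop/conf/yarn-env.sh

# ...then look at the most recently written log files there.
ls -lt /grid/0/log/yarn/ | head
```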
09-21-2017
09:35 PM
1 Kudo
Hi @Mykyta Demeshchenko, this may be useful for you: https://community.hortonworks.com/questions/75914/user-not-found-error-when-invoking-hive.html
09-18-2017
06:28 PM
3 Kudos
Hi @PJ, all service logs can be found at the location set in yarn-env.sh (under the Hadoop conf files, usually /etc/hadoop/conf/) on the respective nodes.
Example from yarn-env.sh: export YARN_LOG_DIR=/grid/0/log/yarn/
09-08-2017
06:40 PM
11 Kudos
Hi @nur majid, in the mapred queue -list output, default capacity and maximum capacity are expressed relative to the cluster's resources, whereas current capacity is expressed relative to the queue's own capacity. Example: assume the cluster's resources are 10GB. The default queue's default capacity is 20% of the cluster's resources, which is 2GB, but the default queue can go up to 30% of the cluster's resources (its maximum capacity), which is 3GB. A current capacity of 101% is 101% of the queue's capacity (remember, the queue's capacity is 2GB), so it comes to about 2.02GB, which is still below the 3GB maximum capacity.
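The same arithmetic as a small shell sketch; the 10GB cluster and the 20%/30%/101% figures are just the illustrative numbers from above:

```bash
# Worked example: translate the queue percentages into absolute memory.
CLUSTER_MB=10240                              # 10 GB cluster
CAPACITY_MB=$(( CLUSTER_MB * 20 / 100 ))      # default capacity: 2048 MB
MAX_CAPACITY_MB=$(( CLUSTER_MB * 30 / 100 ))  # maximum capacity: 3072 MB
CURRENT_MB=$(( CAPACITY_MB * 101 / 100 ))     # current: 101% of queue capacity
echo "capacity=${CAPACITY_MB}MB max=${MAX_CAPACITY_MB}MB current=${CURRENT_MB}MB"
# current (2068 MB) is above the queue's capacity but below its maximum,
# which is why a current capacity over 100% is legal.
```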
07-17-2017
07:13 PM
5 Kudos
Hi @Anurag Mishra, if you want to change yarn-site.xml directly on the server, that is also fine, but then you need to restart the services yourself instead of using Ambari (if you use Ambari to restart the services then, as mentioned by Jay, your changes will get wiped out). WARNING: some properties in yarn-site.xml require a restart of more than one service; in that case you need to copy the change to all the service nodes and restart them. ADVANTAGE OF AMBARI: with Ambari it is a one-point change; Ambari figures out which services need the change and suggests restarting all of the required services. A manual-restart sketch follows below.
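If you do go the manual route, a rough sketch for a NodeManager node; the daemon path assumes an HDP-style layout, so adjust it to your install:

```bash
# Edit the property on every node that needs it...
sudo vi /etc/hadoop/conf/yarn-site.xml

# ...then bounce the affected daemon yourself as the yarn service user
# (yarn-daemon.sh has no restart action, so stop and start).
sudo -u yarn /usr/hdp/current/hadoop-yarn-nodemanager/sbin/yarn-daemon.sh stop nodemanager
sudo -u yarn /usr/hdp/current/hadoop-yarn-nodemanager/sbin/yarn-daemon.sh start nodemanager
```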
06-29-2017
06:18 PM
7 Kudos
Hi @Rahul Gupta, yes, a single node can host more than one ApplicationMaster when several applications are running, where each ApplicationMaster belongs to a unique application. None of the ApplicationMasters is aware of the presence of the others. More info on the AM: https://hortonworks.com/blog/apache-hadoop-yarn-concepts-and-applications/ A quick way to check this on your own cluster is sketched below.
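To see it yourself, you can ask the ResourceManager where the AM attempt of each running application is placed and look for repeated hosts; the RM address is a placeholder and the grep-based JSON parsing is only for illustration:

```bash
# Print the NodeManager hosting each running application's AM attempt;
# the same host showing up twice means co-located ApplicationMasters.
RM="http://rm-host.example.com:8088"
for APP in $(yarn application -list -appStates RUNNING \
               | awk '/application_/{print $1}'); do
  echo -n "$APP -> "
  curl -s "${RM}/ws/v1/cluster/apps/${APP}/appattempts" \
    | grep -o '"nodeHttpAddress":"[^"]*"' | tail -n 1
done
```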
06-27-2017
06:44 PM
5 Kudos
Hi @Gunjan Dhawas. For point 2 ("since it is the NodeManager which communicates with containers, can a NodeManager directly communicate with containers running on different nodes, or does it go through the RM to get container information?"): NodeManagers are basically YARN's per-node agents and take care of the individual compute nodes in a Hadoop cluster. This includes keeping up to date with the ResourceManager (RM), overseeing containers' life-cycle management, monitoring the resource usage (memory, CPU) of individual containers, tracking node health, managing logs, and running auxiliary services that may be exploited by different YARN applications.
So NodeManagers are the nodes on which containers are launched, and yes, a NodeManager directly monitors the containers on its own node and their resource consumption; it has no view of containers on other nodes. For point 1 ("the application code executing within the container then provides necessary information (progress, status etc.) to its ApplicationMaster via an application-specific protocol" — so how does the ApplicationMaster monitor the status of containers running on a different node than the ApplicationMaster?): once the ApplicationMaster has negotiated resources with the RM, it launches each container by providing a container launch specification to the NodeManager. The launch specification includes the information the container needs to communicate with the ApplicationMaster itself. Thus the ApplicationMaster gets progress/status via the application-specific protocol established through the container launch specification, regardless of which node the container runs on. A small illustration follows below.
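To see what a NodeManager tracks directly on its own node, you can query its REST API; the host is a placeholder and 8042 is the default NM webapp port:

```bash
# Every container this NodeManager is running and monitoring, with
# state and resource usage, reported by the NM itself rather than
# by the ResourceManager.
curl -s "http://nm-host.example.com:8042/ws/v1/node/containers"
```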