Member since: 12-20-2022
Posts: 52
Kudos Received: 14
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 146 | 07-31-2024 01:11 AM |
| | 256 | 04-22-2024 11:24 AM |
| | 440 | 02-08-2024 04:02 AM |
| | 1720 | 01-19-2024 01:55 AM |
04-21-2024
10:04 PM
@yagoaparecidoti If my suggestion helped, please accept it as a solution.
04-21-2024
10:02 PM
As a rule of thumb, assign 80% of the node's resources to YARN. Please go through this to modify the config according to your needs: https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html#Queue_Properties
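As a rough sketch, the NodeManager resource limits live in yarn-site.xml. The values below are illustrative for a hypothetical node with 128 GB of RAM and 32 vcores; adjust them to your own hardware:

```xml
<!-- yarn-site.xml: give roughly 80% of the node to YARN (example values) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>104857</value> <!-- ~80% of 128 GB (131072 MB) -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>26</value> <!-- ~80% of 32 vcores, rounded down -->
</property>
```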
04-16-2024
11:59 PM
1 Kudo
Hi @mike_bronson7, please check whether the RM is working fine or is down. Please also check your ZooKeeper. Check with telnet whether you can connect to the RM host from another host.
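A few diagnostic commands for the checks above; these need a live cluster, and the hostnames and RM id are placeholders (8032 is the default RM port, 2181 the default ZooKeeper client port):

```shell
# Check the ResourceManager HA state (rm1 is a placeholder RM id)
yarn rmadmin -getServiceState rm1

# Check ZooKeeper health (ruok is a built-in four-letter command; expects "imok")
echo ruok | nc zk-host.example.com 2181

# Check network connectivity to the RM from another host
telnet rm-host.example.com 8032
```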
04-16-2024
11:55 PM
1 Kudo
There could be multiple causes of this issue, but first please check the permissions of the log directory for that NodeManager.

The NodeManager enforces that the remote root log directory exists and has the correct permission settings. A warning is emitted in the NodeManager logs if the folder does not exist, or exists but with incorrect permissions (e.g. 1777). If created, the directory's owner and group will be the same as the NodeManager's user and group. The group is configurable, which is useful in scenarios where the Job History Server (JHS for short) runs under a different UNIX group than the NodeManager, which can prevent aggregated logs from being deleted.

Directly under the filesystem's root, each user has their own directory. Everything under a user's directory is created with 0770 permissions, so only that specific user and the hadoop group are allowed to access those directories and files. Each individual aggregated log file is created with 0640 permissions, providing read-write access to the user and read-only access to the hadoop group. Since the directory has 0770 permissions, members of the hadoop group are able to delete these files, which is important for the automatic deletion.
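Assuming the default remote app-log root of /tmp/logs (set by yarn.nodemanager.remote-app-log-dir), the ownership and mode bits described above can be inspected with something like:

```shell
# Check the remote root log directory itself (owner, group, mode)
hdfs dfs -ls -d /tmp/logs

# Check the per-user directories underneath it (expected mode 0770)
hdfs dfs -ls /tmp/logs
```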
02-13-2024
01:17 AM
1 Kudo
Do this to fix your issue: change the config to something like this ... add a YAML env variable "${dir_for_config}" (note that '~/' is bash expansion [0]), e.g. in conf.yml:

```yaml
dbConfig:
  dbType: H2
  driver: org.h2.Driver
  url: jdbc:h2:~/${dir_for_config}/config-service;DB_CLOSE_DELAY=-1;AUTO_RECONNECT=TRUE;DB_CLOSE_ON_EXIT=FALSE
```

Then in the CM WebUI, go to CM > Yarn Queue...> Conf, find "YARN Queue Manager Service Environment Advanced Configuration Snippet (Safety Valve)" and add:

key: dir_for_config
value: x
02-13-2024
01:08 AM
1 Kudo
Please provide the users separated by a comma followed by a space, e.g. yarn, hdfs, mapred.
02-12-2024
03:40 AM
1 Kudo
Yes, you can use it, but the problem is that you would have to allow all the users to access that directory. YARN can move local logs securely onto HDFS or a cloud-based storage, such as AWS. This allows the logs to be stored for a much longer time than they could be on a local disk, allows faster search for a particular log file, and can optionally handle compression.
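A minimal sketch of the yarn-site.xml properties that enable log aggregation; the path below is an example (it could also be a cloud URI such as an s3a:// location), and the retention value is illustrative:

```xml
<!-- yarn-site.xml: aggregate local container logs off the node (example values) -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/tmp/logs</value> <!-- example HDFS path; a cloud store URI also works -->
</property>
<property>
  <name>yarn.log-aggregation.retain-seconds</name>
  <value>604800</value> <!-- keep aggregated logs for 7 days -->
</property>
```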
02-08-2024
04:02 AM
2 Kudos
The application asks for containers, runs some part of its work in each container, and then releases it back. The 28 vcores that you are seeing are due to that. Let's say your job asks for 4 containers, each with 7 vcores; at first only two containers will run, as you have a limit of 15 vcores. But when one container is released, the job will take another container with 7 vcores, so the total number of vcores used so far becomes 21, and 28 once the fourth container has run.
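The arithmetic above can be sketched like this; the numbers follow the example in the post (4 containers of 7 vcores each under a 15-vcore limit), and this is an illustration, not YARN's actual scheduler logic:

```python
# Cumulative vs. concurrent vcore accounting for the example above.
LIMIT = 15      # max vcores usable at any one time
REQUESTED = 4   # containers the job asks for over its lifetime
VCORES = 7      # vcores per container

concurrent = LIMIT // VCORES     # containers that fit at the same time
cumulative = REQUESTED * VCORES  # vcores counted across the whole job

print(concurrent)  # 2  -> only two containers run at once (14 <= 15 vcores)
print(cumulative)  # 28 -> the total the metrics report after all containers ran
```

So the 28 is a running total across released-and-replaced containers, not a snapshot of simultaneous usage.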
01-19-2024
01:55 AM
If you set the User Limit Factor to 3.5, it will take the user over the minimum queue capacity, toward the maximum capacity of the queue. Let's say the minimum capacity of the queue is 20%: a user limit factor of 3.5 lets the user get resources up to 70%, if that is within the queue's maximum limit; otherwise the queue's maximum limit is the limit for the user.
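The calculation above can be sketched as follows; the 20% minimum capacity and 3.5 factor come from the example, while the 80% maximum capacity is a hypothetical value for illustration:

```python
# User Limit Factor arithmetic from the example above (not YARN internals).
min_capacity = 20.0       # queue minimum capacity, percent of cluster
user_limit_factor = 3.5
max_capacity = 80.0       # hypothetical queue maximum capacity, percent

# A user may grow to min_capacity * factor, capped at the queue maximum.
user_limit = min(min_capacity * user_limit_factor, max_capacity)
print(user_limit)         # 70.0 -> the user can reach 70% of the cluster
```

With a lower queue maximum, say 60%, the same factor would be capped at 60%.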