Member since: 09-29-2015
Posts: 58
Kudos Received: 34
Solutions: 8
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 970 | 09-29-2016 01:39 PM |
| | 2172 | 06-21-2016 03:10 PM |
| | 6625 | 05-16-2016 07:12 PM |
| | 8688 | 04-08-2016 02:06 PM |
| | 1158 | 04-08-2016 01:56 PM |
09-27-2016
11:49 AM
1 Kudo
@Mourad Chahri You can go to the ResourceManager UI. There is a Nodes link on the left side of the screen; clicking it lists all of your NodeManagers, and the health report may show the reason a node is listed as unhealthy. Most likely it is the YARN local dirs or log dirs hitting the disk threshold. The parameters that control this check are:

yarn.nodemanager.disk-health-checker.min-healthy-disks
yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage
yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb

Finally, if that does not reveal the issue, look in /var/log/hadoop-yarn/yarn. Your previous comment shows you were looking in /var/log/hadoop/yarn, which is not where the NodeManager log is located. I hope this helps.
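A quick way to check this from the command line (a rough sketch; the paths below are only example mount points, use whatever yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs actually point to on your cluster):

# List all nodes, including unhealthy ones; the health report shows which dirs failed the check
yarn node -list -all

# Check free space on the mounts backing the YARN local and log dirs (example paths)
df -h /hadoop/yarn/local /hadoop/yarn/log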
09-26-2016
06:26 PM
@Ahmed ELJAMI this looks like just an INFO message. I would look at the logs for each attempt to see why it is failing; you should be able to reach them from the RM UI.
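If the RM UI is not handy, the same attempt and container logs can usually be pulled with the YARN CLI (a sketch; the application id below is a placeholder):

# Find the application id if you do not have it yet
yarn application -list -appStates ALL

# Dump the aggregated container logs, including the failed attempts
yarn logs -applicationId application_1234567890123_0001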
09-23-2016
02:40 PM
@rama These directories are used by YARN for job logs. There are similar directories used for localization, the yarn local dirs. They are not distributed so much as used when containers are allocated on that node. They should get cleaned up when jobs complete, but orphaned files can be left behind after a ResourceManager or NodeManager restart. The directories are configured in YARN as a comma-separated list of locations, so you can add additional mounts/directories, but the setting applies to all NodeManagers managed by YARN. Hope this helps.
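As a rough sketch of where to look (the config and data paths assume a stock HDP layout and are only examples):

# Show the configured log and local dir lists (comma-separated)
grep -A1 'yarn.nodemanager.log-dirs' /etc/hadoop/conf/yarn-site.xml
grep -A1 'yarn.nodemanager.local-dirs' /etc/hadoop/conf/yarn-site.xml

# Look for leftover application directories from jobs that have already finished
ls /hadoop/yarn/log /hadoop/yarn/local/usercache/*/appcache 2>/dev/null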
09-12-2016
03:14 PM
Oops, your input uses slashes, not dashes. Can you try again with the format pattern adjusted to match?
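For example, if the values look like 12/Sep/2016 rather than 12-Sep-2016, the pattern needs slashes too (a sketch; the literal date is made up just to test the pattern):

# Test the slash-delimited pattern against a literal before running it on the table
hive -e "select from_unixtime(unix_timestamp('12/Sep/2016', 'dd/MMM/yyyy'));"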
09-12-2016
02:42 PM
@Mayank Pandey what does the following produce?

select log_source_time, from_unixtime(unix_timestamp(substr(log_source_time,0,11),'dd-MMM-yyyy')) as todateformat from table1 limit 2;
09-12-2016
01:15 PM
2 Kudos
@Mayank Pandey there are a few ways of converting the date. For example, something like:

select inp_dt, from_unixtime(unix_timestamp(substr(inp_dt,0,11),'dd-MMM-yyyy')) as todateformat from table;

A quick web search for Hive date conversion will turn up several variations if this one does not fit your needs.
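A quick way to sanity-check the pattern without touching a table (a sketch; the literal value is just an example):

# unix_timestamp parses the string with the given pattern; from_unixtime renders it back as yyyy-MM-dd HH:mm:ss
hive -e "select from_unixtime(unix_timestamp('12-Sep-2016', 'dd-MMM-yyyy'));"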
09-12-2016
01:01 PM
@mike harding to add to this, Tez by default initializes an AM up front, whereas MapReduce only does so at submission time. That is why you see the behavior you describe. The Tez container has a timeout setting, as you stated, and that determines how long-lived that initial AM is.
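If you want to see or tune how long that idle session AM hangs around, this is the property to look at (a sketch; the tez-site.xml path assumes a standard HDP layout, and the value is in seconds):

# How long an idle Tez session AM waits for a DAG submission before shutting itself down
grep -A1 'tez.session.am.dag.submit.timeout.secs' /etc/tez/conf/tez-site.xml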
06-29-2016
11:46 AM
@nejm hadjmbarek, from the information you provided, it seems your Oozie max concurrency has been reached for the coordinator, so you have a number of applications waiting for AM resources. Check your max AM resource percent in the Capacity Scheduler and consider raising it to .5 or .6, which lets the RM assign up to 50 or 60 percent of total resources to AM containers.
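As a sketch of where that setting lives and how to apply a change (the capacity-scheduler.xml path assumes a standard HDP layout):

# Current share of cluster resources that AMs are allowed to consume
grep -A1 'yarn.scheduler.capacity.maximum-am-resource-percent' /etc/hadoop/conf/capacity-scheduler.xml

# Push the updated scheduler config to the ResourceManager without a restart
yarn rmadmin -refreshQueues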
06-21-2016
03:10 PM
1 Kudo
@Arthur GREVIN it appears you have only one NodeManager deployed across those nodes. It is allocating 7 GB from that single node, which is why only that amount is showing. You would need to deploy NodeManagers on the other 3 existing nodes.
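You can confirm how many NodeManagers have actually registered with the ResourceManager (a quick sketch using the YARN CLI):

# One line per registered NodeManager; if only one node appears, only one NM is running
yarn node -list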
05-16-2016
07:12 PM
2 Kudos
@Tim Veil It is possible to use the Ambari REST API to change that config. Below is an example:

curl -v -u admin:admin -H "Content-Type: application/json" -H "X-Requested-By:ambari" -X PUT http://<AMBARI-SERVER>:8080/api/v1/views/CAPACITY-SCHEDULER/versions/1.0.0/instances/AUTO_CS_INSTANCE/resources/scheduler/configuration --data '{
  "Clusters": {
    "desired_config": [
      {
        "type": "capacity-scheduler",
        "tag": "version14534007568115",
        "service_config_version_note": "To test",
        "properties": {
          "yarn.scheduler.capacity.maximum-am-resource-percent": 0.2,
          "yarn.scheduler.capacity.maximum-applications": 10000,
          "yarn.scheduler.capacity.node-locality-delay": 40,
          "yarn.scheduler.capacity.resource-calculator": "org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator",
          "yarn.scheduler.capacity.queue-mappings-override.enable": false,
          "yarn.scheduler.capacity.root.acl_administer_queue": "*",
          "yarn.scheduler.capacity.root.capacity": 100,
          "yarn.scheduler.capacity.root.queues": "default",
          "yarn.scheduler.capacity.root.accessible-node-labels": "*",
          "yarn.scheduler.capacity.root.default.acl_submit_applications": "*",
          "yarn.scheduler.capacity.root.default.maximum-capacity": 100,
          "yarn.scheduler.capacity.root.default.user-limit-factor": 0.5,
          "yarn.scheduler.capacity.root.default.state": "RUNNING",
          "yarn.scheduler.capacity.root.default.capacity": 100
        }
      }
    ]
  }
}'
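Once the PUT returns, you can read the config back through the Ambari API to confirm it took effect (a sketch; <AMBARI-SERVER> and <CLUSTER-NAME> are placeholders, and the optional tag parameter selects a specific stored version):

# List the stored capacity-scheduler config versions; add &tag=<TAG> to fetch the properties of one version
curl -u admin:admin -H "X-Requested-By:ambari" "http://<AMBARI-SERVER>:8080/api/v1/clusters/<CLUSTER-NAME>/configurations?type=capacity-scheduler"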