Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1942 | 06-15-2020 05:23 AM |
| | 15727 | 01-30-2020 08:04 PM |
| | 2086 | 07-07-2019 09:06 PM |
| | 8151 | 01-27-2018 10:17 PM |
| | 4627 | 12-31-2017 10:12 PM |
01-06-2019 08:43 PM
Hi all,
We have an Ambari cluster, version 2.6.1, with HDP version 2.6.4, and Kafka installed on 3 physical machines. We have a strange problem: we updated the Kafka configuration, and after the update Kafka required a restart, so we restarted Kafka from the Ambari GUI. However, the restart does not actually start, or stalls at about 15% progress. In server.log (under /var/kafka/log) we see no errors and no progress. We also tried restarting the ambari-agent on all Kafka machines and restarting Kafka again, but this did not help and Kafka failed to restart. After 30 minutes the Kafka brokers are up, but they do not react to the restart action from Ambari. Any suggestion what could be the reason for this strange behavior?
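A minimal sketch of manual checks when an Ambari-driven restart stalls like this; the paths assume a standard HDP 2.6.x layout and may differ on your cluster:

```
# Is the broker process actually running on each Kafka host?
ps -ef | grep -i '[k]afka.Kafka'

# The PID file Ambari consults to decide whether the broker is up
# (a stale PID here can make Ambari misread the broker's state)
cat /var/run/kafka/kafka.pid

# Watch the agent log for the stuck restart command while retrying
tail -f /var/log/ambari-agent/ambari-agent.log

# Watch the broker log at the path mentioned above
tail -f /var/kafka/log/server.log
```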
01-05-2019 06:58 PM
@Jay, any update?
01-04-2019 06:35 AM
@Jay, just to mention that we want to limit the size of stdout or stderr. Is it possible? Let's say, for example, that we want to cap the size at 1G per file.
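If the containers in question are Spark executors, one documented approach is to route their logging through a rolling appender instead of letting a single stdout file grow without bound. A minimal log4j.properties sketch, assuming Spark on YARN; the 1GB cap, backup count, and file name spark.log are illustrative choices, not values from this thread:

```
log4j.rootCategory=INFO, rolling
log4j.appender.rolling=org.apache.log4j.RollingFileAppender
log4j.appender.rolling.layout=org.apache.log4j.PatternLayout
log4j.appender.rolling.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c: %m%n
# Roll the file at 1GB and keep at most 2 rotated copies per container
log4j.appender.rolling.MaxFileSize=1GB
log4j.appender.rolling.MaxBackupIndex=2
# spark.yarn.app.container.log.dir is resolved by Spark inside each container
log4j.appender.rolling.File=${spark.yarn.app.container.log.dir}/spark.log
```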
01-04-2019 06:34 AM
In my Ambari configuration:
yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb=1000M
01-04-2019 06:31 AM
@Jay, the value is:
yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=90
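For context, with this setting a NodeManager marks a local or log directory bad once its disk passes 90% utilization, and the node can turn unhealthy once enough directories are bad. A quick hedged check for disks over that line; the /grid/* mount points are illustrative, taken from the df example later in this thread:

```
# Print mount point and usage for any YARN data disk above 90% (header row skipped)
df -h /grid/* | awk 'NR > 1 && int($5) > 90 {print $6, $5}'
```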
01-03-2019 05:00 PM
Hi all,
We have seen a few scenarios where a disk on a datanode machine became 100% full because the stdout files are huge, for example:
/grid/sdb/hadoop/yarn/log/application_151746342014_5807/container_e37_151003535122014_5807_03_000001/stdout
From df -h we can see:
df -h /grid/sdb
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/sdb    1.8T  1.8T  0T     100%  /grid/sdb
Any suggestion how to avoid this situation where stdout grows huge? This issue actually stops the HDFS component on the datanode. Second, since the path of stdout is /var/log/hadoop-yarn/containers/[application id]/[container id]/stdout, is it possible to limit the file size, or to purge stdout when the file reaches a threshold?
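As far as I know there is no stdout size cap in the YARN NodeManager settings of this HDP line, so as a stopgap, a hedged cleanup sketch; the 1G threshold is the figure mentioned earlier in this thread, and the path glob matches the layout quoted above. Truncating frees the disk blocks even if a container is still writing, but test this carefully on a non-production node first:

```
# Find container stdout files over 1G across the /grid/* data disks
find /grid/*/hadoop/yarn/log/*/*/stdout -type f -size +1G -print

# After reviewing the list, empty them in place (the writer keeps its offset,
# so the file becomes sparse rather than breaking the running container)
find /grid/*/hadoop/yarn/log/*/*/stdout -type f -size +1G -exec truncate -s 0 {} \;
```

This could run from cron on each datanode until a proper per-application fix, such as the rolling appender sketched above, is in place.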
01-03-2019 03:06 PM
@Geoffrey Shelton Okot, regarding my last comment, do you have any suggestion how to find the problematic NodeManager?
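One way to narrow it down is the YARN CLI, which lists every node with its state and health report; a minimal sketch, assuming it runs on a cluster host as a user with YARN access:

```
# List all NodeManagers including unhealthy ones, with their node state
yarn node -list -all

# Show the full health report for a suspect node
# (<node-id> is a placeholder; take the real ID from the list above)
yarn node -status <node-id>
```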
01-03-2019 02:56 PM
You said "all your NodeManagers should be listed there and the reason for it being listed as unhealthy may be shown here", but I do not see anything about NodeManager health. Please see the following:
01-03-2019 01:42 PM
Hi all,
We have Ambari version 2.6.1 and HDP version 2.6.4. We set the following:
spark.history.fs.cleaner.enabled=true
spark.history.fs.cleaner.interval=1d
spark.history.fs.cleaner.maxAge=1d
But this configuration does not delete the old Spark history .inprogress files. Any suggestion how to force the cleanup?
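In the meantime, stale .inprogress files can be removed by hand from the event-log directory in HDFS. A hedged sketch, assuming the HDP default directory /spark2-history; check spark.history.fs.logDirectory for the actual path on your cluster, and make sure the applications are really finished before deleting:

```
# List leftover in-progress event logs
hdfs dfs -ls /spark2-history/ | grep '\.inprogress$'

# Remove a stale one after confirming its application has finished
# (the application ID below is a placeholder, not a real ID from this cluster)
hdfs dfs -rm /spark2-history/application_XXXXXXXXXXXXX_YYYY.inprogress
```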