Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1942 | 06-15-2020 05:23 AM |
| | 15737 | 01-30-2020 08:04 PM |
| | 2089 | 07-07-2019 09:06 PM |
| | 8152 | 01-27-2018 10:17 PM |
| | 4627 | 12-31-2017 10:12 PM |
12-27-2018
06:16 AM
hi all, on the ZooKeeper server, under /DT/var/hadoop/zookeeper/version-2 we have snapshot and log files:
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 17 07:26 log.3f00061872
-rw-r--r-- 1 zookeeper hadoop 432992226 Dec 17 08:09 snapshot.3f000619fa
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 17 08:56 log.4000000001
-rw-r--r-- 1 zookeeper hadoop 432988134 Dec 17 09:00 snapshot.4000002660
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 17 11:27 log.4100000001
-rw-r--r-- 1 zookeeper hadoop 432988098 Dec 17 11:43 snapshot.4100000061
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 17 11:59 log.4200000001
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 17 12:11 log.4300000001
-rw-r--r-- 1 zookeeper hadoop 432988134 Dec 17 12:14 snapshot.4200000063
-rw-r--r-- 1 zookeeper hadoop 433050091 Dec 17 12:14 snapshot.4300000917
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 17 12:32 log.4400000001
-rw-r--r-- 1 zookeeper hadoop 2 Dec 17 12:37 acceptedEpoch
-rw-r--r-- 1 zookeeper hadoop 432988134 Dec 17 12:37 snapshot.4400000ce2
-rw-r--r-- 1 zookeeper hadoop 2 Dec 17 12:37 currentEpoch
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 18 05:54 log.4500000001
-rw-r--r-- 1 zookeeper hadoop 433770041 Dec 18 05:54 snapshot.4500010e8b
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 18 10:01 log.4500010e8c
-rw-r--r-- 1 zookeeper hadoop 434486972 Dec 18 10:01 snapshot.4500023337
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 18 23:24 log.4500023339
-rw-r--r-- 1 zookeeper hadoop 436318948 Dec 18 23:24 snapshot.450003615b
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 19 10:15 log.450003615d
-rw-r--r-- 1 zookeeper hadoop 437209673 Dec 19 10:15 snapshot.4500046ef2
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 20 02:46 log.4500046ef4
-rw-r--r-- 1 zookeeper hadoop 435689635 Dec 20 02:46 snapshot.450005a896
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 20 16:00 log.450005a898
-rw-r--r-- 1 zookeeper hadoop 438302693 Dec 20 16:00 snapshot.450006a903
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 21 11:30 log.450006a905
-rw-r--r-- 1 zookeeper hadoop 442227749 Dec 21 11:30 snapshot.4500078530
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 22 16:15 log.4500078531
-rw-r--r-- 1 zookeeper hadoop 443032988 Dec 22 16:15 snapshot.450008a5aa
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 23 07:34 log.450008a5aa
-rw-r--r-- 1 zookeeper hadoop 446656243 Dec 23 07:34 snapshot.450009b084
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 23 19:15 log.450009b086
-rw-r--r-- 1 zookeeper hadoop 449196983 Dec 23 19:15 snapshot.45000b30ed
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 24 09:42 log.45000b30ef
-rw-r--r-- 1 zookeeper hadoop 452644508 Dec 24 09:42 snapshot.45000c0d0e
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 24 15:15 log.45000c0d10
-rw-r--r-- 1 zookeeper hadoop 454791668 Dec 24 15:15 snapshot.45000d2fbc
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 24 17:17 log.45000d2fbe
-rw-r--r-- 1 zookeeper hadoop 455241614 Dec 24 17:17 snapshot.45000e711f
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 24 18:45 log.45000e711f
-rw-r--r-- 1 zookeeper hadoop 455596316 Dec 24 18:45 snapshot.45000f4461
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 24 22:15 log.45000f4463
-rw-r--r-- 1 zookeeper hadoop 456271774 Dec 24 22:15 snapshot.4500107583
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 25 02:15 log.4500107584
-rw-r--r-- 1 zookeeper hadoop 457140537 Dec 25 02:15 snapshot.450011c6b9
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 25 10:11 log.450011c6bb
-rw-r--r-- 1 zookeeper hadoop 458675622 Dec 25 10:11 snapshot.45001292dd
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 25 20:15 log.45001292df
-rw-r--r-- 1 zookeeper hadoop 460369584 Dec 25 20:15 snapshot.45001385eb
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 25 21:45 log.45001385ed
-rw-r--r-- 1 zookeeper hadoop 460718118 Dec 25 21:45 snapshot.45001465d1
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 26 00:01 log.45001465d3
-rw-r--r-- 1 zookeeper hadoop 461211729 Dec 26 00:01 snapshot.450015b7fe
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 26 02:31 log.450015b800
-rw-r--r-- 1 zookeeper hadoop 461726381 Dec 26 02:31 snapshot.450017297e
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 26 04:45 log.4500172980
-rw-r--r-- 1 zookeeper hadoop 462214640 Dec 26 04:45 snapshot.4500186f8c
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 26 06:51 log.4500186f8e
-rw-r--r-- 1 zookeeper hadoop 462658519 Dec 26 06:51 snapshot.450019a758
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 26 12:26 log.450019a75a
-rw-r--r-- 1 zookeeper hadoop 463569100 Dec 26 12:26 snapshot.45001ade2a
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 26 16:15 log.45001ade29
-rw-r--r-- 1 zookeeper hadoop 464378104 Dec 26 16:15 snapshot.45001c37df
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 26 17:45 log.45001c37e0
-rw-r--r-- 1 zookeeper hadoop 464741867 Dec 26 17:45 snapshot.45001d5eed
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 26 19:15 log.45001d5eef
-rw-r--r-- 1 zookeeper hadoop 465085452 Dec 26 19:15 snapshot.45001e87de
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 26 20:17 log.45001e87e0
-rw-r--r-- 1 zookeeper hadoop 465304233 Dec 26 20:17 snapshot.45001f7244
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 27 05:52 log.45001f7246
-rw-r--r-- 1 zookeeper hadoop 466039484 Dec 27 05:52 snapshot.450020eeef
-rw-r--r-- 1 zookeeper hadoop 67108880 Dec 27 06:09 log.450020eef1
The size is now 23G, and this causes the /DT folder to become almost full. Is it possible to limit the snapshots, or to purge them?
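For reference, ZooKeeper has built-in purge settings plus a cleanup script; the sketch below only shows how they are typically used. The autopurge properties are standard ZooKeeper settings, but the script path is an assumption for HDP, and on an Ambari-managed cluster zoo.cfg should be changed through Ambari rather than on disk:

```
# Sketch only: limit retained snapshots/txn logs via zoo.cfg (ZooKeeper 3.4+):
#   autopurge.snapRetainCount=3    <- keep only the 3 newest snapshots (and their logs)
#   autopurge.purgeInterval=24     <- run the purge task every 24 hours (0 = disabled)

# One-off manual purge with the helper script shipped with ZooKeeper
# (path is an assumption for HDP; -n is the number of snapshots to retain, minimum 3):
/usr/hdp/current/zookeeper-server/bin/zkCleanup.sh -n 3
```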
12-25-2018
05:10 PM
we restarted the datanodes on our cluster (HDP 2.6.4), but the datanode failed with:
Error occurred during initialization of VM
Too small initial heap
This is strange because dtnode_heapsize is 8G (DataNode maximum Java heap size = 8G, see capture.png), so we do not understand how this can happen. Is the initial heap size related to dtnode_heapsize?
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 197804180k(12923340k free), swap 16777212k(16613164k free)
CommandLine flags: -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:GCLogFileSize=1024000 -XX:InitialHeapSize=8192 -XX:MaxHeapSize=8192 -XX:MaxNewSize=209715200 -XX:MaxTenuringThreshold=6 -XX:NewSize=209715200 -XX:NumberOfGCLogFiles=5 -XX:OldPLABSize=16 -XX:ParallelGCThreads=4 -XX:+PrintAdaptiveSizePolicy -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseGCLogFileRotation -XX:+UseParNewGC
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-worker01.sys242.com.out <==
Error occurred during initialization of VM
Too small initial heap
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 772550
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
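One detail worth noting in the flags above: -XX:InitialHeapSize and -XX:MaxHeapSize treat a bare number as bytes, so InitialHeapSize=8192 means 8 KB rather than 8 GB, which matches the message in the .out file. An illustrative check only, not the DataNode start command itself:

```
# A bare number is read as bytes, which should reproduce the error from the .out file:
java -XX:InitialHeapSize=8192 -XX:MaxHeapSize=8192 -version
#   Error occurred during initialization of VM
#   Too small initial heap

# With an explicit unit (m = megabytes) the JVM starts, assuming the host has the memory:
java -XX:InitialHeapSize=8192m -XX:MaxHeapSize=8192m -version
```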
12-25-2018
11:09 AM
hi all, we noticed the following problem: on a DATANODE machine, the folder /var/hadoop/yarn/local/usercache has reached 140G, despite the following configuration that should prevent this:
yarn.nodemanager.localizer.cache.target-size-mb = 10240
yarn.nodemanager.localizer.cache.cleanup.interval-ms = 300000
From my understanding this configuration should delete the folders/files once the cache size exceeds 10G, so how can the current size be 140G?
reference: https://community.hortonworks.com/questions/201820/yarn-usercache-folder-became-with-huge-size.html
yarn.nodemanager.localizer.cache.target-size-mb: This decides the maximum disk space to be used for localizing resources. (At present there is no individual limit for the PRIVATE / APPLICATION / PUBLIC cache, see YARN-882.) Once the total disk size of the cache exceeds this, the deletion service will try to remove files which are not used by any running containers. There is currently no per-cache quota for the user / public / private cache; the limit applies to all disks as a total, not per disk.

yarn.nodemanager.localizer.cache.cleanup.interval-ms: After this interval the resource localization service will try to delete unused resources if the total cache size exceeds the configured max-size. Unused resources are those which are not referenced by any running container. Every time a container requests a resource, the container is added to the resource's reference list and stays there until it finishes, which prevents accidental deletion of the resource. As part of container resource cleanup (when the container finishes), the container is removed from the resource's reference list, so when the reference count drops to zero the resource becomes an ideal candidate for deletion. Resources are deleted on an LRU basis until the current cache size drops below the target size.
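To see where the space actually goes, a hedged shell sketch (the directory layout is assumed to be the usual usercache/&lt;user&gt;/{appcache,filecache} split; appcache holds working data of running or recent containers and is not governed by the localizer cache target size, which only covers localized resources in filecache):

```
# Illustrative only: break down the 140G by user and by cache type.
du -sh /var/hadoop/yarn/local/usercache/*                      # per user
du -sh /var/hadoop/yarn/local/usercache/*/appcache  2>/dev/null
du -sh /var/hadoop/yarn/local/usercache/*/filecache 2>/dev/null
```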
12-21-2018
11:55 AM
yes, they are: MOON_SERVICE, JWE_SER, GFT_SER
12-21-2018
09:11 AM
Meanwhile I can't ignore it, because the Ambari server does not start, so I need to fix it. Since the article is not similar to my warning, I just want to know how to do what the article explains, but with the syntax changed according to the warning I get.
12-21-2018
08:05 AM
I also have this warning - maybe this is more relevant:
WARN - You have config(s): MOON_SERVICE-version1530186236790,MOON_SERVICE-version1530714719817475,MOON_SERVICE-version1530714831573,MOON_SERVICE-version1530174193259482,JWE_SER-version1545298738838,GFT_SER-version1529923094645387,MOON_SERVICE-version1529922706468805,MOON_SERVICE-version1530108940699945,MOON_SERVICE-version1531392078026,GFT_SER-version1545298738838 that is(are) not mapped (in serviceconfigmapping table) to any service!
while the article talks about this warning:
WARN - You have config(s): webhcat-site-version1540591929056,webhcat-log4j-version1540591928580,hive-exec-log4j-version1540591929168,webhcat-env-version1540591929289,hive-log4j-version1540591929465,hcat-env-version1540591928892 that is(are) not mapped (in serviceconfigmapping table) to any service!
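For context, a sketch of how those rows could be inspected in the Ambari database before deciding what to clean up. The table and column names (clusterconfig, serviceconfigmapping, config_id, type_name, version_tag) are assumptions based on the warning text and typical Ambari schemas, and the psql user/database are the embedded-Postgres defaults; verify them against your own setup and back up the database first:

```
# Sketch only -- read-only inspection, run against a backed-up Ambari DB (Postgres assumed).
# Config rows of the named types that have no entry in serviceconfigmapping:
psql -U ambari -d ambari -c "
  SELECT config_id, type_name, version_tag
  FROM   clusterconfig
  WHERE  type_name IN ('MOON_SERVICE', 'JWE_SER', 'GFT_SER')
  AND    config_id NOT IN (SELECT config_id FROM serviceconfigmapping);"
```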
12-21-2018
07:50 AM
About the warning: what should be the relevant SQL command? LOT_SER, GOP_SER, BON_SER <-- they are services
2018-12-20 20:32:50,045 WARN - You have config(s): [LOT_SER, GOP_SER] that is(are) not mapped (in clusterconfigmapping table) to any cluster!
2018-12-20 20:32:50,061 WARN - Service BON_SER is not available for stack HDP-2.6 in cluster HDP
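A hedged sketch of a SELECT that could show the rows behind the clusterconfigmapping warning (again, the clusterconfig/clusterconfigmapping table and column names are assumptions to verify against your own schema, and a backup should be taken before any change):

```
# Sketch only -- configs of these types that are not mapped to any cluster:
psql -U ambari -d ambari -c "
  SELECT type_name, version_tag
  FROM   clusterconfig
  WHERE  type_name IN ('LOT_SER', 'GOP_SER')
  AND    type_name NOT IN (SELECT type_name FROM clusterconfigmapping);"
```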
12-21-2018
07:28 AM
In case we have the following warning, what should the SELECT commands be?
2018-12-20 20:32:50,045 WARN - You have config(s): [LOT_SER, GOP_SER] that is(are) not mapped (in clusterconfigmapping table) to any cluster!
2018-12-20 20:32:50,061 WARN - Service BON_SER is not available for stack HDP-2.6 in cluster HDP
12-21-2018
06:46 AM
Regarding the article - what are the login and password to use before running the SELECT commands, and how do I get the SQL shell?
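Not from the article, but as a hedged pointer: the Ambari server keeps its database settings in /etc/ambari-server/conf/ambari.properties, so something like the sketch below usually finds the credentials and opens a shell (property names and defaults can differ per Ambari version and database type):

```
# Sketch only: find out which database, user and password the Ambari server uses.
grep '^server.jdbc' /etc/ambari-server/conf/ambari.properties
# server.jdbc.user.passwd normally points to a file that holds the password,
# e.g. /etc/ambari-server/conf/password.dat for the embedded Postgres setup.

# Default embedded Postgres (database "ambari", user "ambari"), run on the Ambari host:
psql -U ambari -d ambari    # enter the password from the file referenced above
```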
12-20-2018
10:33 PM
Please accept my answer if it fits for you.