Member since: 08-04-2017
Posts: 2
Kudos Received: 0
Solutions: 0
11-19-2018
11:30 AM
Hi, I'm getting an error while trying to deploy the HDP 2.6.5 sandbox using Docker on CentOS. The exact error message is:
failed to register layer: ApplyLayer exit status 1 stdout: stderr: write /kafka-logs/__consumer_offsets-33/00000000000000000000.index: no space left on device
I have been monitoring the mount point for space and noticed that it had more than 20 GB free when we got this error. The detailed output is as follows:
/*******************************************************************************************************************************/
[admin@BLRTESTAPP027 hdp_docker]$ sudo sh docker-deploy-hdp265.sh
[sudo] password for admin:
+ registry=hortonworks
+ name=sandbox-hdp
+ version=2.6.5
+ proxyName=sandbox-proxy
+ proxyVersion=1.0
+ flavor=hdp
+ echo hdp
+ mkdir -p sandbox/proxy/conf.d
+ mkdir -p sandbox/proxy/conf.stream.d
+ docker pull hortonworks/sandbox-hdp:2.6.5
2.6.5: Pulling from hortonworks/sandbox-hdp
9770d73ca513: Already exists
cbba75ae30cd: Pull complete
283e5725c5f6: Pull complete
1426e9ece03d: Pull complete
4b00051fa827: Pull complete
d09cdd825ed6: Pull complete
dcbfe1670fa6: Pull complete
fd78a46757f7: Pull complete
5bad1882139b: Pull complete
d29a62d4eb22: Pull complete
4fb93bf04f14: Pull complete
8827f466ab83: Pull complete
a0fc39e77949: Pull complete
595eabd2c628: Pull complete
2a7fd016935e: Pull complete
87526fe8ce7c: Pull complete
d91a044a9aaf: Pull complete
bbffcb08266c: Pull complete
65c812fb262a: Pull complete
132f30914412: Pull complete
0f3e10681220: Pull complete
505f5a3365a7: Pull complete
abaff3c0f761: Pull complete
7d75f267b911: Pull complete
18099674493a: Pull complete
26310ba15287: Pull complete
635c5bfe7fc8: Pull complete
2f80a5abf101: Extracting [==================================================>] 7.041GB/7.041GB
failed to register layer: ApplyLayer exit status 1 stdout: stderr: write /kafka-logs/__consumer_offsets-33/00000000000000000000.index: no space left on device
+ docker pull hortonworks/sandbox-proxy:1.0
1.0: Pulling from hortonworks/sandbox-proxy
Digest: sha256:42e4cfbcbb76af07e5d8f47a183a0d4105e65a1e7ef39fe37ab746e8b2523e9e
Status: Image is up to date for hortonworks/sandbox-proxy:1.0
+ '[' hdp == hdf ']'
+ '[' hdp == hdp ']'
+ hostname=sandbox-hdp.hortonworks.com
++ docker images
++ awk '{print $2}'
++ grep hortonworks/sandbox-hdp
+ version=
+ docker network create cda
+ docker run --privileged --name sandbox-hdp -h sandbox-hdp.hortonworks.com --network=cda --network-alias=sandbox-hdp.hortonworks.com -d hortonworks/sandbox-hdp:
docker: invalid reference format.
See 'docker run --help'.
+ echo ' Remove existing postgres run files. Please wait'
Remove existing postgres run files. Please wait
+ sleep 2
+ docker exec -t sandbox-hdp sh -c 'rm -rf /var/run/postgresql/*; systemctl restart postgresql;'
Error: No such container: sandbox-hdp
+ sed s/sandbox-hdp-security/sandbox-hdp/g assets/generate-proxy-deploy-script.sh
+ mv -f assets/generate-proxy-deploy-script.sh.new assets/generate-proxy-deploy-script.sh
+ chmod +x assets/generate-proxy-deploy-script.sh
+ assets/generate-proxy-deploy-script.sh
+ uname
+ grep MINGW
+ chmod +x sandbox/proxy/proxy-deploy.sh
+ sandbox/proxy/proxy-deploy.sh
sandbox-proxy
/*******************************************************************************************************************************/
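Note that the failed layer is unpacked under Docker's storage root (typically /var/lib/docker on CentOS), which is not necessarily the mount point being monitored; checking the filesystem that actually backs the Docker root directory usually explains a "no space left on device" even when 20 GB appears free elsewhere. A minimal diagnostic sketch, assuming a reasonably recent Docker install; the prune step is an optional suggestion, not part of the deploy script:

# Locate the directory where Docker actually stores image layers.
# (On older Docker releases without `docker info --format`, use
#  `docker info | grep 'Docker Root Dir'` instead.)
DOCKER_ROOT=$(docker info --format '{{.DockerRootDir}}')
echo "Docker root dir: ${DOCKER_ROOT}"

# Free space on the filesystem backing that directory - this is where the
# ~7 GB layer shown above is extracted, and it is often a different mount
# than the one being watched.
df -h "${DOCKER_ROOT}"

# Space already consumed by images, containers and volumes.
docker system df

# Optionally reclaim space left behind by the failed pull before retrying:
# docker system prune

If that filesystem is the one filling up, moving the Docker root to a larger volume or pruning unused images before rerunning docker-deploy-hdp265.sh should let the 2.6.5 image extract fully; the later "invalid reference format" and "No such container" errors in the script output are just knock-on effects of the failed pull leaving the version variable empty.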
08-09-2017
09:47 AM
We have set up a 3-node cluster using HDP 2.6. While trying to restart the Hive service after enabling LLAP, we are getting the following error:
200518568_0007/container_e22_1502200518568_0007_01_000002/tmp/ -Dlog4j.configurationFile=llap-daemon-log4j2.properties -Dllap.daemon.log.dir=/u01/hadoop/yarn/log/application_1502200518568_0007/container_e22_1502200518568_0007_01_000002/ -Dllap.daemon.log.file=llap-daemon-hive-hdmaster.icreate.bi.log'
+ LLAP_DAEMON_OPTS=' -Dhttp.maxConnections=3 -XX:+AlwaysPreTouch -Xss512k -XX:+UseG1GC -XX:TLABSize=8m -XX:+ResizeTLAB -XX:+UseNUMA -XX:+AggressiveOpts -XX:InitiatingHeapOccupancyPercent=40 -XX:G1ReservePercent=20 -XX:MaxGCPauseMillis=200 -XX:MetaspaceSize=1024m -server -Djava.net.preferIPv4Stack=true -XX:+UseNUMA -XX:+PrintGCDetails -verbose:gc -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=4 -XX:GCLogFileSize=100M -XX:+PrintGCDateStamps -Xloggc:/u01/hadoop/yarn/log/application_1502200518568_0007/container_e22_1502200518568_0007_01_000002//gc.log -Djava.io.tmpdir=/hadoop/yarn/local/usercache/hive/appcache/application_1502200518568_0007/container_e22_1502200518568_0007_01_000002/tmp/ -Dlog4j.configurationFile=llap-daemon-log4j2.properties -Dllap.daemon.log.dir=/u01/hadoop/yarn/log/application_1502200518568_0007/container_e22_1502200518568_0007_01_000002/ -Dllap.daemon.log.file=llap-daemon-hive-hdmaster.icreate.bi.log -Dllap.daemon.root.logger=query-routing'
+ LLAP_DAEMON_OPTS=' -Dhttp.maxConnections=3 -XX:+AlwaysPreTouch -Xss512k -XX:+UseG1GC -XX:TLABSize=8m -XX:+ResizeTLAB -XX:+UseNUMA -XX:+AggressiveOpts -XX:InitiatingHeapOccupancyPercent=40 -XX:G1ReservePercent=20 -XX:MaxGCPauseMillis=200 -XX:MetaspaceSize=1024m -server -Djava.net.preferIPv4Stack=true -XX:+UseNUMA -XX:+PrintGCDetails -verbose:gc -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=4 -XX:GCLogFileSize=100M -XX:+PrintGCDateStamps -Xloggc:/u01/hadoop/yarn/log/application_1502200518568_0007/container_e22_1502200518568_0007_01_000002//gc.log -Djava.io.tmpdir=/hadoop/yarn/local/usercache/hive/appcache/application_1502200518568_0007/container_e22_1502200518568_0007_01_000002/tmp/ -Dlog4j.configurationFile=llap-daemon-log4j2.properties -Dllap.daemon.log.dir=/u01/hadoop/yarn/log/application_1502200518568_0007/container_e22_1502200518568_0007_01_000002/ -Dllap.daemon.log.file=llap-daemon-hive-hdmaster.icreate.bi.log -Dllap.daemon.root.logger=query-routing -Dllap.daemon.log.level=INFO'
+ exec /usr/jdk64/jdk1.8.0_112/bin/java -Dproc_llapdaemon -Xms0m -Xmx0m -Dhttp.maxConnections=3 -XX:+AlwaysPreTouch -Xss512k -XX:+UseG1GC -XX:TLABSize=8m -XX:+ResizeTLAB -XX:+UseNUMA -XX:+AggressiveOpts -XX:InitiatingHeapOccupancyPercent=40 -XX:G1ReservePercent=20 -XX:MaxGCPauseMillis=200 -XX:MetaspaceSize=1024m -server -Djava.net.preferIPv4Stack=true -XX:+UseNUMA -XX:+PrintGCDetails -verbose:gc -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=4 -XX:GCLogFileSize=100M -XX:+PrintGCDateStamps -Xloggc:/u01/hadoop/yarn/log/application_1502200518568_0007/container_e22_1502200518568_0007_01_000002//gc.log -Djava.io.tmpdir=/hadoop/yarn/local/usercache/hive/appcache/application_1502200518568_0007/container_e22_1502200518568_0007_01_000002/tmp/ -Dlog4j.configurationFile=llap-daemon-log4j2.properties -Dllap.daemon.log.dir=/u01/hadoop/yarn/log/application_1502200518568_0007/container_e22_1502200518568_0007_01_000002/ -Dllap.daemon.log.file=llap-daemon-hive-hdmaster.icreate.bi.log -Dllap.daemon.root.logger=query-routing -Dllap.daemon.log.level=INFO -classpath '/hadoop/yarn/local/usercache/hive/appcache/application_1502200518568_0007/container_e22_1502200518568_0007_01_000002/app/install//conf/:/hadoop/yarn/local/usercache/hive/appcache/application_1502200518568_0007/container_e22_1502200518568_0007_01_000002/app/install//lib/*:/hadoop/yarn/local/usercache/hive/appcache/application_1502200518568_0007/container_e22_1502200518568_0007_01_000002/app/install//lib/tez/*:/hadoop/yarn/local/usercache/hive/appcache/application_1502200518568_0007/container_e22_1502200518568_0007_01_000002/app/install//lib/udfs/*:.:' org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon
Invalid maximum heap size: -Xmx0m
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
The node configuration is as follows:
#Cores - 2
Memory - 16GB
yarn.scheduler.minimum-allocation-mb = 4608MB
hive.tez.container.size = 9216MB
tez.container.max.java.heap.fraction = 0.8
We have even tried setting Xmx and Xms manually in hive.tez.java.opts to -Xmx7372m -Xms3686m, but the Xmx value still comes through as 0. Can you please advise which property we might not have set up properly?
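For context on the -Xmx0m: the LLAP daemon's heap is not taken from hive.tez.java.opts; it is derived from the LLAP daemon memory settings (in Ambari under Hive > Configs > Interactive Query), so a daemon heap/memory value of 0 produces exactly the -Xms0m -Xmx0m seen in the launch command above. A rough sizing sketch, assuming the usual HDP 2.6 property names and an illustrative 80% heap fraction (both are assumptions to verify against your stack version):

# Illustrative arithmetic only: the LLAP daemon heap (-Xmx) comes from the
# LLAP daemon memory settings, not from hive.tez.java.opts, so -Xmx0m
# usually means those properties are still 0. Property names below are
# assumptions based on HDP 2.6 defaults; confirm them in Ambari.

LLAP_DAEMON_CONTAINER_MB=9216   # hive.llap.daemon.yarn.container.mb (assumed value)
HEAP_FRACTION_PCT=80            # share of the container given to the Java heap (assumed)

# Heap the daemon would be launched with; the remainder of the container
# is left for the off-heap cache and JVM overhead.
LLAP_HEAP_MB=$(( LLAP_DAEMON_CONTAINER_MB * HEAP_FRACTION_PCT / 100 ))
echo "Candidate hive.llap.daemon.memory.per.instance.mb: ${LLAP_HEAP_MB}"

# A value of 0 here reproduces the failure above:
# 'Invalid maximum heap size: -Xmx0m'
if [ "${LLAP_HEAP_MB}" -eq 0 ]; then
  echo "LLAP daemon heap is 0 - the JVM will not start." >&2
fi

With only 16 GB per node, also double-check that the daemon container size fits inside what YARN can actually allocate, otherwise Ambari may fall back to zeroed-out LLAP memory values.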
Labels:
Apache Tez