Hello - I have HDP 2.5 and I'm trying to implement Spark security using LLAP (ref. link https://community.hortonworks.com/articles/72454/apache-spark-fine-grain-security-with-llap-test-dr....). I'm getting an error when starting HiveServer2 Interactive (LLAP). The error is shown below; any ideas on what needs to be done?
--------------------------------------------------
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
INFO cli.LlapServiceDriver: LLAP service driver invoked with arguments=-directory
INFO conf.HiveConf: Found configuration file file:/etc/hive2/18.104.22.168-1245/0/conf.server/hive-site.xml
WARN conf.HiveConf: HiveConf of name hive.llap.daemon.allow.permanent.fns does not exist
WARN conf.HiveConf: HiveConf hive.llap.daemon.vcpus.per.instance expects INT type value
Failed: Cache size (1.22GB) has to be smaller than the container sizing (1.22GB)
java.lang.IllegalArgumentException: Cache size (1.22GB) has to be smaller than the container sizing (1.22GB)
        at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
        at org.apache.hadoop.hive.llap.cli.LlapServiceDriver.run(LlapServiceDriver.java:212)
        at org.apache.hadoop.hive.llap.cli.LlapServiceDriver.main(LlapServiceDriver.java:104)
INFO cli.LlapServiceDriver: LLAP service driver finished
Command failed after 1 tries
How big is your cluster? From the error, it seems like your cluster is very small or misconfigured. Can you try the suggestion mentioned in the following post?
The cluster is a local sandbox, so it's a single-node cluster.
The value of hive.llap.daemon.yarn.container.mb = 750,
while the value of
llap_heap_size was set to 0. I changed it to 80% of hive.llap.daemon.yarn.container.mb (i.e. to 600) and restarted; however, the issue still remains. One interesting thing I saw was that in the Ambari UI, when I changed the % of Cluster Capacity to 80%, the error message remains the same except that the Cache Size & Container Size are shown as 750.0MB each. Please see below:
Error -> java.lang.IllegalArgumentException: Cache size (750.00MB) has to be smaller than the container sizing (750.00MB)
If the above suggestion doesn't help, can you post the command line by searching for "Command: /usr/hdp/current/hive-server2-hive2/bin/hive --service llap" in the background startup operation log for HSI?
Further, which queue is being used, and what percentage capacity has it been given? Is the queue named 'llap'?
Regarding searching for "Command: /usr/hdp/current/hive-server2-hive2/bin/hive --service llap" in /var/log/hive/*server2*, I could not find that text. I assume that is what you were asking for, or were you asking me to run this command on the command line?
This is what I see on running the above on the command line:
[root@sandbox ~]# /usr/hdp/current/hive-server2-hive2/bin/hive --service llap
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/22.214.171.124-1245/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/126.96.36.199-1245/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
INFO cli.LlapServiceDriver: LLAP service driver invoked with arguments=-directory
INFO conf.HiveConf: Found configuration file file:/etc/hive2/188.8.131.52-1245/0/conf.server/hive-site.xml
WARN conf.HiveConf: HiveConf of name hive.llap.daemon.allow.permanent.fns does not exist
WARN conf.HiveConf: HiveConf hive.llap.daemon.vcpus.per.instance expects INT type value
usage: llap
 -a,--args <args>                    java arguments to the llap instance
 -b,--slider-am-container-mb <b>     The size of the slider AppMaster container in MB
 -c,--cache <cache>                  cache size per instance
 -d,--directory <directory>          Temp directory for jars etc.
Try setting 'In-Memory Cache per Daemon' under the Hive configs section in the Ambari UI to a value around 20% of the 'Memory per Daemon' setting. From the error, it looks like both are set to the same value now.
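As a rough sketch of that arithmetic (the 750 MB container size is taken from this thread; the 20%/80% split is the suggestion above, not a fixed rule):

```shell
container_mb=750                          # hive.llap.daemon.yarn.container.mb
cache_mb=$(( container_mb * 20 / 100 ))   # 'In-Memory Cache per Daemon'
heap_mb=$(( container_mb * 80 / 100 ))    # llap_heap_size
echo "cache=${cache_mb}MB heap=${heap_mb}MB"
```

The key point is that the cache must be strictly smaller than the container, which is exactly the precondition the original "Cache size (750.00MB) has to be smaller than the container sizing (750.00MB)" error is failing.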
@asreekumar - Thanks, that seems to have fixed this particular issue. However, HiveServer2 Interactive is still not starting up; now I'm getting the error below.
Additional changes were made to try to fix the error below (per link https://community.hortonworks.com/questions/55387/cannot-start-hiveserver2-interactive-llap.html); I set the following manually in the Ambari UI. Any ideas on what needs to be done?
Additional changes in hive-site.xml (using Ambari) ->
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server_interactive.py", line 512, in check_llap_app_status
    status = do_retries()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/decorator.py", line 55, in wrapper
    return function(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server_interactive.py", line 505, in do_retries
    raise Fail(status_str)
Fail: LLAP app 'llap0' current state is COMPLETE.
2017-03-28 03:06:40,521 - LLAP app 'llap0' deployment unsuccessful.
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server_interactive.py", line 535, in <module>
    HiveServerInteractive().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 720, in restart
    self.start(env, upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server_interactive.py", line 123, in start
    raise Fail("Skipping START of Hive Server Interactive since LLAP app couldn't be STARTED.")
resource_management.core.exceptions.Fail: Skipping START of Hive Server Interactive since LLAP app couldn't be STARTED.
The command string "Command: /usr/hdp/current/hive-server2-hive2/bin/hive" needs to be searched in the same logs from which you posted the stack trace above ("Traceback (most recent call last):"). It should look something like this:
LLAP start command: /usr/hdp/current/hive-server2-hive2/bin/hive --service llap --slider-am-container-mb 1024 --size 10240m --cache 7168m --xmx 2457m --loglevel INFO --output /var/lib/ambari-agent/tmp/llap-slider2017-03-22_00-34-46 --slider-placement 0 --skiphadoopversion --skiphbasecp --instances 1 --logger query-routing --args " -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:TLABSize=8m -XX:+ResizeTLAB -XX:+UseNUMA -XX:+AggressiveOpts -XX:InitiatingHeapOccupancyPercent=40 -XX:G1ReservePercent=20 -XX:MaxGCPauseMillis=200 -XX:MetaspaceSize=1024m"
Alternatively, can you attach the whole log file where you got the stack trace from?
Further, what do the values for the following YARN configs say:
When the queue is at 80%, can you look at the YARN container logs?
1. First, check that "yarn.nodemanager.container-metrics.unregister-delay-ms" has a non-zero value (say, set it to 60000). This will help retain the YARN logs on the filesystem.
2. Restart HSI. If it fails, search for "application_" followed by a <numeric_value> in your logs.
3. Go to all the nodes and search for files with names matching *application_*<numeric value> from the above step (e.g. application_1490132162040_0005).
4. Go into the */yarn/log/*application_*<numeric value> folder, then into another directory with a name like "container_"<numeric_value ending in 002> (e.g. container_1490132162040_0005_01_000002). Look in this folder for the *.log and *.out files and attach them here. That may give an idea of why the containers for LLAP are failing.
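A minimal sketch of steps 2-4, using the example id above; the sample line stands in for your real HSI startup log, and the find path (/hadoop/yarn/log) is an assumption about the NodeManager log directory on your box:

```shell
# Step 2 (demo): extract the YARN application id from a log line.
# In practice, grep your actual HSI startup log instead of $sample.
sample='Submitting LLAP app as application_1490132162040_0005'
app_id=$(printf '%s\n' "$sample" | grep -oE 'application_[0-9]+_[0-9]+')
echo "$app_id"

# Steps 3/4 (hypothetical paths): locate the daemon container's log dir, e.g.
#   find /hadoop/yarn/log -type d -path "*${app_id}*" -name "*_000002"
# then attach the *.log and *.out files found there.
```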
Nonetheless, this looks to be a small cluster, and the memory constraint is the main reason HSI (LLAP) is not coming up.
- Attaching the complete logs (which contain the string you mentioned) and a screenshot of the YARN memory settings.
By the way, yes, this is a single-node cluster, hence the memory settings might be lower.
Please let me know if there is something that stands out and needs to be corrected.
Meanwhile, I'll check the other settings you mentioned.
Thanks for the info. Given that the memory (YARN NM memory) is too low and it's a 1-node cluster, right now I am just looking into making HSI (LLAP) come up; these may not be the best configs.
So, can you modify (increase) the following 2 YARN configs:
- Memory allocated for all YARN containers on a node = 2000
- Minimum Container Size (Memory) : 500
Save the configs.
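For reference, a sketch of how those two Ambari labels map to yarn-site.xml properties; the property names are my assumption of what backs those labels, and the values are the ones suggested above:

```xml
<!-- Sketch: yarn-site.xml equivalents of the two Ambari settings above -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2000</value> <!-- Memory allocated for all YARN containers on a node -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>500</value>  <!-- Minimum Container Size (Memory) -->
</property>
```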
Further, in the Hive HSI smart panel, increase the 'llap' queue capacity to 100% (it was 80% earlier).
Save the configs.
Restart HSI. See if that helps.