
LLAP not Running

Hi everybody ... it's me again, haha!

I need some help. I installed HiveServer2 Interactive (LLAP), but the service is not running after I rebooted the Hadoop servers. I don't understand why ... please help.

The error is below:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server_interactive.py", line 616, in <module>
    HiveServerInteractive().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server_interactive.py", line 121, in start
    status = self._llap_start(env)
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server_interactive.py", line 271, in _llap_start
    code, output, error = shell.checked_call(cmd, user=params.hive_user, quiet = True, stderr=subprocess.PIPE, logoutput=True)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/hdp/current/hive-server2-hive2/bin/hive --service llap --slider-am-container-mb 1024 --size 3072m --cache 2048m --xmx 819m --loglevel INFO  --output /var/lib/ambari-agent/tmp/llap-slider2017-09-21_22-43-26 --slider-placement 4 --skiphadoopversion --skiphbasecp --instances 2 --logger query-routing --args " -XX:+AlwaysPreTouch -Xss512k -XX:+UseG1GC -XX:TLABSize=8m -XX:+ResizeTLAB -XX:+UseNUMA -XX:+AggressiveOpts -XX:InitiatingHeapOccupancyPercent=40 -XX:G1ReservePercent=20 -XX:MaxGCPauseMillis=200 -XX:MetaspaceSize=1024m"' returned 3. SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.2.0-205/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.2.0-205/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
WARN conf.HiveConf: HiveConf hive.llap.daemon.vcpus.per.instance expects INT type value
WARN cli.LlapServiceDriver: Ignoring unknown llap server parameter: [hive.aux.jars.path]
Failed: Container size (3,00GB) should be greater than minimum allocation(3,93GB)
java.lang.IllegalArgumentException: Container size (3,00GB) should be greater than minimum allocation(3,93GB)
	at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
	at org.apache.hadoop.hive.llap.cli.LlapServiceDriver.run(LlapServiceDriver.java:309)
	at org.apache.hadoop.hive.llap.cli.LlapServiceDriver.main(LlapServiceDriver.java:113)

Re: LLAP not Running

Super Mentor

@Rui Ornellas Junior

Based on the error, it looks like a configuration issue.

Failed: Container size (3,00GB) should be greater than minimum allocation(3,93GB)
java.lang.IllegalArgumentException: Container size (3,00GB) should be greater than minimum allocation(3,93GB)


https://github.com/apache/hive/blob/rel/release-2.1.1/llap-server/src/java/org/apache/hadoop/hive/ll...

Based on the above code, it looks like the following configuration value is set incorrectly (the YARN minimum allocation is higher than the requested LLAP container size).

final long minAlloc = conf.getInt(YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_MB, -1);
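For context, the surrounding check in LlapServiceDriver.run() reads roughly like the sketch below (paraphrased, not copied verbatim; variable names and the exact message format differ slightly between Hive versions, so see the linked source for the real code):

// Paraphrased sketch of the validation in LlapServiceDriver.run().
if (options.getSize() != -1) {
  final long containerSize = options.getSize() / (1024 * 1024);
  final long minAlloc = conf.getInt(YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_MB, -1);
  // Throws IllegalArgumentException when the requested LLAP container size
  // is smaller than YARN's minimum container allocation, which is exactly the
  // "Container size ... should be greater than minimum allocation" error above.
  Preconditions.checkArgument(containerSize >= minAlloc,
      "Container size (%sm) should be greater than minimum allocation (%sm)",
      containerSize, minAlloc);
}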

Please check the "yarn.scheduler.minimum-allocation-mb" setting. Based on the error, its value should be lower than the "--size 3072m" parameter that we see in your command.

Alternatively, try increasing the "--size" parameter so that it is greater than "yarn.scheduler.minimum-allocation-mb".
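For reference, that setting is typically managed through Ambari's YARN configuration and ends up in yarn-site.xml; the value below is only an illustration, not a recommendation for your cluster:

<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>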
