Member since: 06-27-2017 | Posts: 9 | Kudos Received: 0 | Solutions: 0
09-21-2017 10:47 PM
Hi everybody, it's me again. I need some help: I installed HiveServer2 Interactive (LLAP), but the service is not running after rebooting the Hadoop servers, and I don't understand why. Please help me. The error follows:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server_interactive.py", line 616, in <module>
HiveServerInteractive().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server_interactive.py", line 121, in start
status = self._llap_start(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server_interactive.py", line 271, in _llap_start
code, output, error = shell.checked_call(cmd, user=params.hive_user, quiet = True, stderr=subprocess.PIPE, logoutput=True)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/hdp/current/hive-server2-hive2/bin/hive --service llap --slider-am-container-mb 1024 --size 3072m --cache 2048m --xmx 819m --loglevel INFO --output /var/lib/ambari-agent/tmp/llap-slider2017-09-21_22-43-26 --slider-placement 4 --skiphadoopversion --skiphbasecp --instances 2 --logger query-routing --args " -XX:+AlwaysPreTouch -Xss512k -XX:+UseG1GC -XX:TLABSize=8m -XX:+ResizeTLAB -XX:+UseNUMA -XX:+AggressiveOpts -XX:InitiatingHeapOccupancyPercent=40 -XX:G1ReservePercent=20 -XX:MaxGCPauseMillis=200 -XX:MetaspaceSize=1024m"' returned 3. SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.2.0-205/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.2.0-205/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
WARN conf.HiveConf: HiveConf hive.llap.daemon.vcpus.per.instance expects INT type value
WARN cli.LlapServiceDriver: Ignoring unknown llap server parameter: [hive.aux.jars.path]
Failed: Container size (3,00GB) should be greater than minimum allocation(3,93GB)
java.lang.IllegalArgumentException: Container size (3,00GB) should be greater than minimum allocation(3,93GB)
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
at org.apache.hadoop.hive.llap.cli.LlapServiceDriver.run(LlapServiceDriver.java:309)
at org.apache.hadoop.hive.llap.cli.LlapServiceDriver.main(LlapServiceDriver.java:113)
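For context, the check that fails here compares the LLAP daemon container size (the 3,00GB in the error) against YARN's minimum container allocation (the 3,93GB), and the daemon size must be at least the YARN minimum. A minimal sketch of how to inspect the two values on an HDP 2.6 node follows; the config file paths are assumptions and may differ on your hosts.

# Hedged sketch: locate the two settings compared by the failing check.
# Config file locations assume a default HDP 2.6 layout.
grep -A1 "yarn.scheduler.minimum-allocation-mb" /etc/hadoop/conf/yarn-site.xml   # YARN minimum allocation
grep -A1 "hive.llap.daemon.yarn.container.mb" /etc/hive2/conf/hive-site.xml      # LLAP daemon container size
# To clear the check, either raise the LLAP daemon container size above the YARN
# minimum (Hive configs in Ambari) or lower yarn.scheduler.minimum-allocation-mb
# in YARN, then restart the affected services and retry the LLAP start.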
- Tags:
- Hadoop Core
- llap
09-12-2017 01:18 PM
@Geoffrey Shelton Okot I'm using Windows, and my domain functional level is Windows 2008 R2.
09-11-2017 08:09 PM
The error is:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 161, in <module>
DataNode().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
method(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 850, in restart
self.start(env, upgrade_type=upgrade_type)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 67, in start
datanode(action="start")
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_datanode.py", line 68, in datanode
create_log_dir=True
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 274, in service
Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh -H -E /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode' returned 1. starting datanode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-datanode-hadoop-server01.cetax.corp.out
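The Ambari traceback above only shows that hadoop-daemon.sh returned 1; the actual reason is written to the DataNode's own logs. A quick way to look, using the log directory named in the message (the .log file name alongside the .out file is an assumption based on the default naming):

# Check the DataNode's own output for the real error (permissions, ports,
# Kerberos/secure-DataNode settings, etc.):
tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-hadoop-server01.cetax.corp.out
tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-hadoop-server01.cetax.corp.log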
09-11-2017 04:24 PM
Good afternoon guys,
I am having problems with Kerberos again. I have already re-created my lab cluster and no longer know what to do.
The problem is the following: I was able to join Ambari to the domain, and all of the Kerberos wizard options ran correctly.
But when Ambari kerberizes the services, they stop working and do not come back up.
I rolled back the server settings, which returned the services to the state they were in before the change.
I need help getting Kerberos running correctly; I do not know what else to do. Could someone help me in detail? I'm still new to Hadoop, and I have HDP 2.6.2.0 and Ambari 2.5.1.0. Best regards, Rui Ornellas Junior
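One low-level check that often narrows this down is confirming that the keytabs Ambari generated are actually usable on each host before chasing the individual service failures. A minimal sketch, assuming Ambari's default keytab directory and a realm of CETAX.CORP (both are assumptions for this cluster):

# Hedged sketch: verify a generated service keytab works (DataNode keytab as an example).
klist -kt /etc/security/keytabs/dn.service.keytab                 # list principals stored in the keytab
kinit -kt /etc/security/keytabs/dn.service.keytab dn/$(hostname -f)@CETAX.CORP
klist                                                             # should now show a valid TGT for that principal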
09-04-2017 06:25 PM
Good afternoon guys,
I'm here again needing help from the community. I'm trying to enable Kerberos in my Hadoop environment with Ambari.
I've enabled SSL on my domain controllers so that connections are accepted over LDAPS.
I created the admin user in the domain with full administrator rights for the environment, and I enter that user's correct credentials, but when Ambari goes to create the principals it presents the following error:
2017-09-04 15:17:49,386 - Processing identities ...
2017-09-04 15:17:49,596 - Processing principal, hadoopcetax-090417@cetax.corp
2017-09-04 15:17:49,600 - Failed to create principal, hadoopcetax-090417@cetax.corp - Can not create principal: hadoopcetax-090417@cetax.corp
Can someone help me? I do not know what else to do. Best regards, Rui Ornellas Junior
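For a failure like this it can help to confirm, outside of Ambari, that the LDAPS endpoint answers and that the admin account can bind and read the container where principals should be created. A hedged sketch using ldapsearch; the domain controller host, bind account, and OU below are placeholders rather than values taken from this cluster:

# Hedged sketch: test the LDAPS bind that Ambari will use when creating principals.
# Replace dc01.cetax.corp, the bind account, and the OU with the real values.
ldapsearch -H ldaps://dc01.cetax.corp:636 \
  -D "hadoopadmin@cetax.corp" -W \
  -b "OU=Hadoop,DC=cetax,DC=corp" \
  "(objectClass=user)" cn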
07-11-2017 08:58 PM
OK, I accessed my Linux server (the one where Ambari runs) via SSH, ran sudo su - hdfs to switch to the hdfs user, went into the /etc/hadoop/conf/ directory, and edited core-site.xml there. Then, in the Ambari server UI, I selected all the hosts and restarted all the services. After that I went to the Files View screen, tried to upload a file, and the error continues. My question is: how should I change the core-site.xml file, and what are the next steps?
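For reference, on an Ambari-managed cluster the files under /etc/hadoop/conf are regenerated from Ambari's own configuration when services restart, so a hand edit to core-site.xml is typically overwritten; the change usually has to be made in the Ambari UI (HDFS > Configs) instead. A small sketch of the follow-up step, assuming the change is to the proxyuser settings discussed in this thread:

# After saving the proxyuser change in Ambari, either restart HDFS from the UI or
# push just the proxyuser settings to the NameNode without a full restart
# (run as the hdfs user):
hdfs dfsadmin -refreshSuperUserGroupsConfiguration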
07-11-2017 06:50 PM
After changing the file, which services do I need to restart? @lraheja
07-11-2017 06:48 PM
Where is that file located so I can change those parameters? I'm a noob on Hadoop.
07-11-2017 05:07 PM
Hi guys, I need help.
I try to upload a file but I receive the error "Unauthorized connection for super-user: root from IP 172.30.12.81". The IP address 172.30.12.81 belongs to my workstation, that is, I am not authorized to send files from other machines. I would like to know where I configure this so that everyone who connects to Hadoop can send files. Regards, Rui
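For context, that message comes from HDFS's proxy-user (impersonation) check: the host you connect from must be allowed for the super-user doing the impersonation, here root. A hedged sketch of the core-site.xml properties involved; the values shown are examples, not values taken from this cluster:

# Hedged sketch: check the proxyuser settings currently deployed for root.
grep -A1 "hadoop.proxyuser.root" /etc/hadoop/conf/core-site.xml
# Desired end state (set via Ambari, HDFS > Configs > Custom core-site, then restart HDFS):
#   hadoop.proxyuser.root.hosts  = *    # or a comma-separated list of allowed hosts
#   hadoop.proxyuser.root.groups = *    # or a comma-separated list of allowed groups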