Member since: 12-06-2016
Posts: 16
Kudos Received: 0
Solutions: 0
06-26-2019 04:06 AM
I installed R from source and copied it to /usr/bin, then went to Ambari and set up the Zeppelin configuration below:

export JAVA_HOME={{java64_home}}
export MASTER=yarn-client
export ZEPPELIN_LOG_DIR={{zeppelin_log_dir}}
export ZEPPELIN_PID_DIR={{zeppelin_pid_dir}}
export ZEPPELIN_INTP_CLASSPATH_OVERRIDES="{{external_dependency_conf}}"
export KINIT_FAIL_THRESHOLD=5
export KERBEROS_REFRESH_INTERVAL=1d
export HADOOP_CONF_DIR=/etc/hadoop/conf
export SPARK_HOME=/usr/hdp/3.1.0.0-78/spark2

Hope this helps, please let me know otherwise
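As a quick sanity check (a sketch only, assuming R really ended up on the zeppelin service account's PATH), you can confirm that the account Zeppelin runs as actually sees the new R binary:

sudo -u zeppelin which R
sudo -u zeppelin R --version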
06-26-2019 04:06 AM
Dear HDP community, I am trying to set up the R interpreter. R is installed and running on the host (meaning I can run an R script from the command line in an SSH session) and Spark is installed through the HDP/Ambari platform. This is my notebook:

%spark2.r
1 + 1

which fails as below:

org.apache.zeppelin.interpreter.InterpreterException: sparkr is not responding
R version 3.6.0 (2019-04-26) -- "Planting of a Tree"
Copyright (C) 2019 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)
R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.
Natural language support but running in an English locale
R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.
Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.
> #
> # Licensed to the Apache Software Foundation (ASF) under one
> # or more contributor license agreements. See the NOTICE file
> # distributed with this work for additional information
> # regarding copyright ownership. The ASF licenses this file
> # to you under the Apache License, Version 2.0 (the
> # "License"); you may not use this file except in compliance
> # with the License. You may obtain a copy of the License at
> #
> # http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> #
>
> args <- commandArgs(trailingOnly = TRUE)
>
> hashCode <- as.integer(args[1])
> port <- as.integer(args[2])
> libPath <- args[3]
> version <- as.integer(args[4])
> timeout <- as.integer(args[5])
> authSecret <- NULL
> if (length(args) >= 6) {
+ authSecret <- args[6]
+ }
>
> rm(args)
>
> print(paste("Port ", toString(port)))
[1] "Port 42291"
> print(paste("LibPath ", libPath))
[1] "LibPath /usr/hdp/3.1.0.0-78/spark2/R/lib"
>
> .libPaths(c(file.path(libPath), .libPaths()))
> library(SparkR)
Attaching package: ‘SparkR’
The following objects are masked from ‘package:stats’:
    cov, filter, lag, na.omit, predict, sd, var, window
The following objects are masked from ‘package:base’:
    as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
    rank, rbind, sample, startsWith, subset, summary, transform, union
>
> if (is.null(authSecret)) {
+   SparkR:::connectBackend("localhost", port, timeout)
+ } else {
+   SparkR:::connectBackend("localhost", port, timeout, authSecret)
+ }
at org.apache.zeppelin.spark.ZeppelinR.waitForRScriptInitialized(ZeppelinR.java:294)
at org.apache.zeppelin.spark.ZeppelinR.request(ZeppelinR.java:236)
at org.apache.zeppelin.spark.ZeppelinR.eval(ZeppelinR.java:185)
at org.apache.zeppelin.spark.ZeppelinR.open(ZeppelinR.java:174)
at org.apache.zeppelin.spark.SparkRInterpreter.open(SparkRInterpreter.java:106)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:617)
at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Any thoughts? Thank you very much.
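If it helps, this is roughly how I can exercise the same code path by hand outside Zeppelin (a sketch; the SPARK_HOME path is taken from the log above, and it should be run as the zeppelin user):

/usr/hdp/3.1.0.0-78/spark2/bin/sparkR --master yarn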
Labels:
- Apache Spark
- Apache Zeppelin
05-30-2018 07:27 AM
Hi Carlos, my guess is that you need to set values for activeDirectoryRealm.systemUsername and activeDirectoryRealm.systemPassword so that Zeppelin is able to use your AD. Thanks
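For reference, the relevant block in Advanced zeppelin-shiro-ini would look roughly like this (a sketch only; the URL, searchBase and bind account/password are placeholders for your environment):

[main]
activeDirectoryRealm = org.apache.zeppelin.realm.ActiveDirectoryGroupRealm
activeDirectoryRealm.systemUsername = CN=zeppelinbind,OU=ServiceAccounts,DC=example,DC=com
activeDirectoryRealm.systemPassword = change_me
activeDirectoryRealm.searchBase = DC=example,DC=com
activeDirectoryRealm.url = ldap://ad.example.com:389
securityManager.realms = $activeDirectoryRealm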
05-30-2018 07:16 AM
Dear HDP community, I have deployed Zeppelin using Ambari and also configured authentication against AD, which works fine. The only issue is that Zeppelin runs commands as the zeppelin server account instead of the account I authenticated with. For instance, I can log in to Zeppelin with my AD credentials (userMS), but running the whoami command in the shell interpreter prints zeppelin, which means the commands are not running as me... Is there a way I can set up Zeppelin to use my own credentials instead of the local zeppelin account? Thank you very much, Manuel
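To illustrate, this is the paragraph I am running and the output I see on my cluster:

%sh
whoami
# prints: zeppelin (I expected userMS)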
Labels:
- Apache Zeppelin
03-14-2018 04:45 AM
Hi, we have a new HDP installation and we deployed Zeppelin a couple of days ago. It was working fine until I edited Advanced zeppelin-shiro-ini --> shiro_ini_content, and after that Zeppelin stopped working. I even restored shiro_ini_content to the default, but the problem persists. The Ambari dashboard looks OK, but the service check fails with this output:

stderr: /var/lib/ambari-agent/data/errors-396.txt
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.7.0/package/scripts/service_check.py", line 40, in <module>
ZeppelinServiceCheck().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 367, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.7.0/package/scripts/service_check.py", line 37, in service_check
logoutput=True)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'curl -s -o /dev/null -w'%{http_code}' --negotiate -u: -k http://zeppelin-mlx.mlx:9995 | grep 200' returned 1.
stdout: /var/lib/ambari-agent/data/output-396.txt
2018-03-14 15:33:25,366 - call['ambari-python-wrap /usr/bin/hdp-select status spark-client'] {'timeout': 20}
2018-03-14 15:33:25,390 - call returned (0, 'spark-client - 2.6.3.0-235')
2018-03-14 15:33:25,393 - Using hadoop conf dir: /usr/hdp/2.6.3.0-235/hadoop/conf
2018-03-14 15:33:25,394 - Execute['curl -s -o /dev/null -w'%{http_code}' --negotiate -u: -k http://zeppelin-mlx.mlx:9995 | grep 200'] {'logoutput': True, 'tries': 10, 'try_sleep': 3}
2018-03-14 15:33:25,527 - Retrying after 3 seconds. Reason: Execution of 'curl -s -o /dev/null -w'%{http_code}' --negotiate -u: -k http://zeppelin-mlx.mlx:9995 | grep 200' returned 1.
2018-03-14 15:33:28,545 - Retrying after 3 seconds. Reason: Execution of 'curl -s -o /dev/null -w'%{http_code}' --negotiate -u: -k http://zeppelin-mlx.mlx:9995 | grep 200' returned 1.
2018-03-14 15:33:31,564 - Retrying after 3 seconds. Reason: Execution of 'curl -s -o /dev/null -w'%{http_code}' --negotiate -u: -k http://zeppelin-mlx.mlx:9995 | grep 200' returned 1.
2018-03-14 15:33:34,583 - Retrying after 3 seconds. Reason: Execution of 'curl -s -o /dev/null -w'%{http_code}' --negotiate -u: -k http://zeppelin-mlx.mlx:9995 | grep 200' returned 1.
2018-03-14 15:33:37,604 - Retrying after 3 seconds. Reason: Execution of 'curl -s -o /dev/null -w'%{http_code}' --negotiate -u: -k http://zeppelin-mlx.mlx:9995 | grep 200' returned 1.
2018-03-14 15:33:40,621 - Retrying after 3 seconds. Reason: Execution of 'curl -s -o /dev/null -w'%{http_code}' --negotiate -u: -k http://zeppelin-mlx.mlx:9995 | grep 200' returned 1.
2018-03-14 15:33:43,639 - Retrying after 3 seconds. Reason: Execution of 'curl -s -o /dev/null -w'%{http_code}' --negotiate -u: -k http://zeppelin-mlx.mlx:9995 | grep 200' returned 1.
2018-03-14 15:33:46,659 - Retrying after 3 seconds. Reason: Execution of 'curl -s -o /dev/null -w'%{http_code}' --negotiate -u: -k http://zeppelin-mlx.mlx:9995 | grep 200' returned 1.
2018-03-14 15:33:49,678 - Retrying after 3 seconds. Reason: Execution of 'curl -s -o /dev/null -w'%{http_code}' --negotiate -u: -k http://zeppelin-mlx.mlx:9995 | grep 200' returned 1.
Command failed after 1 tries

I can also see this in the /var/log/zeppelin/zeppelin-zeppelin-zeppelin.local.log file:

INFO [2018-03-14 16:11:46,038] ({main} ZeppelinConfiguration.java[create]:102) - Load configuration from file:/etc/zeppelin/2.6.3.0-235/0/zeppelin-site.xml
INFO [2018-03-14 16:11:46,139] ({main} ZeppelinConfiguration.java[create]:110) - Server Host: 0.0.0.0
INFO [2018-03-14 16:11:46,139] ({main} ZeppelinConfiguration.java[create]:112) - Server Port: 9995
INFO [2018-03-14 16:11:46,140] ({main} ZeppelinConfiguration.java[create]:116) - Context Path: /
INFO [2018-03-14 16:11:46,142] ({main} ZeppelinConfiguration.java[create]:117) - Zeppelin Version: 0.7.3
INFO [2018-03-14 16:11:46,170] ({main} Log.java[initialized]:186) - Logging initialized @748ms
INFO [2018-03-14 16:11:46,243] ({main} ZeppelinServer.java[setupWebAppContext]:370) - ZeppelinServer Webapp path: /usr/hdp/current/zeppelin-server/webapps
INFO [2018-03-14 16:11:46,524] ({main} AuthorizingRealm.java[getAuthorizationCacheLazy]:248) - No cache or cacheManager properties have been set. Authorization cache cannot be obtained.
INFO [2018-03-14 16:11:46,572] ({main} ZeppelinServer.java[main]:211) - Starting zeppelin server
INFO [2018-03-14 16:11:46,574] ({main} Server.java[doStart]:327) - jetty-9.2.15.v20160210
WARN [2018-03-14 16:11:46,954] ({main} WebAppContext.java[doStart]:514) - Failed startup of context o.e.j.w.WebAppContext@49b0b76{/,null,null}{/usr/hdp/current/zeppelin-server/lib/zeppelin-web-0.7.3.2.6.3.0-235.war}
java.lang.IllegalStateException: Failed to delete temp dir /usr/hdp/2.6.3.0-235/zeppelin/webapps
at org.eclipse.jetty.webapp.WebInfConfiguration.configureTempDirectory(WebInfConfiguration.java:372)
at org.eclipse.jetty.webapp.WebInfConfiguration.resolveTempDirectory(WebInfConfiguration.java:260)
at org.eclipse.jetty.webapp.WebInfConfiguration.preConfigure(WebInfConfiguration.java:69)
at org.eclipse.jetty.webapp.WebAppContext.preConfigure(WebAppContext.java:468)
at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:504)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:163)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
at org.eclipse.jetty.server.Server.start(Server.java:387)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
at org.eclipse.jetty.server.Server.doStart(Server.java:354)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.apache.zeppelin.server.ZeppelinServer.main(ZeppelinServer.java:213)
INFO [2018-03-14 16:11:46,978] ({main} AbstractConnector.java[doStart]:266) - Started ServerConnector@3daa422a{HTTP/1.1}{0.0.0.0:9995}
INFO [2018-03-14 16:11:46,978] ({main} Server.java[doStart]:379) - Started @1558ms
INFO [2018-03-14 16:11:46,978] ({main} ZeppelinServer.java[main]:221) - Done, zeppelin server started

Any thoughts? Thank you very much.
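The part of the log above that worries me is the "Failed to delete temp dir /usr/hdp/2.6.3.0-235/zeppelin/webapps" exception, since Jetty re-extracts the web app into that directory on startup. A possible check (a sketch; the path comes from the log, and Zeppelin should be stopped in Ambari before touching it):

ls -l /usr/hdp/2.6.3.0-235/zeppelin/webapps
# if it holds stale content the zeppelin user cannot delete, clearing it lets Jetty recreate it:
# rm -rf /usr/hdp/2.6.3.0-235/zeppelin/webapps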
Tags:
- Hadoop Core
- hdp-2.6.0
Labels:
- Hortonworks Data Platform (HDP)
07-19-2017 07:52 AM
Hi, I am trying to install the Spark 2 service on my cluster, but for some reason one of the nodes shows a warning. Please see the logs below:

stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/service_check.py", line 193, in <module>
HiveServiceCheck().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/service_check.py", line 99, in service_check
webhcat_service_check()
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/webhcat_service_check.py", line 125, in webhcat_service_check
logoutput=True)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 273, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 293, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/var/lib/ambari-agent/tmp/templetonSmoke.sh clusternode2.novalocal ambari-qa 50111 idtest.ambari-qa.1500449371.28.pig no_keytab false kinit no_principal /var/lib/ambari-agent/tmp' returned 1. Templeton Smoke Test (pig cmd): Failed. : {"error":"User: hcat is not allowed to impersonate ambari-qa"}http_code <500>
stdout:
2017-07-19 07:29:10,602 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-07-19 07:29:10,620 - call['ambari-python-wrap /usr/bin/hdp-select status hive-server2'] {'timeout': 20}
2017-07-19 07:29:10,641 - call returned (0, 'hive-server2 - 2.5.3.0-37')
2017-07-19 07:29:10,642 - Stack Feature Version Info: stack_version=2.5, version=None, current_cluster_version=2.5.3.0-37 -> 2.5
2017-07-19 07:29:10,658 - Running Hive Server checks
2017-07-19 07:29:10,658 - --------------------------
2017-07-19 07:29:10,659 - Server Address List : [u'clusternode2.novalocal'], Port : 10000, SSL KeyStore : None
2017-07-19 07:29:10,659 - Waiting for the Hive Server to start...
2017-07-19 07:29:10,659 - Execute['! beeline -u 'jdbc:hive2://clusternode2.novalocal:10000/;transportMode=binary' -e '' 2>&1| awk '{print}'|grep -i -e 'Connection refused' -e 'Invalid URL''] {'path': ['/bin/', '/usr/bin/', '/usr/lib/hive/bin/', '/usr/sbin/'], 'user': 'ambari-qa', 'timeout': 30}
2017-07-19 07:29:13,300 - Successfully connected to clusternode2.novalocal on port 10000
2017-07-19 07:29:13,301 - Successfully stayed connected to 'Hive Server' on host: clusternode3.novalocal and port 10000 after 2.64190602303 seconds
2017-07-19 07:29:13,301 - Running HCAT checks
2017-07-19 07:29:13,301 - -------------------
2017-07-19 07:29:13,302 - checked_call['hostid'] {}
2017-07-19 07:29:13,305 - checked_call returned (0, 'a8c07201')
2017-07-19 07:29:13,306 - File['/var/lib/ambari-agent/tmp/hcatSmoke.sh'] {'content': StaticFile('hcatSmoke.sh'), 'mode': 0755}
2017-07-19 07:29:13,307 - Writing File['/var/lib/ambari-agent/tmp/hcatSmoke.sh'] because it doesn't exist
2017-07-19 07:29:13,307 - Changing permission for /var/lib/ambari-agent/tmp/hcatSmoke.sh from 644 to 755
2017-07-19 07:29:13,307 - Execute['env JAVA_HOME=/usr/jdk64/jdk1.8.0_77 /var/lib/ambari-agent/tmp/hcatSmoke.sh hcatsmokeida8c07201_date291917 prepare true'] {'logoutput': True, 'path': ['/usr/sbin', '/usr/local/bin', '/bin', '/usr/bin', u'/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/var/lib/ambari-agent:/usr/hdp/current/hive-client/bin:/usr/hdp/current/hadoop-client/bin'], 'tries': 3, 'user': 'ambari-qa', 'try_sleep': 5}
OK
Time taken: 1.585 seconds
OK
Time taken: 1.261 seconds
OK
Time taken: 1.691 seconds
2017-07-19 07:29:25,460 - ExecuteHadoop['fs -test -e /apps/hive/warehouse/hcatsmokeida8c07201_date291917'] {'logoutput': True, 'bin_dir': '/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/var/lib/ambari-agent:/usr/hdp/current/hive-client/bin:/usr/hdp/current/hadoop-client/bin', 'user': 'hdfs', 'conf_dir': '/usr/hdp/current/hadoop-client/conf'}
2017-07-19 07:29:25,461 - Execute['hadoop --config /usr/hdp/current/hadoop-client/conf fs -test -e /apps/hive/warehouse/hcatsmokeida8c07201_date291917'] {'logoutput': True, 'try_sleep': 0, 'environment': {}, 'tries': 1, 'user': 'hdfs', 'path': [u'/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/var/lib/ambari-agent:/usr/hdp/current/hive-client/bin:/usr/hdp/current/hadoop-client/bin']}
2017-07-19 07:29:27,285 - Execute[' /var/lib/ambari-agent/tmp/hcatSmoke.sh hcatsmokeida8c07201_date291917 cleanup true'] {'logoutput': True, 'path': ['/usr/sbin', '/usr/local/bin', '/bin', '/usr/bin', u'/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/var/lib/ambari-agent:/usr/hdp/current/hive-client/bin:/usr/hdp/current/hadoop-client/bin'], 'tries': 3, 'user': 'ambari-qa', 'try_sleep': 5}
OK
Time taken: 1.397 seconds
2017-07-19 07:29:31,280 - Running WEBHCAT checks
2017-07-19 07:29:31,280 - ---------------------
2017-07-19 07:29:31,281 - File['/var/lib/ambari-agent/tmp/templetonSmoke.sh'] {'content': StaticFile('templetonSmoke.sh'), 'mode': 0755}
2017-07-19 07:29:31,282 - Writing File['/var/lib/ambari-agent/tmp/templetonSmoke.sh'] because it doesn't exist
2017-07-19 07:29:31,282 - Changing permission for /var/lib/ambari-agent/tmp/templetonSmoke.sh from 644 to 755
2017-07-19 07:29:31,288 - File['/var/lib/ambari-agent/tmp/idtest.ambari-qa.1500449371.28.pig'] {'owner': 'hdfs', 'content': Template('templeton_smoke.pig.j2')}
2017-07-19 07:29:31,289 - Writing File['/var/lib/ambari-agent/tmp/idtest.ambari-qa.1500449371.28.pig'] because it doesn't exist
2017-07-19 07:29:31,289 - Changing owner for /var/lib/ambari-agent/tmp/idtest.ambari-qa.1500449371.28.pig from 0 to hdfs
2017-07-19 07:29:31,290 - HdfsResource['/tmp/idtest.ambari-qa.1500449371.28.pig'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'source': '/var/lib/ambari-agent/tmp/idtest.ambari-qa.1500449371.28.pig', 'dfs_type': '', 'default_fs': 'hdfs://clusternode1.novalocal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'ambari-qa', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'file', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp']}
2017-07-19 07:29:31,293 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://clusternode1.novalocal:50070/webhdfs/v1/tmp/idtest.ambari-qa.1500449371.28.pig?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpDykg27 2>/tmp/tmpCNcotQ''] {'logoutput': None, 'quiet': False}
2017-07-19 07:29:31,346 - call returned (0, '')
2017-07-19 07:29:31,346 - Creating new file /tmp/idtest.ambari-qa.1500449371.28.pig in DFS
2017-07-19 07:29:31,347 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT --data-binary @/var/lib/ambari-agent/tmp/idtest.ambari-qa.1500449371.28.pig -H '"'"'Content-Type: application/octet-stream'"'"' '"'"'http://clusternode1.novalocal:50070/webhdfs/v1/tmp/idtest.ambari-qa.1500449371.28.pig?op=CREATE&user.name=hdfs&overwrite=True'"'"' 1>/tmp/tmpOoKCx9 2>/tmp/tmp0_Gjm3''] {'logoutput': None, 'quiet': False}
2017-07-19 07:29:31,421 - call returned (0, '')
2017-07-19 07:29:31,422 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://clusternode1.novalocal:50070/webhdfs/v1/tmp/idtest.ambari-qa.1500449371.28.pig?op=SETOWNER&user.name=hdfs&owner=ambari-qa&group='"'"' 1>/tmp/tmpqw5LXY 2>/tmp/tmpBAOQp8''] {'logoutput': None, 'quiet': False}
2017-07-19 07:29:31,470 - call returned (0, '')
2017-07-19 07:29:31,471 - HdfsResource['/tmp/idtest.ambari-qa.1500449371.28.in'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'source': '/etc/passwd', 'dfs_type': '', 'default_fs': 'hdfs://clusternode1.novalocal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'ambari-qa', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'file', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp']}
2017-07-19 07:29:31,472 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://clusternode1.novalocal:50070/webhdfs/v1/tmp/idtest.ambari-qa.1500449371.28.in?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpJDR6me 2>/tmp/tmpwPw2ST''] {'logoutput': None, 'quiet': False}
2017-07-19 07:29:31,515 - call returned (0, '')
2017-07-19 07:29:31,515 - Creating new file /tmp/idtest.ambari-qa.1500449371.28.in in DFS
2017-07-19 07:29:31,516 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT --data-binary @/etc/passwd -H '"'"'Content-Type: application/octet-stream'"'"' '"'"'http://clusternode1.novalocal:50070/webhdfs/v1/tmp/idtest.ambari-qa.1500449371.28.in?op=CREATE&user.name=hdfs&overwrite=True'"'"' 1>/tmp/tmp7twz7Y 2>/tmp/tmpvQt5Z3''] {'logoutput': None, 'quiet': False}
2017-07-19 07:29:31,586 - call returned (0, '')
2017-07-19 07:29:31,587 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://clusternode1.novalocal:50070/webhdfs/v1/tmp/idtest.ambari-qa.1500449371.28.in?op=SETOWNER&user.name=hdfs&owner=ambari-qa&group='"'"' 1>/tmp/tmpllAwXI 2>/tmp/tmphfG8Eg''] {'logoutput': None, 'quiet': False}
2017-07-19 07:29:31,639 - call returned (0, '')
2017-07-19 07:29:31,640 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://clusternode1.novalocal:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'immutable_paths': [u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp']}
2017-07-19 07:29:31,641 - Execute['/var/lib/ambari-agent/tmp/templetonSmoke.sh clusternode2.novalocal ambari-qa 50111 idtest.ambari-qa.1500449371.28.pig no_keytab false kinit no_principal /var/lib/ambari-agent/tmp'] {'logoutput': True, 'path': ['/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin'], 'tries': 3, 'try_sleep': 5}
Templeton Smoke Test (pig cmd): Failed. : {"error":"User: hcat is not allowed to impersonate ambari-qa"}http_code <500>
2017-07-19 07:29:36,974 - Retrying after 5 seconds. Reason: Execution of '/var/lib/ambari-agent/tmp/templetonSmoke.sh clusternode2.novalocal ambari-qa 50111 idtest.ambari-qa.1500449371.28.pig no_keytab false kinit no_principal /var/lib/ambari-agent/tmp' returned 1. Templeton Smoke Test (pig cmd): Failed. : {"error":"User: hcat is not allowed to impersonate ambari-qa"}http_code <500>
Templeton Smoke Test (pig cmd): Failed. : {"error":"User: hcat is not allowed to impersonate ambari-qa"}http_code <500>
2017-07-19 07:29:46,275 - Retrying after 5 seconds. Reason: Execution of '/var/lib/ambari-agent/tmp/templetonSmoke.sh clusternode2.novalocal ambari-qa 50111 idtest.ambari-qa.1500449371.28.pig no_keytab false kinit no_principal /var/lib/ambari-agent/tmp' returned 1. Templeton Smoke Test (pig cmd): Failed. : {"error":"User: hcat is not allowed to impersonate ambari-qa"}http_code <500>
Templeton Smoke Test (pig cmd): Failed. : {"error":"User: hcat is not allowed to impersonate ambari-qa"}http_code <500>
Command failed after 1 tries

Any idea would be quite appreciated. Thank you very much.
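For context, the failing step is the WebHCat (Templeton) smoke test, where the hcat user submits the Pig job on behalf of ambari-qa; that only works if the Hadoop proxyuser settings allow it. A sketch of the core-site.xml properties involved (the * values are placeholders and should be narrowed for production):

<property>
  <name>hadoop.proxyuser.hcat.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hcat.groups</name>
  <value>*</value>
</property>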
12-08-2016 07:20 AM
Thank you @khorvath. What about support for OpenStack Newton? Is there any plan to support it?
12-06-2016 06:07 AM
Hi, I am trying to use Cloudbreak to manage my OpenStack Newton environment, but I need a floating IP pool ID for that, and as far as I know OpenStack Neutron does not support floating IP pools. Question 1: is there a workaround for this? Question 2: I was wondering how you do it on Mitaka, since it also uses Neutron? Thank you
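For what it is worth, with Neutron the external networks are what usually stand in for a floating IP pool, so this is how I am listing the candidates (a sketch, assuming a configured OpenStack CLI; whether Cloudbreak accepts one of these network IDs as the pool ID is exactly my question):

openstack network list --external
# the ID of an external network is what I would expect to pass as the floating IP pool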
Labels:
- Apache Falcon
- Hortonworks Cloudbreak