Member since: 08-27-2017
Posts: 43
Kudos Received: 1
Solutions: 0
02-02-2018
01:19 AM
Hi @Jay Kumar SenSharma, I already stopped the firewall on each slot, but the UI is still not displayed. I ran "netstat -tnlpa | grep 50070"; below is the result: [root@slot2 ~]# netstat -tnlpa | grep 50070
tcp 0 0 10.10.10.20:50070 0.0.0.0:* LISTEN 58845/java
tcp 0 0 10.10.10.20:50070 10.10.10.20:48374 TIME_WAIT -
tcp 0 0 10.10.10.20:50070 10.10.10.20:48366 TIME_WAIT -
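The netstat output shows the NameNode UI listening on 10.10.10.20:50070, so a quick cross-check is whether that port is reachable from the machine running the browser. A minimal sketch (the host and port are placeholders taken from the output above, swap in your own):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholders from this thread): port_open("10.10.10.20", 50070)
```

If this returns False from the browser's machine but True locally on slot2, the problem is network-level (routing, firewall, or the service binding to a non-public interface) rather than HDFS itself.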
01-30-2018
08:59 AM
Hi Jay, the following screenshots show the Ambari UI, the hosts, and one of the Ambari service UIs (the HDFS UI, for example).
01-30-2018
08:06 AM
Hi, I am having trouble with the Ambari service UIs: none of them display, yet all the services show green in Ambari, which suggests there are no errors at all. Please help me out. Thanks in advance.
Labels: Apache Ambari
12-07-2017
01:25 AM
Alright. Thanks for the links, Aditya. 🙂
12-05-2017
05:40 AM
Hi @Aditya Sirna, I have a problem: when I execute a spark2 script, I get an error stating 'prefix not found'. I tried to add a spark2 interpreter from the 'Create Interpreter' settings, but spark2 is not listed there. However, when I look at the Interpreter tab in the Zeppelin UI, a Spark2 menu already appears, since I deployed the Spark2 service from Ambari. Hope you can help me out. Thanks.
Labels: Apache Zeppelin
12-05-2017
05:13 AM
Hi Aditya, I have a problem: the Zeppelin UI can't find my spark2 interpreter. The error states 'Prefix not found'.
12-04-2017
07:21 AM
It works like a charm. Thanks a lot Aditya 🙂
12-04-2017
05:26 AM
The following is the result of hdp-select: [root@host ~]# hdp-select
accumulo-client - None
accumulo-gc - None
accumulo-master - None
accumulo-monitor - None
accumulo-tablet - None
accumulo-tracer - None
atlas-client - 2.5.3.0-37
atlas-server - 2.5.3.0-37
falcon-client - None
falcon-server - None
flume-server - None
hadoop-client - 2.5.3.0-37
hadoop-hdfs-datanode - 2.5.3.0-37
hadoop-hdfs-journalnode - 2.5.3.0-37
hadoop-hdfs-namenode - 2.5.3.0-37
hadoop-hdfs-nfs3 - 2.5.3.0-37
hadoop-hdfs-portmap - 2.5.3.0-37
hadoop-hdfs-secondarynamenode - 2.5.3.0-37
hadoop-hdfs-zkfc - 2.5.3.0-37
hadoop-httpfs - None
hadoop-mapreduce-historyserver - 2.5.3.0-37
hadoop-yarn-nodemanager - 2.5.3.0-37
hadoop-yarn-resourcemanager - 2.5.3.0-37
hadoop-yarn-timelineserver - 2.5.3.0-37
hbase-client - 2.5.3.0-37
hbase-master - 2.5.3.0-37
hbase-regionserver - 2.5.3.0-37
hive-metastore - 2.5.3.0-37
hive-server2 - 2.5.3.0-37
hive-server2-hive2 - 2.5.3.0-37
hive-webhcat - 2.5.3.0-37
kafka-broker - 2.5.3.0-37
knox-server - None
livy-server - 2.5.3.0-37
mahout-client - None
oozie-client - None
oozie-server - None
phoenix-client - None
phoenix-server - None
ranger-admin - None
ranger-kms - None
ranger-tagsync - None
ranger-usersync - None
slider-client - 2.5.3.0-37
spark-client - 2.5.3.0-37
spark-historyserver - 2.5.3.0-37
spark-thriftserver - 2.5.3.0-37
spark2-client - 2.5.3.0-37
spark2-historyserver - 2.5.3.0-37
spark2-thriftserver - 2.5.3.0-37
sqoop-client - None
sqoop-server - None
storm-client - 2.5.3.0-37
storm-nimbus - 2.5.3.0-37
storm-slider-client - 2.5.3.0-37
storm-supervisor - 2.5.3.0-37
zeppelin-server - 2.5.3.0-37
zookeeper-client - 2.5.3.0-37
zookeeper-server - 2.5.3.0-37
[root@slot2 ~]# hdp-select set all 2.5.3.0-37
[root@slot2 ~]# hdp-select versions
2.5.3.0-37
2.6.3.0-235
[root@slot2 ~]# hdp-select set all 2.6.3.0-235
[root@slot2 ~]# hdp-select
accumulo-client - None
accumulo-gc - None
accumulo-master - None
accumulo-monitor - None
accumulo-tablet - None
accumulo-tracer - None
atlas-client - 2.6.3.0-235
atlas-server - 2.6.3.0-235
falcon-client - None
falcon-server - None
flume-server - None
hadoop-client - 2.6.3.0-235
hadoop-hdfs-datanode - None
hadoop-hdfs-journalnode - None
hadoop-hdfs-namenode - None
hadoop-hdfs-nfs3 - None
hadoop-hdfs-portmap - None
hadoop-hdfs-secondarynamenode - None
hadoop-hdfs-zkfc - None
hadoop-httpfs - None
hadoop-mapreduce-historyserver - None
hadoop-yarn-nodemanager - None
hadoop-yarn-resourcemanager - None
hadoop-yarn-timelineserver - None
hbase-client - 2.6.3.0-235
hbase-master - 2.6.3.0-235
hbase-regionserver - 2.6.3.0-235
hive-metastore - 2.6.3.0-235
hive-server2 - 2.6.3.0-235
hive-server2-hive2 - 2.6.3.0-235
hive-webhcat - None
kafka-broker - 2.6.3.0-235
knox-server - None
livy-server - None
mahout-client - None
oozie-client - None
oozie-server - None
phoenix-client - None
phoenix-server - None
ranger-admin - None
ranger-kms - None
ranger-tagsync - None
ranger-usersync - None
slider-client - None
spark-client - 2.6.3.0-235
spark-historyserver - 2.6.3.0-235
spark-thriftserver - 2.6.3.0-235
spark2-client - 2.6.3.0-235
spark2-historyserver - 2.6.3.0-235
spark2-thriftserver - 2.6.3.0-235
sqoop-client - None
sqoop-server - None
storm-client - 2.6.3.0-235
storm-nimbus - 2.6.3.0-235
storm-slider-client - None
storm-supervisor - 2.6.3.0-235
zeppelin-server - 2.6.3.0-235
zookeeper-client - None
zookeeper-server - None
There is no output if I run hdp-select set all 2.5.3.0-37. The following is the result if I run hdp-select versions: [root@host ~]# hdp-select versions
2.5.3.0-37
2.6.3.0-235
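Since hdp-select set all 2.6.3.0-235 left a number of components (the HDFS and YARN daemons, zookeeper, livy, slider) stuck on None, it can help to diff the output mechanically. A small sketch that just parses the "name - version" lines and lists the stragglers:

```python
def unset_components(hdp_select_output):
    """Return component names whose version reads 'None' in `hdp-select` output."""
    stale = []
    for line in hdp_select_output.splitlines():
        if " - " not in line:
            continue
        name, _, version = line.partition(" - ")
        if version.strip() == "None":
            stale.append(name.strip())
    return stale

sample = """\
hadoop-client - 2.6.3.0-235
hadoop-hdfs-datanode - None
zookeeper-client - None
"""
print(unset_components(sample))  # → ['hadoop-hdfs-datanode', 'zookeeper-client']
```

Components left on None after a set all usually mean the matching package for that version is not installed on the host, so hdp-select has nothing to point the symlink at.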
12-04-2017
04:53 AM
Yes. I also have the folder /usr/hdp/2.6.3.0-235.
12-04-2017
04:52 AM
I downgraded HDP from 2.6 to 2.5 and did a proper cleanup. I followed your steps, but there is an error stating: put: `/hdp/apps/2.5.3.0-37/spark2': No such file or directory: `hdfs://hostname:8020/hdp/apps/2.5.3.0-37/spark2'
12-04-2017
01:07 AM
Hi Aditya, here it is: [root@host ~]# ls /usr/hdp/2.5.3.0-37
atlas HDP-LICENSE.txt ranger-hbase-plugin spark2
datafu HDP-NOTICES.txt ranger-hdfs-plugin storm
etc hive ranger-hive-plugin storm-slider-client
hadoop hive2 ranger-kafka-plugin tez
hadoop-hdfs hive-hcatalog ranger-storm-plugin tez_hive2
hadoop-mapreduce kafka ranger-yarn-plugin usr
hadoop-yarn livy slider zeppelin
hbase pig spark zookeeper
12-03-2017
08:39 AM
Hi, I am having trouble starting the Spark2 History Server in Ambari. Below is the standard error output. stderr: /var/lib/ambari-agent/data/errors-3723.txt 2017-12-01 11:26:34,759 - Found multiple matches for stack version, cannot identify the correct one from: 2.5.3.0-37, 2.6.3.0-235
2017-12-01 11:26:34,759 - Cannot copy spark2 tarball to HDFS because stack version could be be determined.
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/SPARK2/2.0.0/package/scripts/job_history_server.py", line 103, in <module>
JobHistoryServer().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/SPARK2/2.0.0/package/scripts/job_history_server.py", line 56, in start
spark_service('jobhistoryserver', upgrade_type=upgrade_type, action='start')
File "/var/lib/ambari-agent/cache/common-services/SPARK2/2.0.0/package/scripts/spark_service.py", line 65, in spark_service
make_tarfile(tmp_archive_file, source_dir)
File "/var/lib/ambari-agent/cache/common-services/SPARK2/2.0.0/package/scripts/spark_service.py", line 38, in make_tarfile
os.remove(output_filename)
TypeError: coercing to Unicode: need string or buffer, NoneType found
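For what it's worth, that final TypeError is the Python 2 message you get from os.remove(None): make_tarfile() evidently reaches os.remove() with output_filename unset because the stack version is ambiguous. A defensive guard along these lines (a minimal sketch of the pattern, not the actual Ambari fix) shows why the call blows up:

```python
import os

def safe_remove(path):
    """Delete a file only when `path` is a real, existing path string.

    Passing None straight to os.remove() raises TypeError (on Python 2:
    "coercing to Unicode: need string or buffer, NoneType found"),
    which is exactly the error in the traceback above.
    """
    if not path or not os.path.exists(path):
        return False
    os.remove(path)
    return True
```

The underlying cause here is still the two stack versions being active at once; the guard only changes where the failure surfaces.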
Labels: Apache Ambari, Apache Spark
12-01-2017
07:15 AM
Thank you Jay. It works for me. 🙂
12-01-2017
07:12 AM
Thank you Aditya, it worked. I made a backup file. 🙂
12-01-2017
06:18 AM
Hi Aditya, here is my /etc/zeppelin/conf/interpreter.json: {
"interpreterSettings": {
"2CZ3ASUMC": {
"id": "2CZ3ASUMC",
"name": "python",
"group": "python",
"properties": {
"zeppelin.python": "/usr/lib/miniconda2/bin/python",
"zeppelin.python.maxResult": "1000000000",
        "zeppelin.interpreter.localRepo": "/usr/hdp/current/zeppelin-server/local-repo/2CZ3ASUMC",
"zeppelin.python.useIPython": "true",
"zeppelin.ipython.launch.timeout": "30000"
},
"interpreterGroup": [
{
"class": "org.apache.zeppelin.python.PythonInterpreter",
"name": "python"
}
],
"dependencies": [],
"option": {
"remote": true,
"perNoteSession": false,
"perNoteProcess": false,
"isExistingProcess": false,
"isUserImpersonate": false
}
},
"2CKEKWY8Z": {
"id": "2CKEKWY8Z",
"name": "angular",
"group": "angular",
"properties": {},
"interpreterGroup": [
{
"class": "org.apache.zeppelin.angular.AngularInterpreter",
"name": "angular"
}
],
"dependencies": [],
"option": {
"remote": true,
"perNoteSession": false,
"perNoteProcess": false,
"isExistingProcess": false,
"port": "-1",
"isUserImpersonate": false
}
},
"2CK8A9MEG": {
"id": "2CK8A9MEG",
"name": "jdbc",
"group": "jdbc",
"properties": {
"phoenix.user": "phoenixuser",
        "hive.url": "jdbc:hive2://slot4:2181,slot2:2181,slot3:2181/;serviceDiscoveryMode\u003dzooKeeper;zooKeeperNamespace\u003dhiveserver2",
"default.driver": "org.postgresql.Driver",
"phoenix.driver": "org.apache.phoenix.jdbc.PhoenixDriver",
"hive.user": "hive",
"psql.password": "",
"psql.user": "phoenixuser",
"psql.url": "jdbc:postgresql://localhost:5432/",
"default.user": "gpadmin",
"phoenix.hbase.client.retries.number": "1",
"phoenix.url": "jdbc:phoenix:slot4,slot2,slot3:/hbase-unsecure",
"tajo.url": "jdbc:tajo://localhost:26002/default",
"tajo.driver": "org.apache.tajo.jdbc.TajoDriver",
"psql.driver": "org.postgresql.Driver",
"default.password": "",
        "zeppelin.interpreter.localRepo": "/usr/hdp/current/zeppelin-server/local-repo/2CK8A9MEG",
"zeppelin.jdbc.auth.type": "SIMPLE",
"hive.proxy.user.property": "hive.server2.proxy.user",
"hive.password": "",
"zeppelin.jdbc.concurrent.use": "true",
"hive.driver": "org.apache.hive.jdbc.HiveDriver",
"zeppelin.jdbc.keytab.location": "",
"common.max_count": "1000",
"phoenix.password": "",
"zeppelin.jdbc.principal": "",
"zeppelin.jdbc.concurrent.max_connection": "10",
"default.url": "jdbc:postgresql://localhost:5432/"
},
"interpreterGroup": [
{
"class": "org.apache.zeppelin.jdbc.JDBCInterpreter",
"name": "sql"
}
],
"dependencies": [],
"option": {
"remote": true,
"perNoteSession": false,
"perNoteProcess": false,
"isExistingProcess": false,
"port": "-1",
"isUserImpersonate": false
}
},
"2CYSZ9Q7Q": {
"id": "2CYSZ9Q7Q",
"name": "spark",
"group": "spark",
"properties": {
"spark.cores.max": "",
"zeppelin.spark.printREPLOutput": "true",
"master": "local[*]",
"zeppelin.spark.maxResult": "1000",
"zeppelin.dep.localrepo": "local-repo",
"spark.app.name": "Zeppelin",
"spark.executor.memory": "",
"zeppelin.spark.sql.stacktrace": "false",
"zeppelin.spark.importImplicit": "true",
"zeppelin.spark.useHiveContext": "true",
        "zeppelin.interpreter.localRepo": "/usr/hdp/current/zeppelin-server/local-repo/2CYSZ9Q7Q",
"zeppelin.spark.concurrentSQL": "false",
"args": "",
"zeppelin.pyspark.python": "/usr/lib/miniconda2/bin/python",
"spark.yarn.keytab": "",
12-01-2017
05:26 AM
Hi Jay, I ran the command, but the Zeppelin UI is still giving a 503 error.
12-01-2017
04:47 AM
Hi Jay, 1. and 2. Yes, it's not corrupted and the ownership is zeppelin:zeppelin: -rw-r--r--. 1 zeppelin zeppelin 4096 Nov 26 23:30 /etc/zeppelin/conf/interpreter.json 3. Below is the file content before I made the backup.
12-01-2017
01:12 AM
Hi, I am having trouble starting the Zeppelin Notebook in Ambari. Below is the standard error output. stderr: /var/lib/ambari-agent/data/errors-3626.txt Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/master.py", line 467, in <module>
Master().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
method(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 865, in restart
self.start(env, upgrade_type=upgrade_type)
File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/master.py", line 223, in start
self.update_kerberos_properties()
File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/master.py", line 273, in update_kerberos_properties
config_data = self.get_interpreter_settings()
File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/master.py", line 248, in get_interpreter_settings
config_data = json.loads(config_content)
File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib64/python2.7/json/decoder.py", line 382, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Unterminated string starting at: line 119 column 9 (char 4087
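The ValueError means interpreter.json contains a string literal that is never closed (the decoder even reports where: line 119, char 4087 of the file). You can pinpoint it by feeding the file through Python's json module directly; a minimal sketch (the broken fragment below is illustrative, not your actual file):

```python
import json

def first_json_error(text):
    """Return None if `text` parses as JSON, else the decoder's message,
    which includes the line/column of the first problem."""
    try:
        json.loads(text)
        return None
    except ValueError as exc:  # json.JSONDecodeError subclasses ValueError
        return str(exc)

# Hypothetical truncated fragment with an unterminated string,
# like the traceback above reports.
broken = '{"name": "python", "zeppelin.python": "/usr/lib/minicond'
print(first_json_error(broken))
```

In practice, python -c 'import json; json.load(open("/etc/zeppelin/conf/interpreter.json"))' gives the same diagnostic without touching the file, so you can locate and repair the truncated string (or restore the backup).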
Labels: Apache Zeppelin
11-30-2017
10:13 AM
Hi Jay, thanks for replying. myhostname is not my real hostname; I kept it confidential. 1) I tested the port (with nc rather than telnet). This is the result: [root@myhost ~]# nc -v myhostname 50070
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Connected to myhostip:50070.
.
HTTP/1.1 400 Bad Request
Connection: close
Server: Jetty(6.1.26.hwx)
2) When I grep for 50070: [root@myhost hdfs]# netstat -tnlpa | grep 50070
tcp 0 0 myhostip:50070 0.0.0.0:* LISTEN 17042/java
tcp 0 0 myhostip:50070 myhostip:53422 TIME_WAIT -
tcp 0 0 myhostip:53080 myhostip:50070 CLOSE_WAIT 194862/nc
tcp 0 0 myhostip:50070 myhostip:53420 TIME_WAIT -
tcp 0 0 myhostip:50070 myhostip:53424 TIME_WAIT -
tcp 0 0 myhostip:50070 myhostip:53426 TIME_WAIT -
tcp 0 0 myhostip:50070 myhostip:53440 TIME_WAIT -
tcp 0 0 myhostip:50070 myhostip:53418 TIME_WAIT -
11-30-2017
09:40 AM
Hi, I am trying to start the YARN service in Ambari, but it is giving an error. I'm using a multi-node cluster. Please find the details of stderr and stdout below. Thanks. stderr: /var/lib/ambari-agent/data/errors-3528.txt Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 94, in <module>
ApplicationTimelineServer().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 44, in start
self.configure(env) # FOR SECURITY
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 119, in locking_configure
original_configure(obj, *args, **kw)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 55, in configure
yarn(name='apptimelineserver')
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/yarn.py", line 356, in yarn
mode=0755
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 604, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 601, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 328, in action_delayed
self._assert_valid()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 287, in _assert_valid
self.target_status = self._get_file_status(target)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 430, in _get_file_status
list_status = self.util.run_command(target, 'GETFILESTATUS', method='GET', ignore_status_codes=['404'], assertable_result=False)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 177, in run_command
return self._run_command(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 235, in _run_command
_, out, err = get_user_call_output(cmd, user=self.run_user, logoutput=self.logoutput, quiet=False)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_user_call_output.py", line 61, in get_user_call_output
raise ExecutionFailed(err_msg, code, files_output[0], files_output[1])
resource_management.core.exceptions.ExecutionFailed: Execution of 'curl -sS -L -w '%{http_code}' -X GET 'http://myhostname:50070/webhdfs/v1/ats/done?op=GETFILESTATUS&user.name=hdfs' 1>/tmp/tmpdOQron 2>/tmp/tmprXPUdn' returned 7. curl: (7) Failed connect to myhostname:50070; No route to host
000 stdout: /var/lib/ambari-agent/data/output-3528.txt 2017-11-30 01:59:09,238 - Stack Feature Version Info: Cluster Stack=2.5, Cluster Current Version=None, Command Stack=None, Command Version=2.5.3.0-37 -> 2.5.3.0-37
2017-11-30 01:59:09,260 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-30 01:59:09,464 - Stack Feature Version Info: Cluster Stack=2.5, Cluster Current Version=None, Command Stack=None, Command Version=2.5.3.0-37 -> 2.5.3.0-37
2017-11-30 01:59:09,473 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-11-30 01:59:09,474 - Group['metron'] {}
2017-11-30 01:59:09,475 - Group['livy'] {}
2017-11-30 01:59:09,475 - Group['elasticsearch'] {}
2017-11-30 01:59:09,475 - Group['spark'] {}
2017-11-30 01:59:09,476 - Group['zeppelin'] {}
2017-11-30 01:59:09,476 - Group['hadoop'] {}
2017-11-30 01:59:09,476 - Group['kibana'] {}
2017-11-30 01:59:09,476 - Group['users'] {}
2017-11-30 01:59:09,477 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,478 - call['/var/lib/ambari-agent/tmp/changeUid.sh hive'] {}
2017-11-30 01:59:09,489 - call returned (0, '1001')
2017-11-30 01:59:09,489 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1001}
2017-11-30 01:59:09,492 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,494 - call['/var/lib/ambari-agent/tmp/changeUid.sh storm'] {}
2017-11-30 01:59:09,505 - call returned (0, '1002')
2017-11-30 01:59:09,506 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1002}
2017-11-30 01:59:09,508 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,509 - call['/var/lib/ambari-agent/tmp/changeUid.sh zookeeper'] {}
2017-11-30 01:59:09,521 - call returned (0, '1003')
2017-11-30 01:59:09,521 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1003}
2017-11-30 01:59:09,523 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,525 - call['/var/lib/ambari-agent/tmp/changeUid.sh ams'] {}
2017-11-30 01:59:09,536 - call returned (0, '1004')
2017-11-30 01:59:09,536 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1004}
2017-11-30 01:59:09,538 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,540 - call['/var/lib/ambari-agent/tmp/changeUid.sh tez'] {}
2017-11-30 01:59:09,551 - call returned (0, '1005')
2017-11-30 01:59:09,551 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': 1005}
2017-11-30 01:59:09,553 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,555 - call['/var/lib/ambari-agent/tmp/changeUid.sh zeppelin'] {}
2017-11-30 01:59:09,565 - call returned (0, '1007')
2017-11-30 01:59:09,566 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'zeppelin', u'hadoop'], 'uid': 1007}
2017-11-30 01:59:09,567 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,568 - call['/var/lib/ambari-agent/tmp/changeUid.sh metron'] {}
2017-11-30 01:59:09,579 - call returned (0, '1008')
2017-11-30 01:59:09,580 - User['metron'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1008}
2017-11-30 01:59:09,582 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,583 - call['/var/lib/ambari-agent/tmp/changeUid.sh livy'] {}
2017-11-30 01:59:09,594 - call returned (0, '1009')
2017-11-30 01:59:09,594 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1009}
2017-11-30 01:59:09,596 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,597 - call['/var/lib/ambari-agent/tmp/changeUid.sh elasticsearch'] {}
2017-11-30 01:59:09,608 - call returned (0, '1010')
2017-11-30 01:59:09,608 - User['elasticsearch'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1010}
2017-11-30 01:59:09,610 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,612 - call['/var/lib/ambari-agent/tmp/changeUid.sh spark'] {}
2017-11-30 01:59:09,624 - call returned (0, '1019')
2017-11-30 01:59:09,624 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1019}
2017-11-30 01:59:09,626 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2017-11-30 01:59:09,628 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,630 - call['/var/lib/ambari-agent/tmp/changeUid.sh flume'] {}
2017-11-30 01:59:09,641 - call returned (0, '1011')
2017-11-30 01:59:09,642 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1011}
2017-11-30 01:59:09,644 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,645 - call['/var/lib/ambari-agent/tmp/changeUid.sh kafka'] {}
2017-11-30 01:59:09,655 - call returned (0, '1012')
2017-11-30 01:59:09,655 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1012}
2017-11-30 01:59:09,657 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,658 - call['/var/lib/ambari-agent/tmp/changeUid.sh hdfs'] {}
2017-11-30 01:59:09,668 - call returned (0, '1013')
2017-11-30 01:59:09,669 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1013}
2017-11-30 01:59:09,671 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,673 - call['/var/lib/ambari-agent/tmp/changeUid.sh yarn'] {}
2017-11-30 01:59:09,683 - call returned (0, '1014')
2017-11-30 01:59:09,683 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1014}
2017-11-30 01:59:09,685 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,687 - call['/var/lib/ambari-agent/tmp/changeUid.sh kibana'] {}
2017-11-30 01:59:09,697 - call returned (0, '1016')
2017-11-30 01:59:09,697 - User['kibana'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1016}
2017-11-30 01:59:09,699 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,701 - call['/var/lib/ambari-agent/tmp/changeUid.sh mapred'] {}
2017-11-30 01:59:09,710 - call returned (0, '1015')
2017-11-30 01:59:09,711 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1015}
2017-11-30 01:59:09,712 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,714 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2017-11-30 01:59:09,723 - call returned (0, '1017')
2017-11-30 01:59:09,724 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1017}
2017-11-30 01:59:09,726 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,727 - call['/var/lib/ambari-agent/tmp/changeUid.sh hcat'] {}
2017-11-30 01:59:09,737 - call returned (0, '1018')
2017-11-30 01:59:09,738 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1018}
2017-11-30 01:59:09,739 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,740 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-11-30 01:59:09,747 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2017-11-30 01:59:09,747 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-11-30 01:59:09,749 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,750 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,751 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2017-11-30 01:59:09,761 - call returned (0, '1017')
2017-11-30 01:59:09,762 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1017'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-11-30 01:59:09,769 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1017'] due to not_if
2017-11-30 01:59:09,769 - Group['hdfs'] {}
2017-11-30 01:59:09,770 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-11-30 01:59:09,770 - FS Type:
2017-11-30 01:59:09,771 - Directory['/etc/hadoop'] {'mode': 0755}
2017-11-30 01:59:09,793 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-11-30 01:59:09,795 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-11-30 01:59:09,814 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2017-11-30 01:59:09,828 - Skipping Execute[('setenforce', '0')] due to only_if
2017-11-30 01:59:09,828 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2017-11-30 01:59:09,831 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2017-11-30 01:59:09,832 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2017-11-30 01:59:09,838 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2017-11-30 01:59:09,841 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2017-11-30 01:59:09,850 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2017-11-30 01:59:09,864 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2017-11-30 01:59:09,865 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2017-11-30 01:59:09,866 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2017-11-30 01:59:09,871 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2017-11-30 01:59:09,876 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2017-11-30 01:59:10,143 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-30 01:59:10,144 - Stack Feature Version Info: Cluster Stack=2.5, Cluster Current Version=None, Command Stack=None, Command Version=2.5.3.0-37 -> 2.5.3.0-37
2017-11-30 01:59:10,145 - call['ambari-python-wrap /usr/bin/hdp-select status hadoop-yarn-resourcemanager'] {'timeout': 20}
2017-11-30 01:59:10,183 - call returned (0, 'hadoop-yarn-resourcemanager - 2.5.3.0-37')
2017-11-30 01:59:10,234 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-30 01:59:10,255 - Directory['/var/log/hadoop-yarn/nodemanager/recovery-state'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-11-30 01:59:10,257 - Directory['/var/run/hadoop-yarn'] {'owner': 'yarn', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-30 01:59:10,258 - Directory['/var/run/hadoop-yarn/yarn'] {'owner': 'yarn', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-30 01:59:10,258 - Directory['/var/log/hadoop-yarn/yarn'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'}
2017-11-30 01:59:10,259 - Directory['/var/run/hadoop-mapreduce'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-30 01:59:10,259 - Directory['/var/run/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-30 01:59:10,260 - Directory['/var/log/hadoop-mapreduce'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-30 01:59:10,260 - Directory['/var/log/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'}
2017-11-30 01:59:10,261 - Directory['/var/log/hadoop-yarn'] {'owner': 'yarn', 'group': 'hadoop', 'ignore_failures': True, 'create_parents': True, 'cd_access': 'a'}
2017-11-30 01:59:10,262 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2017-11-30 01:59:10,272 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2017-11-30 01:59:10,272 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-30 01:59:10,293 - XmlConfig['hdfs-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2017-11-30 01:59:10,301 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2017-11-30 01:59:10,301 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-30 01:59:10,338 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...}
2017-11-30 01:59:10,344 - Generating config: /usr/hdp/current/hadoop-client/conf/mapred-site.xml
2017-11-30 01:59:10,345 - File['/usr/hdp/current/hadoop-client/conf/mapred-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-30 01:59:10,371 - Changing owner for /usr/hdp/current/hadoop-client/conf/mapred-site.xml from 1015 to yarn
2017-11-30 01:59:10,371 - XmlConfig['yarn-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...}
2017-11-30 01:59:10,376 - Generating config: /usr/hdp/current/hadoop-client/conf/yarn-site.xml
2017-11-30 01:59:10,377 - File['/usr/hdp/current/hadoop-client/conf/yarn-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-30 01:59:10,439 - XmlConfig['capacity-scheduler.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...}
2017-11-30 01:59:10,444 - Generating config: /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml
2017-11-30 01:59:10,444 - File['/usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-30 01:59:10,453 - Changing owner for /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml from 1013 to yarn
2017-11-30 01:59:10,453 - Directory['/hadoop/yarn/timeline'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'}
2017-11-30 01:59:10,453 - Directory['/hadoop/yarn/timeline'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'}
2017-11-30 01:59:10,454 - HdfsResource['/ats/done'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://slot2:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'change_permissions_for_parents': True, 'owner': 'yarn', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp'], 'mode': 0755}
2017-11-30 01:59:10,456 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://slot2:50070/webhdfs/v1/ats/done?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpdOQron 2>/tmp/tmprXPUdn''] {'logoutput': None, 'quiet': False}
2017-11-30 01:59:10,748 - call returned (7, '')
Command failed after 1 tries
Labels:
- Apache Hadoop
- Apache YARN
11-27-2017
08:08 AM
Hi Jay, this is the log:

2017-11-13 03:52:28,337 INFO namenode.NameNode (LogAdapter.java:info(47)) - STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: user = hdfs
STARTUP_MSG: host = hostname
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.7.3.2.5.3.0-37
STARTUP_MSG: classpath = /usr/hdp/current/hadoop-client/conf:/usr/hdp/2.5.3.0-37/hadoop/lib/ojdbc6.jar:/usr/hdp/2.5.3.0-37/hadoop/lib/jackson-annotations-2.2.3.jar:/usr/hdp/2.5.3.0-37/hadoop/lib/ranger-hdfs-plugin-shim-0.6.0.2.5.3.0-3$
STARTUP_MSG: build = git@github.com:hortonworks/hadoop.git -r 9828acfdec41a121f0121f556b09e2d112259e92; compiled by 'jenkins' on 2016-11-29T18:06Z
STARTUP_MSG: java = 1.8.0_112
************************************************************/
2017-11-13 03:52:28,351 INFO namenode.NameNode (LogAdapter.java:info(47)) - registered UNIX signal handlers for [TERM, HUP, INT]
2017-11-13 03:52:28,356 INFO namenode.NameNode (NameNode.java:createNameNode(1600)) - createNameNode []
2017-11-13 03:52:28,567 INFO impl.MetricsConfig (MetricsConfig.java:loadFirst(112)) - loaded properties from hadoop-metrics2.properties
2017-11-13 03:52:28,708 INFO timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(82)) - Initializing Timeline metrics sink.
2017-11-13 03:52:28,709 INFO timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(102)) - Identified hostname = slot2, serviceName = namenode
2017-11-13 03:52:28,813 INFO availability.MetricCollectorHAHelper (MetricCollectorHAHelper.java:findLiveCollectorHostsFromZNode(79)) - /ambari-metrics-cluster znode does not exist. Skipping requesting live instances from zookeeper
2017-11-13 03:52:28,817 INFO timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(128)) - No suitable collector found.
2017-11-13 03:52:28,823 INFO timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(180)) - RPC port properties configured: {8020=client}
2017-11-13 03:52:28,833 INFO impl.MetricsSinkAdapter (MetricsSinkAdapter.java:start(206)) - Sink timeline started
2017-11-13 03:52:28,903 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(376)) - Scheduled snapshot period at 10 second(s).
2017-11-13 03:52:28,903 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:start(192)) - NameNode metrics system started
2017-11-13 03:52:28,908 INFO namenode.NameNode (NameNode.java:setClientNamenodeAddress(450)) - fs.defaultFS is hdfs://slot2:8020
2017-11-13 03:52:28,908 INFO namenode.NameNode (NameNode.java:setClientNamenodeAddress(470)) - Clients are to use slot2:8020 to access this namenode/service.
2017-11-13 03:52:29,025 INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(179)) - Starting JVM pause monitor
2017-11-13 03:52:29,032 INFO hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1780)) - Starting Web-server for hdfs at: http://slot2:50070
2017-11-13 03:52:29,072 INFO mortbay.log (Slf4jLog.java:info(67)) - Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2017-11-13 03:52:29,078 INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(293)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2017-11-13 03:52:29,082 INFO http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.namenode is not defined
2017-11-13 03:52:29,086 INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(754)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-11-13 03:52:29,088 INFO http.HttpServer2 (HttpServer2.java:addFilter(729)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2017-11-13 03:52:29,088 INFO http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2017-11-13 03:52:29,088 INFO http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2017-11-13 03:52:29,089 INFO security.HttpCrossOriginFilterInitializer (HttpCrossOriginFilterInitializer.java:initFilter(49)) - CORS filter not enabled. Please set hadoop.http.cross-origin.enabled to 'true' to enable it
2017-11-13 03:52:29,107 INFO http.HttpServer2 (NameNodeHttpServer.java:initWebHdfs(93)) - Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2017-11-13 03:52:29,108 INFO http.HttpServer2 (HttpServer2.java:addJerseyResourcePackage(653)) - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=$
2017-11-13 03:52:29,117 INFO http.HttpServer2 (HttpServer2.java:openListeners(959)) - Jetty bound to port 50070
2017-11-13 03:52:29,117 INFO mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26.hwx
2017-11-13 03:52:29,224 INFO mortbay.log (Slf4jLog.java:info(67)) - Started HttpServer2$SelectChannelConnectorWithSafeStartup@slot2:50070
2017-11-13 03:52:29,265 WARN common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-11-13 03:52:29,265 WARN common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-11-13 03:52:29,266 WARN namenode.FSNamesystem (FSNamesystem.java:checkConfiguration(656)) - Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2017-11-13 03:52:29,266 WARN namenode.FSNamesystem (FSNamesystem.java:checkConfiguration(661)) - Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage direc$
2017-11-13 03:52:29,269 WARN common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-11-13 03:52:29,270 WARN common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-11-13 03:52:29,274 WARN common.Storage (NNStorage.java:setRestoreFailedStorage(210)) - set restore failed storage to true
2017-11-13 03:52:29,291 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(725)) - No KeyProvider found.
2017-11-13 03:52:29,291 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(731)) - Enabling async auditlog
2017-11-13 03:52:29,292 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(735)) - fsLock is fair:false
2017-11-13 03:52:29,313 INFO blockmanagement.HeartbeatManager (HeartbeatManager.java:<init>(90)) - Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-inter$
2017-11-13 03:52:29,321 INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(242)) - dfs.block.invalidate.limit=1000
2017-11-13 03:52:29,321 INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(248)) - dfs.namenode.datanode.registration.ip-hostname-check=true
2017-11-13 03:52:29,323 INFO blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(71)) - dfs.namenode.startup.delay.block.deletion.sec is set to 000:01:00:00.000
2017-11-13 03:52:29,323 INFO blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(76)) - The block deletion will start around 2017 Nov 13 04:52:29
2017-11-13 03:52:29,324 INFO util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map BlocksMap
2017-11-13 03:52:29,324 INFO util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type = 64-bit
2017-11-13 03:52:29,326 INFO util.GSet (LightWeightGSet.java:computeCapacity(356)) - 2.0% max memory 1011.3 MB = 20.2 MB
2017-11-13 03:52:29,326 INFO util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity = 2^21 = 2097152 entries
11-27-2017
04:57 AM
Hi Jay, I ran the safe mode leave command but the NameNode is still in safe mode. Where can I access the NameNode logs? Are they in /var/log/hadoop/hdfs?
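For reference, safe mode usually refuses to stay off when the NameNode resource checker keeps flagging low disk space, and the NameNode log (by default under /var/log/hadoop/hdfs on HDP) records why. Below is a minimal sketch that filters a log for such warnings — the exact phrases and path are assumptions based on this thread, not authoritative Hadoop messages:

```python
import re

# The phrases below are assumptions modeled on typical NameNode
# resource-checker output; they may differ between Hadoop versions.
LOW_RESOURCE = re.compile(
    r"Space available on volume .* is below threshold"
    r"|Resources are low on NN"
)

def safe_mode_hints(lines):
    """Return log lines that suggest why the NameNode stays in safe mode."""
    return [line for line in lines if LOW_RESOURCE.search(line)]

sample = [
    "2017-11-13 03:52:30,001 WARN namenode.NameNodeResourceChecker - "
    "Space available on volume '/dev/sda1' is below threshold of reserved space",
    "2017-11-13 03:52:29,291 INFO namenode.FSNamesystem - No KeyProvider found.",
]
print(safe_mode_hints(sample))  # prints only the resource warning line
```

If a line like the first sample appears, freeing disk space on the NameNode host (or lowering dfs.namenode.resource.du.reserved) is what lets `hdfs dfsadmin -safemode leave` actually stick.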
11-27-2017
03:37 AM
Hi, I am trying to start the YARN service in Ambari, but it fails with an error saying the NameNode is in safe mode and that I need to add or free up resources before turning safe mode off. Please find the stderr and stdout details below. Thanks.

stderr: /var/lib/ambari-agent/data/errors-3271.txt

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 94, in <module>
ApplicationTimelineServer().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 44, in start
self.configure(env) # FOR SECURITY
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 119, in locking_configure
original_configure(obj, *args, **kw)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 55, in configure
yarn(name='apptimelineserver')
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/yarn.py", line 356, in yarn
mode=0755
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 604, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 601, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 337, in action_delayed
self._set_mode(self.target_status)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 508, in _set_mode
self.util.run_command(self.main_resource.resource.target, 'SETPERMISSION', method='PUT', permission=self.mode, assertable_result=False)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 177, in run_command
return self._run_command(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 248, in _run_command
raise WebHDFSCallException(err_msg, result_dict)
resource_management.libraries.providers.hdfs_resource.WebHDFSCallException: Execution of 'curl -sS -L -w '%{http_code}' -X PUT 'http://slot2:50070/webhdfs/v1/ats/done?op=SETPERMISSION&user.name=hdfs&permission=755'' returned status_code=403.
{
"RemoteException": {
"exception": "SafeModeException",
"javaClassName": "org.apache.hadoop.hdfs.server.namenode.SafeModeException",
"message": "Cannot set permission for /ats/done. Name node is in safe mode.\nResources are low on NN. Please add or free up more resources then turn off safe mode manually. NOTE: If you turn off safe mode before adding resources, the NN will immediately return to safe mode. Use \"hdfs dfsadmin -safemode leave\" to turn safe mode off."
}
}

stdout: /var/lib/ambari-agent/data/output-3271.txt

2017-11-26 21:02:30,897 - Stack Feature Version Info: Cluster Stack=2.5, Cluster Current Version=None, Command Stack=None, Command Version=2.5.3.0-37 -> 2.5.3.0-37
2017-11-26 21:02:30,919 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-26 21:02:31,118 - Stack Feature Version Info: Cluster Stack=2.5, Cluster Current Version=None, Command Stack=None, Command Version=2.5.3.0-37 -> 2.5.3.0-37
2017-11-26 21:02:31,127 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-11-26 21:02:31,128 - Group['metron'] {}
2017-11-26 21:02:31,129 - Group['livy'] {}
2017-11-26 21:02:31,129 - Group['elasticsearch'] {}
2017-11-26 21:02:31,129 - Group['spark'] {}
2017-11-26 21:02:31,130 - Group['zeppelin'] {}
2017-11-26 21:02:31,130 - Group['hadoop'] {}
2017-11-26 21:02:31,130 - Group['kibana'] {}
2017-11-26 21:02:31,130 - Group['users'] {}
2017-11-26 21:02:31,131 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,132 - call['/var/lib/ambari-agent/tmp/changeUid.sh hive'] {}
2017-11-26 21:02:31,145 - call returned (0, '1001')
2017-11-26 21:02:31,145 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1001}
2017-11-26 21:02:31,147 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,149 - call['/var/lib/ambari-agent/tmp/changeUid.sh storm'] {}
2017-11-26 21:02:31,162 - call returned (0, '1002')
2017-11-26 21:02:31,163 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1002}
2017-11-26 21:02:31,164 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,166 - call['/var/lib/ambari-agent/tmp/changeUid.sh zookeeper'] {}
2017-11-26 21:02:31,178 - call returned (0, '1003')
2017-11-26 21:02:31,178 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1003}
2017-11-26 21:02:31,180 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,182 - call['/var/lib/ambari-agent/tmp/changeUid.sh ams'] {}
2017-11-26 21:02:31,194 - call returned (0, '1004')
2017-11-26 21:02:31,194 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1004}
2017-11-26 21:02:31,196 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,198 - call['/var/lib/ambari-agent/tmp/changeUid.sh tez'] {}
2017-11-26 21:02:31,209 - call returned (0, '1005')
2017-11-26 21:02:31,210 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': 1005}
2017-11-26 21:02:31,212 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,214 - call['/var/lib/ambari-agent/tmp/changeUid.sh zeppelin'] {}
2017-11-26 21:02:31,225 - call returned (0, '1007')
2017-11-26 21:02:31,226 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'zeppelin', u'hadoop'], 'uid': 1007}
2017-11-26 21:02:31,228 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,230 - call['/var/lib/ambari-agent/tmp/changeUid.sh metron'] {}
2017-11-26 21:02:31,241 - call returned (0, '1008')
2017-11-26 21:02:31,242 - User['metron'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1008}
2017-11-26 21:02:31,244 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,245 - call['/var/lib/ambari-agent/tmp/changeUid.sh livy'] {}
2017-11-26 21:02:31,256 - call returned (0, '1009')
2017-11-26 21:02:31,257 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1009}
2017-11-26 21:02:31,259 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,260 - call['/var/lib/ambari-agent/tmp/changeUid.sh elasticsearch'] {}
2017-11-26 21:02:31,271 - call returned (0, '1010')
2017-11-26 21:02:31,272 - User['elasticsearch'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1010}
2017-11-26 21:02:31,274 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,275 - call['/var/lib/ambari-agent/tmp/changeUid.sh spark'] {}
2017-11-26 21:02:31,286 - call returned (0, '1019')
2017-11-26 21:02:31,287 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1019}
2017-11-26 21:02:31,288 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2017-11-26 21:02:31,290 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,291 - call['/var/lib/ambari-agent/tmp/changeUid.sh flume'] {}
2017-11-26 21:02:31,302 - call returned (0, '1011')
2017-11-26 21:02:31,303 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1011}
2017-11-26 21:02:31,304 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,306 - call['/var/lib/ambari-agent/tmp/changeUid.sh kafka'] {}
2017-11-26 21:02:31,317 - call returned (0, '1012')
2017-11-26 21:02:31,317 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1012}
2017-11-26 21:02:31,319 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,320 - call['/var/lib/ambari-agent/tmp/changeUid.sh hdfs'] {}
2017-11-26 21:02:31,331 - call returned (0, '1013')
2017-11-26 21:02:31,332 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1013}
2017-11-26 21:02:31,334 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,336 - call['/var/lib/ambari-agent/tmp/changeUid.sh yarn'] {}
2017-11-26 21:02:31,347 - call returned (0, '1014')
2017-11-26 21:02:31,347 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1014}
2017-11-26 21:02:31,349 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,350 - call['/var/lib/ambari-agent/tmp/changeUid.sh kibana'] {}
2017-11-26 21:02:31,361 - call returned (0, '1016')
2017-11-26 21:02:31,362 - User['kibana'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1016}
2017-11-26 21:02:31,364 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,366 - call['/var/lib/ambari-agent/tmp/changeUid.sh mapred'] {}
2017-11-26 21:02:31,377 - call returned (0, '1015')
2017-11-26 21:02:31,378 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1015}
2017-11-26 21:02:31,379 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,380 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2017-11-26 21:02:31,391 - call returned (0, '1017')
2017-11-26 21:02:31,392 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1017}
2017-11-26 21:02:31,394 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,395 - call['/var/lib/ambari-agent/tmp/changeUid.sh hcat'] {}
2017-11-26 21:02:31,406 - call returned (0, '1018')
2017-11-26 21:02:31,407 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1018}
2017-11-26 21:02:31,408 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,410 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-11-26 21:02:31,416 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2017-11-26 21:02:31,417 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-11-26 21:02:31,418 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,420 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,421 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2017-11-26 21:02:31,432 - call returned (0, '1017')
2017-11-26 21:02:31,433 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1017'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-11-26 21:02:31,439 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1017'] due to not_if
2017-11-26 21:02:31,440 - Group['hdfs'] {}
2017-11-26 21:02:31,440 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-11-26 21:02:31,441 - FS Type:
2017-11-26 21:02:31,441 - Directory['/etc/hadoop'] {'mode': 0755}
2017-11-26 21:02:31,463 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-11-26 21:02:31,464 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-11-26 21:02:31,485 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2017-11-26 21:02:31,500 - Skipping Execute[('setenforce', '0')] due to only_if
2017-11-26 21:02:31,501 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2017-11-26 21:02:31,505 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2017-11-26 21:02:31,505 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2017-11-26 21:02:31,512 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2017-11-26 21:02:31,515 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2017-11-26 21:02:31,525 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2017-11-26 21:02:31,540 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2017-11-26 21:02:31,541 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2017-11-26 21:02:31,543 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2017-11-26 21:02:31,549 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2017-11-26 21:02:31,555 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2017-11-26 21:02:31,819 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-26 21:02:31,820 - Stack Feature Version Info: Cluster Stack=2.5, Cluster Current Version=None, Command Stack=None, Command Version=2.5.3.0-37 -> 2.5.3.0-37
2017-11-26 21:02:31,820 - call['ambari-python-wrap /usr/bin/hdp-select status hadoop-yarn-resourcemanager'] {'timeout': 20}
2017-11-26 21:02:31,855 - call returned (0, 'hadoop-yarn-resourcemanager - 2.5.3.0-37')
2017-11-26 21:02:31,887 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-26 21:02:31,904 - Directory['/var/log/hadoop-yarn/nodemanager/recovery-state'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-11-26 21:02:31,905 - Directory['/var/run/hadoop-yarn'] {'owner': 'yarn', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-26 21:02:31,906 - Directory['/var/run/hadoop-yarn/yarn'] {'owner': 'yarn', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-26 21:02:31,906 - Directory['/var/log/hadoop-yarn/yarn'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'}
2017-11-26 21:02:31,907 - Directory['/var/run/hadoop-mapreduce'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-26 21:02:31,907 - Directory['/var/run/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-26 21:02:31,908 - Directory['/var/log/hadoop-mapreduce'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-26 21:02:31,908 - Directory['/var/log/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'}
2017-11-26 21:02:31,909 - Directory['/var/log/hadoop-yarn'] {'owner': 'yarn', 'group': 'hadoop', 'ignore_failures': True, 'create_parents': True, 'cd_access': 'a'}
2017-11-26 21:02:31,909 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2017-11-26 21:02:31,917 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2017-11-26 21:02:31,917 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-26 21:02:31,937 - XmlConfig['hdfs-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2017-11-26 21:02:31,944 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2017-11-26 21:02:31,944 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-26 21:02:31,986 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...}
2017-11-26 21:02:31,993 - Generating config: /usr/hdp/current/hadoop-client/conf/mapred-site.xml
2017-11-26 21:02:31,993 - File['/usr/hdp/current/hadoop-client/conf/mapred-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-26 21:02:32,021 - XmlConfig['yarn-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...}
2017-11-26 21:02:32,027 - Generating config: /usr/hdp/current/hadoop-client/conf/yarn-site.xml
2017-11-26 21:02:32,027 - File['/usr/hdp/current/hadoop-client/conf/yarn-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-26 21:02:32,090 - XmlConfig['capacity-scheduler.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...}
2017-11-26 21:02:32,096 - Generating config: /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml
2017-11-26 21:02:32,096 - File['/usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-26 21:02:32,105 - Directory['/hadoop/yarn/timeline'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'}
2017-11-26 21:02:32,105 - Directory['/hadoop/yarn/timeline'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'}
2017-11-26 21:02:32,106 - HdfsResource['/ats/done'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://slot2:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'change_permissions_for_parents': True, 'owner': 'yarn', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp'], 'mode': 0755}
2017-11-26 21:02:32,108 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://slot2:50070/webhdfs/v1/ats/done?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpt6S1E0 2>/tmp/tmpGLmVBg''] {'logoutput': None, 'quiet': False}
2017-11-26 21:02:32,408 - call returned (0, '')
2017-11-26 21:02:32,411 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://slot2:50070/webhdfs/v1/ats/done?op=SETPERMISSION&user.name=hdfs&permission=755'"'"' 1>/tmp/tmp7jV7NM 2>/tmp/tmp4y9X2E''] {'logoutput': None, 'quiet': False}
2017-11-26 21:02:32,664 - call returned (0, '')
Command failed after 1 tries
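For anyone hitting a similar failure: the HdfsResource steps above issue plain WebHDFS calls, so the request can be rebuilt and tested by hand. This is a minimal sketch; `slot2:50070`, the `/ats/done` path, and the `hdfs` user are taken from the log above, so substitute your own NameNode address when reproducing.

```shell
# Rebuild the WebHDFS URL used by the failing HdfsResource call above.
# NN_HOST, HDFS_USER and the path come from the log; adjust for your cluster.
NN_HOST="slot2:50070"
HDFS_USER="hdfs"
URL="http://${NN_HOST}/webhdfs/v1/ats/done?op=GETFILESTATUS&user.name=${HDFS_USER}"
echo "$URL"
# On a cluster node, probe the endpoint directly (prints JSON, then the HTTP code):
#   curl -sS -w '\n%{http_code}\n' "$URL"
```

If the manual `curl` returns a non-200 code or no response at all, the problem is with the NameNode or the network path to it, not with the Ambari command itself.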
... View more
Labels:
- Apache YARN
11-15-2017
08:03 AM
Hi, I am having problems deploying the Metron UI (Kibana) and with the Elasticsearch indexes (from Quick Links). Details are below. Metron UI error; Elasticsearch indexes and health (port 9200): {"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}],"type":"master_not_discovered_exception","reason":null},"status":503}
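A `master_not_discovered_exception` with status 503 means no master-eligible Elasticsearch node could be reached, which on the ES 2.x builds used by HCP usually traces back to the Zen discovery settings in `/etc/elasticsearch/elasticsearch.yml`. The file below is an illustrative sample, not this cluster's actual config; the host names are assumptions based on the hosts in this thread.

```shell
# Illustrative elasticsearch.yml fragment for an ES 2.x Zen-discovery setup.
# All values here are sample assumptions; compare against your real config.
cat <<'EOF' > /tmp/elasticsearch-sample.yml
cluster.name: metron
network.host: 0.0.0.0
discovery.zen.ping.unicast.hosts: ["slot2", "slot3", "slot4"]
discovery.zen.minimum_master_nodes: 1
EOF
# Sanity check: the unicast host list must be present and name at least
# one master-eligible node, otherwise no master can be discovered.
grep -c 'discovery.zen.ping.unicast.hosts' /tmp/elasticsearch-sample.yml
```

Once the real config lists reachable master-eligible hosts and Elasticsearch is restarted, `curl http://<es-host>:9200/_cluster/health?pretty` should report a cluster status instead of the 503 above.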
... View more
Labels:
- Apache Metron
11-15-2017
07:08 AM
1 Kudo
Hi @asubramanian, it works like a charm! Thank you so much 🙂
... View more
11-15-2017
12:52 AM
Hi, I have a problem. There are no alerts (green tick) in Ambari, but I cannot start the Metron REST service or log into the Metron Management UI, and there is no stderr output. Below are the details of stdout. There is also an error in /var/log/metron/metron-rest.log. Thanks. /var/log/metron/metron-rest.log: Error: Could not find or load main class org.apache.metron.rest.MetronRestApplication
stderr: /var/lib/ambari-agent/data/errors-2944.txt None stdout: /var/lib/ambari-agent/data/output-2944.txt 2017-11-14 19:28:47,209 - Stack Feature Version Info: Cluster Stack=2.5, Cluster Current Version=None, Command Stack=None, Command Version=2.5.3.0-37 -> 2.5.3.0-37
2017-11-14 19:28:47,222 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-14 19:28:47,401 - Stack Feature Version Info: Cluster Stack=2.5, Cluster Current Version=None, Command Stack=None, Command Version=2.5.3.0-37 -> 2.5.3.0-37
2017-11-14 19:28:47,408 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-11-14 19:28:47,409 - Group['metron'] {}
2017-11-14 19:28:47,410 - Group['elasticsearch'] {}
2017-11-14 19:28:47,410 - Group['zeppelin'] {}
2017-11-14 19:28:47,410 - Group['hadoop'] {}
2017-11-14 19:28:47,410 - Group['kibana'] {}
2017-11-14 19:28:47,410 - Group['users'] {}
2017-11-14 19:28:47,411 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-14 19:28:47,412 - call['/var/lib/ambari-agent/tmp/changeUid.sh hive'] {}
2017-11-14 19:28:47,423 - call returned (0, '1001')
2017-11-14 19:28:47,423 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1001}
2017-11-14 19:28:47,424 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-14 19:28:47,425 - call['/var/lib/ambari-agent/tmp/changeUid.sh storm'] {}
2017-11-14 19:28:47,435 - call returned (0, '1002')
2017-11-14 19:28:47,436 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1002}
2017-11-14 19:28:47,437 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-14 19:28:47,437 - call['/var/lib/ambari-agent/tmp/changeUid.sh zookeeper'] {}
2017-11-14 19:28:47,448 - call returned (0, '1003')
2017-11-14 19:28:47,449 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1003}
2017-11-14 19:28:47,451 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-14 19:28:47,452 - call['/var/lib/ambari-agent/tmp/changeUid.sh ams'] {}
2017-11-14 19:28:47,462 - call returned (0, '1004')
2017-11-14 19:28:47,462 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1004}
2017-11-14 19:28:47,464 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-14 19:28:47,464 - call['/var/lib/ambari-agent/tmp/changeUid.sh tez'] {}
2017-11-14 19:28:47,476 - call returned (0, '1005')
2017-11-14 19:28:47,477 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': 1005}
2017-11-14 19:28:47,478 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-14 19:28:47,478 - call['/var/lib/ambari-agent/tmp/changeUid.sh zeppelin'] {}
2017-11-14 19:28:47,487 - call returned (0, '1007')
2017-11-14 19:28:47,488 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'zeppelin', u'hadoop'], 'uid': 1007}
2017-11-14 19:28:47,490 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-14 19:28:47,491 - call['/var/lib/ambari-agent/tmp/changeUid.sh metron'] {}
2017-11-14 19:28:47,500 - call returned (0, '1008')
2017-11-14 19:28:47,501 - User['metron'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1008}
2017-11-14 19:28:47,502 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-14 19:28:47,504 - call['/var/lib/ambari-agent/tmp/changeUid.sh elasticsearch'] {}
2017-11-14 19:28:47,512 - call returned (0, '1010')
2017-11-14 19:28:47,513 - User['elasticsearch'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1010}
2017-11-14 19:28:47,514 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2017-11-14 19:28:47,516 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-14 19:28:47,517 - call['/var/lib/ambari-agent/tmp/changeUid.sh flume'] {}
2017-11-14 19:28:47,526 - call returned (0, '1012')
2017-11-14 19:28:47,527 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1012}
2017-11-14 19:28:47,528 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-14 19:28:47,529 - call['/var/lib/ambari-agent/tmp/changeUid.sh kafka'] {}
2017-11-14 19:28:47,539 - call returned (0, '1013')
2017-11-14 19:28:47,539 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1013}
2017-11-14 19:28:47,541 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-14 19:28:47,542 - call['/var/lib/ambari-agent/tmp/changeUid.sh hdfs'] {}
2017-11-14 19:28:47,551 - call returned (0, '1014')
2017-11-14 19:28:47,551 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1014}
2017-11-14 19:28:47,553 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-14 19:28:47,555 - call['/var/lib/ambari-agent/tmp/changeUid.sh yarn'] {}
2017-11-14 19:28:47,564 - call returned (0, '1015')
2017-11-14 19:28:47,564 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1015}
2017-11-14 19:28:47,566 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-14 19:28:47,567 - call['/var/lib/ambari-agent/tmp/changeUid.sh kibana'] {}
2017-11-14 19:28:47,576 - call returned (0, '1016')
2017-11-14 19:28:47,577 - User['kibana'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1016}
2017-11-14 19:28:47,578 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-14 19:28:47,579 - call['/var/lib/ambari-agent/tmp/changeUid.sh mapred'] {}
2017-11-14 19:28:47,588 - call returned (0, '1017')
2017-11-14 19:28:47,588 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1017}
2017-11-14 19:28:47,590 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-14 19:28:47,591 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2017-11-14 19:28:47,599 - call returned (0, '1018')
2017-11-14 19:28:47,600 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1018}
2017-11-14 19:28:47,601 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-14 19:28:47,602 - call['/var/lib/ambari-agent/tmp/changeUid.sh hcat'] {}
2017-11-14 19:28:47,613 - call returned (0, '1019')
2017-11-14 19:28:47,614 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1019}
2017-11-14 19:28:47,615 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-14 19:28:47,617 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-11-14 19:28:47,623 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2017-11-14 19:28:47,623 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-11-14 19:28:47,624 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-14 19:28:47,626 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-14 19:28:47,627 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2017-11-14 19:28:47,637 - call returned (0, '1018')
2017-11-14 19:28:47,637 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1018'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-11-14 19:28:47,643 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1018'] due to not_if
2017-11-14 19:28:47,644 - Group['hdfs'] {}
2017-11-14 19:28:47,644 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-11-14 19:28:47,645 - FS Type:
2017-11-14 19:28:47,645 - Directory['/etc/hadoop'] {'mode': 0755}
2017-11-14 19:28:47,663 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-11-14 19:28:47,664 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-11-14 19:28:47,683 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2017-11-14 19:28:47,696 - Skipping Execute[('setenforce', '0')] due to only_if
2017-11-14 19:28:47,697 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2017-11-14 19:28:47,700 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2017-11-14 19:28:47,701 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2017-11-14 19:28:47,708 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2017-11-14 19:28:47,711 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2017-11-14 19:28:47,721 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2017-11-14 19:28:47,735 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2017-11-14 19:28:47,736 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2017-11-14 19:28:47,737 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2017-11-14 19:28:47,743 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2017-11-14 19:28:47,749 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2017-11-14 19:28:47,990 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-14 19:28:48,001 - File['/etc/default/metron'] {'content': Template('metron.j2')}
2017-11-14 19:28:48,002 - Directory['/var/run/metron'] {'owner': 'metron', 'group': 'metron', 'create_parents': True, 'mode': 0755}
2017-11-14 19:28:48,003 - Directory['/var/log/metron'] {'owner': 'metron', 'group': 'metron', 'create_parents': True, 'mode': 0755}
2017-11-14 19:28:48,003 - Creating Kafka topics for rest
2017-11-14 19:28:48,003 - Creating Kafka topics
2017-11-14 19:28:48,003 - Creating topic'escalation'
2017-11-14 19:28:48,003 - Execute['/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper slot4:2181,slot3:2181,slot2:2181 --create --if-not-exists --topic escalation --partitions 1 --replication-factor 1 --config retention.bytes=10737418240'] {'logoutput': True, 'tries': 3, 'user': 'kafka', 'try_sleep': 5}
2017-11-14 19:28:49,680 - Done creating Kafka topics
2017-11-14 19:28:49,681 - Directory['/var/run/metron'] {'owner': 'metron', 'group': 'metron', 'create_parents': True, 'mode': 0755}
2017-11-14 19:28:49,682 - Directory['/var/log/metron'] {'owner': 'metron', 'group': 'metron', 'create_parents': True, 'mode': 0755}
2017-11-14 19:28:49,682 - Starting REST application
2017-11-14 19:28:49,684 - call['ambari-sudo.sh su metron -l -s /bin/bash -c 'cat /var/run/metron/metron-rest.pid 1>/tmp/tmpFx5rE5 2>/tmp/tmpp9HGqu''] {'quiet': False}
2017-11-14 19:28:49,893 - call returned (0, '')
2017-11-14 19:28:49,895 - Execute['set -o allexport; source /etc/default/metron; set +o allexport;export METRON_JDBC_PASSWORD=[PROTECTED];/usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/hdp/current/hadoop-client/conf:/usr/hdp/current/hbase-client/conf:/usr/hcp/1.3.0.0-51/metron/lib/metron-rest-0.4.1.1.3.0.0.jar:/usr/share/java/mysql-connector-java.jar:/usr/hcp/1.3.0.0-51/metron/lib/metron-elasticsearch-0.4.1.1.3.0.0-uber.jar org.apache.metron.rest.MetronRestApplication --server.port=8082 --spring.config.location=/usr/hcp/1.3.0.0-51/metron/config/rest_application.yml >> /var/log/metron/metron-rest.log 2>&1 & echo $! > /var/run/metron/metron-rest.pid;unset METRON_JDBC_PASSWORD;'] {'logoutput': True, 'not_if': 'ls /var/run/metron/metron-rest.pid >/dev/null 2>&1 && ps -p 33489 >/dev/null 2>&1', 'user': 'metron'}
2017-11-14 19:28:50,115 - Done starting REST application
Command completed successfully! Here is the Metron REST config tab.
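The "Could not find or load main class" error in metron-rest.log usually means one of the classpath entries in the `java -cp ...` startup line above does not exist on disk. A hedged sketch, using a trimmed version of the classpath from the log (paths are taken from the log and may differ on your host), walks each entry and reports anything missing:

```shell
# Walk a colon-separated classpath and report entries that do not exist.
# CP below is a trimmed copy of the classpath from the Execute line in the
# log; run this on the Metron REST host with the full classpath.
CP="/usr/hdp/current/hadoop-client/conf:/usr/hcp/1.3.0.0-51/metron/lib/metron-rest-0.4.1.1.3.0.0.jar"
old_ifs=$IFS; IFS=':'
for entry in $CP; do
  # -e matches files and directories alike
  [ -e "$entry" ] || echo "missing: $entry"
done
IFS=$old_ifs
```

If the metron-rest jar is reported missing, check the actual file name under `/usr/hcp/<version>/metron/lib/` against the one in the startup command, since a version mismatch between the two produces exactly this error.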
... View more
Tags:
- CyberSecurity
- Metron
Labels:
- Apache Metron
11-15-2017
12:35 AM
Hi Jay, I updated the nodejs version and it works. Thanks a lot. 🙂
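For readers with the same `yum install nodejs` failure below: "updating the nodejs version" means the Node.js shipped by the stock repo is too old for the Metron Management UI. A quick way to see whether the installed Node is the problem is to compare its major version against a minimum; the version strings here are illustrative assumptions, and on a real host you would use `installed=$(node --version)`.

```shell
# Compare an installed Node.js version string against a required minimum.
# Both values below are illustrative; substitute $(node --version) and the
# minimum your Metron release documents.
installed="v0.10.48"
required_major=4
major=${installed#v}     # strip the leading "v"
major=${major%%.*}       # keep only the major component
if [ "$major" -lt "$required_major" ]; then
  echo "node too old: $installed (need >= v${required_major}.x)"
fi
```

The usual remedy is to enable a repository that carries a newer Node.js (for example EPEL or NodeSource), reinstall nodejs, and then retry the Metron service install.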
... View more
11-13-2017
05:01 AM
Hi, I am trying to install the Metron service in Ambari but it is giving an error. Please find the details of stderr and stdout below. Thanks. stderr: /var/lib/ambari-agent/data/errors-2381.txt Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/METRON/0.4.1.1.3.0.0/package/scripts/enrichment_master.py", line 117, in <module>
Enrichment().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/METRON/0.4.1.1.3.0.0/package/scripts/enrichment_master.py", line 34, in install
self.install_packages(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 708, in install_packages
retry_count=agent_stack_retry_count)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 53, in install_package
self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 86, in checked_call_with_retries
return self._call_with_retries(cmd, is_checked=True, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 98, in _call_with_retries
code, out = func(cmd, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install nodejs' returned 1. Error: Nothing to do stdout: /var/lib/ambari-agent/data/output-2381.txt 2017-11-12 23:36:51,149 - Stack Feature Version Info: Cluster Stack=2.5, Cluster Current Version=None, Command Stack=None, Command Version=None -> 2.5
2017-11-12 23:36:51,158 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-11-12 23:36:51,160 - Group['metron'] {}
2017-11-12 23:36:51,161 - Group['elasticsearch'] {}
2017-11-12 23:36:51,161 - Group['zeppelin'] {}
2017-11-12 23:36:51,161 - Group['hadoop'] {}
2017-11-12 23:36:51,161 - Group['kibana'] {}
2017-11-12 23:36:51,162 - Group['users'] {}
2017-11-12 23:36:51,162 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-12 23:36:51,163 - call['/var/lib/ambari-agent/tmp/changeUid.sh hive'] {}
2017-11-12 23:36:51,175 - call returned (0, '1001')
2017-11-12 23:36:51,175 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1001}
2017-11-12 23:36:51,178 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-12 23:36:51,180 - call['/var/lib/ambari-agent/tmp/changeUid.sh storm'] {}
2017-11-12 23:36:51,191 - call returned (0, '1002')
2017-11-12 23:36:51,191 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1002}
2017-11-12 23:36:51,193 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-12 23:36:51,194 - call['/var/lib/ambari-agent/tmp/changeUid.sh zookeeper'] {}
2017-11-12 23:36:51,205 - call returned (0, '1003')
2017-11-12 23:36:51,206 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1003}
2017-11-12 23:36:51,208 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-12 23:36:51,209 - call['/var/lib/ambari-agent/tmp/changeUid.sh ams'] {}
2017-11-12 23:36:51,220 - call returned (0, '1004')
2017-11-12 23:36:51,220 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1004}
2017-11-12 23:36:51,222 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-12 23:36:51,223 - call['/var/lib/ambari-agent/tmp/changeUid.sh tez'] {}
2017-11-12 23:36:51,234 - call returned (0, '1005')
2017-11-12 23:36:51,234 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': 1005}
2017-11-12 23:36:51,236 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-12 23:36:51,237 - call['/var/lib/ambari-agent/tmp/changeUid.sh zeppelin'] {}
2017-11-12 23:36:51,248 - call returned (0, '1007')
2017-11-12 23:36:51,249 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'zeppelin', u'hadoop'], 'uid': 1007}
2017-11-12 23:36:51,250 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-12 23:36:51,251 - call['/var/lib/ambari-agent/tmp/changeUid.sh metron'] {}
2017-11-12 23:36:51,263 - call returned (0, '1008')
2017-11-12 23:36:51,263 - User['metron'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1008}
2017-11-12 23:36:51,264 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-12 23:36:51,266 - call['/var/lib/ambari-agent/tmp/changeUid.sh elasticsearch'] {}
2017-11-12 23:36:51,276 - call returned (0, '1010')
2017-11-12 23:36:51,277 - User['elasticsearch'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1010}
2017-11-12 23:36:51,279 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2017-11-12 23:36:51,280 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-12 23:36:51,282 - call['/var/lib/ambari-agent/tmp/changeUid.sh flume'] {}
2017-11-12 23:36:51,293 - call returned (0, '1012')
2017-11-12 23:36:51,293 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1012}
2017-11-12 23:36:51,294 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-12 23:36:51,295 - call['/var/lib/ambari-agent/tmp/changeUid.sh kafka'] {}
2017-11-12 23:36:51,306 - call returned (0, '1013')
2017-11-12 23:36:51,307 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1013}
2017-11-12 23:36:51,309 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-12 23:36:51,310 - call['/var/lib/ambari-agent/tmp/changeUid.sh hdfs'] {}
2017-11-12 23:36:51,321 - call returned (0, '1014')
2017-11-12 23:36:51,321 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1014}
2017-11-12 23:36:51,322 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-12 23:36:51,323 - call['/var/lib/ambari-agent/tmp/changeUid.sh yarn'] {}
2017-11-12 23:36:51,334 - call returned (0, '1015')
2017-11-12 23:36:51,334 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1015}
2017-11-12 23:36:51,336 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-12 23:36:51,337 - call['/var/lib/ambari-agent/tmp/changeUid.sh kibana'] {}
2017-11-12 23:36:51,348 - call returned (0, '1016')
2017-11-12 23:36:51,348 - User['kibana'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1016}
2017-11-12 23:36:51,350 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-12 23:36:51,351 - call['/var/lib/ambari-agent/tmp/changeUid.sh mapred'] {}
2017-11-12 23:36:51,361 - call returned (0, '1017')
2017-11-12 23:36:51,361 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1017}
2017-11-12 23:36:51,363 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-12 23:36:51,364 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2017-11-12 23:36:51,375 - call returned (0, '1018')
2017-11-12 23:36:51,375 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1018}
2017-11-12 23:36:51,377 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-12 23:36:51,378 - call['/var/lib/ambari-agent/tmp/changeUid.sh hcat'] {}
2017-11-12 23:36:51,388 - call returned (0, '1019')
2017-11-12 23:36:51,388 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1019}
2017-11-12 23:36:51,389 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-12 23:36:51,391 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-11-12 23:36:51,398 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2017-11-12 23:36:51,399 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-11-12 23:36:51,400 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-12 23:36:51,401 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-12 23:36:51,402 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2017-11-12 23:36:51,413 - call returned (0, '1018')
2017-11-12 23:36:51,413 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1018'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-11-12 23:36:51,420 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1018'] due to not_if
2017-11-12 23:36:51,420 - Group['hdfs'] {}
2017-11-12 23:36:51,421 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-11-12 23:36:51,421 - FS Type:
2017-11-12 23:36:51,421 - Directory['/etc/hadoop'] {'mode': 0755}
2017-11-12 23:36:51,438 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-11-12 23:36:51,439 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-11-12 23:36:51,455 - Initializing 6 repositories
2017-11-12 23:36:51,456 - Repository['HDP-2.5'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-11-12 23:36:51,467 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.5]\nname=HDP-2.5\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-11-12 23:36:51,468 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-11-12 23:36:51,473 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-11-12 23:36:51,474 - Repository['HCP-1.3.0.0-51'] {'base_url': 'http://s3.amazonaws.com/dev.hortonworks.com/HCP/centos6/1.x/BUILDS/1.3.0.0-51', 'action': ['create'], 'components': [u'METRON', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'METRON', 'mirror_list': None}
2017-11-12 23:36:51,478 - File['/etc/yum.repos.d/METRON.repo'] {'content': '[HCP-1.3.0.0-51]\nname=HCP-1.3.0.0-51\nbaseurl=http://s3.amazonaws.com/dev.hortonworks.com/HCP/centos6/1.x/BUILDS/1.3.0.0-51\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-11-12 23:36:51,479 - Repository['elasticsearch-2.x'] {'base_url': 'https://packages.elastic.co/elasticsearch/2.x/centos', 'action': ['create'], 'components': [u'ELASTICSEARCH', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ELASTICSEARCH', 'mirror_list': None}
2017-11-12 23:36:51,483 - File['/etc/yum.repos.d/ELASTICSEARCH.repo'] {'content': '[elasticsearch-2.x]\nname=elasticsearch-2.x\nbaseurl=https://packages.elastic.co/elasticsearch/2.x/centos\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-11-12 23:36:51,484 - Repository['kibana-4.x'] {'base_url': 'http://packages.elastic.co/kibana/4.5/centos', 'action': ['create'], 'components': [u'KIBANA', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'KIBANA', 'mirror_list': None}
2017-11-12 23:36:51,488 - File['/etc/yum.repos.d/KIBANA.repo'] {'content': '[kibana-4.x]\nname=kibana-4.x\nbaseurl=http://packages.elastic.co/kibana/4.5/centos\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-11-12 23:36:51,489 - Repository['ES-Curator-4.x'] {'base_url': 'http://packages.elastic.co/curator/4/centos/7', 'action': ['create'], 'components': [u'CURATOR', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'CURATOR', 'mirror_list': None}
2017-11-12 23:36:51,493 - File['/etc/yum.repos.d/CURATOR.repo'] {'content': '[ES-Curator-4.x]\nname=ES-Curator-4.x\nbaseurl=http://packages.elastic.co/curator/4/centos/7\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-11-12 23:36:51,493 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-12 23:36:51,583 - Skipping installation of existing package unzip
2017-11-12 23:36:51,583 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-12 23:36:51,589 - Skipping installation of existing package curl
2017-11-12 23:36:51,589 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-12 23:36:51,595 - Skipping installation of existing package hdp-select
2017-11-12 23:36:51,838 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-12 23:36:51,844 - Package['metron-common'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-12 23:36:51,910 - Skipping installation of existing package metron-common
2017-11-12 23:36:51,911 - Package['metron-data-management'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-12 23:36:51,917 - Skipping installation of existing package metron-data-management
2017-11-12 23:36:51,918 - Package['metron-management'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-12 23:36:51,924 - Installing package metron-management ('/usr/bin/yum -d 0 -e 0 -y install metron-management')
2017-11-12 23:36:59,282 - Package['metron-parsers'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-12 23:36:59,300 - Skipping installation of existing package metron-parsers
2017-11-12 23:36:59,302 - Package['metron-enrichment'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-12 23:36:59,316 - Skipping installation of existing package metron-enrichment
2017-11-12 23:36:59,318 - Package['metron-profiler'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-12 23:36:59,330 - Skipping installation of existing package metron-profiler
2017-11-12 23:36:59,332 - Package['metron-indexing'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-12 23:36:59,344 - Skipping installation of existing package metron-indexing
2017-11-12 23:36:59,346 - Package['metron-elasticsearch'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-12 23:36:59,357 - Skipping installation of existing package metron-elasticsearch
2017-11-12 23:36:59,358 - Package['metron-pcap'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-12 23:36:59,368 - Skipping installation of existing package metron-pcap
2017-11-12 23:36:59,370 - Package['metron-rest'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-12 23:36:59,380 - Skipping installation of existing package metron-rest
2017-11-12 23:36:59,381 - Package['nodejs'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-12 23:36:59,390 - Installing package nodejs ('/usr/bin/yum -d 0 -e 0 -y install nodejs')
2017-11-12 23:36:59,785 - Execution of '/usr/bin/yum -d 0 -e 0 -y install nodejs' returned 1. Error: Nothing to do
2017-11-12 23:36:59,785 - Failed to install package nodejs. Executing '/usr/bin/yum clean metadata'
2017-11-12 23:36:59,977 - Retrying to install package nodejs after 30 seconds
Command failed after 1 tries
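The tail of the log shows yum exiting with "Error: Nothing to do" for nodejs, which usually means no enabled repository provides that package (nodejs is not in the base HDP repos; EPEL is one common source, but that is an assumption to verify for your OS). A minimal sketch that pulls the failing package name out of an Ambari agent log line like the one above, with the usual follow-up commands as comments:

```shell
# Sketch, assuming the Ambari agent log format shown above (quotes around
# the yum command omitted for brevity).
line='2017-11-12 23:36:59,785 - Failed to install package nodejs. Executing /usr/bin/yum clean metadata'

# Extract the package name that failed to install.
pkg=$(printf '%s\n' "$line" | sed -n 's/.*Failed to install package \([^.]*\)\..*/\1/p')
printf '%s\n' "$pkg"    # prints: nodejs

# Typical next steps on the affected host (assumptions -- adjust repos
# to your environment; requires root):
#   yum clean metadata
#   yum list available "$pkg"     # is any enabled repo offering it?
#   yum install -y epel-release && yum install -y "$pkg"
```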
... View more
Tags:
- CyberSecurity
- Metron
Labels:
- Apache Metron
10-13-2017
06:59 AM
Aditya, I got this error in the Zeppelin UI.
... View more
10-13-2017
05:39 AM
Should I declare the variable like this? export PYSPARK_DRIVER_PYTHON=miniconda2
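(For reference, PYSPARK_DRIVER_PYTHON normally names a Python executable — a path or a command on PATH — rather than the distribution name "miniconda2". A minimal sketch; the /opt/miniconda2 install path is an assumption, adjust to your install:)

```shell
# Assumption: miniconda2 is installed under /opt/miniconda2.
# Point the Spark driver (and, consistently, the executors) at its python.
export PYSPARK_DRIVER_PYTHON=/opt/miniconda2/bin/python
export PYSPARK_PYTHON=/opt/miniconda2/bin/python   # used by executors

printf '%s\n' "$PYSPARK_DRIVER_PYTHON"   # prints: /opt/miniconda2/bin/python
```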
... View more