Member since 08-27-2017 · 43 Posts · 1 Kudos Received · 0 Solutions
12-01-2017
07:15 AM
Thank you Jay. It works for me. 🙂
12-01-2017
07:12 AM
Thank you Aditya, it worked. I made a backup file. 🙂
12-01-2017
06:18 AM
Hi Aditya, here is my /etc/zeppelin/conf/interpreter.json:
{
"interpreterSettings": {
"2CZ3ASUMC": {
"id": "2CZ3ASUMC",
"name": "python",
"group": "python",
"properties": {
"zeppelin.python": "/usr/lib/miniconda2/bin/python",
"zeppelin.python.maxResult": "1000000000",
"zeppelin.interpreter.localRepo": "/usr/hdp/current/zeppelin-server/local-repo/2CZ3ASUMC",
"zeppelin.python.useIPython": "true",
"zeppelin.ipython.launch.timeout": "30000"
},
"interpreterGroup": [
{
"class": "org.apache.zeppelin.python.PythonInterpreter",
"name": "python"
}
],
"dependencies": [],
"option": {
"remote": true,
"perNoteSession": false,
"perNoteProcess": false,
"isExistingProcess": false,
"isUserImpersonate": false
}
},
"2CKEKWY8Z": {
"id": "2CKEKWY8Z",
"name": "angular",
"group": "angular",
"properties": {},
"interpreterGroup": [
{
"class": "org.apache.zeppelin.angular.AngularInterpreter",
"name": "angular"
}
],
"dependencies": [],
"option": {
"remote": true,
"perNoteSession": false,
"perNoteProcess": false,
"isExistingProcess": false,
"port": "-1",
"isUserImpersonate": false
}
},
"2CK8A9MEG": {
"id": "2CK8A9MEG",
"name": "jdbc",
"group": "jdbc",
"properties": {
"phoenix.user": "phoenixuser",
"hive.url": "jdbc:hive2://slot4:2181,slot2:2181,slot3:2181/;serviceDiscoveryMode\u003dzooKeeper;zooKeeperNamespace\u003dhiveserver2",
"default.driver": "org.postgresql.Driver",
"phoenix.driver": "org.apache.phoenix.jdbc.PhoenixDriver",
"hive.user": "hive",
"psql.password": "",
"psql.user": "phoenixuser",
"psql.url": "jdbc:postgresql://localhost:5432/",
"default.user": "gpadmin",
"phoenix.hbase.client.retries.number": "1",
"phoenix.url": "jdbc:phoenix:slot4,slot2,slot3:/hbase-unsecure",
"tajo.url": "jdbc:tajo://localhost:26002/default",
"tajo.driver": "org.apache.tajo.jdbc.TajoDriver",
"psql.driver": "org.postgresql.Driver",
"default.password": "",
"zeppelin.interpreter.localRepo": "/usr/hdp/current/zeppelin-server/local-repo/2CK8A9MEG",
"zeppelin.jdbc.auth.type": "SIMPLE",
"hive.proxy.user.property": "hive.server2.proxy.user",
"hive.password": "",
"zeppelin.jdbc.concurrent.use": "true",
"hive.driver": "org.apache.hive.jdbc.HiveDriver",
"zeppelin.jdbc.keytab.location": "",
"common.max_count": "1000",
"phoenix.password": "",
"zeppelin.jdbc.principal": "",
"zeppelin.jdbc.concurrent.max_connection": "10",
"default.url": "jdbc:postgresql://localhost:5432/"
},
"interpreterGroup": [
{
"class": "org.apache.zeppelin.jdbc.JDBCInterpreter",
"name": "sql"
}
],
"dependencies": [],
"option": {
"remote": true,
"perNoteSession": false,
"perNoteProcess": false,
"isExistingProcess": false,
"port": "-1",
"isUserImpersonate": false
}
},
"2CYSZ9Q7Q": {
"id": "2CYSZ9Q7Q",
"name": "spark",
"group": "spark",
"properties": {
"spark.cores.max": "",
"zeppelin.spark.printREPLOutput": "true",
"master": "local[*]",
"zeppelin.spark.maxResult": "1000",
"zeppelin.dep.localrepo": "local-repo",
"spark.app.name": "Zeppelin",
"spark.executor.memory": "",
"zeppelin.spark.sql.stacktrace": "false",
"zeppelin.spark.importImplicit": "true",
"zeppelin.spark.useHiveContext": "true",
"zeppelin.interpreter.localRepo": "/usr/hdp/current/zeppelin-server/local-repo/2CYSZ9Q7Q",
"zeppelin.spark.concurrentSQL": "false",
"args": "",
"zeppelin.pyspark.python": "/usr/lib/miniconda2/bin/python",
"spark.yarn.keytab": "",
... (file content truncated)
12-01-2017
05:26 AM
Hi Jay, I ran the command but the Zeppelin UI is still giving a 503 error.
12-01-2017
04:47 AM
Hi Jay, 1. and 2. Yes, it's not corrupted and the ownership is zeppelin:zeppelin. -rw-r--r--. 1 zeppelin zeppelin 4096 Nov 26 23:30 /etc/zeppelin/conf/interpreter.json 3. Below is the file content from before I made the backup.
12-01-2017
01:12 AM
Hi, I am having trouble starting the Zeppelin Notebook in Ambari. Below is the standard error output. stderr: /var/lib/ambari-agent/data/errors-3626.txt Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/master.py", line 467, in <module>
Master().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
method(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 865, in restart
self.start(env, upgrade_type=upgrade_type)
File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/master.py", line 223, in start
self.update_kerberos_properties()
File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/master.py", line 273, in update_kerberos_properties
config_data = self.get_interpreter_settings()
File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/master.py", line 248, in get_interpreter_settings
config_data = json.loads(config_content)
File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib64/python2.7/json/decoder.py", line 382, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Unterminated string starting at: line 119 column 9 (char 4087)
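A quick way to pinpoint this kind of failure before restarting Zeppelin is to run the file through Python's json module, which reports the same line/column as the traceback above. A minimal sketch (the truncated sample string here is only an illustration, not the actual file content):

```python
import json

def locate_json_error(text):
    """Return None if text parses as JSON, else the decoder's message."""
    try:
        json.loads(text)
        return None
    except ValueError as exc:  # json.JSONDecodeError subclasses ValueError
        return str(exc)

# A file cut off in the middle of a string value fails exactly like the
# traceback, reporting the line/column of the unterminated string:
truncated = '{"name": "spark", "properties": {"spark.yarn.keytab": "'
print(locate_json_error(truncated))
```

Running the real file through the same check (`python -m json.tool /etc/zeppelin/conf/interpreter.json`) shows whether the fix took effect.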
Labels: Apache Zeppelin
11-30-2017
10:13 AM
Hi Jay, thanks for replying. myhostname is not my real hostname; I kept it confidential. 1) I tested 'myhostname 50070' (using nc in place of telnet). This is the result: [root@myhost ~]# nc -v myhostname 50070
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Connected to myhostip:50070.
.
HTTP/1.1 400 Bad Request
Connection: close
Server: Jetty(6.1.26.hwx)
2) When I grep for 50070: [root@myhost hdfs]# netstat -tnlpa | grep 50070
tcp 0 0 myhostip:50070 0.0.0.0:* LISTEN 17042/java
tcp 0 0 myhostip:50070 myhostip:53422 TIME_WAIT -
tcp 0 0 myhostip:53080 myhostip:50070 CLOSE_WAIT 194862/nc
tcp 0 0 myhostip:50070 myhostip:53420 TIME_WAIT -
tcp 0 0 myhostip:50070 myhostip:53424 TIME_WAIT -
tcp 0 0 myhostip:50070 myhostip:53426 TIME_WAIT -
tcp 0 0 myhostip:50070 myhostip:53440 TIME_WAIT -
tcp 0 0 myhostip:50070 myhostip:53418 TIME_WAIT -
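For what it's worth, the reachability check that nc performs can also be scripted, which makes it easy to retry from several cluster nodes. A sketch (the host below is the same placeholder used in the post; an HTTP 400 from a raw nc session is normal, since no valid request was sent):

```python
import socket

def can_connect(host, port, timeout=3.0):
    """TCP connect probe: True if something accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or name-resolution failure
        return False

# Placeholder host from the post; run from each node to spot
# per-host firewall or routing differences.
print(can_connect("myhostname", 50070))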
... View more
11-30-2017
09:40 AM
Hi, I am trying to start YARN Service in Ambari but it is giving error. I'm using multi-nodes. Please find below details of stderr and stdout. Thanks. stderr: /var/lib/ambari-agent/data/errors-3528.txt Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 94, in <module>
ApplicationTimelineServer().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 44, in start
self.configure(env) # FOR SECURITY
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 119, in locking_configure
original_configure(obj, *args, **kw)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 55, in configure
yarn(name='apptimelineserver')
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/yarn.py", line 356, in yarn
mode=0755
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 604, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 601, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 328, in action_delayed
self._assert_valid()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 287, in _assert_valid
self.target_status = self._get_file_status(target)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 430, in _get_file_status
list_status = self.util.run_command(target, 'GETFILESTATUS', method='GET', ignore_status_codes=['404'], assertable_result=False)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 177, in run_command
return self._run_command(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 235, in _run_command
_, out, err = get_user_call_output(cmd, user=self.run_user, logoutput=self.logoutput, quiet=False)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_user_call_output.py", line 61, in get_user_call_output
raise ExecutionFailed(err_msg, code, files_output[0], files_output[1])
resource_management.core.exceptions.ExecutionFailed: Execution of 'curl -sS -L -w '%{http_code}' -X GET 'http://myhostname:50070/webhdfs/v1/ats/done?op=GETFILESTATUS&user.name=hdfs' 1>/tmp/tmpdOQron 2>/tmp/tmprXPUdn' returned 7. curl: (7) Failed connect to myhostname:50070; No route to host
000 stdout: /var/lib/ambari-agent/data/output-3528.txt 2017-11-30 01:59:09,238 - Stack Feature Version Info: Cluster Stack=2.5, Cluster Current Version=None, Command Stack=None, Command Version=2.5.3.0-37 -> 2.5.3.0-37
2017-11-30 01:59:09,260 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-30 01:59:09,464 - Stack Feature Version Info: Cluster Stack=2.5, Cluster Current Version=None, Command Stack=None, Command Version=2.5.3.0-37 -> 2.5.3.0-37
2017-11-30 01:59:09,473 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-11-30 01:59:09,474 - Group['metron'] {}
2017-11-30 01:59:09,475 - Group['livy'] {}
2017-11-30 01:59:09,475 - Group['elasticsearch'] {}
2017-11-30 01:59:09,475 - Group['spark'] {}
2017-11-30 01:59:09,476 - Group['zeppelin'] {}
2017-11-30 01:59:09,476 - Group['hadoop'] {}
2017-11-30 01:59:09,476 - Group['kibana'] {}
2017-11-30 01:59:09,476 - Group['users'] {}
2017-11-30 01:59:09,477 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,478 - call['/var/lib/ambari-agent/tmp/changeUid.sh hive'] {}
2017-11-30 01:59:09,489 - call returned (0, '1001')
2017-11-30 01:59:09,489 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1001}
2017-11-30 01:59:09,492 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,494 - call['/var/lib/ambari-agent/tmp/changeUid.sh storm'] {}
2017-11-30 01:59:09,505 - call returned (0, '1002')
2017-11-30 01:59:09,506 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1002}
2017-11-30 01:59:09,508 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,509 - call['/var/lib/ambari-agent/tmp/changeUid.sh zookeeper'] {}
2017-11-30 01:59:09,521 - call returned (0, '1003')
2017-11-30 01:59:09,521 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1003}
2017-11-30 01:59:09,523 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,525 - call['/var/lib/ambari-agent/tmp/changeUid.sh ams'] {}
2017-11-30 01:59:09,536 - call returned (0, '1004')
2017-11-30 01:59:09,536 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1004}
2017-11-30 01:59:09,538 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,540 - call['/var/lib/ambari-agent/tmp/changeUid.sh tez'] {}
2017-11-30 01:59:09,551 - call returned (0, '1005')
2017-11-30 01:59:09,551 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': 1005}
2017-11-30 01:59:09,553 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,555 - call['/var/lib/ambari-agent/tmp/changeUid.sh zeppelin'] {}
2017-11-30 01:59:09,565 - call returned (0, '1007')
2017-11-30 01:59:09,566 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'zeppelin', u'hadoop'], 'uid': 1007}
2017-11-30 01:59:09,567 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,568 - call['/var/lib/ambari-agent/tmp/changeUid.sh metron'] {}
2017-11-30 01:59:09,579 - call returned (0, '1008')
2017-11-30 01:59:09,580 - User['metron'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1008}
2017-11-30 01:59:09,582 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,583 - call['/var/lib/ambari-agent/tmp/changeUid.sh livy'] {}
2017-11-30 01:59:09,594 - call returned (0, '1009')
2017-11-30 01:59:09,594 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1009}
2017-11-30 01:59:09,596 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,597 - call['/var/lib/ambari-agent/tmp/changeUid.sh elasticsearch'] {}
2017-11-30 01:59:09,608 - call returned (0, '1010')
2017-11-30 01:59:09,608 - User['elasticsearch'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1010}
2017-11-30 01:59:09,610 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,612 - call['/var/lib/ambari-agent/tmp/changeUid.sh spark'] {}
2017-11-30 01:59:09,624 - call returned (0, '1019')
2017-11-30 01:59:09,624 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1019}
2017-11-30 01:59:09,626 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2017-11-30 01:59:09,628 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,630 - call['/var/lib/ambari-agent/tmp/changeUid.sh flume'] {}
2017-11-30 01:59:09,641 - call returned (0, '1011')
2017-11-30 01:59:09,642 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1011}
2017-11-30 01:59:09,644 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,645 - call['/var/lib/ambari-agent/tmp/changeUid.sh kafka'] {}
2017-11-30 01:59:09,655 - call returned (0, '1012')
2017-11-30 01:59:09,655 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1012}
2017-11-30 01:59:09,657 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,658 - call['/var/lib/ambari-agent/tmp/changeUid.sh hdfs'] {}
2017-11-30 01:59:09,668 - call returned (0, '1013')
2017-11-30 01:59:09,669 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1013}
2017-11-30 01:59:09,671 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,673 - call['/var/lib/ambari-agent/tmp/changeUid.sh yarn'] {}
2017-11-30 01:59:09,683 - call returned (0, '1014')
2017-11-30 01:59:09,683 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1014}
2017-11-30 01:59:09,685 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,687 - call['/var/lib/ambari-agent/tmp/changeUid.sh kibana'] {}
2017-11-30 01:59:09,697 - call returned (0, '1016')
2017-11-30 01:59:09,697 - User['kibana'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1016}
2017-11-30 01:59:09,699 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,701 - call['/var/lib/ambari-agent/tmp/changeUid.sh mapred'] {}
2017-11-30 01:59:09,710 - call returned (0, '1015')
2017-11-30 01:59:09,711 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1015}
2017-11-30 01:59:09,712 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,714 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2017-11-30 01:59:09,723 - call returned (0, '1017')
2017-11-30 01:59:09,724 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1017}
2017-11-30 01:59:09,726 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,727 - call['/var/lib/ambari-agent/tmp/changeUid.sh hcat'] {}
2017-11-30 01:59:09,737 - call returned (0, '1018')
2017-11-30 01:59:09,738 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1018}
2017-11-30 01:59:09,739 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,740 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-11-30 01:59:09,747 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2017-11-30 01:59:09,747 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-11-30 01:59:09,749 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,750 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-30 01:59:09,751 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2017-11-30 01:59:09,761 - call returned (0, '1017')
2017-11-30 01:59:09,762 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1017'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-11-30 01:59:09,769 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1017'] due to not_if
2017-11-30 01:59:09,769 - Group['hdfs'] {}
2017-11-30 01:59:09,770 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-11-30 01:59:09,770 - FS Type:
2017-11-30 01:59:09,771 - Directory['/etc/hadoop'] {'mode': 0755}
2017-11-30 01:59:09,793 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-11-30 01:59:09,795 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-11-30 01:59:09,814 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2017-11-30 01:59:09,828 - Skipping Execute[('setenforce', '0')] due to only_if
2017-11-30 01:59:09,828 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2017-11-30 01:59:09,831 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2017-11-30 01:59:09,832 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2017-11-30 01:59:09,838 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2017-11-30 01:59:09,841 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2017-11-30 01:59:09,850 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2017-11-30 01:59:09,864 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2017-11-30 01:59:09,865 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2017-11-30 01:59:09,866 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2017-11-30 01:59:09,871 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2017-11-30 01:59:09,876 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2017-11-30 01:59:10,143 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-30 01:59:10,144 - Stack Feature Version Info: Cluster Stack=2.5, Cluster Current Version=None, Command Stack=None, Command Version=2.5.3.0-37 -> 2.5.3.0-37
2017-11-30 01:59:10,145 - call['ambari-python-wrap /usr/bin/hdp-select status hadoop-yarn-resourcemanager'] {'timeout': 20}
2017-11-30 01:59:10,183 - call returned (0, 'hadoop-yarn-resourcemanager - 2.5.3.0-37')
2017-11-30 01:59:10,234 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-30 01:59:10,255 - Directory['/var/log/hadoop-yarn/nodemanager/recovery-state'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-11-30 01:59:10,257 - Directory['/var/run/hadoop-yarn'] {'owner': 'yarn', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-30 01:59:10,258 - Directory['/var/run/hadoop-yarn/yarn'] {'owner': 'yarn', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-30 01:59:10,258 - Directory['/var/log/hadoop-yarn/yarn'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'}
2017-11-30 01:59:10,259 - Directory['/var/run/hadoop-mapreduce'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-30 01:59:10,259 - Directory['/var/run/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-30 01:59:10,260 - Directory['/var/log/hadoop-mapreduce'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-30 01:59:10,260 - Directory['/var/log/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'}
2017-11-30 01:59:10,261 - Directory['/var/log/hadoop-yarn'] {'owner': 'yarn', 'group': 'hadoop', 'ignore_failures': True, 'create_parents': True, 'cd_access': 'a'}
2017-11-30 01:59:10,262 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2017-11-30 01:59:10,272 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2017-11-30 01:59:10,272 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-30 01:59:10,293 - XmlConfig['hdfs-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2017-11-30 01:59:10,301 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2017-11-30 01:59:10,301 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-30 01:59:10,338 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...}
2017-11-30 01:59:10,344 - Generating config: /usr/hdp/current/hadoop-client/conf/mapred-site.xml
2017-11-30 01:59:10,345 - File['/usr/hdp/current/hadoop-client/conf/mapred-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-30 01:59:10,371 - Changing owner for /usr/hdp/current/hadoop-client/conf/mapred-site.xml from 1015 to yarn
2017-11-30 01:59:10,371 - XmlConfig['yarn-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...}
2017-11-30 01:59:10,376 - Generating config: /usr/hdp/current/hadoop-client/conf/yarn-site.xml
2017-11-30 01:59:10,377 - File['/usr/hdp/current/hadoop-client/conf/yarn-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-30 01:59:10,439 - XmlConfig['capacity-scheduler.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...}
2017-11-30 01:59:10,444 - Generating config: /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml
2017-11-30 01:59:10,444 - File['/usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-30 01:59:10,453 - Changing owner for /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml from 1013 to yarn
2017-11-30 01:59:10,453 - Directory['/hadoop/yarn/timeline'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'}
2017-11-30 01:59:10,453 - Directory['/hadoop/yarn/timeline'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'}
2017-11-30 01:59:10,454 - HdfsResource['/ats/done'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://slot2:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'change_permissions_for_parents': True, 'owner': 'yarn', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp'], 'mode': 0755}
2017-11-30 01:59:10,456 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://slot2:50070/webhdfs/v1/ats/done?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpdOQron 2>/tmp/tmprXPUdn''] {'logoutput': None, 'quiet': False}
2017-11-30 01:59:10,748 - call returned (7, '')
Command failed after 1 tries
... View more
Labels:
- Labels:
-
Apache Hadoop
-
Apache YARN
11-27-2017
08:08 AM
Hi Jay, this is the log 2017-11-13 03:52:28,337 INFO namenode.NameNode (LogAdapter.java:info(47)) - STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: user = hdfs
STARTUP_MSG: host = hostname
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.7.3.2.5.3.0-37
STARTUP_MSG: classpath = /usr/hdp/current/hadoop-client/conf:/usr/hdp/2.5.3.0-37/hadoop/lib/ojdbc6.jar:/usr/hdp/2.5.3.0-37/hadoop/lib/jackson-annotations-2.2.3.jar:/usr/hdp/2.5.3.0-37/hadoop/lib/ranger-hdfs-plugin-shim-0.6.0.2.5.3.0-3$
STARTUP_MSG: build = git@github.com:hortonworks/hadoop.git -r 9828acfdec41a121f0121f556b09e2d112259e92; compiled by 'jenkins' on 2016-11-29T18:06Z
STARTUP_MSG: java = 1.8.0_112
************************************************************/
2017-11-13 03:52:28,351 INFO namenode.NameNode (LogAdapter.java:info(47)) - registered UNIX signal handlers for [TERM, HUP, INT]
2017-11-13 03:52:28,356 INFO namenode.NameNode (NameNode.java:createNameNode(1600)) - createNameNode []
2017-11-13 03:52:28,567 INFO impl.MetricsConfig (MetricsConfig.java:loadFirst(112)) - loaded properties from hadoop-metrics2.properties
2017-11-13 03:52:28,708 INFO timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(82)) - Initializing Timeline metrics sink.
2017-11-13 03:52:28,709 INFO timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(102)) - Identified hostname = slot2, serviceName = namenode
2017-11-13 03:52:28,813 INFO availability.MetricCollectorHAHelper (MetricCollectorHAHelper.java:findLiveCollectorHostsFromZNode(79)) - /ambari-metrics-cluster znode does not exist. Skipping requesting live instances from zookeeper
2017-11-13 03:52:28,817 INFO timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(128)) - No suitable collector found.
2017-11-13 03:52:28,823 INFO timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(180)) - RPC port properties configured: {8020=client}
2017-11-13 03:52:28,833 INFO impl.MetricsSinkAdapter (MetricsSinkAdapter.java:start(206)) - Sink timeline started
2017-11-13 03:52:28,903 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(376)) - Scheduled snapshot period at 10 second(s).
2017-11-13 03:52:28,903 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:start(192)) - NameNode metrics system started
2017-11-13 03:52:28,908 INFO namenode.NameNode (NameNode.java:setClientNamenodeAddress(450)) - fs.defaultFS is hdfs://slot2:8020
2017-11-13 03:52:28,908 INFO namenode.NameNode (NameNode.java:setClientNamenodeAddress(470)) - Clients are to use slot2:8020 to access this namenode/service.
2017-11-13 03:52:29,025 INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(179)) - Starting JVM pause monitor
2017-11-13 03:52:29,032 INFO hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1780)) - Starting Web-server for hdfs at: http://slot2:50070
2017-11-13 03:52:29,072 INFO mortbay.log (Slf4jLog.java:info(67)) - Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2017-11-13 03:52:29,078 INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(293)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2017-11-13 03:52:29,082 INFO http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.namenode is not defined
2017-11-13 03:52:29,086 INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(754)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-11-13 03:52:29,088 INFO http.HttpServer2 (HttpServer2.java:addFilter(729)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2017-11-13 03:52:29,088 INFO http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2017-11-13 03:52:29,088 INFO http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2017-11-13 03:52:29,089 INFO security.HttpCrossOriginFilterInitializer (HttpCrossOriginFilterInitializer.java:initFilter(49)) - CORS filter not enabled. Please set hadoop.http.cross-origin.enabled to 'true' to enable it
2017-11-13 03:52:29,107 INFO http.HttpServer2 (NameNodeHttpServer.java:initWebHdfs(93)) - Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2017-11-13 03:52:29,108 INFO http.HttpServer2 (HttpServer2.java:addJerseyResourcePackage(653)) - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=$
2017-11-13 03:52:29,117 INFO http.HttpServer2 (HttpServer2.java:openListeners(959)) - Jetty bound to port 50070
2017-11-13 03:52:29,117 INFO mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26.hwx
2017-11-13 03:52:29,224 INFO mortbay.log (Slf4jLog.java:info(67)) - Started HttpServer2$SelectChannelConnectorWithSafeStartup@slot2:50070
2017-11-13 03:52:29,265 WARN common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-11-13 03:52:29,265 WARN common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-11-13 03:52:29,266 WARN namenode.FSNamesystem (FSNamesystem.java:checkConfiguration(656)) - Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2017-11-13 03:52:29,266 WARN namenode.FSNamesystem (FSNamesystem.java:checkConfiguration(661)) - Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2017-11-13 03:52:29,269 WARN common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-11-13 03:52:29,270 WARN common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-11-13 03:52:29,274 WARN common.Storage (NNStorage.java:setRestoreFailedStorage(210)) - set restore failed storage to true
2017-11-13 03:52:29,291 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(725)) - No KeyProvider found.
2017-11-13 03:52:29,291 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(731)) - Enabling async auditlog
2017-11-13 03:52:29,292 INFO namenode.FSNamesystem (FSNamesystem.java:<init>(735)) - fsLock is fair:false
2017-11-13 03:52:29,313 INFO blockmanagement.HeartbeatManager (HeartbeatManager.java:<init>(90)) - Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-interval
2017-11-13 03:52:29,321 INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(242)) - dfs.block.invalidate.limit=1000
2017-11-13 03:52:29,321 INFO blockmanagement.DatanodeManager (DatanodeManager.java:<init>(248)) - dfs.namenode.datanode.registration.ip-hostname-check=true
2017-11-13 03:52:29,323 INFO blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(71)) - dfs.namenode.startup.delay.block.deletion.sec is set to 000:01:00:00.000
2017-11-13 03:52:29,323 INFO blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(76)) - The block deletion will start around 2017 Nov 13 04:52:29
2017-11-13 03:52:29,324 INFO util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map BlocksMap
2017-11-13 03:52:29,324 INFO util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type = 64-bit
2017-11-13 03:52:29,326 INFO util.GSet (LightWeightGSet.java:computeCapacity(356)) - 2.0% max memory 1011.3 MB = 20.2 MB
2017-11-13 03:52:29,326 INFO util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity = 2^21 = 2097152 entries
11-27-2017
04:57 AM
Hi Jay, I ran the leave safemode command, but the NameNode is still in safe mode. Where can I find the NameNode logs? Are they in /var/log/hadoop/hdfs?
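In case it helps others, here is a minimal sketch of the commands I am using to check and leave safe mode, and to look at the NameNode log. The log path below assumes the usual HDP default location, and the log file name pattern is an assumption based on typical Hadoop conventions — adjust for your cluster:

```shell
# Check whether the NameNode is currently in safe mode
sudo -u hdfs hdfs dfsadmin -safemode get

# Ask the NameNode to leave safe mode manually
sudo -u hdfs hdfs dfsadmin -safemode leave

# On HDP, NameNode logs are typically under /var/log/hadoop/hdfs
# (assumed file name pattern: hadoop-hdfs-namenode-<hostname>.log)
tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log
```

If the NameNode immediately re-enters safe mode after `leave`, the log usually shows why (for example, missing or under-replicated blocks reported by `hdfs dfsadmin -report`).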