Member since: 12-11-2015
Posts: 213
Kudos Received: 87
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
| 1202 | 12-20-2016 03:27 PM
| 5860 | 07-26-2016 06:38 PM
03-20-2020
10:34 AM
Yes, I did. Is there a way I can check in MySQL whether the changes are applied? Thanks
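For the record, a quick way to verify from the mysql client whether the change took effect (a sketch; assumes root access and that the time zone was set globally):
# Show the global and session time zone plus the current server time:
mysql -u root -p -e "SELECT @@global.time_zone, @@session.time_zone, now();"
# If it still reports SYSTEM (resolving to EDT), set an explicit value:
mysql -u root -p -e "SET GLOBAL time_zone = '+00:00';"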
03-19-2020
05:21 PM
No help; still not able to start Ranger:
2020-03-19 20:20:10,288 [JISQL] /usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://namenode.asotc.com/mysql -u root -p '********' -noheader -trim -c \; -query "SELECT version();"
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
SQLException : SQL state: 01S00 java.sql.SQLException: The server time zone value 'EDT' is unrecognized or represents more than one time zone. You must configure either the server or JDBC driver (via the 'serverTimezone' configuration property) to use a more specifc time zone value if you want to utilize time zone support. ErrorCode: 0
2020-03-19 20:20:11,050 [E] Can't establish db connection.. Exiting..
03-19-2020
04:23 PM
The select now() command produces the right result. Also, I am appending the following to the connection string, which should address the issue: ?serverTimezone=UTC
MariaDB [(none)]> select now();
+---------------------+
| now() |
+---------------------+
| 2020-03-19 19:21:24 |
+---------------------+
1 row in set (0.00 sec)
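For reference, this is the shape of a Connector/J URL with the time zone hint appended (host and database taken from the log above; note that if Ranger's setup scripts assemble their own connection string, setting the server-side time_zone globally may be the more reliable fix):
# Hypothetical JDBC URL with the serverTimezone hint:
jdbc:mysql://namenode.asotc.com/mysql?serverTimezone=UTC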
03-19-2020
02:07 PM
I have recently installed Ranger in my cluster and am not able to start it. The database connection test is OK. Below is the error; it's something about the time zone. I am using MySQL. I am also attaching a screenshot of the main configuration.
stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER/package/scripts/ranger_admin.py", line 236, in
RangerAdmin().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER/package/scripts/ranger_admin.py", line 97, in start
self.configure(env, upgrade_type=upgrade_type, setup_db=params.stack_supports_ranger_setup_db_on_start)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER/package/scripts/ranger_admin.py", line 132, in configure
setup_ranger_xml.setup_ranger_db()
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER/package/scripts/setup_ranger_xml.py", line 267, in setup_ranger_db
user=params.unix_user,
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
returns=self.resource.returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-python-wrap /usr/hdp/current/ranger-admin/dba_script.py -q' returned 1. 2020-03-19 16:56:38,687 [I] Running DBA setup script. QuiteMode:True
2020-03-19 16:56:38,687 [I] Using Java:/usr/jdk64/jdk1.8.0_112/bin/java
2020-03-19 16:56:38,687 [I] DB FLAVOR:MYSQL
2020-03-19 16:56:38,687 [I] DB Host:namenode.asotc.com
2020-03-19 16:56:38,688 [I] ---------- Verifying DB root password ----------
2020-03-19 16:56:38,688 [I] DBA root user password validated
2020-03-19 16:56:38,688 [I] ---------- Verifying Ranger Admin db user password ----------
2020-03-19 16:56:38,688 [I] admin user password validated
2020-03-19 16:56:38,689 [I] ---------- Creating Ranger Admin db user ----------
2020-03-19 16:56:38,689 [JISQL] /usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/hdp/current/ranger-admin/ews/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://namenode.asotc.com/mysql -u root -p '********' -noheader -trim -c \; -query "SELECT version();"
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
SQLException : SQL state: 01S00 java.sql.SQLException: The server time zone value 'EDT' is unrecognized or represents more than one time zone. You must configure either the server or JDBC driver (via the 'serverTimezone' configuration property) to use a more specifc time zone value if you want to utilize time zone support. ErrorCode: 0
2020-03-19 16:56:39,340 [E] Can't establish db connection.. Exiting..
Labels:
- Apache Ranger
03-13-2020
02:51 PM
I am getting the following error when I access the HDFS file browser using Hue as admin. I changed dfs.permissions.enabled to false, but I am still getting the same error:
Cannot access: //. The HDFS REST service is not available. Note: you are a Hue admin but not a HDFS superuser, "hdfs" or part of HDFS supergroup, "supergroup".
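The note in the message points at HDFS superuser membership rather than dfs.permissions.enabled. A diagnostic sketch (the group name 'supergroup' is the HDFS default and is an assumption here):
# Confirm which group HDFS treats as the superuser group:
hdfs getconf -confKey dfs.permissions.superusergroup
# On the NameNode host, add the hue user to that group:
groupadd supergroup 2>/dev/null || true
usermod -aG supergroup hue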
Labels:
- Cloudera Hue
03-12-2020
07:48 PM
1 Kudo
@stevenmatison So I changed the Hue directory ownership to the hue user, and that fixed the issue.
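For anyone hitting the same thing, a sketch of the kind of ownership change described (install path assumed from the traceback in the original post):
# SQLite needs write access to desktop.db and to its containing directory;
# building as root but running as hue leaves both root-owned:
chown -R hue:hue /home/hue/hue-4.0.0/desktop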
03-12-2020
06:17 PM
@stevenmatison So what permission do I need to change, and on which file? I changed the permission on desktop.db (under the desktop folder) to 777, but no help. Thanks
03-12-2020
07:54 AM
I managed to install Hue (version 4.x) on CentOS, but when I hit it from the browser, here is what I get rather than a login screen. This is what I see when I run the supervisor:
[hue@ds hue-4.0.0]$ sudo build/env/bin/supervisor
[sudo] password for hue:
starting server with options:
{'daemonize': False,
'host': 'ds.asotc.com',
'pidfile': None,
'port': 8888,
'server_group': 'hue',
'server_name': 'localhost',
'server_user': 'hue',
'ssl_certificate': None,
'ssl_certificate_chain': None,
'ssl_cipher_list': 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA',
'ssl_private_key': None,
'threads': 40,
'workdir': None}
Traceback (most recent call last):
File "/home/hue/hue-4.0.0/desktop/core/src/desktop/lib/wsgiserver.py", line 1215, in communicate
req.respond()
File "/home/hue/hue-4.0.0/desktop/core/src/desktop/lib/wsgiserver.py", line 576, in respond
self._respond()
File "/home/hue/hue-4.0.0/desktop/core/src/desktop/lib/wsgiserver.py", line 588, in _respond
response = self.wsgi_app(self.environ, self.start_response)
File "/home/hue/hue-4.0.0/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/handlers/wsgi.py", line 206, in __call__
response = self.get_response(request)
File "/home/hue/hue-4.0.0/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/handlers/base.py", line 153, in get_response
response = self.handle_uncaught_exception(request, resolver, sys.exc_info())
File "/home/hue/hue-4.0.0/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/handlers/base.py", line 236, in handle_uncaught_exception
return callback(request, **param_dict)
File "/home/hue/hue-4.0.0/desktop/core/src/desktop/views.py", line 380, in serve_500_error
return render("500.mako", request, {'traceback': traceback.extract_tb(exc_info[2])})
File "/home/hue/hue-4.0.0/desktop/core/src/desktop/lib/django_util.py", line 230, in render
**kwargs)
File "/home/hue/hue-4.0.0/desktop/core/src/desktop/lib/django_util.py", line 148, in _render_to_response
return django_mako.render_to_response(template, *args, **kwargs)
File "/home/hue/hue-4.0.0/desktop/core/src/desktop/lib/django_mako.py", line 125, in render_to_response
return HttpResponse(render_to_string(template_name, data_dictionary), **kwargs)
File "/home/hue/hue-4.0.0/desktop/core/src/desktop/lib/django_mako.py", line 114, in render_to_string_normal
result = template.render(**data_dict)
File "/home/hue/hue-4.0.0/build/env/lib/python2.7/site-packages/Mako-0.8.1-py2.7.egg/mako/template.py", line 443, in render
return runtime._render(self, self.callable_, args, data)
File "/home/hue/hue-4.0.0/build/env/lib/python2.7/site-packages/Mako-0.8.1-py2.7.egg/mako/runtime.py", line 786, in _render
**_kwargs_for_callable(callable_, data))
File "/home/hue/hue-4.0.0/build/env/lib/python2.7/site-packages/Mako-0.8.1-py2.7.egg/mako/runtime.py", line 818, in _render_context
_exec_template(inherit, lclcontext, args=args, kwargs=kwargs)
File "/home/hue/hue-4.0.0/build/env/lib/python2.7/site-packages/Mako-0.8.1-py2.7.egg/mako/runtime.py", line 844, in _exec_template
callable_(context, *args, **kwargs)
File "/tmp/tmpnJ8DqA/desktop/500.mako.py", line 120, in render_body
__M_writer(unicode( commonfooter(request, messages) ))
File "/home/hue/hue-4.0.0/desktop/core/src/desktop/views.py", line 494, in commonfooter
hue_settings = Settings.get_settings()
File "/home/hue/hue-4.0.0/desktop/core/src/desktop/models.py", line 109, in get_settings
settings, created = Settings.objects.get_or_create(id=1)
File "/home/hue/hue-4.0.0/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/models/manager.py", line 154, in get_or_create
return self.get_queryset().get_or_create(**kwargs)
File "/home/hue/hue-4.0.0/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/models/query.py", line 391, in get_or_create
six.reraise(*exc_info)
File "/home/hue/hue-4.0.0/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/models/query.py", line 383, in get_or_create
obj.save(force_insert=True, using=self.db)
File "/home/hue/hue-4.0.0/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/models/base.py", line 545, in save
force_update=force_update, update_fields=update_fields)
File "/home/hue/hue-4.0.0/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/models/base.py", line 573, in save_base
updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
File "/home/hue/hue-4.0.0/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/models/base.py", line 654, in _save_table
result = self._do_insert(cls._base_manager, using, fields, update_pk, raw)
File "/home/hue/hue-4.0.0/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/models/base.py", line 687, in _do_insert
using=using, raw=raw)
File "/home/hue/hue-4.0.0/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/models/manager.py", line 232, in _insert
return insert_query(self.model, objs, fields, **kwargs)
File "/home/hue/hue-4.0.0/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/models/query.py", line 1514, in insert_query
return query.get_compiler(using=using).execute_sql(return_id)
File "/home/hue/hue-4.0.0/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/models/sql/compiler.py", line 903, in execute_sql
cursor.execute(sql, params)
File "/home/hue/hue-4.0.0/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/backends/util.py", line 53, in execute
return self.cursor.execute(sql, params)
File "/home/hue/hue-4.0.0/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/utils.py", line 99, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/home/hue/hue-4.0.0/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/backends/util.py", line 53, in execute
return self.cursor.execute(sql, params)
File "/home/hue/hue-4.0.0/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/backends/sqlite3/base.py", line 452, in execute
return Database.Cursor.execute(self, query, params)
OperationalError: attempt to write a readonly database
03-12-2020
05:47 AM
I see the supervisor.py file got created in a different folder than /usr/local/hue; it's in the desktop/core/src/desktop folder. I am running as root. Below is what I get:
[root@ds desktop]# pwd
/root/hue-3.8.1/desktop/core/src/desktop
[root@ds desktop]# ./supervisor
-bash: ./supervisor: No such file or directory
[root@ds desktop]# ./supervisor.py
-bash: ./supervisor.py: Permission denied
[root@ds desktop]# chmod 777 supervisor.py
[root@ds desktop]# ./supervisor.py
Traceback (most recent call last):
File "./supervisor.py", line 32, in <module>
from daemon.pidlockfile import PIDLockFile
ImportError: No module named daemon.pidlockfile
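The ImportError suggests supervisor.py is being run with the system Python rather than Hue's bundled virtualenv, which is where python-daemon and the other dependencies live. A sketch of the usual sequence (assumes the build prerequisites are installed):
cd /root/hue-3.8.1
make apps                  # builds build/env with Hue's Python dependencies
build/env/bin/supervisor   # start Hue through the built virtualenv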
03-11-2020
07:52 PM
I have installed Superset on Ambari 2.7 but don't know the default username and password. How do I reset them?
Thanks
Prakash
Labels:
- Apache Ambari
03-11-2020
06:05 PM
I have installed Hue 3.8.1 from a tarball and it's running on CentOS 7. I am not sure how to start Hue.
Below is the directory structure.
[root@ds hue-3.8.1]# ls -l
total 56
drwxrwxr-x. 21 sitadmin sitadmin 4096 May 4 2015 apps
drwxr-xr-x. 3 root root 17 Mar 10 23:14 build
drwxrwxr-x. 5 sitadmin sitadmin 58 May 4 2015 desktop
drwxrwxr-x. 6 sitadmin sitadmin 142 May 4 2015 docs
drwxrwxr-x. 3 sitadmin sitadmin 24 May 4 2015 ext
-rw-rw-r--. 1 sitadmin sitadmin 11358 May 4 2015 LICENSE.txt
-rw-rw-r--. 1 sitadmin sitadmin 4715 May 4 2015 Makefile
-rw-rw-r--. 1 sitadmin sitadmin 8505 May 4 2015 Makefile.sdk
-rw-rw-r--. 1 sitadmin sitadmin 3498 May 4 2015 Makefile.vars
-rw-rw-r--. 1 sitadmin sitadmin 2192 May 4 2015 Makefile.vars.priv
drwxrwxr-x. 2 sitadmin sitadmin 21 May 4 2015 maven
-rw-rw-r--. 1 sitadmin sitadmin 801 May 4 2015 NOTICE.txt
-rw-rw-r--. 1 sitadmin sitadmin 1562 May 4 2015 README
drwxrwxr-x. 5 sitadmin sitadmin 89 May 4 2015 tools
-rw-rw-r--. 1 sitadmin sitadmin 932 May 4 2015 VERSION
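Since the listing shows a build directory, a sketch of the usual start sequence for a tarball install (the port defaults to 8888 unless changed in desktop/conf/hue.ini; treat the paths as assumptions):
cd /root/hue-3.8.1
make apps                  # finish the build if it hasn't completed
build/env/bin/supervisor   # then browse to http://<host>:8888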
Labels:
- Cloudera Hue
02-28-2020
03:47 PM
For evaluation purposes, I am planning to install everything on the same host machine. Do I have to install the Ambari agent as well before I can add the host machine as a node in the cluster?
Thanks
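For context, the agent needs to run on every host, including a single-node evaluation box, unless the Ambari wizard installs it over SSH. A manual sketch (assumes the Ambari repo is already configured; replace the FQDN placeholder):
yum install -y ambari-agent
# Point the agent at the server (the default is hostname=localhost):
sed -i 's/hostname=localhost/hostname=<ambari-server-fqdn>/' /etc/ambari-agent/conf/ambari-agent.ini
ambari-agent start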
Labels:
- Apache Ambari
02-28-2020
02:48 PM
Some more info:
INFO 2020-02-28 02:59:33,768 PingPortListener.py:50 - Ping port listener started on port: 8670
INFO 2020-02-28 02:59:33,771 main.py:439 - Connecting to Ambari server at https://hdp.0lw5ekcxj3kufbug3aze5gfphe.bx.internal.cloudapp.net:8440 (127.0.0.1)
INFO 2020-02-28 02:59:33,771 NetUtil.py:70 - Connecting to https://hdp.0lw5ekcxj3kufbug3aze5gfphe.bx.internal.cloudapp.net:8440/ca
INFO 2020-02-28 02:59:33,841 main.py:449 - Connected to Ambari server hdp.0lw5ekcxj3kufbug3aze5gfphe.bx.internal.cloudapp.net
INFO 2020-02-28 02:59:33,852 hostname.py:67 - agent:hostname_script configuration not defined thus read hostname 'HDP' using socket.getfqdn().
INFO 2020-02-28 02:59:33,853 threadpool.py:58 - Started thread pool with 3 core threads and 20 maximum threads
WARNING 2020-02-28 02:59:33,853 AlertSchedulerHandler.py:280 - [AlertScheduler] /var/lib/ambari-agent/cache/alerts/definitions.json not found or invalid. No alerts will be scheduled until registration occurs.
INFO 2020-02-28 02:59:33,853 AlertSchedulerHandler.py:175 - [AlertScheduler] Starting <ambari_agent.apscheduler.scheduler.Scheduler object at 0x7fe60a61cfd0>; currently running: False
INFO 2020-02-28 02:59:33,865 hostname.py:106 - Read public hostname 'hdp' using socket.getfqdn()
INFO 2020-02-28 02:59:33,865 Hardware.py:68 - Initializing host system information.
INFO 2020-02-28 02:59:33,874 Hardware.py:188 - Some mount points were ignored: /dev/shm, /run, /sys/fs/cgroup, /run/user/1000
INFO 2020-02-28 02:59:33,894 hostname.py:67 - agent:hostname_script configuration not defined thus read hostname 'HDP' using socket.getfqdn().
INFO 2020-02-28 02:59:33,900 Facter.py:202 - Directory: '/etc/resource_overrides' does not exist - it won't be used for gathering system resources.
INFO 2020-02-28 02:59:33,905 Hardware.py:73 - Host system information: {'kernel': 'Linux', 'domain': '', 'physicalprocessorcount': 2, 'kernelrelease': '3.10.0-1062.9.1.el7.x86_64', 'uptime_days': '0', 'memorytotal': 7990252, 'swapfree': '2.00 GB', 'memorysize': 7990252, 'osfamily': 'redhat', 'swapsize': '2.00 GB', 'processorcount': 2, 'netmask': '255.255.255.0', 'timezone': 'UTC', 'hardwareisa': 'x86_64', 'memoryfree': 1985308, 'operatingsystem': 'redhat', 'kernelmajversion': '3.10', 'kernelversion': '3.10.0', 'macaddress': '00:0D:3A:8A:DE:15', 'operatingsystemrelease': '7.7', 'ipaddress': '10.0.0.8', 'hostname': 'hdp', 'uptime_hours': '12', 'fqdn': 'hdp', 'id': 'root', 'architecture': 'x86_64', 'selinux': True, 'mounts': [{'available': '3983556', 'used': '0', 'percent': '0%', 'device': 'devtmpfs', 'mountpoint': '/dev', 'type': 'devtmpfs', 'size': '3983556'}, {'available': '2016956', 'used': '69956', 'percent': '4%', 'device': '/dev/mapper/rootvg-rootlv', 'mountpoint': '/', 'type': 'xfs', 'size': '2086912'}, {'available': '8762208', 'used': '1713312', 'percent': '17%', 'device': '/dev/mapper/rootvg-usrlv', 'mountpoint': '/usr', 'type': 'xfs', 'size': '10475520'}, {'available': '384244', 'used': '121336', 'percent': '24%', 'device': '/dev/sda2', 'mountpoint': '/boot', 'type': 'xfs', 'size': '505580'}, {'available': '501824', 'used': '9896', 'percent': '2%', 'device': '/dev/sda1', 'mountpoint': '/boot/efi', 'type': 'vfat', 'size': '511720'}, {'available': '2053688', 'used': '33224', 'percent': '2%', 'device': '/dev/mapper/rootvg-tmplv', 'mountpoint': '/tmp', 'type': 'xfs', 'size': '2086912'}, {'available': '4209180', 'used': '4169188', 'percent': '50%', 'device': '/dev/mapper/rootvg-varlv', 'mountpoint': '/var', 'type': 'xfs', 'size': '8378368'}, {'available': '1005240', 'used': '33096', 'percent': '4%', 'device': '/dev/mapper/rootvg-homelv', 'mountpoint': '/home', 'type': 'xfs', 'size': '1038336'}, {'available': '1943884', 'used': '143028', 'percent': '7%', 'device': '/dev/mapper/rootvg-optlv', 'mountpoint': '/opt', 'type': 'xfs', 'size': '2086912'}, {'available': '46685624', 'used': '2150432', 'percent': '5%', 'device': '/dev/sdb1', 'mountpoint': '/mnt/resource', 'type': 'ext4', 'size': '51473824'}], 'hardwaremodel': 'x86_64', 'uptime_seconds': '46242', 'interfaces': 'eth0,lo'}
02-28-2020
02:44 PM
I have installed the Ambari server and agent on the same host but am not able to add the host to the Ambari cluster, as it fails to register. Below is the output of the ambari-agent.log file.
INFO 2020-02-28 02:59:33,768 PingPortListener.py:50 - Ping port listener started on port: 8670
INFO 2020-02-28 02:59:33,771 main.py:439 - Connecting to Ambari server at https://hdp.0lw5ekcxj3kufbug3aze5gfphe.bx.internal.cloudapp.net:8440 (127.0.0.1)
INFO 2020-02-28 02:59:33,771 NetUtil.py:70 - Connecting to https://hdp.0lw5ekcxj3kufbug3aze5gfphe.bx.internal.cloudapp.net:8440/ca
INFO 2020-02-28 02:59:33,841 main.py:449 - Connected to Ambari server hdp.0lw5ekcxj3kufbug3aze5gfphe.bx.internal.cloudapp.net
INFO 2020-02-28 02:59:33,852 hostname.py:67 - agent:hostname_script configuration not defined thus read hostname 'HDP' using socket.getfqdn().
INFO 2020-02-28 02:59:33,853 threadpool.py:58 - Started thread pool with 3 core threads and 20 maximum threads
WARNING 2020-02-28 02:59:33,853 AlertSchedulerHandler.py:280 - [AlertScheduler] /var/lib/ambari-agent/cache/alerts/definitions.json not found or invalid. No alerts will be scheduled until registration occurs.
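One detail worth noting in this log: the agent reports its hostname as 'HDP'/'hdp' while the server lives at the long internal.cloudapp.net name, and registration is sensitive to that mismatch. A quick check sketch:
# Both of these should return the FQDN the Ambari server expects for this host:
hostname -f
python -c 'import socket; print(socket.getfqdn())'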
"/var/log/ambari-agent/ambari-agent.log" [readonly] 414L, 50499C
Labels:
- Apache Ambari
07-21-2018
02:35 AM
I need to downgrade from HDP 2.6.5 to 2.6.0. I am able to register the version but not able to install it.
Tags:
- Hadoop Core
- hdp-2.6.0
Labels:
- Hortonworks Data Platform (HDP)
07-16-2018
02:29 AM
Thanks @Jay Kumar SenSharma. Question: on the other nodes, do we have to do anything after installing the new version of Java? Is just putting the JDK in the same path as on the Ambari server going to be OK?
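A quick per-node sanity check sketch (the JDK path is an assumption mirroring the server's layout):
ls -l /usr/jdk64/jdk1.8.0_172/bin/java
/usr/jdk64/jdk1.8.0_172/bin/java -version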
07-16-2018
01:26 AM
I need to update the Oracle JDK to 8u172. What's the best way of doing it? Is there documentation for this?
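A minimal sketch of the usual route: unpack the new JDK to the same path on every host, then point Ambari at it by re-running setup with a custom JDK path (the path below is an assumption):
ambari-server setup -j /usr/jdk64/jdk1.8.0_172
ambari-server restart
# components pick up the new JAVA_HOME on their next restart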
Labels:
- Apache Ambari
07-16-2018
01:22 AM
Thank you @Jay Kumar SenSharma. It did fix the problem, but I am wondering why it was working before and stopped working after a VM restart.
07-11-2018
01:26 AM
I have a few VMs in the Azure cloud. I upgraded memory on all the VMs, and after reboot it looks like none of the cluster members are able to talk to the Ambari server. What could be wrong?
INFO 2018-07-11 01:20:43,857 NetUtil.py:70 - Connecting to https://dev1:8440/ca
ERROR 2018-07-11 01:20:43,863 NetUtil.py:96 - EOF occurred in violation of protocol (_ssl.c:579)
ERROR 2018-07-11 01:20:43,863 NetUtil.py:97 - SSLError: Failed to connect. Please check openssl library versions.
Refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1022468 for more details.
WARNING 2018-07-11 01:20:43,863 NetUtil.py:124 - Server at https://dev1:8440 is not reachable, sleeping for 10 seconds...
INFO 2018-07-11 01:20:53,864 main.py:439 - Connecting to Ambari server at https://dev1:8440 (10.0.0.11)
INFO 2018-07-11 01:20:53,864 NetUtil.py:70 - Connecting to https://dev1:8440/ca
ERROR 2018-07-11 01:20:53,869 NetUtil.py:96 - EOF occurred in violation of protocol (_ssl.c:579)
ERROR 2018-07-11 01:20:53,869 NetUtil.py:97 - SSLError: Failed to connect. Please check openssl library versions.
Refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1022468 for more details.
WARNING 2018-07-11 01:20:53,869 NetUtil.py:124 - Server at https://dev1:8440 is not reachable, sleeping for 10 seconds...
INFO 2018-07-11 01:21:03,869 NetUtil.py:70 - Connecting to https://dev1:8440/ca
ERROR 2018-07-11 01:21:03,874 NetUtil.py:96 - EOF occurred in violation of protocol (_ssl.c:579)
ERROR 2018-07-11 01:21:03,874 NetUtil.py:97 - SSLError: Failed to connect. Please check openssl library versions.
Refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1022468 for more details.
WARNING 2018-07-11 01:21:03,874 NetUtil.py:124 - Server at https://dev1:8440 is not reachable, sleeping for 10 seconds...
INFO 2018-07-11 01:21:13,875 NetUtil.py:70 - Connecting to https://dev1:8440/ca
ERROR 2018-07-11 01:21:13,880 NetUtil.py:96 - EOF occurred in violation of protocol (_ssl.c:579)
ERROR 2018-07-11 01:21:13,880 NetUtil.py:97 - SSLError: Failed to connect. Please check openssl library versions.
Refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1022468 for more details.
WARNING 2018-07-11 01:21:13,880 NetUtil.py:124 - Server at https://dev1:8440 is not reachable, sleeping for 10 seconds...
INFO 2018-07-11 01:21:23,881 NetUtil.py:70 - Connecting to https://dev1:8440/ca
ERROR 2018-07-11 01:21:23,886 NetUtil.py:96 - EOF occurred in violation of protocol (_ssl.c:579)
ERROR 2018-07-11 01:21:23,886 NetUtil.py:97 - SSLError: Failed to connect. Please check openssl library versions.
Refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1022468 for more details.
WARNING 2018-07-11 01:21:23,886 NetUtil.py:124 - Server at https://dev1:8440 is not reachable, sleeping for 10 seconds.
Refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1022468 for more details.
WARNING 2018-07-11 01:17:13,729 NetUtil.py:124 - Server at https://dev1:8440 is not reachable, sleeping for 10 seconds...
INFO 2018-07-11 01:17:23,730 NetUtil.py:70 - Connecting to https://dev1:8440/ca
ERROR 2018-07-11 01:17:23,734 NetUtil.py:96 - EOF occurred in violation of protocol (_ssl.c:579)
ERROR 2018-07-11 01:17:23,734 NetUtil.py:97 - SSLError: Failed to connect. Please check openssl library versions.
Refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1022468 for more details.
WARNING 2018-07-11 01:17:23,734 NetUtil.py:124 - Server at https://dev1:8440 is not reachable, sleeping for 10 seconds..
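Per the Bugzilla link in the log itself, this usually comes down to the openssl/Python combination on the agent hosts. A quick check sketch:
openssl version
rpm -q openssl
python -c 'import ssl; print(ssl.OPENSSL_VERSION)'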
Labels:
- Apache Ambari
06-27-2018
06:32 PM
Hi! I am trying to deploy a new cluster on AWS with the following specs: RHEL 7.5, Ambari 2.6.2, HDP 2.6.5. After registering the nodes, the component installation fails. Below is the log from one of the nodes. How can I skip a failing component and install the others? Please help.
2018-06-27 18:05:21,063 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2018-06-27 18:05:21,069 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2018-06-27 18:05:21,070 - Group['livy'] {}
2018-06-27 18:05:21,072 - Adding group Group['livy']
2018-06-27 18:05:21,094 - Group['spark'] {}
2018-06-27 18:05:21,094 - Adding group Group['spark']
2018-06-27 18:05:21,109 - Group['hdfs'] {}
2018-06-27 18:05:21,109 - Adding group Group['hdfs']
2018-06-27 18:05:21,124 - Group['hadoop'] {}
2018-06-27 18:05:21,124 - Adding group Group['hadoop']
2018-06-27 18:05:21,138 - Group['users'] {}
2018-06-27 18:05:21,139 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-06-27 18:05:21,139 - Adding user User['hive']
2018-06-27 18:05:21,169 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-06-27 18:05:21,170 - Adding user User['zookeeper']
2018-06-27 18:05:21,193 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-06-27 18:05:21,193 - Adding user User['infra-solr']
2018-06-27 18:05:21,215 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-06-27 18:05:21,216 - Adding user User['oozie']
2018-06-27 18:05:21,238 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-06-27 18:05:21,238 - Adding user User['ams']
2018-06-27 18:05:21,262 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-06-27 18:05:21,262 - Adding user User['tez']
2018-06-27 18:05:21,285 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-06-27 18:05:21,285 - Adding user User['livy']
2018-06-27 18:05:21,309 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-06-27 18:05:21,309 - Adding user User['spark']
2018-06-27 18:05:21,333 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-06-27 18:05:21,333 - Adding user User['ambari-qa']
2018-06-27 18:05:21,365 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-06-27 18:05:21,365 - Adding user User['kafka']
2018-06-27 18:05:21,389 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2018-06-27 18:05:21,389 - Adding user User['hdfs']
2018-06-27 18:05:21,412 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-06-27 18:05:21,412 - Adding user User['sqoop']
2018-06-27 18:05:21,435 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-06-27 18:05:21,435 - Adding user User['yarn']
2018-06-27 18:05:21,457 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-06-27 18:05:21,458 - Adding user User['mapred']
2018-06-27 18:05:21,481 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-06-27 18:05:21,481 - Adding user User['hcat']
2018-06-27 18:05:21,505 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-06-27 18:05:21,509 - Writing File['/var/lib/ambari-agent/tmp/changeUid.sh'] because it doesn't exist
2018-06-27 18:05:21,509 - Changing permission for /var/lib/ambari-agent/tmp/changeUid.sh from 644 to 555
2018-06-27 18:05:21,510 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-06-27 18:05:21,514 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-06-27 18:05:21,515 - Group['hdfs'] {}
2018-06-27 18:05:21,515 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}
2018-06-27 18:05:21,515 - FS Type:
2018-06-27 18:05:21,516 - Directory['/etc/hadoop'] {'mode': 0755}
2018-06-27 18:05:21,516 - Creating directory Directory['/etc/hadoop'] since it doesn't exist.
2018-06-27 18:05:21,516 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2018-06-27 18:05:21,516 - Creating directory Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] since it doesn't exist.
2018-06-27 18:05:21,517 - Changing owner for /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir from 0 to hdfs
2018-06-27 18:05:21,517 - Changing group for /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir from 0 to hadoop
2018-06-27 18:05:21,517 - Changing permission for /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir from 755 to 1777
2018-06-27 18:05:21,517 - Directory['/var/lib/ambari-agent/tmp/AMBARI-artifacts/'] {'create_parents': True}
2018-06-27 18:05:21,517 - Creating directory Directory['/var/lib/ambari-agent/tmp/AMBARI-artifacts/'] since it doesn't exist.
2018-06-27 18:05:21,518 - File['/var/lib/ambari-agent/tmp/jdk-8u112-linux-x64.tar.gz'] {'content': DownloadSource('http://ip-172-31-7-134.us-east-2.compute.internal:8080/resources/jdk-8u112-linux-x64.tar.gz'), 'not_if': 'test -f /var/lib/ambari-agent/tmp/jdk-8u112-linux-x64.tar.gz'}
2018-06-27 18:05:21,520 - Downloading the file from http://ip-172-31-7-134.us-east-2.compute.internal:8080/resources/jdk-8u112-linux-x64.tar.gz
2018-06-27 18:05:23,877 - File['/var/lib/ambari-agent/tmp/jdk-8u112-linux-x64.tar.gz'] {'mode': 0755}
2018-06-27 18:05:23,877 - Changing permission for /var/lib/ambari-agent/tmp/jdk-8u112-linux-x64.tar.gz from 644 to 755
2018-06-27 18:05:23,878 - Directory['/usr/jdk64'] {}
2018-06-27 18:05:23,878 - Creating directory Directory['/usr/jdk64'] since it doesn't exist.
2018-06-27 18:05:23,878 - Execute[('chmod', 'a+x', u'/usr/jdk64')] {'sudo': True}
2018-06-27 18:05:23,884 - Execute['cd /var/lib/ambari-agent/tmp/jdk_tmp_LrBHXG && tar -xf /var/lib/ambari-agent/tmp/jdk-8u112-linux-x64.tar.gz && ambari-sudo.sh cp -rp /var/lib/ambari-agent/tmp/jdk_tmp_LrBHXG/* /usr/jdk64'] {}
2018-06-27 18:05:27,401 - Directory['/var/lib/ambari-agent/tmp/jdk_tmp_LrBHXG'] {'action': ['delete']}
2018-06-27 18:05:27,402 - Removing directory Directory['/var/lib/ambari-agent/tmp/jdk_tmp_LrBHXG'] and all its content
2018-06-27 18:05:28,056 - File['/usr/jdk64/jdk1.8.0_112/bin/java'] {'mode': 0755, 'cd_access': 'a'}
2018-06-27 18:05:28,057 - Execute[('chmod', '-R', '755', u'/usr/jdk64/jdk1.8.0_112')] {'sudo': True}
2018-06-27 18:05:28,088 - Repository['HDP-2.6-repo-1'] {'append_to_file': False, 'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.5.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2018-06-27 18:05:28,096 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': InlineTemplate(...)}
2018-06-27 18:05:28,096 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because it doesn't exist
2018-06-27 18:05:28,097 - Repository['HDP-2.6-GPL-repo-1'] {'append_to_file': True, 'base_url': 'http://public-repo-1.hortonworks.com/HDP-GPL/centos7/2.x/updates/2.6.5.0', 'action': ['create'], 'components': [u'HDP-GPL', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2018-06-27 18:05:28,100 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.5.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-2.6-GPL-repo-1]\nname=HDP-2.6-GPL-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP-GPL/centos7/2.x/updates/2.6.5.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-06-27 18:05:28,100 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2018-06-27 18:05:28,100 - Repository['HDP-UTILS-1.1.0.22-repo-1'] {'append_to_file': True, 'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2018-06-27 18:05:28,103 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.5.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-2.6-GPL-repo-1]\nname=HDP-2.6-GPL-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP-GPL/centos7/2.x/updates/2.6.5.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.22-repo-1]\nname=HDP-UTILS-1.1.0.22-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-06-27 18:05:28,103 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2018-06-27 18:05:28,104 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-06-27 18:05:28,239 - Installing package unzip ('/usr/bin/yum -d 0 -e 0 -y install unzip')
2018-06-27 18:05:30,059 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-06-27 18:05:30,095 - Skipping installation of existing package curl
2018-06-27 18:05:30,096 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-06-27 18:05:30,130 - Installing package hdp-select ('/usr/bin/yum -d 0 -e 0 -y install hdp-select')
2018-06-27 18:05:31,377 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}
2018-06-27 18:05:31,400 - call returned (1, 'Traceback (most recent call last):\n File "/usr/bin/hdp-select", line 446, in <module>\n printVersions()\n File "/usr/bin/hdp-select", line 286, in printVersions\n for f in os.listdir(root):\nOSError: [Errno 2] No such file or directory: \'/usr/hdp\'')
2018-06-27 18:05:31,615 - Command repositories: HDP-2.6-repo-1, HDP-2.6-GPL-repo-1, HDP-UTILS-1.1.0.22-repo-1
2018-06-27 18:05:31,615 - Applicable repositories: HDP-2.6-repo-1, HDP-2.6-GPL-repo-1, HDP-UTILS-1.1.0.22-repo-1
2018-06-27 18:05:31,617 - Looking for matching packages in the following repositories: HDP-2.6-repo-1, HDP-2.6-GPL-repo-1, HDP-UTILS-1.1.0.22-repo-1
2018-06-27 18:05:34,280 - Package['hadoop_2_6_5_0_292-yarn'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-06-27 18:05:34,412 - Installing package hadoop_2_6_5_0_292-yarn ('/usr/bin/yum -d 0 -e 0 -y install hadoop_2_6_5_0_292-yarn')
2018-06-27 18:06:05,163 - Package['hadoop_2_6_5_0_292-mapreduce'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-06-27 18:06:05,201 - Installing package hadoop_2_6_5_0_292-mapreduce ('/usr/bin/yum -d 0 -e 0 -y install hadoop_2_6_5_0_292-mapreduce')
2018-06-27 18:06:06,663 - Execution of '/usr/bin/yum -d 0 -e 0 -y install hadoop_2_6_5_0_292-mapreduce' returned 1. Error: Package: hadoop_2_6_5_0_292-hdfs-2.7.3.2.6.5.0-292.x86_64 (HDP-2.6-repo-1)
Requires: libtirpc-devel
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
2018-06-27 18:06:06,663 - Failed to install package hadoop_2_6_5_0_292-mapreduce. Executing '/usr/bin/yum clean metadata'
2018-06-27 18:06:06,910 - Retrying to install package hadoop_2_6_5_0_292-mapreduce after 30 seconds
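The underlying failure here is the missing libtirpc-devel dependency, which on RHEL sits in the optional channel. A sketch for a RHEL 7 instance on EC2 (the repo id varies by region and RHUI version, so treat the name as an assumption):
yum-config-manager --enable rhui-REGION-rhel-server-optional
yum install -y libtirpc-devel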
Labels:
- Apache Ambari
06-26-2018
08:34 PM
@Jay Kumar SenSharma - here is the log. When I restart it, it stays up for some time before going down.
[root@D02 yarn]# tail -10 yarn-yarn-nodemanager-D02.asotc.com.log
2018-06-26 16:15:09,830 INFO shuffle.ExternalShuffleBlockResolver (ExternalShuffleBlockResolver.java:application) - Application application_1527390036186_1818 removed, cleanupLocalDirs = false
2018-06-26 16:15:09,854 INFO application.ApplicationImpl (ApplicationImpl.java:handle(464)) - Application applic0036186_1818 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2018-06-26 16:15:09,854 INFO logaggregation.AppLogAggregatorImpl (AppLogAggregatorImpl.java:finishLogAggregationlication just finished : application_1527390036186_1818
2018-06-26 16:15:10,267 INFO zlib.ZlibFactory (ZlibFactory.java:<clinit>(49)) - Successfully loaded & initializeb library
2018-06-26 16:15:10,356 INFO ipc.Server (Server.java:saslProcess(1441)) - Auth successful for appattempt_1527390000001 (auth:SIMPLE)
2018-06-26 16:15:10,377 INFO compress.CodecPool (CodecPool.java:getCompressor(153)) - Got brand-new compressor [
2018-06-26 16:15:10,391 INFO logaggregation.AppLogAggregatorImpl (AppLogAggregatorImpl.java:doContainerLogAggreg- Uploading logs for container container_e297_1527390036186_1818_03_000001. Current good log dirs are /data/hadoo
2018-06-26 16:15:10,454 INFO containermanager.ContainerManagerImpl (ContainerManagerImpl.java:startContainerInte Start request for container_e297_1527390036186_1825_01_000001 by user dr.who
2018-06-26 16:15:10,455 INFO containermanager.ContainerManagerImpl (ContainerManagerImpl.java:startContainerInte Creating a new application reference for app application_1527390036186_1825
2018-06-26 16:15:10,473 INFO application.ApplicationImpl (ApplicationImpl.java:handle(464)) - Application applic0036186_1825 transitioned from NEW to INITING
2018-06-26 16:15:10,485 WARN logaggregation.LogAggregationService (LogAggregationService.java:verifyAndCreateRem5)) - Remote Root Log Dir [/app-logs] already exist, but with incorrect permissions. Expected: [rwxrwxrwt], Found]. The cluster may have problems with multiple users.
2018-06-26 16:15:10,486 WARN logaggregation.AppLogAggregatorImpl (AppLogAggregatorImpl.java:<init>(190)) - rollierval is set as -1. The log rolling mornitoring interval is disabled. The logs will be aggregated after this applinished.
2018-06-26 16:15:10,557 INFO nodemanager.NMAuditLogger (NMAuditLogger.java:logSuccess(89)) - USER=dr.who I0 OPERATION=Start Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application86_1825 CONTAINERID=container_e297_1527390036186_1825_01_000001
2018-06-26 16:15:10,574 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(46ng path : /data/hadoop/yarn/log/application_1527390036186_1818/container_e297_1527390036186_1818_03_000001/launchh
2018-06-26 16:15:10,575 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(46ng path : /data/hadoop/yarn/log/application_1527390036186_1818/container_e297_1527390036186_1818_03_000001/direct
2018-06-26 16:15:10,679 INFO application.ApplicationImpl (ApplicationImpl.java:transition(304)) - Adding contain390036186_1825_01_000001 to application application_1527390036186_1825
2018-06-26 16:15:10,680 INFO application.ApplicationImpl (ApplicationImpl.java:handle(464)) - Application applic0036186_1825 transitioned from INITING to RUNNING
2018-06-26 16:15:10,680 INFO container.ContainerImpl (ContainerImpl.java:handle(1136)) - Container container_e29186_1825_01_000001 transitioned from NEW to LOCALIZED
2018-06-26 16:15:10,680 INFO containermanager.AuxServices (AuxServices.java:handle(196)) - Got event CONTAINER_Id application_1527390036186_1825
2018-06-26 16:15:10,680 INFO yarn.YarnShuffleService (YarnShuffleService.java:initializeContainer(183)) - Initiainer container_e297_1527390036186_1825_01_000001
2018-06-26 16:15:10,902 INFO container.ContainerImpl (ContainerImpl.java:handle(1136)) - Container container_e29186_1825_01_000001 transitioned from LOCALIZED to RUNNING
2018-06-26 16:15:10,910 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:buildCommandExe- launchContainer: [bash, /data/hadoop/yarn/local/usercache/dr.who/appcache/application_1527390036186_1825/contai7390036186_1825_01_000001/default_container_executor.sh]
06-15-2018
02:03 AM
I have 3 NodeManagers in my cluster and they are failing soon after restart. I tried deleting the /var/log/hadoop-yarn/nodemanager/recovery-state directory, but no help. Please help. Where can I find the log to see why this is failing?
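The NodeManager's own log is the place to look. A sketch using HDP's default locations (adjust if yarn_log_dir_prefix was changed):
ls /var/log/hadoop-yarn/yarn/
tail -200 /var/log/hadoop-yarn/yarn/yarn-yarn-nodemanager-$(hostname -f).log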
Tags:
- Hadoop Core
- YARN
Labels:
- Apache YARN
06-04-2018
02:06 AM
All of a sudden all my cluster machines lost the heartbeat. What could be wrong?
INFO 2018-06-03 06:14:50,210 NetUtil.py:70 - Connecting to https://localhost:8440/connection_info
INFO 2018-06-03 06:14:50,351 security.py:55 - Server require two-way SSL authentication. Use it instead of one-way...
INFO 2018-06-03 06:14:50,352 security.py:182 - Server certicate exists, ok
INFO 2018-06-03 06:14:50,352 security.py:190 - Agent key exists, ok
INFO 2018-06-03 06:14:50,352 security.py:198 - Agent certificate exists, ok
INFO 2018-06-03 06:14:50,353 security.py:93 - SSL Connect being called.. connecting to the server
ERROR 2018-06-03 06:14:50,361 security.py:80 - Two-way SSL authentication failed. Ensure that server and agent certificates were signed by the same CA and restart the agent.
In order to receive a new agent certificate, remove existing certificate file from keys directory. As a workaround you can turn off two-way SSL authentication in server configuration(ambari.properties)
Exiting..
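Following the workaround spelled out in the error text itself, a sketch (the keys directory matches the agent INI shown later in this thread):
# Remove the stale agent certificate so a fresh one is issued on reconnect:
rm -f /var/lib/ambari-agent/keys/*.crt /var/lib/ambari-agent/keys/*.key
ambari-agent restart
# Alternatively, on the server, set security.server.two_way_ssl=false in
# /etc/ambari-server/conf/ambari.properties and restart ambari-server.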
Tags:
- ambari-server
Labels:
- Apache Ambari
05-04-2018
02:40 PM
INFO 2018-05-04 10:36:31,383 NetUtil.py:70 - Connecting to https://ambari.asotc.com:8440/ca
WARNING 2018-05-04 10:36:31,384 NetUtil.py:101 - Failed to connect to https://ambari.asotc.com:8440/ca due to 'module' object has no attribute 'PROTOCOL_TLSv1_2'
WARNING 2018-05-04 10:36:31,384 NetUtil.py:124 - Server at https://ambari.asotc.com:8440 is not reachable, sleeping for 10 seconds...
INFO 2018-05-04 10:36:41,384 NetUtil.py:70 - Connecting to https://ambari.asotc.com:8440/ca
WARNING 2018-05-04 10:36:41,385 NetUtil.py:101 - Failed to connect to https://ambari.asotc.com:8440/ca due to 'module' object has no attribute 'PROTOCOL_TLSv1_2'
WARNING 2018-05-04 10:36:41,386 NetUtil.py:124 - Server at https://ambari.asotc.com:8440 is not reachable, sleeping for 10 seconds...
INFO 2018-05-04 10:36:51,386 NetUtil.py:70 - Connecting to https://ambari.asotc.com:8440/ca
WARNING 2018-05-04 10:36:51,387 NetUtil.py:101 - Failed to connect to https://ambari.asotc.com:8440/ca due to 'module' object has no attribute 'PROTOCOL_TLSv1_2'
WARNING 2018-05-04 10:36:51,388 NetUtil.py:124 - Server at https://ambari.asotc.com:8440 is not reachable, sleeping for 10 seconds...
INFO 2018-05-04 10:37:01,388 NetUtil.py:70 - Connecting to https://ambari.asotc.com:8440/ca
WARNING 2018-05-04 10:37:01,389 NetUtil.py:101 - Failed to connect to https://ambari.asotc.com:8440/ca due to 'module' object has no attribute 'PROTOCOL_TLSv1_2'
WARNING 2018-05-04 10:37:01,390 NetUtil.py:124 - Server at https://ambari.asotc.com:8440 is not reachable, sleeping for 10 seconds...
INFO 2018-05-04 10:37:11,390 NetUtil.py:70 - Connecting to https://ambari.asotc.com:8440/ca
WARNING 2018-05-04 10:37:11,391 NetUtil.py:101 - Failed to connect to https://ambari.asotc.com:8440/ca due to 'module' object has no attribute 'PROTOCOL_TLSv1_2'
WARNING 2018-05-04 10:37:11,392 NetUtil.py:124 - Server at https://ambari.asotc.com:8440 is not reachable, sleeping for 10 seconds...
INFO 2018-05-04 10:37:21,392 NetUtil.py:70 - Connecting to https://ambari.asotc.com:8440/ca
WARNING 2018-05-04 10:37:21,393 NetUtil.py:101 - Failed to connect to https://ambari.asotc.com:8440/ca due to 'module' object has no attribute 'PROTOCOL_TLSv1_2'
WARNING 2018-05-04 10:37:21,394 NetUtil.py:124 - Server at https://ambari.asotc.com:8440 is not reachable, sleeping for 10 seconds...
INFO 2018-05-04 10:37:31,394 NetUtil.py:70 - Connecting to https://ambari.asotc.com:8440/ca
WARNING 2018-05-04 10:37:31,395 NetUtil.py:101 - Failed to connect to https://ambari.asotc.com:8440/ca due to 'module' object has no attribute 'PROTOCOL_TLSv1_2'
WARNING 2018-05-04 10:37:31,396 NetUtil.py:124 - Server at https://ambari.asotc.com:8440 is not reachable, sleeping for 10 seconds...
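"'module' object has no attribute 'PROTOCOL_TLSv1_2'" means the agent's Python ssl module predates TLS 1.2 support. A one-line check sketch for each agent host:
python -c 'import ssl; print(getattr(ssl, "PROTOCOL_TLSv1_2", "missing"))'
# "missing" means the host needs a newer python/openssl build before any
# force_https_protocol=PROTOCOL_TLSv1_2 setting can take effect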
05-04-2018
02:36 PM
@Jay Kumar SenSharma Please find the ambari-agent INI and version below:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific
[server]
hostname=ambari.asotc.com
url_port=8440
secured_url_port=8441
connect_retry_delay=10
max_reconnect_retry_delay=30
[agent]
logdir=/var/log/ambari-agent
piddir=/var/run/ambari-agent
prefix=/var/lib/ambari-agent/data
;loglevel=(DEBUG/INFO)
loglevel=DEBUG
data_cleanup_interval=86400
data_cleanup_max_age=2592000
data_cleanup_max_size_MB = 100
ping_port=8670
cache_dir=/var/lib/ambari-agent/cache
tolerate_download_failures=true
run_as_user=root
parallel_execution=0
alert_grace_period=5
status_command_timeout=5
alert_kinit_timeout=14400000
system_resource_overrides=/etc/resource_overrides
; memory_threshold_soft_mb=400
; memory_threshold_hard_mb=1000
; ignore_mount_points=/mnt/custom1,/mnt/custom2
[security]
force_https_protocol=PROTOCOL_TLSv1_2
keysdir=/var/lib/ambari-agent/keys
server_crt=ca.crt
passphrase_env_var_name=AMBARI_PASSPHRASE
ssl_verify_cert=0
credential_lib_dir=/var/lib/ambari-agent/cred/lib
credential_conf_dir=/var/lib/ambari-agent/cred/conf
credential_shell_cmd=org.apache.hadoop.security.alias.CredentialShell
[network]
[services]
pidLookupPath=/var/run/
[heartbeat]
state_interval_seconds=60
dirs=/etc/hadoop,/etc/hadoop/conf,/etc/hbase,/etc/hcatalog,/etc/hive,/etc/oozie,/etc/sqoop,/var/run/hadoop,/var/run/zookeeper,/var/run/hbase,/var/run/templeton,/var/run/oozie,/var/log/hadoop,/var/log/zookeeper,/var/log/hbase,/var/run/templeton,/var/log/hive
; 0 - unlimited
log_lines_count=300
idle_interval_min=1
idle_interval_max=10
[logging]
syslog_enabled=0
vds:~ # ambari-agent --version
2.5.1.0
05-02-2018
08:59 PM
Hi Experts! I am working on a POC for an SAP-to-Hadoop integration prototype, basically looking into moving data off of SAP and storing it in Hadoop. BODS is being used for this purpose.
Setup:
- HiveServer2 is set up for "No Authentication"
- An ODBC connection is used to connect to the Hive warehouse
- The connection is successful
- The BODS configuration is looking for an HDFS directory, which I have created
Questions:
- When BODS/Job Server copies data to Hadoop/Hive, does it create tables by default? I assume that since HiveServer2 is set up with no authentication, BODS will be able to create tables without any authorization.
- What's the purpose of the HDFS directory? I created a folder inside HDFS and made it "777" so that anyone can create files in it.
Thanks
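Two validation sketches for this setup (host, port, and path are placeholders, and the noSasl parameter assumes the "No Authentication" configuration described above):
# Confirm HiveServer2 accepts unauthenticated connections:
beeline -u "jdbc:hive2://<hs2-host>:10000/default;auth=noSasl"
# The HDFS directory is typically the staging area BODS writes files into
# before they are exposed as Hive tables; verify it exists and is writable:
hdfs dfs -ls -d /path/to/bods/staging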
Labels:
- Apache Hive
05-02-2018
02:38 PM
@Jay Kumar SenSharma Trying to register another host (SUSE Linux 11 SP4) and getting the following error.
OS: SUSE Linux 11 SP4
Python: 2.6.9
Ambari-Agent: 2.5.1
WARNING 2018-05-02 10:30:45,836 NetUtil.py:101 - Failed to connect to https://ambari.asotc.com:8440/ca due to 'module' object has no attribute 'PROTOCOL_TLSv1_2'
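For completeness, the agent-side setting referenced elsewhere in this thread; note it only helps if the host's Python can do TLS 1.2 at all, which Python 2.6.9 on SLES 11 SP4 generally cannot:
# In /etc/ambari-agent/conf/ambari-agent.ini, under the [security] section:
#   force_https_protocol=PROTOCOL_TLSv1_2
# then: ambari-agent restart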
Labels:
- Apache Ambari
04-24-2018
04:26 PM
@Jay Kumar SenSharma Thank you. Not sure why it was set up like that: security.server.two_way_ssl = true. I disabled it and restarted ambari-server. On the host machine I added the property force_https_protocol=PROTOCOL_TLSv1_2 and then tried registering again. This time it was recognized and started registering, but in the review process I got the error below, and the registration process is stuck at "PREPARING TO DEPLOY: 14 of 14 tasks COMPLETED". Should I re-image the machine to SUSE Linux 12 SP1 or SP2 (per the documentation, HDP only supports up to SP2)?
An internal system exception occurred: Trying to map host to cluster where stack does not support host's os type, clusterName=hadoop, clusterStackId=HDP-2.4, hostname=ds.asotc.com, hostOsFamily=suse12
04-24-2018
02:57 PM
@Jay Kumar SenSharma Looks like it got updated now:
ds:/var/log/ambari-agent # tail -50 /var/log/ambari-agent/ambari-agent.log
INFO 2018-04-24 10:55:27,940 security.py:55 - Server require two-way SSL authentication. Use it instead of one-way...
INFO 2018-04-24 10:55:27,941 security.py:179 - Server certicate not exists, downloading
INFO 2018-04-24 10:55:27,941 security.py:202 - Downloading server cert from https://ambari.asotc.com:8440/cert/ca/
ERROR 2018-04-24 10:55:27,949 Controller.py:226 - Unable to connect to: https://ambari.asotc.com:8441/agent/v1/register/ds.asotc.com
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 175, in registerWithServer
ret = self.sendRequest(self.registerUrl, data)
File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 545, in sendRequest
raise IOError('Request to {0} failed due to {1}'.format(url, str(exception)))
IOError: Request to https://ambari.asotc.com:8441/agent/v1/register/ds.asotc.com failed due to <urlopen error EOF occurred in violation of protocol (_ssl.c:661)>
ERROR 2018-04-24 10:55:27,949 Controller.py:227 - Error:Request to https://ambari.asotc.com:8441/agent/v1/register/ds.asotc.com failed due to <urlopen error EOF occurred in violation of protocol (_ssl.c:661)>
WARNING 2018-04-24 10:55:27,949 Controller.py:228 - Sleeping for 26 seconds and then trying again
DEBUG 2018-04-24 10:55:54,143 HostCheckReportFileHandler.py:126 - Host check report at /var/lib/ambari-agent/data/hostcheck.result
DEBUG 2018-04-24 10:55:54,144 HostCheckReportFileHandler.py:177 - Removing old host check file at /var/lib/ambari-agent/data/hostcheck.result
DEBUG 2018-04-24 10:55:54,145 HostCheckReportFileHandler.py:182 - Creating host check file at /var/lib/ambari-agent/data/hostcheck.result
INFO 2018-04-24 10:55:54,158 Controller.py:170 - Registering with ds.asotc.com (172.16.1.67) (agent='{"hardwareProfile": {"kernel": "Linux", "domain": "asotc.com", "physicalprocessorcount": 1, "kernelrelease": "4.4.73-7-default", "uptime_days": "0", "memorytotal": 8068504, "swapfree": "2.00 GB", "memorysize": 8068504, "osfamily": "suse", "swapsize": "2.00 GB", "processorcount": 1, "netmask": "255.255.255.0", "timezone": "EST", "hardwareisa": "x86_64", "memoryfree": 6838720, "operatingsystem": "sles", "kernelmajversion": "4.4", "kernelversion": "4.4.73", "macaddress": "00:15:5D:01:C8:19", "operatingsystemrelease": "12", "ipaddress": "172.16.1.67", "hostname": "ds", "uptime_hours": "1", "fqdn": "ds.asotc.com", "id": "root", "architecture": "x86_64", "selinux": false, "mounts": [{"available": "4023372", "used": "0", "percent": "0%", "device": "devtmpfs", "mountpoint": "/dev", "type": "devtmpfs", "size": "4023372"}, {"available": "55906100", "used": "5387708", "percent": "9%", "device": "/dev/mapper/system-root", "mountpoint": "/", "type": "btrfs", "size": "62914560"}, {"available": "55906100", "used": "5387708", "percent": "9%", "device": "/dev/mapper/system-root", "mountpoint": "/.snapshots", "type": "btrfs", "size": "62914560"}, {"available": "55906100", "used": "5387708", "percent": "9%", "device": "/dev/mapper/system-root", "mountpoint": "/var/tmp", "type": "btrfs", "size": "62914560"}, {"available": "55906100", "used": "5387708", "percent": "9%", "device": "/dev/mapper/system-root", "mountpoint": "/home", "type": "btrfs", "size": "62914560"}, {"available": "55906100", "used": "5387708", "percent": "9%", "device": "/dev/mapper/system-root", "mountpoint": "/var/lib/mysql", "type": "btrfs", "size": "62914560"}, {"available": "55906100", "used": "5387708", "percent": "9%", "device": "/dev/mapper/system-root", "mountpoint": "/boot/grub2/x86_64-efi", "type": "btrfs", "size": "62914560"}, {"available": "55906100", "used": "5387708", "percent": "9%", "device": "/dev/mapper/system-root", "mountpoint": "/opt", "type": "btrfs", "size": "62914560"}, {"available": "55906100", "used": "5387708", "percent": "9%", "device": "/dev/mapper/system-root", "mountpoint": "/tmp", "type": "btrfs", "size": "62914560"}, {"available": "55906100", "used": "5387708", "percent": "9%", "device": "/dev/mapper/system-root", "mountpoint": "/srv", "type": "btrfs", "size": "62914560"}, {"available": "55906100", "used": "5387708", "percent": "9%", "device": "/dev/mapper/system-root", "mountpoint": "/var/spool", "type": "btrfs", "size": "62914560"}, {"available": "55906100", "used": "5387708", "percent": "9%", "device": "/dev/mapper/system-root", "mountpoint": "/usr/local", "type": "btrfs", "size": "62914560"}, {"available": "55906100", "used": "5387708", "percent": "9%", "device": "/dev/mapper/system-root", "mountpoint": "/var/lib/libvirt/images", "type": "btrfs", "size": "62914560"}, {"available": "55906100", "used": "5387708", "percent": "9%", "device": "/dev/mapper/system-root", "mountpoint": "/var/lib/mailman", "type": "btrfs", "size": "62914560"}, {"available": "55906100", "used": "5387708", "percent": "9%", "device": "/dev/mapper/system-root", "mountpoint": "/var/lib/machines", "type": "btrfs", "size": "62914560"}, {"available": "55906100", "used": "5387708", "percent": "9%", "device": "/dev/mapper/system-root", "mountpoint": "/var/log", "type": "btrfs", "size": "62914560"}, {"available": "55906100", "used": "5387708", "percent": "9%", "device": "/dev/mapper/system-root", "mountpoint": "/var/crash", "type": "btrfs", 
"size": "62914560"}, {"available": "55906100", "used": "5387708", "percent": "9%", "device": "/dev/mapper/system-root", "mountpoint": "/var/lib/pgsql", "type": "btrfs", "size": "62914560"}, {"available": "55906100", "used": "5387708", "percent": "9%", "device": "/dev/mapper/system-root", "mountpoint": "/boot/grub2/i386-pc", "type": "btrfs", "size": "62914560"}, {"available": "55906100", "used": "5387708", "percent": "9%", "device": "/dev/mapper/system-root", "mountpoint": "/var/lib/mariadb", "type": "btrfs", "size": "62914560"}, {"available": "55906100", "used": "5387708", "percent": "9%", "device": "/dev/mapper/system-root", "mountpoint": "/var/lib/named", "type": "btrfs", "size": "62914560"}, {"available": "55906100", "used": "5387708", "percent": "9%", "device": "/dev/mapper/system-root", "mountpoint": "/var/opt", "type": "btrfs", "size": "62914560"}, {"available": "55906100", "used": "5387708", "percent": "9%", "device": "/dev/mapper/system-root", "mountpoint": "/var/cache", "type": "btrfs", "size": "62914560"}], "hardwaremodel": "x86_64", "uptime_seconds": "6573", "interfaces": "eth0,lo"}, "currentPingPort": 8670, "prefix": "/var/lib/ambari-agent/data", "agentVersion": "2.5.1.0", "agentEnv": {"transparentHugePage": "", "hostHealth": {"agentTimeStampAtReporting": 1524581754145, "activeJavaProcs": [], "liveServices": [{"status": "Healthy", "name": "ntpd or ntp", "desc": ""}]}, "reverseLookup": true, "alternatives": [], "hasUnlimitedJcePolicy": null, "umask": "18", "firewallName": "rcSuSEfirewall2", "stackFoldersAndFiles": [], "existingUsers": [], "firewallRunning": false}, "timestamp": 1524581753950, "hostname": "ds.asotc.com", "responseId": -1, "publicHostname": "ds.asotc.com"}')
INFO 2018-04-24 10:55:54,162 NetUtil.py:70 - Connecting to https://ambari.asotc.com:8440/connection_info
DEBUG 2018-04-24 10:55:54,335 NetUtil.py:90 - GET https://ambari.asotc.com:8440/connection_info -> 200, body: {"security.server.two_way_ssl":"true"}
DEBUG 2018-04-24 10:55:54,336 security.py:52 - Server two-way SSL authentication required: True
INFO 2018-04-24 10:55:54,336 security.py:55 - Server require two-way SSL authentication. Use it instead of one-way...
INFO 2018-04-24 10:55:54,337 security.py:179 - Server certicate not exists, downloading
INFO 2018-04-24 10:55:54,337 security.py:202 - Downloading server cert from https://ambari.asotc.com:8440/cert/ca/
ERROR 2018-04-24 10:55:54,345 Controller.py:226 - Unable to connect to: https://ambari.asotc.com:8441/agent/v1/register/ds.asotc.com
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 175, in registerWithServer
ret = self.sendRequest(self.registerUrl, data)
File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 545, in sendRequest
raise IOError('Request to {0} failed due to {1}'.format(url, str(exception)))
IOError: Request to https://ambari.asotc.com:8441/agent/v1/register/ds.asotc.com failed due to <urlopen error EOF occurred in violation of protocol (_ssl.c:661)>
ERROR 2018-04-24 10:55:54,345 Controller.py:227 - Error:Request to https://ambari.asotc.com:8441/agent/v1/register/ds.asotc.com failed due to <urlopen error EOF occurred in violation of protocol (_ssl.c:661)>
WARNING 2018-04-24 10:55:54,345 Controller.py:228 - Sleeping for 25 seconds and then trying again
[The agent then rewrites /var/lib/ambari-agent/data/hostcheck.result and retries registration with the identical hardware profile payload; every attempt fails with the same "EOF occurred in violation of protocol (_ssl.c:661)" error against https://ambari.asotc.com:8441/agent/v1/register/ds.asotc.com, and the agent sleeps 25-30 seconds between tries.]
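For anyone hitting the same symptom: the repeated "EOF occurred in violation of protocol (_ssl.c:661)" during registration on port 8441 usually points to a TLS protocol mismatch between the agent's Python SSL stack and the Ambari server's JVM (common when the server JDK has disabled older TLS versions). A quick, non-invasive way to confirm is to probe the server ports from the agent host; the hostname and ports below are taken from the log above, so adjust them for your cluster:

# Probe the Ambari SSL ports to see which TLS versions complete a handshake.
# A successful handshake prints the server certificate chain; a protocol
# mismatch fails immediately.
openssl s_client -connect ambari.asotc.com:8440 -tls1_2 </dev/null
openssl s_client -connect ambari.asotc.com:8441 -tls1_2 </dev/null
# Repeat with -tls1 or -tls1_1 to see whether only older protocols are accepted.

If the server only negotiates TLSv1.2, a workaround that has worked on Ambari 2.x agents (verify against the docs for your exact version) is to pin the agent to that protocol in /etc/ambari-agent/conf/ambari-agent.ini:

# /etc/ambari-agent/conf/ambari-agent.ini
[security]
force_https_protocol=PROTOCOL_TLSv1_2

then run "ambari-agent restart" on the affected host. If registration still fails after that, clearing stale agent certificates under /var/lib/ambari-agent/keys before restarting is also worth trying, since the log shows the agent re-downloading the server certificate on every attempt.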