Member since
07-18-2018
53
Posts
0
Kudos Received
2
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1095 | 07-30-2018 04:19 PM
 | 2609 | 07-26-2018 06:53 PM
08-21-2019
05:40 PM
Thanks for the quick response. The firewall is disabled, and the nc -v command seems to work.

[root@msl-dpe-perf80-100g resources]# java -cp /var/lib/ambari-server/resources/DBConnectionVerification.jar:/usr/share/java/mysql-connector-java.jar org.apache.ambari.server.DBConnectionVerification "jdbc:mysql://msl-dpe-perf80-100g.msl.lab:3306/ambari" "ambari" "bigdata" com.mysql.jdbc.Driver
ERROR: Unable to connect to the DB. Please check DB connection properties.
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Could not create connection to database server.
[root@msl-dpe-perf80-100g resources]# firewall-cmd --state
not running
[root@msl-dpe-perf80-100g resources]# telnet msl-dpe-perf80-100g.msl.lab 3306
Trying 10.100.10.80...
Connected to msl-dpe-perf80-100g.msl.lab.
Escape character is '^]'.
8.0.17 [binary handshake data] caching_sha2_password
Connection closed by foreign host.
[root@msl-dpe-perf80-100g resources]# nc -v msl-dpe-perf80-100g.msl.lab 3306
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Connected to 10.100.10.80:3306.
8.0.17 [binary handshake data] caching_sha2_password
Ncat: Broken pipe.

From the command line, I can access MySQL. See below for details:

[root@msl-dpe-perf80-100g resources]# mysql -uambari -pbigdata
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 172
Server version: 8.0.17 MySQL Community Server - GPL
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| ambari |
| druid |
| information_schema |
| mysql |
| performance_schema |
| registry |
| streamline |
| superset |
| sys |
+--------------------+
9 rows in set (0.01 sec)
mysql> use ambari;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> show tables;
+-------------------------------+
| Tables_in_ambari |
+-------------------------------+
| ClusterHostMapping |
| QRTZ_BLOB_TRIGGERS |
| QRTZ_CALENDARS |
| QRTZ_CRON_TRIGGERS |
| QRTZ_FIRED_TRIGGERS |
| QRTZ_JOB_DETAILS |
| QRTZ_LOCKS |
| QRTZ_PAUSED_TRIGGER_GRPS |
| QRTZ_SCHEDULER_STATE |
| QRTZ_SIMPLE_TRIGGERS |
| QRTZ_SIMPROP_TRIGGERS |
| QRTZ_TRIGGERS |
| adminpermission |
| adminprincipal |
| adminprincipaltype |
| adminprivilege |
| adminresource |
| adminresourcetype |
| alert_current |
| alert_definition |
| alert_group |
| alert_group_target |
| alert_grouping |
| alert_history |
| alert_notice |
| alert_target |
| alert_target_states |
| ambari_configuration |
| ambari_operation_history |
| ambari_sequences |
| artifact |
| blueprint |
| blueprint_configuration |
| blueprint_setting |
| clusterconfig |
| clusters |
| clusterservices |
| clusterstate |
| confgroupclusterconfigmapping |
| configgroup |
| configgrouphostmapping |
| execution_command |
| extension |
| extensionlink |
| host_role_command |
| host_version |
| hostcomponentdesiredstate |
| hostcomponentstate |
| hostconfigmapping |
| hostgroup |
| hostgroup_component |
| hostgroup_configuration |
| hosts |
| hoststate |
| kerberos_descriptor |
| kerberos_keytab |
| kerberos_keytab_principal |
| kerberos_principal |
| key_value_store |
| kkp_mapping_service |
| metainfo |
| permission_roleauthorization |
| remoteambaricluster |
| remoteambariclusterservice |
| repo_applicable_services |
| repo_definition |
| repo_os |
| repo_tags |
| repo_version |
| request |
| requestoperationlevel |
| requestresourcefilter |
| requestschedule |
| requestschedulebatchrequest |
| role_success_criteria |
| roleauthorization |
| servicecomponent_version |
| servicecomponentdesiredstate |
| serviceconfig |
| serviceconfighosts |
| serviceconfigmapping |
| servicedesiredstate |
| setting |
| stack |
| stage |
| topology_host_info |
| topology_host_request |
| topology_host_task |
| topology_hostgroup |
| topology_logical_request |
| topology_logical_task |
| topology_request |
| upgrade |
| upgrade_group |
| upgrade_history |
| upgrade_item |
| user_authentication |
| users |
| viewentity |
| viewinstance |
| viewinstancedata |
| viewinstanceproperty |
| viewmain |
| viewparameter |
| viewresource |
| viewurl |
| widget |
| widget_layout |
| widget_layout_user_widget |
+-------------------------------+
109 rows in set (0.00 sec)
mysql> SELECT User FROM mysql.user;
+------------------+
| User |
+------------------+
| ambari |
| druid |
| rangerdba |
| registry |
| streamline |
| superset |
| ambari |
| mysql.infoschema |
| mysql.session |
| mysql.sys |
| rangerdba |
| root |
| ambari |
| rangerdba |
+------------------+
14 rows in set (0.00 sec)
mysql> SHOW GRANTS FOR 'ambari'@'msl-dpe-perf80-100g.msl.lab';
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Grants for ambari@msl-dpe-perf80-100g.msl.lab |
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SHUTDOWN, PROCESS, FILE, REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, CREATE TABLESPACE, CREATE ROLE, DROP ROLE ON *.* TO `ambari`@`msl-dpe-perf80-100g.msl.lab` |
| GRANT APPLICATION_PASSWORD_ADMIN,AUDIT_ADMIN,BACKUP_ADMIN,BINLOG_ADMIN,BINLOG_ENCRYPTION_ADMIN,CLONE_ADMIN,CONNECTION_ADMIN,ENCRYPTION_KEY_ADMIN,GROUP_REPLICATION_ADMIN,INNODB_REDO_LOG_ARCHIVE,PERSIST_RO_VARIABLES_ADMIN,REPLICATION_SLAVE_ADMIN,RESOURCE_GROUP_ADMIN,RESOURCE_GROUP_USER,ROLE_ADMIN,SERVICE_CONNECTION_ADMIN,SESSION_VARIABLES_ADMIN,SET_USER_ID,SYSTEM_USER,SYSTEM_VARIABLES_ADMIN,TABLE_ENCRYPTION_ADMIN,XA_RECOVER_ADMIN ON *.* TO `ambari`@`msl-dpe-perf80-100g.msl.lab` |
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2 rows in set (0.01 sec)
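A note on the handshake output above: the telnet and nc probes show the server (MySQL 8.0.17) advertising the caching_sha2_password authentication plugin, which is the MySQL 8 default and which the old Connector/J 5.1 driver does not support. That would explain the JDBC failure even though the mysql client connects fine. A possible workaround, assuming the grants shown above belong to the same ambari account the server uses, is to switch that account to the legacy plugin (a sketch, not verified on this cluster):

mysql> ALTER USER 'ambari'@'msl-dpe-perf80-100g.msl.lab' IDENTIFIED WITH mysql_native_password BY 'bigdata';
mysql> FLUSH PRIVILEGES;

After that, rerunning the DBConnectionVerification command above should show whether the driver can now connect.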
... View more
- Tags:
- ambari-server
- MySQL
08-21-2019
04:58 PM
Yes. I posted with a link to the Ambari installation document, like this: https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.3.0/bk_ambari-installation/content/set_up_the_ambari_server.html Could this be the issue? By the way, one of my questions is showing up now.
... View more
08-21-2019
04:19 PM
Hello, I am having trouble posting questions on the community. It keeps marking my questions as spam, and my disputes to the moderator get no answers. Does anyone know how to post questions without being treated as spam? Thanks, Harry
... View more
08-21-2019
03:54 PM
I am trying to install HDP 3.1 with Ambari 2.7, following this document: https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.3.0/bk_ambari-installation/content/set_up_the_ambari_server.html All software installation completed, and the MySQL database had been created. I then set up Ambari with "ambari-server setup" followed by "ambari-server start", but I got the following errors. I have listed the screen output from ambari-server setup and part of ambari-server.log to show the error messages. Please help. Thanks, Harry

Setup and start ambari-server:

[root@msl-dpe-perf80-100g resources]# ambari-server setup
Using python /usr/bin/python
Setup ambari-server
Checking SELinux...
SELinux status is 'disabled'
Customize user account for ambari-server daemon [y/n] (n)?
Adjusting ambari-server permissions and ownership...
Checking firewall status...
Checking JDK...
Do you want to change Oracle JDK [y/n] (n)? Check JDK version for Ambari Server...
JDK version found: 8
Minimum JDK version is 8 for Ambari. Skipping to setup different JDK for Ambari Server.
Checking GPL software agreement...
Completing setup...
Configuring database...
Enter advanced database configuration [y/n] (n)? y
Configuring database...
==============================================================================
Choose one of the following options:
[1] - PostgreSQL (Embedded)
[2] - Oracle
[3] - MySQL / MariaDB
[4] - PostgreSQL
[5] - Microsoft SQL Server (Tech Preview)
[6] - SQL Anywhere
[7] - BDB
==============================================================================
Enter choice (3): 3
Hostname (localhost): msl-dpe-perf80-100g.msl.lab
Port (3306):
Database name (ambari):
Username (ambari):
Enter Database Password (bigdata):
Configuring ambari database...
Configuring remote database connection properties...
WARNING: Before starting Ambari Server, you must run the following DDL directly from the database shell to create the schema: /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql
Proceed with configuring remote database connection properties [y/n] (y)?
Extracting system views...
.....
Ambari repo file contains latest json url http://public-repo-1.hortonworks.com/HDP/hdp_urlinfo.json, updating stacks repoinfos with it...
Adjusting ambari-server permissions and ownership...
Ambari Server 'setup' completed successfully.
[root@msl-dpe-perf80-100g resources]# ambari-server start
Using python /usr/bin/python
Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start........................................ERROR: Exiting with exit code -1.
REASON: Ambari Server java process has stopped. Please check the logs for more information.
[root@msl-dpe-perf80-100g resources]# view /var/log/ambari-server/ambari-server.log

ambari-server.log errors:

2019-08-21 15:30:03,832 INFO [main] AuditLoggerModule:82 - Binding audit event creator class org.apache.ambari.server.audit.request.eventcreator.UpgradeItemEventCreator
2019-08-21 15:30:03,832 INFO [main] AuditLoggerModule:82 - Binding audit event creator class org.apache.ambari.server.audit.request.eventcreator.UserEventCreator
2019-08-21 15:30:03,832 INFO [main] AuditLoggerModule:82 - Binding audit event creator class org.apache.ambari.server.audit.request.eventcreator.ValidationIgnoreEventCreator
2019-08-21 15:30:03,832 INFO [main] AuditLoggerModule:82 - Binding audit event creator class org.apache.ambari.server.audit.request.eventcreator.ViewInstanceEventCreator
2019-08-21 15:30:03,832 INFO [main] AuditLoggerModule:82 - Binding audit event creator class org.apache.ambari.server.audit.request.eventcreator.ViewPrivilegeEventCreator
2019-08-21 15:30:04,856 INFO [main] HostRoleCommandDAO:277 - Host role command status summary cache enabled !
2019-08-21 15:30:04,858 INFO [main] TransactionalLock$LockArea:121 - LockArea HRC_STATUS_CACHE is enabled
2019-08-21 15:30:05,081 INFO [main] AmbariServerConfigurationProvider:68 - Registered org.apache.ambari.server.ldap.service.AmbariLdapConfigurationProvider in event publisher
2019-08-21 15:30:05,584 INFO [main] CredentialStoreServiceImpl:60 - Initialized the temporary credential store. KeyStore entries will be retained for 90 minutes and will be actively purged
2019-08-21 15:30:05,593 INFO [main] LockFactory:54 - Lock profiling is disabled
2019-08-21 15:30:05,594 INFO [main] AmbariServer:1083 - Getting the controller
2019-08-21 15:30:06,282 INFO [pool-5-thread-1] AmbariServerConfigurationProvider:98 - JPA initialized event received: JpaInitializedEvent{eventType=JPA_INITIALIZED}
2019-08-21 15:30:06,282 INFO [pool-5-thread-1] AmbariServerConfigurationProvider:122 - Loading ldap-configuration configuration data
2019-08-21 15:30:06,377 INFO [main] AbstractPoolBackedDataSource:212 - Initializing c3p0 pool... com.mchange.v2.c3p0.ComboPooledDataSource [ acquireIncrement -> 5, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, contextClassLoaderSource -> caller, dataSourceName -> 2vsgnka41ekyetann3ejj|57bc27f5, debugUnreturnedConnectionStackTraces -> false, description -> null, driverClass -> com.mysql.jdbc.Driver, extensions -> {}, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, forceUseNamedDriverClass -> false, identityToken -> 2vsgnka41ekyetann3ejj|57bc27f5, idleConnectionTestPeriod -> 7200, initialPoolSize -> 5, jdbcUrl -> jdbc:mysql://msl-dpe-perf80-100g.msl.lab:3306/ambari, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 14400, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 32, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 5, numHelperThreads -> 3, preferredTestQuery -> SELECT 1, privilegeSpawnedThreads -> false, properties -> {user=******, password=******}, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, userOverrides -> {}, usesTraditionalReflectiveProxies -> false ]
2019-08-21 15:30:36,719 WARN [C3P0PooledConnectionPoolManager[identityToken->2vsgnka41ekyetann3ejj|57bc27f5]-HelperThread-#1] BasicResourcePool:223 - com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@30120b4 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (30). Last acquisition attempt exception:
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Could not create connection to database server.
at sun.reflect.GeneratedConstructorAccessor48.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.Util.getInstance(Util.java:386)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1015)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:989)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:975)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:920)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2570)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2306)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:839)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:49)
at sun.reflect.GeneratedConstructorAccessor42.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:421)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:350)
... View more
- Tags:
- ambari-server
Labels:
- Apache Ambari
08-21-2019
03:40 PM
I am trying to set up a cluster with HDP 3.1. I have a server with a clean CentOS 7.4 installation. I installed Ambari 2.7.3 using this document: https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.3.0/bk_ambari-installation/content/set_up_the_ambari_server.html
After completing all steps, I got an error when trying to start the Ambari server.
[root@msl-dpe-perf80-100g resources]# ambari-server setup
Using python /usr/bin/python
Setup ambari-server
Checking SELinux...
SELinux status is 'disabled'
Customize user account for ambari-server daemon [y/n] (n)?
Adjusting ambari-server permissions and ownership...
Checking firewall status...
Checking JDK...
Do you want to change Oracle JDK [y/n] (n)? Check JDK version for Ambari Server...
JDK version found: 8
Minimum JDK version is 8 for Ambari. Skipping to setup different JDK for Ambari Server.
Checking GPL software agreement...
Completing setup...
Configuring database...
Enter advanced database configuration [y/n] (n)? y
Configuring database...
==============================================================================
Choose one of the following options:
[1] - PostgreSQL (Embedded)
[2] - Oracle
[3] - MySQL / MariaDB
[4] - PostgreSQL
[5] - Microsoft SQL Server (Tech Preview)
[6] - SQL Anywhere
[7] - BDB
==============================================================================
Enter choice (3): 3
Hostname (localhost): msl-dpe-perf80-100g.msl.lab
Port (3306):
Database name (ambari):
Username (ambari):
Enter Database Password (bigdata):
Configuring ambari database...
Configuring remote database connection properties...
WARNING: Before starting Ambari Server, you must run the following DDL directly from the database shell to create the schema: /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql
Proceed with configuring remote database connection properties [y/n] (y)?
Extracting system views...
.....
Ambari repo file contains latest json url http://public-repo-1.hortonworks.com/HDP/hdp_urlinfo.json, updating stacks repoinfos with it...
Adjusting ambari-server permissions and ownership...
Ambari Server 'setup' completed successfully.
[root@msl-dpe-perf80-100g resources]# ambari-server start
Using python /usr/bin/python
Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start........................................ERROR: Exiting with exit code -1.
REASON: Ambari Server java process has stopped. Please check the logs for more information.
[root@msl-dpe-perf80-100g resources]#
The error message reported in ambari-server.log is:
2019-08-21 15:30:03,832 INFO [main] AuditLoggerModule:82 - Binding audit event creator class org.apache.ambari.server.audit.request.eventcreator.ViewInstanceEventCreator
2019-08-21 15:30:03,832 INFO [main] AuditLoggerModule:82 - Binding audit event creator class org.apache.ambari.server.audit.request.eventcreator.ViewPrivilegeEventCreator
2019-08-21 15:30:04,856 INFO [main] HostRoleCommandDAO:277 - Host role command status summary cache enabled !
2019-08-21 15:30:04,858 INFO [main] TransactionalLock$LockArea:121 - LockArea HRC_STATUS_CACHE is enabled
2019-08-21 15:30:05,081 INFO [main] AmbariServerConfigurationProvider:68 - Registered org.apache.ambari.server.ldap.service.AmbariLdapConfigurationProvider in event publisher
2019-08-21 15:30:05,584 INFO [main] CredentialStoreServiceImpl:60 - Initialized the temporary credential store. KeyStore entries will be retained for 90 minutes and will be actively purged
2019-08-21 15:30:05,593 INFO [main] LockFactory:54 - Lock profiling is disabled
2019-08-21 15:30:05,594 INFO [main] AmbariServer:1083 - Getting the controller
2019-08-21 15:30:06,282 INFO [pool-5-thread-1] AmbariServerConfigurationProvider:98 - JPA initialized event received: JpaInitializedEvent{eventType=JPA_INITIALIZED}
2019-08-21 15:30:06,282 INFO [pool-5-thread-1] AmbariServerConfigurationProvider:122 - Loading ldap-configuration configuration data
2019-08-21 15:30:06,377 INFO [main] AbstractPoolBackedDataSource:212 - Initializing c3p0 pool... com.mchange.v2.c3p0.ComboPooledDataSource [ acquireIncrement -> 5, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, contextClassLoaderSource -> caller, dataSourceName -> 2vsgnka41ekyetann3ejj|57bc27f5, debugUnreturnedConnectionStackTraces -> false, description -> null, driverClass -> com.mysql.jdbc.Driver, extensions -> {}, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, forceUseNamedDriverClass -> false, identityToken -> 2vsgnka41ekyetann3ejj|57bc27f5, idleConnectionTestPeriod -> 7200, initialPoolSize -> 5, jdbcUrl -> jdbc:mysql://msl-dpe-perf80-100g.msl.lab:3306/ambari, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 14400, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 32, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 5, numHelperThreads -> 3, preferredTestQuery -> SELECT 1, privilegeSpawnedThreads -> false, properties -> {user=******, password=******}, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, userOverrides -> {}, usesTraditionalReflectiveProxies -> false ]
2019-08-21 15:30:36,719 WARN [C3P0PooledConnectionPoolManager[identityToken->2vsgnka41ekyetann3ejj|57bc27f5]-HelperThread-#1] BasicResourcePool:223 - com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@30120b4 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (30). Last acquisition attempt exception:
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Could not create connection to database server.
at sun.reflect.GeneratedConstructorAccessor48.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.Util.getInstance(Util.java:386)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1015)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:989)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:975)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:920)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2570)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2306)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:839)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:49)
at sun.reflect.GeneratedConstructorAccessor42.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:421)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:350)
at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:175)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:220)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:206)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:203)
at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1138)
The MySQL connector was installed as follows:
harry.li@msl-dpe-perf80:~$ sudo yum install mysql-connector-java*
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* epel: d2lzkl7pfhq30w.cloudfront.net
Resolving Dependencies
--> Running transaction check
---> Package mysql-connector-java.noarch 1:5.1.25-3.el7 will be installed
--> Processing Dependency: jta >= 1.0 for package: 1:mysql-connector-java-5.1.25-3.el7.noarch
--> Processing Dependency: java >= 1:1.6.0 for package: 1:mysql-connector-java-5.1.25-3.el7.noarch
--> Processing Dependency: slf4j for package: 1:mysql-connector-java-5.1.25-3.el7.noarch
--> Processing Dependency: jpackage-utils for package: 1:mysql-connector-java-5.1.25-3.el7.noarch
--> Running transaction check
---> Package geronimo-jta.noarch 0:1.1.1-17.el7 will be installed
---> Package java-1.8.0-openjdk.x86_64 1:1.8.0.161-0.b14.el7_4 will be installed
--> Processing Dependency: java-1.8.0-openjdk-headless(x86-64) = 1:1.8.0.161-0.b14.el7_4 for package: 1:java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64
--> Processing Dependency: xorg-x11-fonts-Type1 for package: 1:java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64
--> Processing Dependency: libjvm.so(SUNWprivate_1.1)(64bit) for package: 1:java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64
--> Processing Dependency: libjli.so(SUNWprivate_1.1)(64bit) for package: 1:java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64
--> Processing Dependency: libjava.so(SUNWprivate_1.1)(64bit) for package: 1:java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64
--> Processing Dependency: libjvm.so()(64bit) for package: 1:java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64
--> Processing Dependency: libjli.so()(64bit) for package: 1:java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64
--> Processing Dependency: libjava.so()(64bit) for package: 1:java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64
--> Processing Dependency: libgif.so.4()(64bit) for package: 1:java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64
--> Processing Dependency: libawt.so()(64bit) for package: 1:java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64
---> Package javapackages-tools.noarch 0:3.4.1-11.el7 will be installed
--> Processing Dependency: python-javapackages = 3.4.1-11.el7 for package: javapackages-tools-3.4.1-11.el7.noarch
--> Processing Dependency: libxslt for package: javapackages-tools-3.4.1-11.el7.noarch
---> Package slf4j.noarch 0:1.7.4-4.el7_4 will be installed
--> Processing Dependency: mvn(log4j:log4j) for package: slf4j-1.7.4-4.el7_4.noarch
--> Processing Dependency: mvn(javassist:javassist) for package: slf4j-1.7.4-4.el7_4.noarch
--> Processing Dependency: mvn(commons-logging:commons-logging) for package: slf4j-1.7.4-4.el7_4.noarch
--> Processing Dependency: mvn(commons-lang:commons-lang) for package: slf4j-1.7.4-4.el7_4.noarch
--> Processing Dependency: mvn(ch.qos.cal10n:cal10n-api) for package: slf4j-1.7.4-4.el7_4.noarch
--> Running transaction check
---> Package apache-commons-lang.noarch 0:2.6-15.el7 will be installed
... View more
Labels:
- Apache Ambari
05-11-2019
12:32 AM
This is when I try to start all services from Ambari. The status from the NameNode shows nothing started. How can I track where it got stuck?
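One way to see where a start operation is stuck (a sketch against the Ambari REST API, reusing the host and admin credentials that appear elsewhere in this thread):

curl -u admin:admin -H "X-Requested-By: ambari" "http://msl-dpe-perf88.msl.lab:8080/api/v1/clusters/HW8N/requests?fields=Requests/request_status,Requests/request_context"

Each request id returned there can then be drilled into with .../requests/<id>/tasks to see which host and component the operation is waiting on.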
... View more
05-10-2019
11:18 PM
This is what I had done: I set up the cluster with 8 DataNodes and it tested fine. I then decommissioned 4 DataNodes. The smaller cluster with 1 NameNode and 4 DataNodes worked fine. I then brought back the 4 decommissioned DataNodes through the Ambari Web UI "Recommission" command. The cluster worked fine for a few days until it suddenly ran into this problem. Here is the output from curl:

root@msl-dpe-perf88:/home/harry.li# curl -i -H "X-Requested-By: ambari" -u admin:admin -X GET http://msl-dpe-perf88.msl.lab:8080/api/v1/hosts
HTTP/1.1 200 OK
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Cache-Control: no-store
Pragma: no-cache
Set-Cookie: AMBARISESSIONID=onn7abudz0gc1fzd6hw0wp9nj;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
User: admin
Content-Type: text/plain
Vary: Accept-Encoding, User-Agent
Content-Length: 1940
{
"href" : "http://msl-dpe-perf88.msl.lab:8080/api/v1/hosts",
"items" : [
{
"href" : "http://msl-dpe-perf88.msl.lab:8080/api/v1/hosts/msl-dpe-d10.msl.lab",
"Hosts" : {
"cluster_name" : "HW8N",
"host_name" : "msl-dpe-d10.msl.lab"
}
},
{
"href" : "http://msl-dpe-perf88.msl.lab:8080/api/v1/hosts/msl-dpe-d9.msl.lab",
"Hosts" : {
"cluster_name" : "HW8N",
"host_name" : "msl-dpe-d9.msl.lab"
}
},
{
"href" : "http://msl-dpe-perf88.msl.lab:8080/api/v1/hosts/msl-dpe-perf82.msl.lab",
"Hosts" : {
"cluster_name" : "HW8N",
"host_name" : "msl-dpe-perf82.msl.lab"
}
},
{
"href" : "http://msl-dpe-perf88.msl.lab:8080/api/v1/hosts/msl-dpe-perf83.msl.lab",
"Hosts" : {
"cluster_name" : "HW8N",
"host_name" : "msl-dpe-perf83.msl.lab"
}
},
{
"href" : "http://msl-dpe-perf88.msl.lab:8080/api/v1/hosts/msl-dpe-perf84.msl.lab",
"Hosts" : {
"cluster_name" : "HW8N",
"host_name" : "msl-dpe-perf84.msl.lab"
}
},
{
"href" : "http://msl-dpe-perf88.msl.lab:8080/api/v1/hosts/msl-dpe-perf85.msl.lab",
"Hosts" : {
"cluster_name" : "HW8N",
"host_name" : "msl-dpe-perf85.msl.lab"
}
},
{
"href" : "http://msl-dpe-perf88.msl.lab:8080/api/v1/hosts/msl-dpe-perf86.msl.lab",
"Hosts" : {
"cluster_name" : "HW8N",
"host_name" : "msl-dpe-perf86.msl.lab"
}
},
{
"href" : "http://msl-dpe-perf88.msl.lab:8080/api/v1/hosts/msl-dpe-perf87.msl.lab",
"Hosts" : {
"cluster_name" : "HW8N",
"host_name" : "msl-dpe-perf87.msl.lab"
}
},
{
"href" : "http://msl-dpe-perf88.msl.lab:8080/api/v1/hosts/msl-dpe-perf88.msl.lab",
"Hosts" : {
"cluster_name" : "HW8N",
"host_name" : "msl-dpe-perf88.msl.lab"
}
}
]
}
root@msl-dpe-perf88:/home/harry.li#
... View more
05-10-2019
10:30 PM
Looks like the hosts table is empty, even though I can see all hosts and their heartbeats in the UI.

mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| ambari |
| hive |
| mysql |
| performance_schema |
| sys |
+--------------------+
6 rows in set (0.00 sec)
mysql> select ipv4 from hosts;
Empty set (0.00 sec)
mysql> show tables;
+-------------------------------+
| Tables_in_ambari |
+-------------------------------+
| ClusterHostMapping |
| QRTZ_BLOB_TRIGGERS |
| QRTZ_CALENDARS |
| QRTZ_CRON_TRIGGERS |
| QRTZ_FIRED_TRIGGERS |
| QRTZ_JOB_DETAILS |
| QRTZ_LOCKS |
| QRTZ_PAUSED_TRIGGER_GRPS |
| QRTZ_SCHEDULER_STATE |
| QRTZ_SIMPLE_TRIGGERS |
| QRTZ_SIMPROP_TRIGGERS |
| QRTZ_TRIGGERS |
| adminpermission |
| adminprincipal |
| adminprincipaltype |
| adminprivilege |
| adminresource |
| adminresourcetype |
| alert_current |
| alert_definition |
| alert_group |
| alert_group_target |
| alert_grouping |
| alert_history |
| alert_notice |
| alert_target |
| alert_target_states |
| ambari_operation_history |
| ambari_sequences |
| artifact |
| blueprint |
| blueprint_configuration |
| blueprint_setting |
| clusterconfig |
| clusters |
| clusterservices |
| clusterstate |
| confgroupclusterconfigmapping |
| configgroup |
| configgrouphostmapping |
| execution_command |
| extension |
| extensionlink |
| groups |
| host_role_command |
| host_version |
| hostcomponentdesiredstate |
| hostcomponentstate |
| hostconfigmapping |
| hostgroup |
| hostgroup_component |
| hostgroup_configuration |
| hosts |
| hoststate |
| kerberos_descriptor |
| kerberos_principal |
| kerberos_principal_host |
| key_value_store |
| members |
| metainfo |
| permission_roleauthorization |
| remoteambaricluster |
| remoteambariclusterservice |
| repo_version |
| request |
| requestoperationlevel |
| requestresourcefilter |
| requestschedule |
| requestschedulebatchrequest |
| role_success_criteria |
| roleauthorization |
| servicecomponent_version |
| servicecomponentdesiredstate |
| serviceconfig |
| serviceconfighosts |
| serviceconfigmapping |
| servicedesiredstate |
| setting |
| stack |
| stage |
| topology_host_info |
| topology_host_request |
| topology_host_task |
| topology_hostgroup |
| topology_logical_request |
| topology_logical_task |
| topology_request |
| upgrade |
| upgrade_group |
| upgrade_history |
| upgrade_item |
| users |
| viewentity |
| viewinstance |
| viewinstancedata |
| viewinstanceproperty |
| viewmain |
| viewparameter |
| viewresource |
| viewurl |
| widget |
| widget_layout |
| widget_layout_user_widget |
+-------------------------------+
103 rows in set (0.01 sec)
mysql> select * from hosts;
Empty set (0.00 sec)
mysql>
... View more
05-10-2019
09:25 PM
Thanks for the command. I tried it, but it looks like I am not using MariaDB. Here is the output:

spark@msl-dpe-perf88:/home/harry.li/TPC/benchmarks/tpcds-HW$ mysql -u ambari -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.7.23-0ubuntu0.16.04.1 (Ubuntu)
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> use ambari;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> select host_id,host_name from hosts;
Empty set (0.00 sec)
mysql>
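One thing worth checking here (a sketch; paths per a standard Ambari install): the database the server actually uses is recorded in ambari.properties, so this shows whether it points at this MySQL instance at all:

grep '^server.jdbc' /etc/ambari-server/conf/ambari.properties

If server.jdbc.url names a different host or database, the hosts table being queried here is simply not the one Ambari writes to.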
... View more
05-08-2019
08:40 PM
How do I find the host_id? From the UI, I see the host_name, not the host_id.
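For reference, assuming Ambari's MySQL schema (the hosts table carries both columns, as the query later in this thread shows), the mapping can be pulled directly from the database:

mysql> SELECT host_id, host_name FROM ambari.hosts;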
... View more
05-08-2019
06:09 PM
I have a cluster that was running fine. I decommissioned a few DataNodes and then recommissioned them. Now I cannot start services using the Web UI. I verified that the Ambari server is running. I also followed the suggestions from this post by restarting the Ambari agents on all cluster nodes, rebooted all machines, and verified that port 8080 is listening.

root@server88:/home/user# netstat -anop | grep 8080
tcp6 0 0 :::8080 :::* LISTEN 20524/java off (0.00/0/0)
tcp6 0 0 10.1.30.180:8080 10.1.50.62:50358 ESTABLISHED 20524/java off (0.00/0/0)
tcp6 0 0 10.1.30.180:8080 10.1.50.62:50352 ESTABLISHED 20524/java off (0.00/0/0)
tcp6 0 0 10.1.30.180:8080 10.1.50.62:50360 ESTABLISHED 20524/java off (0.00/0/0)
I did not get any errors from the ambari-server log file either.
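Two quick checks that may help narrow this down (standard Ambari commands and log locations; a sketch, not specific to this cluster):

root@server88:/home/user# ambari-server status
root@server88:/home/user# ambari-agent status
root@server88:/home/user# tail -f /var/log/ambari-agent/ambari-agent.log

If the agents report running but the agent log shows heartbeat failures, the server and agents are likely disagreeing about hostnames or certificates.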
... View more
Labels:
- Apache Ambari
03-07-2019
05:47 PM
I have a cluster with 4 DataNodes and 1 NameNode. The cluster has 2 ZooKeeper servers, one on the NameNode and one on a DataNode. When I run a benchmark on the cluster, I notice that the DataNode with ZooKeeper is busier than the other 3 DataNodes. The extra workload is from org.apache.spark.deploy.yarn.ExecutorLauncher:

/usr/jdk64/jdk1.8.0_112/bin/java -server -Xmx512m -Djava.io.tmpdir=/hadoop/yarn/local/usercache/spark/appcache/application_1551897449716_0001/container_e11_1551897449716_0001_01_000001/tmp -Dhdp.version=2.6.5.1050-37 -Dspark.yarn.app.container.log.dir=/hadoop/yarn/log/application_1551897449716_0001/container_e11_1551897449716_0001_01_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg msl-dpe-d13.msl.lab:38313 --properties-file /hadoop/yarn/local/usercache/spark/appcache/application_1551897449716_0001/container_e11_1551897449716_0001_01_000001/__spark_conf__/__spark_conf__.properties

My questions are:
1. Should org.apache.spark.deploy.yarn.ExecutorLauncher run on the NameNode?
2. How can I move org.apache.spark.deploy.yarn.ExecutorLauncher to run on the NameNode?
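For context: org.apache.spark.deploy.yarn.ExecutorLauncher is the YARN application master used by Spark in yarn-client mode, so YARN places it on whichever NodeManager has capacity; it is not tied to the NameNode. If pinning it to particular hosts is required, one hedged option (assuming YARN node labels are configured on the cluster) is to constrain the AM at submit time:

spark-submit --conf spark.yarn.am.nodeLabelExpression=<label-on-desired-hosts> ...

Note that the NameNode will only ever receive the AM if it also runs a NodeManager.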
... View more
- Tags:
- YARN
Labels:
- Apache YARN
02-04-2019
03:50 PM
Hi Geoffrey, I have 4 DataNodes and no snapshots set. Here is the output from the commands:

hdfs@msl-dpe-perf88:/$ hdfs dfs -df -h
Filesystem Size Used Available Use%
hdfs://msl-dpe-perf88.msl.lab:8020 28.2 T 27.1 T 0 96%
hdfs@msl-dpe-perf88:/$ hdfs lsSnapshottableDir
hdfs@msl-dpe-perf88:/$
... View more
02-02-2019
12:28 AM
Hi Geoffrey, I had been using the -skipTrash option when deleting files, and the /user/hdfs/.Trash directory is empty. I also ran the -expunge command 24 hours ago. I still do not see disk space being freed. Here are the results from the dfsadmin command:

hdfs@msl-dpe-perf88:/$ hdfs dfs -ls /user/hdfs/.Trash
hdfs@msl-dpe-perf88:/$
hdfs@msl-dpe-perf88:/$ hdfs dfsadmin -report
Configured Capacity: 31048107810816 (28.24 TB)
Present Capacity: 29767722012672 (27.07 TB)
DFS Remaining: 0 (0 B)
DFS Used: 29767722012672 (27.07 TB)
DFS Used%: 100.00%
Under replicated blocks: 97449
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
-------------------------------------------------
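Given the empty trash, another hedged check is whether the remaining usage comes from over-replicated or corrupt blocks; the fsck summary reports both:

hdfs@msl-dpe-perf88:/$ hdfs fsck / | tail -n 30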
... View more
02-01-2019
11:25 PM
My HDFS has a total disk space of 28.2 TB, of which 15.1 TB is useful data. After a while, Ambari reported the disk space as 75% full, so I started "Balance HDFS" from Ambari. Since then, the available disk space decreased slowly until it was all gone. Now I have no usable disk space left. How can I reclaim the unused disk space?

hdfs@msl-dpe-perf88:/$ hdfs dfs -du -h -s /
15.1 T /
hdfs@msl-dpe-perf88:/$ hdfs dfs -df -h
Filesystem Size Used Available Use%
hdfs://msl-dpe-perf88.msl.lab:8020 28.2 T 27.1 T 0 96%
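A note on reading these two numbers: hdfs dfs -du reports logical (pre-replication) size, while hdfs dfs -df and dfsadmin -report show raw bytes across all DataNodes. So if the replication factor is 2, the 15.1 TB shown by du corresponds to roughly 30.2 TB raw, which by itself would overrun the 28.2 TB of capacity. Comparing the two views may show where the space went (a sketch):

hdfs@msl-dpe-perf88:/$ hdfs dfs -du -s -h /
hdfs@msl-dpe-perf88:/$ hdfs dfsadmin -report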
... View more
- Tags:
- Hadoop Core
- HDFS
Labels:
- Apache Hadoop
10-17-2018
11:00 PM
I need to expand an existing cluster with an additional DataNode. This question was answered earlier, but the document link is broken. Can anyone resend the link or answer the question again?
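From what I remember of the earlier answer (a hedged outline; menu names per the Ambari Web UI): Hosts > Actions > Add New Hosts launches the Add Host wizard. The same can be scripted against the REST API once the new machine runs a registered ambari-agent, e.g. with placeholder names:

curl -u admin:admin -H "X-Requested-By: ambari" -X POST "http://<ambari-host>:8080/api/v1/clusters/<cluster>/hosts/<new-host-fqdn>"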
... View more
10-10-2018
01:27 AM
I have a cluster of 8 DataNodes with 42 TB total. Replication is set to 2. After loading ~10 TB of data, I found that only ~7 TB was left. I then deleted 1 TB of data with the "-skipTrash" option, but did not see any extra disk space freed. The following is my disk usage:

hdfs@msl-dpe-perf87:/home/harry.li/tpcds_5.db$ hdfs dfs -df -h
Filesystem Size Used Available Use%
hdfs://msl-dpe-perf88.msl.lab:8020 42.4 T 32.7 T 7.3 T 77%
hdfs@msl-dpe-perf87:/home/harry.li/tpcds_5.db$ hdfs dfs -du -h /
9.0 T /TPCDS
979.2 M /app-logs
477.8 G /apps
0 /ats
918.2 M /hdp
0 /mapred
8.4 M /mr-history
0 /spark-history
5.7 M /spark2-history
2.4 K /tmp
105.7 G /user
Questions:
1. The math here does not seem to add up. With ~10 TB of data (replication 2), I should still have at least 20 TB left. Why do I have only 7 TB left?
2. Why did deleting data not free up the disk space?
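For question 1, a rough reconciliation attempt (assuming everything listed is at replication 2): the du listing above sums to about 9.6 TB logical, i.e. roughly 19.2 TB raw, yet 32.7 TB shows as used, so something is occupying an extra ~13 TB. Files written with a higher replication factor are one candidate; the per-file factor is the second column of a listing:

hdfs@msl-dpe-perf87:/home/harry.li/tpcds_5.db$ hdfs dfs -ls /TPCDS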
... View more
- Tags:
- Hadoop Core
- HDFS
Labels:
- Apache Hadoop
09-21-2018
06:18 PM
Yes. And I accepted the answer. Thanks.
... View more
09-18-2018
11:16 PM
I have a 2-DataNode cluster where each DataNode has a large-capacity disk used for HDFS. This disk is mounted at /hadoop. Now I need to add more storage to the cluster. According to the suggestion from this question, I need to create a new mount point, for example /extraDisk, and mount the disk there. Then I need to create another directory, /extraDisk/hdfsData. After that, add it to the "DataNode directories" field under the HDFS -> Configs -> Settings tab, as a comma-separated value. Here are my questions:
1. Since I have 2 DataNodes, do I have to repeat the above steps on both DataNodes?
2. What if I have different-sized disks on different DataNodes; can I still use the above steps?
3. How can I add one disk to just one of the DataNodes?
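For reference, the end state is just a comma-separated value in the "DataNode directories" field (which maps to dfs.datanode.data.dir); assuming the existing directory is /hadoop/hdfs/data, it would look like this (a sketch):

/hadoop/hdfs/data,/extraDisk/hdfsData

For questions 2 and 3, Ambari Config Groups can override this property for a subset of hosts, so different DataNodes can list different directories; HDFS itself tolerates heterogeneous disks across DataNodes.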
... View more
- Tags:
- Hadoop Core
- HDFS
Labels:
- Apache Hadoop
08-08-2018
10:13 PM
A similar issue still exists for MySQL at https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.0/bk_ambari-administration/content/using_ambari_with_mysql.html for MySQL connector 5.1.x.
... View more
08-07-2018
11:38 PM
I started a fresh Ambari installation on Ubuntu 16 following https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.0/bk_ambari-installation/content/ch_Getting_Ready.html. All options used default values, and the installation completed successfully. During the process, it did not ask for MySQL connector installation. When starting hive-metastore, it complains:

SLF4J: Found binding in [jar:file:/usr/hdp/2.6.5.0-292/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL: jdbc:mysql://msl-dpe-perf86.dhcp.msl.lab/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
Underlying cause: com.mysql.cj.jdbc.exceptions.CommunicationsException : Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
SQL Error code: 0
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
at org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:80)
at org.apache.hive.beeline.HiveSchemaTool.getConnectionToMetastore(HiveSchemaTool.java:133)
at org.apache.hive.beeline.HiveSchemaTool.testConnectionToMetastore(HiveSchemaTool.java:187)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:291)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:277)
at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:526)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:832)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:456)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:240)
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:207)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:76)
... 11 more
Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:61)
at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:105)
at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:151)
at com.mysql.cj.exceptions.ExceptionFactory.createCommunicationsException(ExceptionFactory.java:167)
at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:91)
at com.mysql.cj.NativeSession.connect(NativeSession.java:152)
at com.mysql.cj.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:952)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:822)
... 17 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at com.mysql.cj.protocol.StandardSocketFactory.connect(StandardSocketFactory.java:173)
at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:65)
... 20 more
*** schemaTool failed ***

Based on my previous questions, this points to a missing MySQL connector. I followed https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.0/bk_ambari-administration/content/using_ambari_with_mysql.html, and downloaded and installed the connector. But at step #2, I ran into the following error. What should I do to get MySQL running? Note: this machine has a fresh Ubuntu 16.04 installation with only Ambari installed.

root@msl-dpe-perf87:/usr/share/java# ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
Using python /usr/bin/python
Setup ambari-server
Copying /usr/share/java/mysql-connector-java.jar to /var/lib/ambari-server/resources
If you are updating existing jdbc driver jar for mysql with mysql-connector-java.jar. Please remove the old driver jar, from all hosts. Restarting services that need the driver, will automatically copy the new jar to the hosts.
JDBC driver was successfully initialized.
Ambari Server 'setup' completed successfully.
root@msl-dpe-perf87:/usr/share/java# ls /usr/share/java/mysql-connector-java.jar
/usr/share/java/mysql-connector-java.jar
root@msl-dpe-perf87:/usr/share/java# mysql -u root -p
The program 'mysql' can be found in the following packages:
* mysql-client-core-5.7
* mariadb-client-core-10.0
Try: apt install <selected package>
root@msl-dpe-perf87:/usr/share/java#
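The last error just means no MySQL client (or server) is installed on this machine yet; a sketch of putting one in place on Ubuntu 16.04, using the package names the output above suggests:

root@msl-dpe-perf87:/usr/share/java# apt-get install mysql-server
root@msl-dpe-perf87:/usr/share/java# systemctl status mysql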
... View more
08-03-2018
07:34 PM
After the connector is installed, do I need to restart the Ambari Server? Either way, I still have the same error:
1. After installing the connector following the instructions above, without restarting the Ambari Server, I still get the same error.
2. After I changed the configuration database to [3], as in step #4 of the instructions, I can no longer start the Ambari server. I get the message "Unable to determine server PID". At this point, the Ambari server is down, and in order to start it, I have to rerun "ambari-server setup" to change the configuration database back to [1].

Another question: before I try to start the Hive metastore, is there a way to verify the SQL connector? (See the verification sketch at the end of this post.)

root@msl-dpe-perf43:/var/lib/ambari-server/resources# ambari-server setup
Using python /usr/bin/python
Setup ambari-server
Checking SELinux...
WARNING: Could not run /usr/sbin/sestatus: OK
Customize user account for ambari-server daemon [y/n] (n)?
Adjusting ambari-server permissions and ownership...
Checking firewall status...
Checking JDK...
Do you want to change Oracle JDK [y/n] (n)? Checking GPL software agreement...
Completing setup...
Configuring database...
Enter advanced database configuration [y/n] (n)? y
Configuring database...
==============================================================================
Choose one of the following options:
[1] - PostgreSQL (Embedded)
[2] - Oracle
[3] - MySQL / MariaDB
[4] - PostgreSQL
[5] - Microsoft SQL Server (Tech Preview)
[6] - SQL Anywhere
[7] - BDB
==============================================================================
Enter choice (1): 3
Hostname (localhost):
Port (3306):
Database name (ambari):
Username (ambari):
Enter Database Password (ambari):
Configuring ambari database...
Configuring remote database connection properties...
WARNING: Before starting Ambari Server, you must run the following DDL against the database to create the schema: /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql
Proceed with configuring remote database connection properties [y/n] (y)?
Extracting system views...
............
Adjusting ambari-server permissions and ownership...
Ambari Server 'setup' completed successfully.
root@msl-dpe-perf43:/var/lib/ambari-server/resources# ambari-server status
Using python /usr/bin/python
Ambari-server status
Ambari Server running
Found Ambari Server PID: 18700 at: /var/run/ambari-server/ambari-server.pid
root@msl-dpe-perf43:/var/lib/ambari-server/resources# ambari-server restart
Using python /usr/bin/python
Restarting ambari-server
Waiting for server stop...
Ambari Server stopped
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start.........Unable to determine server PID. Retrying...
......Unable to determine server PID. Retrying...
......Unable to determine server PID. Retrying...
ERROR: Exiting with exit code -1.
REASON: Ambari Server java process died with exitcode 1. Check /var/log/ambari-server/ambari-server.out for more information.

And here are some error messages from the .out file:

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
Exception in thread "main" com.google.inject.CreationException: Guice creation errors:
1) Error injecting constructor, java.lang.RuntimeException: Error while creating database accessor
at org.apache.ambari.server.orm.DBAccessorImpl.<init>(DBAccessorImpl.java:87)
at org.apache.ambari.server.orm.DBAccessorImpl.class(DBAccessorImpl.java:75)
while locating org.apache.ambari.server.orm.DBAccessorImpl
while locating org.apache.ambari.server.orm.DBAccessor
for field at org.apache.ambari.server.orm.dao.DaoUtils.dbAccessor(DaoUtils.java:36)
at org.apache.ambari.server.orm.dao.DaoUtils.class(DaoUtils.java:36)
while locating org.apache.ambari.server.orm.dao.DaoUtils
for field at org.apache.ambari.server.orm.dao.UserDAO.daoUtils(UserDAO.java:45)
at org.apache.ambari.server.orm.dao.UserDAO.class(UserDAO.java:45)
while locating org.apache.ambari.server.orm.dao.UserDAO
for field at org.apache.ambari.server.controller.internal.ActiveWidgetLayoutResourceProvider.userDAO(ActiveWidgetLayoutResourceProvider.java:61)
Caused by: java.lang.RuntimeException: Error while creating database accessor
at org.apache.ambari.server.orm.DBAccessorImpl.<init>(DBAccessorImpl.java:120)
at org.apache.ambari.server.orm.DBAccessorImpl$FastClassByGuice$86dbc63e.newInstance(<generated>)
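On the question of verifying the SQL connector before starting the metastore: Ambari ships a small checker for exactly this, DBConnectionVerification.jar under /var/lib/ambari-server/resources. A sketch of invoking it (hostname, database name, user, and password are placeholders here):

java -cp /var/lib/ambari-server/resources/DBConnectionVerification.jar:/usr/share/java/mysql-connector-java.jar org.apache.ambari.server.DBConnectionVerification "jdbc:mysql://<host>:3306/<db>" <user> <password> com.mysql.jdbc.Driver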
... View more
08-02-2018
11:55 PM
Hi Amarnath and Geoffrey, I cleaned up my system and reinstalled everything with Ambari. The installation went well, but I still cannot start the Metastore. This time, though, I got different error messages. I created a new ticket. Could you help check it out? Thanks, Harry
... View more
08-02-2018
11:51 PM
After the Ambari installation completed successfully, it failed to start the following servers:
History Server/MapReduce2
Hive Metastore/Hive
HiveServer2/Hive
Spark History Server/Spark
Spark2 History Server/Spark2
All these servers are on the NameNode. Four error messages complain that the DataNode cannot be found, and I verified that the DataNode is up and running. The following is the error message for History Server/MapReduce2:

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py", line 129, in <module>
HistoryServer().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py", line 96, in start
skip=params.sysprep_skip_copy_tarballs_hdfs)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/copy_tarball.py", line 479, in copy_to_hdfs
replace_existing_files=replace_existing_files,
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 606, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 603, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 338, in action_delayed
self._create_resource()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 354, in _create_resource
self._create_file(self.main_resource.resource.target, source=self.main_resource.resource.source, mode=self.mode)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 469, in _create_file
self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, assertable_result=False, file_to_put=source, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 177, in run_command
return self._run_command(*args, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 250, in _run_command
raise WebHDFSCallException(err_msg, result_dict)
resource_management.libraries.providers.hdfs_resource.WebHDFSCallException: Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary @/usr/hdp/2.6.5.0-292/hadoop/mapreduce.tar.gz -H 'Content-Type: application/octet-stream' 'http://msl-dpe-perf43.msl.lab:50070/webhdfs/v1/hdp/apps/2.6.5.0-292/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444'' returned status_code=403.
{
"RemoteException": {
"exception": "IOException",
"javaClassName": "java.io.IOException",
"message": "Failed to find datanode, suggest to check cluster health. excludeDatanodes=null"
}
}

Here is the error message for Hive Metastore:

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 203, in <module>
HiveMetastore().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 54, in start
self.configure(env)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 120, in locking_configure
original_configure(obj, *args, **kw)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 72, in configure
hive(name = 'metastore')
File "/usr/lib/ambari-agent/lib/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", line 310, in hive
jdbc_connector(params.hive_jdbc_target, params.hive_previous_jdbc_jar)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", line 527, in jdbc_connector
content = DownloadSource(params.driver_curl_source))
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 123, in action_create
content = self._get_content()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 160, in _get_content
return content()
File "/usr/lib/ambari-agent/lib/resource_management/core/source.py", line 52, in __call__
return self.get_content()
File "/usr/lib/ambari-agent/lib/resource_management/core/source.py", line 197, in get_content
raise Fail("Failed to download file from {0} due to HTTP error: {1}".format(self.url, str(ex)))
resource_management.core.exceptions.Fail: Failed to download file from http://msl-dpe-perf43.msl.lab:8080/resources/mysql-connector-java.jar due to HTTP error: HTTP Error 404: Not Found
Error message for HiveServer2:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server.py", line 161, in <module>
HiveServer().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server.py", line 77, in start
self.configure(env) # FOR SECURITY
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 120, in locking_configure
original_configure(obj, *args, **kw)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server.py", line 51, in configure
hive(name='hiveserver2')
File "/usr/lib/ambari-agent/lib/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", line 145, in hive
copy_tarball.copy_to_hdfs("mapreduce", params.user_group, params.hdfs_user, skip=params.sysprep_skip_copy_tarballs_hdfs)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/copy_tarball.py", line 479, in copy_to_hdfs
replace_existing_files=replace_existing_files,
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 606, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 603, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 338, in action_delayed
self._create_resource()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 354, in _create_resource
self._create_file(self.main_resource.resource.target, source=self.main_resource.resource.source, mode=self.mode)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 469, in _create_file
self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, assertable_result=False, file_to_put=source, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 177, in run_command
return self._run_command(*args, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 250, in _run_command
raise WebHDFSCallException(err_msg, result_dict)
resource_management.libraries.providers.hdfs_resource.WebHDFSCallException: Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary @/usr/hdp/2.6.5.0-292/hadoop/mapreduce.tar.gz -H 'Content-Type: application/octet-stream' 'http://msl-dpe-perf43.msl.lab:50070/webhdfs/v1/hdp/apps/2.6.5.0-292/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444'' returned status_code=403.
{
"RemoteException": {
"exception": "IOException",
"javaClassName": "java.io.IOException",
"message": "Failed to find datanode, suggest to check cluster health. excludeDatanodes=null"
}
}
Error message for the Spark history server:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/SPARK/1.2.1/package/scripts/job_history_server.py", line 98, in <module>
JobHistoryServer().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/SPARK/1.2.1/package/scripts/job_history_server.py", line 55, in start
spark_service('jobhistoryserver', upgrade_type=upgrade_type, action='start')
File "/var/lib/ambari-agent/cache/common-services/SPARK/1.2.1/package/scripts/spark_service.py", line 43, in spark_service
copy_to_hdfs("spark", params.user_group, params.hdfs_user, skip=params.sysprep_skip_copy_tarballs_hdfs)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/copy_tarball.py", line 479, in copy_to_hdfs
replace_existing_files=replace_existing_files,
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 606, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 603, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 338, in action_delayed
self._create_resource()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 354, in _create_resource
self._create_file(self.main_resource.resource.target, source=self.main_resource.resource.source, mode=self.mode)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 469, in _create_file
self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, assertable_result=False, file_to_put=source, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 177, in run_command
return self._run_command(*args, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 250, in _run_command
raise WebHDFSCallException(err_msg, result_dict)
resource_management.libraries.providers.hdfs_resource.WebHDFSCallException: Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary @/usr/hdp/2.6.5.0-292/spark/lib/spark-hdp-assembly.jar -H 'Content-Type: application/octet-stream' 'http://msl-dpe-perf43.msl.lab:50070/webhdfs/v1/hdp/apps/2.6.5.0-292/spark/spark-hdp-assembly.jar?op=CREATE&user.name=hdfs&overwrite=True&permission=444'' returned status_code=403.
{
"RemoteException": {
"exception": "IOException",
"javaClassName": "java.io.IOException",
"message": "Failed to find datanode, suggest to check cluster health. excludeDatanodes=null"
}
}
Error message for the Spark2 history server:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/SPARK/1.2.1/package/scripts/job_history_server.py", line 98, in <module>
JobHistoryServer().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/SPARK/1.2.1/package/scripts/job_history_server.py", line 55, in start
spark_service('jobhistoryserver', upgrade_type=upgrade_type, action='start')
File "/var/lib/ambari-agent/cache/common-services/SPARK/1.2.1/package/scripts/spark_service.py", line 43, in spark_service
copy_to_hdfs("spark", params.user_group, params.hdfs_user, skip=params.sysprep_skip_copy_tarballs_hdfs)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/copy_tarball.py", line 479, in copy_to_hdfs
replace_existing_files=replace_existing_files,
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 606, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 603, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 338, in action_delayed
self._create_resource()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 354, in _create_resource
self._create_file(self.main_resource.resource.target, source=self.main_resource.resource.source, mode=self.mode)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 469, in _create_file
self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, assertable_result=False, file_to_put=source, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 177, in run_command
return self._run_command(*args, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 250, in _run_command
raise WebHDFSCallException(err_msg, result_dict)
resource_management.libraries.providers.hdfs_resource.WebHDFSCallException: Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary @/usr/hdp/2.6.5.0-292/spark/lib/spark-hdp-assembly.jar -H 'Content-Type: application/octet-stream' 'http://msl-dpe-perf43.msl.lab:50070/webhdfs/v1/hdp/apps/2.6.5.0-292/spark/spark-hdp-assembly.jar?op=CREATE&user.name=hdfs&overwrite=True&permission=444'' returned status_code=403.
{
"RemoteException": {
"exception": "IOException",
"javaClassName": "java.io.IOException",
"message": "Failed to find datanode, suggest to check cluster health. excludeDatanodes=null"
}
}
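For reference, all four failures reduce to two underlying problems: WebHDFS returns 403 because the NameNode sees no live DataNode, and the agent gets a 404 for the MySQL connector jar from the Ambari server. Below is a minimal sketch of checks for both (this assumes the hdfs superuser exists and that the connector jar sits at /usr/share/java/mysql-connector-java.jar; adjust paths to your install):
# Confirm at least one DataNode is live before retrying the tarball uploads
sudo -u hdfs hdfs dfsadmin -report | grep -A 1 'Live datanodes'
# Exercise the same WebHDFS endpoint the failing scripts use
curl -sS 'http://msl-dpe-perf43.msl.lab:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hdfs'
# Register the connector so ambari-server can serve it from /resources
sudo ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar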
Labels:
- Apache Hive
- Apache Spark
08-02-2018
12:31 AM
Hi Amarnath, A question regarding the JDBC test program you recommended: when I run the test, why do I only succeed with 'localhost' but fail with the FQDN? Could you point out where the setting might be wrong? Here is what I got:
harry.li@msl-dpe-perf76:/usr/local/JDBC-test$ java -Djava.ext.dirs=/usr/local/JDBC-test/jline_sqlline__mysql_connector/ sqlline.SqlLine
sqlline version 1.0.2 by Marc Prud'hommeaux
sqlline> !connect jdbc:mysql://msl-dpe-perf74.msl.lab:3306/hive hive hive
Connecting to jdbc:mysql://msl-dpe-perf74.msl.lab:3306/hive
Error: Access denied for user 'hive'@'msl-dpe-perf76.msl.lab' (using password: YES) (state=28000,code=1045)
0: jdbc:mysql://msl-dpe-perf74.msl.lab:3306/h> !connect jdbc:mysql://localhost:3306/hive hive hive
Connecting to jdbc:mysql://localhost:3306/hive
Error: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. (state=08S01,code=0)
1: jdbc:mysql://localhost:3306/hive>
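The Access denied from perf76 suggests MySQL has no 'hive'@'<host>' account matching that client. A quick sketch for checking which user/host pairs exist (this assumes root access to mysql on perf74; the wildcard grant is only illustrative, MySQL 5.7 syntax):
# list the user/host pairs MySQL knows for 'hive'
sudo mysql -u root -e "SELECT user, host FROM mysql.user WHERE user = 'hive';"
# if no row matches the connecting host, a wildcard grant would cover all of them
sudo mysql -u root -e "GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%' IDENTIFIED BY 'hive'; FLUSH PRIVILEGES;"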
08-02-2018
12:16 AM
Here is the content of /etc/hosts and /etc/hostname. My cluster contains 3 nodes: perf74 is the NameNode, perf75 is the Secondary NameNode, and perf76 is the DataNode. All 3 machines have a very similar setup.
harry.li@msl-dpe-perf74:/etc/mysql/mysql.conf.d$ cat /etc/hostname
msl-dpe-perf74.msl.lab
harry.li@msl-dpe-perf74:/etc/mysql/mysql.conf.d$ cat /etc/hosts
127.0.0.1   localhost
#127.0.1.1  msl-dpe-perf74.msl.lab
10.1.30.221 msl-dpe-perf74.msl.lab
10.1.30.223 msl-dpe-perf75.msl.lab
10.1.30.192 msl-dpe-perf76.msl.lab
10.10.98.64 perf74
10.10.98.43 perf75
10.10.98.51 perf76
10.10.98.67 perf77
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
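To confirm each node resolves these names consistently with this file, here is a quick sketch (assuming the usual nsswitch order of files before DNS; the expected values come from the entries above):
hostname -f                          # should print msl-dpe-perf74.msl.lab
getent hosts msl-dpe-perf74.msl.lab  # should print 10.1.30.221
getent hosts msl-dpe-perf76.msl.lab  # should print 10.1.30.192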
08-02-2018
12:12 AM
Tried it, but it did not help. The full mysql session is listed below.
harry.li@msl-dpe-perf74:/etc/mysql/mysql.conf.d$ sudo mysql -u root -h localhost
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 30
Server version: 5.7.23-0ubuntu0.16.04.1 (Ubuntu)
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'localhost' IDENTIFIED BY 'hive';
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
mysql> quit;
Bye
harry.li@msl-dpe-perf74:/etc/mysql/mysql.conf.d$ sudo /etc/init.d/mysql restart
[ ok ] Restarting mysql (via systemctl): mysql.service.
harry.li@msl-dpe-perf74:/etc/mysql/mysql.conf.d$ mysql -u hive -phive -h msl-dpe-perf74.msl.lab
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 29
Server version: 5.7.23-0ubuntu0.16.04.1 (Ubuntu)
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| hive |
+--------------------+
2 rows in set (0.00 sec)
mysql> quit;
Bye
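One subtlety: -h localhost connects over the UNIX socket, while schematool connects over TCP, and the grant above only covers 'hive'@'localhost'. Here is a sketch that forces the TCP path with the same credentials and shows which account MySQL actually matched:
# force a TCP connection and report the authenticated account
mysql -u hive -phive -h 127.0.0.1 -P 3306 hive -e "SELECT CURRENT_USER(), USER();"
# CURRENT_USER() must correspond to a user@host pair that holds the grants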
08-01-2018
11:13 PM
Hi Amarnath, With your suggested changes I still cannot start Hive Metastore; it fails with SQL error code 1045. I also tried commenting out bind-address and setting it to 0.0.0.0. Do you have more suggestions? Thanks!
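For double-checking, here is a minimal sketch for verifying which bind-address is in effect and whether mysqld actually listens on the external interface (assumes Ubuntu's config path, as in the prompts above):
grep -rn 'bind-address' /etc/mysql/mysql.conf.d/mysqld.cnf
sudo systemctl restart mysql
ss -tlnp | grep 3306   # expect 0.0.0.0:3306 or *:3306 after the change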
08-01-2018
11:08 PM
Hi Geoffrey, With the changes I can now access mysql using the FQDN, but Hive Metastore still fails to start. The error message is a little different from the original, with SQL error code 1045. Please advise, thanks.
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 203, in <module>
HiveMetastore().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 56, in start
create_metastore_schema()
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", line 417, in create_metastore_schema
user = params.hive_user
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-server2-hive2/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED] -verbose' returned 1.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.5.0-292/hive2/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.5.0-292/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL: jdbc:mysql://msl-dpe-perf74.msl.lab/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
Wed Aug 01 16:01:39 PDT 2018 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
Underlying cause: java.sql.SQLException : Access denied for user 'hive'@'localhost' (using password: YES)
SQL Error code: 1045
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
at org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:80)
at org.apache.hive.beeline.HiveSchemaTool.getConnectionToMetastore(HiveSchemaTool.java:133)
at org.apache.hive.beeline.HiveSchemaTool.testConnectionToMetastore(HiveSchemaTool.java:187)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:291)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:277)
at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:526)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: java.sql.SQLException: Access denied for user 'hive'@'localhost' (using password: YES)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:129)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:832)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:456)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:240)
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:207)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:76)
... 11 more
*** schemaTool failed ***
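Since schematool is rejected as 'hive'@'localhost' while the interactive login above used the FQDN, a minimal check is to authenticate the way schematool effectively does and see which account MySQL matches (a sketch, same hive/hive credentials assumed):
mysql -u hive -phive -h localhost hive -e "SELECT CURRENT_USER();"
# the grants must exist for exactly the user@host pair CURRENT_USER() reports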