Member since: 08-19-2019
Posts: 150
Kudos Received: 1
Solutions: 0
12-14-2020
02:37 AM
echo "scan 'emp'" | $HBASE_HOME/bin/hbase shell | awk -F'=' '{print $2}' | awk -F ':' '{print $2}'|awk -F ',' '{print $1}'
... View more
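A minimal variant, assuming an HBase version whose shell supports the -n (non-interactive) flag: it suppresses the interactive banner and prompt noise before the awk filters run, so the extracted values are easier to trust:

echo "scan 'emp'" | $HBASE_HOME/bin/hbase shell -n 2>/dev/null | awk -F'=' '{print $2}' | awk -F':' '{print $2}' | awk -F',' '{print $1}'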
12-08-2020
03:47 AM
Backup command, run as the hbase superuser:

hbase backup create full hdfs://hostname:port/backup -t table_name

Restore command:

hbase restore hdfs://hostname:port/backup -t table_name

We did this on the same cluster.
12-08-2020
02:43 AM
I created a table in Phoenix:

create table t2(pk varchar primary key, col1 varchar);
upsert into T2 values('1','abcd');
upsert into T2 values('2','123');

select * from T2;
+-----+--------+
| PK  | COL1   |
+-----+--------+
| 1   | abcd   |
| 2   | 123    |
+-----+--------+

The table as stored in HBase:

hbase(main):024:0> scan 'T2'
ROW    COLUMN+CELL
 1     column=0:\x00\x00\x00\x00, timestamp=1607423302365, value=x
 1     column=0:\x80\x0B, timestamp=1607423302365, value=abcd
 2     column=0:\x00\x00\x00\x00, timestamp=1607423295436, value=x
 2     column=0:\x80\x0B, timestamp=1607423295436, value=123
2 row(s)

I backed up the table above and restored it in HBase. Now I want to create a Phoenix table on top of it, so I used the command below:

create table t2(pk varchar primary key, col1 varchar);
2 rows affected (0.016 seconds)

select * from T2;
+-----+-------+
| PK  | COL1  |
+-----+-------+
| 1   |       |
| 2   |       |
+-----+-------+

After restoring and creating the table in Phoenix, the primary key values come through but the other values are not showing. Please help me solve this issue.
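The scan output shows encoded column qualifiers (0:\x80\x0B) rather than the literal name COL1, so one thing worth checking — an assumption on my part, not a confirmed fix — is whether the re-created table's column encoding matches the restored data. Phoenix exposes this through the COLUMN_ENCODED_BYTES table property, e.g.:

create table t2(pk varchar primary key, col1 varchar) COLUMN_ENCODED_BYTES=2;

If the encodings disagree, Phoenix looks for qualifiers that do not exist in the restored cells, which would explain empty non-key columns while row keys still resolve.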
Labels:
- Apache HBase
- Apache Phoenix
12-01-2020
10:03 PM
Hi, I can see my backup namespace in list_namespace:

list_namespace
NAMESPACE
backup
default
hbase
3 row(s)

but list_namespace_tables 'hbase' does not show a backup table:

list_namespace_tables 'hbase'
TABLE
meta
namespace
2 row(s)

Is that OK, or will it throw an error? When I try to take a backup I get the error below:

2020-12-02 05:54:53,105 WARN [main] impl.BackupManager: Waiting to acquire backup exclusive lock for 3547s
2020-12-02 05:55:46,187 ERROR [main] impl.BackupAdminImpl: There is an active session already running
Backup session finished. Status: FAILURE
2020-12-02 05:55:46,189 ERROR [main] backup.BackupDriver: Error running command-line tool
java.io.IOException: Failed to acquire backup system table exclusive lock after 3600s
 at org.apache.hadoop.hbase.backup.impl.BackupManager.startBackupSession(BackupManager.java:415)
 at org.apache.hadoop.hbase.backup.impl.TableBackupClient.init(TableBackupClient.java:104)
 at org.apache.hadoop.hbase.backup.impl.TableBackupClient.<init>(TableBackupClient.java:81)
 at org.apache.hadoop.hbase.backup.impl.FullTableBackupClient.<init>(FullTableBackupClient.java:62)
 at org.apache.hadoop.hbase.backup.BackupClientFactory.create(BackupClientFactory.java:51)
 at org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.backupTables(BackupAdminImpl.java:595)
 at org.apache.hadoop.hbase.backup.impl.BackupCommands$CreateCommand.execute(BackupCommands.java:347)
 at org.apache.hadoop.hbase.backup.BackupDriver.parseAndRun(BackupDriver.java:138)
 at org.apache.hadoop.hbase.backup.BackupDriver.doWork(BackupDriver.java:171)
 at org.apache.hadoop.hbase.backup.BackupDriver.run(BackupDriver.java:204)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
 at org.apache.hadoop.hbase.backup.BackupDriver.main(BackupDriver.java:179)
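"There is an active session already running" usually means an earlier backup attempt died while still holding the exclusive lock in the backup system table. If that is the case here (an assumption; verify against your HBase version's backup documentation), the backup tool ships a repair command intended to clear the stale session before retrying:

hbase backup repair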
12-01-2020
09:41 PM
2020-12-02 05:37:06,411 WARN [main] impl.BackupManager: Waiting to acquire backup exclusive lock for 3487s
2020-12-02 05:38:06,508 WARN [main] impl.BackupManager: Waiting to acquire backup exclusive lock for 3547s
2020-12-02 05:38:59,592 ERROR [main] impl.BackupAdminImpl: There is an active session already running
Backup session finished. Status: FAILURE
2020-12-02 05:38:59,594 ERROR [main] backup.BackupDriver: Error running command-line tool
java.io.IOException: Failed to acquire backup system table exclusive lock after 3600s
 at org.apache.hadoop.hbase.backup.impl.BackupManager.startBackupSession(BackupManager.java:415)
 at org.apache.hadoop.hbase.backup.impl.TableBackupClient.init(TableBackupClient.java:104)
 at org.apache.hadoop.hbase.backup.impl.TableBackupClient.<init>(TableBackupClient.java:81)
 at org.apache.hadoop.hbase.backup.impl.FullTableBackupClient.<init>(FullTableBackupClient.java:62)
 at org.apache.hadoop.hbase.backup.BackupClientFactory.create(BackupClientFactory.java:51)
 at org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.backupTables(BackupAdminImpl.java:595)
 at org.apache.hadoop.hbase.backup.impl.BackupCommands$CreateCommand.execute(BackupCommands.java:347)
 at org.apache.hadoop.hbase.backup.BackupDriver.parseAndRun(BackupDriver.java:138)
 at org.apache.hadoop.hbase.backup.BackupDriver.doWork(BackupDriver.java:171)
 at org.apache.hadoop.hbase.backup.BackupDriver.run(BackupDriver.java:204)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
 at org.apache.hadoop.hbase.backup.BackupDriver.main(BackupDriver.java:179)
11-30-2020
09:28 PM
Hi, I also got the same issue. Did you find any solution for it?
11-30-2020
09:25 PM
I need to take a full HBase backup. I ran the command below:

hbase backup create full http://hostname:8020/hbase-backup

and I am getting the warnings below:

2020-12-01 05:23:44,635 WARN [main] impl.BackupManager: Waiting to acquire backup exclusive lock for 1s
2020-12-01 05:24:44,818 WARN [main] impl.BackupManager: Waiting to acquire backup exclusive lock for 61s
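Note the destination scheme: the backup destination is a filesystem URI, and the later posts in this thread use hdfs:// rather than http://. For reference, the expected form (hostname and port are placeholders):

hbase backup create full hdfs://hostname:8020/hbase-backup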
Labels:
- Apache HBase
11-22-2020
09:12 PM
How can I increase the region count per region server to ~300-400 regions/RegionServer?
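Region count per server is largely a function of region size: for the same data volume, a smaller maximum region size yields more, smaller regions. A hedged hbase-site.xml sketch (the 5 GB value is purely illustrative; tune it to your data volume and heap):

<property>
  <name>hbase.hregion.max.filesize</name>
  <!-- smaller max region size => more regions per server -->
  <value>5368709120</value>
</property>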
11-01-2020
10:43 PM
I reduced the memstore size to 32 MB, but the RIT count still has not come down. Please let me know how to increase the region count per region server.
10-07-2020
01:42 AM
Hi all, I have 1 HBase Master and 3 region servers. I have a huge number (7000+) of regions stuck in the transition state. In the HBase Master logs:

2020-10-07 08:37:02,745 WARN [ProcExecTimeout] assignment.AssignmentManager: STUCK Region-In-Transition rit=OPENING, location=hostname,16020,1601150311831, table=table_name, region=region_id.

For those regions I executed the command below:

assign 'region_id'

The count decreased, but when I restart HBase the count goes up again. So I tried hbase hbck -repair and got this notice instead:

-----------------------------------------------------------------------
NOTE: As of HBase version 2.0, the hbck tool is significantly changed. In general, all Read-Only options are supported and can be be used safely. Most -fix/ -repair options are NOT supported. Please see usage below for details on which options are not supported.

So I need help either reducing the regions-in-transition count or upgrading HBase in Ambari so that I can execute a repair command.
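On HBase 2.x the supported repair path is the separate HBCK2 tool (from the hbase-operator-tools project) rather than hbck -repair. A hedged sketch — the jar path and encoded region names are placeholders — for re-asserting assignment of stuck regions:

hbase hbck -j /path/to/hbase-hbck2-<version>.jar assigns <encoded_region_name_1> <encoded_region_name_2>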
Tags:
- HBase
- Hive
- master logs
Labels:
- Apache Ambari
- Apache HBase
07-23-2020
02:38 AM
Can we delete Kafka consumer group data? Not the consumer group itself; I need to delete the group's data.
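If "group data" means the group's committed offsets, Kafka 2.4+ (newer than the 2.0 mentioned elsewhere in this history, so this is a hedged suggestion) can delete offsets for a topic without deleting the group itself; broker address, group, and topic below are placeholders:

./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group my-group --topic my-topic --delete-offsets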
07-21-2020
10:15 PM
Yesterday I deleted some topics in my Kafka cluster with the command below:

./kafka-topics.sh --zookeeper localhost:2181 --delete --topic <topic_name>

with delete.topic.enable=true configured. After 20 hours I checked and those topics are still not deleted. How can we force-delete those topics, and why does it take this long to delete a topic?
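One hedged way to see whether the deletions are still pending is to inspect the ZooKeeper node where Kafka queues topic deletions (host is a placeholder):

./zookeeper-shell.sh localhost:2181 ls /admin/delete_topics

Topics that linger there usually point at a broker that never acknowledged the deletion, e.g. one that is down or was restarted with delete.topic.enable=false.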
05-14-2020
10:45 PM
Did you resolve the issue? What steps did you follow? Please help me with the steps.
05-14-2020
10:44 PM
How can I unlink the conf folder? Can you tell me the command?
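Without the thread this replies to, only the generic case can be sketched: assuming the conf folder is a symlink, unlink removes just the link, not the directory it points to (the path is a placeholder):

unlink /etc/<service>/conf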
05-06-2020
02:22 AM
I am getting this error in the log file:

2020-05-06 04:47:53,605 INFO [AlertNoticeDispatchService RUNNING] AlertNoticeDispatchService:279 - There are 28 pending alert notices about to be dispatched...
2020-05-06 04:47:53,627 INFO [alert-dispatch-27] EmailDispatcher:94 - Sending email: Notification{ type=ALERT, subject=Alert Summary: OK[14], Warning[0], Critical[0], Unknown[0]}
2020-05-06 04:48:03,962 ERROR [alert-dispatch-27] EmailDispatcher:172 - Unable to dispatch notification via Email
javax.mail.MessagingException: Could not connect to SMTP host: smtp.gmail.com, port: 465, response: -1
 at com.sun.mail.smtp.SMTPTransport.openServer(SMTPTransport.java:2041)
 at com.sun.mail.smtp.SMTPTransport.protocolConnect(SMTPTransport.java:697)
 at javax.mail.Service.connect(Service.java:386)
 at javax.mail.Service.connect(Service.java:245)
 at javax.mail.Service.connect(Service.java:194)
 at javax.mail.Transport.send0(Transport.java:253)
 at javax.mail.Transport.send(Transport.java:124)
 at org.apache.ambari.server.notifications.dispatchers.EmailDispatcher.dispatch(EmailDispatcher.java:160)
 at org.apache.ambari.server.notifications.DispatchRunnable.run(DispatchRunnable.java:58)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)
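Port 465 expects an implicit-SSL (SMTPS) connection, while the Start TLS option checked in the settings in the next post implies STARTTLS, which Gmail serves on port 587; "response: -1" is typical of that mismatch. A hedged tweak to the alert notification settings:

SMTP Port: 587    (STARTTLS port; keep Start TLS checked)

Alternatively keep 465, but the mail dispatcher must then be configured for SSL rather than STARTTLS.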
05-06-2020
12:27 AM
Name: Test Email Critical
Groups: All (of your choice)
Severity: All (of your choice)
Method: EMAIL
Email To: Your@companyEmail.com
SMTP Server: smtp.gmail.com
SMTP Port: 465
Email From: yourGmailId@gmail.com
Use authentication: YES (checkbox)
Username: yourGmailId@gmail.com
Password: $YOUR_GMAIL_PWD
Start TLS: YES (checkbox)

I gave all the above information for the alert setup, but I still did not get any emails.
Labels:
- Apache Ambari
04-08-2020
11:02 PM
I am trying to update Kafka Connect config properties via the API using PUT and POST methods, but it is failing with an error (method not allowed). How can I update that config file?
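For per-connector configuration, the Kafka Connect REST API accepts PUT on the connector's config resource; a sketch with a placeholder connector name and the default REST port 8083:

curl -X PUT -H "Content-Type: application/json" \
     --data '{"connector.class":"org.example.MyConnector","tasks.max":"1"}' \
     http://localhost:8083/connectors/my-connector/config

Worker-level properties (the connect-distributed.properties file itself) are not updatable over this API; they require editing the file and restarting the worker. PUT or POST against other paths, such as /connectors/<name> without /config, can come back as 405 Method Not Allowed.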
Labels:
- Apache Kafka
02-26-2020
04:51 AM
Hi, go to the Kafka log directory:

cd /kafka-logs

and edit meta.properties there:

vi meta.properties

In that file, change broker.id=1001 to broker.id=1, then restart Kafka.
01-31-2020
02:39 AM
ZooKeeper Server Process
Connection failed: Expected response imok, Actual response ruok is not executed because it is not in the whitelist. to XXXXXXXXXXX:2181

When I start ZooKeeper from Ambari it starts successfully, but the service stops immediately and shows the error above.
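The alert probe sends ZooKeeper the four-letter command ruok, and ZooKeeper 3.5+ refuses four-letter commands that are not whitelisted, which is exactly what the message says. A sketch of the zoo.cfg setting (apply it through Ambari's ZooKeeper configuration so a restart does not overwrite it):

4lw.commands.whitelist=ruok

Note this addresses the alert failure; if the server process itself exits immediately, the ZooKeeper server log will have its own reason.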
Labels:
- Apache Ambari
- Apache Zookeeper
01-13-2020
08:57 PM
Hey Lewis, that is Kafka installation on Ambari, but I need Kafka Connect on Ambari.
01-13-2020
03:43 AM
Hi all,
How do I set up Kafka Connect on Ambari? If there is a procedure, please let me know. My Ambari version is 3.1, and Kafka 2.0 is already installed through Ambari, so now I need to set up Kafka Connect in Ambari.
Thanks in advance.
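Ambari does not ship a Kafka Connect component of its own; Connect comes with the Kafka installation itself. A hedged sketch of starting a distributed worker by hand on a broker host — the HDP-style paths below are assumptions based on a standard HDP layout, not something Ambari manages for you:

/usr/hdp/current/kafka-broker/bin/connect-distributed.sh \
    /usr/hdp/current/kafka-broker/config/connect-distributed.properties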
Labels:
- Labels:
-
Apache Ambari
-
Apache Kafka
12-04-2019
01:22 AM
Yes, I did it manually. So how do I do it from Ambari? Can you tell me the steps?
12-04-2019
12:30 AM
Still facing the same issue. Traceback (most recent call last):
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 995, in restart
self.status(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_metastore.py", line 87, in status
check_process_status(status_params.hive_metastore_pid)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/check_process_status.py", line 43, in check_process_status
raise ComponentIsNotRunning()
ComponentIsNotRunning
The above exception was the cause of the following exception:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_metastore.py", line 201, in <module>
HiveMetastore().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
method(env)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 1006, in restart
self.start(env, upgrade_type=upgrade_type)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_metastore.py", line 61, in start
create_metastore_schema() # execute without config lock
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive.py", line 487, in create_metastore_schema
user = params.hive_user
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
returns=self.resource.returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/ ; /usr/hdp/current/hive-server2/bin/schematool -initSchema -dbType mysql -userName ambari -passWord [PROTECTED] -verbose' returned 1. SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL: jdbc:mysql://gaian-lap386.com/ambari
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: ambari
Starting metastore schema initialization to 3.1.0
org.apache.hadoop.hive.metastore.HiveMetaException: Unknown version specified for initialization: 3.1.0
org.apache.hadoop.hive.metastore.HiveMetaException: Unknown version specified for initialization: 3.1.0
at org.apache.hadoop.hive.metastore.MetaStoreSchemaInfo.generateInitFileName(MetaStoreSchemaInfo.java:137)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:585)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:567)
at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:1539)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
*** schemaTool failed ***
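The actual failure is schematool refusing "Unknown version specified for initialization: 3.1.0", i.e. the schema scripts visible to the Hive binaries do not know a 3.1.0 target. A hedged first step is to ask schematool what it can see; this is the same command the Ambari script runs in its not_if guard above (password elided):

export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/
/usr/hdp/current/hive-server2/bin/schematool -info -dbType mysql -userName ambari -passWord '***' -verbose

If -info fails with the same version error, that points at mismatched Hive packages or symlinks rather than at the database itself.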
12-02-2019
10:29 PM
stderr : /var/lib/ambari-agent/data/errors-92.txt
Traceback (most recent call last):
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 995, in restart
self.status(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_metastore.py", line 87, in status
check_process_status(status_params.hive_metastore_pid)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/check_process_status.py", line 43, in check_process_status
raise ComponentIsNotRunning()
ComponentIsNotRunning
The above exception was the cause of the following exception:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_metastore.py", line 201, in <module>
HiveMetastore().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
method(env)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 1006, in restart
self.start(env, upgrade_type=upgrade_type)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive_metastore.py", line 61, in start
create_metastore_schema() # execute without config lock
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HIVE/package/scripts/hive.py", line 487, in create_metastore_schema
user = params.hive_user
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
returns=self.resource.returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/ ; /usr/hdp/current/hive-server2/bin/schematool -initSchema -dbType mysql -userName ambari -passWord [PROTECTED] -verbose' returned 1. SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL: jdbc:mysql://gaian-lap386.com/ambari?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: ambari
Starting metastore schema initialization to 3.1.0
org.apache.hadoop.hive.metastore.HiveMetaException: Unknown version specified for initialization: 3.1.0
org.apache.hadoop.hive.metastore.HiveMetaException: Unknown version specified for initialization: 3.1.0
at org.apache.hadoop.hive.metastore.MetaStoreSchemaInfo.generateInitFileName(MetaStoreSchemaInfo.java:137)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:585)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:567)
at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:1539)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
*** schemaTool failed ***
stdout : /var/lib/ambari-agent/data/output-92.txt
2019-12-03 11:51:21,682 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-12-03 11:51:21,702 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2019-12-03 11:51:21,937 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-12-03 11:51:21,942 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2019-12-03 11:51:21,943 - Group['hdfs'] {}
2019-12-03 11:51:21,944 - Group['hadoop'] {}
2019-12-03 11:51:21,944 - Group['users'] {}
2019-12-03 11:51:21,944 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-12-03 11:51:21,945 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-12-03 11:51:21,947 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-12-03 11:51:21,949 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-12-03 11:51:21,951 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-12-03 11:51:21,952 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-12-03 11:51:21,953 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-12-03 11:51:21,954 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-12-03 11:51:21,955 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-12-03 11:51:21,956 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-12-03 11:51:21,956 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2019-12-03 11:51:21,957 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-12-03 11:51:21,958 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-12-03 11:51:21,959 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-12-03 11:51:21,959 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-12-03 11:51:21,960 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2019-12-03 11:51:21,968 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2019-12-03 11:51:21,969 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2019-12-03 11:51:21,971 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-12-03 11:51:21,971 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-12-03 11:51:21,972 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2019-12-03 11:51:21,981 - call returned (0, '1015')
2019-12-03 11:51:21,982 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1015'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2019-12-03 11:51:21,990 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1015'] due to not_if
2019-12-03 11:51:21,991 - Group['hdfs'] {}
2019-12-03 11:51:21,991 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
2019-12-03 11:51:21,992 - FS Type: HDFS
2019-12-03 11:51:21,992 - Directory['/etc/hadoop'] {'mode': 0755}
2019-12-03 11:51:22,012 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-12-03 11:51:22,014 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2019-12-03 11:51:22,041 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2019-12-03 11:51:22,045 - Skipping Execute[('setenforce', '0')] due to not_if
2019-12-03 11:51:22,046 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2019-12-03 11:51:22,049 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2019-12-03 11:51:22,050 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'cd_access': 'a'}
2019-12-03 11:51:22,051 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2019-12-03 11:51:22,057 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2019-12-03 11:51:22,058 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2019-12-03 11:51:22,065 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2019-12-03 11:51:22,077 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-12-03 11:51:22,078 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2019-12-03 11:51:22,078 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2019-12-03 11:51:22,084 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2019-12-03 11:51:22,090 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2019-12-03 11:51:22,094 - Skipping unlimited key JCE policy check and setup since it is not required
2019-12-03 11:51:22,518 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2019-12-03 11:51:22,527 - call['ambari-python-wrap /usr/bin/hdp-select status hive-server2'] {'timeout': 20}
2019-12-03 11:51:22,557 - call returned (0, 'hive-server2 - 3.1.0.0-78')
2019-12-03 11:51:22,558 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-12-03 11:51:22,581 - File['/var/lib/ambari-agent/cred/lib/CredentialUtil.jar'] {'content': DownloadSource('http://gaian-lap386.com:8080/resources/CredentialUtil.jar'), 'mode': 0755}
2019-12-03 11:51:22,582 - Not downloading the file from http://gaian-lap386.com:8080/resources/CredentialUtil.jar, because /var/lib/ambari-agent/tmp/CredentialUtil.jar already exists
2019-12-03 11:51:23,692 - call['ambari-sudo.sh su hive -l -s /bin/bash -c 'cat /var/run/hive/hive.pid 1>/tmp/tmpEuZOE_ 2>/tmp/tmp6DIlHs''] {'quiet': False}
2019-12-03 11:51:23,712 - call returned (1, '')
2019-12-03 11:51:23,712 - Execution of 'cat /var/run/hive/hive.pid 1>/tmp/tmpEuZOE_ 2>/tmp/tmp6DIlHs' returned 1. cat: /var/run/hive/hive.pid: No such file or directory
2019-12-03 11:51:23,712 - get_user_call_output returned (1, u'', u'cat: /var/run/hive/hive.pid: No such file or directory')
2019-12-03 11:51:23,719 - Execute['ambari-sudo.sh kill '] {'not_if': '! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p >/dev/null 2>&1)'}
2019-12-03 11:51:23,725 - Skipping Execute['ambari-sudo.sh kill '] due to not_if
2019-12-03 11:51:23,726 - Execute['ambari-sudo.sh kill -9 '] {'not_if': '! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p >/dev/null 2>&1) || ( sleep 5 && ! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p >/dev/null 2>&1) )', 'ignore_failures': True}
2019-12-03 11:51:23,733 - Skipping Execute['ambari-sudo.sh kill -9 '] due to not_if
2019-12-03 11:51:23,734 - Execute['! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p >/dev/null 2>&1)'] {'tries': 20, 'try_sleep': 3}
2019-12-03 11:51:23,742 - File['/var/run/hive/hive.pid'] {'action': ['delete']}
2019-12-03 11:51:23,742 - Pid file /var/run/hive/hive.pid is empty or does not exist
2019-12-03 11:51:23,806 - Yarn already refreshed
2019-12-03 11:51:23,806 - HdfsResource['/user/hive'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://gaian-lap386.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0755}
2019-12-03 11:51:23,808 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://gaian-lap386.com:50070/webhdfs/v1/user/hive?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpqGCB6j 2>/tmp/tmpDTJzTj''] {'logoutput': None, 'quiet': False}
2019-12-03 11:51:23,845 - call returned (0, '')
2019-12-03 11:51:23,845 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":3,"fileId":17313,"group":"hdfs","length":0,"modificationTime":1573191541866,"owner":"hive","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-12-03 11:51:23,945 - HdfsResource['/warehouse/tablespace/external/hive'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://gaian-lap386.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 01777}
2019-12-03 11:51:23,947 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://gaian-lap386.com:50070/webhdfs/v1/warehouse/tablespace/external/hive?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpZcQxXj 2>/tmp/tmpmj94Od''] {'logoutput': None, 'quiet': False}
2019-12-03 11:51:23,985 - call returned (0, '')
2019-12-03 11:51:23,986 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":4,"fileId":17317,"group":"hadoop","length":0,"modificationTime":1568029134636,"owner":"hive","pathSuffix":"","permission":"1777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-12-03 11:51:24,019 - Skipping the operation for not managed DFS directory /warehouse/tablespace/external/hive since immutable_paths contains it.
2019-12-03 11:51:24,020 - HdfsResource['/warehouse/tablespace/managed/hive'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://gaian-lap386.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hive', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp'], 'mode': 0700}
2019-12-03 11:51:24,021 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET -d '"'"''"'"' -H '"'"'Content-Length: 0'"'"' '"'"'http://gaian-lap386.com:50070/webhdfs/v1/warehouse/tablespace/managed/hive?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp8Hn96g 2>/tmp/tmpntXrlQ''] {'logoutput': None, 'quiet': False}
2019-12-03 11:51:24,059 - call returned (0, '')
2019-12-03 11:51:24,059 - get_user_call_output returned (0, u'{"FileStatus":{"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":5,"fileId":17319,"group":"hadoop","length":0,"modificationTime":1568029134518,"owner":"hive","pathSuffix":"","permission":"700","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}200', u'')
2019-12-03 11:51:24,094 - Skipping the operation for not managed DFS directory /warehouse/tablespace/managed/hive since immutable_paths contains it.
2019-12-03 11:51:24,094 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'hdfs getconf -confKey dfs.namenode.acls.enabled 1>/tmp/tmp53ms1a 2>/tmp/tmpzF_mRj''] {'quiet': False}
2019-12-03 11:51:26,861 - call returned (0, '')
2019-12-03 11:51:26,861 - get_user_call_output returned (0, u'true', u'')
2019-12-03 11:51:26,862 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'hdfs getconf -confKey dfs.namenode.posix.acl.inheritance.enabled 1>/tmp/tmp0Mdu7T 2>/tmp/tmpM5e6c0''] {'quiet': False}
2019-12-03 11:51:29,162 - call returned (0, '')
2019-12-03 11:51:29,163 - get_user_call_output returned (0, u'true', u'')
2019-12-03 11:51:29,166 - Execute['hdfs dfs -setfacl -m default:user:hive:rwx /warehouse/tablespace/external/hive'] {'user': 'hdfs'}
2019-12-03 11:51:34,151 - Execute['hdfs dfs -setfacl -m default:user:hive:rwx /warehouse/tablespace/managed/hive'] {'user': 'hdfs'}
2019-12-03 11:51:37,928 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://gaian-lap386.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp']}
2019-12-03 11:51:37,931 - Directories to fill with configs: [u'/usr/hdp/current/hive-metastore/conf', u'/usr/hdp/current/hive-metastore/conf/']
2019-12-03 11:51:37,931 - Directory['/etc/hive/3.1.0.0-78/0'] {'owner': 'hive', 'group': 'hadoop', 'create_parents': True, 'mode': 0755}
2019-12-03 11:51:37,933 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/etc/hive/3.1.0.0-78/0', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2019-12-03 11:51:37,944 - Generating config: /etc/hive/3.1.0.0-78/0/mapred-site.xml
2019-12-03 11:51:37,944 - File['/etc/hive/3.1.0.0-78/0/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-12-03 11:51:38,005 - File['/etc/hive/3.1.0.0-78/0/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-12-03 11:51:38,005 - File['/etc/hive/3.1.0.0-78/0/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0755}
2019-12-03 11:51:38,009 - File['/etc/hive/3.1.0.0-78/0/llap-daemon-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-12-03 11:51:38,012 - File['/etc/hive/3.1.0.0-78/0/llap-cli-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-12-03 11:51:38,020 - File['/etc/hive/3.1.0.0-78/0/hive-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-12-03 11:51:38,023 - File['/etc/hive/3.1.0.0-78/0/hive-exec-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-12-03 11:51:38,027 - File['/etc/hive/3.1.0.0-78/0/beeline-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-12-03 11:51:38,028 - XmlConfig['beeline-site.xml'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644, 'conf_dir': '/etc/hive/3.1.0.0-78/0', 'configurations': {'beeline.hs2.jdbc.url.container': u'jdbc:hive2://gaian-lap386.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2', 'beeline.hs2.jdbc.url.default': u'container'}}
2019-12-03 11:51:38,040 - Generating config: /etc/hive/3.1.0.0-78/0/beeline-site.xml
2019-12-03 11:51:38,040 - File['/etc/hive/3.1.0.0-78/0/beeline-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-12-03 11:51:38,043 - File['/etc/hive/3.1.0.0-78/0/parquet-logging.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-12-03 11:51:38,044 - Directory['/etc/hive/3.1.0.0-78/0'] {'owner': 'hive', 'group': 'hadoop', 'create_parents': True, 'mode': 0755}
2019-12-03 11:51:38,045 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/etc/hive/3.1.0.0-78/0', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2019-12-03 11:51:38,060 - Generating config: /etc/hive/3.1.0.0-78/0/mapred-site.xml
2019-12-03 11:51:38,061 - File['/etc/hive/3.1.0.0-78/0/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-12-03 11:51:38,173 - File['/etc/hive/3.1.0.0-78/0/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-12-03 11:51:38,174 - File['/etc/hive/3.1.0.0-78/0/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0755}
2019-12-03 11:51:38,180 - File['/etc/hive/3.1.0.0-78/0/llap-daemon-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-12-03 11:51:38,187 - File['/etc/hive/3.1.0.0-78/0/llap-cli-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-12-03 11:51:38,193 - File['/etc/hive/3.1.0.0-78/0/hive-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-12-03 11:51:38,198 - File['/etc/hive/3.1.0.0-78/0/hive-exec-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-12-03 11:51:38,206 - File['/etc/hive/3.1.0.0-78/0/beeline-log4j2.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-12-03 11:51:38,207 - XmlConfig['beeline-site.xml'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644, 'conf_dir': '/etc/hive/3.1.0.0-78/0', 'configurations': {'beeline.hs2.jdbc.url.container': u'jdbc:hive2://gaian-lap386.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2', 'beeline.hs2.jdbc.url.default': u'container'}}
2019-12-03 11:51:38,223 - Generating config: /etc/hive/3.1.0.0-78/0/beeline-site.xml
2019-12-03 11:51:38,224 - File['/etc/hive/3.1.0.0-78/0/beeline-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-12-03 11:51:38,227 - File['/etc/hive/3.1.0.0-78/0/parquet-logging.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2019-12-03 11:51:38,228 - File['/usr/hdp/current/hive-metastore/conf/hive-site.jceks'] {'content': StaticFile('/var/lib/ambari-agent/cred/conf/hive_metastore/hive-site.jceks'), 'owner': 'hive', 'group': 'hadoop', 'mode': 0640}
2019-12-03 11:51:38,229 - Writing File['/usr/hdp/current/hive-metastore/conf/hive-site.jceks'] because contents don't match
2019-12-03 11:51:38,230 - XmlConfig['hive-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf/', 'mode': 0644, 'configuration_attributes': {u'hidden': {u'javax.jdo.option.ConnectionPassword': u'HIVE_CLIENT,CONFIG_DOWNLOAD'}}, 'owner': 'hive', 'configurations': ...}
2019-12-03 11:51:38,246 - Generating config: /usr/hdp/current/hive-metastore/conf/hive-site.xml
2019-12-03 11:51:38,247 - File['/usr/hdp/current/hive-metastore/conf/hive-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-12-03 11:51:38,634 - Writing File['/usr/hdp/current/hive-metastore/conf/hive-site.xml'] because contents don't match
2019-12-03 11:51:38,635 - Generating Atlas Hook config file /usr/hdp/current/hive-metastore/conf/atlas-application.properties
2019-12-03 11:51:38,636 - PropertiesFile['/usr/hdp/current/hive-metastore/conf/atlas-application.properties'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644, 'properties': ...}
2019-12-03 11:51:38,642 - Generating properties file: /usr/hdp/current/hive-metastore/conf/atlas-application.properties
2019-12-03 11:51:38,642 - File['/usr/hdp/current/hive-metastore/conf/atlas-application.properties'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-12-03 11:51:38,693 - Writing File['/usr/hdp/current/hive-metastore/conf/atlas-application.properties'] because contents don't match
2019-12-03 11:51:38,709 - File['/usr/hdp/current/hive-metastore/conf//hive-env.sh'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0755}
2019-12-03 11:51:38,710 - Writing File['/usr/hdp/current/hive-metastore/conf//hive-env.sh'] because contents don't match
2019-12-03 11:51:38,710 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2019-12-03 11:51:38,723 - File['/etc/security/limits.d/hive.conf'] {'content': Template('hive.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2019-12-03 11:51:38,724 - File['/usr/lib/ambari-agent/DBConnectionVerification.jar'] {'content': DownloadSource('http://gaian-lap386.com:8080/resources/DBConnectionVerification.jar'), 'mode': 0644}
2019-12-03 11:51:38,724 - Not downloading the file from http://gaian-lap386.com:8080/resources/DBConnectionVerification.jar, because /var/lib/ambari-agent/tmp/DBConnectionVerification.jar already exists
2019-12-03 11:51:38,725 - Directory['/var/run/hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2019-12-03 11:51:38,739 - Directory['/var/log/hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2019-12-03 11:51:38,742 - Directory['/var/lib/hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2019-12-03 11:51:38,744 - XmlConfig['hivemetastore-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf/', 'mode': 0600, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2019-12-03 11:51:38,787 - Generating config: /usr/hdp/current/hive-metastore/conf/hivemetastore-site.xml
2019-12-03 11:51:38,787 - File['/usr/hdp/current/hive-metastore/conf/hivemetastore-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0600, 'encoding': 'UTF-8'}
2019-12-03 11:51:38,838 - File['/usr/hdp/current/hive-metastore/conf/hadoop-metrics2-hivemetastore.properties'] {'content': Template('hadoop-metrics2-hivemetastore.properties.j2'), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2019-12-03 11:51:38,858 - File['/var/lib/ambari-agent/tmp/start_metastore_script'] {'content': StaticFile('startMetastore.sh'), 'mode': 0755}
2019-12-03 11:51:38,879 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/3.1.0.0-78/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': 'HDFS', 'default_fs': 'hdfs://gaian-lap386.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp']}
2019-12-03 11:51:38,889 - Directory['/usr/lib/ambari-logsearch-logfeeder/conf'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2019-12-03 11:51:38,889 - Generate Log Feeder config file: /usr/lib/ambari-logsearch-logfeeder/conf/input.config-hive.json
2019-12-03 11:51:38,889 - File['/usr/lib/ambari-logsearch-logfeeder/conf/input.config-hive.json'] {'content': Template('input.config-hive.json.j2'), 'mode': 0644}
2019-12-03 11:51:38,891 - Execute['export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/ ; /usr/hdp/current/hive-server2/bin/schematool -initSchema -dbType mysql -userName ambari -passWord [PROTECTED] -verbose'] {'not_if': "ambari-sudo.sh su hive -l -s /bin/bash -c 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/ ; /usr/hdp/current/hive-server2/bin/schematool -info -dbType mysql -userName ambari -passWord [PROTECTED] -verbose'", 'user': 'hive'}
Command failed after 1 tries
Labels:
- Apache Ambari
- Apache Hive
12-01-2019
09:32 PM
2019-12-02 10:57:06,580 ERROR [main] AmbariServer:1114 - Failed to run the Ambari Server
javax.persistence.PersistenceException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLException: Connections could not be acquired from the underlying database!
Error Code: 0
 at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:815)
 at org.eclipse.persistence.internal.jpa.EntityManagerFactoryDelegate.getAbstractSession(EntityManagerFactoryDelegate.java:205)
 at org.eclipse.persistence.internal.jpa.EntityManagerFactoryDelegate.createEntityManagerImpl(EntityManagerFactoryDelegate.java:305)
 at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManagerImpl(EntityManagerFactoryImpl.java:337)
 at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:303)
 at com.google.inject.persist.jpa.JpaPersistService.begin(JpaPersistService.java:77)
 at com.google.inject.persist.jpa.AmbariJpaPersistService.begin(AmbariJpaPersistService.java:28)
 at org.apache.ambari.server.orm.AmbariLocalSessionInterceptor.invoke(AmbariLocalSessionInterceptor.java:40)
 at org.apache.ambari.server.checks.DatabaseConsistencyCheckHelper.checkDBVersionCompatible(DatabaseConsistencyCheckHelper.java:222)
 at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:1099)
Caused by: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLException: Connections could not be acquired from the underlying database!
Error Code: 0
 at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:316)
 at org.eclipse.persistence.sessions.JNDIConnector.connect(JNDIConnector.java:147)
 at org.eclipse.persistence.sessions.DatasourceLogin.connectToDatasource(DatasourceLogin.java:162)
 at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.setOrDetectDatasource(DatabaseSessionImpl.java:207)
 at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.loginAndDetectDatasource(DatabaseSessionImpl.java:760)
 at org.eclipse.persistence.internal.jpa.EntityManagerFactoryProvider.login(EntityManagerFactoryProvider.java:265)
 at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:731)
 ... 9 more
Caused by: java.sql.SQLException: Connections could not be acquired from the underlying database!
 at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:118)
 at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:692)
 at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:146)
 at org.eclipse.persistence.sessions.JNDIConnector.connect(JNDIConnector.java:144)
 ... 14 more
Caused by: com.mchange.v2.resourcepool.CannotAcquireResourceException: A ResourcePool could not acquire a resource from its primary factory or source.
 at com.mchange.v2.resourcepool.BasicResourcePool.awaitAvailable(BasicResourcePool.java:1469)
 at com.mchange.v2.resourcepool.BasicResourcePool.prelimCheckoutResource(BasicResourcePool.java:644)
 at com.mchange.v2.resourcepool.BasicResourcePool.checkoutResource(BasicResourcePool.java:554)
 at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutAndMarkConnectionInUse(C3P0PooledConnectionPool.java:758)
 at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:685)
 ... 16 more
Caused by: java.sql.SQLException: Access denied for user 'ambari'@'xxxxxxxxx' (using password: YES)
 at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:959)
 at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3870)
 at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3806)
 at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:871)
 at com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(MysqlIO.java:1686)
 at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1207)
 at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2254)
 at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2285)
 at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2084)
 at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:795)
 at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:44)
 at sun.reflect.GeneratedConstructorAccessor43.newInstance(Unknown Source)
 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
 at com.mysql.jdbc.Util.handleNewInstance(Util.java:404)
 at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:400)
 at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:327)
 at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:175)
 at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:220)
 at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:206)
 at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:203)
 at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1138)
 at com.mchange.v2.resourcepool.BasicResourcePool.doAcquireAndDecrementPendingAcquiresWithinLockOnSuccess(BasicResourcePool.java:1125)
 at com.mchange.v2.resourcepool.BasicResourcePool.access$700(BasicResourcePool.java:44)
 at com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask.run(BasicResourcePool.java:1870)
 at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:696)
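The root cause at the bottom of the chain is the MySQL "Access denied for user 'ambari'" error, not a network problem. A quick way to reproduce it outside Ambari is the same mysql client check used later in this thread (host is masked/placeholder):

mysql -u ambari -p -h <db_host> -P 3306 ambari

If this also reports access denied, the grants or password for 'ambari'@'<db_host>' in MySQL need fixing before Ambari Server will start.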
11-29-2019
12:12 AM
# /usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/share/java/mysql-connector-java-5.1.37.jar org.apache.ambari.server.DBConnectionVerification "jdbc:mysql://xxxxxxx:3306/ambari" "ambari" "xxxxx" com.mysql.jdbc.Driver
ERROR: Unable to connect to the DB. Please check DB connection properties.
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.

# ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java-5.1.37.jar
Using python /usr/bin/python
Setup ambari-server
Copying /usr/share/java/mysql-connector-java-5.1.37.jar to /var/lib/ambari-server/resources/mysql-connector-java-5.1.37.jar
Creating symlink /var/lib/ambari-server/resources/mysql-connector-java-5.1.37.jar to /var/lib/ambari-server/resources/mysql-connector-java.jar
If you are updating existing jdbc driver jar for mysql with mysql-connector-java-5.1.37.jar. Please remove the old driver jar, from all hosts. Restarting services that need the driver, will automatically copy the new jar to the hosts.
JDBC driver was successfully initialized.
Ambari Server 'setup' completed successfully.
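"Communications link failure ... 0 milliseconds ago" means the TCP connection was never established at all, which usually points at a listener/bind or firewall issue rather than credentials. Two hedged checks on the MySQL host (config file locations vary by distribution):

ss -ltnp | grep 3306                                      # is mysqld listening, and on which address?
grep -r bind-address /etc/my.cnf /etc/mysql 2>/dev/null   # 127.0.0.1 here blocks remote clients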
Labels:
- Apache Ambari
11-28-2019
11:05 PM
It gets an error because I already created this user:

mysql> CREATE USER 'ambari'@'%' IDENTIFIED BY 'xxxxxxx';
ERROR 1396 (HY000): Operation CREATE USER failed for 'ambari'@'%'
mysql> GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'%';
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE USER 'ambari'@'localhost' IDENTIFIED BY 'xxxxxx';
ERROR 1396 (HY000): Operation CREATE USER failed for 'ambari'@'localhost'
mysql> GRANT ALL PRIVILEGES ON *.* TO 'xxxxxx'@'localhost';
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements
mysql> CREATE USER 'ambari'@'xxxxxx' IDENTIFIED BY 'xxxxxx';
ERROR 1396 (HY000): Operation CREATE USER failed for 'ambari'@'xxxxxxx'
mysql> GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'xxxxxxx';
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE USER 'ambari'@'%' IDENTIFIED BY 'xxxxxxxx';
ERROR 1396 (HY000): Operation CREATE USER failed for 'ambari'@'%'
mysql> GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'%';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'localhost';
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE USER 'ambari'@'xxxxx' IDENTIFIED BY 'xxxx';
ERROR 1396 (HY000): Operation CREATE USER failed for 'ambari'@'xxxxxx'
mysql> GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'xxxxxxxx';
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.01 sec)
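Since every CREATE USER fails only because the account already exists, a quick sanity check (plain MySQL, nothing Ambari-specific) is to list which ambari accounts are actually defined and for which hosts:

SELECT user, host FROM mysql.user WHERE user = 'ambari';

ERROR 1396 on an existing user is harmless here; what matters is that one of the listed host patterns matches the host Ambari connects from, and that its password matches what Ambari was configured with.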
11-28-2019
10:20 PM
Again it gets the same error:

# /usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/share/java/mysql-connector-java-5.1.37.jar org.apache.ambari.server.DBConnectionVerification "jdbc:mysql://xxxxxxx:3306/ambari" "ambari" "xxxxx" com.mysql.jdbc.Driver
ERROR: Unable to connect to the DB. Please check DB connection properties.
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.

# ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java-5.1.37.jar
Using python /usr/bin/python
Setup ambari-server
Copying /usr/share/java/mysql-connector-java-5.1.37.jar to /var/lib/ambari-server/resources/mysql-connector-java-5.1.37.jar
Creating symlink /var/lib/ambari-server/resources/mysql-connector-java-5.1.37.jar to /var/lib/ambari-server/resources/mysql-connector-java.jar
If you are updating existing jdbc driver jar for mysql with mysql-connector-java-5.1.37.jar. Please remove the old driver jar, from all hosts. Restarting services that need the driver, will automatically copy the new jar to the hosts.
JDBC driver was successfully initialized.
Ambari Server 'setup' completed successfully.
11-28-2019
03:00 AM
For localhost I am able to connect; for my host I am unable to connect:

mysql -u ambari -p ambari -h xxxxxxxxxx -P 3306
Enter password:
ERROR 2003 (HY000): Can't connect to MySQL server on 'xxxxxxxxx' (111)