Member since: 08-10-2017
Posts: 31
Kudos Received: 0
Solutions: 1
My Accepted Solutions
Title | Views | Posted
--- | --- | ---
 | 2663 | 06-02-2020 12:04 PM
07-07-2020 04:06 AM
@Madhur Thanks for the update. So I can conclude that HA must work in both of the cases below:
1. HA must work whenever the active NameNode daemon goes down.
2. HA must work whenever the active NameNode server goes down.
Please note that I am distinguishing between two things, i.e., the active NameNode daemon (the process) and the active NameNode server (the host it runs on).
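If it helps to make the distinction concrete, the two cases can be exercised separately. A hedged sketch only; the service IDs nn1/nn2 are placeholders for whatever your dfs.ha.namenodes property actually defines:

# Case 1: only the daemon goes down -- stop the active NameNode process, the host stays up
su - hdfs -c 'hdfs --daemon stop namenode'   # run on the active NameNode host
hdfs haadmin -getServiceState nn1            # nn1/nn2 are placeholder service IDs
hdfs haadmin -getServiceState nn2            # one of these should now report "active"

# Case 2: the whole server goes down -- reboot the active NameNode host,
# then run the same -getServiceState checks from the surviving node;
# the ZKFC there should have fenced the old active and promoted the standby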
07-06-2020 02:16 AM
@Madhur: Thanks for the update and for sharing the links! But I am looking for answers to my questions below:
1. When the active NameNode server is rebooted, will the standby NameNode not become active? Is this expected, or did HA not work in our cluster?
2. Is HA expected to work only between the active and standby NameNode daemons?
07-05-2020 11:19 AM
Hi Team, I came across a scenario where HA failed when the active NameNode server was rebooted. The master node running the active NameNode was accidentally rebooted. I expected the standby NameNode to become active and continue operations, but it did not, and HDFS went completely down. So my questions are:
1. When the active NameNode server is rebooted, will the standby NameNode not become active? Is this expected, or did HA not work in our cluster?
2. Is HA expected to work only between the active and standby NameNode daemons?
Thanks in advance!!
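For anyone hitting the same symptom, a sensible first check is whether automatic failover (ZKFC) is actually enabled; without it, rebooting the active host leaves the standby in standby indefinitely. A hedged sketch using the standard HDFS properties and commands (nn1 is a placeholder for one of your dfs.ha.namenodes IDs):

hdfs getconf -confKey dfs.ha.automatic-failover.enabled   # must be true for automatic failover
hdfs getconf -confKey ha.zookeeper.quorum                 # the ZK ensemble the ZKFCs use
ps -ef | grep -i [z]kfc                                   # a DFSZKFailoverController must run on each NameNode host
hdfs haadmin -getServiceState nn1                         # nn1: placeholder service ID

If ZooKeeper or the ZKFCs were down at reboot time, no automatic transition happens even with the property set to true.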
07-05-2020 11:01 AM
@Scharan, after changing the setting to true I am still facing the same issue. Can you please point me to a good article or document on installing and configuring Kerberos and enabling Kerberos from Ambari?
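In case a condensed checklist helps while looking for a full document, a minimal MIT KDC setup on RHEL/CentOS usually looks roughly like this (a sketch only; the realm BMS.COM is taken from this thread, and the paths are the stock package locations):

yum install -y krb5-server krb5-workstation krb5-libs
# edit /etc/krb5.conf and /var/kerberos/krb5kdc/kdc.conf for your realm
kdb5_util create -s -r BMS.COM                  # create the KDC database
kadmin.local -q "addprinc admin/admin@BMS.COM"  # admin principal for Ambari to use
systemctl enable --now krb5kdc kadmin
# then in Ambari: Admin > Kerberos > Enable Kerberos, choosing "Existing MIT KDC"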
06-26-2020 08:41 AM
I am not getting any help on this post. Any reason? Is there anything missing in my post? Please help me with a solution. @Scharan
06-05-2020 06:50 AM
@Scharan,
Java version: 1.8.0_252
Java path in .bashrc: export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk/
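Since the exact OpenJDK build can matter for Kerberos behavior, it may be worth confirming which JVM the Hadoop daemons actually pick up, not just the shell default. A hedged sketch (the hadoop-env.sh path is the usual HDP location and may differ on your install):

java -version 2>&1 | head -n1                      # the shell's default JVM
readlink -f "$(which java)"                        # resolve the alternatives symlink to the real JDK
grep -m1 JAVA_HOME /etc/hadoop/conf/hadoop-env.sh  # what the daemons are started with (path assumed)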
06-04-2020 08:41 AM
Hi Team,
I am currently using HDP 3.0 and Ambari 2.7.3. I have enabled Kerberos from Ambari.
The KDC is installed and configured. I am able to kinit and create principals.
I tried the following:
kinit -kt /etc/security/keytabs/nn.service.keytab nn/mastern1.bms.com@BMS.COM

[root@mastern1 ~]# klist -e
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: nn/mastern1.bms.com@BMS.COM

Valid starting       Expires              Service principal
06/04/2020 20:19:07  06/05/2020 20:19:07  krbtgt/BMS.COM@BMS.COM
        Etype (skey, tkt): aes256-cts-hmac-sha1-96, aes256-cts-hmac-sha1-96
[root@mastern1 ~]# cat /etc/krb5.conf
[libdefaults]
  renew_lifetime = 7d
  forwardable = true
  default_realm = BMS.COM
  ticket_lifetime = 24h
  dns_lookup_realm = false
  dns_lookup_kdc = false
  default_ccache_name = /tmp/krb5cc_%{uid}
  #default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
  #default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
  udp_preference_limit = 1

[domain_realm]
  bms.com = BMS.COM

[logging]
  default = FILE:/var/log/krb5kdc.log
  admin_server = FILE:/var/log/kadmind.log
  kdc = FILE:/var/log/krb5kdc.log

[realms]
  BMS.COM = {
    admin_server = mastern1.bms.com
    kdc = mastern1.bms.com
  }
The ERROR that I see is below:
STARTUP_MSG: java = 1.8.0_252
************************************************************/
2020-06-04 20:13:01,750 INFO namenode.NameNode (LogAdapter.java:info(51)) - registered UNIX signal handlers for [TERM, HUP, INT]
2020-06-04 20:13:02,322 INFO namenode.NameNode (NameNode.java:createNameNode(1583)) - createNameNode []
2020-06-04 20:13:03,145 INFO impl.MetricsConfig (MetricsConfig.java:loadFirst(118)) - Loaded properties from hadoop-metrics2.properties
2020-06-04 20:13:05,390 INFO timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(85)) - Initializing Timeline metrics sink.
2020-06-04 20:13:05,390 INFO timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(105)) - Identified hostname = mastern1.bms.com, serviceName = namenode
2020-06-04 20:13:06,516 WARN availability.MetricCollectorHAHelper (MetricCollectorHAHelper.java:findLiveCollectorHostsFromZNode(90)) - Unable to connect to zookeeper.
org.apache.ambari.metrics.sink.relocated.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /ambari-metrics-cluster
    at org.apache.ambari.metrics.sink.relocated.zookeeper.KeeperException.create(KeeperException.java:99)
    at org.apache.ambari.metrics.sink.relocated.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.ambari.metrics.sink.relocated.zookeeper.ZooKeeper.exists(ZooKeeper.java:1909)
    at org.apache.ambari.metrics.sink.relocated.zookeeper.ZooKeeper.exists(ZooKeeper.java:1937)
    at org.apache.hadoop.metrics2.sink.timeline.availability.MetricCollectorHAHelper.findLiveCollectorHostsFromZNode(MetricCollectorHAHelper.java:77)
    at org.apache.hadoop.metrics2.sink.timeline.AbstractTimelineMetricsSink.findPreferredCollectHost(AbstractTimelineMetricsSink.java:540)
    at org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink.init(HadoopTimelineMetricsSink.java:125)
    at org.apache.hadoop.metrics2.impl.MetricsConfig.getPlugin(MetricsConfig.java:207)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.newSink(MetricsSystemImpl.java:531)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configureSinks(MetricsSystemImpl.java:503)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:479)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:188)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:163)
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:62)
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:58)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1642)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
2020-06-04 20:13:07,547 INFO timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(133)) - No suitable collector found.
2020-06-04 20:13:07,551 INFO timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(185)) - RPC port properties configured: {8020=client}
2020-06-04 20:13:07,618 INFO impl.MetricsSinkAdapter (MetricsSinkAdapter.java:start(204)) - Sink timeline started
2020-06-04 20:13:08,229 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(374)) - Scheduled Metric snapshot period at 10 second(s).
2020-06-04 20:13:08,229 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:start(191)) - NameNode metrics system started
2020-06-04 20:13:08,480 INFO namenode.NameNodeUtils (NameNodeUtils.java:getClientNamenodeAddress(79)) - fs.defaultFS is hdfs://mastern1.bms.com:8020
2020-06-04 20:13:08,480 INFO namenode.NameNode (NameNode.java:<init>(928)) - Clients should use mastern1.bms.com:8020 to access this namenode/service.
2020-06-04 20:13:10,005 ERROR namenode.NameNode (NameNode.java:main(1715)) - Failed to start namenode.
org.apache.hadoop.security.KerberosAuthException: failure to login: for principal: nn/mastern1.bms.com@BMS.COM from keytab /etc/security/keytabs/nn.service.keytab javax.security.auth.login.LoginException: Message stream modified (41)
    at org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847)
    at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytabAndReturnUGI(UserGroupInformation.java:1215)
    at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:1008)
    at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:313)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser(NameNode.java:661)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:680)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
Caused by: javax.security.auth.login.LoginException: Message stream modified (41)
    at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:808)
    at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:618)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
    at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
    at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
    at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
    at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
    at org.apache.hadoop.security.UserGroupInformation$HadoopLoginContext.login(UserGroupInformation.java:1926)
    at org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1837)
    ... 9 more
Caused by: KrbException: Message stream modified (41)
    at sun.security.krb5.KrbKdcRep.check(KrbKdcRep.java:101)
    at sun.security.krb5.KrbAsRep.decrypt(KrbAsRep.java:159)
    at sun.security.krb5.KrbAsRep.decryptUsingKeyTab(KrbAsRep.java:121)
    at sun.security.krb5.KrbAsReqBuilder.resolve(KrbAsReqBuilder.java:308)
    at sun.security.krb5.KrbAsReqBuilder.action(KrbAsReqBuilder.java:447)
    at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:780)
    ... 23 more
2020-06-04 20:13:10,011 INFO util.ExitUtil (ExitUtil.java:terminate(210)) - Exiting with status 1: org.apache.hadoop.security.KerberosAuthException: failure to login: for principal: nn/mastern1.bms.com@BMS.COM from keytab /etc/security/keytabs/nn.service.keytab javax.security.auth.login.LoginException: Message stream modified (41)
2020-06-04 20:13:10,132 INFO namenode.NameNode (LogAdapter.java:info(51)) - SHUTDOWN_MSG:
When I start the NameNode service from Ambari, below is the message that I see:
2020-06-04 20:13:01,851 - Waiting for this NameNode to leave Safemode due to the following conditions: HA: False, isActive: True, upgradeType: None
2020-06-04 20:13:01,852 - Waiting up to 19 minutes for the NameNode to leave Safemode...
2020-06-04 20:13:01,852 - Execute['/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://mastern1.bms.com:8020 -safemode get | grep 'Safe mode is OFF''] {'logoutput': True, 'tries': 115, 'user': 'hdfs', 'try_sleep': 10}
safemode: Call From mastern1.bms.com/192.168.0.109 to mastern1.bms.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
2020-06-04 20:13:14,639 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://mastern1.bms.com:8020 -safemode get | grep 'Safe mode is OFF'' returned 1. safemode: Call From mastern1.bms.com/192.168.0.109 to mastern1.bms.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
safemode: Call From mastern1.bms.com/192.168.0.109 to mastern1.bms.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Please help me in fixing the issue.
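One possible lead, offered tentatively rather than as a confirmed fix: "Message stream modified (41)" from a Java Kerberos login is a known symptom with newer OpenJDK 8 builds (roughly 1.8.0_242 and later, which would include the 1.8.0_252 shown in the log) when krb5.conf contains a renew_lifetime setting. If that is what is happening here, commenting the line out on the NameNode host and restarting is a low-risk test:

cp /etc/krb5.conf /etc/krb5.conf.bak                   # back up first
sed -i 's/^\(\s*renew_lifetime\)/#\1/' /etc/krb5.conf  # comment out renew_lifetime = 7d
# then restart the NameNode from Ambari and re-check the NameNode log

Note that if Ambari manages krb5.conf (the "Manage Kerberos client krb5.conf" option), the change must be made in Ambari's Kerberos configuration instead, or it will be overwritten.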
06-02-2020 12:04 PM
@stevenmatison: Thanks for the update!! This issue got fixed by adding a new line to the pg_hba.conf file:
host all hive 192.168.0.109/32 trust
and then restarting PostgreSQL.
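For anyone landing here later, the fix amounts to appending a host rule and restarting (or reloading) PostgreSQL. A sketch only: /var/lib/pgsql/data is the stock data directory and may differ on your install, and an md5 (password) rule is generally safer than trust:

echo 'host all hive 192.168.0.109/32 trust' >> /var/lib/pgsql/data/pg_hba.conf
systemctl restart postgresql   # a configuration reload is also sufficient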
05-25-2020 11:45 AM
Hi Team,
Ambari: 2.7.3
HDP: 3.0
I am trying to install the Hive service with an existing PostgreSQL database, but it is failing with the ERROR and exception below.
Can anyone advise me on where I went wrong?
Created user 'hive' with password 'admin'
Created database 'hive'
GRANT ALL PRIVILEGES ON DATABASE hive TO hive
Downloaded the JDBC driver jar file and ran setup:
ambari-server setup --jdbc-db=postgres --jdbc-driver=/usr/share/java/postgresql-jdbc.jar
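For completeness, the user/database steps listed above correspond roughly to the following psql session (a sketch of the intent, not a verbatim record of what was run):

sudo -u postgres psql <<'SQL'
CREATE USER hive WITH PASSWORD 'admin';
CREATE DATABASE hive OWNER hive;
GRANT ALL PRIVILEGES ON DATABASE hive TO hive;
SQL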
Below is the ERROR message
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 211, in <module>
    HiveMetastore().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 61, in start
    create_metastore_schema() # execute without config lock
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", line 378, in create_metastore_schema
    user = params.hive_user
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
    returns=self.resource.returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-server2-hive2/bin/schematool -initSchema -dbType postgres -userName hive -passWord [PROTECTED] -verbose' returned 1.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.5.1175-1/hive2/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.5.1175-1/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL: jdbc:postgresql://mastern1.bms.com:5432/hive
Metastore Connection Driver : org.postgresql.Driver
Metastore connection User: hive
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
Underlying cause: org.postgresql.util.PSQLException : FATAL: no pg_hba.conf entry for host "192.168.0.109", user "hive", database "hive", SSL off
SQL Error code: 0
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
    at org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:80)
    at org.apache.hive.beeline.HiveSchemaTool.getConnectionToMetastore(HiveSchemaTool.java:133)
    at org.apache.hive.beeline.HiveSchemaTool.testConnectionToMetastore(HiveSchemaTool.java:187)
    at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:291)
    at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:277)
    at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:526)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host "192.168.0.109", user "hive", database "hive", SSL off
    at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:525)
    at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:146)
    at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:197)
    at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
    at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:211)
    at org.postgresql.Driver.makeConnection(Driver.java:459)
    at org.postgresql.Driver.connect(Driver.java:261)
    at java.sql.DriverManager.getConnection(DriverManager.java:664)
    at java.sql.DriverManager.getConnection(DriverManager.java:247)
    at org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:76)
    ... 11 more
    Suppressed: org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host "192.168.0.109", user "hive", database "hive", SSL off
        at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:525)
        at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:146)
        at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:206)
        ... 18 more
*** schemaTool failed ***
stdout: /var/lib/ambari-agent/data/output-150.txt
2020-05-25 23:30:45,946 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.1175-1 -> 2.6.5.1175-1
2020-05-25 23:30:46,079 - Using hadoop conf dir: /usr/hdp/2.6.5.1175-1/hadoop/conf
2020-05-25 23:30:47,387 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.1175-1 -> 2.6.5.1175-1
2020-05-25 23:30:47,452 - Using hadoop conf dir: /usr/hdp/2.6.5.1175-1/hadoop/conf
2020-05-25 23:30:47,456 - Group['hdfs'] {}
2020-05-25 23:30:47,459 - Group['hadoop'] {}
2020-05-25 23:30:47,460 - Group['users'] {}
2020-05-25 23:30:47,533 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-05-25 23:30:47,536 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-05-25 23:30:47,540 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-05-25 23:30:47,542 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2020-05-25 23:30:47,544 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2020-05-25 23:30:47,572 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2020-05-25 23:30:47,574 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-05-25 23:30:47,576 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-05-25 23:30:47,579 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-05-25 23:30:47,581 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2020-05-25 23:30:47,632 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2020-05-25 23:30:47,742 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2020-05-25 23:30:47,743 - Group['hdfs'] {}
2020-05-25 23:30:47,744 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
2020-05-25 23:30:47,746 - FS Type: HDFS
2020-05-25 23:30:47,747 - Directory['/etc/hadoop'] {'mode': 0755}
2020-05-25 23:30:47,890 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2020-05-25 23:30:47,891 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2020-05-25 23:30:48,009 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2020-05-25 23:30:48,149 - Skipping Execute[('setenforce', '0')] due to not_if
2020-05-25 23:30:48,150 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2020-05-25 23:30:48,153 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2020-05-25 23:30:48,153 - Changing owner for /var/run/hadoop from 1004 to root
2020-05-25 23:30:48,154 - Changing group for /var/run/hadoop from 1002 to root
2020-05-25 23:30:48,154 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'cd_access': 'a'}
2020-05-25 23:30:48,155 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2020-05-25 23:30:48,187 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2020-05-25 23:30:48,191 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2020-05-25 23:30:48,259 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2020-05-25 23:30:48,335 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2020-05-25 23:30:48,336 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2020-05-25 23:30:48,339 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2020-05-25 23:30:48,367 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2020-05-25 23:30:48,396 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2020-05-25 23:30:48,423 - Skipping unlimited key JCE policy check and setup since the Java VM is not managed by Ambari
2020-05-25 23:30:49,053 - Using hadoop conf dir: /usr/hdp/2.6.5.1175-1/hadoop/conf
2020-05-25 23:30:49,098 - call['ambari-python-wrap /usr/bin/hdp-select status hive-server2'] {'timeout': 20}
2020-05-25 23:30:49,171 - call returned (0, 'hive-server2 - 2.6.5.1175-1')
2020-05-25 23:30:49,173 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.1175-1 -> 2.6.5.1175-1
2020-05-25 23:30:49,305 - File['/var/lib/ambari-agent/cred/lib/CredentialUtil.jar'] {'content': DownloadSource('http://mastern1.bms.com:8080/resources/CredentialUtil.jar'), 'mode': 0755}
2020-05-25 23:30:49,307 - Not downloading the file from http://mastern1.bms.com:8080/resources/CredentialUtil.jar, because /var/lib/ambari-agent/tmp/CredentialUtil.jar already exists
2020-05-25 23:30:51,924 - Directory['/etc/hive'] {'mode': 0755}
2020-05-25 23:30:51,924 - Directories to fill with configs: [u'/usr/hdp/current/hive-metastore/conf', u'/usr/hdp/current/hive-metastore/conf/conf.server']
2020-05-25 23:30:51,925 - Directory['/etc/hive/2.6.5.1175-1/0'] {'owner': 'hive', 'group': 'hadoop', 'create_parents': True, 'mode': 0755}
2020-05-25 23:30:51,926 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/etc/hive/2.6.5.1175-1/0', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2020-05-25 23:30:51,959 - Generating config: /etc/hive/2.6.5.1175-1/0/mapred-site.xml
2020-05-25 23:30:51,960 - File['/etc/hive/2.6.5.1175-1/0/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2020-05-25 23:30:52,086 - File['/etc/hive/2.6.5.1175-1/0/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2020-05-25 23:30:52,086 - File['/etc/hive/2.6.5.1175-1/0/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2020-05-25 23:30:52,090 - File['/etc/hive/2.6.5.1175-1/0/hive-exec-log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2020-05-25 23:30:52,096 - File['/etc/hive/2.6.5.1175-1/0/hive-log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2020-05-25 23:30:52,098 - File['/etc/hive/2.6.5.1175-1/0/parquet-logging.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2020-05-25 23:30:52,099 - Directory['/etc/hive/2.6.5.1175-1/0/conf.server'] {'owner': 'hive', 'group': 'hadoop', 'create_parents': True, 'mode': 0700}
2020-05-25 23:30:52,100 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/etc/hive/2.6.5.1175-1/0/conf.server', 'mode': 0600, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2020-05-25 23:30:52,126 - Generating config: /etc/hive/2.6.5.1175-1/0/conf.server/mapred-site.xml
2020-05-25 23:30:52,127 - File['/etc/hive/2.6.5.1175-1/0/conf.server/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0600, 'encoding': 'UTF-8'}
2020-05-25 23:30:52,289 - File['/etc/hive/2.6.5.1175-1/0/conf.server/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2020-05-25 23:30:52,289 - File['/etc/hive/2.6.5.1175-1/0/conf.server/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2020-05-25 23:30:52,293 - File['/etc/hive/2.6.5.1175-1/0/conf.server/hive-exec-log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2020-05-25 23:30:52,304 - File['/etc/hive/2.6.5.1175-1/0/conf.server/hive-log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2020-05-25 23:30:52,305 - File['/etc/hive/2.6.5.1175-1/0/conf.server/parquet-logging.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2020-05-25 23:30:52,306 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-site.jceks'] {'content': StaticFile('/var/lib/ambari-agent/cred/conf/hive_metastore/hive-site.jceks'), 'owner': 'hive', 'group': 'hadoop', 'mode': 0640}
2020-05-25 23:30:52,307 - Writing File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-site.jceks'] because contents don't match
2020-05-25 23:30:52,308 - XmlConfig['hive-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf/conf.server', 'mode': 0600, 'configuration_attributes': {u'hidden': {u'javax.jdo.option.ConnectionPassword': u'HIVE_CLIENT,WEBHCAT_SERVER,HCAT,CONFIG_DOWNLOAD'}}, 'owner': 'hive', 'configurations': ...}
2020-05-25 23:30:52,326 - Generating config: /usr/hdp/current/hive-metastore/conf/conf.server/hive-site.xml
2020-05-25 23:30:52,328 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0600, 'encoding': 'UTF-8'}
2020-05-25 23:30:52,698 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-env.sh'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2020-05-25 23:30:52,700 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2020-05-25 23:30:52,710 - File['/etc/security/limits.d/hive.conf'] {'content': Template('hive.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2020-05-25 23:30:52,711 - File['/usr/lib/ambari-agent/DBConnectionVerification.jar'] {'content': DownloadSource('http://mastern1.bms.com:8080/resources/DBConnectionVerification.jar'), 'mode': 0644}
2020-05-25 23:30:52,711 - Not downloading the file from http://mastern1.bms.com:8080/resources/DBConnectionVerification.jar, because /var/lib/ambari-agent/tmp/DBConnectionVerification.jar already exists
2020-05-25 23:30:52,712 - Directory['/var/run/hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2020-05-25 23:30:52,713 - Directory['/var/log/hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2020-05-25 23:30:52,714 - Directory['/var/lib/hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2020-05-25 23:30:52,715 - XmlConfig['hivemetastore-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf/conf.server', 'mode': 0600, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': {u'hive.metastore.metrics.enabled': u'true', u'hive.server2.metrics.enabled': u'true', u'hive.service.metrics.hadoop2.component': u'hivemetastore', u'hive.service.metrics.reporter': u'HADOOP2'}}
2020-05-25 23:30:52,750 - Generating config: /usr/hdp/current/hive-metastore/conf/conf.server/hivemetastore-site.xml
2020-05-25 23:30:52,757 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hivemetastore-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0600, 'encoding': 'UTF-8'}
2020-05-25 23:30:52,792 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hadoop-metrics2-hivemetastore.properties'] {'content': Template('hadoop-metrics2-hivemetastore.properties.j2'), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2020-05-25 23:30:52,793 - File['/var/lib/ambari-agent/tmp/start_metastore_script'] {'content': StaticFile('startMetastore.sh'), 'mode': 0755}
2020-05-25 23:30:52,796 - Directory['/tmp/hive'] {'owner': 'hive', 'create_parents': True, 'mode': 0777}
2020-05-25 23:30:52,797 - Execute['export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-server2-hive2/bin/schematool -initSchema -dbType postgres -userName hive -passWord [PROTECTED] -verbose'] {'not_if': u"ambari-sudo.sh su hive -l -s /bin/bash -c 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-server2-hive2/bin/schematool -info -dbType postgres -userName hive -passWord [PROTECTED] -verbose'", 'user': 'hive'}
Command failed after 1 tries
Thanks in advance!!
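A quick way to reproduce what schematool is doing, independently of Ambari, is to attempt the same TCP connection as the hive user (hedged: this assumes the postgresql client package is installed on the Ambari host):

psql -h mastern1.bms.com -p 5432 -U hive -d hive -c 'SELECT 1;'
# seeing the same "no pg_hba.conf entry" FATAL here confirms the problem is on the
# PostgreSQL side (pg_hba.conf / listen_addresses), not in the Hive or Ambari setup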
04-11-2019 01:01 PM
@Jay Kumar SenSharma I have figured it out. Thank you!! I also need to know one more thing: is the "Enable Service Auto Start" feature safe to configure in Ambari in any environment? Are there any cons to having it enabled? If we enable auto start, will it affect services that are flagged as needing a restart? Thanks in advance!!!