Member since: 11-08-2017
Posts: 14
Kudos Received: 1
Solutions: 0
06-11-2019
01:17 PM
@Mushtaq Rizvi I believe what you're doing above just replaces the blanks with the string "None", which still consumes memory. I have a scenario: I want to replace whitespace-only values like the ones below with real nulls, since whitespace strings take up memory whereas nulls do not. Can you suggest how to do this?

Current data:
|Extension|
|gif      |
|         |
|gif      |
|         |
|html     |

What I want:
|Extension|
|gif      |
|null     |
|gif      |
|null     |
|html     |
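A rough sketch of what I have in mind, in PySpark (the DataFrame df and the Extension column below are just stand-ins for my real data, and I haven't verified this is the best approach):

# Minimal sketch: turn whitespace-only strings into real nulls.
# 'df' and the 'Extension' column are placeholders for the actual data.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("blank-to-null").getOrCreate()

df = spark.createDataFrame(
    [("gif",), ("   ",), ("gif",), ("",), ("html",)], ["Extension"]
)

# Trim each value; if nothing is left, store a real null instead of whitespace.
cleaned = df.withColumn(
    "Extension",
    F.when(F.trim(F.col("Extension")) == "", None).otherwise(F.col("Extension")),
)
cleaned.show()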
06-21-2018
02:28 PM
I removed Kerberos completely from Ambari, but when I try to restart the services they throw errors. I checked the logs and found that the restart is failing while performing 'kinit'. My question is: since I removed Kerberos, it shouldn't be generating a ticket automatically, so some configuration must still be triggering this. Can anyone help with this issue? I restarted the VMs and ran kdestroy as well, with no luck. Below is the error when I try to restart the services.

File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'kinit -kt /etc/security/keytabs/smokeuser.headless.keytab ambari-qa-hwx_tvx@FHILLS.LOCAL;' returned 127. bash: kinit: command not found
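For what it's worth, this is roughly how I have been checking whether Ambari still thinks the cluster is Kerberized; the host, credentials and cluster name are placeholders for my environment, so treat it as a sketch rather than a recipe:

# Sketch: ask Ambari's REST API for the cluster's security_type.
# AMBARI_URL, the admin credentials and CLUSTER_NAME are placeholders.
import requests

AMBARI_URL = "http://ambari-host.example.com:8080"
CLUSTER_NAME = "mycluster"

resp = requests.get(
    AMBARI_URL + "/api/v1/clusters/" + CLUSTER_NAME,
    params={"fields": "Clusters/security_type"},
    auth=("admin", "admin"),
)
resp.raise_for_status()
# I would expect "NONE" after disabling Kerberos; if it still reports
# "KERBEROS", something in the cluster config is still security-enabled,
# which might explain why the restart keeps trying to run kinit.
print(resp.json()["Clusters"]["security_type"])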
Labels:
- Apache Ambari
- Kerberos
- Security
06-21-2018
01:59 PM
@Jay Kumar SenSharma I have a question that is quite the opposite of this one. I removed Kerberos completely from Ambari, but when I try to restart the services they throw errors. I checked the logs and found that the restart is failing while performing 'kinit'. My question is: since I removed Kerberos, it shouldn't be generating a ticket automatically, so some configuration must still be triggering this. Can you help with this issue? Below is the error when I try to restart the services.

File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'kinit -kt /etc/security/keytabs/smokeuser.headless.keytab ambari-qa-hwx_tvx@FHILLS.LOCAL;' returned 127. bash: kinit: command not found
04-04-2018
04:20 PM
Hello, I have performed all the steps for creating the user and granting privileges, but it still reports the error below. If I try to log in manually as that user, 'hive'@'metastore_fqdn', I can log in with the password provided, yet when I restart Hive it still fails with the same error. Any help would be appreciated.

File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-server2-hive2/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED] -verbose' returned 1. SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.2.0-205/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.2.0-205/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL: jdbc:mysql://vm-hadoop-x1.fhills.local/hive
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
Underlying cause: java.sql.SQLException : Access denied for user 'hive'@'vm-hadoop-s3.fhills.local' (using password: YES)
SQL Error code: 1045
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
at org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:80)
at org.apache.hive.beeline.HiveSchemaTool.getConnectionToMetastore(HiveSchemaTool.java:133)
at org.apache.hive.beeline.HiveSchemaTool.testConnectionToMetastore(HiveSchemaTool.java:187)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:291)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:277)
at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:526)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: java.sql.SQLException: Access denied for user 'hive'@'vm-hadoop-s3.fhills.local' (using password: YES)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1078)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4187)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4119)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:927)
at com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(MysqlIO.java:1709)
at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1252)
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2488)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2521)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2306)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:839)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:49)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:421)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:350)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:76)
... 11 more
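Given that the error names vm-hadoop-s3.fhills.local as the connecting host, I am now wondering whether the grant needs to cover that host specifically rather than only the metastore FQDN. Something like the following is what I plan to try next (shown as a Python sketch; the root and hive passwords are placeholders, mysql-connector-python is assumed to be installed, and the IDENTIFIED BY form of GRANT assumes MySQL 5.x):

# Sketch: allow the 'hive' user to reach the metastore DB from the host
# named in the error. Credentials below are placeholders, not real values.
import mysql.connector  # assumes the mysql-connector-python package

conn = mysql.connector.connect(
    host="vm-hadoop-x1.fhills.local", user="root", password="ROOT_PASSWORD"
)
cur = conn.cursor()
cur.execute(
    "GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'vm-hadoop-s3.fhills.local' "
    "IDENTIFIED BY 'HIVE_PASSWORD'"
)
cur.execute("FLUSH PRIVILEGES")
conn.close()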
04-03-2018
09:18 PM
@PRAFUL DASH Thanks, it works for me.
03-17-2018
06:11 PM
Thank you @emaxwell, it worked for me.
03-13-2018
06:54 PM
1 Kudo
Thanks, @Aditya Sirna, I've just tried that and it worked. Thanks @bmasna, it worked for me.
03-13-2018
02:23 PM
@Aditya Sirna Here I've similar issue but with zookeeper. I've tried the yum clean all and tried to install zookeeper on the node again using yum. But dint work for me. Please some one help. I can't install zookeeper-client on one of the nodes. stderr: /var/lib/ambari-agent/data/errors-631.txt Traceback (most recent call last): File "/var/lib/ambari-agent/cache/common-services/ZOOKEEPER/3.4.5/package/scripts/zookeeper_client.py", line 79, in <module> ZookeeperClient().execute() File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 375, in execute method(env) File "/var/lib/ambari-agent/cache/common-services/ZOOKEEPER/3.4.5/package/scripts/zookeeper_client.py", line 59, in install self.install_packages(env) File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 811, in install_packages name = self.format_package_name(package['name']) File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 546, in format_package_name raise Fail("Cannot match package for regexp name {0}. Available packages: {1}".format(name, self.available_packages_in_repos)) resource_management.core.exceptions.Fail: Cannot match package for regexp name zookeeper_${stack_version}-server. Available packages: ['accumulo', 'accumulo-conf-standalone', 'accumulo-source', 'accumulo-test', 'accumulo_2_5_3_0_37', 'accumulo_2_5_3_0_37-conf-standalone', 'accumulo_2_5_3_0_37-source', 'accumulo_2_5_3_0_37-test', 'atlas-metadata', 'atlas-metadata-hive-plugin', 'atlas-metadata_2_5_3_0_37', 'bigtop-tomcat', 'datafu', 'falcon', 'falcon-doc', 'falcon_2_5_3_0_37', 'falcon_2_5_3_0_37-doc', 'flume', 'flume-agent', 'flume_2_5_3_0_37', 'flume_2_5_3_0_37-agent', 'hadoop', 'hadoop-client', 'hadoop-conf-pseudo', 'hadoop-doc', 'hadoop-hdfs', 'hadoop-hdfs-datanode', 'hadoop-hdfs-fuse', 'hadoop-hdfs-journalnode', 'hadoop-hdfs-namenode', 'hadoop-hdfs-secondarynamenode', 'hadoop-hdfs-zkfc', 'hadoop-httpfs', 'hadoop-httpfs-server', 'hadoop-libhdfs', 'hadoop-mapreduce', 'hadoop-mapreduce-historyserver', 'hadoop-source', 'hadoop-yarn', 'hadoop-yarn-nodemanager', 'hadoop-yarn-proxyserver', 'hadoop-yarn-resourcemanager', 'hadoop-yarn-timelineserver', 'hadoop_2_5_3_0_37-conf-pseudo', 'hadoop_2_5_3_0_37-doc', 'hadoop_2_5_3_0_37-hdfs-datanode', 'hadoop_2_5_3_0_37-hdfs-fuse', 'hadoop_2_5_3_0_37-hdfs-journalnode', 'hadoop_2_5_3_0_37-hdfs-namenode', 'hadoop_2_5_3_0_37-hdfs-secondarynamenode', 'hadoop_2_5_3_0_37-hdfs-zkfc', 'hadoop_2_5_3_0_37-httpfs', 'hadoop_2_5_3_0_37-httpfs-server', 'hadoop_2_5_3_0_37-mapreduce-historyserver', 'hadoop_2_5_3_0_37-source', 'hadoop_2_5_3_0_37-yarn-nodemanager', 'hadoop_2_5_3_0_37-yarn-proxyserver', 'hadoop_2_5_3_0_37-yarn-resourcemanager', 'hadoop_2_5_3_0_37-yarn-timelineserver', 'hadooplzo', 'hadooplzo-native', 'hadooplzo_2_5_3_0_37', 'hadooplzo_2_5_3_0_37-native', 'hbase', 'hbase-doc', 'hbase-master', 'hbase-regionserver', 'hbase-rest', 'hbase-thrift', 'hbase-thrift2', 'hbase_2_5_3_0_37', 'hbase_2_5_3_0_37-doc', 'hbase_2_5_3_0_37-master', 'hbase_2_5_3_0_37-regionserver', 'hbase_2_5_3_0_37-rest', 'hbase_2_5_3_0_37-thrift', 'hbase_2_5_3_0_37-thrift2', 'hive', 'hive-hcatalog', 'hive-hcatalog-server', 'hive-jdbc', 'hive-metastore', 'hive-server', 'hive-server2', 'hive-webhcat', 'hive-webhcat-server', 'hive2', 'hive2-jdbc', 'hive_2_5_3_0_37-hcatalog-server', 'hive_2_5_3_0_37-metastore', 'hive_2_5_3_0_37-server', 'hive_2_5_3_0_37-server2', 'hive_2_5_3_0_37-webhcat-server', 'hue', 'hue-beeswax', 'hue-common', 'hue-hcatalog', 
'hue-oozie', 'hue-pig', 'hue-server', 'kafka', 'kafka_2_5_3_0_37', 'knox', 'knox_2_5_3_0_37', 'livy', 'livy_2_5_3_0_37', 'mahout', 'mahout-doc', 'mahout_2_5_3_0_37', 'mahout_2_5_3_0_37-doc', 'oozie', 'oozie-client', 'oozie_2_5_3_0_37', 'oozie_2_5_3_0_37-client', 'phoenix', 'phoenix_2_5_3_0_37', 'pig', 'ranger-admin', 'ranger-atlas-plugin', 'ranger-hbase-plugin', 'ranger-hdfs-plugin', 'ranger-hive-plugin', 'ranger-kafka-plugin', 'ranger-kms', 'ranger-knox-plugin', 'ranger-solr-plugin', 'ranger-storm-plugin', 'ranger-tagsync', 'ranger-usersync', 'ranger-yarn-plugin', 'ranger_2_5_3_0_37-admin', 'ranger_2_5_3_0_37-atlas-plugin', 'ranger_2_5_3_0_37-hbase-plugin', 'ranger_2_5_3_0_37-kafka-plugin', 'ranger_2_5_3_0_37-kms', 'ranger_2_5_3_0_37-knox-plugin', 'ranger_2_5_3_0_37-solr-plugin', 'ranger_2_5_3_0_37-storm-plugin', 'ranger_2_5_3_0_37-tagsync', 'ranger_2_5_3_0_37-usersync', 'slider', 'spark', 'spark-master', 'spark-python', 'spark-worker', 'spark-yarn-shuffle', 'spark2', 'spark2-master', 'spark2-python', 'spark2-worker', 'spark2-yarn-shuffle', 'spark2_2_5_3_0_37', 'spark2_2_5_3_0_37-master', 'spark2_2_5_3_0_37-python', 'spark2_2_5_3_0_37-worker', 'spark_2_5_3_0_37', 'spark_2_5_3_0_37-master', 'spark_2_5_3_0_37-python', 'spark_2_5_3_0_37-worker', 'sqoop', 'sqoop-metastore', 'sqoop_2_5_3_0_37', 'sqoop_2_5_3_0_37-metastore', 'storm', 'storm-slider-client', 'storm_2_5_3_0_37', 'tez', 'tez_hive2', 'zeppelin', 'zeppelin_2_5_3_0_37', 'zookeeper', 'zookeeper-server', 'R', 'R-core', 'R-core-devel', 'R-devel', 'R-java', 'R-java-devel', 'compat-readline5', 'epel-release', 'extjs', 'fping', 'ganglia-debuginfo', 'ganglia-devel', 'ganglia-gmetad', 'ganglia-gmond', 'ganglia-gmond-modules-python', 'ganglia-web', 'hadoop-lzo', 'hadoop-lzo-native', 'libRmath', 'libRmath-devel', 'libconfuse', 'libganglia', 'libgenders', 'lua-rrdtool', 'lucidworks-hdpsearch', 'lzo-debuginfo', 'lzo-devel', 'mysql-community-release', 'mysql-connector-java', 'nagios', 'nagios-debuginfo', 'nagios-devel', 'nagios-plugins', 'nagios-plugins-debuginfo', 'nagios-www', 'openblas', 'openblas-Rblas', 'openblas-devel', 'openblas-openmp', 'openblas-openmp64', 'openblas-openmp64_', 'openblas-serial64', 'openblas-serial64_', 'openblas-static', 'openblas-threads', 'openblas-threads64', 'openblas-threads64_', 'pdsh', 'perl-Crypt-DES', 'perl-Net-SNMP', 'perl-rrdtool', 'python-rrdtool', 'rrdtool', 'rrdtool-debuginfo', 'rrdtool-devel', 'ruby-rrdtool', 'snappy', 'snappy-devel', 'tcl-rrdtool', 'accumulo', 'accumulo-conf-standalone', 'accumulo-source', 'accumulo-test', 'accumulo_2_5_3_0_37', 'accumulo_2_5_3_0_37-conf-standalone', 'accumulo_2_5_3_0_37-source', 'accumulo_2_5_3_0_37-test', 'atlas-metadata', 'atlas-metadata-hive-plugin', 'atlas-metadata_2_5_3_0_37', 'bigtop-tomcat', 'datafu', 'falcon', 'falcon-doc', 'falcon_2_5_3_0_37', 'falcon_2_5_3_0_37-doc', 'flume', 'flume-agent', 'flume_2_5_3_0_37', 'flume_2_5_3_0_37-agent', 'hadoop', 'hadoop-client', 'hadoop-conf-pseudo', 'hadoop-doc', 'hadoop-hdfs', 'hadoop-hdfs-datanode', 'hadoop-hdfs-fuse', 'hadoop-hdfs-journalnode', 'hadoop-hdfs-namenode', 'hadoop-hdfs-secondarynamenode', 'hadoop-hdfs-zkfc', 'hadoop-httpfs', 'hadoop-httpfs-server', 'hadoop-libhdfs', 'hadoop-mapreduce', 'hadoop-mapreduce-historyserver', 'hadoop-source', 'hadoop-yarn', 'hadoop-yarn-nodemanager', 'hadoop-yarn-proxyserver', 'hadoop-yarn-resourcemanager', 'hadoop-yarn-timelineserver', 'hadoop_2_5_3_0_37-conf-pseudo', 'hadoop_2_5_3_0_37-doc', 'hadoop_2_5_3_0_37-hdfs-datanode', 'hadoop_2_5_3_0_37-hdfs-fuse', 
'hadoop_2_5_3_0_37-hdfs-journalnode', 'hadoop_2_5_3_0_37-hdfs-namenode', 'hadoop_2_5_3_0_37-hdfs-secondarynamenode', 'hadoop_2_5_3_0_37-hdfs-zkfc', 'hadoop_2_5_3_0_37-httpfs', 'hadoop_2_5_3_0_37-httpfs-server', 'hadoop_2_5_3_0_37-mapreduce-historyserver', 'hadoop_2_5_3_0_37-source', 'hadoop_2_5_3_0_37-yarn-nodemanager', 'hadoop_2_5_3_0_37-yarn-proxyserver', 'hadoop_2_5_3_0_37-yarn-resourcemanager', 'hadoop_2_5_3_0_37-yarn-timelineserver', 'hadooplzo', 'hadooplzo-native', 'hadooplzo_2_5_3_0_37', 'hadooplzo_2_5_3_0_37-native', 'hbase', 'hbase-doc', 'hbase-master', 'hbase-regionserver', 'hbase-rest', 'hbase-thrift', 'hbase-thrift2', 'hbase_2_5_3_0_37', 'hbase_2_5_3_0_37-doc', 'hbase_2_5_3_0_37-master', 'hbase_2_5_3_0_37-regionserver', 'hbase_2_5_3_0_37-rest', 'hbase_2_5_3_0_37-thrift', 'hbase_2_5_3_0_37-thrift2', 'hive', 'hive-hcatalog', 'hive-hcatalog-server', 'hive-jdbc', 'hive-metastore', 'hive-server', 'hive-server2', 'hive-webhcat', 'hive-webhcat-server', 'hive2', 'hive2-jdbc', 'hive_2_5_3_0_37-hcatalog-server', 'hive_2_5_3_0_37-metastore', 'hive_2_5_3_0_37-server', 'hive_2_5_3_0_37-server2', 'hive_2_5_3_0_37-webhcat-server', 'hue', 'hue-beeswax', 'hue-common', 'hue-hcatalog', 'hue-oozie', 'hue-pig', 'hue-server', 'kafka', 'kafka_2_5_3_0_37', 'knox', 'knox_2_5_3_0_37', 'livy', 'livy_2_5_3_0_37', 'mahout', 'mahout-doc', 'mahout_2_5_3_0_37', 'mahout_2_5_3_0_37-doc', 'oozie', 'oozie-client', 'oozie_2_5_3_0_37', 'oozie_2_5_3_0_37-client', 'phoenix', 'phoenix_2_5_3_0_37', 'pig', 'ranger-admin', 'ranger-atlas-plugin', 'ranger-hbase-plugin', 'ranger-hdfs-plugin', 'ranger-hive-plugin', 'ranger-kafka-plugin', 'ranger-kms', 'ranger-knox-plugin', 'ranger-solr-plugin', 'ranger-storm-plugin', 'ranger-tagsync', 'ranger-usersync', 'ranger-yarn-plugin', 'ranger_2_5_3_0_37-admin', 'ranger_2_5_3_0_37-atlas-plugin', 'ranger_2_5_3_0_37-hbase-plugin', 'ranger_2_5_3_0_37-kafka-plugin', 'ranger_2_5_3_0_37-kms', 'ranger_2_5_3_0_37-knox-plugin', 'ranger_2_5_3_0_37-solr-plugin', 'ranger_2_5_3_0_37-storm-plugin', 'ranger_2_5_3_0_37-tagsync', 'ranger_2_5_3_0_37-usersync', 'slider', 'spark', 'spark-master', 'spark-python', 'spark-worker', 'spark-yarn-shuffle', 'spark2', 'spark2-master', 'spark2-python', 'spark2-worker', 'spark2-yarn-shuffle', 'spark2_2_5_3_0_37', 'spark2_2_5_3_0_37-master', 'spark2_2_5_3_0_37-python', 'spark2_2_5_3_0_37-worker', 'spark_2_5_3_0_37', 'spark_2_5_3_0_37-master', 'spark_2_5_3_0_37-python', 'spark_2_5_3_0_37-worker', 'sqoop', 'sqoop-metastore', 'sqoop_2_5_3_0_37', 'sqoop_2_5_3_0_37-metastore', 'storm', 'storm-slider-client', 'storm_2_5_3_0_37', 'tez', 'tez_hive2', 'zeppelin', 'zeppelin_2_5_3_0_37', 'zookeeper', 'zookeeper-server', 'R', 'R-core', 'R-core-devel', 'R-devel', 'R-java', 'R-java-devel', 'compat-readline5', 'epel-release', 'extjs', 'fping', 'ganglia-debuginfo', 'ganglia-devel', 'ganglia-gmetad', 'ganglia-gmond', 'ganglia-gmond-modules-python', 'ganglia-web', 'hadoop-lzo', 'hadoop-lzo-native', 'libRmath', 'libRmath-devel', 'libconfuse', 'libganglia', 'libgenders', 'lua-rrdtool', 'lucidworks-hdpsearch', 'lzo-debuginfo', 'lzo-devel', 'mysql-community-release', 'mysql-connector-java', 'nagios', 'nagios-debuginfo', 'nagios-devel', 'nagios-plugins', 'nagios-plugins-debuginfo', 'nagios-www', 'openblas', 'openblas-Rblas', 'openblas-devel', 'openblas-openmp', 'openblas-openmp64', 'openblas-openmp64_', 'openblas-serial64', 'openblas-serial64_', 'openblas-static', 'openblas-threads', 'openblas-threads64', 'openblas-threads64_', 'pdsh', 'perl-Crypt-DES', 'perl-Net-SNMP', 'perl-rrdtool', 
'python-rrdtool', 'rrdtool', 'rrdtool-debuginfo', 'rrdtool-devel', 'ruby-rrdtool', 'snappy', 'snappy-devel', 'tcl-rrdtool', 'atlas-metadata_2_5_3_0_37-hive-plugin', 'bigtop-jsvc', 'datafu_2_5_3_0_37', 'hadoop_2_5_3_0_37', 'hadoop_2_5_3_0_37-client', 'hadoop_2_5_3_0_37-hdfs', 'hadoop_2_5_3_0_37-libhdfs', 'hadoop_2_5_3_0_37-mapreduce', 'hadoop_2_5_3_0_37-yarn', 'hdp-select', 'hive2_2_5_3_0_37', 'hive2_2_5_3_0_37-jdbc', 'hive_2_5_3_0_37', 'hive_2_5_3_0_37-hcatalog', 'hive_2_5_3_0_37-jdbc', 'hive_2_5_3_0_37-webhcat', 'pig_2_5_3_0_37', 'ranger_2_5_3_0_37-hdfs-plugin', 'ranger_2_5_3_0_37-hive-plugin', 'ranger_2_5_3_0_37-yarn-plugin', 'slider_2_5_3_0_37', 'spark2_2_5_3_0_37-yarn-shuffle', 'spark_2_5_3_0_37-yarn-shuffle', 'storm_2_5_3_0_37-slider-client', 'tez_2_5_3_0_37', 'tez_hive2_2_5_3_0_37', 'zookeeper_2_5_3_0_37', 'snappy-devel']stdout: /var/lib/ambari-agent/data/output-631.txt 2018-03-13 09:24:57,026 - Stack Feature Version Info: Cluster Stack=2.5, Command Stack=None, Command Version=None -> 2.5 2018-03-13 09:24:57,037 - Using hadoop conf dir: /usr/hdp/2.5.3.0-37/hadoop/conf 2018-03-13 09:24:57,040 - Group['hdfs'] {} 2018-03-13 09:24:57,042 - Group['hadoop'] {} 2018-03-13 09:24:57,043 - Group['users'] {} 2018-03-13 09:24:57,044 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2018-03-13 09:24:57,046 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2018-03-13 09:24:57,048 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2018-03-13 09:24:57,049 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None} 2018-03-13 09:24:57,052 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None} 2018-03-13 09:24:57,055 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None} 2018-03-13 09:24:57,059 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2018-03-13 09:24:57,062 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2018-03-13 09:24:57,065 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None} 2018-03-13 09:24:57,067 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2018-03-13 09:24:57,072 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'} 2018-03-13 09:24:57,082 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if 2018-03-13 09:24:57,083 - Group['hdfs'] {} 2018-03-13 09:24:57,084 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']} 2018-03-13 09:24:57,086 - FS Type: 2018-03-13 09:24:57,086 - Directory['/etc/hadoop'] {'mode': 0755} 2018-03-13 09:24:57,134 - File['/usr/hdp/2.5.3.0-37/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2018-03-13 09:24:57,136 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777} 2018-03-13 09:24:57,168 - Repository['HDP-2.5-repo-2'] {'append_to_file': False, 
'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-2', 'mirror_list': None} 2018-03-13 09:24:57,190 - File['/etc/yum.repos.d/ambari-hdp-2.repo'] {'content': '[HDP-2.5-repo-2]\nname=HDP-2.5-repo-2\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0\n\npath=/\nenabled=1\ngpgcheck=0'} 2018-03-13 09:24:57,192 - Writing File['/etc/yum.repos.d/ambari-hdp-2.repo'] because contents don't match 2018-03-13 09:24:57,193 - Repository['HDP-UTILS-1.1.0.21-repo-2'] {'append_to_file': True, 'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-2', 'mirror_list': None} 2018-03-13 09:24:57,204 - File['/etc/yum.repos.d/ambari-hdp-2.repo'] {'content': '[HDP-2.5-repo-2]\nname=HDP-2.5-repo-2\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.21-repo-2]\nname=HDP-UTILS-1.1.0.21-repo-2\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'} 2018-03-13 09:24:57,205 - Writing File['/etc/yum.repos.d/ambari-hdp-2.repo'] because contents don't match 2018-03-13 09:24:57,217 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2018-03-13 09:24:57,494 - Skipping installation of existing package unzip 2018-03-13 09:24:57,495 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2018-03-13 09:24:57,527 - Skipping installation of existing package curl 2018-03-13 09:24:57,527 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2018-03-13 09:24:57,566 - Skipping installation of existing package hdp-select 2018-03-13 09:24:57,574 - The repository with version 2.5.3.0-37 for this command has been marked as resolved. It will be used to report the version of the component which was installed 2018-03-13 09:24:57,899 - Command repositories: HDP-2.5-repo-2, HDP-UTILS-1.1.0.21-repo-2 2018-03-13 09:24:57,899 - Applicable repositories: HDP-2.5-repo-2, HDP-UTILS-1.1.0.21-repo-2 2018-03-13 09:24:57,903 - Looking for matching packages in the following repositories: HDP-2.5-repo-2, HDP-UTILS-1.1.0.21-repo-2 2018-03-13 09:25:00,329 - Adding fallback repositories: HDP-2.5-repo-1, HDP-UTILS-1.1.0.21-repo-1 2018-03-13 09:25:03,636 - Package['zookeeper_2_5_3_0_37'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2018-03-13 09:25:03,783 - Skipping installation of existing package zookeeper_2_5_3_0_37 2018-03-13 09:25:03,787 - No package found for zookeeper_${stack_version}-server(zookeeper_(\d|_)+-server$) 2018-03-13 09:25:03,793 - The repository with version 2.5.3.0-37 for this command has been marked as resolved. It will be used to report the version of the component which was installed Command failed after 1 tries
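Out of curiosity, I replayed the package-name matching from the agent log in plain Python. The regexp the agent reports, zookeeper_(\d|_)+-server$, only matches a versioned '-server' package, while the repository list above only contains 'zookeeper', the unversioned 'zookeeper-server' and 'zookeeper_2_5_3_0_37', which seems to be why the lookup fails:

# Reproduce the agent's package-name match against a subset of the
# packages the repos actually offer (taken from the error above).
import re

pattern = re.compile(r"zookeeper_(\d|_)+-server$")
available = ["zookeeper", "zookeeper-server", "zookeeper_2_5_3_0_37"]

matches = [p for p in available if pattern.search(p)]
print(matches)  # [] -- no versioned zookeeper_*-server package is available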
12-08-2017
10:01 PM
@vpoornalingam May I know why you want to install the clients on Node1? It's a master node, right? Installing them on nodes 4-8 is fine, so that clients can access the services from there, but why specifically on the master node? Could you provide a reason for it if possible? Thanks.
11-19-2017
01:04 AM
I've got the same issue. I'm working through the Hortonworks partner HDP administration lab. I'm trying to add node2 and node3 using "Add Hosts" under the Hosts tab in Ambari, and it fails at the "Confirm Hosts" step. Can someone help? Here is the link to my question: https://community.hortonworks.com/questions/148076/failing-to-add-hosts-using-ambari-ui.html