Member since: 09-23-2016
Posts: 18
Kudos Received: 3
Solutions: 2
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 8843 | 03-21-2018 09:07 AM |
 | 3879 | 01-12-2017 02:55 AM |
03-21-2018
09:07 AM
1 Kudo
OK, I resolved it by myself, although I never found the real root cause. I just ran the actions below:
1. Clean the yum cache: yum clean all
2. Update the yum repos: yum update repo
3. Remove the packages that threw the error. In my case I needed to uninstall the HBase client package without its dependencies.
4. Click the Retry button.
Just FYI, good luck.
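For reference, a rough sketch of those steps as shell commands. The package name hbase_2_6_4_0_91 is only what my log showed, so check with rpm -qa first, and yum makecache is just one way to refresh the repo metadata:
yum clean all
yum makecache                      # refresh local repo metadata (assumed equivalent of step 2)
rpm -qa | grep hbase               # find the partially installed HBase client package
rpm -e --nodeps hbase_2_6_4_0_91   # remove it without removing its dependencies (package name is an example)
# then click Retry for the failed HBase Client task in the Ambari wizard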
03-20-2018
08:43 AM
Some strange failure messages showed up when I used the latest Ambari to install HDP services via local repositories.
1. First, I used Python's simple HTTP server as the web server for the repos; the directory tree looks like below (a sketch of the server command follows the tree):
HDP-2.6
├── ambari
│   └── centos7
├── HDP
│   └── centos7
├── HDP-GPL
│   └── centos7
└── HDP-UTILS
    ├── openblas
    ├── repodata
    ├── RPM-GPG-KEY
    └── snappy
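The server was started with Python's built-in HTTP module, roughly like this (a sketch; the exact directory and port 80 are assumptions based on the baseurl values below, and python here is the Python 2 shipped with CentOS 7):
cd /path/to/HDP-2.6            # must be the directory that contains HDP, HDP-GPL and HDP-UTILS
python -m SimpleHTTPServer 80  # serve the repo tree over HTTP on port 80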
2. Ambari then auto-generated the local repo file like this (a quick reachability check sketch follows the file):
[HDP-2.6-repo-1]
name=HDP-2.6-repo-1
baseurl=http://centos7-001/HDP/centos7
path=/
enabled=1
gpgcheck=0
[HDP-2.6-GPL-repo-1]
name=HDP-2.6-GPL-repo-1
baseurl=http://centos7-001/HDP-GPL/centos7
path=/
enabled=1
gpgcheck=0
[HDP-UTILS-1.1.0.22-repo-1]
name=HDP-UTILS-1.1.0.22-repo-1
baseurl=http://centos7-001/HDP-UTILS
path=/
enabled=1
gpgcheck=0
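As a sanity check (not something the wizard prints), each baseurl above should serve yum metadata; a quick curl against repodata/repomd.xml from an agent node should return 200 for every repo:
curl -s -o /dev/null -w '%{http_code}\n' http://centos7-001/HDP/centos7/repodata/repomd.xml
curl -s -o /dev/null -w '%{http_code}\n' http://centos7-001/HDP-GPL/centos7/repodata/repomd.xml
curl -s -o /dev/null -w '%{http_code}\n' http://centos7-001/HDP-UTILS/repodata/repomd.xml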
3. Everything then seemed to succeed until the Install, Start and Test step, where it threw an error saying the HBase Client install failed. The error message looks like this:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_client.py", line 67, in <module>
HbaseClient().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_client.py", line 35, in install
self.install_packages(env)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 811, in install_packages
name = self.format_package_name(package['name'])
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 546, in format_package_name
raise Fail("Cannot match package for regexp name {0}. Available packages: {1}".format(name, self.available_packages_in_repos))
resource_management.core.exceptions.Fail: Cannot match package for regexp name hbase_${stack_version}. Available packages: ['accumulo', 'accumulo-conf-standalone', 'accumulo-source', 'accumulo_2_6_4_0_91', 'accumulo_2_6_4_0_91-conf-standalone', 'accumulo_2_6_4_0_91-source', 'atlas-metadata', 'atlas-metadata-falcon-plugin', 'atlas-metadata-hive-plugin', 'atlas-metadata-sqoop-plugin', 'atlas-metadata-storm-plugin', 'atlas-metadata_2_6_4_0_91', 'atlas-metadata_2_6_4_0_91-falcon-plugin', 'atlas-metadata_2_6_4_0_91-hive-plugin', 'atlas-metadata_2_6_4_0_91-sqoop-plugin', 'atlas-metadata_2_6_4_0_91-storm-plugin', 'bigtop-tomcat', 'datafu', 'datafu_2_6_4_0_91', 'druid', 'druid_2_6_4_0_91', 'falcon', 'falcon-doc', 'falcon_2_6_4_0_91', 'falcon_2_6_4_0_91-doc', 'flume', 'flume-agent', 'flume_2_6_4_0_91', 'flume_2_6_4_0_91-agent', 'hadoop', 'hadoop-client', 'hadoop-conf-pseudo', 'hadoop-doc', 'hadoop-hdfs', 'hadoop-hdfs-datanode', 'hadoop-hdfs-fuse', 'hadoop-hdfs-journalnode', 'hadoop-hdfs-namenode', 'hadoop-hdfs-secondarynamenode', 'hadoop-hdfs-zkfc', 'hadoop-httpfs', 'hadoop-httpfs-server', 'hadoop-libhdfs', 'hadoop-mapreduce', 'hadoop-mapreduce-historyserver', 'hadoop-source', 'hadoop-yarn', 'hadoop-yarn-nodemanager', 'hadoop-yarn-proxyserver', 'hadoop-yarn-resourcemanager', 'hadoop-yarn-timelineserver', 'hadoop_2_6_4_0_91-conf-pseudo', 'hadoop_2_6_4_0_91-doc', 'hadoop_2_6_4_0_91-hdfs-datanode', 'hadoop_2_6_4_0_91-hdfs-fuse', 'hadoop_2_6_4_0_91-hdfs-journalnode', 'hadoop_2_6_4_0_91-hdfs-namenode', 'hadoop_2_6_4_0_91-hdfs-secondarynamenode', 'hadoop_2_6_4_0_91-hdfs-zkfc', 'hadoop_2_6_4_0_91-httpfs', 'hadoop_2_6_4_0_91-httpfs-server', 'hadoop_2_6_4_0_91-mapreduce-historyserver', 'hadoop_2_6_4_0_91-source', 'hadoop_2_6_4_0_91-yarn-nodemanager', 'hadoop_2_6_4_0_91-yarn-proxyserver', 'hadoop_2_6_4_0_91-yarn-resourcemanager', 'hadoop_2_6_4_0_91-yarn-timelineserver', 'hbase', 'hbase-doc', 'hbase-master', 'hbase-regionserver', 'hbase-rest', 'hbase-thrift', 'hbase-thrift2', 'hbase_2_6_4_0_91-doc', 'hbase_2_6_4_0_91-master', 'hbase_2_6_4_0_91-regionserver', 'hbase_2_6_4_0_91-rest', 'hbase_2_6_4_0_91-thrift', 'hbase_2_6_4_0_91-thrift2', 'hive', 'hive-hcatalog', 'hive-hcatalog-server', 'hive-jdbc', 'hive-metastore', 'hive-server', 'hive-server2', 'hive-webhcat', 'hive-webhcat-server', 'hive2', 'hive2-jdbc', 'hive2_2_6_4_0_91', 'hive2_2_6_4_0_91-jdbc', 'hive_2_6_4_0_91', 'hive_2_6_4_0_91-hcatalog', 'hive_2_6_4_0_91-hcatalog-server', 'hive_2_6_4_0_91-jdbc', 'hive_2_6_4_0_91-metastore', 'hive_2_6_4_0_91-server', 'hive_2_6_4_0_91-server2', 'hive_2_6_4_0_91-webhcat', 'hive_2_6_4_0_91-webhcat-server', 'hue', 'hue-beeswax', 'hue-common', 'hue-hcatalog', 'hue-oozie', 'hue-pig', 'hue-server', 'kafka', 'kafka_2_6_4_0_91', 'knox', 'knox_2_6_4_0_91', 'livy', 'livy2', 'livy2_2_6_4_0_91', 'livy_2_6_4_0_91', 'mahout', 'mahout-doc', 'mahout_2_6_4_0_91', 'mahout_2_6_4_0_91-doc', 'oozie', 'oozie-client', 'oozie-common', 'oozie-sharelib', 'oozie-sharelib-distcp', 'oozie-sharelib-hcatalog', 'oozie-sharelib-hive', 'oozie-sharelib-hive2', 'oozie-sharelib-mapreduce-streaming', 'oozie-sharelib-pig', 'oozie-sharelib-spark', 'oozie-sharelib-sqoop', 'oozie-webapp', 'oozie_2_6_4_0_91', 'oozie_2_6_4_0_91-client', 'oozie_2_6_4_0_91-common', 'oozie_2_6_4_0_91-sharelib', 'oozie_2_6_4_0_91-sharelib-distcp', 'oozie_2_6_4_0_91-sharelib-hcatalog', 'oozie_2_6_4_0_91-sharelib-hive', 'oozie_2_6_4_0_91-sharelib-hive2', 'oozie_2_6_4_0_91-sharelib-mapreduce-streaming', 'oozie_2_6_4_0_91-sharelib-pig', 'oozie_2_6_4_0_91-sharelib-spark', 
'oozie_2_6_4_0_91-sharelib-sqoop', 'oozie_2_6_4_0_91-webapp', 'phoenix', 'pig', 'pig_2_6_4_0_91', 'ranger-admin', 'ranger-atlas-plugin', 'ranger-hbase-plugin', 'ranger-hdfs-plugin', 'ranger-hive-plugin', 'ranger-kafka-plugin', 'ranger-kms', 'ranger-knox-plugin', 'ranger-solr-plugin', 'ranger-storm-plugin', 'ranger-tagsync', 'ranger-usersync', 'ranger-yarn-plugin', 'ranger_2_6_4_0_91-admin', 'ranger_2_6_4_0_91-atlas-plugin', 'ranger_2_6_4_0_91-hive-plugin', 'ranger_2_6_4_0_91-kafka-plugin', 'ranger_2_6_4_0_91-kms', 'ranger_2_6_4_0_91-knox-plugin', 'ranger_2_6_4_0_91-solr-plugin', 'ranger_2_6_4_0_91-storm-plugin', 'ranger_2_6_4_0_91-tagsync', 'ranger_2_6_4_0_91-usersync', 'shc', 'shc_2_6_4_0_91', 'slider', 'slider_2_6_4_0_91', 'spark', 'spark-master', 'spark-python', 'spark-worker', 'spark-yarn-shuffle', 'spark2', 'spark2-master', 'spark2-python', 'spark2-worker', 'spark2-yarn-shuffle', 'spark2_2_6_4_0_91', 'spark2_2_6_4_0_91-master', 'spark2_2_6_4_0_91-python', 'spark2_2_6_4_0_91-worker', 'spark_2_6_4_0_91', 'spark_2_6_4_0_91-master', 'spark_2_6_4_0_91-python', 'spark_2_6_4_0_91-worker', 'spark_llap', 'spark_llap_2_6_4_0_91', 'sqoop', 'sqoop-metastore', 'sqoop_2_6_4_0_91', 'sqoop_2_6_4_0_91-metastore', 'storm', 'storm-slider-client', 'storm_2_6_4_0_91', 'storm_2_6_4_0_91-slider-client', 'superset', 'superset_2_6_4_0_91', 'tez', 'tez_2_6_4_0_91', 'tez_hive2', 'tez_hive2_2_6_4_0_91', 'zeppelin', 'zeppelin_2_6_4_0_91', 'zookeeper', 'zookeeper-server', 'zookeeper_2_6_4_0_91-server', 'hadooplzo', 'hadooplzo-native', 'hadooplzo_2_6_4_0_91', 'hadooplzo_2_6_4_0_91-native', 'openblas', 'openblas-Rblas', 'openblas-devel', 'openblas-openmp', 'openblas-openmp64', 'openblas-openmp64_', 'openblas-serial64', 'openblas-serial64_', 'openblas-static', 'openblas-threads', 'openblas-threads64', 'openblas-threads64_', 'snappy', 'snappy-devel', 'openblas', 'openblas-Rblas', 'openblas-devel', 'openblas-openmp', 'openblas-openmp64', 'openblas-openmp64_', 'openblas-serial64', 'openblas-serial64_', 'openblas-static', 'openblas-threads', 'openblas-threads64', 'openblas-threads64_', 'snappy', 'snappy-devel', 'accumulo', 'accumulo-conf-standalone', 'accumulo-source', 'accumulo_2_6_4_0_91', 'accumulo_2_6_4_0_91-conf-standalone', 'accumulo_2_6_4_0_91-source', 'atlas-metadata', 'atlas-metadata-falcon-plugin', 'atlas-metadata-hive-plugin', 'atlas-metadata-sqoop-plugin', 'atlas-metadata-storm-plugin', 'atlas-metadata_2_6_4_0_91', 'atlas-metadata_2_6_4_0_91-falcon-plugin', 'atlas-metadata_2_6_4_0_91-hive-plugin', 'atlas-metadata_2_6_4_0_91-sqoop-plugin', 'atlas-metadata_2_6_4_0_91-storm-plugin', 'bigtop-tomcat', 'datafu', 'datafu_2_6_4_0_91', 'druid', 'druid_2_6_4_0_91', 'falcon', 'falcon-doc', 'falcon_2_6_4_0_91', 'falcon_2_6_4_0_91-doc', 'flume', 'flume-agent', 'flume_2_6_4_0_91', 'flume_2_6_4_0_91-agent', 'hadoop', 'hadoop-client', 'hadoop-conf-pseudo', 'hadoop-doc', 'hadoop-hdfs', 'hadoop-hdfs-datanode', 'hadoop-hdfs-fuse', 'hadoop-hdfs-journalnode', 'hadoop-hdfs-namenode', 'hadoop-hdfs-secondarynamenode', 'hadoop-hdfs-zkfc', 'hadoop-httpfs', 'hadoop-httpfs-server', 'hadoop-libhdfs', 'hadoop-mapreduce', 'hadoop-mapreduce-historyserver', 'hadoop-source', 'hadoop-yarn', 'hadoop-yarn-nodemanager', 'hadoop-yarn-proxyserver', 'hadoop-yarn-resourcemanager', 'hadoop-yarn-timelineserver', 'hadoop_2_6_4_0_91-conf-pseudo', 'hadoop_2_6_4_0_91-doc', 'hadoop_2_6_4_0_91-hdfs-datanode', 'hadoop_2_6_4_0_91-hdfs-fuse', 'hadoop_2_6_4_0_91-hdfs-journalnode', 'hadoop_2_6_4_0_91-hdfs-namenode', 'hadoop_2_6_4_0_91-hdfs-secondarynamenode', 
'hadoop_2_6_4_0_91-hdfs-zkfc', 'hadoop_2_6_4_0_91-httpfs', 'hadoop_2_6_4_0_91-httpfs-server', 'hadoop_2_6_4_0_91-mapreduce-historyserver', 'hadoop_2_6_4_0_91-source', 'hadoop_2_6_4_0_91-yarn-nodemanager', 'hadoop_2_6_4_0_91-yarn-proxyserver', 'hadoop_2_6_4_0_91-yarn-resourcemanager', 'hadoop_2_6_4_0_91-yarn-timelineserver', 'hbase', 'hbase-doc', 'hbase-master', 'hbase-regionserver', 'hbase-rest', 'hbase-thrift', 'hbase-thrift2', 'hbase_2_6_4_0_91-doc', 'hbase_2_6_4_0_91-master', 'hbase_2_6_4_0_91-regionserver', 'hbase_2_6_4_0_91-rest', 'hbase_2_6_4_0_91-thrift', 'hbase_2_6_4_0_91-thrift2', 'hive', 'hive-hcatalog', 'hive-hcatalog-server', 'hive-jdbc', 'hive-metastore', 'hive-server', 'hive-server2', 'hive-webhcat', 'hive-webhcat-server', 'hive2', 'hive2-jdbc', 'hive2_2_6_4_0_91', 'hive2_2_6_4_0_91-jdbc', 'hive_2_6_4_0_91', 'hive_2_6_4_0_91-hcatalog', 'hive_2_6_4_0_91-hcatalog-server', 'hive_2_6_4_0_91-jdbc', 'hive_2_6_4_0_91-metastore', 'hive_2_6_4_0_91-server', 'hive_2_6_4_0_91-server2', 'hive_2_6_4_0_91-webhcat', 'hive_2_6_4_0_91-webhcat-server', 'hue', 'hue-beeswax', 'hue-common', 'hue-hcatalog', 'hue-oozie', 'hue-pig', 'hue-server', 'kafka', 'kafka_2_6_4_0_91', 'knox', 'knox_2_6_4_0_91', 'livy', 'livy2', 'livy2_2_6_4_0_91', 'livy_2_6_4_0_91', 'mahout', 'mahout-doc', 'mahout_2_6_4_0_91', 'mahout_2_6_4_0_91-doc', 'oozie', 'oozie-client', 'oozie-common', 'oozie-sharelib', 'oozie-sharelib-distcp', 'oozie-sharelib-hcatalog', 'oozie-sharelib-hive', 'oozie-sharelib-hive2', 'oozie-sharelib-mapreduce-streaming', 'oozie-sharelib-pig', 'oozie-sharelib-spark', 'oozie-sharelib-sqoop', 'oozie-webapp', 'oozie_2_6_4_0_91', 'oozie_2_6_4_0_91-client', 'oozie_2_6_4_0_91-common', 'oozie_2_6_4_0_91-sharelib', 'oozie_2_6_4_0_91-sharelib-distcp', 'oozie_2_6_4_0_91-sharelib-hcatalog', 'oozie_2_6_4_0_91-sharelib-hive', 'oozie_2_6_4_0_91-sharelib-hive2', 'oozie_2_6_4_0_91-sharelib-mapreduce-streaming', 'oozie_2_6_4_0_91-sharelib-pig', 'oozie_2_6_4_0_91-sharelib-spark', 'oozie_2_6_4_0_91-sharelib-sqoop', 'oozie_2_6_4_0_91-webapp', 'phoenix', 'pig', 'pig_2_6_4_0_91', 'ranger-admin', 'ranger-atlas-plugin', 'ranger-hbase-plugin', 'ranger-hdfs-plugin', 'ranger-hive-plugin', 'ranger-kafka-plugin', 'ranger-kms', 'ranger-knox-plugin', 'ranger-solr-plugin', 'ranger-storm-plugin', 'ranger-tagsync', 'ranger-usersync', 'ranger-yarn-plugin', 'ranger_2_6_4_0_91-admin', 'ranger_2_6_4_0_91-atlas-plugin', 'ranger_2_6_4_0_91-hive-plugin', 'ranger_2_6_4_0_91-kafka-plugin', 'ranger_2_6_4_0_91-kms', 'ranger_2_6_4_0_91-knox-plugin', 'ranger_2_6_4_0_91-solr-plugin', 'ranger_2_6_4_0_91-storm-plugin', 'ranger_2_6_4_0_91-tagsync', 'ranger_2_6_4_0_91-usersync', 'shc', 'shc_2_6_4_0_91', 'slider', 'slider_2_6_4_0_91', 'spark', 'spark-master', 'spark-python', 'spark-worker', 'spark-yarn-shuffle', 'spark2', 'spark2-master', 'spark2-python', 'spark2-worker', 'spark2-yarn-shuffle', 'spark2_2_6_4_0_91', 'spark2_2_6_4_0_91-master', 'spark2_2_6_4_0_91-python', 'spark2_2_6_4_0_91-worker', 'spark_2_6_4_0_91', 'spark_2_6_4_0_91-master', 'spark_2_6_4_0_91-python', 'spark_2_6_4_0_91-worker', 'spark_llap', 'spark_llap_2_6_4_0_91', 'sqoop', 'sqoop-metastore', 'sqoop_2_6_4_0_91', 'sqoop_2_6_4_0_91-metastore', 'storm', 'storm-slider-client', 'storm_2_6_4_0_91', 'storm_2_6_4_0_91-slider-client', 'superset', 'superset_2_6_4_0_91', 'tez', 'tez_2_6_4_0_91', 'tez_hive2', 'tez_hive2_2_6_4_0_91', 'zeppelin', 'zeppelin_2_6_4_0_91', 'zookeeper', 'zookeeper-server', 'zookeeper_2_6_4_0_91-server', 'hadooplzo', 'hadooplzo-native', 'hadooplzo_2_6_4_0_91', 
'hadooplzo_2_6_4_0_91-native']
2018-03-20 16:06:55,286 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2018-03-20 16:06:55,305 - Using hadoop conf dir: /usr/hdp/2.6.4.0-91/hadoop/conf
2018-03-20 16:06:55,308 - Group['hdfs'] {}
2018-03-20 16:06:55,312 - Group['hadoop'] {}
2018-03-20 16:06:55,313 - Group['users'] {}
2018-03-20 16:06:55,314 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-20 16:06:55,317 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-20 16:06:55,319 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-20 16:06:55,321 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-20 16:06:55,323 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-03-20 16:06:55,326 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-03-20 16:06:55,328 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2018-03-20 16:06:55,330 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-20 16:06:55,332 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-20 16:06:55,334 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-20 16:06:55,337 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-20 16:06:55,339 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-20 16:06:55,340 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-03-20 16:06:55,344 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-03-20 16:06:55,358 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-03-20 16:06:55,359 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2018-03-20 16:06:55,362 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-03-20 16:06:55,366 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-03-20 16:06:55,368 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2018-03-20 16:06:55,389 - call returned (0, '1009')
2018-03-20 16:06:55,390 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1009'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2018-03-20 16:06:55,403 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1009'] due to not_if
2018-03-20 16:06:55,405 - Group['hdfs'] {}
2018-03-20 16:06:55,406 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}
2018-03-20 16:06:55,407 - FS Type:
2018-03-20 16:06:55,408 - Directory['/etc/hadoop'] {'mode': 0755}
2018-03-20 16:06:55,452 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-03-20 16:06:55,455 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2018-03-20 16:06:55,490 - Repository['HDP-2.6-repo-1'] {'append_to_file': False, 'base_url': 'http://centos7-001/HDP/centos7', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2018-03-20 16:06:55,508 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://centos7-001/HDP/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-03-20 16:06:55,510 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2018-03-20 16:06:55,512 - Repository['HDP-2.6-GPL-repo-1'] {'append_to_file': True, 'base_url': 'http://centos7-001/HDP-GPL/centos7', 'action': ['create'], 'components': [u'HDP-GPL', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2018-03-20 16:06:55,520 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://centos7-001/HDP/centos7\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-2.6-GPL-repo-1]\nname=HDP-2.6-GPL-repo-1\nbaseurl=http://centos7-001/HDP-GPL/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-03-20 16:06:55,520 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2018-03-20 16:06:55,534 - Repository['HDP-UTILS-1.1.0.22-repo-1'] {'append_to_file': True, 'base_url': 'http://centos7-001/HDP-UTILS', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2018-03-20 16:06:55,542 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://centos7-001/HDP/centos7\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-2.6-GPL-repo-1]\nname=HDP-2.6-GPL-repo-1\nbaseurl=http://centos7-001/HDP-GPL/centos7\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.22-repo-1]\nname=HDP-UTILS-1.1.0.22-repo-1\nbaseurl=http://centos7-001/HDP-UTILS\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-03-20 16:06:55,542 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2018-03-20 16:06:55,556 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-03-20 16:06:55,756 - Skipping installation of existing package unzip
2018-03-20 16:06:55,757 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-03-20 16:06:55,778 - Skipping installation of existing package curl
2018-03-20 16:06:55,779 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-03-20 16:06:55,799 - Skipping installation of existing package hdp-select
2018-03-20 16:06:55,812 - The repository with version 2.6.4.0-91 for this command has been marked as resolved. It will be used to report the version of the component which was installed
2018-03-20 16:06:56,419 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2018-03-20 16:06:56,452 - Using hadoop conf dir: /usr/hdp/2.6.4.0-91/hadoop/conf
2018-03-20 16:06:56,464 - checked_call['hostid'] {}
2018-03-20 16:06:56,472 - checked_call returned (0, '10acab0a')
2018-03-20 16:06:56,493 - Command repositories: HDP-2.6-repo-1, HDP-2.6-GPL-repo-1, HDP-UTILS-1.1.0.22-repo-1
2018-03-20 16:06:56,494 - Applicable repositories: HDP-2.6-repo-1, HDP-2.6-GPL-repo-1, HDP-UTILS-1.1.0.22-repo-1
2018-03-20 16:06:56,498 - Looking for matching packages in the following repositories: HDP-2.6-repo-1, HDP-2.6-GPL-repo-1, HDP-UTILS-1.1.0.22-repo-1
2018-03-20 16:07:01,619 - Adding fallback repositories: HDP-UTILS-1.1.0.22-repo-51, HDP-2.6-repo-51, HDP-2.6-GPL-repo-51
2018-03-20 16:07:05,698 - No package found for hbase_${stack_version}(hbase_(\d|_)+$)
2018-03-20 16:07:05,708 - The repository with version 2.6.4.0-91 for this command has been marked as resolved. It will be used to report the version of the component which was installed
4. But I checked every cluster node and the master, and the HBase components had all been installed successfully. Like this:
[root@centos7-001 Downloads]# yum list | grep hbase\*
hbase_2_6_4_0_91.noarch 1.1.2.2.6.4.0-91 @HDP-2.6-repo-1
ranger_2_6_4_0_91-hbase-plugin.x86_64 0.7.0.2.6.4.0-91 @HDP-2.6-repo-1
hbase.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase-doc.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase-master.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase-regionserver.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase-rest.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase-thrift.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase-thrift2.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase_2_6_4_0_91-doc.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase_2_6_4_0_91-master.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase_2_6_4_0_91-regionserver.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase_2_6_4_0_91-rest.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase_2_6_4_0_91-thrift.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase_2_6_4_0_91-thrift2.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
ranger-hbase-plugin.noarch 0.7.0.2.6.4.0-91 HDP-2.6-repo-1
Then I tried several approaches found through internet searches, such as updating the yum repos, installing the components manually, and reinstalling Ambari, but the end result was the same: it failed. The same setup I had built on HDP 2.5 in the past never hit these errors. So what can I do now? Can someone give me a suggestion? Thanks.
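In case it helps with diagnosis, a minimal check one could run on the failing node (package and repo names are taken from the log above) to see whether yum itself can resolve the base package that Ambari's regexp hbase_(\d|_)+$ expects:
yum clean all
yum repolist enabled                # HDP-2.6-repo-1, HDP-2.6-GPL-repo-1 and HDP-UTILS-1.1.0.22-repo-1 should be listed
yum list hbase_2_6_4_0_91           # the base package the regexp should match
yum -d 10 install hbase_2_6_4_0_91  # verbose attempt, to surface repo or metadata problems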
06-26-2017
06:49 AM
I think you need to check the folder permissions. There are two places to check: `/var/log/hbase` and `/hadoop/hbase/local/jars/tmp/`. After I chowned those folders to the hbase user, the region server started successfully. Try it, and good luck.
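Roughly like this (a sketch; the hadoop group is an assumption, adjust it to whatever group your hbase user actually belongs to):
chown -R hbase:hadoop /var/log/hbase /hadoop/hbase/local/jars/tmp/
ls -ld /var/log/hbase /hadoop/hbase/local/jars/tmp/   # verify the new ownership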
03-09-2017
07:46 AM
It seems this doesn't really resolve the problem; it still shows up the next time. What is the final fix?
01-12-2017
02:55 AM
@rguruvannagari Thanks for pointing that out. I checked my YARN config and found that cgroups were not enabled, so I turned them on. Now the CLI can log in and execute MapReduce jobs. Here are the resolution steps for other people's information:
1. umount /sys/fs/cgroup/cpu,cpuacct
2. mkdir /cgroup/cpu
3. Disable the CPU Scheduling & CPU Isolation property.
4. Ensure the following properties are set:
yarn.nodemanager.container-executor.class=org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor
yarn.nodemanager.linux-container-executor.resources-handler.class=org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler
yarn.nodemanager.linux-container-executor.cgroups.hierarchy=/yarn
yarn.nodemanager.linux-container-executor.cgroups.mount=true
yarn.nodemanager.linux-container-executor.cgroups.mount-path=/cgroup
yarn.nodemanager.linux-container-executor.group=hadoop
5. Add the yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user property and set it to the desired user.
6. Configure the LinuxContainerExecutor to run jobs as the submitting user by adding the property yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users and setting it to false.
7. Set min.user.id to a lower value in /etc/hadoop/conf/container-executor.cfg on all NodeManagers (a sketch of this step follows the references).
8. Restart the YARN service.
References:
https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/NodeManagerCgroups.html
https://www.ibm.com/support/knowledgecenter/SSPT3X_4.2.0/com.ibm.swg.im.infosphere.biginsights.admin.doc/doc/admin_yarn_cgroups.html
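A minimal sketch of step 7, run on every NodeManager host (the value 500 is only an example; pick whatever lower bound covers the uid of the submitting user, and the usual key=value format of container-executor.cfg is assumed):
grep min.user.id /etc/hadoop/conf/container-executor.cfg
sed -i 's/^min.user.id=.*/min.user.id=500/' /etc/hadoop/conf/container-executor.cfg
# then restart YARN (for example through Ambari) so the NodeManagers pick up the change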
01-11-2017
02:55 PM
Thank you for the reply. I tried logging in with the hive user, and the exception still shows up. I also checked the groups and they are all right. I have no idea now. Did you notice the error message that says "main : run as user is nobody"? It is very strange. So it seems ...
01-11-2017
02:11 PM
After I installed HDP 2.5 on my server (the environment is CentOS 7), all services were running well and the dashboard showed green status for them. But when I try to log in to the Hive CLI, it throws an exception. I am also sure I used the hdfs user to execute the hive command, like this: su hdfs --> hive --service cli
Diagnostics: Application application_1484105570599_0005 initialization failed (exitCode=255) with output: main : command provided 0
main : run as user is nobody
main : requested yarn user is hdfs
Requested user nobody is not whitelisted and has id 99,which is below the minimum allowed 1000
Failing this attempt. Failing the application.
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:556)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: org.apache.tez.dag.api.SessionNotRunning: TezSession has already shutdown. Application application_1484105570599_0005 failed 2 times due to AM Container for appattempt_1484105570599_0005_000002 exited with exitCode: -1000
For more detailed output, check the application tracking page: http://master01.office.sao.so:8088/cluster/app/application_1484105570599_0005 Then click on links to logs of each attempt.
yarn-yarn-nodemanager-master01log.tar.gz
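For reference, a quick check of the ids involved and the configured minimum (a sketch; /etc/hadoop/conf/container-executor.cfg is the default HDP path and may differ in your layout):
id nobody                                                  # uid 99 reported in the diagnostics above
id hdfs                                                    # the user actually submitting the job
grep min.user.id /etc/hadoop/conf/container-executor.cfg   # the "minimum allowed 1000" normally comes from this setting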
Labels:
- Apache Hive
01-06-2017
10:04 AM
I have also met this problem, and tried increasing the capacity size, but it still doesn't work. It throws an exception: org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException: ExitCodeException exitCode=255. You can find the details in the attached yarn-master-log.tar.gz. Is there another way to solve this? Thanks.
09-23-2016
09:01 AM
I have also met the second problem. How can I update the connection timeout setting? This is a test environment without enough resources, so I don't mind that part.