Member since: 09-23-2016
Posts: 18
Kudos Received: 3
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 7782 | 03-21-2018 09:07 AM
 | 2494 | 01-12-2017 02:55 AM
08-17-2018
04:03 AM
Hi HCC guys: Is there any update to the HBase component in Sandbox 2.6.5? Previously I used HDP 2.6.4 and could create tables in HBase while running as the root account (without changing any settings), but the same action fails in the latest Sandbox. So I tried to grant the permissions to the root user and got an exception, see below:
hbase(main):001:0> grant 'root','RWCA'
ERROR: org.apache.hadoop.hbase.coprocessor.CoprocessorException: java.net.ConnectException: Connection refused (Connection refused)
at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.grant(RangerAuthorizationCoprocessor.java:1236)
at org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.grant(AccessControlProtos.java:9933)
at org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10097)
at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7857)
I know that only the hbase user can create tables, but I also cannot grant permissions to other users. What can I do?
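A sketch of two things worth checking, inferred only from the stack trace above (assumptions, not a confirmed fix): run the grant as the hbase superuser, and verify that Ranger Admin is reachable, since the ConnectException is raised inside RangerAuthorizationCoprocessor.grant.

# run the shell as the hbase superuser, then retry the grant
sudo -u hbase hbase shell
# hbase> grant 'root', 'RWCA'
# check that Ranger Admin answers on its default port (6080 assumed)
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:6080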
03-21-2018
09:07 AM
1 Kudo
OK, I resolved it myself, though I never found the real root cause. I just ran the actions below (a shell sketch follows):
1. Clean all yum caches: yum clean all
2. Rebuild the yum metadata: yum makecache
3. Remove the packages that were throwing the error. For my problem, that meant uninstalling the HBase client package without its dependencies.
4. Click the Retry button.
Just FYI; hope it helps.
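A minimal shell sketch of the steps above. The HBase client package name is an assumption taken from the install log in my earlier post; check your own with rpm -qa first.

yum clean all
yum makecache
rpm -qa | grep hbase                 # find the exact installed package name
rpm -e --nodeps hbase_2_6_4_0_91     # hypothetical name; removes it without dependencies
# then click Retry in the Ambari UI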
03-20-2018
08:43 AM
Some strange failure messages appeared when I used the latest Ambari to install HDP services via the local-repository approach.
1. First, I used Python's simple HTTP server as the web server; the directory tree looks like this:
HDP-2.6
├── ambari
│ └── centos7
├── HDP
│ └── centos7
├── HDP-GPL
│ └── centos7
└── HDP-UTILS
├── openblas
├── repodata
├── RPM-GPG-KEY
└── snappy
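A minimal sketch of step 1 (Python 2's built-in server; port 80 is an assumption, since the baseurl values below carry no port; serve from inside HDP-2.6 so that /HDP/centos7 resolves):

cd HDP-2.6
python -m SimpleHTTPServer 80    # needs root to bind port 80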
2. Ambari auto-generates a local repo file like this:
[HDP-2.6-repo-1]
name=HDP-2.6-repo-1
baseurl=http://centos7-001/HDP/centos7
path=/
enabled=1
gpgcheck=0
[HDP-2.6-GPL-repo-1]
name=HDP-2.6-GPL-repo-1
baseurl=http://centos7-001/HDP-GPL/centos7
path=/
enabled=1
gpgcheck=0
[HDP-UTILS-1.1.0.22-repo-1]
name=HDP-UTILS-1.1.0.22-repo-1
baseurl=http://centos7-001/HDP-UTILS
path=/
enabled=1
gpgcheck=0
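A quick sanity check of the generated repo file (a sketch, not part of the original steps):

yum clean all
yum repolist enabled | grep -i hdp    # all three HDP repos should be listed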
3. Everything then seemed to succeed until the Install, Start and Test step, which threw an error saying the HBase Client install failed. The error message looks like this:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_client.py", line 67, in <module>
HbaseClient().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_client.py", line 35, in install
self.install_packages(env)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 811, in install_packages
name = self.format_package_name(package['name'])
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 546, in format_package_name
raise Fail("Cannot match package for regexp name {0}. Available packages: {1}".format(name, self.available_packages_in_repos))
resource_management.core.exceptions.Fail: Cannot match package for regexp name hbase_${stack_version}. Available packages: ['accumulo', 'accumulo-conf-standalone', 'accumulo-source', 'accumulo_2_6_4_0_91', 'accumulo_2_6_4_0_91-conf-standalone', 'accumulo_2_6_4_0_91-source', 'atlas-metadata', 'atlas-metadata-falcon-plugin', 'atlas-metadata-hive-plugin', 'atlas-metadata-sqoop-plugin', 'atlas-metadata-storm-plugin', 'atlas-metadata_2_6_4_0_91', 'atlas-metadata_2_6_4_0_91-falcon-plugin', 'atlas-metadata_2_6_4_0_91-hive-plugin', 'atlas-metadata_2_6_4_0_91-sqoop-plugin', 'atlas-metadata_2_6_4_0_91-storm-plugin', 'bigtop-tomcat', 'datafu', 'datafu_2_6_4_0_91', 'druid', 'druid_2_6_4_0_91', 'falcon', 'falcon-doc', 'falcon_2_6_4_0_91', 'falcon_2_6_4_0_91-doc', 'flume', 'flume-agent', 'flume_2_6_4_0_91', 'flume_2_6_4_0_91-agent', 'hadoop', 'hadoop-client', 'hadoop-conf-pseudo', 'hadoop-doc', 'hadoop-hdfs', 'hadoop-hdfs-datanode', 'hadoop-hdfs-fuse', 'hadoop-hdfs-journalnode', 'hadoop-hdfs-namenode', 'hadoop-hdfs-secondarynamenode', 'hadoop-hdfs-zkfc', 'hadoop-httpfs', 'hadoop-httpfs-server', 'hadoop-libhdfs', 'hadoop-mapreduce', 'hadoop-mapreduce-historyserver', 'hadoop-source', 'hadoop-yarn', 'hadoop-yarn-nodemanager', 'hadoop-yarn-proxyserver', 'hadoop-yarn-resourcemanager', 'hadoop-yarn-timelineserver', 'hadoop_2_6_4_0_91-conf-pseudo', 'hadoop_2_6_4_0_91-doc', 'hadoop_2_6_4_0_91-hdfs-datanode', 'hadoop_2_6_4_0_91-hdfs-fuse', 'hadoop_2_6_4_0_91-hdfs-journalnode', 'hadoop_2_6_4_0_91-hdfs-namenode', 'hadoop_2_6_4_0_91-hdfs-secondarynamenode', 'hadoop_2_6_4_0_91-hdfs-zkfc', 'hadoop_2_6_4_0_91-httpfs', 'hadoop_2_6_4_0_91-httpfs-server', 'hadoop_2_6_4_0_91-mapreduce-historyserver', 'hadoop_2_6_4_0_91-source', 'hadoop_2_6_4_0_91-yarn-nodemanager', 'hadoop_2_6_4_0_91-yarn-proxyserver', 'hadoop_2_6_4_0_91-yarn-resourcemanager', 'hadoop_2_6_4_0_91-yarn-timelineserver', 'hbase', 'hbase-doc', 'hbase-master', 'hbase-regionserver', 'hbase-rest', 'hbase-thrift', 'hbase-thrift2', 'hbase_2_6_4_0_91-doc', 'hbase_2_6_4_0_91-master', 'hbase_2_6_4_0_91-regionserver', 'hbase_2_6_4_0_91-rest', 'hbase_2_6_4_0_91-thrift', 'hbase_2_6_4_0_91-thrift2', 'hive', 'hive-hcatalog', 'hive-hcatalog-server', 'hive-jdbc', 'hive-metastore', 'hive-server', 'hive-server2', 'hive-webhcat', 'hive-webhcat-server', 'hive2', 'hive2-jdbc', 'hive2_2_6_4_0_91', 'hive2_2_6_4_0_91-jdbc', 'hive_2_6_4_0_91', 'hive_2_6_4_0_91-hcatalog', 'hive_2_6_4_0_91-hcatalog-server', 'hive_2_6_4_0_91-jdbc', 'hive_2_6_4_0_91-metastore', 'hive_2_6_4_0_91-server', 'hive_2_6_4_0_91-server2', 'hive_2_6_4_0_91-webhcat', 'hive_2_6_4_0_91-webhcat-server', 'hue', 'hue-beeswax', 'hue-common', 'hue-hcatalog', 'hue-oozie', 'hue-pig', 'hue-server', 'kafka', 'kafka_2_6_4_0_91', 'knox', 'knox_2_6_4_0_91', 'livy', 'livy2', 'livy2_2_6_4_0_91', 'livy_2_6_4_0_91', 'mahout', 'mahout-doc', 'mahout_2_6_4_0_91', 'mahout_2_6_4_0_91-doc', 'oozie', 'oozie-client', 'oozie-common', 'oozie-sharelib', 'oozie-sharelib-distcp', 'oozie-sharelib-hcatalog', 'oozie-sharelib-hive', 'oozie-sharelib-hive2', 'oozie-sharelib-mapreduce-streaming', 'oozie-sharelib-pig', 'oozie-sharelib-spark', 'oozie-sharelib-sqoop', 'oozie-webapp', 'oozie_2_6_4_0_91', 'oozie_2_6_4_0_91-client', 'oozie_2_6_4_0_91-common', 'oozie_2_6_4_0_91-sharelib', 'oozie_2_6_4_0_91-sharelib-distcp', 'oozie_2_6_4_0_91-sharelib-hcatalog', 'oozie_2_6_4_0_91-sharelib-hive', 'oozie_2_6_4_0_91-sharelib-hive2', 'oozie_2_6_4_0_91-sharelib-mapreduce-streaming', 'oozie_2_6_4_0_91-sharelib-pig', 'oozie_2_6_4_0_91-sharelib-spark', 
'oozie_2_6_4_0_91-sharelib-sqoop', 'oozie_2_6_4_0_91-webapp', 'phoenix', 'pig', 'pig_2_6_4_0_91', 'ranger-admin', 'ranger-atlas-plugin', 'ranger-hbase-plugin', 'ranger-hdfs-plugin', 'ranger-hive-plugin', 'ranger-kafka-plugin', 'ranger-kms', 'ranger-knox-plugin', 'ranger-solr-plugin', 'ranger-storm-plugin', 'ranger-tagsync', 'ranger-usersync', 'ranger-yarn-plugin', 'ranger_2_6_4_0_91-admin', 'ranger_2_6_4_0_91-atlas-plugin', 'ranger_2_6_4_0_91-hive-plugin', 'ranger_2_6_4_0_91-kafka-plugin', 'ranger_2_6_4_0_91-kms', 'ranger_2_6_4_0_91-knox-plugin', 'ranger_2_6_4_0_91-solr-plugin', 'ranger_2_6_4_0_91-storm-plugin', 'ranger_2_6_4_0_91-tagsync', 'ranger_2_6_4_0_91-usersync', 'shc', 'shc_2_6_4_0_91', 'slider', 'slider_2_6_4_0_91', 'spark', 'spark-master', 'spark-python', 'spark-worker', 'spark-yarn-shuffle', 'spark2', 'spark2-master', 'spark2-python', 'spark2-worker', 'spark2-yarn-shuffle', 'spark2_2_6_4_0_91', 'spark2_2_6_4_0_91-master', 'spark2_2_6_4_0_91-python', 'spark2_2_6_4_0_91-worker', 'spark_2_6_4_0_91', 'spark_2_6_4_0_91-master', 'spark_2_6_4_0_91-python', 'spark_2_6_4_0_91-worker', 'spark_llap', 'spark_llap_2_6_4_0_91', 'sqoop', 'sqoop-metastore', 'sqoop_2_6_4_0_91', 'sqoop_2_6_4_0_91-metastore', 'storm', 'storm-slider-client', 'storm_2_6_4_0_91', 'storm_2_6_4_0_91-slider-client', 'superset', 'superset_2_6_4_0_91', 'tez', 'tez_2_6_4_0_91', 'tez_hive2', 'tez_hive2_2_6_4_0_91', 'zeppelin', 'zeppelin_2_6_4_0_91', 'zookeeper', 'zookeeper-server', 'zookeeper_2_6_4_0_91-server', 'hadooplzo', 'hadooplzo-native', 'hadooplzo_2_6_4_0_91', 'hadooplzo_2_6_4_0_91-native', 'openblas', 'openblas-Rblas', 'openblas-devel', 'openblas-openmp', 'openblas-openmp64', 'openblas-openmp64_', 'openblas-serial64', 'openblas-serial64_', 'openblas-static', 'openblas-threads', 'openblas-threads64', 'openblas-threads64_', 'snappy', 'snappy-devel', 'openblas', 'openblas-Rblas', 'openblas-devel', 'openblas-openmp', 'openblas-openmp64', 'openblas-openmp64_', 'openblas-serial64', 'openblas-serial64_', 'openblas-static', 'openblas-threads', 'openblas-threads64', 'openblas-threads64_', 'snappy', 'snappy-devel', 'accumulo', 'accumulo-conf-standalone', 'accumulo-source', 'accumulo_2_6_4_0_91', 'accumulo_2_6_4_0_91-conf-standalone', 'accumulo_2_6_4_0_91-source', 'atlas-metadata', 'atlas-metadata-falcon-plugin', 'atlas-metadata-hive-plugin', 'atlas-metadata-sqoop-plugin', 'atlas-metadata-storm-plugin', 'atlas-metadata_2_6_4_0_91', 'atlas-metadata_2_6_4_0_91-falcon-plugin', 'atlas-metadata_2_6_4_0_91-hive-plugin', 'atlas-metadata_2_6_4_0_91-sqoop-plugin', 'atlas-metadata_2_6_4_0_91-storm-plugin', 'bigtop-tomcat', 'datafu', 'datafu_2_6_4_0_91', 'druid', 'druid_2_6_4_0_91', 'falcon', 'falcon-doc', 'falcon_2_6_4_0_91', 'falcon_2_6_4_0_91-doc', 'flume', 'flume-agent', 'flume_2_6_4_0_91', 'flume_2_6_4_0_91-agent', 'hadoop', 'hadoop-client', 'hadoop-conf-pseudo', 'hadoop-doc', 'hadoop-hdfs', 'hadoop-hdfs-datanode', 'hadoop-hdfs-fuse', 'hadoop-hdfs-journalnode', 'hadoop-hdfs-namenode', 'hadoop-hdfs-secondarynamenode', 'hadoop-hdfs-zkfc', 'hadoop-httpfs', 'hadoop-httpfs-server', 'hadoop-libhdfs', 'hadoop-mapreduce', 'hadoop-mapreduce-historyserver', 'hadoop-source', 'hadoop-yarn', 'hadoop-yarn-nodemanager', 'hadoop-yarn-proxyserver', 'hadoop-yarn-resourcemanager', 'hadoop-yarn-timelineserver', 'hadoop_2_6_4_0_91-conf-pseudo', 'hadoop_2_6_4_0_91-doc', 'hadoop_2_6_4_0_91-hdfs-datanode', 'hadoop_2_6_4_0_91-hdfs-fuse', 'hadoop_2_6_4_0_91-hdfs-journalnode', 'hadoop_2_6_4_0_91-hdfs-namenode', 'hadoop_2_6_4_0_91-hdfs-secondarynamenode', 
'hadoop_2_6_4_0_91-hdfs-zkfc', 'hadoop_2_6_4_0_91-httpfs', 'hadoop_2_6_4_0_91-httpfs-server', 'hadoop_2_6_4_0_91-mapreduce-historyserver', 'hadoop_2_6_4_0_91-source', 'hadoop_2_6_4_0_91-yarn-nodemanager', 'hadoop_2_6_4_0_91-yarn-proxyserver', 'hadoop_2_6_4_0_91-yarn-resourcemanager', 'hadoop_2_6_4_0_91-yarn-timelineserver', 'hbase', 'hbase-doc', 'hbase-master', 'hbase-regionserver', 'hbase-rest', 'hbase-thrift', 'hbase-thrift2', 'hbase_2_6_4_0_91-doc', 'hbase_2_6_4_0_91-master', 'hbase_2_6_4_0_91-regionserver', 'hbase_2_6_4_0_91-rest', 'hbase_2_6_4_0_91-thrift', 'hbase_2_6_4_0_91-thrift2', 'hive', 'hive-hcatalog', 'hive-hcatalog-server', 'hive-jdbc', 'hive-metastore', 'hive-server', 'hive-server2', 'hive-webhcat', 'hive-webhcat-server', 'hive2', 'hive2-jdbc', 'hive2_2_6_4_0_91', 'hive2_2_6_4_0_91-jdbc', 'hive_2_6_4_0_91', 'hive_2_6_4_0_91-hcatalog', 'hive_2_6_4_0_91-hcatalog-server', 'hive_2_6_4_0_91-jdbc', 'hive_2_6_4_0_91-metastore', 'hive_2_6_4_0_91-server', 'hive_2_6_4_0_91-server2', 'hive_2_6_4_0_91-webhcat', 'hive_2_6_4_0_91-webhcat-server', 'hue', 'hue-beeswax', 'hue-common', 'hue-hcatalog', 'hue-oozie', 'hue-pig', 'hue-server', 'kafka', 'kafka_2_6_4_0_91', 'knox', 'knox_2_6_4_0_91', 'livy', 'livy2', 'livy2_2_6_4_0_91', 'livy_2_6_4_0_91', 'mahout', 'mahout-doc', 'mahout_2_6_4_0_91', 'mahout_2_6_4_0_91-doc', 'oozie', 'oozie-client', 'oozie-common', 'oozie-sharelib', 'oozie-sharelib-distcp', 'oozie-sharelib-hcatalog', 'oozie-sharelib-hive', 'oozie-sharelib-hive2', 'oozie-sharelib-mapreduce-streaming', 'oozie-sharelib-pig', 'oozie-sharelib-spark', 'oozie-sharelib-sqoop', 'oozie-webapp', 'oozie_2_6_4_0_91', 'oozie_2_6_4_0_91-client', 'oozie_2_6_4_0_91-common', 'oozie_2_6_4_0_91-sharelib', 'oozie_2_6_4_0_91-sharelib-distcp', 'oozie_2_6_4_0_91-sharelib-hcatalog', 'oozie_2_6_4_0_91-sharelib-hive', 'oozie_2_6_4_0_91-sharelib-hive2', 'oozie_2_6_4_0_91-sharelib-mapreduce-streaming', 'oozie_2_6_4_0_91-sharelib-pig', 'oozie_2_6_4_0_91-sharelib-spark', 'oozie_2_6_4_0_91-sharelib-sqoop', 'oozie_2_6_4_0_91-webapp', 'phoenix', 'pig', 'pig_2_6_4_0_91', 'ranger-admin', 'ranger-atlas-plugin', 'ranger-hbase-plugin', 'ranger-hdfs-plugin', 'ranger-hive-plugin', 'ranger-kafka-plugin', 'ranger-kms', 'ranger-knox-plugin', 'ranger-solr-plugin', 'ranger-storm-plugin', 'ranger-tagsync', 'ranger-usersync', 'ranger-yarn-plugin', 'ranger_2_6_4_0_91-admin', 'ranger_2_6_4_0_91-atlas-plugin', 'ranger_2_6_4_0_91-hive-plugin', 'ranger_2_6_4_0_91-kafka-plugin', 'ranger_2_6_4_0_91-kms', 'ranger_2_6_4_0_91-knox-plugin', 'ranger_2_6_4_0_91-solr-plugin', 'ranger_2_6_4_0_91-storm-plugin', 'ranger_2_6_4_0_91-tagsync', 'ranger_2_6_4_0_91-usersync', 'shc', 'shc_2_6_4_0_91', 'slider', 'slider_2_6_4_0_91', 'spark', 'spark-master', 'spark-python', 'spark-worker', 'spark-yarn-shuffle', 'spark2', 'spark2-master', 'spark2-python', 'spark2-worker', 'spark2-yarn-shuffle', 'spark2_2_6_4_0_91', 'spark2_2_6_4_0_91-master', 'spark2_2_6_4_0_91-python', 'spark2_2_6_4_0_91-worker', 'spark_2_6_4_0_91', 'spark_2_6_4_0_91-master', 'spark_2_6_4_0_91-python', 'spark_2_6_4_0_91-worker', 'spark_llap', 'spark_llap_2_6_4_0_91', 'sqoop', 'sqoop-metastore', 'sqoop_2_6_4_0_91', 'sqoop_2_6_4_0_91-metastore', 'storm', 'storm-slider-client', 'storm_2_6_4_0_91', 'storm_2_6_4_0_91-slider-client', 'superset', 'superset_2_6_4_0_91', 'tez', 'tez_2_6_4_0_91', 'tez_hive2', 'tez_hive2_2_6_4_0_91', 'zeppelin', 'zeppelin_2_6_4_0_91', 'zookeeper', 'zookeeper-server', 'zookeeper_2_6_4_0_91-server', 'hadooplzo', 'hadooplzo-native', 'hadooplzo_2_6_4_0_91', 
'hadooplzo_2_6_4_0_91-native']
2018-03-20 16:06:55,286 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2018-03-20 16:06:55,305 - Using hadoop conf dir: /usr/hdp/2.6.4.0-91/hadoop/conf
2018-03-20 16:06:55,308 - Group['hdfs'] {}
2018-03-20 16:06:55,312 - Group['hadoop'] {}
2018-03-20 16:06:55,313 - Group['users'] {}
2018-03-20 16:06:55,314 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-20 16:06:55,317 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-20 16:06:55,319 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-20 16:06:55,321 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-20 16:06:55,323 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-03-20 16:06:55,326 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-03-20 16:06:55,328 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2018-03-20 16:06:55,330 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-20 16:06:55,332 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-20 16:06:55,334 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-20 16:06:55,337 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-20 16:06:55,339 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-20 16:06:55,340 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-03-20 16:06:55,344 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-03-20 16:06:55,358 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-03-20 16:06:55,359 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2018-03-20 16:06:55,362 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-03-20 16:06:55,366 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-03-20 16:06:55,368 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2018-03-20 16:06:55,389 - call returned (0, '1009')
2018-03-20 16:06:55,390 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1009'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2018-03-20 16:06:55,403 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1009'] due to not_if
2018-03-20 16:06:55,405 - Group['hdfs'] {}
2018-03-20 16:06:55,406 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}
2018-03-20 16:06:55,407 - FS Type:
2018-03-20 16:06:55,408 - Directory['/etc/hadoop'] {'mode': 0755}
2018-03-20 16:06:55,452 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-03-20 16:06:55,455 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2018-03-20 16:06:55,490 - Repository['HDP-2.6-repo-1'] {'append_to_file': False, 'base_url': 'http://centos7-001/HDP/centos7', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2018-03-20 16:06:55,508 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://centos7-001/HDP/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-03-20 16:06:55,510 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2018-03-20 16:06:55,512 - Repository['HDP-2.6-GPL-repo-1'] {'append_to_file': True, 'base_url': 'http://centos7-001/HDP-GPL/centos7', 'action': ['create'], 'components': [u'HDP-GPL', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2018-03-20 16:06:55,520 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://centos7-001/HDP/centos7\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-2.6-GPL-repo-1]\nname=HDP-2.6-GPL-repo-1\nbaseurl=http://centos7-001/HDP-GPL/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-03-20 16:06:55,520 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2018-03-20 16:06:55,534 - Repository['HDP-UTILS-1.1.0.22-repo-1'] {'append_to_file': True, 'base_url': 'http://centos7-001/HDP-UTILS', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2018-03-20 16:06:55,542 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://centos7-001/HDP/centos7\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-2.6-GPL-repo-1]\nname=HDP-2.6-GPL-repo-1\nbaseurl=http://centos7-001/HDP-GPL/centos7\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.22-repo-1]\nname=HDP-UTILS-1.1.0.22-repo-1\nbaseurl=http://centos7-001/HDP-UTILS\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-03-20 16:06:55,542 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2018-03-20 16:06:55,556 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-03-20 16:06:55,756 - Skipping installation of existing package unzip
2018-03-20 16:06:55,757 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-03-20 16:06:55,778 - Skipping installation of existing package curl
2018-03-20 16:06:55,779 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-03-20 16:06:55,799 - Skipping installation of existing package hdp-select
2018-03-20 16:06:55,812 - The repository with version 2.6.4.0-91 for this command has been marked as resolved. It will be used to report the version of the component which was installed
2018-03-20 16:06:56,419 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2018-03-20 16:06:56,452 - Using hadoop conf dir: /usr/hdp/2.6.4.0-91/hadoop/conf
2018-03-20 16:06:56,464 - checked_call['hostid'] {}
2018-03-20 16:06:56,472 - checked_call returned (0, '10acab0a')
2018-03-20 16:06:56,493 - Command repositories: HDP-2.6-repo-1, HDP-2.6-GPL-repo-1, HDP-UTILS-1.1.0.22-repo-1
2018-03-20 16:06:56,494 - Applicable repositories: HDP-2.6-repo-1, HDP-2.6-GPL-repo-1, HDP-UTILS-1.1.0.22-repo-1
2018-03-20 16:06:56,498 - Looking for matching packages in the following repositories: HDP-2.6-repo-1, HDP-2.6-GPL-repo-1, HDP-UTILS-1.1.0.22-repo-1
2018-03-20 16:07:01,619 - Adding fallback repositories: HDP-UTILS-1.1.0.22-repo-51, HDP-2.6-repo-51, HDP-2.6-GPL-repo-51
2018-03-20 16:07:05,698 - No package found for hbase_${stack_version}(hbase_(\d|_)+$)
2018-03-20 16:07:05,708 - The repository with version 2.6.4.0-91 for this command has been marked as resolved. It will be used to report the version of the component which was installed
4. But I checked every cluster node and the master: the HBase components were all installed successfully, like this:
[root@centos7-001 Downloads]# yum list | grep hbase\*
hbase_2_6_4_0_91.noarch 1.1.2.2.6.4.0-91 @HDP-2.6-repo-1
ranger_2_6_4_0_91-hbase-plugin.x86_64 0.7.0.2.6.4.0-91 @HDP-2.6-repo-1
hbase.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase-doc.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase-master.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase-regionserver.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase-rest.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase-thrift.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase-thrift2.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase_2_6_4_0_91-doc.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase_2_6_4_0_91-master.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase_2_6_4_0_91-regionserver.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase_2_6_4_0_91-rest.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase_2_6_4_0_91-thrift.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
hbase_2_6_4_0_91-thrift2.noarch 1.1.2.2.6.4.0-91 HDP-2.6-repo-1
ranger-hbase-plugin.noarch 0.7.0.2.6.4.0-91 HDP-2.6-repo-1
I then tried several approaches found through internet searches, such as updating the yum repos, installing the components manually, and reinstalling Ambari, but the end result was the same: failure. The clusters I built on HDP 2.5 in the past never hit these errors. So what can I do now? Can someone give me a suggestion? Thanks.
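A diagnostic sketch, based on an assumption about the error above: the matcher regexp hbase_(\d|_)+$ is compared against the *available* package list, and the yum output shows hbase_2_6_4_0_91 already installed (the @HDP-2.6-repo-1 marker), so it never appears as available.

yum list installed | grep '^hbase_'    # what yum considers installed
yum list available | grep '^hbase_'    # what Ambari's matcher can see
# if the versioned base package is installed but not available, removing it
# without dependencies lets the Ambari install step match and reinstall it
rpm -e --nodeps hbase_2_6_4_0_91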
10-31-2017
05:53 AM
1. The Apache Ambari version is 2.1.2.1.
2. DataNode logs (only the ERROR-level entries, from files like hadoop-hdfs-datanode-xxxx.log; a grep sketch follows the excerpt):
2017-10-31 01:52:33,414 ERROR datanode.DataNode (DataXceiver.java:run(278)) - data-01:50010:DataXceiver error processing unknown operation src: /127.0.0.1:55901 dst: /127.0.0.1:50010
java.io.EOFException
at java.io.DataInputStream.readShort(DataInputStream.java:315)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
at java.lang.Thread.run(Thread.java:745)
2017-10-31 04:43:54,341 ERROR datanode.DataNode (DataXceiver.java:run(278)) - data-02:50010:DataXceiver error processing unknown operation src: /127.0.0.1:35030 dst: /127.0.0.1:50010
java.io.EOFException
at java.io.DataInputStream.readShort(DataInputStream.java:315)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
at java.lang.Thread.run(Thread.java:745)
2017-10-31 04:43:54,351 INFO DataNode.clienttrace (DataXceiver.java:requestShortCircuitFds(369)) - src: 127.0.0.1, dest: 127.0.0.1, op: REQUEST_SHORT_CIRCUIT_FDS, blockid: 1090201179, srvID: 37c8941a-1524-4526-adf3-7265b6013c06, success: true
2017-10-30 21:20:02,607 ERROR datanode.DataNode (DataXceiver.java:run(278)) - data-03:50010:DataXceiver error processing unknown operation src: /127.0.0.1:35388 dst: /127.0.0.1:50010
java.io.EOFException
at java.io.DataInputStream.readShort(DataInputStream.java:315)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
at java.lang.Thread.run(Thread.java:745)
2017-10-30 21:20:02,611 INFO DataNode.clienttrace (DataXceiver.java:requestShortCircuitFds(369)) - src: 127.0.0.1, dest: 127.0.0.1, op: REQUEST_SHORT_CIRCUIT_FDS, blockid: 1088782115, srvID: 8945905f-8ee7-4373-8115-d1c7a17574f5, success: true
This is the only type of error I can find. What do you mean by checking this? Do I still need to attach other info? Thanks.
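A sketch of how the ERROR-only excerpt above can be collected (the log path is an assumption from a default HDP layout):

grep -A4 ' ERROR ' /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log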
10-30-2017
01:49 AM
I built HDP 2.4 on a production environment last year, and it has run well ever since. But now something strange has happened: all the components are running fine, yet the metrics display is not. Oddly, only the HDFS and YARN components are affected. I took screenshots of the dashboard, attached below: hdfs-err01.png yarn-err01.png. I also tried to find details in the log files without success, because there is no ERROR info at all in `ambari-metrics-collector.log`. What can I do to fix this problem? Thanks.
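A sketch of widening the log search beyond ERROR (the collector log path is an assumption from a default Ambari Metrics install):

grep -iE 'warn|exception' /var/log/ambari-metrics-collector/ambari-metrics-collector.log | tail -n 50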
Labels: Apache Ambari
09-25-2017
07:11 AM
Today I hit some problems with the Ambari Metrics Collector, so I tried to re-add the service. It did not succeed and throws an exception like the one below:
25 Sep 2017 14:50:11,256 ERROR [ambari-client-thread-34] BaseManagementHandler:61 - Caught a system exception while attempting to create a resource: Error occured during stack advisor command invocation: Cannot create /var/run/ambari-server/stack-recommendations/2
org.apache.ambari.server.controller.spi.SystemException: Error occured during stack advisor command invocation: Cannot create /var/run/ambari-server/stack-recommendations/2
at org.apache.ambari.server.controller.internal.RecommendationResourceProvider.createResources(RecommendationResourceProvider.java:98)
at org.apache.ambari.server.controller.internal.ClusterControllerImpl.createResources(ClusterControllerImpl.java:298)
at org.apache.ambari.server.api.services.persistence.PersistenceManagerImpl.create(PersistenceManagerImpl.java:97)
at org.apache.ambari.server.api.handlers.CreateHandler.persist(CreateHandler.java:37)
at org.apache.ambari.server.api.handlers.BaseManagementHandler.handleRequest(BaseManagementHandler.java:73)
at org.apache.ambari.server.api.services.BaseRequest.process(BaseRequest.java:145)
at org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:126)
at org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:90)
at org.apache.ambari.server.api.services.RecommendationService.getRecommendation(RecommendationService.java:59)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1507)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:118)
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:84)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.apache.ambari.server.security.authorization.AmbariAuthorizationFilter.doFilter(AmbariAuthorizationFilter.java:257)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:113)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:103)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:113)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:54)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:45)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.apache.ambari.server.security.authorization.jwt.JwtAuthenticationFilter.doFilter(JwtAuthenticationFilter.java:96)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilter(BasicAuthenticationFilter.java:150)
at org.apache.ambari.server.security.authentication.AmbariAuthenticationFilter.doFilter(AmbariAuthenticationFilter.java:88)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.apache.ambari.server.security.authorization.AmbariUserAuthorizationFilter.doFilter(AmbariUserAuthorizationFilter.java:91)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.apache.ambari.server.api.MethodOverrideFilter.doFilter(MethodOverrideFilter.java:72)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.apache.ambari.server.api.AmbariPersistFilter.doFilter(AmbariPersistFilter.java:47)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.apache.ambari.server.security.AbstractSecurityHeaderFilter.doFilter(AbstractSecurityHeaderFilter.java:109)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:82)
at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:294)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:499)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:427)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:212)
at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:201)
at org.apache.ambari.server.controller.AmbariHandlerList.handle(AmbariHandlerList.java:139)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:370)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)
at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:984)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1045)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:236)
at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:696)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:53)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.ambari.server.api.services.stackadvisor.StackAdvisorException: Error occured during stack advisor command invocation: Cannot create /var/run/ambari-server/stack-recommendations/2
at org.apache.ambari.server.api.services.stackadvisor.commands.StackAdvisorCommand.invoke(StackAdvisorCommand.java:305)
at org.apache.ambari.server.api.services.stackadvisor.StackAdvisorHelper.recommend(StackAdvisorHelper.java:111)
at org.apache.ambari.server.controller.internal.RecommendationResourceProvider.createResources(RecommendationResourceProvider.java:92)
... 93 more
So what is wrong with my steps? Reference: ambari-metrics-collector-now-starting
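A diagnostic sketch based only on the "Cannot create /var/run/ambari-server/stack-recommendations/2" message (an assumption, not a confirmed fix): verify that the ambari-server process can write to that directory and that the filesystem is not full.

ls -ld /var/run/ambari-server/stack-recommendations
df -h /var/run
# if ambari-server runs as a non-root user ('ambari' is an assumed name), fix ownership:
# chown -R ambari /var/run/ambari-server/stack-recommendations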
Labels: Apache Ambari
06-26-2017
06:49 AM
I think you need to check the folder permissions. There are two places to check: `/var/log/hbase` and `/hadoop/hbase/local/jars/tmp/`. After I chowned those folders to the hbase user, the region server started successfully. Try it, and good luck.
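A minimal sketch of the chown described above (the hadoop group is an assumption; adjust it to your hbase user's primary group):

chown -R hbase:hadoop /var/log/hbase /hadoop/hbase/local/jars/tmp/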
03-09-2017
07:46 AM
It seems this does not really resolve the problem; it still shows up the next time. What is the final fix?
01-12-2017
02:55 AM
@rguruvannagari Thanks for pointing that out. I checked my YARN config and found that it did not have cgroups enabled, so I turned it on. Now the CLI can log in and execute MapReduce jobs. Here are the resolution steps for other readers (a shell sketch of the OS-level steps follows the references):
1. umount /sys/fs/cgroup/cpu,cpuacct
2. mkdir /cgroup/cpu
3. Disable the CPU Scheduling & CPU Isolation properties.
4. Ensure the following properties are set:
yarn.nodemanager.container-executor.class=org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor
yarn.nodemanager.linux-container-executor.resources-handler.class=org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler
yarn.nodemanager.linux-container-executor.cgroups.hierarchy=/yarn
yarn.nodemanager.linux-container-executor.cgroups.mount=true
yarn.nodemanager.linux-container-executor.cgroups.mount-path=/cgroup
yarn.nodemanager.linux-container-executor.group=hadoop
5. Add the yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user property and set it to the desired user.
6. Configure the LinuxContainerExecutor to run jobs as the submitting user by adding the property yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users and setting it to false.
7. Set min.user.id to a lower value in /etc/hadoop/conf/container-executor.cfg on all NodeManagers.
8. Restart the YARN service.
References:
https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/NodeManagerCgroups.html
https://www.ibm.com/support/knowledgecenter/SSPT3X_4.2.0/com.ibm.swg.im.infosphere.biginsights.admin.doc/doc/admin_yarn_cgroups.html
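A shell sketch of steps 1, 2, and 7 above, run as root on every NodeManager (the min.user.id value of 500 is an assumption; choose one at or below the UID of the user submitting jobs):

umount /sys/fs/cgroup/cpu,cpuacct
mkdir -p /cgroup/cpu
sed -i 's/^min\.user\.id=.*/min.user.id=500/' /etc/hadoop/conf/container-executor.cfg
# then restart YARN from Ambari (step 8)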
01-11-2017
02:55 PM
Thank you for the reply. I tried logging in as the hive user; the exception still shows up. I also checked the groups and they are all correct. I am out of ideas. Did you notice the error message that says "main : run as user is nobody"? That is very strange.