<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>HBase master errors HFileArchiver - Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/HBase-master-errors-HFileArchiver/m-p/29899#M6684</link>
    <description>HBase master logs show repeated HFileArchiver permission errors after bulk loads are run as a non-hbase user (CDH 5.3.3).</description>
    <pubDate>Fri, 16 Sep 2022 09:35:02 GMT</pubDate>
    <dc:creator>Minutis</dc:creator>
    <dc:date>2022-09-16T09:35:02Z</dc:date>
    <item>
      <title>HBase master errors HFileArchiver</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/HBase-master-errors-HFileArchiver/m-p/29899#M6684</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;CDH 5.3.3 distribution&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have the following error message repeating in the HBase master logs:&lt;/P&gt;&lt;PRE&gt;2015-07-21 16:36:20,964 WARN org.apache.hadoop.hbase.backup.HFileArchiver: Failed to archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://server:8020/hbase/data/default/olap_cube/4dc9d309b1a90895260662f493799e45/metric/fcd52caf2c364269939dff6c4962e76b_SeqId_168130436_ on try #2
org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/hbase/data/default/olap_cube/4dc9d309b1a90895260662f493799e45/metric/fcd52caf2c364269939dff6c4962e76b_SeqId_168130436_":justAnotherUser:hadoop:-rw-r--r--
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:151)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6287)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6269)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:6194)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimesInt(FSNamesystem.java:2159)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:2137)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:991)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.setTimes(AuthorizationProviderProxyClientProtocol.java:576)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:885)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

        at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
        at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:2756)
        at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1304)
        at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1300)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.setTimes(DistributedFileSystem.java:1300)
        at org.apache.hadoop.hbase.util.FSUtils.renameAndSetModifyTime(FSUtils.java:1678)
        at org.apache.hadoop.hbase.backup.HFileArchiver$File.moveAndClose(HFileArchiver.java:586)
        at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile(HFileArchiver.java:425)
        at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:335)
        at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:347)
        at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:284)
        at org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:137)
        at org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:75)
        at org.apache.hadoop.hbase.master.CatalogJanitor.cleanParent(CatalogJanitor.java:333)
        at org.apache.hadoop.hbase.master.CatalogJanitor.scan(CatalogJanitor.java:254)
        at org.apache.hadoop.hbase.master.CatalogJanitor.chore(CatalogJanitor.java:101)
        at org.apache.hadoop.hbase.Chore.run(Chore.java:87)
        at java.lang.Thread.run(Thread.java:724)&lt;/PRE&gt;&lt;P&gt;I changed the ownership of these files manually to the 'hbase' user, but after a while the errors return, just with different files (for other HBase tables too).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I suspect that this problem occurs because HBase wants to compact these files, but the permissions are incorrect. Our Oozie workflows containing the bulk load are run as 'justAnotherUser' instead of the 'hbase' user. This is because we prepare the data and load it into HBase in the same workflow, and we require that the running user is 'justAnotherUser'.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Did I guess it right? What should I do to load data into HBase correctly? Should I separate data preparation and bulk loading into different workflows? Is it possible to specify a user for the bulk load alone? Or is it a completely different issue?&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 09:35:02 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/HBase-master-errors-HFileArchiver/m-p/29899#M6684</guid>
      <dc:creator>Minutis</dc:creator>
      <dc:date>2022-09-16T09:35:02Z</dc:date>
    </item>
    <item>
      <title>Re: HBase master errors HFileArchiver</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/HBase-master-errors-HFileArchiver/m-p/29901#M6685</link>
      <description>How are you bulk loading, specifically? Could you chown the prepared files as 'hbase' before you trigger the bulk load, or use the SecureBulkLoad technique offered by HBase?
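&lt;P&gt;For example, a workflow step along these lines could hand the files over before the load (an untested sketch, not from the original thread; only an HDFS superuser can change ownership, and the 'hbase' group and path argument here are placeholders):&lt;/P&gt;&lt;PRE&gt;import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ChownHFiles {
  // Recursively hand the prepared HFiles to the 'hbase' user before the bulk
  // load, so the files moved under /hbase/data stay writable by the master.
  // NOTE: FileSystem.setOwner succeeds only for an HDFS superuser.
  static void chownRecursive(FileSystem fs, Path path) throws Exception {
    fs.setOwner(path, "hbase", "hbase");
    if (fs.getFileStatus(path).isDirectory()) {
      for (FileStatus child : fs.listStatus(path)) {
        chownRecursive(fs, child.getPath());
      }
    }
  }

  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(HBaseConfiguration.create());
    chownRecursive(fs, new Path(args[0])); // directory holding the generated HFiles
  }
}&lt;/PRE&gt;</description>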
      <pubDate>Tue, 21 Jul 2015 14:09:02 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/HBase-master-errors-HFileArchiver/m-p/29901#M6685</guid>
      <dc:creator>Harsh J</dc:creator>
      <dc:date>2015-07-21T14:09:02Z</dc:date>
    </item>
    <item>
      <title>Re: HBase master errors HFileArchiver</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/HBase-master-errors-HFileArchiver/m-p/29922#M6686</link>
      <description>&lt;P&gt;Thanks for your reply! We are generating HFiles and then doing this:&lt;/P&gt;&lt;PRE&gt;LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
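// doBulkLoad moves the generated HFiles under hfilePath into the table's
// region directories, running as whichever user executes this client code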
loader.doBulkLoad(new Path(hfilePath), table);&lt;/PRE&gt;&lt;P&gt;Running 'chown' on the files before the bulk load is one way to do it, thanks for the information! I just wonder if it's the best way. Since not all of our clusters use secure mode, is it possible to implement SecureBulkLoad on a non-secure cluster?&lt;/P&gt;</description>
      <pubDate>Wed, 22 Jul 2015 07:34:28 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/HBase-master-errors-HFileArchiver/m-p/29922#M6686</guid>
      <dc:creator>Minutis</dc:creator>
      <dc:date>2015-07-22T07:34:28Z</dc:date>
    </item>
    <item>
      <title>Re: HBase master errors HFileArchiver</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/HBase-master-errors-HFileArchiver/m-p/29975#M6687</link>
      <description>It should be possible to perform SecureBulkLoad without enabling Kerberos, although I have not personally tested this mix. The config steps may involve just configuring the SecureBulkLoad endpoint and the staging directory configs on the RegionServers and clients.
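&lt;P&gt;As a sketch (untested, based on the HBase 0.98 documentation; the staging path is only an example), that would mean adding something like this to hbase-site.xml on the RegionServers:&lt;/P&gt;&lt;PRE&gt;&amp;lt;property&amp;gt;
  &amp;lt;name&amp;gt;hbase.bulkload.staging.dir&amp;lt;/name&amp;gt;
  &amp;lt;value&amp;gt;/tmp/hbase-staging&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;
&amp;lt;property&amp;gt;
  &amp;lt;name&amp;gt;hbase.coprocessor.region.classes&amp;lt;/name&amp;gt;
  &amp;lt;value&amp;gt;org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;&lt;/PRE&gt;&lt;P&gt;If hbase.coprocessor.region.classes is already set, append the endpoint to the existing comma-separated list rather than replacing it.&lt;/P&gt;</description>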
      <pubDate>Thu, 23 Jul 2015 12:58:02 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/HBase-master-errors-HFileArchiver/m-p/29975#M6687</guid>
      <dc:creator>Harsh J</dc:creator>
      <dc:date>2015-07-23T12:58:02Z</dc:date>
    </item>
  </channel>
</rss>