05-10-2015
04:04 PM
Hi Jean-Marc,

Thanks for your thorough analysis! Making sure the HFiles stay around makes perfect sense, so it is just a permissions issue. And hopefully this will be fixed with HBase 1.2 then? I will use a permissions workaround meanwhile (sketch below).

Best regards
Jost
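PS: The kind of workaround I mean is roughly the following sketch. The /hbase/archive path comes from the stack trace in my original post; making it world-writable is a quick-and-dirty assumption for the quickstart VM, and the code must run as a user allowed to change those permissions (e.g. hdfs or hbase). It has the same effect as running `hdfs dfs -chmod 777 /hbase/archive` as such a user.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class ArchivePermissionsWorkaround {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Quick-and-dirty: let any user (including the one submitting the MR
        // job) create entries under /hbase/archive. Tighten this on a shared
        // cluster; 777 is only acceptable for a throwaway test VM.
        fs.setPermission(new Path("/hbase/archive"),
                new FsPermission((short) 0777));
        fs.close();
    }
}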
05-07-2015
04:57 PM
Thanks. Let me know if you still need the /etc/hadoop/conf contents (which is actually a link to /etc/alternatives/hadoop-conf). I am certain that I did not modify it (not consciously or manually, that is 🙂 ); it should be the default CDH 5.3.0 quickstart one. / Jost
05-07-2015
04:50 PM
Hi,

No, I did not modify any permissions. Also, I cannot find this property in the directory you are requesting (see below). The folder contents are attached FYI.

/ Jost

[cloudera@quickstart test]$ uname -a
Linux quickstart.cloudera 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 00:31:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
[cloudera@quickstart test]$ grep -r -e dfs.permissions /etc/hadoop/conf
grep: /etc/hadoop/conf/container-executor.cfg: Permission denied
[cloudera@quickstart test]$ sudo grep -r -e dfs.permissions /etc/hadoop/conf
[cloudera@quickstart test]$ ls -lrt /etc/hadoop/conf/
total 44
-rw-r--r-- 1 root root   3906 Apr 20 19:37 yarn-site.xml
-rw-r--r-- 1 root root    315 Apr 20 19:37 ssl-client.xml
-rw-r--r-- 1 root root   4391 Apr 20 19:37 mapred-site.xml
-rw-r--r-- 1 root root    300 Apr 20 19:37 log4j.properties
-rw-r--r-- 1 root root   1669 Apr 20 19:37 hdfs-site.xml
-rw-r--r-- 1 root root    425 Apr 20 19:37 hadoop-env.sh
-rw-r--r-- 1 root root   3675 Apr 20 19:37 core-site.xml
-rw-r--r-- 1 root root     21 Apr 20 19:37 __cloudera_generation__
-r-------- 1 root hadoop    0 May  6 22:21 container-executor.cfg
-rwxr-xr-x 1 root hadoop 1510 May  6 22:21 topology.py
-rw-r--r-- 1 root hadoop  200 May  6 22:21 topology.map
[cloudera@quickstart test]$
05-06-2015
10:34 PM
> What do you mean here by "Region servers"? This MR job should not create any region servers.

Sorry, that was a bit misleading. No region servers are created. What is in fact created are regions (by a method with the "regionserver" log prefix). Before the stack trace, the output of the job contains messages "INFO regionserver.HRegion: creating HRegion testtable .." (one of them for the test program, many of them for the real application, as it uses many regions).

/ Jost
05-06-2015
03:38 PM
Hi JMS,

I am using Cloudera quickstart VMs (version 5.3.0) for tests. The problem can also be observed in our production system, which runs Cloudera 5.2.4. The exception cannot be seen in all runs; it depends on the previous use of HBase. In the VM where I see the exception, HBase contains data that is around two weeks old, and some tables have been dropped. (I suspected that restoring the snapshot triggers an internal archiving step and uses the wrong user.)

HTH / Jost
04-21-2015
12:55 AM
Hi everybody,

I wonder if someone could explain what is going on internally when I use an HBase snapshot as input for map-reduce, as explained in [1] (configured by the `initTableSnapshotMapperJob` API described in [2]). My app does the following (a minimal code sketch is at the end of this post):

1. create a snapshot using the `HBaseAdmin` API
2. create a new HDFS directory in the user's home
3. call `initTableSnapshotMapperJob` to configure a TableMapper job to run on the created snapshot (passing the new directory as the tmp restore directory)
4. set a few more job parameters (the job creates HFiles for bulk import) and then wait for job completion
5. delete the temporary directory

The problem I am stuck with is that the initialisation (step 3) throws an exception about writing to /hbase/archive (!), after successfully creating the Region servers for the restored snapshot in the given tmp directory. The exception is given below [3]. I can see in the job's output that region servers are created before the exception, and the files from the table restore stay in the directory.

I was not expecting HBase to *write* anything to the HBase directories when using a snapshot with an explicitly-given temporary directory to work with. What can I do to make this work?

All this is tested on a Cloudera quickstart VM, btw., but that should not really matter IMHO.

Thanks
Jost

[1] http://www.slideshare.net/enissoz/mapreduce-over-snapshots
[2] https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.html#initTableSnapshotMapperJob(java.lang.String,%20org.apache.hadoop.hbase.client.Scan,%20java.lang.Class,%20java.lang.Class,%20java.lang.Class,%20org.apache.hadoop.mapreduce.Job,%20boolean,%20org.apache.hadoop.fs.Path)
[3] java.util.concurrent.ExecutionException: org.apache.hadoop.security.AccessControlException: Permission denied: user=cloudera, access=WRITE, inode="/hbase/archive":hbase:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:216)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:145)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6286)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6268)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6220)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4087)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4057)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4030)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:787)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:297)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:594)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
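For completeness, here is a minimal sketch of steps 1-5 as described above. The table name, snapshot name, restore directory, and the no-op mapper are placeholders (my real application configures HFile output for bulk import instead); the `initTableSnapshotMapperJob` call follows the signature documented in [2].

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.mapreduce.Job;

public class SnapshotMapReduceSketch {

    // Placeholder mapper: reads rows of the restored snapshot, does nothing.
    static class SnapshotMapper extends TableMapper<ImmutableBytesWritable, Result> {
        @Override
        protected void map(ImmutableBytesWritable key, Result value, Context context) {
            // process one row of the snapshot here
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // Step 1: create a snapshot via the HBaseAdmin API.
        HBaseAdmin admin = new HBaseAdmin(conf);
        admin.snapshot("testtable-snap", "testtable");
        admin.close();

        // Step 2: a fresh scratch directory in the user's home for the restore.
        Path restoreDir = new Path("/user/cloudera/snap-restore");

        // Steps 3 and 4: configure a TableMapper job over the snapshot,
        // passing restoreDir as the tmp restore directory, then run it.
        Job job = Job.getInstance(conf, "snapshot-mr");
        job.setJarByClass(SnapshotMapReduceSketch.class);
        TableMapReduceUtil.initTableSnapshotMapperJob(
                "testtable-snap",             // snapshot name
                new Scan(),                   // scan to apply to the snapshot
                SnapshotMapper.class,         // mapper
                ImmutableBytesWritable.class, // mapper output key
                Result.class,                 // mapper output value
                job, true, restoreDir);       // this is the call that throws [3]
        job.waitForCompletion(true);

        // Step 5: delete the temporary restore directory.
        restoreDir.getFileSystem(conf).delete(restoreDir, true);
    }
}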
Labels:
Apache HBase