Archives of Support Questions (Read Only)

This is an archived board for historical reference. Information and links may no longer be available or relevant.

/user/accumulo permissions problem

New Contributor

I'm trying to track down an "under replicated blocks" issue and ran into a permission problem. If I run 'fsck', here's what happens:

 

[admin@fatman ~]$ hdfs fsck /
Connecting to namenode via http://fatman.localdomain:50070
FSCK started by admin (auth:SIMPLE) from /10.1.1.10 for path / at Fri Jan 16 09:24:15 EST 2015
....................................................................................................
....................................................................................................
.............................................FSCK ended at Fri Jan 16 09:24:15 EST 2015 in 42 milliseconds
Permission denied: user=admin, access=READ_EXECUTE, inode="/user/accumulo/.Trash":accumulo:accumulo:drwx------

 

[admin@fatman ~]$ hadoop fs -ls /user/accumulo
Found 1 items
drwx------   - accumulo accumulo          0 2015-01-15 19:00 /user/accumulo/.Trash
[admin@fatman ~]$

 

This indicates that only the 'accumulo' user can access the directory. According to the /etc/passwd file, the accumulo account does not support login, so I don't know how to become the accumulo user. Sounds like a trap to me!

 

From /etc/passwd:

accumulo:x:477:473:Accumulo:/var/lib/accumulo:/sbin/nologin
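(One thing worth noting: a /sbin/nologin shell only blocks interactive login; if you have root or sudo rights on the node, `sudo -u` can still run a single command as that service account. A hypothetical session, not from the original post:)

```shell
# nologin prevents "su - accumulo" from giving you a shell, but sudo -u
# can still execute one-off commands as the accumulo service account:
[root@fatman ~]# sudo -u accumulo hadoop fs -ls /user/accumulo/.Trash
```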

 

I believe at this point that I may have chased all the under replicated blocks into the trash folder. Cloudera Manager indicates that I still have such blocks in HDFS (from the HDFS Health Test):

 

"1,712 under replicated blocks in the cluster. 2,678 total blocks in the cluster. Percentage under replicated blocks: 63.93%. Critical threshold: 40.00%."

 

So...I'm nobody's sys admin...and I don't know where to go from here. Can anyone give me a hint about how to resolve this permissions issue? A secondary question: is there a way I can empty the accumulo trash folder short of executing a file system command that would hit the aforementioned permission problem?

 

 

Environment: CDH5.3 with Accumulo installed using Cloudera Manager. Small cluster with 4 data nodes.
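(For readers with the same question about emptying the trash: two possible approaches, sketched below under the assumption that you have root on the node. `-skipTrash` deletes permanently, so use with care.)

```shell
# Option 1: bypass the mode bits entirely by acting as the HDFS
# superuser and deleting the trash directory outright:
[root@fatman ~]# sudo -u hdfs hdfs dfs -rm -r -skipTrash /user/accumulo/.Trash

# Option 2: expunge the trash as its owning user, which checkpoints
# and removes trash contents older than the configured interval:
[root@fatman ~]# sudo -u accumulo hadoop fs -expunge
```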

1 ACCEPTED SOLUTION

New Contributor

I figured out how to get around the permissions problem: simply 'su' to the hdfs user, which is the superuser for HDFS.

 

[root@fatman ~]# su - hdfs
-bash-4.1$ hdfs fsck /
 

That works.

 

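(A follow-on sketch for the under-replicated-blocks symptom itself: on a 4-datanode cluster, blocks written with a replication factor higher than the number of datanodes can never be fully replicated. Lowering the replication factor recursively, as the hdfs superuser, is one way to clear the warning. The target of 3 is a hypothetical value; check your dfs.replication setting first.)

```shell
# Recursively set replication to 3 and wait (-w) for it to complete;
# run as the HDFS superuser so permissions are not an issue:
[root@fatman ~]# sudo -u hdfs hdfs dfs -setrep -R -w 3 /
```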
