Support Questions


Accumulo permission denied

Explorer

I have installed Cloudera 5.x, logged in as the "root" user, and am trying to execute the command below:

 

$ accumulo $PKG.GenerateTestData --start-row 0 --count 1000 --output bulk/test_1.txt
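(For reference, $PKG here expands to the example package, matching the class name in the error below:)

$ export PKG=org.apache.accumulo.examples.simple.mapreduce.bulk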

 

I'm getting the error below.

Thread "org.apache.accumulo.examples.simple.mapreduce.bulk.GenerateTestData" died Permission denied: user=root, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x

 

What's the default password for the accumulo user to run these scripts?


5 REPLIES

This is because the Linux root user has no special privileges within HDFS: as your error shows, /user is owned by hdfs:supergroup and is not writable by other users. Try running the command prefixed with "sudo -u hdfs", which runs it as the hdfs user.
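For example, applied to the command you posted (same arguments, just run as the hdfs user):

$ sudo -u hdfs accumulo $PKG.GenerateTestData --start-row 0 --count 1000 --output bulk/test_1.txt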

Regards,
Gautam Gopalakrishnan

Explorer
Hi,
What is the default password for the accumulo user and the hdfs user?

Expert Contributor (accepted solution)

To make sure we have the same context: I think you're working through the bulk ingest overview example. Please correct me if that's wrong.

 

Before running any of the Accumulo examples, you need to do some user setup. None of them should be run as root, nor as any of the service principals (accumulo, hdfs, etc.).

 

  1. The user that will run the data generation needs to be able to run MapReduce jobs. See the full docs for instructions on provisioning such a user. In short, ensure they have a user account on all worker nodes and a home directory in HDFS (creating that home directory requires acting as the hdfs superuser).
  2. The user you created above will be used for the data generation step. If you are running on a secure cluster, you will need to authenticate with Kerberos (kinit) before submitting the job; otherwise the generation step only requires an initial local login.
  3. The data loading step requires an Accumulo user, which you should create via the Accumulo shell. Be sure to replace the instance name, ZooKeeper servers, and user/password given in the ARGS line with ones appropriate for your cluster. This loading should not be done as the Accumulo root user. A sketch of all three steps follows this list.
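A minimal sketch of those steps, assuming a hypothetical local user "bulkuser" and placeholder instance/ZooKeeper values that you must adjust for your cluster:

# 1. On each worker node, create the local account (as root),
#    then give it a home directory in HDFS (as the hdfs superuser)
$ sudo useradd bulkuser
$ sudo -u hdfs hdfs dfs -mkdir /user/bulkuser
$ sudo -u hdfs hdfs dfs -chown bulkuser:bulkuser /user/bulkuser

# 2. On a secure (kerberized) cluster only: obtain credentials first
$ kinit bulkuser

# 3. Create the Accumulo user for the loading step via the Accumulo shell,
#    logged in as the Accumulo root user; createuser will prompt for a password
$ accumulo shell -u root
root@myinstance> createuser bulkuser
root@myinstance> grant System.CREATE_TABLE -s -u bulkuser

The ARGS line for the examples would then look something like:

$ export ARGS="-i myinstance -z zkhost1:2181,zkhost2:2181 -u bulkuser -p bulkuserpassword"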

Let me know if you have any further problems.

Explorer

I got that permission error resolved. Thank you for that.

Now I am trying to execute the command below.

 

 

./bin/tool.sh ./lib/accumulo-examples-simple.jar $PKG.BulkIngestExample $ARGS -t test_sample --inputDir /tmp/bulk -workDir /tmp/bulkWork

 

Here is the error

=============

Accumulo is not properly configured.

Try running $ACCUMULO_HOME/bin/bootstrap_config.sh and then editing
$ACCUMULO_HOME/conf/accumulo-env.sh

 

After that, I tried running "$ACCUMULO_HOME/bin/bootstrap_config.sh" in my shell.

 

It asks me several questions:

 

Choose the heap configuration:

1) 1GB
2) 2GB
3) 3GB
4) 512MB
#?

 

Choose the Accumulo memory-map type:
1) Java
2) Native
#?

 

Choose the Apache Hadoop version:
1) HADOOP 1
2) HADOOP 2

 

Please help me with this.

Expert Contributor

Hi!

 

I'd be happy to help you with this new problem. To make things easier for future users, how about we mark my answer as the solution for the original thread topic and start a new thread for this issue?
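In the meantime, a rough pointer on those bootstrap prompts, assuming a CDH 5 cluster (please verify against your own hosts): CDH 5 is based on Apache Hadoop 2, the Java memory-map type works everywhere (Native requires the native library to have been built), and the heap size should fit the memory available on the host. An interactive run might then look like:

Choose the heap configuration:
#? 2          (2GB, if the host has several GB of RAM to spare)
Choose the Accumulo memory-map type:
#? 1          (Java, unless you have built the native library)
Choose the Apache Hadoop version:
#? 2          (HADOOP 2, since CDH 5 ships Hadoop 2)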