New Contributor
Posts: 1
Registered: ‎04-12-2015

CDH5 VirtualBox VM - Tutorial exercise 1 not working

I have just downloaded and started up the CDH5 VirtualBox VM.

 

Firstly, I opened a terminal window in the VM (cloudera@quickstart.cloudera) and then SSH'd to root@quickstart.cloudera. I then ran the sqoop command and it failed (output below). Can anyone give me some guidance as to what I am doing wrong?

 

[root@quickstart ~]# sqoop import-all-tables \
>   -m 1 \
>   --connect jdbc:mysql://quickstart.cloudera:3306/retail_db \
>   --username=retail_dba \
>   --password=cloudera \
>   --compression-codec=snappy \
>   --as-avrodatafile \
>   --warehouse-dir=/user/hive/warehouse
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
15/04/12 12:27:48 INFO sqoop.Sqoop: Running Sqoop version: 1.4.5-cdh5.3.0
15/04/12 12:27:50 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
15/04/12 12:27:57 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
15/04/12 12:28:04 INFO tool.CodeGenTool: Beginning code generation
15/04/12 12:28:04 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `categories` AS t LIMIT 1
15/04/12 12:28:05 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `categories` AS t LIMIT 1
15/04/12 12:28:05 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-root/compile/3e4e6c05ecd49aab9954d4a82f5a9fe3/categories.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/04/12 12:28:24 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/3e4e6c05ecd49aab9954d4a82f5a9fe3/categories.jar
15/04/12 12:28:26 WARN manager.MySQLManager: It looks like you are importing from mysql.
15/04/12 12:28:26 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
15/04/12 12:28:26 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
15/04/12 12:28:26 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
15/04/12 12:28:27 INFO mapreduce.ImportJobBase: Beginning import of categories
15/04/12 12:28:38 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
15/04/12 12:29:22 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `categories` AS t LIMIT 1
15/04/12 12:29:27 INFO mapreduce.DataDrivenImportJob: Writing Avro schema file: /tmp/sqoop-root/compile/3e4e6c05ecd49aab9954d4a82f5a9fe3/sqoop_import_categories.avsc
15/04/12 12:29:28 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/04/12 12:29:33 INFO client.RMProxy: Connecting to ResourceManager at quickstart.cloudera/127.0.0.1:8032
15/04/12 12:29:39 WARN security.UserGroupInformation: PriviledgedActionException as:root (auth:SIMPLE) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /user/root/.staging. Name node is in safe mode.

The reported blocks 282 needs additional 2 blocks to reach the threshold 0.9990 of total blocks 284.
The number of live datanodes 1 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1335)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4055)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4030)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:787)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:297)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:594)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

15/04/12 12:29:39 ERROR tool.ImportAllTablesTool: Encountered IOException running import job: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /user/root/.staging. Name node is in safe mode.
The reported blocks 282 needs additional 2 blocks to reach the threshold 0.9990 of total blocks 284.
The number of live datanodes 1 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1335)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4055)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4030)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:787)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:297)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:594)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

 

Cloudera Employee
Posts: 8
Registered: ‎04-12-2015

Re: CDH5 VirtualBox VM - Tutorial exercise 1 not working

Hi madalex,

 

Could you try running the sqoop job as the Linux user 'cloudera' instead of as 'root'? I see that the tutorials themselves show a root shell prompt, but when I run the job as cloudera it seems to work just fine. Part of the issue is that the path /user/root/* does not exist in HDFS.

 

The alternative would be to create root's user directory in HDFS, but I think the rest of the exercises will work best as the cloudera user rather than as root.

 

To create root's user directory, you could run this as 'cloudera' with sudo:

 

[cloudera@quickstart ~]$ sudo hdfs dfs -mkdir /user/root

Again, you might run into permissions issues later on in the tutorial running as root.
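 

If you do want to keep working as root, a slightly fuller sketch (assuming the quickstart VM's default 'hdfs' superuser account) would be to create the directory as the hdfs user and then hand ownership to root, so later jobs run as root can write their staging files there:

[cloudera@quickstart ~]$ sudo -u hdfs hdfs dfs -mkdir -p /user/root
[cloudera@quickstart ~]$ sudo -u hdfs hdfs dfs -chown root:root /user/root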

 

Hope this helps,
Brian

New Contributor
Posts: 2
Registered: ‎07-19-2016

Re: CDH5 VirtualBox VM - Tutorial exercise 1 not working

Hey Brian,

 

   Could you please tell me how to change the user from 'root' to 'cloudera'? I have the same problem but do not know how to do this. It might seem trivial to you, though.

 

   Thanks!

 

 

Christina 

Cloudera Employee
Posts: 8
Registered: ‎04-12-2015

Re: CDH5 VirtualBox VM - Tutorial exercise 1 not working

Hi Christina,

 

From a terminal/shell prompt you can change users as follows:

 

Change from "cloudera" to "root"

 

[cloudera@quickstart ~]$ sudo su - root

Change from "root" to "cloudera"

 

[root@quickstart ~]# su - cloudera
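
If you're ever unsure which user a shell is running as (the prompt usually shows it, but not always), you can check with:

[cloudera@quickstart ~]$ whoami
cloudera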

I hope this helps,
Brian

New Contributor
Posts: 2
Registered: ‎07-19-2016

Re: CDH5 VirtualBox VM - Tutorial exercise 1 not working

Hi Brian,

 

   Thank you for your help!!!

 

Christina

   

New Contributor
Posts: 1
Registered: ‎07-20-2016

Re: CDH5 VirtualBox VM - Tutorial exercise 1 not working

You could also try the command below, which forces the NameNode out of safe mode:

sudo -u hdfs hdfs dfsadmin -safemode leave
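
Before forcing it off, it can be worth checking whether the NameNode is actually still in safe mode:

sudo -u hdfs hdfs dfsadmin -safemode get

Safe mode normally clears itself once enough blocks have been reported, so if 'get' already shows it is OFF, the original sqoop command should work without any further changes.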