
I try to run commands in the terminal but get a connection refused error

Explorer

Hi,

I start my QuickStart VM and enter the following in the terminal:

 

sqoop import-all-tables \
-m 1 \
--connect jdbc:mysql://quickstart.cloudera:3306/retail_db \
--username=retail_dba \
--password=cloudera \
--compression-codec=snappy \
--as-avrodatafile \
--warehouse-dir=/user/hive/warehouse

 

but get the following error:

 

failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Streaming Command Failed!
Error in mr(map = map, reduce = reduce, combine = combine, in.folder = if (is.list(input)) { :
hadoop streaming failed with error code 5

 

Any help is appreciated. I am just a beginner, so I don't know much yet. Thanks.

1 ACCEPTED SOLUTION

Guru

I believe this procedure should get you switched over from YARN / MR2 to MR1. After running it I was able to compute pi using MR1:

# Stop and disable the YARN / MR2 services
for service in mapreduce-historyserver yarn-nodemanager yarn-proxyserver yarn-resourcemanager; do
    sudo service hadoop-${service} stop
    sudo chkconfig hadoop-${service} off
done

# Swap the pseudo-distributed configuration package from MR2 to MR1
sudo yum remove -y hadoop-conf-pseudo
sudo yum install -y hadoop-0.20-conf-pseudo

# Start and enable the MR1 JobTracker and TaskTracker
for service in 0.20-mapreduce-jobtracker 0.20-mapreduce-tasktracker; do
    sudo service hadoop-${service} start
    sudo chkconfig hadoop-${service} on
done

It stops and disables the MR2 / YARN services, swaps the configuration files, then starts and enables the MR1 services. Again, the tutorial is not written to be used (or tested) with MR1, so it's possible you'll run into some other issues. I can't think of any specific incompatibilities - just recommending that if you want to walk through the tutorial, you do it with an environment as close to the original VM as possible - otherwise who knows what differences may be involved.
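If you want to run the same sanity check, the pi estimator is in the MR1 examples jar - a minimal sketch, assuming the stock QuickStart paths (the exact jar name can differ between CDH releases):

# Estimate pi with 10 map tasks, 100 samples each; a successful run
# confirms the MR1 JobTracker is accepting and running jobs
hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar pi 10 100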


13 REPLIES

Guru

I'm afraid I'm not very familiar with R or with running it against Hadoop. My first thought is that perhaps the program that creates the files and the program that looks for them are running as different users. /user/cloudera is the default working directory for the cloudera user, but other users default to other directories: e.g., if 'root' asks for a file called '0' without an absolute path, that means /user/root/0. Is it possible these files exist under a different user's home directory?
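If you want to check, here's a minimal sketch (assuming the default QuickStart users) - the same relative path resolves to a different home directory depending on who runs the command:

# A relative HDFS path resolves against /user/<current user>
hdfs dfs -ls 0                 # as cloudera, this means /user/cloudera/0
sudo -u root hdfs dfs -ls 0    # as root, the same command means /user/root/0

# List both home directories explicitly to see where the files landed
hdfs dfs -ls /user/cloudera /user/root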

New Contributor

Faced the same issue and, surprisingly, it worked when I prefixed the sqoop command with sudo - I don't understand why, as the cloudera user should have the same privileges.

 

sqoop import-all-tables \
> -m 1 \
> --connect jdbc:mysql://quickstart:3306/retail_db \
> --username=retail_dba \
> --password=cloudera \
> --compression-codec=snappy \
> --as-avrodatafile \
> --warehouse-dir=/user/hive/warehouse
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
/usr/lib/hadoop-0.20-mapreduce/hadoop-core-2.6.0-mr1-cdh5.4.0.jar
/usr/lib/hadoop-0.20-mapreduce/hadoop-core-mr1.jar
15/09/21 20:32:17 INFO sqoop.Sqoop: Running Sqoop version: 1.4.5-cdh5.4.0
15/09/21 20:32:17 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
15/09/21 20:32:18 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
15/09/21 20:32:19 INFO tool.CodeGenTool: Beginning code generation
15/09/21 20:32:19 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `categories` AS t LIMIT 1
15/09/21 20:32:20 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `categories` AS t LIMIT 1
15/09/21 20:32:20 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-0.20-mapreduce
Note: /tmp/sqoop-cloudera/compile/554ca9a57edc7fb8771c0729223df56c/categories.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/09/21 20:32:24 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/554ca9a57edc7fb8771c0729223df56c/categories.jar
15/09/21 20:32:24 WARN manager.MySQLManager: It looks like you are importing from mysql.
15/09/21 20:32:24 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
15/09/21 20:32:24 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
15/09/21 20:32:24 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
15/09/21 20:32:24 INFO mapreduce.ImportJobBase: Beginning import of categories
15/09/21 20:32:26 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `categories` AS t LIMIT 1
15/09/21 20:32:27 INFO mapreduce.DataDrivenImportJob: Writing Avro schema file: /tmp/sqoop-cloudera/compile/554ca9a57edc7fb8771c0729223df56c/sqoop_import_categories.avsc
15/09/21 20:32:29 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8021. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/09/21 20:32:30 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8021. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/09/21 20:32:31 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8021. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/09/21 20:32:32 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8021. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/09/21 20:32:33 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8021. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/09/21 20:32:34 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8021. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/09/21 20:32:35 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8021. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/09/21 20:32:36 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8021. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/09/21 20:32:37 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8021. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/09/21 20:32:38 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8021. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/09/21 20:32:38 WARN security.UserGroupInformation: PriviledgedActionException as:cloudera (auth:SIMPLE) cause:java.net.ConnectException: Call From quickstart.cloudera/127.0.0.1 to localhost:8021 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
15/09/21 20:32:38 ERROR tool.ImportAllTablesTool: Encountered IOException running import job: java.net.ConnectException: Call From quickstart.cloudera/127.0.0.1 to localhost:8021 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

 

[cloudera@quickstart ClouderaGettingStartedCode]$ sudo sqoop import-all-tables -m 1 --connect jdbc:mysql://quickstart:3306/retail_db --username=retail_dba --password=cloudera --compression-codec=snappy --as-avrodatafile --warehouse-dir=/user/hive/warehouse
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
15/09/21 20:36:25 INFO sqoop.Sqoop: Running Sqoop version: 1.4.5-cdh5.4.0
15/09/21 20:36:25 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
15/09/21 20:36:25 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
15/09/21 20:36:26 INFO tool.CodeGenTool: Beginning code generation
15/09/21 20:36:26 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `categories` AS t LIMIT 1
15/09/21 20:36:26 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `categories` AS t LIMIT 1
15/09/21 20:36:26 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-root/compile/d58fcf6562850c1a3a17a3fe48bfea6d/categories.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/09/21 20:36:31 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/d58fcf6562850c1a3a17a3fe48bfea6d/categories.jar
15/09/21 20:36:31 WARN manager.MySQLManager: It looks like you are importing from mysql.
15/09/21 20:36:31 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
15/09/21 20:36:31 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
15/09/21 20:36:31 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
15/09/21 20:36:31 INFO mapreduce.ImportJobBase: Beginning import of categories
15/09/21 20:36:31 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
15/09/21 20:36:32 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
15/09/21 20:36:34 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `categories` AS t LIMIT 1
15/09/21 20:36:34 INFO mapreduce.DataDrivenImportJob: Writing Avro schema file: /tmp/sqoop-root/compile/d58fcf6562850c1a3a17a3fe48bfea6d/sqoop_import_categories.avsc
15/09/21 20:36:34 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/09/21 20:36:34 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/09/21 20:36:45 INFO db.DBInputFormat: Using read commited transaction isolation
15/09/21 20:36:45 INFO mapreduce.JobSubmitter: number of splits:1
15/09/21 20:36:46 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1437690038831_0009
15/09/21 20:36:51 INFO impl.YarnClientImpl: Submitted application application_1437690038831_0009
15/09/21 20:36:52 INFO mapreduce.Job: The url to track the job: http://quickstart.cloudera:8088/proxy/application_1437690038831_0009/
15/09/21 20:36:52 INFO mapreduce.Job: Running job: job_1437690038831_0009
15/09/21 20:38:11 INFO mapreduce.Job: Job job_1437690038831_0009 running in uber mode : false
15/09/21 20:38:11 INFO mapreduce.Job: map 0% reduce 0%
15/09/21 20:39:11 INFO mapreduce.Job: map 100% reduce 0%
15/09/21 20:39:15 INFO mapreduce.Job: Job job_1437690038831_0009 completed successfully
15/09/21 20:39:16 INFO mapreduce.Job: Counters: 30
File System Counters
    FILE: Number of bytes read=0
    FILE: Number of bytes written=135070
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=87
    HDFS: Number of bytes written=1344
    HDFS: Number of read operations=4
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=2
Job Counters
    Launched map tasks=1
    Other local map tasks=1
    Total time spent by all maps in occupied slots (ms)=55012
    Total time spent by all reduces in occupied slots (ms)=0
    Total time spent by all map tasks (ms)=55012
    Total vcore-seconds taken by all map tasks=55012
    Total megabyte-seconds taken by all map tasks=56332288
Map-Reduce Framework
    Map input records=58
    Map output records=58
    Input split bytes=87
    Spilled Records=0
    Failed Shuffles=0
    Merged Map outputs=0
    GC time elapsed (ms)=404
    CPU time spent (ms)=2000
    Physical memory (bytes) snapshot=114446336
    Virtual memory (bytes) snapshot=1508089856
    Total committed heap usage (bytes)=60882944
File Input Format Counters
    Bytes Read=0
File Output Format Counters
    Bytes Written=1344
15/09/21 20:39:16 INFO mapreduce.ImportJobBase: Transferred 1.3125 KB in 161.5896 seconds (8.3174 bytes/sec)
15/09/21 20:39:16 INFO mapreduce.ImportJobBase: Retrieved 58 records.

 

And likewise all the tables were imported. Is there any permission I must grant the cloudera user as root?

Guru

I'm not sure why that's happening to you. The cloudera user should be set up pretty similarly to the root user - I can't imagine why one would try to use MR1 and the other would use YARN, unless there was an environment variable set in that terminal or something.
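If you want to compare the two invocations, a quick sketch (assuming the stock config under /etc/hadoop/conf) is to diff the Hadoop-related environment and check which framework the active configuration selects:

# Hadoop-related environment as seen normally vs. under sudo
env | grep -iE 'hadoop|mapred|yarn'
sudo env | grep -iE 'hadoop|mapred|yarn'

# Which framework the active configuration picks (yarn vs. classic)
grep -A1 'mapreduce.framework.name' /etc/hadoop/conf/mapred-site.xml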

New Contributor

I did not set any variables in the session.

This is what I have in .bash_profile:

 

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

export PATH=$PATH:$HOME/bin:$HADOOP_HOME/bin
export CLASSPATH=/usr/lib/hadoop/client-0.20/*:/usr/lib/hadoop/*
export AVRO_CLASSPATH=/usr/lib/avro

alias lart="ls -lart"
set -o vi
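For what it's worth, sudo normally resets the environment (env_reset in /etc/sudoers), so exports like these from .bash_profile won't carry into a sudo'd command. A quick sketch to see what each invocation actually inherits:

# What the current shell exports vs. what survives sudo's env_reset
echo "$CLASSPATH"
sudo sh -c 'echo "CLASSPATH under sudo: $CLASSPATH"'

# Which hadoop binary each invocation resolves
which hadoop
sudo which hadoop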