
Why am I getting "The short-circuit local reads feature cannot be used because libhadoop cannot be loaded"?


I have installed Spark on CentOS 6.

Installed: Java 1.8, spark-2.1.0-bin-hadoop2.7, Scala 2.12

The HADOOP_CONF_DIR environment variable is set to point at the Hadoop configuration directory.
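For reference, this is roughly how I set it (the path is hypothetical; on my machine it points at wherever the config files actually live):

```bash
# Hypothetical config path; replace with the directory that holds
# core-site.xml and hdfs-site.xml on this machine.
export HADOOP_CONF_DIR=/etc/hadoop/conf
```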

The Hadoop configuration directory contains hdfs-site.xml and core-site.xml.
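As far as I understand, when submitting to YARN, Spark also reads yarn-site.xml from this same directory, so I checked what is actually there (path assumed as above):

```bash
# List the config files Spark should pick up from HADOOP_CONF_DIR.
ls "$HADOOP_CONF_DIR"/core-site.xml \
   "$HADOOP_CONF_DIR"/hdfs-site.xml \
   "$HADOOP_CONF_DIR"/yarn-site.xml
```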

While executing, I get the warning below, and I am not able to write to HDFS:

17/03/27 03:48:18 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/03/27 03:48:18 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.4.124.113:4040
17/03/27 03:48:18 INFO SparkContext: Added JAR file:/storm/Teja/spark/target/uber-spark_kafka-0.0.1-SNAPSHOT.jar at spark://10.4.124.113:50101/jars/uber-spark_kafka-0.0.1-SNAPSHOT.jar with timestamp 1490600898913
17/03/27 03:48:20 WARN DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
17/03/27 03:48:20 INFO RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
17/03/27 03:48:21 INFO Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
17/03/27 03:48:22 INFO Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
17/03/27 03:48:23 INFO Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime
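From what I can tell, the warning means the native libhadoop.so is not on the JVM's library path (short-circuit local reads then silently fall back to the normal read path), and the retries against 0.0.0.0:8032 suggest the ResourceManager address is falling back to its default, i.e. yarn-site.xml is not being picked up. This is what I tried in order to verify; the native-library path is hypothetical and refers to my local Hadoop install:

```bash
# Report which native libraries Hadoop can load (libhadoop, compression codecs, ...).
hadoop checknative -a

# Hypothetical native-lib location; put it on the library path before submitting
# (alternatively, pass it via spark-submit --driver-library-path).
export LD_LIBRARY_PATH=/opt/hadoop/lib/native:$LD_LIBRARY_PATH

# yarn.resourcemanager.address should be set in yarn-site.xml (e.g. <rm-host>:8032);
# if it is absent or the file is not found, clients fall back to 0.0.0.0:8032,
# which matches the retry loop in the log above.
grep -A1 yarn.resourcemanager "$HADOOP_CONF_DIR"/yarn-site.xml
```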
