<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: i can't read file from hdfs using pyspark (ambari-server) in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/i-can-t-read-file-from-hdfs-using-pyspark-ambari-server/m-p/292242#M215984</link>
    <description>&lt;P&gt;The error suggests the DFSClient is unable to read the blocks because of a connection failure: either the DataNode ports are blocked, or the DataNode is unreachable from the node.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;From the node on which you are running the code snippet (or the node on which the executor ran), try reading the file with the hdfs CLI in debug mode; the debug output can give further clues about which node/service the client was trying to reach before the connect timeout.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;export HADOOP_ROOT_LOGGER=DEBUG,console
hdfs dfs -cat hdfs://ec2-18-234-71-106.compute-1.amazonaws.com:8020/dataset/Tech.csv&lt;/LI-CODE&gt;
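&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As a quick first check (a sketch only; the addresses below are taken from the URI and from the log in the question, not verified here), you can also test from the same node whether the NameNode RPC port and the DataNode data-transfer port are reachable at all:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;# Sketch: basic reachability checks from the client node.
# 8020 is the NameNode RPC port from the hdfs:// URI; 172.31.45.122:50010 is the
# DataNode address the DFSClient reported as unreachable in the log above.
nc -vz -w 5 ec2-18-234-71-106.compute-1.amazonaws.com 8020
nc -vz -w 5 172.31.45.122 50010&lt;/LI-CODE&gt;</description>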
    <pubDate>Sun, 22 Mar 2020 13:32:13 GMT</pubDate>
    <dc:creator>venkatsambath</dc:creator>
    <dc:date>2020-03-22T13:32:13Z</dc:date>
    <item>
      <title>i can't read file from hdfs using pyspark (ambari-server)</title>
      <link>https://community.cloudera.com/t5/Support-Questions/i-can-t-read-file-from-hdfs-using-pyspark-ambari-server/m-p/292235#M215978</link>
      <description>&lt;P&gt;Using pyspark.sql:&lt;/P&gt;
&lt;PRE&gt;from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("aaa").master("local") \
    .getOrCreate()
df = spark.read.csv("hdfs://ec2-18-234-71-106.compute-1.amazonaws.com:8020/dataset/Tech.csv",
                    header=True, inferSchema=True)
df.show()&lt;/PRE&gt;
&lt;LI-SPOILER&gt;
&lt;P&gt;20/03/22 11:08:23 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable&lt;BR /&gt;Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties&lt;BR /&gt;Setting default log level to "WARN".&lt;BR /&gt;To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).&lt;BR /&gt;[Stage 0:&amp;gt; (0 + 1) / 1]20/03/22 11:08:57 WARN BlockReaderFactory: I/O error constructing remote block reader.&lt;BR /&gt;java.net.ConnectException: Connection timed out: no further information&lt;BR /&gt;at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)&lt;BR /&gt;at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)&lt;BR /&gt;at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3436)&lt;BR /&gt;at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:777)&lt;BR /&gt;at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:694)&lt;BR /&gt;at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:355)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:673)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)&lt;BR /&gt;at java.io.DataInputStream.read(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.mapreduce.lib.input.UncompressedSplitLineReader.fillBuffer(UncompressedSplitLineReader.java:62)&lt;BR /&gt;at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)&lt;BR /&gt;at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)&lt;BR /&gt;at org.apache.hadoop.mapreduce.lib.input.UncompressedSplitLineReader.readLine(UncompressedSplitLineReader.java:94)&lt;BR /&gt;at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:144)&lt;BR /&gt;at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:184)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.HadoopFileLinesReader.hasNext(HadoopFileLinesReader.scala:69)&lt;BR /&gt;at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)&lt;BR /&gt;at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:181)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)&lt;BR /&gt;at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)&lt;BR /&gt;at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)&lt;BR /&gt;at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)&lt;BR /&gt;at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:255)&lt;BR /&gt;at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)&lt;BR /&gt;at 
org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:858)&lt;BR /&gt;at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:858)&lt;BR /&gt;at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)&lt;BR /&gt;at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)&lt;BR /&gt;at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)&lt;BR /&gt;at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)&lt;BR /&gt;at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)&lt;BR /&gt;at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)&lt;BR /&gt;at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)&lt;BR /&gt;at org.apache.spark.scheduler.Task.run(Task.scala:123)&lt;BR /&gt;at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)&lt;BR /&gt;at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)&lt;BR /&gt;at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)&lt;BR /&gt;at java.lang.Thread.run(Unknown Source)&lt;BR /&gt;20/03/22 11:08:57 WARN DFSClient: Failed to connect to /172.31.45.122:50010 for block, add to deadNodes and continue. java.net.ConnectException: Connection timed out: no further information&lt;BR /&gt;java.net.ConnectException: Connection timed out: no further information&lt;BR /&gt;at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)&lt;BR /&gt;at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)&lt;BR /&gt;at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3436)&lt;BR /&gt;at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:777)&lt;BR /&gt;at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:694)&lt;BR /&gt;at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:355)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:673)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)&lt;BR /&gt;at java.io.DataInputStream.read(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.mapreduce.lib.input.UncompressedSplitLineReader.fillBuffer(UncompressedSplitLineReader.java:62)&lt;BR /&gt;at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)&lt;BR /&gt;at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)&lt;BR /&gt;at org.apache.hadoop.mapreduce.lib.input.UncompressedSplitLineReader.readLine(UncompressedSplitLineReader.java:94)&lt;BR /&gt;at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:144)&lt;BR /&gt;at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:184)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.HadoopFileLinesReader.hasNext(HadoopFileLinesReader.scala:69)&lt;BR /&gt;at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)&lt;BR 
/&gt;at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:181)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)&lt;BR /&gt;at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)&lt;BR /&gt;at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)&lt;BR /&gt;at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)&lt;BR /&gt;at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:255)&lt;BR /&gt;at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)&lt;BR /&gt;at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:858)&lt;BR /&gt;at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:858)&lt;BR /&gt;at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)&lt;BR /&gt;at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)&lt;BR /&gt;at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)&lt;BR /&gt;at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)&lt;BR /&gt;at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)&lt;BR /&gt;at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)&lt;BR /&gt;at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)&lt;BR /&gt;at org.apache.spark.scheduler.Task.run(Task.scala:123)&lt;BR /&gt;at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)&lt;BR /&gt;at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)&lt;BR /&gt;at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)&lt;BR /&gt;at java.lang.Thread.run(Unknown Source)&lt;BR /&gt;20/03/22 11:08:57 WARN DFSClient: DFS chooseDataNode: got # 1 IOException, will wait for 2593.799588666132 msec.&lt;BR /&gt;20/03/22 11:09:21 WARN BlockReaderFactory: I/O error constructing remote block reader.&lt;BR /&gt;java.net.ConnectException: Connection timed out: no further information&lt;BR /&gt;at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)&lt;BR /&gt;at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)&lt;BR /&gt;at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3436)&lt;BR /&gt;at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:777)&lt;BR /&gt;at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:694)&lt;BR /&gt;at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:355)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:673)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)&lt;BR /&gt;at 
java.io.DataInputStream.read(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.mapreduce.lib.input.UncompressedSplitLineReader.fillBuffer(UncompressedSplitLineReader.java:62)&lt;BR /&gt;at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)&lt;BR /&gt;at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)&lt;BR /&gt;at org.apache.hadoop.mapreduce.lib.input.UncompressedSplitLineReader.readLine(UncompressedSplitLineReader.java:94)&lt;BR /&gt;at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:144)&lt;BR /&gt;at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:184)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.HadoopFileLinesReader.hasNext(HadoopFileLinesReader.scala:69)&lt;BR /&gt;at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)&lt;BR /&gt;at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:181)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)&lt;BR /&gt;at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)&lt;BR /&gt;at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)&lt;BR /&gt;at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)&lt;BR /&gt;at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:255)&lt;BR /&gt;at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)&lt;BR /&gt;at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:858)&lt;BR /&gt;at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:858)&lt;BR /&gt;at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)&lt;BR /&gt;at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)&lt;BR /&gt;at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)&lt;BR /&gt;at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)&lt;BR /&gt;at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)&lt;BR /&gt;at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)&lt;BR /&gt;at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)&lt;BR /&gt;at org.apache.spark.scheduler.Task.run(Task.scala:123)&lt;BR /&gt;at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)&lt;BR /&gt;at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)&lt;BR /&gt;at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)&lt;BR /&gt;at java.lang.Thread.run(Unknown Source)&lt;BR /&gt;20/03/22 11:09:21 WARN DFSClient: Failed to connect to /172.31.45.122:50010 for block, add to deadNodes and continue. 
java.net.ConnectException: Connection timed out: no further information&lt;BR /&gt;java.net.ConnectException: Connection timed out: no further information&lt;BR /&gt;at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)&lt;BR /&gt;at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)&lt;BR /&gt;at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3436)&lt;BR /&gt;at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:777)&lt;BR /&gt;at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:694)&lt;BR /&gt;at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:355)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:673)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)&lt;BR /&gt;at java.io.DataInputStream.read(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.mapreduce.lib.input.UncompressedSplitLineReader.fillBuffer(UncompressedSplitLineReader.java:62)&lt;BR /&gt;at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)&lt;BR /&gt;at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)&lt;BR /&gt;at org.apache.hadoop.mapreduce.lib.input.UncompressedSplitLineReader.readLine(UncompressedSplitLineReader.java:94)&lt;BR /&gt;at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:144)&lt;BR /&gt;at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:184)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.HadoopFileLinesReader.hasNext(HadoopFileLinesReader.scala:69)&lt;BR /&gt;at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)&lt;BR /&gt;at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:181)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)&lt;BR /&gt;at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)&lt;BR /&gt;at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)&lt;BR /&gt;at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)&lt;BR /&gt;at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:255)&lt;BR /&gt;at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)&lt;BR /&gt;at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:858)&lt;BR /&gt;at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:858)&lt;BR /&gt;at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)&lt;BR /&gt;at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)&lt;BR /&gt;at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)&lt;BR /&gt;at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)&lt;BR /&gt;at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)&lt;BR /&gt;at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)&lt;BR /&gt;at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)&lt;BR /&gt;at org.apache.spark.scheduler.Task.run(Task.scala:123)&lt;BR /&gt;at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)&lt;BR /&gt;at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)&lt;BR /&gt;at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)&lt;BR /&gt;at java.lang.Thread.run(Unknown Source)&lt;BR /&gt;20/03/22 11:09:21 WARN DFSClient: DFS chooseDataNode: got # 2 IOException, will wait for 4867.242053742068 msec.&lt;BR /&gt;[Stage 0:&amp;gt; (0 + 1) / 1]20/03/22 11:09:48 WARN BlockReaderFactory: I/O error constructing remote block reader.&lt;BR /&gt;java.net.ConnectException: Connection timed out: no further information&lt;BR /&gt;at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)&lt;BR /&gt;at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)&lt;BR /&gt;at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3436)&lt;BR /&gt;at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:777)&lt;BR /&gt;at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:694)&lt;BR /&gt;at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:355)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:673)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)&lt;BR /&gt;at java.io.DataInputStream.read(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.mapreduce.lib.input.UncompressedSplitLineReader.fillBuffer(UncompressedSplitLineReader.java:62)&lt;BR /&gt;at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)&lt;BR /&gt;at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)&lt;BR /&gt;at org.apache.hadoop.mapreduce.lib.input.UncompressedSplitLineReader.readLine(UncompressedSplitLineReader.java:94)&lt;BR /&gt;at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:144)&lt;BR /&gt;at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:184)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.HadoopFileLinesReader.hasNext(HadoopFileLinesReader.scala:69)&lt;BR /&gt;at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)&lt;BR /&gt;at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)&lt;BR /&gt;at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:181)&lt;BR /&gt;at 
org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)&lt;BR /&gt;at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)&lt;BR /&gt;at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)&lt;BR /&gt;at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)&lt;BR /&gt;at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:255)&lt;BR /&gt;at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)&lt;BR /&gt;at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:858)&lt;BR /&gt;at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:858)&lt;BR /&gt;at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)&lt;BR /&gt;at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)&lt;BR /&gt;at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)&lt;BR /&gt;at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)&lt;BR /&gt;at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)&lt;BR /&gt;at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)&lt;BR /&gt;at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)&lt;BR /&gt;at org.apache.spark.scheduler.Task.run(Task.scala:123)&lt;BR /&gt;at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)&lt;BR /&gt;at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)&lt;BR /&gt;at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)&lt;BR /&gt;at java.lang.Thread.run(Unknown Source)&lt;/P&gt;
&lt;/LI-SPOILER&gt;</description>
      <pubDate>Sun, 22 Mar 2020 15:10:26 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/i-can-t-read-file-from-hdfs-using-pyspark-ambari-server/m-p/292235#M215978</guid>
      <dc:creator>ParthiCyberPunk</dc:creator>
      <dc:date>2020-03-22T15:10:26Z</dc:date>
    </item>
    <item>
      <title>Re: i can't read file from hdfs using pyspark (ambari-server)</title>
      <link>https://community.cloudera.com/t5/Support-Questions/i-can-t-read-file-from-hdfs-using-pyspark-ambari-server/m-p/292242#M215984</link>
      <description>&lt;P&gt;The error suggests the DFSClient is unable to read the blocks because of a connection failure: either the DataNode ports are blocked, or the DataNode is unreachable from the node.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;From the node on which you are running the code snippet (or the node on which the executor ran), try reading the file with the hdfs CLI in debug mode; the debug output can give further clues about which node/service the client was trying to reach before the connect timeout.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;export HADOOP_ROOT_LOGGER=DEBUG,console
hdfs dfs -cat hdfs://ec2-18-234-71-106.compute-1.amazonaws.com:8020/dataset/Tech.csv&lt;/LI-CODE&gt;
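&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As a quick first check (a sketch only; the addresses below are taken from the URI and from the log in the question, not verified here), you can also test from the same node whether the NameNode RPC port and the DataNode data-transfer port are reachable at all:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;# Sketch: basic reachability checks from the client node.
# 8020 is the NameNode RPC port from the hdfs:// URI; 172.31.45.122:50010 is the
# DataNode address the DFSClient reported as unreachable in the log above.
nc -vz -w 5 ec2-18-234-71-106.compute-1.amazonaws.com 8020
nc -vz -w 5 172.31.45.122 50010&lt;/LI-CODE&gt;</description>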
      <pubDate>Sun, 22 Mar 2020 13:32:13 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/i-can-t-read-file-from-hdfs-using-pyspark-ambari-server/m-p/292242#M215984</guid>
      <dc:creator>venkatsambath</dc:creator>
      <dc:date>2020-03-22T13:32:13Z</dc:date>
    </item>
    <item>
      <title>Re: i can't read file from hdfs using pyspark (ambari-server)</title>
      <link>https://community.cloudera.com/t5/Support-Questions/i-can-t-read-file-from-hdfs-using-pyspark-ambari-server/m-p/292506#M216147</link>
      <description>&lt;P&gt;Thanks venkat, now it's working...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 25 Mar 2020 11:18:41 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/i-can-t-read-file-from-hdfs-using-pyspark-ambari-server/m-p/292506#M216147</guid>
      <dc:creator>ParthiCyberPunk</dc:creator>
      <dc:date>2020-03-25T11:18:41Z</dc:date>
    </item>
  </channel>
</rss>