<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Input path does not exist: hdfs://node0:8020/user/hdfs/10000000000 in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Input-path-does-not-exist-hdfs-node0-8020-user-hdfs/m-p/25968#M1527</link>
    <description>&lt;P&gt;Not sure what I'm doing wrong here, but I keep getting the same error when I run terasort. Teragen works perfectly, but terasort fails.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Input path does not exist: hdfs://node0:8020/user/hdfs/10000000000&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Command line used:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;sudo -u hdfs hadoop jar /opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hadoop-0.20-mapreduce/hadoop-examples-2.5.0-mr1-cdh5.3.1.jar terasort 10000000000 /home/ssd/hdfs-input /home/ssd/hdfs-output&lt;/P&gt;</description>
    <pubDate>Fri, 16 Sep 2022 09:25:27 GMT</pubDate>
    <dc:creator>nauseous</dc:creator>
    <dc:date>2022-09-16T09:25:27Z</dc:date>
    <item>
      <title>Input path does not exist: hdfs://node0:8020/user/hdfs/10000000000</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Input-path-does-not-exist-hdfs-node0-8020-user-hdfs/m-p/25968#M1527</link>
      <description>&lt;P&gt;Not sure what I'm doing wrong here, but I keep getting the same error when I run terasort. Teragen works perfectly, but terasort fails.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Input path does not exist: hdfs://node0:8020/user/hdfs/10000000000&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Command line used:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;sudo -u hdfs hadoop jar /opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hadoop-0.20-mapreduce/hadoop-examples-2.5.0-mr1-cdh5.3.1.jar terasort 10000000000 /home/ssd/hdfs-input /home/ssd/hdfs-output&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 09:25:27 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Input-path-does-not-exist-hdfs-node0-8020-user-hdfs/m-p/25968#M1527</guid>
      <dc:creator>nauseous</dc:creator>
      <dc:date>2022-09-16T09:25:27Z</dc:date>
    </item>
    <item>
      <title>Re: Input path does not exist: hdfs://node0:8020/user/hdfs/10000000000</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Input-path-does-not-exist-hdfs-node0-8020-user-hdfs/m-p/25969#M1528</link>
      <description>&lt;P&gt;Was this cluster configured using Cloudera Director on AWS? Can you provide more details on your DNS and hostname configuration?&lt;/P&gt;</description>
      <pubDate>Fri, 27 Mar 2015 17:27:25 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Input-path-does-not-exist-hdfs-node0-8020-user-hdfs/m-p/25969#M1528</guid>
      <dc:creator>Andrei Savu</dc:creator>
      <dc:date>2015-03-27T17:27:25Z</dc:date>
    </item>
    <item>
      <title>Re: Input path does not exist: hdfs://node0:8020/user/hdfs/10000000000</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Input-path-does-not-exist-hdfs-node0-8020-user-hdfs/m-p/25970#M1529</link>
      <description>&lt;P&gt;Found the cause: a typo on my part. I hadn't removed the 1TB size argument that belongs to teragen, so terasort took it as the input path &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Working command:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;sudo -u hdfs hadoop jar /opt/cloudera/parcels/CDH-5.3.1-1.cdh5.3.1.p0.5/lib/hadoop-0.20-mapreduce/hadoop-examples-2.5.0-mr1-cdh5.3.1.jar terasort /home/ssd/hdfs-input /home/ssd/hdfs-output&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Works perfectly now.&lt;/P&gt;</description>
      <pubDate>Fri, 27 Mar 2015 17:35:27 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Input-path-does-not-exist-hdfs-node0-8020-user-hdfs/m-p/25970#M1529</guid>
      <dc:creator>nauseous</dc:creator>
      <dc:date>2015-03-27T17:35:27Z</dc:date>
    </item>
    <item>
      <title>Re: Input path does not exist: hdfs://node0:8020/user/hdfs/10000000000</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Input-path-does-not-exist-hdfs-node0-8020-user-hdfs/m-p/66625#M1530</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I'm new to both Cloudera and Spark.&lt;/P&gt;&lt;P&gt;I'm trying to run ALS on the MovieLens data using Spark, and I'm getting an error while loading the data:&lt;/P&gt;&lt;P&gt;&lt;FONT color="#FF0000"&gt;&lt;STRONG&gt;Py4JJavaError: An error occurred while calling o20.partitions.&lt;/STRONG&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT color="#FF0000"&gt;&lt;STRONG&gt;: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://quickstart.cloudera:8020/home/cloudera/Downloads/ml-100k/u.data&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Below is my code:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;import sys&lt;BR /&gt;import os&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;os.environ['SPARK_HOME'] = '/usr/lib/spark'&lt;BR /&gt;os.environ['PYSPARK_PYTHON'] = '/usr/local/bin/python2.7'&lt;BR /&gt;os.environ['PYSPARK_SUBMIT_ARGS'] = ('--packages com.databricks:spark-csv_2.10:1.3.0 pyspark-shell')&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;# SparkContext is available as sc and HiveContext is available as sqlContext.&lt;BR /&gt;sys.path.append('/usr/lib/spark/python')&lt;BR /&gt;sys.path.append('/usr/lib/spark/python/lib/py4j-0.9-src.zip')&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;from pyspark import SparkContext&lt;BR /&gt;from pyspark.sql import HiveContext&lt;BR /&gt;sc = SparkContext()&lt;BR /&gt;sqlContext = HiveContext(sc)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;import numpy&lt;BR /&gt;from pyspark.mllib.recommendation import ALS, MatrixFactorizationModel, Rating&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;# Load and parse the data&lt;BR /&gt;data = sc.textFile('/home/cloudera/Downloads/ml-100k/u.data')&lt;BR /&gt;ratings = data.map(lambda l: l.split('\t')).map(lambda l: Rating(int(l[0]), int(l[1]), float(l[2])))&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;# Build the recommendation model using Alternating Least Squares&lt;BR /&gt;rank = 10&lt;BR /&gt;numIterations = 10&lt;BR /&gt;&lt;FONT color="#FF0000"&gt;model = ALS.train(ratings, rank, numIterations, seed=10, nonnegative=True)  # failing at this point&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This part runs fine:&lt;/P&gt;&lt;P&gt;# r1 = Rating(1,2,3.0)&lt;BR /&gt;# r2 = Rating(1,1,4.0)&lt;BR /&gt;# r3 = Rating(2,1,1.0)&lt;BR /&gt;# ratings1 = sc.parallelize([r1,r2,r3])&lt;BR /&gt;# model = ALS.trainImplicit(ratings1, 1, seed=10)&lt;BR /&gt;# model.predict(2,2)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;# Evaluate the model on training data&lt;BR /&gt;testdata = data.map(lambda p: (p[0], p[1]))&lt;BR /&gt;predictions = model.predictAll(testdata).map(lambda r: ((r[0], r[1]), r[2]))&lt;BR /&gt;ratesAndPred = ratings.map(lambda r: ((r[0], r[1]), r[2])).join(predictions)&lt;BR /&gt;MSE = ratesAndPred.map(lambda r: (r[1][0] - r[1][1])**2).mean()&lt;BR /&gt;print("Mean Squared Error = " + str(MSE))&lt;/P&gt;&lt;P&gt;Error:&lt;BR /&gt;model = ALS.train(ratings, rank, numIterations, seed=10, nonnegative=True)&lt;BR /&gt;Traceback (most recent call last):&lt;BR /&gt;File "&amp;lt;input&amp;gt;", line 1, in &amp;lt;module&amp;gt;&lt;BR /&gt;File "/usr/local/lib/python2.7/site-packages/pyspark/mllib/recommendation.py", line 243, in train&lt;BR /&gt;model = callMLlibFunc("trainALSModel", cls._prepare(ratings), rank, iterations,&lt;BR /&gt;File "/usr/local/lib/python2.7/site-packages/pyspark/mllib/recommendation.py", line 223, in _prepare&lt;BR /&gt;first = ratings.first()&lt;BR /&gt;File "/usr/local/lib/python2.7/site-packages/pyspark/rdd.py", line 1315, in first&lt;BR /&gt;rs = self.take(1)&lt;BR /&gt;File "/usr/local/lib/python2.7/site-packages/pyspark/rdd.py", line 1267, in take&lt;BR /&gt;totalParts = self.getNumPartitions()&lt;BR /&gt;File "/usr/local/lib/python2.7/site-packages/pyspark/rdd.py", line 2363, in getNumPartitions&lt;BR /&gt;return self._prev_jrdd.partitions().size()&lt;BR /&gt;File "/usr/lib/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__&lt;BR /&gt;answer, self.gateway_client, self.target_id, self.name)&lt;BR /&gt;File "/usr/local/lib/python2.7/site-packages/pyspark/sql/utils.py", line 45, in deco&lt;BR /&gt;return f(*a, **kw)&lt;BR /&gt;File "/usr/lib/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value&lt;BR /&gt;format(target_id, ".", name), value)&lt;BR /&gt;&lt;FONT color="#FF0000"&gt;&lt;STRONG&gt;Py4JJavaError: An error occurred while calling o20.partitions.&lt;/STRONG&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT color="#FF0000"&gt;&lt;STRONG&gt;: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://quickstart.cloudera:8020/home/cloudera/Downloads/ml-100k/u.data&lt;/STRONG&gt;&lt;/FONT&gt;&lt;BR /&gt;at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:287)&lt;BR /&gt;at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)&lt;BR /&gt;at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)&lt;BR /&gt;at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)&lt;BR /&gt;at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)&lt;BR /&gt;at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)&lt;BR /&gt;at scala.Option.getOrElse(Option.scala:120)&lt;BR /&gt;at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)&lt;BR /&gt;at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)&lt;BR /&gt;at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)&lt;BR /&gt;at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)&lt;BR /&gt;at scala.Option.getOrElse(Option.scala:120)&lt;BR /&gt;at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)&lt;BR /&gt;at org.apache.spark.api.java.JavaRDDLike$class.partitions(JavaRDDLike.scala:64)&lt;BR /&gt;at org.apache.spark.api.java.AbstractJavaRDDLike.partitions(JavaRDDLike.scala:46)&lt;BR /&gt;at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)&lt;BR /&gt;at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:606)&lt;BR /&gt;at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)&lt;BR /&gt;at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)&lt;BR /&gt;at py4j.Gateway.invoke(Gateway.java:259)&lt;BR /&gt;at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)&lt;BR /&gt;at py4j.commands.CallCommand.execute(CallCommand.java:79)&lt;BR /&gt;at py4j.GatewayConnection.run(GatewayConnection.java:209)&lt;BR /&gt;at java.lang.Thread.run(Thread.java:745)&lt;/P&gt;&lt;P&gt;Please help me with this; I'm new to Spark and I haven't found anything written about this error. Thanks.&lt;/P&gt;</description>
      <pubDate>Mon, 23 Apr 2018 00:50:18 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Input-path-does-not-exist-hdfs-node0-8020-user-hdfs/m-p/66625#M1530</guid>
      <dc:creator>AdityaBhandari</dc:creator>
      <dc:date>2018-04-23T00:50:18Z</dc:date>
    </item>
    <item>
      <title>Re: Input path does not exist: hdfs://node0:8020/user/hdfs/10000000000</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Input-path-does-not-exist-hdfs-node0-8020-user-hdfs/m-p/66626#M1531</link>
      <description>I was referencing a local file system path.&lt;BR /&gt;It needs to be referenced as ‘file:///home...’ instead; that prefix tells Spark to read from the local file system rather than HDFS. It worked.</description>
      <pubDate>Mon, 23 Apr 2018 01:19:22 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Input-path-does-not-exist-hdfs-node0-8020-user-hdfs/m-p/66626#M1531</guid>
      <dc:creator>AdityaBhandari</dc:creator>
      <dc:date>2018-04-23T01:19:22Z</dc:date>
    </item>
  </channel>
</rss>

