<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: read/write hdfs files with standalone python script in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/read-write-hdfs-files-with-standalone-python-script/m-p/60053#M55556</link>
    <description>&lt;P&gt;Hello Creaping,&lt;BR /&gt;&lt;BR /&gt;As HDFS is not a standard Unix filesystem, it is not possible to read it with native Python I/O libraries. Because HDFS is open source, there are plenty of connectors out there. You can also access HDFS via HttpFS over a REST interface.&lt;BR /&gt;&lt;BR /&gt;If you'd like to process a large amount of data, none of that will be suitable, as the script itself still&amp;nbsp;runs on a single computer. To solve that, you can rewrite your script with PySpark and use the Spark-provided utilities to manipulate the data. This will solve both HDFS access and the distribution of the workload for you.&lt;BR /&gt;&lt;BR /&gt;Zsolt&lt;/P&gt;</description>
    <pubDate>Mon, 18 Sep 2017 13:35:36 GMT</pubDate>
    <dc:creator>zherczeg</dc:creator>
    <dc:date>2017-09-18T13:35:36Z</dc:date>
    <item>
      <title>read/write hdfs files with standalone python script</title>
      <link>https://community.cloudera.com/t5/Support-Questions/read-write-hdfs-files-with-standalone-python-script/m-p/59867#M55555</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have some standalone Python files that access data through the common idiom:&lt;/P&gt;&lt;PRE&gt;with open("filename") as f:
    for line in f:
[...]&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I want to make these Python scripts runnable without changing too much of the code and without extra dependencies, if possible. Right now I start the files as Spark programs in a workflow in Hue.&lt;/P&gt;&lt;P&gt;Are there built-in packages I can use? I tried to import pydoop and hdfs, but they were not installed.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;My goal is to make these scripts run and be able to read/write files on HDFS.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for the help.&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 12:14:16 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/read-write-hdfs-files-with-standalone-python-script/m-p/59867#M55555</guid>
      <dc:creator>Creaping</dc:creator>
      <dc:date>2022-09-16T12:14:16Z</dc:date>
    </item>
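<!-- The question above asks for HDFS access with minimal code changes to the `with open(...)` idiom. A minimal sketch of one way to do that is a small wrapper that falls back to the builtin open() for local paths and uses the `hdfs` PyPI client for hdfs:// paths. The `hdfs` package, the WebHDFS port 9870, and the function name `open_any` are all assumptions for illustration, not something from the thread.

```python
from urllib.parse import urlparse

def open_any(path, mode="r"):
    """Open a local file with the builtin open(), or an hdfs:// path
    via the `hdfs` PyPI client (an assumption: requires `pip install hdfs`
    and a reachable WebHDFS endpoint; port 9870 is the usual Hadoop 3
    NameNode default)."""
    if path.startswith("hdfs://"):
        # Imported lazily so local-only use needs no extra dependency.
        from hdfs import InsecureClient
        parsed = urlparse(path)
        client = InsecureClient(f"http://{parsed.hostname}:9870")
        # client.read() is a context manager yielding a file-like object,
        # so `with open_any(...) as f:` works for both branches.
        return client.read(parsed.path)
    return open(path, mode)
```

Existing `with open("filename") as f:` loops then only need the call site changed to `open_any`, which keeps the scripts runnable locally while the cluster dependency stays optional.
-->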
    <item>
      <title>Re: read/write hdfs files with standalone python script</title>
      <link>https://community.cloudera.com/t5/Support-Questions/read-write-hdfs-files-with-standalone-python-script/m-p/60053#M55556</link>
      <description>&lt;P&gt;Hello Creaping,&lt;BR /&gt;&lt;BR /&gt;As HDFS is not a standard Unix filesystem, it is not possible to read it with native Python I/O libraries. Because HDFS is open source, there are plenty of connectors out there. You can also access HDFS via HttpFS over a REST interface.&lt;BR /&gt;&lt;BR /&gt;If you'd like to process a large amount of data, none of that will be suitable, as the script itself still&amp;nbsp;runs on a single computer. To solve that, you can rewrite your script with PySpark and use the Spark-provided utilities to manipulate the data. This will solve both HDFS access and the distribution of the workload for you.&lt;BR /&gt;&lt;BR /&gt;Zsolt&lt;/P&gt;</description>
      <pubDate>Mon, 18 Sep 2017 13:35:36 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/read-write-hdfs-files-with-standalone-python-script/m-p/60053#M55556</guid>
      <dc:creator>zherczeg</dc:creator>
      <dc:date>2017-09-18T13:35:36Z</dc:date>
    </item>
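<!-- The reply above mentions accessing HDFS via HttpFS over a REST interface. A minimal stdlib-only sketch of what such a request looks like is building the WebHDFS URL; the host name, user, and port are placeholders (14000 is the usual HttpFS default, while plain WebHDFS on the NameNode typically listens on 9870), none of which come from the thread.

```python
from urllib.parse import urlencode

def webhdfs_url(host, path, op, port=14000, user="hdfs", **params):
    """Build an HttpFS/WebHDFS REST URL, e.g. op="OPEN" to read a file
    or op="CREATE" to write one. An actual call would then be made with
    urllib.request or any HTTP client against this URL."""
    query = urlencode({"op": op, "user.name": user, **params})
    return f"http://{host}:{port}/webhdfs/v1/{path.lstrip('/')}?{query}"
```

For example, `webhdfs_url("namenode.example.com", "/user/alice/data.txt", "OPEN", user="alice")` yields a URL that, issued as an HTTP GET, streams the file contents back, which is why this route needs no Hadoop client installed on the machine running the script.
-->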
    <item>
      <title>Re: read/write hdfs files with standalone python script</title>
      <link>https://community.cloudera.com/t5/Support-Questions/read-write-hdfs-files-with-standalone-python-script/m-p/60109#M55557</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;SPAN&gt;Zsolt,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Thanks for the reply. The problem is that I don't&amp;nbsp;have permission to install Python packages like pydoop.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;I wasn't sure whether there is a native way, but I will ask the sysadmin to install some packages.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 19 Sep 2017 13:11:03 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/read-write-hdfs-files-with-standalone-python-script/m-p/60109#M55557</guid>
      <dc:creator>Creaping</dc:creator>
      <dc:date>2017-09-19T13:11:03Z</dc:date>
    </item>
  </channel>
</rss>

