<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: How to add the hadoop and yarn configuration file to the Spark application class path ? in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-add-the-hadoop-and-yarn-configuration-file-to-the/m-p/126810#M55603</link>
    <description>&lt;P&gt;@Artem Ervits,&lt;/P&gt;&lt;P&gt;Thanks again, and sorry if I am asking too many questions here.&lt;/P&gt;&lt;P&gt;What I am actually looking for: per the project requirement I should not use the spark-submit script, so I am passing the cluster configuration through the Spark config, as below.&lt;/P&gt;&lt;PRE&gt;SparkConf sparkConfig = new SparkConf().setAppName("Example App of Spark on Yarn");
sparkConfig.set("spark.hadoop.yarn.resourcemanager.hostname","XXXX");
sparkConfig.set("spark.hadoop.yarn.resourcemanager.address","XXXXX:8032");&lt;/PRE&gt;&lt;P&gt;With this it is able to identify the ResourceManager, but it is failing because it is not identifying the file system, even though I am setting the HDFS file system configuration as well:&lt;/P&gt;&lt;PRE&gt;sparkConfig.set("fs.defaultFS", "hdfs://xxxhacluster");
sparkConfig.set("ha.zookeeper.quorum", "xxx:2181,xxxx:2181,xxxx:2181");&lt;/PRE&gt;&lt;P&gt;It is treating the file system as local, and the error I am getting in the ResourceManager log is:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;exited with exitCode: -1000 due to: File file:/tmp/spark-0e6626c2-d344-4cae-897f-934e3eb01d8f/__spark_libs__1448521825653017037.zip does not exist&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Thanks and Regards,&lt;/P&gt;&lt;P&gt;Param.&lt;/P&gt;</description>
    <pubDate>Tue, 28 Feb 2017 01:54:54 GMT</pubDate>
    <dc:creator>parameswarnc</dc:creator>
    <dc:date>2017-02-28T01:54:54Z</dc:date>
    <item>
      <title>How to add the hadoop and yarn configuration file to the Spark application class path ?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-add-the-hadoop-and-yarn-configuration-file-to-the/m-p/126805#M55598</link>
      <description>&lt;P&gt;Hi All,&lt;/P&gt;&lt;P&gt;I am new to Spark. I am trying to submit a Spark application from a Java program, and I am able to do so for a Spark standalone cluster. What I actually want is to submit the job to a YARN cluster, and I can connect to YARN by explicitly setting the ResourceManager property in the Spark config:&lt;/P&gt;&lt;PRE&gt;sparkConfig.set("spark.hadoop.yarn.resourcemanager.address","XXXX:8032");&lt;/PRE&gt;&lt;P&gt;But the application is failing with:&lt;/P&gt;&lt;P&gt;exited with exitCode: -1000 due to: File file:/tmp/spark-0e6626c2-d344-4cae-897f-934e3eb01d8f/__spark_libs__1448521825653017037.zip does not exist&lt;/P&gt;&lt;P&gt;This is from the ResourceManager log; it appears Spark is assuming the local file system and is not uploading the required libraries.&lt;/P&gt;&lt;P&gt;Source and destination file systems are the same. Not copying file:/tmp/spark-1ed67f05-d496-4000-86c1-07fcf8526181/__spark_libs__1740543841989079602.zip&lt;/P&gt;&lt;P&gt;This is from the application side where I am running my program. What I suspect is that it is treating the file system as local rather than HDFS; correct me if I am wrong.&lt;/P&gt;&lt;P&gt;My questions are:&lt;/P&gt;&lt;P&gt;1. What is the actual cause of the job failure, given the log info above?&lt;/P&gt;&lt;P&gt;2. How can I add resource files to the Spark configuration, like addResource in the Hadoop Configuration API?&lt;/P&gt;&lt;P&gt;Thanks in advance,&lt;/P&gt;&lt;P&gt;Param.&lt;/P&gt;</description>
      <pubDate>Sun, 26 Feb 2017 20:01:35 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-add-the-hadoop-and-yarn-configuration-file-to-the/m-p/126805#M55598</guid>
      <dc:creator>parameswarnc</dc:creator>
      <dc:date>2017-02-26T20:01:35Z</dc:date>
    </item>
    <item>
      <title>Re: How to add the hadoop and yarn configuration file to the Spark application class path ?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-add-the-hadoop-and-yarn-configuration-file-to-the/m-p/126806#M55599</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/14624/parameswarnc.html" nodeid="14624"&gt;@Param NC&lt;/A&gt; please take a look at our documentation: &lt;A href="http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_spark-component-guide/content/ch_developing-spark-apps.html" target="_blank"&gt;http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_spark-component-guide/content/ch_developing-spark-apps.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;For general reference, here is an example of submitting in YARN cluster mode (use --deploy-mode client for client mode), from &lt;A href="http://spark.apache.org/docs/1.6.2/submitting-applications.html" target="_blank"&gt;http://spark.apache.org/docs/1.6.2/submitting-applications.html&lt;/A&gt;. On an HDP distribution, HADOOP_CONF_DIR usually points to /etc/hadoop/conf, which contains core-site.xml, yarn-site.xml, hdfs-site.xml, etc.&lt;/P&gt;&lt;PRE&gt;export HADOOP_CONF_DIR=XXX
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode cluster \
  --executor-memory 20G \
  --num-executors 50 \
  /path/to/examples.jar \
  1000
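
# (editor's note, a hedged sketch: if shipping a full HADOOP_CONF_DIR is not
# possible, individual Hadoop settings can also be passed through Spark's
# documented spark.hadoop.* prefix. The hostnames below are placeholders;
# e.g., add to the spark-submit invocation above:)
#   --conf "spark.hadoop.fs.defaultFS=hdfs://namenode:8020" \
#   --conf "spark.hadoop.yarn.resourcemanager.address=rm-host:8032" \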
&lt;/PRE&gt;</description>
      <pubDate>Sun, 26 Feb 2017 23:12:54 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-add-the-hadoop-and-yarn-configuration-file-to-the/m-p/126806#M55599</guid>
      <dc:creator>aervits</dc:creator>
      <dc:date>2017-02-26T23:12:54Z</dc:date>
    </item>
    <item>
      <title>Re: How to add the hadoop and yarn configuration file to the Spark application class path ?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-add-the-hadoop-and-yarn-configuration-file-to-the/m-p/126807#M55600</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/14624/parameswarnc.html" nodeid="14624"&gt;@Param NC&lt;/A&gt; here's how I got it to work on my cluster&lt;/P&gt;&lt;PRE&gt;export HADOOP_CONF_DIR=/etc/hadoop/conf
/usr/hdp/current/spark-client/bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode cluster \
  --executor-memory 1G \
  --num-executors 3 \
  /usr/hdp/current/spark-client/lib/spark-examples*.jar \
  100
&lt;/PRE&gt;</description>
      <pubDate>Sun, 26 Feb 2017 23:37:03 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-add-the-hadoop-and-yarn-configuration-file-to-the/m-p/126807#M55600</guid>
      <dc:creator>aervits</dc:creator>
      <dc:date>2017-02-26T23:37:03Z</dc:date>
    </item>
    <item>
      <title>Re: How to add the hadoop and yarn configuration file to the Spark application class path ?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-add-the-hadoop-and-yarn-configuration-file-to-the/m-p/126808#M55601</link>
      <description>&lt;P&gt;@Artem Ervits,&lt;/P&gt;&lt;P&gt;Thank you very much for the response.&lt;/P&gt;&lt;P&gt;I am able to submit the job to YARN through the spark-submit command, but what I am actually looking for is to do the same thing through a program. It would be great if you could share a template for that, preferably in Java.&lt;/P&gt;&lt;P&gt;-Param.&lt;/P&gt;</description>
      <pubDate>Mon, 27 Feb 2017 14:44:55 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-add-the-hadoop-and-yarn-configuration-file-to-the/m-p/126808#M55601</guid>
      <dc:creator>parameswarnc</dc:creator>
      <dc:date>2017-02-27T14:44:55Z</dc:date>
    </item>
    <item>
      <title>Re: How to add the hadoop and yarn configuration file to the Spark application class path ?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-add-the-hadoop-and-yarn-configuration-file-to-the/m-p/126809#M55602</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/14624/parameswarnc.html" nodeid="14624"&gt;@Param NC&lt;/A&gt; you need to build your application with the hadoop-client dependency in your pom.xml or sbt; for its scope, supply &amp;lt;scope&amp;gt;provided&amp;lt;/scope&amp;gt;. &lt;A href="http://spark.apache.org/docs/1.6.2/submitting-applications.html"&gt;http://spark.apache.org/docs/1.6.2/submitting-applications.html&lt;/A&gt;&lt;/P&gt;&lt;H1&gt;Bundling Your Application’s Dependencies&lt;/H1&gt;&lt;P&gt;If your code depends on other projects, you will need to package them alongside your application in order to distribute the code to a Spark cluster. To do this, create an assembly jar (or “uber” jar) containing your code and its dependencies. Both &lt;A href="https://github.com/sbt/sbt-assembly"&gt;sbt&lt;/A&gt; and &lt;A href="http://maven.apache.org/plugins/maven-shade-plugin/"&gt;Maven&lt;/A&gt; have assembly plugins. When creating assembly jars, list Spark and Hadoop as &lt;CODE&gt;provided&lt;/CODE&gt; dependencies; these need not be bundled since they are provided by the cluster manager at runtime. Once you have an assembled jar you can call the &lt;CODE&gt;bin/spark-submit&lt;/CODE&gt; script as shown here while passing your jar.&lt;/P&gt;&lt;P&gt;For Python, you can use the &lt;CODE&gt;--py-files&lt;/CODE&gt; argument of &lt;CODE&gt;spark-submit&lt;/CODE&gt; to add &lt;CODE&gt;.py&lt;/CODE&gt;, &lt;CODE&gt;.zip&lt;/CODE&gt; or &lt;CODE&gt;.egg&lt;/CODE&gt; files to be distributed with your application. If you depend on multiple Python files we recommend packaging them into a &lt;CODE&gt;.zip&lt;/CODE&gt; or &lt;CODE&gt;.egg&lt;/CODE&gt;.&lt;/P&gt;&lt;P&gt;More info here: &lt;A href="http://spark.apache.org/docs/1.6.2/running-on-yarn.html"&gt;http://spark.apache.org/docs/1.6.2/running-on-yarn.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Here's a sample pom.xml definition for hadoop-client:&lt;/P&gt;&lt;PRE&gt;&amp;lt;dependencies&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.apache.hadoop&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;hadoop-client&amp;lt;/artifactId&amp;gt;
            &amp;lt;version&amp;gt;2.7.1.2.3.0.0-2557&amp;lt;/version&amp;gt;
	    &amp;lt;scope&amp;gt;provided&amp;lt;/scope&amp;gt;
            &amp;lt;type&amp;gt;jar&amp;lt;/type&amp;gt;
        &amp;lt;/dependency&amp;gt;
    &amp;lt;/dependencies&amp;gt;
    &amp;lt;properties&amp;gt;
        &amp;lt;project.build.sourceEncoding&amp;gt;UTF-8&amp;lt;/project.build.sourceEncoding&amp;gt;
        &amp;lt;maven.compiler.source&amp;gt;1.7&amp;lt;/maven.compiler.source&amp;gt;
        &amp;lt;maven.compiler.target&amp;gt;1.7&amp;lt;/maven.compiler.target&amp;gt;
    &amp;lt;/properties&amp;gt;
    
    &amp;lt;repositories&amp;gt;
        &amp;lt;repository&amp;gt;
            &amp;lt;id&amp;gt;HDPReleases&amp;lt;/id&amp;gt;
            &amp;lt;name&amp;gt;HDP Releases&amp;lt;/name&amp;gt;
            &amp;lt;url&amp;gt;http://repo.hortonworks.com/content/repositories/public&amp;lt;/url&amp;gt;
            &amp;lt;layout&amp;gt;default&amp;lt;/layout&amp;gt;
            &amp;lt;releases&amp;gt;
                &amp;lt;enabled&amp;gt;true&amp;lt;/enabled&amp;gt;
                &amp;lt;updatePolicy&amp;gt;always&amp;lt;/updatePolicy&amp;gt;
                &amp;lt;checksumPolicy&amp;gt;warn&amp;lt;/checksumPolicy&amp;gt;
            &amp;lt;/releases&amp;gt;
            &amp;lt;snapshots&amp;gt;
                &amp;lt;enabled&amp;gt;false&amp;lt;/enabled&amp;gt;
                &amp;lt;updatePolicy&amp;gt;never&amp;lt;/updatePolicy&amp;gt;
                &amp;lt;checksumPolicy&amp;gt;fail&amp;lt;/checksumPolicy&amp;gt;
            &amp;lt;/snapshots&amp;gt;
        &amp;lt;/repository&amp;gt;
        &amp;lt;repository&amp;gt;
            &amp;lt;id&amp;gt;HDPJetty&amp;lt;/id&amp;gt;
            &amp;lt;name&amp;gt;Hadoop Jetty&amp;lt;/name&amp;gt;
            &amp;lt;url&amp;gt;http://repo.hortonworks.com/content/repositories/jetty-hadoop/&amp;lt;/url&amp;gt;
            &amp;lt;layout&amp;gt;default&amp;lt;/layout&amp;gt;
            &amp;lt;releases&amp;gt;
                &amp;lt;enabled&amp;gt;true&amp;lt;/enabled&amp;gt;
                &amp;lt;updatePolicy&amp;gt;always&amp;lt;/updatePolicy&amp;gt;
                &amp;lt;checksumPolicy&amp;gt;warn&amp;lt;/checksumPolicy&amp;gt;
            &amp;lt;/releases&amp;gt;
            &amp;lt;snapshots&amp;gt;
                &amp;lt;enabled&amp;gt;false&amp;lt;/enabled&amp;gt;
                &amp;lt;updatePolicy&amp;gt;never&amp;lt;/updatePolicy&amp;gt;
                &amp;lt;checksumPolicy&amp;gt;fail&amp;lt;/checksumPolicy&amp;gt;
            &amp;lt;/snapshots&amp;gt;
        &amp;lt;/repository&amp;gt;
        &amp;lt;repository&amp;gt;
            &amp;lt;snapshots&amp;gt;
                &amp;lt;enabled&amp;gt;false&amp;lt;/enabled&amp;gt;
            &amp;lt;/snapshots&amp;gt;
            &amp;lt;id&amp;gt;central&amp;lt;/id&amp;gt;
            &amp;lt;name&amp;gt;bintray&amp;lt;/name&amp;gt;
            &amp;lt;url&amp;gt;http://jcenter.bintray.com&amp;lt;/url&amp;gt;
        &amp;lt;/repository&amp;gt;
    &amp;lt;/repositories&amp;gt;
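    &amp;lt;!-- editor's note: 2.7.1.2.3.0.0-2557 above is an HDP-stack-specific
         hadoop-client build; match this version to your target cluster's
         stack version rather than copying it verbatim --&amp;gt;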

&lt;/PRE&gt;</description>
      <pubDate>Mon, 27 Feb 2017 23:01:19 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-add-the-hadoop-and-yarn-configuration-file-to-the/m-p/126809#M55602</guid>
      <dc:creator>aervits</dc:creator>
      <dc:date>2017-02-27T23:01:19Z</dc:date>
    </item>
    <item>
      <title>Re: How to add the hadoop and yarn configuration file to the Spark application class path ?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-add-the-hadoop-and-yarn-configuration-file-to-the/m-p/126810#M55603</link>
      <description>&lt;P&gt;@Artem Ervits,&lt;/P&gt;&lt;P&gt;Thanks again, and sorry if I am asking too many questions here.&lt;/P&gt;&lt;P&gt;What I am actually looking for: per the project requirement I should not use the spark-submit script, so I am passing the cluster configuration through the Spark config, as below.&lt;/P&gt;&lt;PRE&gt;SparkConf sparkConfig = new SparkConf().setAppName("Example App of Spark on Yarn");
sparkConfig.set("spark.hadoop.yarn.resourcemanager.hostname","XXXX");
sparkConfig.set("spark.hadoop.yarn.resourcemanager.address","XXXXX:8032");&lt;/PRE&gt;&lt;P&gt;With this it is able to identify the ResourceManager, but it is failing because it is not identifying the file system, even though I am setting the HDFS file system configuration as well:&lt;/P&gt;&lt;PRE&gt;sparkConfig.set("fs.defaultFS", "hdfs://xxxhacluster");
sparkConfig.set("ha.zookeeper.quorum", "xxx:2181,xxxx:2181,xxxx:2181");&lt;/PRE&gt;&lt;P&gt;It is treating the file system as local, and the error I am getting in the ResourceManager log is:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;exited with exitCode: -1000 due to: File file:/tmp/spark-0e6626c2-d344-4cae-897f-934e3eb01d8f/__spark_libs__1448521825653017037.zip does not exist&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Thanks and Regards,&lt;/P&gt;&lt;P&gt;Param.&lt;/P&gt;</description>
      <pubDate>Tue, 28 Feb 2017 01:54:54 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-add-the-hadoop-and-yarn-configuration-file-to-the/m-p/126810#M55603</guid>
      <dc:creator>parameswarnc</dc:creator>
      <dc:date>2017-02-28T01:54:54Z</dc:date>
    </item>
    <item>
      <title>Re: How to add the hadoop and yarn configuration file to the Spark application class path ?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-add-the-hadoop-and-yarn-configuration-file-to-the/m-p/126811#M55604</link>
      <description>&lt;P&gt;Have you tried the following?&lt;/P&gt;&lt;PRE&gt;import org.apache.hadoop.fs._
import org.apache.spark.deploy.SparkHadoopUtil
import java.net.URI

// "sc" is an existing SparkContext; derive a Hadoop Configuration
// (including any spark.hadoop.* overrides) from its SparkConf
val hdfs_conf = SparkHadoopUtil.get.newConfiguration(sc.getConf)
val hdfs = FileSystem.get(hdfs_conf)
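
// (editor's sketch, untested here: requires a live SparkContext)
// Quick sanity check: if this prints a file:/// URI, the Hadoop configs
// were not picked up and YARN will stage __spark_libs__ on the local FS
// instead of HDFS, which matches the exitCode -1000 error above.
println(hdfs.getUri)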
&lt;/PRE&gt;</description>
      <pubDate>Tue, 28 Feb 2017 20:32:03 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-add-the-hadoop-and-yarn-configuration-file-to-the/m-p/126811#M55604</guid>
      <dc:creator>aervits</dc:creator>
      <dc:date>2017-02-28T20:32:03Z</dc:date>
    </item>
    <item>
      <title>Re: How to add the hadoop and yarn configuration file to the Spark application class path ?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-add-the-hadoop-and-yarn-configuration-file-to-the/m-p/126812#M55605</link>
      <description>&lt;P&gt;@Artem Ervits,&lt;/P&gt;&lt;P&gt;Thanks a lot for your time and help.&lt;/P&gt;&lt;P&gt;I was able to achieve my objective by setting the Hadoop and YARN properties in the Spark configuration:&lt;/P&gt;&lt;PRE&gt;sparkConfig.set("spark.hadoop.yarn.resourcemanager.hostname","XXX");
sparkConfig.set("spark.hadoop.yarn.resourcemanager.address","XXX:8032");
sparkConfig.set("spark.yarn.access.namenodes", "hdfs://XXXX:8020,hdfs://XXXX:8020");
sparkConfig.set("spark.yarn.stagingDir", "hdfs://XXXX:8020/user/hduser/");
&lt;/PRE&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Param.&lt;/P&gt;</description>
      <pubDate>Wed, 01 Mar 2017 00:58:14 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-add-the-hadoop-and-yarn-configuration-file-to-the/m-p/126812#M55605</guid>
      <dc:creator>parameswarnc</dc:creator>
      <dc:date>2017-03-01T00:58:14Z</dc:date>
    </item>
    <item>
      <title>Re: How to add the hadoop and yarn configuration file to the Spark application class path ?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-add-the-hadoop-and-yarn-configuration-file-to-the/m-p/126813#M55606</link>
      <description>&lt;P&gt;Sorry for the delayed reply; I got busy with some work.&lt;/P&gt;&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/393/aervits.html" nodeid="393"&gt;@Artem Ervits&lt;/A&gt; thanks a lot for all the responses.&lt;/P&gt;&lt;P&gt;I was able to achieve this by setting the Spark configuration as below (note: the deploy mode must be set via the spark.submit.deployMode property; "--deploy-mode" is a spark-submit flag, not a configuration key):&lt;/P&gt;&lt;PRE&gt;sparkConfig.set("spark.hadoop.yarn.resourcemanager.hostname","XXXXX");
sparkConfig.set("spark.hadoop.yarn.resourcemanager.address","XXXXX:8032");
sparkConfig.set("spark.yarn.access.namenodes", "hdfs://XXXXX:8020,hdfs://XXXX:8020");
sparkConfig.set("spark.yarn.stagingDir", "hdfs://XXXXX:8020/user/hduser/");
sparkConfig.set("spark.submit.deployMode", deployMode);
&lt;/PRE&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Param.&lt;/P&gt;</description>
      <pubDate>Wed, 15 Mar 2017 18:25:25 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-add-the-hadoop-and-yarn-configuration-file-to-the/m-p/126813#M55606</guid>
      <dc:creator>parameswarnc</dc:creator>
      <dc:date>2017-03-15T18:25:25Z</dc:date>
    </item>
    <item>
      <title>Re: How to add the hadoop and yarn configuration file to the Spark application class path ?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-add-the-hadoop-and-yarn-configuration-file-to-the/m-p/126814#M55607</link>
      <description>&lt;P&gt;I am not able to find the spark.hadoop.yarn.* properties; they are not listed in any Spark documentation. Could you please point me to where the spark.hadoop.* properties are documented?&lt;/P&gt;</description>
      <pubDate>Thu, 18 Jan 2018 06:43:36 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/How-to-add-the-hadoop-and-yarn-configuration-file-to-the/m-p/126814#M55607</guid>
      <dc:creator>rina_jadav12</dc:creator>
      <dc:date>2018-01-18T06:43:36Z</dc:date>
    </item>
  </channel>
</rss>

