
Invoke Livy with pyFiles attribute

avatar
Cloudera Employee

Platform: HDP 2.6.4

If I set --py-files in pyspark (shell mode), it works fine. However, if I set the pyFiles parameter in Livy's curl request, it returns the error "No module named splitter".
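For reference, the shell-mode invocation that works is along these lines (a sketch; the path to splitter.py is illustrative):

pyspark --py-files /path/to/splitter.py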

I was able to replicate this issue on HDP sandbox as well.

Example:

Create livy/spark session:

curl -X POST --data '{"kind": "pyspark", "pyFiles" : ["/some hdfs location/splitter.py"]}' -H "Content-Type: application/json" -H "X-Requested-By: root"http://localhost:8999/sessions
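If you script these steps, the session id can be pulled from the create-session response, which includes an "id" field. A sketch, assuming jq is available:

SESSION_ID=$(curl -s -X POST --data '{"kind": "pyspark", "pyFiles": ["/some hdfs location/splitter.py"]}' -H "Content-Type: application/json" -H "X-Requested-By: root" http://localhost:8999/sessions | jq -r '.id')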

Submit livy/spark statement. Based on the response above, I extracted the session id, which was 71.

curl -X POST --data '{"code": "from splitter import getWords"}' \-H "Content-Type: application/json" -H "X-Requested-By: root"http://localhost:8999/sessions/71/statements

Check statement status:

curl -X GET -H "Content-Type: application/json" -H "X-Requested-By: root" http://localhost:8999/sessions/71/statements

Response:

{
  "id": 0,
  "code": "from splitter import getWords",
  "state": "available",
  "output": {
    "status": "error",
    "execution_count": 0,
    "ename": "ImportError",
    "evalue": "No module named splitter",
    "traceback": [
      "Traceback (most recent call last):\n",
      "ImportError: No module named splitter\n"
    ]
  },
  "progress": 1.0
}

Any ideas? pyspark shell works fine, but Livy does not. Please suggest.

Thank you


6 REPLIES


@skekatpuray --py-files is for the command line only. Try using spark.submit.pyFiles instead with Livy; add it via Spark configuration in the "conf" field of the REST request. Check this link for more information:

https://community.hortonworks.com/articles/151164/how-to-submit-spark-application-through-livy-rest....

You may also want to upload those pyFiles to HDFS and point to them there rather than on the local file system, since they won't be present locally for Livy.
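For example, a sketch (the HDFS path is illustrative):

curl -X POST --data '{"kind":"pyspark", "conf":{ "spark.submit.pyFiles" : "hdfs:///some/hdfs/path/splitter.py"} }' -H "Content-Type: application/json" -H "X-Requested-By: root" http://localhost:8999/sessions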

HTH

*** If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.


@skekatpuray I see you are using the sessions API instead of batches. Try running with:

curl -X POST --data '{"kind":"pyspark", "conf":{ "pyFiles" : "/user/skekatpu/pw/codebase/splitter.py"} }'-H "Content-Type: application/json"-H "X-Requested-By: root" http://localhost:8999/batches

HTH

*** If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.

Cloudera Employee

Sorry, it didn't work. Here's the request to create the session:

curl -X POST --data '{"kind":"pyspark", "conf":{ "spark.submit.pyFiles" : "/user/skekatpu/pw/codebase/splitter.py"} }' -H "Content-Type: application/json" -H "X-Requested-By: root" http://localhost:8999/sessions

I retried it using the fully qualified HDFS name (hdfs:///sandbox-hdp.hortonworks.com/user/skekatpu/pw/codebase/splitter.py), but it still didn't work. Response:

{ "id": 1, "code": "import splitter ", "state": "available", "output": { "status": "error", "execution_count": 1, "ename": "ImportError", "evalue": "No module named splitter", "traceback": [ "Traceback (most recent call last):\n", "ImportError: No module named splitter\n" ] }, "progress": 1.0 }

Master Guru

So splitter.py is in the HDFS directory /user/skekatpu/pw/codebase with read/write/execute permissions?
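You can verify with something like (a sketch):

hdfs dfs -ls /user/skekatpu/pw/codebase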

https://community.hortonworks.com/articles/151164/how-to-submit-spark-application-through-livy-rest....

https://stackoverflow.com/questions/46809200/submitting-python-file-in-batch-mode-in-livywithout-had...



For people using the incubating version of Livy for the first time, kindly check that the template file livy.conf.template has been renamed, stripping off the .template suffix. Then make sure that the following configurations are present in it:

livy.spark.master = local
livy.file.local-dir-whitelist = /path/to/script/folder/

Kindly make sure the trailing forward slash is present at the end of the path, as in the sketch below.
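A minimal sketch of that setup (the conf directory path is illustrative; it depends on where Livy is installed):

cd /path/to/livy/conf
cp livy.conf.template livy.conf
echo 'livy.spark.master = local' >> livy.conf
echo 'livy.file.local-dir-whitelist = /path/to/script/folder/' >> livy.conf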

ACCEPTED SOLUTION

Cloudera Employee

Finally, I was able to get it working. You need to pass 'spark.yarn.dist.pyFiles' in conf. An example:

curl -X POST --data '{"kind":"pyspark", "conf":{ "spark.yarn.dist.pyFiles" : "hdfs://sandbox-hdp.hortonworks.com:8020/user/skekatpu/pw/codebase"} }' -H "Content-Type: application/json" -H "X-Requested-By: someuserid"http://localhost:8999/sessions

...where 'codebase' is an HDFS folder containing the .py modules.
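To confirm the modules are importable, you can submit an import statement against the new session (a sketch; replace <session-id> with the id returned by the create call):

curl -X POST --data '{"code": "from splitter import getWords"}' -H "Content-Type: application/json" -H "X-Requested-By: someuserid" http://localhost:8999/sessions/<session-id>/statements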

Felix:

Yes, we have some flows that work with batches as well, but this particular one needs interactive connectivity to Livy, and hence /sessions needs to be used.


Thanks for sharing the solution!