
Spark issue after running the job


We have a 3-node cluster.

Each node has 32 GB of RAM.

But the system still hangs after running the job.

The job converts a DataFrame to CSV using com.databricks.spark.csv.

1 ACCEPTED SOLUTION


The issue was resolved after increasing the physical RAM of the machine; now it works fine. I was running the job on a 32 GB node, increased it to 64 GB, and ran the same code 3-4 times.


6 REPLIES

Super Collaborator

Could you please post a little more information on the job, the submit command, etc.? What is your data source?


Please suggest how I can tune my cluster for Spark.


@Arun A K

This is the command (we are reading CSV files):

java -cp .:spark-assembly-1.6.1.2.4.2.0-258-hadoop2.7.1.2.4.2.0-258.jar:commons-csv-1.1.jar:spark-csv_2.10-1.4.0.jar SparkMainPlain xyz

Super Collaborator

If this is your Spark application, please submit it using spark-submit rather than running it as a plain Java application. Could you explain in a little more detail what the application does? What is the data source, and how do you turn your data into a DataFrame?
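A minimal sketch of the spark-submit form suggested above, assuming the main class is SparkMainPlain as in the posted command; the application jar name (sparkmain.jar) is hypothetical, so substitute your own, and the library jars from the original classpath move to --jars:

```shell
# Sketch only: sparkmain.jar is an assumed name for the application jar.
# commons-csv and spark-csv come from the command posted in this thread.
spark-submit \
  --class SparkMainPlain \
  --master yarn \
  --deploy-mode client \
  --jars commons-csv-1.1.jar,spark-csv_2.10-1.4.0.jar \
  sparkmain.jar xyz
```

Submitting this way lets Spark manage driver and executor JVMs instead of a single java process holding everything in one heap.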



Can anyone help me tune Spark so the same job runs on the 32 GB system? My cluster was 3 nodes with 32 GB each; I think 32 GB per node should be enough, and free memory was always around 20 GB on every node.
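One possible starting point for that tuning, assuming YARN and the class/jar names from earlier in the thread (the jar name and all memory values are illustrative assumptions, not the thread's confirmed fix): set driver and executor memory explicitly at submit time, since a plain java invocation bypasses these controls entirely.

```shell
# Illustrative values for a 3-node, 32 GB-per-node cluster; adjust to
# your workload. sparkmain.jar is an assumed application jar name.
spark-submit \
  --class SparkMainPlain \
  --master yarn \
  --driver-memory 4g \
  --executor-memory 8g \
  --num-executors 3 \
  --executor-cores 4 \
  --conf spark.yarn.executor.memoryOverhead=1024 \
  sparkmain.jar xyz
```

If the hang is driver-side (e.g. collecting the DataFrame before writing), raising --driver-memory alone may matter more than executor settings; writing the CSV directly from the DataFrame avoids pulling all data to the driver.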