
Questions on several API endpoints and the model

Explorer

(1) I tried to invoke "/refresh". The HTTP status code was 200, but I did not see the Oryx console messages change to reflect a rebuild of the model matrix. Shouldn't I see some ALS iteration messages in the console? How do I know the model is refreshing? Can a refresh take "new" ALS parameters to rebuild the model matrix?

 

(2) Is there a way to clear all training data and reset the model?

7 REPLIES

Master Collaborator

/refresh on the Serving Layer? That just asks it to check for a new model manually, and suggests a sync of local data to HDFS. It won't affect the model directly.

 

Have a look at the "Force Run" link in the Computation Layer for a way to suggest a model rebuild.

 

You can always delete all of the previous input and the model and start from a new generation 00000..., or just delete the model and keep the data if you only want to rebuild it. You wouldn't need to do this in general, but it's always possible.
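As a minimal sketch, assuming the Serving Layer is reachable at localhost:8080 (host, port, and HTTP method may differ in your deployment), you can invoke "/refresh" and confirm the 200 yourself:

import java.net.HttpURLConnection;
import java.net.URL;

public class RefreshCheck {
    public static void main(String[] args) throws Exception {
        // Host and port are placeholders -- use your Serving Layer's address.
        URL url = new URL("http://localhost:8080/refresh");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // A 200 here only means the refresh check was accepted; it does not
        // mean a rebuild started. Rebuilds happen in the Computation Layer.
        System.out.println("HTTP status: " + conn.getResponseCode());
        conn.disconnect();
    }
}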

Explorer

Sean,

 

As I move to using several Oryx API endpoints, I have some questions:

(1) For the "/because" API: I traced the code, and it seems the influence score is computed from the similarity of the Y latent features between the requested item ID and the known items associated with the requested user ID. Can you confirm? In the ALS paper (Hu et al., "Collaborative Filtering for Implicit Feedback Datasets"), there is a section (Section 5: Explaining recommendations) on why a specific item was recommended to a user. Does the Oryx implementation follow that paper's approach?

 

(2) Is it possible to get the users' input data (i.e., the rating data as stored in the R matrix, RbyRow and RbyColumn)?

I noticed there is no such API, and I am wondering how to get that information from the internal structures.

 

Thanks.
Master Collaborator

No, it is not the same "because" computation as in the paper. The one in the paper is better, but it requires storing a k x k matrix for every user, or computing it on the fly, both of which are pretty prohibitive. They're not hard to implement, though. This is a cheap, non-personalized computation based on item similarity.

 

No, the system does not serve the original data, just results from the factored model. It's assumed that, if the caller needs this information, the caller already has it; its purpose is generally not specific to the core recommender, so accessing this data is not part of the engine.
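To illustrate the cheap item-similarity idea above (a sketch of the general approach, not the exact Oryx internals): score each of the user's known items by the cosine similarity of its latent feature vector with the recommended item's vector.

public class ItemSimilarityBecause {

    // Cosine similarity between two item feature vectors (rows of Y).
    static double cosine(float[] a, float[] b) {
        double dot = 0.0, normA = 0.0, normB = 0.0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    // Illustrative "because" score: how similar is one of the user's known
    // items to the recommended item, in the latent feature space?
    static double influence(float[] recommendedItemFeatures, float[] knownItemFeatures) {
        return cosine(recommendedItemFeatures, knownItemFeatures);
    }
}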

Explorer

Sean,

 

As I move to using the Oryx "/ingest" API endpoint, I have some questions:

(1) The Oryx /ingest UI allows you to input userID,itemID,value...

If I input two events, (userID-1,itemID-1,2.0) and then (userID-1,itemID-1,1.0), will the system accumulate the values (keyed on user ID and item ID), so that the equivalent is (userID-1,itemID-1,3.0)?

 

(2) The Oryx /ingest UI also allows you to input userID,itemID,value from a CSV file. As in (1), will it automatically aggregate the values inside the CSV based on user ID and item ID?

 

Thanks.

Master Collaborator

At model-build time, yes, this is equivalent to a single input with value 3. At runtime, it would have a very slightly different effect as an incremental update, since applying an update of 1 and then 2 is slightly different from applying one update of 3.

 

Ingesting is the same as sending the data points to /pref one after the other, so no, they are not aggregated at ingest time.
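To make the model-build-time equivalence concrete, here is an illustrative sketch (not the actual Oryx ingest code) of how repeated (user, item) pairs collapse into one summed value before factorization:

import java.util.HashMap;
import java.util.Map;

public class AggregationSketch {
    public static void main(String[] args) {
        String[] events = {
            "userID-1,itemID-1,2.0",
            "userID-1,itemID-1,1.0",
        };
        // Sum values per (user, item) key, as happens in effect at model-build time.
        Map<String,Double> aggregated = new HashMap<>();
        for (String line : events) {
            String[] f = line.split(",");
            aggregated.merge(f[0] + "," + f[1], Double.parseDouble(f[2]), Double::sum);
        }
        System.out.println(aggregated); // prints {userID-1,itemID-1=3.0}
    }
}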

Explorer

Sean,

 

Following up on the /ingest endpoint and model computation:

I ingested a zipped file of about 1 GB (about 4 GB unzipped).

It contains about 10 million user records.

Questions:

(1) I expected generation 0 to be computed from the full 1 GB zipped file (i.e., all 10 million users).

However, from the computation log, it seems to run generation 0 with only a subset of the data (about 200 MB of zipped data), then it starts to run generation 1...

Is that normal?

(2) While running generation 1, it hits the out-of-memory issue below (my VM has 24 GB of memory on 64-bit Linux, and I run in local mode).

Then it triggers the generation 1 computation again, but using different copied /tmp/ files. Any suggestions or comments? Thanks.

(* Log *)

 

Sat Feb 28 00:09:45 PST 2015 INFO Generation 0 complete
Sat Feb 28 00:10:45 PST 2015 INFO Running new generation due to elapsed time: 409 minutes
Sat Feb 28 00:10:45 PST 2015 INFO Starting run for instance 
Sat Feb 28 00:10:45 PST 2015 INFO Last complete generation is 0
Sat Feb 28 00:10:45 PST 2015 INFO Making new generation 2
Sat Feb 28 00:10:45 PST 2015 INFO Waiting 209s for data to start uploading to generation 1 and then move to 2...
Sat Feb 28 00:14:15 PST 2015 INFO Running generation 1
Sat Feb 28 00:14:22 PST 2015 INFO Copying /tmp/1425111255160-0/oryx-append-2983995934077746346.csv.gz to /tmp/1425111255160-1
Sat Feb 28 00:14:22 PST 2015 INFO Copying /tmp/1425111255160-0/oryx-append-7628769517257349553.csv.gz to /tmp/1425111255160-1
Sat Feb 28 00:14:23 PST 2015 INFO Copying /tmp/1425111255160-0/oryx-append-1155998814849386940.csv.gz to /tmp/1425111255160-1
Sat Feb 28 00:14:23 PST 2015 INFO Copying /tmp/1425111255160-0/oryx-append-1103482790086927796.csv.gz to /tmp/1425111255160-1
Sat Feb 28 00:14:23 PST 2015 INFO Copying /tmp/1425111255160-0/oryx-append-6832095713207111419.csv.gz to /tmp/1425111255160-1
Sat Feb 28 00:14:23 PST 2015 INFO Copying /tmp/1425111255160-0/oryx-append-5243247952662826167.csv.gz to /tmp/1425111255160-1
Sat Feb 28 00:14:24 PST 2015 INFO Reading /tmp/1425111255160-3/0.csv.gz
Sat Feb 28 00:14:43 PST 2015 INFO Pruning near-zero entries
Sat Feb 28 00:14:44 PST 2015 INFO No input files in /tmp/1425111255160-5
Sat Feb 28 00:14:44 PST 2015 INFO Pruning near-zero entries
Sat Feb 28 00:14:44 PST 2015 INFO Reading /tmp/1425111255160-4/0.csv.gz
Sat Feb 28 00:14:53 PST 2015 INFO Reading /tmp/1425111255160-1/oryx-append-2983995934077746346.csv.gz
Sat Feb 28 00:15:50 PST 2015 INFO Reading /tmp/1425111255160-1/oryx-append-1155998814849386940.csv.gz
Sat Feb 28 00:16:20 PST 2015 INFO Reading /tmp/1425111255160-1/oryx-append-1103482790086927796.csv.gz
Sat Feb 28 00:16:55 PST 2015 INFO Reading /tmp/1425111255160-1/oryx-append-6832095713207111419.csv.gz
Sat Feb 28 00:17:54 PST 2015 INFO Reading /tmp/1425111255160-1/oryx-append-7628769517257349553.csv.gz
Sat Feb 28 00:19:03 PST 2015 INFO Reading /tmp/1425111255160-1/oryx-append-5243247952662826167.csv.gz
Sat Feb 28 00:20:00 PST 2015 INFO Pruning near-zero entries
Sat Feb 28 00:20:03 PST 2015 INFO Building factorization...
Sat Feb 28 00:20:03 PST 2015 INFO Starting from new, random Y matrix
Sat Feb 28 00:20:03 PST 2015 INFO Constructed initial Y
Sat Feb 28 00:20:03 PST 2015 INFO Executing ALS with parallelism 4
Sat Feb 28 00:20:56 PST 2015 INFO 100000 X/tag rows computed (4689MB heap)
Sat Feb 28 00:21:32 PST 2015 INFO 200000 X/tag rows computed (5207MB heap)
Sat Feb 28 00:22:08 PST 2015 INFO 300000 X/tag rows computed (5821MB heap)
Sat Feb 28 00:22:49 PST 2015 INFO 400000 X/tag rows computed (5192MB heap)
...
...
Sat Feb 28 00:37:25 PST 2015 INFO 2400000 X/tag rows computed (5160MB heap)
Sat Feb 28 00:38:11 PST 2015 INFO 2500000 X/tag rows computed (5935MB heap)
Sat Feb 28 00:38:11 PST 2015 WARNING Memory is low. Increase heap size with -Xmx, decrease new generation size with larger -XX:NewRatio value, and/or use -XX:+UseCompressedOops
Sat Feb 28 00:39:02 PST 2015 INFO 2600000 X/tag rows computed (5858MB heap)
Sat Feb 28 00:39:59 PST 2015 INFO 2700000 X/tag rows computed (5934MB heap)
Sat Feb 28 00:39:59 PST 2015 WARNING Memory is low. Increase heap size with -Xmx, decrease new generation size with larger -XX:NewRatio value, and/or use -XX:+UseCompressedOops
...
...
Sat Feb 28 01:26:17 PST 2015 INFO 6100000 X/tag rows computed (5815MB heap)
Sat Feb 28 01:28:41 PST 2015 INFO 6200000 X/tag rows computed (5926MB heap)
Sat Feb 28 01:28:41 PST 2015 WARNING Memory is low. Increase heap size with -Xmx, decrease new generation size with larger -XX:NewRatio value, and/or use -XX:+UseCompressedOops
Sat Feb 28 01:31:15 PST 2015 INFO 6300000 X/tag rows computed (5652MB heap)
Sat Feb 28 01:33:52 PST 2015 INFO 6400000 X/tag rows computed (5758MB heap)
Sat Feb 28 01:36:35 PST 2015 INFO 6500000 X/tag rows computed (5912MB heap)
Sat Feb 28 01:36:35 PST 2015 WARNING Memory is low. Increase heap size with -Xmx, decrease new generation size with larger -XX:NewRatio value, and/or use -XX:+UseCompressedOops
Sat Feb 28 01:39:28 PST 2015 INFO 6600000 X/tag rows computed (5825MB heap)
Sat Feb 28 01:42:27 PST 2015 INFO 6700000 X/tag rows computed (5864MB heap)
Sat Feb 28 01:45:43 PST 2015 INFO 6800000 X/tag rows computed (5767MB heap)
Sat Feb 28 01:49:00 PST 2015 INFO 6900000 X/tag rows computed (5883MB heap)
Sat Feb 28 01:52:28 PST 2015 INFO 7000000 X/tag rows computed (5982MB heap)
Sat Feb 28 01:52:28 PST 2015 WARNING Memory is low. Increase heap size with -Xmx, decrease new generation size with larger -XX:NewRatio value, and/or use -XX:+UseCompressedOops
Sat Feb 28 01:56:14 PST 2015 INFO 7100000 X/tag rows computed (5881MB heap)
Sat Feb 28 02:00:10 PST 2015 INFO 7200000 X/tag rows computed (5908MB heap)
Sat Feb 28 02:00:10 PST 2015 WARNING Memory is low. Increase heap size with -Xmx, decrease new generation size with larger -XX:NewRatio value, and/or use -XX:+UseCompressedOops
Sat Feb 28 02:04:33 PST 2015 INFO 7300000 X/tag rows computed (5929MB heap)
Sat Feb 28 02:04:33 PST 2015 WARNING Memory is low. Increase heap size with -Xmx, decrease new generation size with larger -XX:NewRatio value, and/or use -XX:+UseCompressedOops
Sat Feb 28 02:09:16 PST 2015 INFO 7400000 X/tag rows computed (5817MB heap)
Sat Feb 28 02:14:13 PST 2015 INFO 7500000 X/tag rows computed (5908MB heap)
Sat Feb 28 02:14:13 PST 2015 WARNING Memory is low. Increase heap size with -Xmx, decrease new generation size with larger -XX:NewRatio value, and/or use -XX:+UseCompressedOops
Sat Feb 28 02:19:37 PST 2015 INFO 7600000 X/tag rows computed (5952MB heap)
Sat Feb 28 02:19:37 PST 2015 WARNING Memory is low. Increase heap size with -Xmx, decrease new generation size with larger -XX:NewRatio value, and/or use -XX:+UseCompressedOops
Sat Feb 28 02:25:32 PST 2015 INFO 7700000 X/tag rows computed (5975MB heap)
Sat Feb 28 02:25:32 PST 2015 WARNING Memory is low. Increase heap size with -Xmx, decrease new generation size with larger -XX:NewRatio value, and/or use -XX:+UseCompressedOops
Sat Feb 28 02:32:12 PST 2015 INFO 7800000 X/tag rows computed (5919MB heap)
Sat Feb 28 02:32:12 PST 2015 WARNING Memory is low. Increase heap size with -Xmx, decrease new generation size with larger -XX:NewRatio value, and/or use -XX:+UseCompressedOops
Sat Feb 28 02:39:46 PST 2015 INFO 7900000 X/tag rows computed (5909MB heap)
Sat Feb 28 02:39:46 PST 2015 WARNING Memory is low. Increase heap size with -Xmx, decrease new generation size with larger -XX:NewRatio value, and/or use -XX:+UseCompressedOops
Sat Feb 28 02:48:25 PST 2015 INFO 8000000 X/tag rows computed (5974MB heap)
Sat Feb 28 02:48:25 PST 2015 WARNING Memory is low. Increase heap size with -Xmx, decrease new generation size with larger -XX:NewRatio value, and/or use -XX:+UseCompressedOops
Sat Feb 28 02:58:51 PST 2015 INFO 8100000 X/tag rows computed (5971MB heap)
Sat Feb 28 02:58:51 PST 2015 WARNING Memory is low. Increase heap size with -Xmx, decrease new generation size with larger -XX:NewRatio value, and/or use -XX:+UseCompressedOops
Sat Feb 28 03:11:47 PST 2015 INFO 8200000 X/tag rows computed (5943MB heap)
Sat Feb 28 03:11:47 PST 2015 WARNING Memory is low. Increase heap size with -Xmx, decrease new generation size with larger -XX:NewRatio value, and/or use -XX:+UseCompressedOops
Sat Feb 28 03:28:50 PST 2015 INFO 8300000 X/tag rows computed (5956MB heap)
Sat Feb 28 03:28:50 PST 2015 WARNING Memory is low. Increase heap size with -Xmx, decrease new generation size with larger -XX:NewRatio value, and/or use -XX:+UseCompressedOops
Sat Feb 28 03:38:35 PST 2015 WARNING Unexpected error in execution
com.cloudera.oryx.computation.common.JobException: java.lang.OutOfMemoryError: GC overhead limit exceeded
	at com.cloudera.oryx.als.computation.local.FactorMatrix.call(FactorMatrix.java:63)
	at com.cloudera.oryx.als.computation.local.ALSLocalGenerationRunner.runSteps(ALSLocalGenerationRunner.java:98)
	at com.cloudera.oryx.computation.common.GenerationRunner.runGeneration(GenerationRunner.java:236)
	at com.cloudera.oryx.computation.common.GenerationRunner.call(GenerationRunner.java:109)
	at com.cloudera.oryx.computation.PeriodicRunner.run(PeriodicRunner.java:214)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
	at org.apache.commons.math3.linear.Array2DRowRealMatrix.copyOut(Array2DRowRealMatrix.java:529)
	at org.apache.commons.math3.linear.Array2DRowRealMatrix.getData(Array2DRowRealMatrix.java:254)
	at com.cloudera.oryx.common.math.QRDecomposition.<init>(QRDecomposition.java:107)
	at com.cloudera.oryx.common.math.RRQRDecomposition.<init>(RRQRDecomposition.java:89)
	at com.cloudera.oryx.common.math.CommonsMathLinearSystemSolver.getSolver(CommonsMathLinearSystemSolver.java:37)
	at com.cloudera.oryx.common.math.MatrixUtils.getSolver(MatrixUtils.java:126)
	at com.cloudera.oryx.als.common.factorizer.als.AlternatingLeastSquares$Worker.call(AlternatingLeastSquares.java:489)
	at com.cloudera.oryx.als.common.factorizer.als.AlternatingLeastSquares$Worker.call(AlternatingLeastSquares.java:397)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	... 3 more

Sat Feb 28 03:39:35 PST 2015 INFO Running new generation due to elapsed time: 209 minutes
Sat Feb 28 03:39:35 PST 2015 INFO Starting run for instance 
Sat Feb 28 03:39:35 PST 2015 INFO Last complete generation is 0
Sat Feb 28 03:39:36 PST 2015 INFO No need to make a new generation
Sat Feb 28 03:39:36 PST 2015 INFO Generation 2 is old enough to proceed
Sat Feb 28 03:39:36 PST 2015 INFO Running generation 1
Sat Feb 28 03:39:40 PST 2015 INFO Copying /tmp/1425123576534-0/oryx-append-7628769517257349553.csv.gz to /tmp/1425123576534-1
Sat Feb 28 03:39:40 PST 2015 INFO Copying /tmp/1425123576534-0/oryx-append-2983995934077746346.csv.gz to /tmp/1425123576534-1
Sat Feb 28 03:39:41 PST 2015 INFO Copying /tmp/1425123576534-0/oryx-append-1155998814849386940.csv.gz to /tmp/1425123576534-1
Sat Feb 28 03:39:41 PST 2015 INFO Copying /tmp/1425123576534-0/oryx-append-1103482790086927796.csv.gz to /tmp/1425123576534-1
Sat Feb 28 03:39:41 PST 2015 INFO Copying /tmp/1425123576534-0/oryx-append-6832095713207111419.csv.gz to /tmp/1425123576534-1
Sat Feb 28 03:39:42 PST 2015 INFO Copying /tmp/1425123576534-0/oryx-append-5243247952662826167.csv.gz to /tmp/1425123576534-1
Sat Feb 28 03:39:42 PST 2015 INFO Reading /tmp/1425123576534-3/0.csv.gz
Sat Feb 28 03:40:01 PST 2015 INFO Pruning near-zero entries
Sat Feb 28 03:40:02 PST 2015 INFO No input files in /tmp/1425123576534-5
Sat Feb 28 03:40:02 PST 2015 INFO Pruning near-zero entries
Sat Feb 28 03:40:03 PST 2015 INFO Reading /tmp/1425123576534-4/0.csv.gz
Sat Feb 28 03:40:09 PST 2015 INFO Reading /tmp/1425123576534-1/oryx-append-7628769517257349553.csv.gz
Sat Feb 28 03:41:10 PST 2015 INFO Reading /tmp/1425123576534-1/oryx-append-1155998814849386940.csv.gz
Sat Feb 28 03:41:50 PST 2015 INFO Reading /tmp/1425123576534-1/oryx-append-1103482790086927796.csv.gz
Sat Feb 28 03:42:19 PST 2015 INFO Reading /tmp/1425123576534-1/oryx-append-2983995934077746346.csv.gz
Sat Feb 28 03:43:22 PST 2015 INFO Reading /tmp/1425123576534-1/oryx-append-6832095713207111419.csv.gz
Sat Feb 28 03:44:35 PST 2015 INFO Reading /tmp/1425123576534-1/oryx-append-5243247952662826167.csv.gz
Sat Feb 28 03:45:31 PST 2015 INFO Pruning near-zero entries
Sat Feb 28 03:45:34 PST 2015 INFO Building factorization...
Sat Feb 28 03:45:34 PST 2015 INFO Starting from new, random Y matrix
Sat Feb 28 03:45:34 PST 2015 INFO Constructed initial Y
Sat Feb 28 03:45:34 PST 2015 INFO Executing ALS with parallelism 4
Sat Feb 28 03:47:13 PST 2015 INFO 100000 X/tag rows computed (4885MB heap)
Sat Feb 28 03:49:46 PST 2015 INFO 200000 X/tag rows computed (4786MB heap)
...
Sat Feb 28 05:21:19 PST 2015 INFO 1700000 X/tag rows computed (4883MB heap)
Sat Feb 28 05:49:59 PST 2015 INFO 1800000 X/tag rows computed (4882MB heap)
Sat Feb 28 05:57:33 PST 2015 WARNING Unexpected error in execution
com.cloudera.oryx.computation.common.JobException: java.lang.OutOfMemoryError: GC overhead limit exceeded
	at com.cloudera.oryx.als.computation.local.FactorMatrix.call(FactorMatrix.java:63)
	at com.cloudera.oryx.als.computation.local.ALSLocalGenerationRunner.runSteps(ALSLocalGenerationRunner.java:98)
	at com.cloudera.oryx.computation.common.GenerationRunner.runGeneration(GenerationRunner.java:236)
	at com.cloudera.oryx.computation.common.GenerationRunner.call(GenerationRunner.java:109)
	at com.cloudera.oryx.computation.PeriodicRunner.run(PeriodicRunner.java:214)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
	at org.apache.commons.math3.linear.Array2DRowRealMatrix.<init>(Array2DRowRealMatrix.java:62)
	at org.apache.commons.math3.linear.Array2DRowRealMatrix.createMatrix(Array2DRowRealMatrix.java:145)
	at org.apache.commons.math3.linear.AbstractRealMatrix.transpose(AbstractRealMatrix.java:612)
	at com.cloudera.oryx.common.math.QRDecomposition.<init>(QRDecomposition.java:107)
	at com.cloudera.oryx.common.math.RRQRDecomposition.<init>(RRQRDecomposition.java:89)
	at com.cloudera.oryx.common.math.CommonsMathLinearSystemSolver.getSolver(CommonsMathLinearSystemSolver.java:37)
	at com.cloudera.oryx.common.math.MatrixUtils.getSolver(MatrixUtils.java:126)
	at com.cloudera.oryx.als.common.factorizer.als.AlternatingLeastSquares$Worker.call(AlternatingLeastSquares.java:489)
	at com.cloudera.oryx.als.common.factorizer.als.AlternatingLeastSquares$Worker.call(AlternatingLeastSquares.java:397)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	... 3 more

 

Master Collaborator

Have you set it to start a generation based on the amount of input received? That could be triggering the new computation.

 

That said, are you sure it only has part of the input? It's possible the zipped file sizes aren't that comparable.

 

Yes, you simply don't have enough memory allocated to your JVM. Your system memory doesn't matter if you haven't let the JVM use much of it. This is in local mode, right? You need to use -Xmx to give it more heap.
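For example, launch with a larger heap, something like "java -Xmx12g -jar <computation-layer-jar> ..." (the jar name and heap size here are placeholders), and you can verify from inside the JVM what it actually received:

public class HeapCheck {
    public static void main(String[] args) {
        // Prints the maximum heap the JVM was actually granted. If this is far
        // below your machine's 24 GB, the -Xmx setting is the thing to raise.
        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxHeapMb + " MB");
    }
}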

 

Yes, it will use different tmp directories for different jobs. That's normal.