Member since
09-29-2015
38
Posts
0
Kudos Received
0
Solutions
01-04-2019
04:19 AM
I used the command below to copy 36 TB to Azure Blob Storage from an HDFS snapshot: HADOOP_CLIENT_OPTS="-Xmx40G" hadoop distcp -update -delete $SNAPSHOT_PATH wasbs://buclusterbackup@blobplatformdataxe265ecb.blob.core.windows.net/sep_backup/application_data — I am getting Azure exception errors and a Java IO error. I re-ran with -skipcrccheck and still hit the same error.
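Large distcp runs to Azure Blob often fail on transient storage exceptions when too many mappers hammer the account at once. A hedged re-run sketch, keeping the command from the post above — the mapper count (-m 50) is an illustrative assumption to tune for your cluster, not a verified setting; -strategy dynamic and -skipcrccheck are standard distcp options:

```shell
# Sketch: bound the number of concurrent copy mappers and let the
# dynamic strategy rebalance work across them. -m 50 is an assumption.
export HADOOP_CLIENT_OPTS="-Xmx40G"
hadoop distcp \
  -update -delete \
  -skipcrccheck \
  -strategy dynamic \
  -m 50 \
  "$SNAPSHOT_PATH" \
  wasbs://buclusterbackup@blobplatformdataxe265ecb.blob.core.windows.net/sep_backup/application_data
```

Fewer simultaneous writers usually reduces Azure-side throttling errors at the cost of a longer copy window.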
08-24-2017
01:40 PM
The amount of memory to assign to the JVM is relative to the number of documents in the Solr core nav_elements, per the documentation. Check the role log to get this number for your instance. The JVM sizing formula is (number of documents in nav_elements) × 200, which gives a rough estimate of what is required for normal operation.
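As a quick sanity check, the sizing rule above (documents in nav_elements × 200 bytes) can be written as a one-line calculation; the 50-million-document count below is purely an illustrative assumption, not taken from any real instance:

```python
def solr_heap_estimate_bytes(num_nav_elements_docs: int) -> int:
    """Rough JVM heap estimate: ~200 bytes per document in nav_elements."""
    return num_nav_elements_docs * 200

# Hypothetical example: 50 million documents in nav_elements.
docs = 50_000_000
heap_bytes = solr_heap_estimate_bytes(docs)
print(f"{heap_bytes / 1024**3:.2f} GiB")  # -> 9.31 GiB
```

Round the result up to a comfortable -Xmx value; the formula is only a floor for normal operation.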
04-02-2017
09:24 PM
1 Kudo
Thanks. Indeed, in my case the memory I assigned to the executor was overridden by the memory passed in the workflow, so the executors were running with 1 GB instead of 8 GB. I fixed it by passing the memory in the workflow XML.
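For anyone hitting the same override: a hedged sketch of what setting executor memory inside an Oozie Spark action's workflow XML can look like. The action name, jar path, and property names here are hypothetical placeholders, not the poster's actual workflow; the point is only that spark-opts in the workflow wins over memory configured elsewhere:

```xml
<!-- Hypothetical Oozie spark action; --executor-memory here is what
     overrides any executor memory set outside the workflow. -->
<action name="spark-job">
    <spark xmlns="uri:oozie:spark-action:0.1">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <master>yarn-cluster</master>
        <name>my-spark-job</name>
        <jar>${appJar}</jar>
        <spark-opts>--executor-memory 8G --driver-memory 2G</spark-opts>
    </spark>
    <ok to="end"/>
    <error to="fail"/>
</action>
```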
12-13-2016
11:01 AM
Hi Ranan, because this is an older thread that is already marked as solved, let's keep this conversation on the other thread you opened: http://community.cloudera.com/t5/Advanced-Analytics-Apache-Spark/Debug-Spark-program-in-Eclipse-Data-in-AWS/m-p/48472#U48472
05-10-2016
08:17 PM
Not yet