Member since: 03-31-2017
Posts: 57
Kudos Received: 1
Solutions: 0
07-05-2018 09:51 AM
Hi @Felix Albani, thanks.
06-08-2018 11:48 AM
@Jay Kumar SenSharma, thanks.
06-14-2018 07:44 AM
Hi @Felix Albani, I set driver memory to 20 GB. I tried the following spark-submit parameters:

./bin/spark-submit --driver-memory 20g --executor-cores 3 --num-executors 20 --executor-memory 2g --conf spark.yarn.executor.memoryOverhead=1024 --conf spark.yarn.driver.memoryOverhead=1024 --class org.apache.TransformationOper --master yarn-cluster /home/hdfs/priyal/spark/TransformationOper.jar

Cluster configuration: 1 master node (r3.xlarge) and 1 worker node (r3.xlarge), each with 4 vCPUs, 30 GB memory, and 40 GB storage.

I'm still seeing the same issue: the Spark job stays in the RUNNING state and YARN memory is 95% used.
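A quick back-of-the-envelope check (a sketch only; actual container sizes depend on cluster settings such as yarn.scheduler.minimum-allocation-mb and yarn.nodemanager.resource.memory-mb, which aren't shown here) suggests the request is far larger than what a single r3.xlarge worker can serve, which would explain YARN sitting near-full while the job stays in RUNNING:

```python
# Rough YARN memory arithmetic for the spark-submit command above.
# Assumes 1024 MB overhead per container as set via the two
# memoryOverhead confs; container rounding is ignored.

driver_gb = 20 + 1            # --driver-memory 20g + 1 GB driver overhead
executor_gb = 2 + 1           # --executor-memory 2g + 1 GB executor overhead
num_executors = 20            # --num-executors 20

requested_gb = driver_gb + num_executors * executor_gb
worker_capacity_gb = 30       # single r3.xlarge worker node (30 GB RAM)

print(f"Requested ~{requested_gb} GB, available ~{worker_capacity_gb} GB")
```

With these numbers the application asks YARN for roughly 81 GB of containers against about 30 GB of NodeManager capacity, so most executors simply wait for containers that can never be allocated.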
03-23-2018 09:50 AM
Hi @Rahul Soni, I edited the comment. Please check it.
03-16-2018 05:18 AM
@priyal patel, Atlas currently doesn't provide lineage for Pig scripts. All supported components are listed here: https://hortonworks.com/apache/atlas/#section_1
06-02-2017 08:46 PM
7 Kudos
The posts below might help:
https://community.hortonworks.com/questions/63761/sqoop-import-to-hive-again-stroing-repeted-recorde.html
https://community.hortonworks.com/questions/51508/sqoop-imported-more-records-than-source.html