Member since: 04-11-2018
Posts: 47
Kudos Received: 0
Solutions: 1

My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 19829 | 06-12-2018 11:26 AM |
06-04-2020
08:28 AM
Probably worth pointing out that the behaviour of insertInto and saveAsTable can differ under certain conditions (see the sketch below):
https://towardsdatascience.com/understanding-the-spark-insertinto-function-1870175c3ee9
https://stackoverflow.com/questions/47844808/what-are-the-differences-between-saveastable-and-insertinto-in-different-savemod
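A minimal Scala sketch of that behavioural difference, assuming a hypothetical, pre-existing Hive table db.people(name STRING, city STRING); the table name and data are illustrative only:

```scala
// Minimal sketch, assuming Hive support and an existing hypothetical
// table db.people(name STRING, city STRING).
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("insertInto-vs-saveAsTable")
  .enableHiveSupport()
  .getOrCreate()
import spark.implicits._

// DataFrame columns deliberately in the opposite order to the table.
val df = Seq(("London", "Alice"), ("Paris", "Bob")).toDF("city", "name")

// insertInto matches columns by POSITION: "city" values land in the
// table's first column (name) and vice versa -- a silent mismatch.
df.write.mode("append").insertInto("db.people")

// saveAsTable in append mode matches columns by NAME, so the same
// DataFrame is appended correctly despite the different column order.
df.write.mode("append").saveAsTable("db.people")
```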
08-01-2018
11:44 AM
For those who use the Microsoft Edge password manager and want to keep their operating system up to date: upgrades become available through Windows from time to time, and after upgrading, the passwords saved in the previous installation remain secure.
06-11-2018
02:32 PM
@RAUI
1. "Currently I am writing a dataframe into a Hive table using insertInto() and mode("append"). I am able to write the data into the Hive table, but I am not sure that is the correct way to do it." Please review the following link; I'm hoping it helps address this question (see also the sketch below): https://stackoverflow.com/questions/47844808/what-are-the-differences-between-saveastable-and-insertinto-in-different-savemod
2. For the exception, I would suggest you open a separate thread and add more information, including the full error stack, the spark client command-line arguments, and the code you are running that is failing. HTH
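To illustrate point 1, here is a minimal Scala sketch of the append pattern described in the question; the table name db.events is an illustrative assumption, not from the original thread:

```scala
// Minimal sketch: append a DataFrame to an existing Hive table.
// Assumes Hive support and a hypothetical table db.events(id INT, value STRING).
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("append-to-hive")
  .enableHiveSupport()
  .getOrCreate()
import spark.implicits._

val df = Seq((1, "a"), (2, "b")).toDF("id", "value")

// mode("append") + insertInto() adds the rows to the existing table;
// columns are matched by position, so keep the DataFrame's column
// order aligned with the table definition.
df.write.mode("append").insertInto("db.events")
```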
06-12-2018
11:25 AM
@RAUI
Did the answer help resolve your issue? Take a moment to log in and click the "Accept" button below to accept the answer. That would be a great help to community users trying to find solutions quickly for these kinds of issues, and it closes this thread.
09-04-2018
04:41 PM
@RAUI Yes, there is another way of achieving this. You can use the copy() method from the FileUtil class and pass your FileSystem object to it to copy your files from the source HDFS location to the target. As with rename(), you will need to ensure your target directory exists before calling copy(). FileUtil.copy() has a signature where you provide a source and a destination FileSystem; in this case you would pass the same FS object for both, since you are copying files to a different location on the same HDFS. There is also a boolean option to delete the source files after the copy if that fits your use case (see the sketch below). Here is a link to the FileUtil API: http://hadoop.apache.org/docs/r2.8.0/api/org/apache/hadoop/fs/FileUtil.html
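A minimal Scala sketch of that approach, using hypothetical paths; the same FileSystem object is passed as both the source and destination FS since the copy stays within one HDFS:

```scala
// Minimal sketch of copying a file within the same HDFS via FileUtil.copy().
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, FileUtil, Path}

val conf = new Configuration()
val fs = FileSystem.get(conf)

// Hypothetical paths for illustration only.
val src = new Path("/data/staging/part-00000")
val dst = new Path("/data/final/part-00000")

// Ensure the target directory exists before copying.
fs.mkdirs(dst.getParent)

// deleteSource = true would remove the source after a successful copy;
// overwrite = true replaces an existing file at the destination.
FileUtil.copy(fs, src, fs, dst, /* deleteSource = */ false, /* overwrite = */ true, conf)
```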
05-29-2018
02:43 PM
@RAUI If the above answer helped you, please consider clicking the "Accept" button to close this thread.
01-27-2019
04:01 PM
I sporadically see these errors too. They only started happening after I upgraded from Spark 2.2 to Spark 2.4. Hopefully that helps someone figure out the actual problem.
04-20-2018
12:39 PM
@Matt Clarke, Really appreciate your detailed answers as always.
04-17-2018
01:34 PM
@Matt Clarke, that seems like quite a refined approach. Happy to see your response.
04-11-2018
04:49 PM
@Rahoul A
Progress of defects will not be tracked/updated in this forum. You can track the progress of this Jira through the Jira link provided above. Can you close this thread by clicking "accept" on an answer? Thank you, Matt