Member since: 06-09-2016
Posts: 529
Kudos Received: 129
Solutions: 104
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1788 | 09-11-2019 10:19 AM |
| | 9427 | 11-26-2018 07:04 PM |
| | 2561 | 11-14-2018 12:10 PM |
| | 5563 | 11-14-2018 12:09 PM |
| | 3244 | 11-12-2018 01:19 PM |
06-12-2018
02:12 PM
@Anpan K Usually after a saved configuration change or a new service install, Ambari will ask you to restart one or more services. Ambari is aware that certain actions performed on service Y require a restart of service X, hence it marks the restart as required. Knowing why a restart is required sometimes means knowing what the latest changes were. Check the service's configuration versions and see whether any changes were made recently, or whether a new service was added. HTH
06-12-2018
01:55 PM
@JAy PaTel Try the PuTTY alternative pscp; hopefully this will work. Alternatively, you can create a shared folder in VirtualBox and use it to move files from Windows to the virtual machine.
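As a rough sketch of both options (the forwarded port 2222 and target path are assumptions based on the sandbox setup in this thread, and the shared-folder name is a placeholder):

```shell
# Option 1: copy the file from Windows with pscp (run from the Windows
# command prompt; assumes PuTTY's pscp.exe is on PATH and the VM's SSH
# port is forwarded to 2222 on localhost).
pscp -P 2222 datafile.txt root@127.0.0.1:/root/

# Option 2: mount a VirtualBox shared folder inside the guest
# (assumes a shared folder named "hostshare" was added in the VM settings
# and Guest Additions are installed).
mkdir -p /mnt/hostshare
mount -t vboxsf hostshare /mnt/hostshare
```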
06-12-2018
01:51 PM
@Anpan K This probably means you made a configuration change in Ambari. Ambari shows this alert so that you know the new configuration still needs to be pushed to the nodes. During the restart, Ambari will push the new configuration to the nodes and the alert will disappear. You should restart when possible. HTH *** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
06-12-2018
12:36 PM
3 Kudos
@Vinay K You need to enable user impersonation in the Hive configuration. Set hive.server2.enable.doAs=true, then save and restart HiveServer2. When doAs is set to false, queries execute as the hive service user and not as the end user; setting it to true makes queries run as the end user instead. In Ambari you can make this change under Hive > Configs > Settings. HTH *** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
06-12-2018
12:04 PM
@JAy PaTel Try 127.0.0.1 instead of localhost. Also note that scp takes the port with a capital -P (lowercase -p preserves file times): scp -v -P 2222 datafile.txt root@127.0.0.1:/ HTH *** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
06-12-2018
11:59 AM
@JAy PaTel The above means that during startup of the Spark application the SparkContext was not initialized. I've seen this particular error in cases where the application code does something before creating the SparkContext/SparkSession, and that pre-creation code is delayed (for whatever reason), leading to this issue. I recommend you review your code in detail. Take Oozie out of the picture, and also try running it in yarn-cluster and/or yarn-client mode; perhaps it will fail there as well, which will simplify the troubleshooting. HTH
06-11-2018
05:33 PM
@JAy PaTel The error message you see is very generic. When dealing with this type of error, if possible you can set yarn.nodemanager.delete.debug-delay-sec=600. This will give you some time to go to the node where the container is failing and dig into the YARN local dirs to hopefully find the actual cause of the job failure. Check under /hadoop/yarn/local/usercache for the application id and any log files that could lead to a better understanding of the problem. HTH
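A sketch of that debugging flow, assuming the default local dir of /hadoop/yarn/local; the application id below is a placeholder, substitute your own:

```shell
# In yarn-site (via Ambari), keep finished containers' local dirs around
# for 10 minutes, then restart the NodeManagers:
#   yarn.nodemanager.delete.debug-delay-sec=600

# On the NodeManager where the container ran, look for the application's
# working dirs and any leftover log files:
ls -lR /hadoop/yarn/local/usercache/*/appcache/application_1528000000000_0001

# If log aggregation is enabled, also pull the aggregated logs:
yarn logs -applicationId application_1528000000000_0001
```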
06-11-2018
05:26 PM
@priyal patel First, make sure you know whether the OOM is happening in the driver or in an executor. You can find this by looking at the logs. To test, I suggest you increase --driver-memory to 10g or even 20g and see what happens. Also try running in yarn-client mode instead of yarn-cluster: if the OOM error shows up on the stdout of spark-submit, you will know the driver is running out of memory. Otherwise, check yarn logs -applicationId <appId> to see what happened on the executor side. HTH *** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
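The driver-memory test could look like the sketch below; the jar, class name and application id are placeholders, not details from the original question:

```shell
# Re-run in yarn-client mode with a larger driver heap. If the OOM now
# appears directly on spark-submit's stdout, the driver is the culprit.
spark-submit \
  --master yarn-client \
  --driver-memory 10g \
  --class com.example.MyApp \
  myapp.jar

# Otherwise look for OOM traces on the executor side
# (application id is a placeholder):
yarn logs -applicationId application_1528000000000_0002 | grep -i 'OutOfMemory'
```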
06-11-2018
02:32 PM
@RAUI 1. "Currently I am writing a dataframe into a Hive table using insertInto() and mode("append"). I am able to write the data into the Hive table, but I am not sure that is the correct way to do it." Please review the following link; I hope it helps address this question: https://stackoverflow.com/questions/47844808/what-are-the-differences-between-saveastable-and-insertinto-in-different-savemod 2. For the exception, I would suggest you open a separate thread and add more information, including the full error stack, the spark client command-line arguments, and the code you are running that is failing. HTH
06-11-2018
01:45 PM
@Manikandan Jeyabal Add the following settings to your custom spark-defaults:

spark.driver.extraClassPath=/usr/hdp/current/hadoop-client/lib/snappy*.jar
spark.driver.extraLibraryPath=/usr/hdp/current/hadoop-client/lib/native

There is another thread with the same suggestion. HTH *** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.