Member since: 09-20-2018
Posts: 360
Kudos Received: 0
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2532 | 05-14-2019 10:47 AM
06-19-2020 09:08 AM
Hi, Have you enabled log aggregation in your configuration? If not, could you please enable it? Thanks, AKR
06-19-2020 06:34 AM
Hi, This seems to be a type conversion issue with the timestamp field. Have you tried casting the timestamp to a string while populating the Spark DataFrame, and then converting that string back to a Spark timestamp datatype? That is, after fetching the value from the query, convert the timestamp to a string in the Spark DataFrame, then reconvert that string to a Spark timestamp, instead of pushing values directly from Netezza to Spark. Going through a string avoids datatype compatibility issues, so this should work. Thanks, AKR
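A minimal sketch of the string round-trip idea in plain Python (standard library only, not a live Spark session; the timestamp value and format string are made up for illustration — on the Spark side the re-parse step would typically be `to_timestamp(col, "yyyy-MM-dd HH:mm:ss")`):

```python
from datetime import datetime

# Hypothetical timestamp value as fetched by the source query.
raw = datetime(2020, 6, 19, 9, 8, 0)

# Step 1: render the timestamp as a plain string while loading the DataFrame,
# sidestepping direct Netezza-to-Spark timestamp type mapping.
as_string = raw.strftime("%Y-%m-%d %H:%M:%S")

# Step 2: parse the string back into a timestamp on the Spark side.
restored = datetime.strptime(as_string, "%Y-%m-%d %H:%M:%S")

print(as_string)        # 2020-06-19 09:08:00
print(restored == raw)  # True
```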
06-18-2020 09:40 AM
Hi Madhura, The following link may be useful for your question: https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_installing_manually_book/content/rpm-chap1-11.html To calculate the maximum number of containers allowed per node, the following formula can be used: # of containers = min(2*CORES, 1.8*DISKS, (Total available RAM) / MIN_CONTAINER_SIZE), where MIN_CONTAINER_SIZE is the minimum container size (in RAM). This value depends on the amount of RAM available: on smaller-memory nodes, the minimum container size should also be smaller. Thanks, AKR
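As a worked example of the formula above (the node sizes here are hypothetical, chosen only for illustration):

```python
def max_containers(cores, disks, total_ram_gb, min_container_size_gb):
    """Rule of thumb: min(2*CORES, 1.8*DISKS, RAM / MIN_CONTAINER_SIZE)."""
    return min(2 * cores, 1.8 * disks, total_ram_gb / min_container_size_gb)

# Hypothetical node: 12 cores, 6 disks, 48 GB RAM, 2 GB minimum container size.
# Terms: 2*12 = 24, 1.8*6 = 10.8, 48/2 = 24.0 -> the disk term is the limit.
print(round(max_containers(12, 6, 48, 2), 1))  # 10.8
```

Note the smallest of the three terms wins, so on this node the disk count, not CPU or RAM, caps the container count.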
05-07-2020 06:53 AM
Hi, We understand that the jobs are not moving from the ACCEPTED state to the RUNNING state. Could you please share the full YARN logs and ResourceManager logs so we can check for errors? Jobs will also get stuck if there are not enough resources in the cluster; this can be checked from the ResourceManager web UI. Thanks, AKR
05-06-2020 10:26 AM
Hi, Did you check whether the queue has enough resources available? Could you try submitting the job to other queues and let us know the result? Thanks, AKR
01-07-2020 01:41 AM
Hi, We understand that logs are not being deleted even though you have enabled the spark.history.fs properties. Did you find any related errors in the Spark History Server (SHS) logs? Thanks, AKR
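For reference, the history-cleaner settings usually look like this in spark-defaults.conf on the Spark History Server host (the values shown are the Spark defaults; your retention needs may differ):

```
spark.history.fs.cleaner.enabled   true
spark.history.fs.cleaner.interval  1d    # how often the cleaner checks for old logs
spark.history.fs.cleaner.maxAge    7d    # event logs older than this are deleted
```

If the logs still are not being deleted with these set, the SHS log is the place to look for cleaner errors.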
01-07-2020 01:34 AM
Hi, Does the issue happen for a particular queue? Could you please let us know whether it happens for a particular job alone? It would help if you could share the application logs, ResourceManager logs, and fair-scheduler.xml for further analysis. Thanks, AKR
01-06-2020 09:37 AM
Hi, The link below has more information on your questions: https://spark.apache.org/docs/latest/configuration.html Thanks, Arun
01-06-2020 09:26 AM
Hi, As mentioned in the previous posts, did you try increasing the memory, and did that resolve the issue? Please let us know if you are still facing any problems. Thanks, AKR
01-05-2020 06:57 AM
Hi, Did you try disabling SPNEGO authentication in the configuration properties and then restarting the service? Thanks, AKR