Member since: 08-29-2018
Posts: 109
Kudos Received: 3
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3286 | 11-14-2019 02:54 AM
 | 10830 | 11-05-2019 07:51 PM
03-28-2021 07:52 PM
Hi @jake_allston, did you find the culprit? I've got a similar issue: spark-shell took 5 minutes to load. It's a new cluster; this does not happen on my other cluster.
02-10-2021 09:41 AM
It looks like you are running the Spark shell on a Windows machine, perhaps your local laptop. Is the hostname "dclvmsbigdmd01" mentioned anywhere in your code? If not, where does 172.30.294.196 (hive.metastore.uris) come from? Does this IP resolve to the name dclvmsbigdmd01? Can you check whether the host/domain is reachable from your local machine?
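A quick way to run those checks from the local machine is a small script like the one below (a minimal sketch: the hostname is taken from the question above, and 9083 is assumed as the usual Hive metastore Thrift port):

```python
import socket

METASTORE_PORT = 9083  # default Hive metastore Thrift port (assumption)

def can_resolve(host):
    """True if `host` resolves via DNS or the hosts file."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

def port_reachable(host, port, timeout=3):
    """True if a TCP connection to host:port succeeds within `timeout` seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

host = "dclvmsbigdmd01"  # hostname from the error in the question
print(host, "resolves:", can_resolve(host))
print(host, "metastore reachable:", port_reachable(host, METASTORE_PORT))
```

If resolution fails, adding the host to the local hosts file (or using a fully qualified name) is usually the first thing to try.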
07-14-2020 06:11 AM
Hello, AFAIK the Stanford CoreNLP wrapper for Apache Spark should not be a bottleneck in terms of parallel processing; Spark takes care of running it in parallel across multiple documents. Regardless of the degree of parallelism, the total number of API requests to the CoreNLP server remains the same.
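A toy illustration of that point (plain Python standing in for Spark, with a hypothetical `annotate` function standing in for a CoreNLP call): splitting the documents across more partitions changes who does the work, not how much of it there is.

```python
calls = {"count": 0}

def annotate(doc):
    """Stand-in for one CoreNLP annotation request; counts invocations."""
    calls["count"] += 1
    return doc.upper()

def partition(docs, n):
    """Split `docs` into n roughly equal chunks, like Spark partitions."""
    docs = list(docs)
    k, r = divmod(len(docs), n)
    out, i = [], 0
    for p in range(n):
        size = k + (1 if p < r else 0)
        out.append(docs[i:i + size])
        i += size
    return out

docs = ["doc%d" % i for i in range(12)]

# Process with 2 "partitions", then with 6: the total number of
# annotate calls (API requests) is 12 either way.
for n in (2, 6):
    calls["count"] = 0
    results = [annotate(d) for part in partition(docs, n) for d in part]
    print(n, "partitions ->", calls["count"], "annotate calls")
```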
07-14-2020 04:02 AM
Hey, could you share the exact output/stack trace you receive? If the issue is in the Web UI, could you also share a screenshot of what you see?
05-08-2020 10:55 PM
Okay, let me know if changing HiveContext to SparkContext makes any difference. It could give us a lead toward a resolution.
05-08-2020 02:20 AM
Hi @clvi, try adding `-appOwner <username>` to the yarn logs command. However, I think the application entries have been erased from the RM state store, probably due to an RM state store restore.
03-23-2020 05:29 AM
Hello Rishab, can you please describe the exact error you are facing?
11-14-2019 02:56 PM
@gsthina, thanks for your help. Yes, I have checked it already. The utility at [1] says it can convert Zeppelin notebooks to Jupyter, but my requirement is the reverse: converting Jupyter to Zeppelin. I also need to convert Python code (Jupyter notebook) to PySpark (Zeppelin notebook), which this converter does not support either; you can check the description.
11-14-2019 02:54 AM
Hey @avengers, just a thought that could add some more value to the question here. Spark SQL uses a Hive Metastore to manage the metadata of persistent relational entities (e.g. databases, tables, columns, partitions), stored in a relational database for fast access [1]. Also, I don't think the Metastore would crash if it is used along with Hive-on-Spark. [1] https://jaceklaskowski.gitbooks.io/mastering-spark-sql/spark-sql-hive-metastore.html
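For reference, Spark locates that shared Metastore through the `hive.metastore.uris` property; a typical `hive-site.xml` fragment on the Spark side looks like the following (the hostname is a placeholder, and 9083 is the conventional metastore Thrift port):

```xml
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <!-- Thrift endpoint of the shared Hive Metastore service (placeholder host) -->
    <value>thrift://metastore-host.example.com:9083</value>
  </property>
</configuration>
```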
11-05-2019 08:21 PM
Hi @wret_1311, thanks for your response, and I appreciate you confirming the solution. I'm glad it helped you 🙂