We're attempting to execute a basic Spark job to read/write data from Solr, using the following environment:
- CDP version: 7.1.9
- Spark: Spark3
- Solr: 8.11
- Spark-Solr Connector: /opt/cloudera/parcels/SPARK3/lib/spark3/spark-solr/spark-solr-3.9.3000.3.3.7191000.0-78-shaded.jar
When we try to interact with Solr through Spark, the job hangs indefinitely, producing no errors and no results. Other components, such as Hive and HBase, integrate smoothly with Spark, and we’re using a valid Kerberos ticket that authenticates successfully against the other Hadoop components. We’ve also tested REST API calls to Solr via both curl and Python’s requests library, and we can retrieve data with the same Kerberos ticket.
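For reference, the curl/requests sanity check can be scripted along these lines. This is a minimal sketch, not our exact test: the base URL and collection name are placeholders, and it assumes the third-party requests and requests-gssapi packages for SPNEGO (Kerberos) authentication.

```python
def solr_select_url(base_url, collection, query="*:*", rows=5):
    """Build a Solr select URL for a quick REST sanity check."""
    return f"{base_url}/solr/{collection}/select?q={query}&rows={rows}&wt=json"

def count_docs(base_url, collection):
    """Query Solr over HTTP using the current Kerberos ticket (run kinit first)."""
    # Optional third-party dependencies, imported lazily:
    #   pip install requests requests-gssapi
    import requests
    from requests_gssapi import HTTPSPNEGOAuth

    resp = requests.get(solr_select_url(base_url, collection),
                        auth=HTTPSPNEGOAuth(), timeout=30)
    resp.raise_for_status()
    return resp.json()["response"]["numFound"]
```

If this kind of call returns documents while the Spark job hangs, it at least confirms the HTTP path and the ticket are fine outside Spark.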
The problem appears isolated to Spark’s connection with Solr, as all other systems interact as expected. Has anyone experienced a similar issue or have ideas on what might be causing this?
Here’s the Spark code we’re trying:
solr_options = {
    "zkhost": "zkURL-01.orgis.ie:2181,zkURL-02.orgis.ie:2181,zkURL.orgis.ie:2181/solr",
    "collection": "collection_phoectic_test2"
}
# Read data from Solr
df = spark.read.format("solr").options(**solr_options).load()
df.show()
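One thing I'm not certain about (an assumption on my part, not confirmed for this exact parcel) is whether the connector is picking up a JAAS configuration: spark-solr talks to Solr and ZooKeeper via SolrJ, which under Kerberos usually needs a JAAS file visible to both the driver and the executors. A sketch of that wiring, with the keytab, principal, and script name as placeholders:

```shell
# Hypothetical JAAS file; SolrJ/ZooKeeper look up the "Client" section by default.
cat > jaas-client.conf <<'EOF'
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="user.keytab"
  principal="user@EXAMPLE.COM"
  storeKey=true
  debug=true;
};
EOF

spark3-submit \
  --jars /opt/cloudera/parcels/SPARK3/lib/spark3/spark-solr/spark-solr-3.9.3000.3.3.7191000.0-78-shaded.jar \
  --files jaas-client.conf,user.keytab \
  --conf "spark.driver.extraJavaOptions=-Djava.security.auth.login.config=jaas-client.conf" \
  --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=jaas-client.conf" \
  read_solr.py
```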
Interestingly, if I specify a non-existent Solr collection, I get an error stating that the collection doesn’t exist. This leads me to believe the initial connection goes through ZooKeeper, which holds the metadata for the Solr collections. It seems the Spark executors can reach ZooKeeper but then fail to establish connections from the executor nodes to the Solr nodes themselves.
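To separate the two hops (executor → ZooKeeper vs. executor → Solr), a small stdlib-only helper can be run on an executor host. The probe is a plain TCP connect, not a full ZooKeeper or Solr handshake, so it only tests basic reachability of each host:port:

```python
import socket

def parse_zkhost(zkhost):
    """Split a Solr zkhost string into ([(host, port), ...], chroot)."""
    hosts_part, sep, chroot = zkhost.partition("/")
    pairs = []
    for entry in hosts_part.split(","):
        host, _, port = entry.partition(":")
        pairs.append((host, int(port or 2181)))  # default ZK port if omitted
    return pairs, ("/" + chroot) if sep else ""

def tcp_reachable(host, port, timeout=5):
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Probing the ZooKeeper quorum from the zkhost string, and then the Solr nodes on their HTTP port, from the same executor hosts would show whether the second hop is the one being blocked (e.g. by a firewall rule that only covers the ZK ports).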
Additional Details:
- The Spark UI logs (stderr) do not provide much insight, and I’m looking for any common troubleshooting steps or configurations that might resolve this.
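On the logging side, raising the connector’s log level sometimes shows where the hang sits. A sketch for a Log4j2 properties fragment on the driver and executors; the logger names below are the usual spark-solr/SolrJ packages, but they may differ in the shaded jar and would need double-checking:

```
# log4j2.properties fragment (driver and executors)
logger.spsolr.name = com.lucidworks.spark
logger.spsolr.level = debug
logger.solrj.name = org.apache.solr
logger.solrj.level = debug
logger.zk.name = org.apache.zookeeper
logger.zk.level = info
```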
If anyone has suggestions or has resolved a similar issue, please let me know. Thank you!