Member since: 06-09-2016
Posts: 529
Kudos Received: 129
Solutions: 104
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1738 | 09-11-2019 10:19 AM |
|  | 9343 | 11-26-2018 07:04 PM |
|  | 2492 | 11-14-2018 12:10 PM |
|  | 5342 | 11-14-2018 12:09 PM |
|  | 3157 | 11-12-2018 01:19 PM |
06-01-2018
08:00 PM
1 Kudo
@Maxim Dashenko sorry about that. The following works fine:

```
spark-shell --driver-class-path curator-client-2.7.1.jar

scala> import org.apache.curator.utils.PathUtils
import org.apache.curator.utils.PathUtils

scala> classOf[PathUtils].getProtectionDomain().getCodeSource().getLocation()
res0: java.net.URL = file:/root/curator-client-2.7.1.jar
```

Try adding --driver-class-path curator-client-2.7.1.jar to your spark-submit and let me know if it works. You may also want to try --conf spark.driver.userClassPathFirst=true together with the jar and see if that helps. Thanks!
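Putting both suggestions together, a full spark-submit might look like the sketch below (the class name and application jar are reused from my other reply as placeholders, and you may not need both flags at once):

```
spark-submit \
  --driver-class-path curator-client-2.7.1.jar \
  --conf spark.driver.userClassPathFirst=true \
  --class com.test.Test \
  --master local \
  test-0.0.1-SNAPSHOT.jar
```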
06-01-2018
05:46 PM
2 Kudos
@Maxim Dashenko Perhaps when you package your application, the curator-client jar is not being added correctly to the fat jar. I tested the following and it works fine:

1. Download the jar:

```
wget http://central.maven.org/maven2/org/apache/curator/curator-client/2.7.1/curator-client-2.7.1.jar
```

2. Start the shell with the jar:

```
spark-shell --jars curator-client-2.7.1.jar
```

3. Verify the import:

```
scala> import org.apache.curator.utils.PathUtils
import org.apache.curator.utils.PathUtils

scala> PathUtils.validatePath("/tmp")
```

No errors are thrown while running the above. You can also try adding --jars pointing to the jar file on spark-submit and see if that helps:

```
spark-submit --jars curator-client-2.7.1.jar --class com.test.Test --master local test-0.0.1-SNAPSHOT.jar
```

Otherwise I suggest you check your fat/uber jar. HTH

*** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
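To check the fat/uber jar, a quick way to see whether the curator classes actually made it in is the standard JDK jar tool:

```
# List the jar contents and look for the curator classes
jar tf test-0.0.1-SNAPSHOT.jar | grep org/apache/curator
```

If nothing shows up, the dependency is being dropped at packaging time (for example, marked as provided in your build).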
06-01-2018
05:06 PM
@RC Here is how I think you should map each curl option onto the InvokeHTTP processor to make this work:

- -X POST: use the processor's HTTP Method property and set it to POST.
- -x http://proxyhost:port: add the properties Proxy host=proxyhost, Proxy port=port, and Proxy protocol=https.
- -H "Accept: application/json": use Attributes to Send, referencing a new attribute with that value (add one extra attribute with the corresponding value).
- -H "Content-Type: application/x-www-form-urlencoded": use the Content-Type property.
- -d 'data sent': set Send Message Body = true; the content should come in as the inbound flowfile to the processor. To test, try adding a GenerateFlowFile processor with the content (without the single quotes).
- https://hostname/security/v1/oauth/token: set this in Remote URL.

Finally, since the endpoint is SSL enabled, you need to set an SSL Context Service. A configuration sketch follows below.

HTH

*** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
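Put together, the InvokeHTTP configuration would look roughly like this sketch (property names mirror the list above; the host, port, and attribute values are placeholders for your own):

```
HTTP Method         = POST
Remote URL          = https://hostname/security/v1/oauth/token
Content-Type        = application/x-www-form-urlencoded
Send Message Body   = true
Attributes to Send  = Accept          # with flowfile attribute Accept = application/json
Proxy host          = proxyhost
Proxy port          = port
SSL Context Service = <your SSL context service>
```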
06-01-2018
02:55 PM
Sorry, it is not clear from the description what the issue is. It would help if you could clarify what problem you are facing, and if there is an error stack trace, please add it as well.
06-01-2018
02:47 PM
@chaouki trabelsi the mongodb connector is built to leverage Spark parallelism, so I think it is a good alternative in this case. If you have further questions on how to use it or anything else, please open a separate thread! Thanks!
06-01-2018
02:27 PM
@Pankaj Singh Yes, if you would like to run mysql in cluster mode you need to perform this configuration manually.
06-01-2018
02:06 PM
1 Kudo
@chaouki trabelsi @Victor There are 2 approaches you can take: one is using packages, and the other is using jars (for which you need to download the jars).

Package approach

Add the following configuration to your zeppelin spark interpreter:

```
spark.jars.packages = org.mongodb.spark:mongo-spark-connector_2.11:2.2.2
```

For more information read https://spark-packages.org/package/mongodb/mongo-spark

Jar approach

You need to add the mongodb connector jars to the spark interpreter configuration:

1. Download the mongodb connector jar for spark (depending on your spark version, make sure you download the correct scala version - for spark2 you should use 2.11 scala).
2. Add the jars to the zeppelin spark interpreter using the spark.jars property:

```
spark.jars = /location/of/jars
```

In both cases you need to save and restart the interpreter. HTH

*** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
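Once the interpreter has restarted, a quick way to confirm the connector is on the classpath is to load a collection in a Zeppelin paragraph. A minimal sketch, assuming spark.mongodb.input.uri is also set on the interpreter (the URI, database, and collection below are placeholders):

```scala
%spark
import com.mongodb.spark.MongoSpark

// Assumes the interpreter config also contains, for example:
// spark.mongodb.input.uri = mongodb://localhost:27017/test.myCollection
val df = MongoSpark.load(spark)  // reads the collection from the input URI into a DataFrame
df.printSchema()
```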
06-01-2018
01:55 PM
@Pankaj Singh When adding Hive as a service via Ambari you can select one of the following database options:

1. New MySQL Database -> this option will install a mysql server database and configure it automatically.
2. Existing MySQL Database -> with this option you need to install the mysql server database yourself, add the user, and create the hive database (a sketch of these steps follows below).
3. Existing PostgreSQL Database
4. Existing Oracle Database

HTH

*** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
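For option 2, the manual preparation would look roughly like this sketch (the user, password, and driver path are placeholder assumptions; adjust to your environment):

```
# Create the hive metastore database and user (run against your existing MySQL server)
mysql -u root -p -e "CREATE DATABASE hive;"
mysql -u root -p -e "CREATE USER 'hive'@'%' IDENTIFIED BY 'StrongPassword1';"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%'; FLUSH PRIVILEGES;"

# Point Ambari at the MySQL JDBC driver so it can reach the database
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
```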
05-31-2018
09:37 PM
@Anpan K Please take a moment to login and click the "accept" link on my answer.
05-31-2018
09:21 PM
@Anpan K could you share the content of file1? I have run the following successfully:

```
curl -u admin:admin -X GET 'http://localhost:8080/api/v1/clusters/?fields=Clusters/health_report' | jq '.items[0].Clusters.health_report."Host/host_status/UNHEALTHY"'
```

HTH
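For reference, the jq filter above walks an Ambari response shaped roughly like this (the count value is illustrative, not from a real cluster):

```
{
  "items": [
    {
      "Clusters": {
        "health_report": {
          "Host/host_status/UNHEALTHY": 0
        }
      }
    }
  ]
}
```

so .items[0].Clusters.health_report."Host/host_status/UNHEALTHY" prints just the unhealthy-host count.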