Member since: 02-03-2016
Posts: 61
Kudos Received: 15
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1970 | 05-23-2016 11:29 AM |
| | 4605 | 05-21-2016 08:30 AM |
11-06-2018 08:37 AM

Hey, the Alert Publisher passes a JSON file containing the alerts to the custom script. Is there a way to get that JSON from outside, maybe via a REST call? Thanks.
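A minimal sketch of the custom-script side described above, assuming (as the post says) that the Alert Publisher invokes the script with the path of a JSON alerts file as its first argument; the output path and the idea of re-serializing the alerts for external consumption are illustrative, not a documented mechanism:

```python
#!/usr/bin/env python
# Sketch of a custom alert script: per the post above, Cloudera Manager's
# Alert Publisher calls the script with the path of a JSON file containing
# the alerts as the first argument. No specific alert schema is assumed;
# the JSON is simply re-serialized to a location another process can read.
import json
import sys


def main():
    if len(sys.argv) < 2:
        sys.exit("usage: alert_handler.py <alerts.json>")

    with open(sys.argv[1]) as f:
        alerts = json.load(f)  # typically a list of alert objects

    # Append the raw alerts to a file that an external service could serve;
    # the path below is a placeholder.
    with open("/tmp/published_alerts.json", "a") as out:
        json.dump(alerts, out)
        out.write("\n")


if __name__ == "__main__":
    main()
```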
10-22-2018 06:43 PM

Thank you so much! I changed the group of '/tmp/logs' to hadoop and restarted the JobHistoryServer role, and everything is working now. So happy!
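For reference, a sketch of the fix described above, assuming the hdfs CLI is on the PATH and the caller has sufficient privileges; the recursive flag is illustrative, and the role restart itself is done from Cloudera Manager rather than in this snippet:

```python
# Change the group of /tmp/logs on HDFS to 'hadoop', as described in the
# post above. Assumes the 'hdfs' command-line tool is available; the -R
# flag (recursive) is an assumption, not taken from the original post.
import subprocess

subprocess.run(
    ["hdfs", "dfs", "-chgrp", "-R", "hadoop", "/tmp/logs"],
    check=True,  # raise if the command fails
)
# The JobHistoryServer role is then restarted from the Cloudera Manager UI
# (or its API); that step is not shown here.
```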
05-25-2017 12:38 PM

Benassi,

Within Cloudera Data Science Workbench you should be able to use almost any Python, R, or Java library you want. While we have not tested and do not support Apache Phoenix directly, you should be able to access it from within a session using the same methods you would on your local laptop. For instance, Phoenix has a Python client library here: https://phoenix.apache.org/phoenix_python.html. You could also likely use the JDBC driver from the R, Python, and Scala engines with your favorite database library.

Best,
Tristan
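A minimal sketch of the Python route mentioned above, using the phoenixdb client (the library documented at the linked page), which talks to a Phoenix Query Server over HTTP; the server URL and table name are placeholders for your environment, not values from the post:

```python
# Sketch using the phoenixdb client to query Phoenix via the Phoenix Query
# Server from a Python session. Install with: pip install phoenixdb
# The PQS endpoint and table name below are placeholders.
import phoenixdb

# Phoenix Query Server endpoint -- adjust host/port for your cluster.
conn = phoenixdb.connect("http://pqs-host.example.com:8765/", autocommit=True)
try:
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM MY_TABLE LIMIT 10")  # hypothetical table
    for row in cursor.fetchall():
        print(row)
finally:
    conn.close()
```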
05-23-2016 11:29 AM

This is a custom data validation error, so I'm invalidating this.
05-21-2016 10:42 AM

Yes, you will certainly need to provide access keys for S3 access to work. I don't think (?) that would be a solution to a VerifyError, though, which is a much lower-level error indicating corrupted builds. Yes, it's expected that the AWS SDK dependencies were updated along with the new Spark version in CDH 5.7. I think the current version should depend on jets3t 0.9, which is the one you want.
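A hedged sketch of supplying S3 access keys to a Spark job on that stack, assuming the jets3t-backed s3n:// connector discussed above; the bucket path and credential values are placeholders:

```python
# Sketch of setting S3 access keys for Spark when using the jets3t-backed
# s3n:// connector. The property names are the standard Hadoop s3n
# configuration keys; the bucket, path, and credentials are placeholders.
from pyspark import SparkConf, SparkContext

sc = SparkContext(conf=SparkConf().setAppName("s3n-read-example"))
hadoop_conf = sc._jsc.hadoopConfiguration()
hadoop_conf.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY_ID")
hadoop_conf.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_ACCESS_KEY")

# Read from a placeholder bucket/path over s3n.
rdd = sc.textFile("s3n://your-bucket/path/to/data")
print(rdd.count())
```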