Member since
06-09-2016
529
Posts
129
Kudos Received
104
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1100 | 09-11-2019 10:19 AM
 | 7182 | 11-26-2018 07:04 PM
 | 1489 | 11-14-2018 12:10 PM
 | 3037 | 11-14-2018 12:09 PM
 | 2171 | 11-12-2018 01:19 PM
07-03-2018
06:48 PM
1 Kudo
@Richard Hagarty Yes, you need to have a support subscription to upload and access SmartSense data analysis results. Please review this here: https://docs.hortonworks.com/HDPDocuments/SS1/SmartSense-1.4.5/bk_installation/content/smartsense_doc_overview.html Regarding "If so, how do I sign up, and is there a free or trial version?" - please go to the hortonworks.com site and use the contact form or phone number to get more information about subscriptions and any other questions you may have. HTH *** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
07-03-2018
05:40 PM
@Melchicédec NDUWAYO If the above answer has helped, please login and mark the answer as "Accepted".
07-03-2018
04:52 PM
@vivek jain I don't see any code making use of the withCatalog function. If this function is not being used, what is the expected output? As an example, perhaps you could try adding something like this to show some of the content of the HBase table:

val df = withCatalog(catalog)
df.show()

HTH *** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
07-03-2018
04:47 PM
1 Kudo
@Melchicédec NDUWAYO You could use the following JavaScript client package or a similar one: https://www.npmjs.com/package/livy-client Or you could write your own client using JavaScript/jQuery or similar. Overall steps you should consider:
1. Create a Livy session to be able to submit code
2. Reference the session id when running the code
Also keep the Livy API link handy when working with Livy 🙂 https://livy.incubator.apache.org/docs/latest/rest-api.html HTH *** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
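The two steps above can be sketched in Python as well. This is a minimal sketch, assuming a Livy endpoint on port 8998 and the `requests` library; the host name is a placeholder, and the actual HTTP calls are shown as comments so only the request bodies (which follow the Livy REST API) run here.

```python
# Hedged sketch of the two Livy REST steps:
# 1) create a session, 2) submit code referencing the session id.
# "livy-host" is a hypothetical endpoint - adapt to your cluster.
import json

LIVY_URL = "http://livy-host:8998"
HEADERS = {"Content-Type": "application/json"}

def session_body(kind="pyspark"):
    # Body for POST /sessions (step 1): asks Livy for an interactive session
    return {"kind": kind}

def statement_body(code):
    # Body for POST /sessions/{id}/statements (step 2): runs code in the session
    return {"code": code}

# With the `requests` library installed, the calls would look like:
#   import requests
#   r = requests.post(LIVY_URL + "/sessions",
#                     data=json.dumps(session_body()), headers=HEADERS)
#   session_id = r.json()["id"]
#   requests.post(LIVY_URL + "/sessions/%d/statements" % session_id,
#                 data=json.dumps(statement_body("1 + 1")), headers=HEADERS)
```

Polling GET /sessions/{id} until the session state is "idle" before submitting statements is also a good idea, per the Livy REST API docs linked above.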
07-03-2018
04:37 PM
@Mike Lok Please review the following article and validate your PAM configuration: https://community.hortonworks.com/content/supportkb/48753/how-to-use-pam-for-hiveserver2-authentication.html Also verify that your passwd file has the correct permissions and is in the correct format. HTH *** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
07-03-2018
01:01 PM
@Bhushan Kandalkar RANGER-989 is actually a duplicate of https://issues.apache.org/jira/browse/RANGER-804 The solution to this issue was to include the following 2 jars in the lib directory for the usersync process:

commons-httpclient-3.1.jar
commons-codec-1.4.jar

Make sure you are on the correct node when copying the files to /usr/hdp/current/ranger-usersync/lib/, as the usersync process may be installed on a separate node. HTH *** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
07-03-2018
11:39 AM
@Vladislav Shcherbakov I think using InvokeScriptedProcessor may be a good option in this case. Please take a look here: http://funnifi.blogspot.com/2016/02/invokescriptedprocessor-hello-world.html There is also a similar article you could review here: https://community.hortonworks.com/articles/193822/parsing-web-pages-for-images-with-apache-nifi.html HTH *** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
07-03-2018
11:31 AM
@Yassine Yes, you can use pandas and matplotlib along with pyspark. For example, you could use the Spark API to read data from the cluster in parallel and process it, then convert the Spark DataFrame to pandas and use matplotlib to plot the results. There are other combinations, but I think this is the most common one I've seen.
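The pattern above can be sketched as follows. This is a minimal sketch, assuming pandas is installed; the Spark calls (which need a live SparkSession, e.g. in a pyspark shell or Zeppelin) and the matplotlib calls are left as comments so the local half stands alone, and the small DataFrame stands in for what `toPandas()` would return.

```python
# Hedged sketch: aggregate in the cluster with Spark, then bring the
# (small) result to the driver as a pandas DataFrame for local plotting.
import pandas as pd

# In a real pyspark session you would do the heavy lifting in parallel:
#   sdf = spark.read.csv("/data/events.csv", header=True)
#   counts = sdf.groupBy("day").count()
#   pdf = counts.toPandas()   # only safe when the result is small

# Stand-in for the toPandas() result:
pdf = pd.DataFrame({"day": ["mon", "tue", "wed"], "count": [10, 7, 12]})

# Local plotting with matplotlib:
#   import matplotlib.pyplot as plt
#   pdf.plot(x="day", y="count", kind="bar")
#   plt.show()

print(pdf["count"].sum())  # total events across days
```

The key caveat with this pattern: `toPandas()` collects everything to the driver, so aggregate or filter in Spark first and only convert the reduced result.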
07-02-2018
08:41 PM
@Yassine Yes, the pyspark interpreter can be used to run Python. However, the application will automatically have a reference to the Spark libraries. Also note that the pyspark interpreter launches a YARN application, and by default this is configured to run with 2 executors - this means you will see an application master + 2 containers for the running pyspark interpreter. If you are not really making any use of Spark and only write code that does not need to run on the cluster, perhaps you should consider installing just the Python interpreter. HTH *** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
07-02-2018
08:32 PM
1 Kudo
@R Escobar You can configure Spark to use Hive LLAP - review these: https://github.com/hortonworks-spark/spark-llap/wiki/7.-Support-Matrix and https://community.hortonworks.com/articles/72454/apache-spark-fine-grain-security-with-llap-test-dr.html Note this is still in TP (Technical Preview) in HDP 2.6.5 HTH