Member since: 08-16-2019
Posts: 42
Kudos Received: 1
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1299 | 06-03-2020 07:21 PM |
09-26-2023 09:08 PM
Hi Shashank, are the MapReduce jobs failing? If you are only concerned about the error "Tools helper /usr/hdp/3.0.1.0-187/hadoop/libexec/tools/hadoop-distcp.sh was not found", see https://issues.apache.org/jira/browse/HADOOP-17042. That message will not impact the DistCp job.
03-11-2022 04:08 AM
Hi, this isn't working as-is on NiFi 1.14. Can you give me a hand, please? I used a GenerateFlowFile processor with some random text and connected it to ExecuteScript, but I get the following:

ExecuteScript[id=78c5739f-017f-1000-0000-0000016ca301] failed to process due to javax.script.ScriptException: java.lang.NullPointerException: java.lang.NullPointerException in <script> at line number 25; rolling back session: java.lang.NullPointerException
↳ causes: Traceback (most recent call last):
  File "<script>", line 25, in <module>
java.lang.NullPointerException
java.lang.NullPointerException: java.lang.NullPointerException
↳ causes: javax.script.ScriptException: java.lang.NullPointerException: java.lang.NullPointerException in <script> at line number 25
↳ causes: org.apache.nifi.processor.exception.ProcessException: javax.script.ScriptException: java.lang.NullPointerException: java.lang.NullPointerException in <script> at line number 25
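The failing script itself isn't shown here, but a NullPointerException in the opening lines of an ExecuteScript body is very often the flow file being used before checking that session.get() actually returned one. A minimal Jython sketch of the guarded pattern, for reference; session and REL_SUCCESS are bindings NiFi injects into the script, and the attribute name is illustrative:

# ExecuteScript (Jython) body -- a sketch, not the poster's script.
flowFile = session.get()
if flowFile is not None:
    # Only touch the flow file once we know it exists; calling methods on
    # a missing flow file is a classic source of NullPointerExceptions.
    flowFile = session.putAttribute(flowFile, 'greeting', 'hello')
    session.transfer(flowFile, REL_SUCCESS)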
11-06-2020 09:22 AM
I don't recommend using the Spark Livy connector anymore; by default it isn't set up for anything other than Zeppelin. I would connect via Kafka instead. In NiFi, you can check the Livy controller service settings and make sure it isn't respawning new sessions.
06-03-2020 07:21 PM
There was an access issue with the files event-processor.log and das-webapp.log. Granting access to those files resolved the DAS WebUI issue.
05-31-2020 09:18 AM
1 Kudo
Hi @Shelton, did you get a chance to look into this issue? I need help with it. Thanks
03-30-2020 11:45 AM
Can I use pyhive to connect to Hive using a Hive JDBC string instead of a single hostname? The following doesn't work for me:

from pyhive import hive
hive_conn = hive.Connection(host=<JDBC STRING>,
                            configuration={'serviceDiscoveryMode': 'zooKeeper',
                                           'zooKeeperNamespace': 'hiveserver2'})
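pyhive's hive.Connection expects a single host and port; it doesn't parse JDBC URLs or perform ZooKeeper service discovery itself. A common workaround is to resolve an active HiveServer2 from ZooKeeper first and then hand that host to pyhive. A minimal sketch, assuming the kazoo library, the default hiveserver2 namespace, and a placeholder ZooKeeper quorum:

from kazoo.client import KazooClient
from pyhive import hive

# Ask ZooKeeper which HiveServer2 instances are registered under the
# namespace the JDBC string's zooKeeperNamespace=hiveserver2 refers to.
zk = KazooClient(hosts='zk1:2181,zk2:2181,zk3:2181')  # placeholder quorum
zk.start()
try:
    # Child znodes are typically named serverUri=host:port;version=...;sequence=...
    children = zk.get_children('/hiveserver2')
finally:
    zk.stop()

server_uri = children[0].split(';')[0].split('=', 1)[1]
host, port = server_uri.split(':')

hive_conn = hive.Connection(host=host, port=int(port), username='hive')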
02-10-2020 09:37 AM
I would also like to know this. Is Hive on Spark currently supported by Hortonworks? Since which version? Thanks in advance, David Resende
10-07-2019 02:01 PM
I got the solution: I should send messages to sandbox-hdf.hortonworks.com and see those messages in HDF rather than HDP.
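For reference, a minimal sketch of producing to the HDF sandbox rather than the HDP one, assuming the kafka-python package, the usual sandbox Kafka port 6667, and an illustrative topic name:

from kafka import KafkaProducer

# Point the producer at the HDF sandbox broker, not the HDP one.
producer = KafkaProducer(bootstrap_servers='sandbox-hdf.hortonworks.com:6667')
producer.send('test-topic', b'hello from the HDF sandbox')  # topic name is illustrative
producer.flush()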
10-03-2019 07:34 AM
Since you have specified the target location in HDFS, you can create an external table over that HDFS location to read the data (see the sketch after the command below). Alternatively, you can let the import infer the schema from the database with --hive-import:

sqoop import \
  --connect jdbc:mysql://mysql_localhost/schema_database \
  --username sqoop \
  --password '*******' \
  --table visits \
  --hive-import \
  --create-hive-table \
  --hive-table Hive_database.visits \
  -m 1
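For the first option, a minimal sketch of creating an external table over the imported HDFS path, shown via pyhive; the host, columns, delimiter, and LOCATION are placeholders, since the actual target location isn't given above:

from pyhive import hive

conn = hive.Connection(host='hs2-host', port=10000, username='hive')  # placeholder host
cur = conn.cursor()
# Columns, delimiter, and location are illustrative; match them to the sqooped data.
cur.execute("""
    CREATE EXTERNAL TABLE IF NOT EXISTS Hive_database.visits_ext (
        id INT,
        visited_at STRING
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    LOCATION '/user/sqoop/visits'
""")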
09-17-2019 11:02 AM
Hi @dstreev, thanks for your article. Correct me if I'm wrong, but couldn't the same be done using the Knox service that comes with HDP by default? Is that correct, or does your approach add some extra feature beyond that service? Regards, Gerard