Member since: 02-16-2016
Posts: 89
Kudos Received: 24
Solutions: 10
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 9916 | 05-14-2018 01:54 PM
 | 1512 | 05-08-2018 05:07 PM
 | 1067 | 05-08-2018 04:46 PM
 | 2848 | 02-13-2018 08:53 PM
 | 3392 | 11-09-2017 04:24 PM
02-13-2018
08:08 PM
The best place to start is to check YARN tuning: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_command-line-installation/content/determine-hdp-memory-config.html There can be a number of reasons why your jobs might be running slowly, such as the current load on the cluster and how capacity scheduling is set up (priorities, queues, resources, etc.). You can also check the Hive execution plan to find bottlenecks.
02-13-2018
07:40 PM
Similar issue: https://community.hortonworks.com/questions/86429/cannot-connect-to-zookeeper-server-from-zookeeper.html The problem is your ZooKeeper address, which points to localhost:2181; the worker/slave node does not have a local ZooKeeper running, hence the "Connection refused". See the link above for a detailed solution.
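A quick way to confirm this diagnosis from the worker node is to test the configured address directly. A minimal sketch, assuming the host/port from the failing connect string (localhost:2181); substitute your actual values:

```shell
# Check whether anything is actually listening at the configured
# ZooKeeper address from this node.
ZK_HOST=localhost
ZK_PORT=2181
if nc -z -w 2 "$ZK_HOST" "$ZK_PORT"; then
  echo "reachable: $ZK_HOST:$ZK_PORT"
else
  echo "not reachable: $ZK_HOST:$ZK_PORT (matches the 'Connection refused' in the client logs)"
fi
```

If this prints "not reachable" on the worker, point the client at the actual ZooKeeper quorum hosts instead of localhost.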
11-09-2017
04:24 PM
1 Kudo
Template attached. I am using NiFi 1.3.0. In this template, if a file is not found by the GetFile processor, an event is generated every 10 seconds. You can attach an email processor to MonitorActivity to send mail, and you can also set custom subject headers in the MonitorActivity processor (also shown in the template). nifimonitoring.xml
11-07-2017
07:39 PM
2 Kudos
Put a MonitorActivity processor after your GetFile processor and set the threshold to your interval. If no file arrives within that interval, MonitorActivity will be triggered. https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.4.0/org.apache.nifi.processors.standard.MonitorActivity/index.html
11-07-2017
07:25 PM
1 Kudo
Try stopping NiFi, purging everything inside your provenance repository, and then starting NiFi again. Check the nifi-app.log file for any provenance-related events, and check that the user running the NiFi process has read/write access to the configured directory. I had a similar issue today, except my provenance implementation was set to Volatile, which I changed to WriteAhead. Also note that the default implementation is PersistentProvenanceRepository, and if you have been switching implementations back and forth you will need to delete the provenance data (WriteAhead can read PersistentProvenanceRepository data, but not the other way around).
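For reference, the provenance implementation is selected in nifi.properties. A sketch of the relevant lines, using the stock NiFi class names (verify the exact property names and directory against your version's admin guide):

```
# nifi.properties (relevant lines; choose ONE implementation)

# Default implementation:
nifi.provenance.repository.implementation=org.apache.nifi.provenance.PersistentProvenanceRepository
# Write-ahead implementation (what I switched to):
#nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository

# Directory the NiFi user must be able to read/write:
nifi.provenance.repository.directory.default=./provenance_repository
```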
08-16-2017
08:18 PM
This is a bug in Ambari 2.5.1: https://issues.apache.org/jira/browse/AMBARI-21473 Resolution: remove the /etc/zeppelin/conf/interpreter.json file and restart the Zeppelin service.
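A sketch of the workaround, demonstrated on a scratch copy so it is safe to run anywhere; on a real cluster the path is /etc/zeppelin/conf/interpreter.json and the restart happens from Ambari:

```shell
# Stand-in for /etc/zeppelin/conf/interpreter.json so the sketch is runnable locally.
CONF="$(mktemp -d)/interpreter.json"
echo '{}' > "$CONF"

# Back the file up instead of deleting it outright; either way it is out of
# Zeppelin's path. Restart the Zeppelin service from Ambari afterwards.
mv "$CONF" "$CONF.bak"
echo "moved to $CONF.bak"
```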
05-05-2017
08:57 PM
This is not a Hive issue but rather a file-system or file-encoding issue. SELECT * in Hive actually does nothing except read the file from the file system, so if you run hadoop fs -cat on your underlying file, you should see the same behavior. You can check the file encoding in bash with:

$ file -i filename

You can change the encoding using iconv, converting to UTF-8, which is a printable encoding:

$ iconv -f current_encoding -t new_encoding input.file -o out.file
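The two commands above can be exercised end-to-end on a throwaway file; the file names and source encoding here are made up for the demo:

```shell
# Create a small file containing a Latin-1 byte (0xE9 is "é" in ISO-8859-1).
printf 'caf\xe9\n' > sample_latin1.txt

file -i sample_latin1.txt   # reports a non-UTF-8 charset, e.g. iso-8859-1

# Convert to UTF-8 so the text prints cleanly.
iconv -f ISO-8859-1 -t UTF-8 sample_latin1.txt -o out.txt

file -i out.txt             # now reports charset=utf-8
```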
05-05-2017
07:11 PM
1 Kudo
Trying to give closure to this topic: this is a very misleading situation where it seems NiFi is not running, but it actually is, and you get ERR_CONNECTION_CLOSED or a "can't display page" error when you try to hit the secure URL. Reason: NiFi is configured to authenticate the client on connection, and since no authentication is provided, the UI errors out. Resolution: import a client certificate into your browser. This certificate needs to be trusted by the same authority as NiFi's. Once imported, close and re-open the browser (refreshing will not work). With the certificate imported, you should be prompted to select a certificate to log in.
05-04-2017
05:45 PM
Please provide information on how you are generating and defining your keytabs. Try:

klist -k nifi-1-service-keytab

If your principals have a host (machine name or IP) as part of the definition, like xxxx/HOST_NAME@domain, you will not be able to use the keytab on any other machine. Renaming the keytab will not work, as the contents of the file still point to a specific host. It is best practice to have a separate keytab per machine; reusing the same keytab is not the most secure option. Alternatively, if you define a principal in AD as headless, that is, without the host attribute, and then create a keytab, that keytab can be used on any host (typically this is your hdfs principal). But it is not very secure.
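To illustrate the host-bound vs. headless distinction, a tiny shell check on a principal string (the principal below is a hypothetical example, not one from the question):

```shell
# A service principal with a host component (service/host@REALM) is bound to
# that host; a "headless" principal (user@REALM) is portable across machines.
principal="nifi/node1.example.com@EXAMPLE.COM"   # hypothetical example

primary="${principal%%@*}"          # strip the @REALM part
case "$primary" in
  */*) echo "host-bound: keytab only valid on ${primary#*/}" ;;
  *)   echo "headless: keytab usable on any host" ;;
esac
```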
03-27-2017
09:20 PM
Try something like:

hadoop fs -cat /path_to_hdfs_file/test.csv | head -c 40000000