Member since
07-30-2019
333
Posts
357
Kudos Received
76
Solutions
06-24-2020
07:50 PM
If you execute "rmr /solr", you may encounter the following authentication error:
rmr /solr
Authentication is not valid : /solr/security
To work around this you can use the skipACL option, which is described on the following site: https://www.programmersought.com/article/60861634876/
Note: these are the workaround steps in Cloudera Manager.
1. ZooKeeper -> Configuration -> Java Configuration Options for ZooKeeper Server: add -Dzookeeper.skipACL=yes
2. Restart the ZooKeeper service.
Remember to revert the change and restart ZooKeeper after you manage to remove /solr. rgds, Rama.
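Once skipACL is in effect, the cleanup itself is just a short zkCli session. A sketch of the steps, assuming the stock HDP client path and a ZK server reachable on localhost (adjust both to your environment):

```shell
# run on a ZooKeeper host AFTER adding -Dzookeeper.skipACL=yes and restarting ZK
cd /usr/hdp/current/zookeeper-client/bin/
./zkCli.sh -server localhost:2181
# in the zk shell:
#   rmr /solr      <- recursively deletes the Solr tree; no ACL check with skipACL on
#   ls /           <- confirm /solr is gone
#   quit
# then remove -Dzookeeper.skipACL=yes in Cloudera Manager and restart ZooKeeper again
```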
07-27-2017
03:45 AM
I agree. This flow shows a workflow for log collection, aggregation, storage, and display:
Ingest logs from folders. Listen for syslogs on a UDP port. Merge the syslogs and drop-in logs, and persist the merged logs to Solr for historical search. Dashboard: stream real-time log events to the dashboard and enable cross-filter search on the historical log data. https://community.hortonworks.com/articles/961/a-collection-of-nifi-examples.html
05-12-2016
02:08 PM
1 Kudo
Hi @Andrew Grande, thanks! I actually did that :) in the demo video I had it, but disabled and removed it in the flow attached. [Kathy will say, if it's going down based on the value, "You may want to buy some today", or Alex will say, if it's going up, "You may want to sell some" ;)] Thanks again!
03-14-2017
02:12 PM
Build your MicroSD card with www.pibakery.org; it lets you pre-configure boot, Wi-Fi, and more.
12-17-2015
03:05 PM
1 Kudo
When running Solr in clustered mode (SolrCloud), it has a runtime dependency on ZooKeeper, where it stores configs, coordinates leader election, tracks replica allocation, etc. All in all, there's a whole tree of ZK nodes created, with sub-nodes. Deploying SolrCloud into a Hadoop cluster usually means re-using the centralized ZK quorum already maintained by HDP. Unfortunately, if not explicitly taken care of, SolrCloud will happily dump all its ZK content in the ZK root, which really complicates things for an admin down the line. (If you need to clean up your ZK first, take a look at this how-to.) The solution is to put all SolrCloud ZK entries under their own ZK node (e.g. /solr). Here's how one does it: su - zookeeper
cd /usr/hdp/current/zookeeper-client/bin/
# point it at a ZK quorum (or just a single ZK server is ok, e.g. localhost)
./zkCli.sh -server lake02:2181,lake03:2181,lake04:2181
# in zk shell now
# note the empty brackets are _required_
create /solr []
# verify the zk node has been created, must not complain the node doesn't exist
ls /solr
quit
# back in the OS shell
# start SolrCloud and tell it which ZK node to use
su - solr
cd /opt/lucidworks-hdpsearch/solr/bin/
# note how we add '/solr' to a ZK quorum address.
# it must be added to the _last_ ZK node address
# this keeps things organized and doesn't pollute root ZK tree with Solr artifacts
./solr start -c -z lake02:2181,lake03:2181,lake04:2181/solr
# alternatively, if you have multiple IPs on your Hadoop nodes and have
# issues accessing Solr UI and dashboards, try binding it to an address explicitly:
./solr start -c -z lake02:2181,lake03:2181,lake04:2181/solr -h $HOSTNAME
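After Solr starts, it's worth double-checking that its znodes actually landed under the chroot rather than in the ZK root. A quick sketch, reusing the example hostnames and client path from above:

```shell
# connect with the same zkCli used earlier
cd /usr/hdp/current/zookeeper-client/bin/
./zkCli.sh -server lake02:2181
# in the zk shell:
#   ls /solr   <- should now show SolrCloud entries (e.g. live_nodes, collections)
#   ls /       <- the root should NOT have gained any Solr artifacts
#   quit
```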
01-16-2017
09:14 AM
One question.. Does the ID of a Processor or ProcessFlow change if NiFi is rebooted?
05-15-2017
08:17 PM
Hey Andrew, I'm having the same issue. Did you find any fixes for this? It's great that it can be changed without a restart... but having to come back in and set this any time Solr or the server itself is restarted isn't great. 🙂 EDIT: Help from Hortonworks support pointed out that in log4j.properties (either in Ambari if your SolrCloud is managed there, or directly on the server if not) you can change the following line from: log4j.rootLogger=INFO, file, CONSOLE to: log4j.rootLogger=WARN, file, CONSOLE That will allow Solr to retain this change on startup.
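For reference, the relevant fragment of log4j.properties ends up looking like this (the exact file location varies by install; this assumes the log4j 1.x properties format that this Solr version ships with):

```properties
# raise the root logger from INFO to WARN so the setting
# survives Solr restarts instead of being reset each time
log4j.rootLogger=WARN, file, CONSOLE
```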
10-23-2015
03:06 PM
1 Kudo
Consider the following flow, where records are inserted into the RDBMS (or Phoenix/HBase in this instance). PutSQL and similar processors depend on a database connection, provided by a DBCPConnectionPool controller service (Controller Settings -> Controller Services). The best part: when one creates a template, all linked services will automatically be included. When the template is rehydrated in another NiFi instance, services like this will automatically be added to the new NiFi flow.
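For context, a DBCPConnectionPool service for Phoenix typically carries settings along these lines (the hostname and jar path here are illustrative, not from the original post):

```
Database Connection URL:      jdbc:phoenix:zk-host:2181:/hbase
Database Driver Class Name:   org.apache.phoenix.jdbc.PhoenixDriver
Database Driver Location(s):  /path/to/phoenix-client.jar
```

Because the template serializes these controller-service properties alongside the processors, the importing NiFi instance only needs the driver jar present at the referenced path.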
11-16-2015
01:13 PM
@Andrew Grande - the good news is that in the next version of NiFi and HDF, the swapping has been refactored quite a bit, so the comment about having to adjust the nifi.queue.swap.threshold property will no longer apply.
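For anyone on the current version in the meantime, the property being discussed lives in nifi.properties:

```
# nifi.properties: number of flowfiles a queue may hold
# before NiFi starts swapping them to disk (20000 is the default)
nifi.queue.swap.threshold=20000
```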