Member since: 01-09-2014
Posts: 283
Kudos Received: 70
Solutions: 50

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1698 | 06-19-2019 07:50 AM |
| | 2723 | 05-01-2019 08:07 AM |
| | 2772 | 04-10-2019 08:49 AM |
| | 2666 | 03-20-2019 09:30 AM |
| | 2356 | 01-23-2019 10:58 AM |
05-19-2016 11:01 AM

What is your "ZooKeeper Root" set to in the Kafka service configuration? If it is set to something like "/kafka", you'll need to append that chroot to your ZooKeeper connect string:

```
bin/kafka-console-consumer --zookeeper localhost:2181/kafka --topic kafkatest --from-beginning
```

What version of the Kafka parcel are you using?

-pd
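One way to confirm the chroot is to list the broker registrations in ZooKeeper, since Kafka registers brokers under &lt;root&gt;/brokers/ids. A quick sketch, assuming the zookeeper-client wrapper that CDH ships and a "/kafka" root:

```
# List registered broker IDs under the assumed /kafka chroot
zookeeper-client -server localhost:2181 ls /kafka/brokers/ids
```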
05-19-2016 10:56 AM

If your edge node is part of the cluster and you are using parcels, then you won't have start and stop scripts; the recommended method to run Flume is to set up a Flume service in CM that runs on the edge node. The only difference between an edge node and a cluster node is that edge nodes generally don't run Hadoop services. Have you installed the Flume RPMs on this edge node, or are you using parcels? Where are you running the flume-ng command from?

```
which flume-ng
alternatives --display flume-ng
```

-pd
05-18-2016 03:37 PM

If you are using Flume to deliver to HDFS, it is recommended to run that Flume agent on a node in your cluster. If you are using Flume to collect events from other applications and send them downstream to another agent, which then delivers to the final destination (HDFS, Solr, etc.), then you can run that agent on a cluster node or on the machine where the events are being generated. If it is not running on a CDH node, you can use packages to install Flume, and then use the stop and start scripts to start it and keep it running as a daemon. -pd
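If you go the package route, the agent is managed through its init script. A minimal sketch, assuming the flume-ng-agent package name used by CDH and that your repo is already configured:

```
# Install the standalone agent from packages
sudo yum install -y flume-ng-agent
# Start it as a daemon and keep it enabled across reboots
sudo service flume-ng-agent start
sudo chkconfig flume-ng-agent on
```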
05-17-2016 11:46 AM

1 Kudo

No. "Initialize Solr" usually happens when the service is installed, but it may not have if you installed from Director. It is in the Actions drop-down menu, and it is grayed out while Solr is running.
05-17-2016 10:43 AM

6 Kudos

Have you ever had Solr up and running in this cluster? Do you have any active collections? One thing to try is to perform an "Initialize Solr" action from CM (from within the Solr service) while the Solr service is shut down. If Solr has not been initialized in ZK, you could get the error you observed. -pd
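The same initialization can also be done from the command line. A hedged sketch, assuming the solrctl utility that ships with Cloudera Search and that the Solr service is stopped:

```
# Initialize Solr's znode structure in ZooKeeper (destructive if already initialized)
solrctl init
```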
05-12-2016 09:03 AM

The Taildir source is documented here: http://archive.cloudera.com/cdh5/cdh/5/flume-ng/FlumeUserGuide.html#taildir-source -pd
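For reference, a minimal Taildir source configuration looks roughly like this; the agent name, file paths, and channel are hypothetical placeholders:

```
# Tail every file matching the pattern; positionFile survives agent restarts
a1.sources = r1
a1.sources.r1.type = TAILDIR
a1.sources.r1.positionFile = /var/lib/flume-ng/taildir_position.json
a1.sources.r1.filegroups = f1
a1.sources.r1.filegroups.f1 = /var/log/app/.*\.log
a1.sources.r1.channels = c1
```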
05-12-2016 07:11 AM

You need some method to forward the logs to the Flume agents. You could do something like rsyslog with the imfile input module (http://www.rsyslog.com/doc/v8-stable/configuration/modules/imfile.html) to forward to a syslog source on the Flume agents, or you could just install a standalone Flume agent (without the rest of CDH) via RPMs or tarball: http://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_vd_cdh_download.html If you run standalone Flume agents, they could use the spooldir source or the new Taildir source (in Flume as of CDH 5.7) to monitor the files and forward via Avro to the Flume agents within your cluster. -pd
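On the standalone agent, the forwarding side would pair a source like the Taildir example above with an Avro sink. The hostname, port, and directories below are assumptions to adapt:

```
# Durable file channel plus an Avro sink pointed at a cluster-side agent
a1.channels = c1
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /var/lib/flume-ng/checkpoint
a1.channels.c1.dataDirs = /var/lib/flume-ng/data

a1.sinks = k1
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = flume-collector.example.com
a1.sinks.k1.port = 4141
a1.sinks.k1.channel = c1
```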
05-12-2016 07:06 AM

Add 3 more HDFS sinks, all using the same channel. Be sure to set hdfs.filePrefix to a unique value per sink to avoid filename collisions. Hopefully that will deliver the events fast enough to keep up. -pd
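Roughly, that fan-out looks like the following; the agent name, channel, and HDFS path are placeholders, and the key detail is the distinct hdfs.filePrefix per sink:

```
# Four HDFS sinks draining the same channel in parallel
a1.sinks = k1 k2 k3 k4

a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = /flume/events
a1.sinks.k1.hdfs.filePrefix = events-k1

a1.sinks.k2.type = hdfs
a1.sinks.k2.channel = c1
a1.sinks.k2.hdfs.path = /flume/events
a1.sinks.k2.hdfs.filePrefix = events-k2

# k3 and k4 repeat the same pattern with filePrefix events-k3 and events-k4
```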
05-06-2016 08:55 AM

As noted in your other similar post, one of two things happened: 1. Your single sink is not keeping up with the source, and you need to add more sinks pulling from the same channel, or 2. You had an error in your sink that caused it to stop delivering to HDFS. You should review the logs for the first error (prior to the channel-full exceptions). Often, restarting will resolve the issue. Adding sinks will help as well, because the failure of one sink won't prevent the other sinks from pulling events off the channel. -pd
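To find that first error quickly, something along these lines works; the log path is an assumption, since under CM the agent log file name varies by role and host:

```
# Show the earliest ERROR entries before the channel-full exceptions
grep -n "ERROR" /var/log/flume-ng/flume*.log | head -5
```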
05-06-2016 08:52 AM

1 Kudo

The error is reporting a problem when trying to write into the "/solr" directory, as indicated here:

```
Unable to create core [live_logs]
Caused by: Permission denied: user=solr, access=WRITE, inode="/solr":hdfs:supergroup:drwxr-xr-x
```

You need to give the solr user ownership of the "/solr" directory in HDFS:

```
sudo -u hdfs hdfs dfs -chown solr:solr /solr
```

Additionally, you have changed permissions/ownership on the /user folder and /tmp, which is not recommended:

```
sudo -u hdfs hdfs dfs -chown solr:solr /user
sudo -u hdfs hdfs dfs -mkdir /tmp
sudo -u hdfs hdfs dfs -chown solr:solr /tmp
```

-pd
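To roll that back, the stock CDH defaults look roughly like this; the owners and modes are assumptions based on a default layout, so verify against a healthy cluster first:

```
# Typical defaults: hdfs-owned /user, world-writable /tmp with the sticky bit
sudo -u hdfs hdfs dfs -chown hdfs:supergroup /user /tmp
sudo -u hdfs hdfs dfs -chmod 1777 /tmp
```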