Member since: 01-09-2014
Posts: 283
Kudos Received: 70
Solutions: 50
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1698 | 06-19-2019 07:50 AM |
 | 2723 | 05-01-2019 08:07 AM |
 | 2772 | 04-10-2019 08:49 AM |
 | 2666 | 03-20-2019 09:30 AM |
 | 2356 | 01-23-2019 10:58 AM |
03-30-2022 12:42 PM
It worked perfectly for me, installing Cloudera Manager 7.4.4 and CDP 7.1.7 from scratch.
04-23-2020 06:49 PM
We had the same issue, and I tried this; it does not seem to work.
11-14-2019 08:53 PM
@bdelpizzo Can you see any errors in the Kafka logs or MirrorMaker logs? It is possible that MirrorMaker is unable to process messages because of their size. If any message is larger than the configured/default value, it can get stuck in the queue. Check the message.max.bytes property.
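For reference, here is a minimal sketch of where those size limits live, assuming a classic MirrorMaker setup with separate consumer and producer property files; the 10 MB value is purely illustrative, not a recommendation:

```properties
# broker server.properties -- largest record batch the broker will accept
message.max.bytes=10485760

# MirrorMaker consumer.properties -- must be able to fetch records of that size
max.partition.fetch.bytes=10485760

# MirrorMaker producer.properties -- must be allowed to send records of that size
max.request.size=10485760
```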
06-20-2019 01:37 AM
Thanks for your answer. Do I need the .meta and .meta.tmp files?
05-01-2019 08:07 AM
No, if you only have one sink, you would have one file (assuming you don't use header variable buckets). The sink will consume from all three partitions and may deliver those in one batch to one file. -pd
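To illustrate, a minimal sketch of that layout, assuming a Kafka source reading a three-partition topic into a single HDFS sink; the agent, broker, topic, and path names are made up:

```properties
# hypothetical agent: one Kafka source (3-partition topic) -> one channel -> one HDFS sink
agent1.sources  = kafka-src
agent1.channels = mem-ch
agent1.sinks    = hdfs-sink

agent1.sources.kafka-src.type = org.apache.flume.source.kafka.KafkaSource
agent1.sources.kafka-src.kafka.bootstrap.servers = broker1:9092
agent1.sources.kafka-src.kafka.topics = three-partition-topic
agent1.sources.kafka-src.channels = mem-ch

agent1.channels.mem-ch.type = memory

agent1.sinks.hdfs-sink.type = hdfs
agent1.sinks.hdfs-sink.channel = mem-ch
# no header variables in the path, so each delivered batch rolls into a single file
agent1.sinks.hdfs-sink.hdfs.path = /flume/output
agent1.sinks.hdfs-sink.hdfs.fileType = DataStream
```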
03-20-2019 09:30 AM
The snapshots are part of the indexes, representing a point-in-time list of the segments in the index. When you perform the backup, the metadata (information about the cluster) and the specified snapshot indicate which set of index files is to be backed up/copied to the destination HDFS directory (as specified in the <backup> section of the source solr.xml). This blog walks through the process: https://blog.cloudera.com/blog/2017/05/how-to-backup-and-disaster-recovery-for-apache-solr-part-i/

When you run --prepare-snapshot-export, it creates a copy of the metadata and a copy listing of all the files that will be copied by the distcp command to the remote cluster. Then, when you execute the snapshot export, the distcp command copies those files to the remote cluster. The -b on the restore command is just the name of the directory (represented by the snapshot name) that was created and copied by distcp. -pd
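To make the sequence concrete, here is a rough sketch of the flow, assuming a CDH-style solrctl workflow; the collection, snapshot, and path names are placeholders, and the exact flags should be verified against the documentation for your release:

```shell
# 1. prepare the export: writes the snapshot metadata and a copy-listing to HDFS
solrctl collection --prepare-snapshot-export mySnapshot -c myCollection -d /backups/solr

# 2. copy the listed index files to the remote cluster with distcp
hadoop distcp hdfs://source-nn/backups/solr hdfs://remote-nn/backups/solr

# 3. restore on the remote cluster; -b is the directory (the snapshot name) created above
solrctl collection --restore myRestoredCollection -l /backups/solr -b mySnapshot -i restore-req-1
```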
03-13-2019 06:52 PM
Please create a new thread for distinct questions, instead of bumping an older, resolved thread.

As to your question, the error is clear, as is the documentation, quoted below:

"""
Spooling Directory Source

This source lets you ingest data by placing files to be ingested into a “spooling” directory on disk. This source will watch the specified directory for new files, and will parse events out of new files as they appear. The event parsing logic is pluggable. After a given file has been fully read into the channel, it is renamed to indicate completion (or optionally deleted).

Unlike the Exec source, this source is reliable and will not miss data, even if Flume is restarted or killed. In exchange for this reliability, only immutable, uniquely-named files must be dropped into the spooling directory. Flume tries to detect these problem conditions and will fail loudly if they are violated:

- If a file is written to after being placed into the spooling directory, Flume will print an error to its log file and stop processing.
- If a file name is reused at a later time, Flume will print an error to its log file and stop processing.
"""

- https://archive.cloudera.com/cdh5/cdh/5/flume-ng/FlumeUserGuide.html#spooling-directory-source

It appears that you can get around this by using ExecSource with a script or command that reads the files, but you'll have to sacrifice reliability. It may instead be worth investing in an approach that makes filenames unique (`uuidgen`-named softlinks in another folder, etc.), as sketched below.
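For that last suggestion, a minimal sketch of the uuidgen/softlink idea, assuming the upstream process drops finished files into /data/incoming and Flume's spoolDir points at /data/spool; the paths and naming convention are made up for illustration:

```shell
#!/bin/sh
# Link each finished input file into Flume's spooling directory under a unique,
# immutable name, then park the original so it is never linked twice.
INCOMING=/data/incoming          # where the upstream process drops finished files
LINKED=/data/incoming/linked     # originals that have already been linked
SPOOL=/data/spool                # Flume spoolDir (agent.sources.src.spoolDir)

mkdir -p "$LINKED"
for f in "$INCOMING"/*.log; do
  [ -f "$f" ] || continue
  mv "$f" "$LINKED/"
  ln -s "$LINKED/$(basename "$f")" "$SPOOL/$(uuidgen).log"
done
```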
02-27-2019 11:29 AM
That's odd that the VM is read-only... Are you making the change in CM, in the Flume logging safety valve? -pd
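If it helps, here is a sketch of the kind of entry that typically goes into that safety valve (the logging advanced configuration snippet for log4j.properties in CM); the logger name and level are only an example:

```properties
# example safety-valve entry: raise the HDFS sink's logging to DEBUG
log4j.logger.org.apache.flume.sink.hdfs=DEBUG
```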
01-28-2019 05:33 PM
Thanks. I'll try it the way you told me.
01-23-2019 06:18 PM
It does decrease, but I was monitoring the size of the directory and expected it to be close to empty when done draining. It's good to know it just keeps the last two logs, and I also noticed it creates a new empty checkpoint when it's done. Thanks for the help!