Member since: 07-15-2015
Posts: 43
Kudos Received: 1
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 7256 | 01-31-2018 02:48 AM
 | 2660 | 10-29-2017 10:08 PM
 | 10932 | 05-09-2017 06:53 AM
 | 4575 | 01-31-2017 10:17 PM
09-20-2020 03:32 AM
@ValiD_M Did you find any solution for this?
03-10-2020 05:21 AM
When the ranger_audits collection was created in Solr, the Solr plugin's test connection in the Ranger UI throws the error "Authentication Required".
03-02-2020 11:42 PM
Hi, I'm using Cloudera Data Platform 7.0.3. I'm getting the error below when testing the connection from the Ranger Solr plugin: "Problem accessing /solr/admin/collections. Reason: Authentication required". Please help me out. Thanks in advance!
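One hedged way to narrow this down, assuming a Kerberized CDP 7 cluster where Solr is protected by SPNEGO, is to hit the same admin endpoint directly with curl (`<solr_host>` and `<your_principal>` are placeholders, not values from this thread):

```shell
# An unauthenticated request should reproduce the "Authentication required" error:
curl -i "http://<solr_host>:8983/solr/admin/collections?action=LIST"

# A Kerberos-authenticated (SPNEGO) request should succeed if the principal is valid:
kinit <your_principal>
curl -i --negotiate -u : "http://<solr_host>:8983/solr/admin/collections?action=LIST"
```

If the second call also fails, the problem is likely with the principal/keytab that Ranger uses for the lookup, rather than with the Ranger plugin configuration itself.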
02-26-2018 11:16 PM
Connect with bin/zookeeper-shell.sh <zookeeper_host>:2181. If you want to delete only topics, run rmr /brokers/topics; for deleting the broker data, run rmr /brokers.
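The steps above can be sketched as one-off commands (zookeeper-shell passes trailing arguments straight to the ZooKeeper CLI); "test" is a hypothetical topic name, and on newer ZooKeeper releases `deleteall` replaces the deprecated `rmr`:

```shell
# Delete the znode for a single topic (here the hypothetical topic "test"):
bin/zookeeper-shell.sh <zookeeper_host>:2181 rmr /brokers/topics/test

# Or delete all topic znodes at once:
bin/zookeeper-shell.sh <zookeeper_host>:2181 rmr /brokers/topics

# Removing /brokers entirely also wipes broker registrations, so brokers
# must be restarted afterwards to re-register.
bin/zookeeper-shell.sh <zookeeper_host>:2181 rmr /brokers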
01-31-2018 02:48 AM
1 Kudo
Resolved by deleting the brokers' data from ZooKeeper. Thanks.
01-30-2018 10:10 PM
Hi,
I've been having problems with the console consumer on CDH 5.13.0. The other versions, CDH 5.11.0 and CDH 5.12.0 with Kafka version 0.10.2-kafka-2.2.0, work fine. I've also added advertised.listeners, and I'm running in a non-secure environment. I observed that client.id is blank for the console consumer, whereas it is client.id = console-producer for the producer. I need to work with CDH 5.13.0, please help me out.
The old API consumer works when I run
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
However, the new API consumer returns nothing when I run
bin/kafka-console-consumer.sh --new-consumer --topic test --from-beginning --bootstrap-server localhost:9092
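Since the new API consumer goes through --bootstrap-server rather than ZooKeeper, one hedged sanity check (relevant here, given that the accepted fix on this thread was deleting stale broker data from ZooKeeper) is to confirm the brokers are registered correctly; broker id 0 below is an assumption:

```shell
# List the broker ids currently registered in ZooKeeper:
bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids

# Inspect one registration; stale host/port entries here can leave the
# new consumer unable to reach any broker via the bootstrap server.
bin/zookeeper-shell.sh localhost:2181 get /brokers/ids/0
```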
Labels:
- Apache Kafka
- Apache Zookeeper
10-29-2017 10:08 PM
In my case, the storepass and keypass had to be the same for the Solr keystore.
05-09-2017 07:10 AM
Hi,
ISSUE: Requested data length 146629817 is longer than maximum configured RPC length 134217728
Earlier, ipc.maximum.data.length was 64 MB and we got the same error, so we changed it to 128 MB. Now it has been exceeded again, resulting in data corruption/missing-data issues. Is there a maximum configurable value for ipc.maximum.data.length? Can we raise it above 128 MB?
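For reference, the property lives in core-site.xml on the NameNode (in Cloudera Manager it would go into the core-site safety valve) and takes effect after a NameNode restart. The 256 MB value below is illustrative only, not a recommendation; raising the limit treats the symptom, and the usual root cause is an oversized RPC payload such as a block report from a datanode carrying too many blocks:

```xml
<!-- Illustrative only: raises the maximum RPC payload to 256 MB (268435456 bytes). -->
<property>
  <name>ipc.maximum.data.length</name>
  <value>268435456</value>
</property>
```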
Thanks in advance
Labels:
- Apache Hadoop
05-09-2017 06:53 AM
Yes Harsh, it's the number of blocks. The block count was 6 million. I deleted the unwanted small files and now the cluster health is good. Is there any limit on how many blocks a datanode should have?
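A hedged way to keep an eye on the block count, assuming HDFS CLI access on the cluster, is fsck's summary line (fsck prints a "Total blocks (validated)" figure in its summary):

```shell
# Cluster-wide block count from the fsck summary:
hdfs fsck / | grep -i 'Total blocks'
```

Per-datanode block counts are also visible in the NameNode web UI's datanode list and in Cloudera Manager's block-count health test, which is where the warning threshold mentioned in this thread is configured.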
05-09-2017 03:01 AM
Thanks HarshJ for your reply. I am seeing this issue in the NameNode log. The CDH version is 5.7.1. The block count reached ~6 million; how many blocks can a datanode handle, and how large a block report can the namenode receive? I saw the block count threshold set to 300,000 (3 lakh) in Cloudera. Can you please explain the block report format and length?