Member since: 12-17-2018
Posts: 14
Kudos Received: 1
Solutions: 1

My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 7111 | 11-13-2017 03:17 PM |
07-05-2018 10:15 AM
Try adding the two lines below to define the scan range (note that the HBase Scan API method is `setStopRow`, not `setEndRow`):

```java
scan.setStartRow(org.apache.hadoop.hbase.util.Bytes.toBytesBinary(prefixFilterValue));
scan.setStopRow(org.apache.hadoop.hbase.util.Bytes.toBytesBinary(prefixFilterValue.concat(String.valueOf(Long.MAX_VALUE))));
```
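For context, here is a minimal, self-contained sketch of how such a range scan could be wired up; the table name and prefix value are hypothetical placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PrefixRangeScan {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        String prefixFilterValue = "user123|"; // hypothetical row-key prefix
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("my_table"))) { // hypothetical table
            Scan scan = new Scan();
            // Start at the first row key matching the prefix...
            scan.setStartRow(Bytes.toBytesBinary(prefixFilterValue));
            // ...and stop before the first key past the prefixed range.
            scan.setStopRow(Bytes.toBytesBinary(
                    prefixFilterValue.concat(String.valueOf(Long.MAX_VALUE))));
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result result : scanner) {
                    System.out.println(Bytes.toString(result.getRow()));
                }
            }
        }
    }
}
```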
07-04-2018 02:17 PM
What is your input? Are you trying to access all rows with the latest timestamp, or a specific record based on the input?
05-14-2018 03:22 PM
Wondering why HDP 2.6 (i.e., Hadoop 2.7.3) ships with Avro 1.7.4, a version that does not support serialization. https://hadoop.apache.org/docs/r2.7.3/hadoop-mapreduce-client/hadoop-mapreduce-client-core/dependency-analysis.html
12-13-2017 11:20 PM
Can you please share your Kafka logs? Please raise the log level to DEBUG in tools-log4j.properties. Wondering if your broker.id changed, or whether the broker is still live.
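For reference, a sketch of the relevant change in `tools-log4j.properties`; the stock file layout may differ slightly across Kafka versions, so treat this as an approximation:

```
# tools-log4j.properties: raise the root logger from WARN (default) to DEBUG
log4j.rootLogger=DEBUG, stderr

log4j.appender.stderr=org.apache.log4j.ConsoleAppender
log4j.appender.stderr.layout=org.apache.log4j.PatternLayout
log4j.appender.stderr.layout.ConversionPattern=[%d] %p %m (%c)%n
```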
12-12-2017 08:34 PM
1. What version of Kafka are you running?
2. What error are you getting in the console?
3. Also use --new-consumer explicitly, and share the errors after producing some messages while the console consumer is running (see the example command below).
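For example, a console-consumer invocation with the new consumer might look like this (broker host, port, and topic are placeholders):

```
# --new-consumer (Kafka 0.9/0.10) uses the broker-based consumer,
# so it takes --bootstrap-server instead of --zookeeper
bin/kafka-console-consumer.sh --new-consumer \
  --bootstrap-server broker1:6667 \
  --topic test-topic
```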
12-10-2017 10:38 PM
1. Try PLAINTEXTSASL instead of SASL_PLAINTEXT (see the properties sketch below).
2. Do you have Ranger as well? If so, please retry after restarting the Ranger service.
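If you are testing with the console tools, one way to try this is through a consumer properties file passed to the tool; the file name here is a placeholder, and `security.protocol` is a standard consumer config:

```
# consumer.properties (placeholder file name)
# HDP-era Kafka accepts PLAINTEXTSASL as the protocol name
security.protocol=PLAINTEXTSASL
```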
11-21-2017 10:45 PM
Change this line from:

```java
consumerConfig.put("security.protocol", "PLAINTEXTSASL");
```

to:

```java
consumerConfig.put("security.protocol", "SASL_PLAINTEXT");
```

Reference: https://kafka.apache.org/090/documentation.html (search for security.protocol)
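For context, here is a minimal sketch of how that property fits into a 0.9-style consumer setup; the bootstrap servers, group id, and topic are placeholders:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SecureConsumerConfig {
    public static void main(String[] args) {
        Properties consumerConfig = new Properties();
        consumerConfig.put("bootstrap.servers", "broker1:6667"); // placeholder
        consumerConfig.put("group.id", "example-group");         // placeholder
        consumerConfig.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        consumerConfig.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // SASL_PLAINTEXT is the Apache Kafka name; PLAINTEXTSASL is the
        // older HDP-specific alias that the vanilla client does not accept.
        consumerConfig.put("security.protocol", "SASL_PLAINTEXT");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerConfig)) {
            consumer.subscribe(Collections.singletonList("test-topic")); // placeholder
        }
    }
}
```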
11-13-2017 05:46 PM
You can set values for the following attributes:

```
offsets.topic.num.partitions=50
offsets.topic.replication.factor=3
```

Losing this topic may prevent the Kafka broker from starting up or serving consumer requests (metadata fetch failures).
11-13-2017 03:17 PM
Starting from version 0.8.2.0, the offsets committed by the consumers aren't saved in ZooKeeper but in a partitioned and replicated internal topic named __consumer_offsets, which is hosted on the Kafka brokers in the cluster. When a consumer commits offsets, it sends a message to the broker on the __consumer_offsets topic with the following structure:

```
key   = [group, topic, partition]
value = offset
```

If the consumer process dies, it can start up again and resume reading where it left off, based on the offset stored in __consumer_offsets; or, as discussed, another consumer in the consumer group can take over.
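To make this concrete, here is a minimal sketch of a consumer that commits offsets and would resume from __consumer_offsets after a restart; the bootstrap server, group id, and topic are placeholders:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class OffsetResumeExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:6667"); // placeholder
        props.put("group.id", "example-group");         // the "group" part of the offset key
        props.put("enable.auto.commit", "false");       // commit explicitly below
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic")); // placeholder
            for (int i = 0; i < 10; i++) { // poll a few batches for illustration
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
                // Writes [group, topic, partition] -> offset to __consumer_offsets;
                // a restarted (or replacement) consumer in the same group resumes here.
                consumer.commitSync();
            }
        }
    }
}
```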
11-12-2017 11:12 AM
1. Please double-check the listener port; the default is 6667 in HDP 2.6. Change it to 9092 and restart.
2. Try using ZooKeeper for the console consumer; it works for me (see the example below).
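For example, the old (ZooKeeper-based) console consumer can be invoked like this; the host and topic are placeholders:

```
# The old consumer connects via ZooKeeper rather than the brokers
bin/kafka-console-consumer.sh --zookeeper zk1:2181 \
  --topic test-topic --from-beginning
```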