Member since: 11-30-2016
Posts: 33
Kudos Received: 5
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 34427 | 02-28-2017 04:58 PM
10-17-2018
10:13 AM
@Param NC, I am facing the same issue when trying to start a SparkSession on YARN. Did you solve this?
03-18-2017
12:51 AM
1 Kudo
@Param NC - There is no way to close a question. Once you have found a suitable answer to a question, you can Accept the answer. However, there is an option to Unfollow the question (see screenshot), so you will no longer receive notifications from that question. Hope this helps.
03-15-2017
05:35 PM
@Param NC, please close this thread by accepting the answer, and consider asking a new question.
03-08-2017
07:08 PM
1 Kudo
@Param NC From the command I can see that you are using --new-consumer to describe the consumer group. When the new consumer is used, the tool tries to fetch consumer-group info from the internal consumer offsets topic, which is created in the Kafka log directory. Try using --zookeeper instead of --new-consumer, e.g.:
$ /usr/bin/kafka-consumer-groups.sh --zookeeper <zookeeper-hostname>:2181 --describe --group <consumer-group>
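If you do want to stay on the new consumer, the tool is pointed at a broker instead of ZooKeeper. A minimal sketch; the broker hostname is a placeholder and port 6667 (the HDP default) is an assumption, so adjust to your listener port:

```
# With the new consumer, group offsets live in the internal __consumer_offsets topic,
# so the tool needs a broker endpoint (--bootstrap-server) rather than ZooKeeper.
/usr/bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server <broker-hostname>:6667 \
  --describe --group <consumer-group>
```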
01-17-2018
10:43 PM
I am not able to find the spark.hadoop.yarn.* properties. These properties are not listed in any Spark documentation. Please help: where can I find the list of spark.hadoop.yarn.* properties?
06-28-2017
11:17 AM
We encountered a similar issue when upgrading our Ambari from 2.4 to 2.5. Our Kafka brokers would not restart. Here was the error message:
/var/log/kafka/server.log.2017-06-27-19: java.lang.IllegalArgumentException: requirement failed: security.inter.broker.protocol must be a protocol in the configured set of advertised.listeners. The valid options based on currently configured protocols are Set(SASL_PLAINTEXT)
We had specified PLAINTEXTSASL as the SASL protocol in the configuration. To fix this we changed the following configuration in Custom kafka-broker:
security.inter.broker.protocol=SASL_PLAINTEXT
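For reference, a minimal way to confirm what the broker actually picked up after Ambari pushes the change; the path below is the typical HDP location and is an assumption, not something taken from this thread:

```
# Show the inter-broker protocol and listener settings the broker is starting with
grep -E '^(security\.inter\.broker\.protocol|listeners|advertised\.listeners)=' \
  /etc/kafka/conf/server.properties

# After the fix the relevant line should read:
# security.inter.broker.protocol=SASL_PLAINTEXT
```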
12-14-2016
06:11 PM
1 Kudo
In theory it would be one per client; I am not sure whether you should try 180 or 1800, but both are pretty high. You should set it to the number of CPU cores available across the region servers. Depending on data size, perhaps you need more nodes in the cluster?
How big is this data? What version of HBase? What version of Hadoop? What JDK version? How much RAM on the nodes? How big is each region? Did you restart after changing the parameter?
Are you looking at the HBase Master, JMX, logs, stack traces and the other diagnostics provided by HBase? Ambari and any other monitoring tools you have may also help. Anything in the logs?
Do you need the endpoint coprocessor? Can you scan the data? Read it with Spark? Read it with NiFi? Or read it through Phoenix as a SQL query? (A quick-check sketch follows below.)
Read: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_hadoop-high-availability/content/config-ha-reads-hbase.html
See:
https://svn.apache.org/repos/asf/hbase/hbase.apache.org/trunk/0.94/performance.html#
http://hbase.apache.org/0.94/book/important_configurations.html#recommended_configurations
http://www.slideshare.net/lhofhansl/h-base-tuninghbasecon2015ok
http://hbase.apache.org/0.94/book/important_configurations.html
https://community.hortonworks.com/articles/46220/phoenix-hbase-tuning-quick-hits.html
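A minimal sanity-check sketch for the scan/diagnostics questions above; 'mytable' is a placeholder table name, so substitute your own:

```
# Per-regionserver load and region counts, as reported by the HBase Master
echo "status 'detailed'" | hbase shell

# Confirm a plain scan works at all (a plain scan does not invoke endpoint coprocessors)
echo "scan 'mytable', {LIMIT => 10}" | hbase shell
```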