Member since: 06-07-2016
Posts: 923
Kudos Received: 322
Solutions: 115

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3958 | 10-18-2017 10:19 PM |
| | 4225 | 10-18-2017 09:51 PM |
| | 14550 | 09-21-2017 01:35 PM |
| | 1747 | 08-04-2017 02:00 PM |
| | 2329 | 07-31-2017 03:02 PM |
06-16-2017
12:13 PM
@Stevens Yeung Can you please try the following:

beeline -u jdbc:hive2:// --hivevar rdate=112211 -e "select 9${hivevar:rdate}9"
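As a sanity check, the substitution the beeline command relies on can be mimicked in plain shell: Hive resolves `${hivevar:rdate}` much the way the shell resolves `${rdate}` below. The value 112211 and the surrounding 9s are just the examples from the command above.

```shell
# Plain-shell illustration of the variable substitution beeline performs
# for ${hivevar:rdate}; rdate=112211 comes from the example command.
rdate=112211
query="select 9${rdate}9"
echo "$query"   # prints: select 91122119
```

If the substitution works, the query Hive actually executes is `select 91122119`, which is an easy way to confirm the hivevar plumbing before using it in a real query.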
06-16-2017
02:47 AM
@Karan Alang What you are doing is right. "--new-consumer" should make it work if you are using the new consumer. Can you please try the following again:

kafka-run-class.sh kafka.admin.ConsumerGroupCommand --new-consumer --describe --group myGroup --zookeeper localhost:2181

If that doesn't help, check your ZooKeeper. Run the ZooKeeper CLI and do an ls on /consumers/myGroup/offsets/<topicname>. You should get something like [0]. Then run a get command on that offset node, something like:

get /consumers/myGroup/offsets/<topicname>/0

The first thing you'll see is your offset. If it's there, paste the result and we'll go from there.
06-14-2017
05:54 PM
@Karan Alang Yes, new brokers can be added while Kafka is online, and partitions can be reassigned to these new brokers. One thing to remember is that you are not creating new partitions. If you create new partitions, and you have keyed messages where applications require data to be in order, you will lose the ordering guarantees: for keyed messages, Kafka makes sure that a particular key always lands on a particular partition, and adding new partitions breaks this behavior. But you can easily add new brokers and assign existing partitions to them in order to balance the cluster. https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-6.ReassignPartitionsTool
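For reference, the reassignment tool linked above takes a JSON plan describing which brokers each existing partition should live on. A minimal sketch follows; the topic name "myTopic" and broker ids 1, 2, and 4 are made-up examples, not values from this thread.

```shell
# Hypothetical reassignment plan for kafka-reassign-partitions.sh.
# "myTopic" and broker ids 1, 2, 4 (4 being the newly added broker)
# are illustrative only. Note: existing partitions are moved; no new
# partitions are created, so key-to-partition ordering is preserved.
cat > /tmp/reassign.json <<'EOF'
{"version": 1,
 "partitions": [
   {"topic": "myTopic", "partition": 0, "replicas": [1, 4]},
   {"topic": "myTopic", "partition": 1, "replicas": [2, 4]}
 ]}
EOF
# You would then run this against a live cluster (not executed here):
# kafka-reassign-partitions.sh --zookeeper localhost:2181 \
#     --reassignment-json-file /tmp/reassign.json --execute
```

The same file can be passed with --verify after --execute to confirm the moves completed.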
06-05-2017
06:03 AM
@zkfs
What's your heap size? This is a Java garbage collection issue, and it usually happens in the Eden space. What's the size of your young generation (the JVM parameter "NewSize")? In fact, you can see it's the ParNew (parallel new) garbage collector that is unable to allocate memory, and ParNew works on the young generation. You probably have to allocate more memory.
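For context, the heap and young-generation sizes the post asks about are set with JVM flags like the following. The variable name and all values are purely illustrative assumptions; the actual env variable and sizing depend on the service and workload.

```shell
# Illustrative JVM sizing flags only; tune values for your workload,
# and set them in whatever *_OPTS variable your service reads.
# -Xms/-Xmx: total heap size; NewSize/MaxNewSize: young generation,
# which is where the ParNew collector mentioned above operates.
JAVA_OPTS="-Xms4g -Xmx4g -XX:NewSize=1g -XX:MaxNewSize=1g"
echo "$JAVA_OPTS"
```

A larger NewSize gives ParNew more room in the young generation, which is one way to address "unable to allocate" failures there.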
05-29-2017
05:48 AM
@Suhel So here is your problem:

INFO balancer.Balancer: namenodes = [hdfs://belongcluster1, hdfs://belongcluster1:8020]

There should be only a single entry here (the active namenode), but it's showing both. Do you have the following property in your configs (exactly this property): "dfs.namenode.rpc-address"?
05-29-2017
05:24 AM
@Suhel Actually, don't delete anything. Your version of Ambari does not seem to be affected by this bug. Try the following:

sudo -u hdfs -b hdfs balancer -fs hdfs://belongcluster1:8020 -threshold 5

My guess is you were only missing the port number. Can you please try it?
05-29-2017
02:43 AM
@Suhel I think you should have only one value, and it should point to your "nameservice". You should have a value for the nameservice when you have HA enabled. See the following link (third row) for how this works in HA: https://hadoop.apache.org/docs/r2.4.1/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
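A hedged sketch of the HA-style hdfs-site.xml entries being described: the nameservice name "belongcluster1" comes from this thread, but the namenode ids nn1/nn2 and the example.com hostnames are assumptions for illustration. The snippet is written to a temp file here only so it can be inspected.

```shell
# Sketch of the per-nameservice rpc-address properties used under HA.
# "belongcluster1" is the nameservice from the thread; nn1/nn2 and
# the hostnames are hypothetical.
cat > /tmp/hdfs-site-ha-snippet.xml <<'EOF'
<property>
  <name>dfs.nameservices</name>
  <value>belongcluster1</value>
</property>
<property>
  <name>dfs.ha.namenodes.belongcluster1</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.belongcluster1.nn1</name>
  <value>nn1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.belongcluster1.nn2</name>
  <value>nn2.example.com:8020</value>
</property>
EOF
grep -c '<property>' /tmp/hdfs-site-ha-snippet.xml   # prints: 4
```

With HA configured this way, clients such as the balancer address the cluster as hdfs://belongcluster1 (the nameservice) rather than an individual namenode host.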
05-29-2017
02:26 AM
@Suhel Check this link if you have the same issue: https://community.hortonworks.com/articles/4595/balancer-not-working-in-hdfs-ha.html
05-29-2017
12:54 AM
@Suhel
Do you also have a standby namenode? Can you try the following:

sudo -u hdfs -b hdfs balancer -fs hdfs://<your name node>:8020 -threshold 5
05-29-2017
12:21 AM
@Suhel
sudo -u hdfs -b hdfs balancer -threshold 5

What do you have "-b" for in this command? Shouldn't this be:

sudo -u hdfs hdfs balancer -threshold 5