Member since: 09-23-2015
Posts: 81
Kudos Received: 108
Solutions: 41
My Accepted Solutions
Title | Views | Posted
---|---|---
| 5702 | 08-17-2017 05:00 PM
| 2263 | 06-03-2017 09:07 PM
| 2783 | 03-29-2017 06:02 PM
| 5279 | 03-07-2017 06:16 PM
| 1959 | 02-26-2017 06:30 PM
10-02-2015
04:13 PM
We don't have a specific JIRA for this. We are planning to bring the Apache trunk security changes into our branch, which will add the changes MirrorMaker needs to work in a secure setup.
10-02-2015
03:50 PM
We are aiming for Dal-M20, but this is not a high priority at this point.
10-01-2015
11:52 PM
2 Kudos
1. Make sure the UI Kerberos auth-to-local rules are configured properly. Once a principal from AD is used for negotiation with the MIT KDC, there needs to be a rule that translates it to a local account on the Storm UI node. Often these can be copied from core-site.xml. For example:

ui.filter.params:
"type": "kerberos"
"kerberos.principal": "HTTP/nimbus.witzend.com"
"kerberos.keytab": "/vagrant/keytabs/http.keytab"
"kerberos.name.rules": "RULE:[2:$1@$0]([jt]t@.*EXAMPLE.COM)s/.*/$MAPRED_USER/ RULE:[2:$1@$0]([nd]n@.*EXAMPLE.COM)s/.*/$HDFS_USER/ DEFAULT"

Note that the rules are listed as a single string without commas.

2. You will need to create a mapping between the MIT KDC domain and the resource used for that domain, in this case the Storm UI. Execute the following commands on the Windows workstation from the command line:

ksetup /AddKDC $DOMAIN $KDC
ksetup /AddHostToRealmMap $hadoop_resource $Domain

Note that this adds registry entries under HKLM\System\CurrentControlSet\Control\Lsa\Kerberos\HostToRealm.

If you need to troubleshoot, try accessing the Storm UI from within the cluster using curl. For example:

curl -i --negotiate -u:anyUser -b ~/cookiejar.txt -c ~/cookiejar.txt http://storm-ui-hostname:8080/api/v1/cluster/summary

This helps verify whether the Kerberos UI configs are working. To isolate the issue you can use the Storm service keytabs as well as user principals. Another important thing to check is that the trust is working properly and that the encryption types match on both KDCs.
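To see what those name rules actually do, here is a minimal Python sketch of the auth-to-local translation (the [2:$1@$0] format plus an s/pattern/replacement/ substitution, then DEFAULT). It is only a model, not Hadoop's implementation, and the local accounts "mapred" and "hdfs" are assumed stand-ins for $MAPRED_USER and $HDFS_USER:

```python
import re

def translate_principal(principal, rules):
    """Translate service/host@REALM to a local user, mimicking the
    Hadoop-style auth-to-local rules quoted above: apply the [2:$1@$0]
    format, match a regex, run an s/pattern/replacement/, else DEFAULT."""
    primary, rest = principal.split("/", 1)
    realm = rest.split("@", 1)[1]
    short = "%s@%s" % (primary, realm)      # result of the [2:$1@$0] format
    for match_re, pattern, repl in rules:
        if re.fullmatch(match_re, short):
            return re.sub(pattern, repl, short, count=1)
    return primary                          # DEFAULT: keep just the primary

# The two RULEs from the answer, with $MAPRED_USER / $HDFS_USER resolved
# to hypothetical local accounts "mapred" and "hdfs".
rules = [
    (r"[jt]t@.*EXAMPLE\.COM", r".*", "mapred"),
    (r"[nd]n@.*EXAMPLE\.COM", r".*", "hdfs"),
]

print(translate_principal("jt/host1.example.com@EXAMPLE.COM", rules))  # mapred
print(translate_principal("nn/host2.example.com@EXAMPLE.COM", rules))  # hdfs
print(translate_principal("HTTP/ui.example.com@EXAMPLE.COM", rules))   # HTTP
```

The last call falls through to DEFAULT, which is why the HTTP principal needs its own rule (or DEFAULT plus a matching local account) on the Storm UI node.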
10-01-2015
07:16 PM
4 Kudos
It shouldn't be a memory constraint. Did you try deleting the storm-local directory? You can find its location under storm.local.dir in /etc/storm/conf/storm.yaml. Stop the Storm services, delete this directory, and restart Nimbus.
10-01-2015
02:02 PM
1 Kudo
The partitions will be distributed among the Kafka nodes. If you created a topic with 3 partitions and you have 3 Kafka nodes, then each node will get a single topic partition and will be the leader for it. When there is no key in the messages you are writing, the Kafka client picks a node round-robin, writes to that topic partition, and moves on to the next one. If you provide a key, it does hash-based partitioning to determine which topic partition to write to.
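The two strategies can be sketched in a few lines of Python. This is a simplified model of the client-side choice, not the actual Kafka partitioner (newer Java clients hash keys with murmur2, for instance); CRC32 here just stands in for "a stable hash":

```python
import zlib
from itertools import count

NUM_PARTITIONS = 3      # e.g. a topic with 3 partitions across 3 nodes
_next = count()

def pick_partition(key=None):
    """No key: round-robin across partitions. With a key: a stable hash,
    so the same key always lands on the same partition."""
    if key is None:
        return next(_next) % NUM_PARTITIONS
    return zlib.crc32(key.encode()) % NUM_PARTITIONS

print([pick_partition() for _ in range(4)])   # [0, 1, 2, 0] -- round-robin wraps
assert pick_partition("user-42") == pick_partition("user-42")
```

The practical consequence: keyed messages give you per-key ordering (all messages for one key hit one partition), while keyless messages just get spread evenly.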
10-01-2015
06:29 AM
1 Kudo
Even if there is no replication, there will be a leader. The client still makes a call to broker.list to find out who the leader is for a given topic partition. Partitions are spread across the cluster, so the client still needs to know the leader of each partition, irrespective of replication.
10-01-2015
02:31 AM
No, you cannot share the same topic partition among multiple spout executors. If your parallelism is lower than the number of topic partitions, each Kafka spout executor will get multiple partitions to read from. Any reason you are looking to do this?
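A quick sketch of that assignment, assuming a simple round-robin spread (illustrative only, not KafkaSpout's exact assignment code):

```python
def assign_partitions(num_partitions, num_executors):
    """Spread topic partitions over spout executors round-robin, as
    happens when parallelism is lower than the partition count."""
    assignment = {e: [] for e in range(num_executors)}
    for p in range(num_partitions):
        assignment[p % num_executors].append(p)
    return assignment

# 6 partitions, 2 spout executors -> each executor reads 3 partitions
print(assign_partitions(6, 2))   # {0: [0, 2, 4], 1: [1, 3, 5]}
```

Each partition lands on exactly one executor, which is precisely why a partition cannot be shared.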
09-30-2015
09:00 PM
2 Kudos
broker.list is used by Kafka consumers and producers for bootstrapping. The consumer or producer makes a request for TopicMetadata, which tells the client what the topic partitions are and who the leaders for those partitions are, so that the client can send requests to the leaders. To answer your question: the broker list will be shuffled, and the client goes through each of the hosts making a TopicMetadataRequest; if it succeeds it returns, and if not it continues to the next broker.
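That shuffle-and-retry loop can be sketched as follows. This is a toy model: fetch_topic_metadata is a hypothetical stand-in for a TopicMetadataRequest (a real client would open a socket to the broker), and the broker names are made up:

```python
import random

DOWN = {"kafka1:9092"}   # pretend this broker is unreachable

def fetch_topic_metadata(broker):
    """Stand-in for a TopicMetadataRequest; raises for brokers that
    are down (hypothetical, no real network I/O)."""
    if broker in DOWN:
        raise ConnectionError(broker)
    return {"answered_by": broker,
            "leaders": {0: "kafka2:9092", 1: "kafka3:9092"}}

def bootstrap(broker_list):
    """Shuffle broker.list and try each host in turn until one returns
    metadata, mirroring the behaviour described above."""
    brokers = list(broker_list)
    random.shuffle(brokers)
    for b in brokers:
        try:
            return fetch_topic_metadata(b)
        except ConnectionError:
            continue            # try the next broker
    raise RuntimeError("no broker in broker.list was reachable")

meta = bootstrap(["kafka1:9092", "kafka2:9092", "kafka3:9092"])
print(meta["answered_by"])      # kafka2:9092 or kafka3:9092
```

The point of the shuffle is that any live broker in the list is enough to bootstrap; after that, requests go to the partition leaders from the metadata, not to the bootstrap broker.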
09-30-2015
03:49 PM
3 Kudos
It depends on how the spout is implemented. Let's look at KafkaSpout: failed messages can occur for two reasons: 1. a downstream bolt failed to process the tuple and called collector.fail, or 2. a downstream bolt failed to acknowledge the tuple within topology.message.timeout.secs (30 seconds by default). In both cases you will see the spout's failed count go up, but KafkaSpout will replay the messages until they are acknowledged. If you are using acking with a storm-core topology, it guarantees at-least-once delivery, i.e. there might be duplicates but no message loss.
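The replay-until-acked behaviour can be shown with a toy spout model. This is not the real KafkaSpout code, just a sketch of the at-least-once semantics described above:

```python
class SketchSpout:
    """Toy model of replay-until-acked: a tuple stays pending until
    acked, so delivery is at-least-once (possible duplicates, no loss).
    Not the real KafkaSpout implementation."""
    def __init__(self, messages):
        self.pending = list(messages)   # not yet acknowledged
        self.emitted = []               # everything ever emitted

    def next_tuple(self):
        if self.pending:
            msg = self.pending[0]
            self.emitted.append(msg)    # a failed msg gets emitted again here
            return msg

    def ack(self, msg):
        self.pending.remove(msg)        # done: never replayed again

    def fail(self, msg):
        pass    # leave msg pending; next_tuple will replay it

spout = SketchSpout(["m1", "m2"])
spout.next_tuple()      # emits m1
spout.fail("m1")        # a bolt failed it (or the 30s timeout expired)
spout.next_tuple()      # m1 is replayed
spout.ack("m1")
spout.next_tuple()      # now m2
print(spout.emitted)    # ['m1', 'm1', 'm2']
```

Note how m1 appears twice in the emitted list: that duplicate is exactly the "at least once, but maybe more than once" guarantee.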
09-30-2015
02:02 PM
3 Kudos
Yes. Go to Hosts in Ambari, open the host, and click Add Service at the bottom of the page. From the list of available services you can select the Kafka broker to be installed on that node.