Member since: 02-07-2019 · Posts: 1792 · Kudos Received: 1 · Solutions: 0
12-10-2019
03:35 AM
In Kafka, topics are sometimes marked for deletion but are never actually deleted, even after restarting the brokers. This video explains the steps to delete such a topic manually.
Open the video on YouTube here
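The manual procedure typically looks like the following sketch. It assumes the broker-side delete never completed; the topic name, log directory, and ZooKeeper quorum address are placeholders — adapt them to your cluster.

```bash
# 1. Stop all Kafka brokers.

# 2. Remove the topic's znodes from ZooKeeper (quorum address is a placeholder):
zookeeper-client -server zk-host:2181 <<'EOF'
rmr /brokers/topics/myTopic
rmr /admin/delete_topics/myTopic
rmr /config/topics/myTopic
EOF

# 3. On every broker, delete the topic's log directories
#    (log.dirs path is a placeholder):
rm -rf /kafka-logs/myTopic-*

# 4. Restart the brokers.
```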
12-10-2019
03:34 AM
This video provides the steps to configure Kafka Mirror Maker in Kerberized clusters.
Open YouTube video here
To run Kafka Mirror Maker between Kerberized clusters, do the following:
Environment:
Cluster A (Source-Kerberized):
c189-node2.squadron-labs.com Broker1
c189-node3.squadron-labs.com Broker2
c189-node4.squadron-labs.com Broker3
Cluster B (Destination-Kerberized):
c289-node2.squadron-labs.com Broker1
c289-node3.squadron-labs.com Broker2
c289-node4.squadron-labs.com Broker3
* In this example, both clusters share the same KDC, and a dedicated principal was created for Mirror Maker.
Destination Files
consumer.properties
bootstrap.servers=<brokerSourceHost>:<brokerPort>,<brokerSourceHost>:<brokerPort>
group.id=<consumerGroupName>
security.protocol=PLAINTEXTSASL
producer.properties
bootstrap.servers=<brokerDestinationHost>:<brokerPort>,<brokerDestinationHost>:<brokerPort>
key.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
value.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
security.protocol=PLAINTEXTSASL
kafka_mirrormaker_jaas.conf
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="<pathTokeytab>"
storeKey=true
useTicketCache=false
serviceName="kafka"
principal="<principal>@<REALM>";
};
Destination commands
export KAFKA_OPTS="-Djava.security.auth.login.config=<path_to_kafka_mirrormaker_jaas.conf>"
./kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config <path_To_Consumer.properties>
--producer.config <path_To_producer.properties> --whitelist "<MirrorMakertopicName>"
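For example, a concrete invocation might look like the following (all paths and the topic name are placeholders; --num.streams is optional):

```bash
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_mirrormaker_jaas.conf"
./kafka-run-class.sh kafka.tools.MirrorMaker \
  --consumer.config /etc/kafka/consumer.properties \
  --producer.config /etc/kafka/producer.properties \
  --whitelist "mirrorTopic" --num.streams 2
```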
Number of consumption streams
Use the --num.streams option to specify the number of mirror consumer threads to create.
Source commands
./kafka-console-producer.sh --broker-list <brokerSourceHost>:<brokerPort>
--topic <MirrorMakertopicName> --security-protocol PLAINTEXTSASL
./kafka-console-consumer.sh --bootstrap-server <brokerDestinationHost>:<brokerPort>
--topic <MirrorMakertopicName> --security-protocol PLAINTEXTSASL
12-10-2019
03:33 AM
This video explains the steps to install the required support for the Intel® Intelligent Storage Acceleration Library (ISA-L) on a Hortonworks Data Platform version 3 cluster running CentOS 7.
Open the video on YouTube here
ISA-L (Intel® Intelligent Storage Acceleration Library) is a collection of optimized low-level functions targeting storage applications, developed by Intel.
ISA-L includes the following:
Erasure codes - Fast block Reed-Solomon type erasure codes for any encode/decode matrix in GF(2^8).
CRC - Fast implementations of cyclic redundancy check; six polynomials are supported: iscsi32, ieee32, t10dif, ecma64, iso64, jones64.
Raid - Calculate and operate on XOR and P+Q parity found in common RAID implementations.
Compression - Fast deflate-compatible data compression.
Decompression - Fast inflate-compatible data decompression.
To check for ISA-L library, do the following:
$ hadoop checknative
ISA-L: false Loading ISA-L failed: Failed to load libisal.so.2 (libisal.so.2: cannot open shared object file: No such file or directory)
For ISA-L support on CentOS 7:
$ yum install gcc make autoconf automake libtool yasm git
$ git clone https://github.com/01org/isa-l/
$ cd isa-l/
$ ./autogen.sh
$ ./configure --prefix=/usr --libdir=/usr/lib64
$ make
$ sudo make install
Alternatively, the library can be installed precompiled from the Hortonworks repositories.
To verify that the installation succeeded:
$ hadoop checknative
ISA-L: true /lib64/libisal.so.2
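Once checknative reports ISA-L as loaded, HDFS erasure coding can take advantage of the accelerated Reed-Solomon codecs. As an illustration (the directory path and the chosen policy are assumptions; available policies vary by cluster):

```bash
# List the erasure-coding policies known to the cluster:
hdfs ec -listPolicies

# Enable a Reed-Solomon policy and apply it to a directory:
hdfs ec -enablePolicy -policy RS-6-3-1024k
hdfs ec -setPolicy -path /data/cold -policy RS-6-3-1024k
```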
12-09-2019
01:23 AM
This video explains step by step instructions for protecting HDFS directories from unintended deletions.
Hadoop admins sometimes need to protect directories from accidental deletion, even by the admins themselves. This guide shows how to ensure that sensitive directories cannot be deleted by mistake, and that extra, double-checked procedures must be performed before those specific folders are actually deleted.
Open the video on YouTube here
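One common mechanism for this (the video's exact steps may differ) is the fs.protected.directories property in core-site.xml, which makes deletion of the listed directories fail while they are non-empty, even for a superuser. The paths below are examples only:

```xml
<property>
  <name>fs.protected.directories</name>
  <value>/apps/hive/warehouse,/user/critical-data</value>
</property>
```

After changing the property, restart the NameNode (or apply it through Ambari) for it to take effect.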
12-08-2019
09:39 PM
The video provides the steps to connect to the Kafka server using SASL_SSL protocol.
Open the video on YouTube here
To connect to the Kafka server over the SASL_SSL protocol with one-way SSL, do the following:
Server side
Configure the following properties in Ambari server > Kafka > config > Custom kafka-broker.
ssl.keystore.location=path-to-your-keystore
ssl.keystore.password=keystore-password
ssl.truststore.location=path-to-your-truststore
ssl.truststore.password=truststore-password
Under Ambari server > Kafka > config > Kafka Broker > Listeners, add the security protocol, for example: SASL_SSL://localhost:<port>
Since this is one-way SSL communication between client and server, ensure that the property ssl.client.auth=none is set, which means client authentication is not required (this is the default). It can be double-checked from Ambari console > Kafka > Configs, using the filter text box at the top right of the service screen.
Client Side
Create a file client.properties with the following content:
ssl.truststore.location=<pathToTrustStore> // This file must contain server rootCA
ssl.truststore.password=<trustStore password>
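If the client truststore does not exist yet, it can be built by importing the server's root CA certificate, for example with keytool (file names and the password below are placeholders):

```bash
keytool -importcert -alias kafkaRootCA -file rootCA.crt \
  -keystore /etc/security/truststore.jks -storepass changeit -noprompt
```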
Get a valid Kerberos ticket and execute new producer/consumer API as follows:
Producer
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list <brokerHost>:<brokerSASL_SSLPort>
--topic <topicName> --producer.config <path_To_client.properties> --security-protocol SASL_SSL
Consumer
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server <brokerHost>:<brokerSASL_SSLPort>
--topic <topicName> --consumer.config <path_To_client.properties> --security-protocol SASL_SSL
12-08-2019
09:37 PM
This video describes the steps performed to write custom UDFs in Hive.
Open the video on YouTube here
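As a rough sketch of the workflow around a custom UDF: compile the UDF class into a jar, then register and call it through Beeline. The jar name, class name, connection URL, and table are all assumptions for illustration:

```bash
# Upload the compiled UDF jar to HDFS:
hdfs dfs -put my-udf.jar /apps/hive/udfs/

# Register and invoke the function (class and table names are placeholders):
beeline -u "jdbc:hive2://hs2-host:10000/default" -e "
ADD JAR hdfs:///apps/hive/udfs/my-udf.jar;
CREATE TEMPORARY FUNCTION my_upper AS 'com.example.hive.MyUpperUDF';
SELECT my_upper(name) FROM employees LIMIT 5;"
```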
12-08-2019
09:34 PM
This video focuses on how to retrieve the cURL commands for various tasks performed in Ambari UI.
Open YouTube video here
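The same calls the Ambari UI makes can be replayed against the Ambari REST API with cURL. For instance (host, credentials, and cluster name are placeholders):

```bash
# List the services in a cluster:
curl -u admin:admin -H "X-Requested-By: ambari" \
  -X GET "http://ambari-host:8080/api/v1/clusters/myCluster/services"

# Stop a service (here Kafka) via the REST API:
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"RequestInfo":{"context":"Stop KAFKA"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  "http://ambari-host:8080/api/v1/clusters/myCluster/services/KAFKA"
```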
12-08-2019
09:32 PM
This video explains how to move the master service from one node to another using Ambari Web UI. The video also talks about the need for this option.
Open the video on YouTube here
12-08-2019
09:28 PM
This video describes why Beeline should be used instead of the Hive CLI when Ranger is in place; Beeline is the recommended client when Ranger is enabled in the cluster.
Open the video on YouTube here
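The key point is that Ranger authorization is enforced in HiveServer2, which Beeline connects through, while the Hive CLI talks to the metastore directly and bypasses those policies. A typical Beeline connection in a Kerberized cluster looks like this (host and realm are assumptions):

```bash
beeline -u "jdbc:hive2://hs2-host:10000/default;principal=hive/_HOST@EXAMPLE.COM"
```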
12-08-2019
09:25 PM
This video describes the configuration for Ranger SSL, reviewing the architecture and the steps needed to secure the Ranger Admin UI. Some basic troubleshooting tips are also covered.
Open YouTube video here