Member since: 05-20-2016
Posts: 155
Kudos Received: 220
Solutions: 30
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 5949 | 03-23-2018 04:54 AM
 | 2159 | 10-05-2017 02:34 PM
 | 1143 | 10-03-2017 02:02 PM
 | 7740 | 08-23-2017 06:33 AM
 | 2475 | 07-27-2017 10:20 AM
04-05-2017
10:06 AM
2 Kudos
In a secure cluster, we need to pass the hive principal name (hive/_HOST@EXAMPLE.COM) in the JDBC URL. Why is the hive principal name required? Can someone please help explain it?

jdbc:hive2://zkhost:2181/db;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_HOST@EXAMPLE.COM;

Documentation: https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-JDBCClientSetupforaSecureCluster

Please note that while this may not be needed for ZooKeeper discovery mode, if the provided JDBC URL points directly at HiveServer2, then the hive principal is a must.
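For illustration, a minimal Java sketch of connecting with such a URL, assuming the client already holds a valid Kerberos ticket (e.g. obtained via kinit); the host names, database and query below are placeholders, not from the original post:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcKerberosExample {
    public static void main(String[] args) throws Exception {
        // Kerberized HiveServer2 URL using ZooKeeper discovery, with the hive principal appended
        String url = "jdbc:hive2://zkhost:2181/db;serviceDiscoveryMode=zooKeeper;"
                + "zooKeeperNamespace=hiveserver2;principal=hive/_HOST@EXAMPLE.COM";
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW TABLES")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}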
Labels:
- Apache Hive
03-21-2017
06:47 AM
3 Kudos
You can use the Ambari API below:

curl -k "https://xyz.com:8443/api/v1/clusters/cl1?fields=Clusters/security_type" -u admin:admin
{
"href" : "https://xyz.com:8443/api/v1/clusters/cl1?fields=Clusters/security_type",
"Clusters" : {
"cluster_name" : "cl1",
"security_type" : "KERBEROS",
"version" : "HDP-2.5"
}
}
03-16-2017
06:55 AM
2 Kudos
The only approach I can see is to look for the number of NameNode components. Is there a better way of doing this?
curl -u admin:admin -H"X-Requested-by:ambari" -i -k -X GET "http://XXX.XXX.XXX.XXX:8080/api/v1/clusters/santhosh/services/HDFS/components/NAMENODE?fields=host_components/HostRoles/component_name"
03-16-2017
06:21 AM
2 Kudos
Can someone please let me know the Ambari API to find out whether HDFS is HA enabled or not?
Labels:
- Apache Ambari
- Apache Hadoop
03-08-2017
12:20 PM
10 Kudos
Storm 1.1.x provides an external storm-kafka-client module that we can use to build a Storm topology. Please note this supports Kafka 0.10 onwards. Below is a step-by-step guide on how to use the APIs.

Add the dependency below to your pom.xml:

<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-kafka-client</artifactId>
<version>1.1.1-SNAPSHOT</version>
</dependency>
The Kafka spout implementation for the topology is configured using KafkaSpoutConfig. Below is a sample config object creation.

KafkaSpoutConfig spoutConf = KafkaSpoutConfig.builder(bootStrapServers, topic)
.setGroupId(consumerGroupId)
.setOffsetCommitPeriodMs(10_000)
.setFirstPollOffsetStrategy(UNCOMMITTED_LATEST)
.setMaxUncommittedOffsets(1000000)
.setRetry(kafkaSpoutRetryService)
.setRecordTranslator(new TupleBuilder(), outputFields, topic)
.build();
The above class follows the builder pattern. bootStrapServers is the Kafka broker endpoint from which the consumer records are polled. topic is the Kafka topic name; it can also be a collection of topics (multiple topics) or a Pattern (regular expression). consumerGroupId sets the Kafka consumer group id (group.id). setFirstPollOffsetStrategy lets you control from where the consumer records should be fetched. It takes an enum as input, described below:

EARLIEST - the spout fetches the first offset of the partition, irrespective of any previous commit.
LATEST - the spout fetches records greater than the last offset in the partition, irrespective of any previous commit.
UNCOMMITTED_EARLIEST - the spout fetches the first offset of the partition, if there is no previous commit.
UNCOMMITTED_LATEST - the spout fetches records from the last offset, if there is no previous commit.
The kafkaSpoutRetryService implementation is provided below; it makes use of exponential back-off. setRetry provides a pluggable interface in case you want failed tuples to be retried differently.

KafkaSpoutRetryService kafkaSpoutRetryService = new KafkaSpoutRetryExponentialBackoff(
        KafkaSpoutRetryExponentialBackoff.TimeInterval.microSeconds(500),
        KafkaSpoutRetryExponentialBackoff.TimeInterval.milliSeconds(2),
        Integer.MAX_VALUE,
        KafkaSpoutRetryExponentialBackoff.TimeInterval.seconds(10));
setRecordTranslator provides a mechanism through which we can specify how the Kafka consumer records should be converted to tuples. In the example above, TupleBuilder implements the Func interface. Below is a sample implementation of the apply method that needs to be overridden. outputFields is the list of the fields that will be emitted in the tuple. Please note there are multiple ways to translate records to tuples; please go through the storm-kafka-client documentation for more details.

public List<Object> apply(ConsumerRecord<String, String> consumerRecord) {
    try {
        // Split the pipe-delimited record value into its fields;
        // '|' must be escaped because split() takes a regular expression
        String[] records = consumerRecord.value().split("\\|");
        return Arrays.asList(records);
    } catch (Exception e) {
        LOGGER.debug("Failed to parse {}. Exception: {}", consumerRecord.value(), e.getMessage());
    }
    return null;
}

Once the above step is complete, the topology can include the spoutConf created above as below.

TopologyBuilder builder = new TopologyBuilder();
Config conf = new Config();
conf.setNumWorkers(1);
builder.setSpout(KAFKA_SPOUT, new KafkaSpout(spoutConf), 1);

Reference: https://github.com/apache/storm/blob/1.x-branch/docs/storm-kafka-client.md
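To round the example out, below is a minimal submission sketch. It assumes the spoutConf, builder and conf objects created above; PrinterBolt is a hypothetical downstream bolt that simply logs incoming tuples and is not part of storm-kafka-client.

// Attach a downstream bolt to the Kafka spout and submit the topology to the cluster
builder.setBolt("PRINTER_BOLT", new PrinterBolt(), 1).shuffleGrouping(KAFKA_SPOUT);
// The topology name here is an arbitrary placeholder
StormSubmitter.submitTopology("kafka-consumer-topology", conf, builder.createTopology());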
02-09-2017
09:55 AM
11 Kudos
Problem

There are times we would want to remove a ZooKeeper znode in a secure cluster which is ACL protected, with ACLs something like below:

[zk: xyz.com:2181(CONNECTED) 0] getAcl /infra-solr
'sasl,'infra-solr : cdrwa
'world,'anyone : r
[zk: xyz.com:2181(CONNECTED) 0] rmr /test
Authentication is not valid : /test

Here only the read privilege is available to everyone else.

Solution

Go to the ZooKeeper home, for example: cd /usr/hdp/current/zookeeper-server

Run the command below:

java -cp "./zookeeper.jar:lib/slf4j-api-1.6.1.jar" org.apache.zookeeper.server.auth.DigestAuthenticationProvider super:password
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
super:password->super:DyNYQEQvajljsxlhf5uS4PJ9R28=

Copy the super:DyNYQEQvajljsxlhf5uS4PJ9R28= text, log in to Ambari, and go to the ZooKeeper config. Add the line below to the zookeeper-env template config:

export SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Dzookeeper.DigestAuthenticationProvider.superDigest=super:DyNYQEQvajljsxlhf5uS4PJ9R28="

Save and restart ZooKeeper. Launch the ZooKeeper CLI (/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server xyz.com) and addauth as below:

addauth digest super:password

Now try rmr /test -- this should work.

Note

Please be careful while running these on production systems.
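As an alternative to the CLI, here is a minimal Java sketch of the same delete, assuming the superDigest configured above; the host, credentials and path are the placeholders from this example.

import org.apache.zookeeper.ZooKeeper;

public class ZkSuperDelete {
    public static void main(String[] args) throws Exception {
        // Connect and authenticate with the super digest credentials configured above
        ZooKeeper zk = new ZooKeeper("xyz.com:2181", 30000, event -> { });
        zk.addAuthInfo("digest", "super:password".getBytes());
        zk.delete("/test", -1); // -1 skips the znode version check
        zk.close();
    }
}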
01-24-2017
09:14 AM
4 Kudos
I have created a cluster with the MYSQL_SERVER component via Cloudbreak. What is the default password for the 'hive' user that gets created? We do not specify this via the Ambari blueprint definition either.
Labels:
- Apache Ambari
- Hortonworks Cloudbreak
01-18-2017
09:25 AM
4 Kudos
@Davide Ferrari Unfortunately this is a bug. To get out of this situation you would need to stop the running topologies, clear up the nodes maintained by Storm in ZooKeeper, and restart Storm. The relevant ZooKeeper paths are below:
/storm/storms
/storm/assignments
Please note I would be sceptical of doing this in production without understanding the impact.
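For illustration only, a rough Java sketch of that cleanup, assuming Storm is stopped and the client can authenticate to ZooKeeper (for example with the super user described in a later post); the host is a placeholder, and each child znode under these paths corresponds to one topology.

import org.apache.zookeeper.ZooKeeper;

public class StormZkCleanup {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("xyz.com:2181", 30000, event -> { });
        for (String parent : new String[] {"/storm/storms", "/storm/assignments"}) {
            // Delete each topology entry under the parent path
            for (String child : zk.getChildren(parent, false)) {
                zk.delete(parent + "/" + child, -1); // -1 skips the version check
            }
        }
        zk.close();
    }
}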
01-12-2017
09:13 AM
6 Kudos
@vamsi valiveti Have we explored the Oozie coordinator? Does that not solve the problem? https://oozie.apache.org/docs/3.1.3-incubating/CoordinatorFunctionalSpec.html
01-05-2017
06:56 AM
1 Kudo
The link below does not seem to be working. What is the right link for the same? http://sequenceiq.com/cloudbreak-docs/release-1.2.3/api/
Labels:
- Hortonworks Cloudbreak