Member since: 02-22-2016
Posts: 11
Kudos Received: 32
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1266 | 03-28-2017 07:59 PM
11-15-2018 09:57 PM
Did you try without checking the "Enable SSL" option?
08-31-2017 10:31 PM
5 Kudos
Note: The ODBC connector is supported for the Windows version of Tableau only.
Download the Hortonworks driver from the link below:
https://hortonworks.com/downloads/#addons
Choose Hortonworks ODBC Driver for Apache Hive (v2.1.10) from the Hortonworks Data Platform Add-Ons section (tested on Windows 64-bit). Download and run/install the driver.
Create a Data Source Name in ODBC Administrator:
1. Open Data Sources (ODBC) from Control Panel/Administrative Tools.
2. Highlight "Sample Hortonworks Hive DSN" on the System DSN tab and click the "Configure" button.
3. Enter the following info:
Host(s): <HOST_NAME>
Port: <PORT>
Database: Default
Mechanism: select "User Name Password" from the dropdown
User name: <USER_NAME>
Password: <PASSWORD>
Delegation UID:
Thrift Transport: select "HTTP" from the dropdown
4. Click on HTTP Options:
HTTP Path: gateway/default/hive
5. Click on SSL Options:
Check the "Enable SSL" checkbox.
Check the "Allow Common name host name mismatch" checkbox. This option specifies whether a CA-issued SSL certificate name must match the host name of the Hive server.
Trusted Certificates: the full path of the .pem file containing trusted CA certificates for verifying the server when using SSL.
6. Click "Test" at the bottom of the dialog box to test the connection. Once successful, click the "OK" button.
From Tableau:
1. On the Connect screen, click "More ..." in the "To a Server" section and choose "Other Databases (ODBC)".
2. DSN: choose "Sample Hortonworks Hive DSN" from the dropdown and click the "Connect" button. (Once connected, the "Connection Attributes" section will be enabled.)
3. In the Connection Attributes section, enter:
Server: <HOST_NAME>
Port: <PORT>
Database: Default
Username: <USER_NAME>
Password: <PASSWORD>
4. Click "Sign In".
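For reference, the same DSN can be expressed as key/value pairs in the style of the driver's odbc.ini entries. This is an illustrative sketch only; the key names are taken from the Simba-based Hortonworks Hive ODBC driver documentation and should be verified against the driver's install guide:
[Sample Hortonworks Hive DSN]
# Assumed keys for the Hortonworks (Simba) Hive ODBC driver
Driver=Hortonworks Hive ODBC Driver
Host=<HOST_NAME>
Port=<PORT>
Schema=default
HiveServerType=2
# AuthMech 3 = User Name and Password
AuthMech=3
UID=<USER_NAME>
PWD=<PASSWORD>
# ThriftTransport 2 = HTTP (through the Knox gateway path)
ThriftTransport=2
HTTPPath=gateway/default/hive
SSL=1
AllowHostNameCNMismatch=1
TrustedCerts=<FULL_PATH_TO_PEM_FILE>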
03-28-2017 07:59 PM
1 Kudo
@skothari It is a Ranger Kafka plugin limitation. Please take a look at this article: https://community.hortonworks.com/content/kbentry/91546/how-to-auto-create-topics-in-ranger-enabled-kafka.html
03-28-2017 05:32 PM
4 Kudos
In the current version of Kafka, when the Kafka cluster is enabled with the Ranger authorizer (authorizer.class.name=org.apache.ranger.authorization.kafka.authorizer.RangerKafkaAuthorizer), it is not possible to auto-create topics as a non-super user, even if the auto-create topic flag is set to true. In other words, Kafka topic-creation authorization cannot be done at the topic level.
For example, create a Ranger policy as below: topic AutoCreateTopic_Test* with all permissions granted to a non-super user. Then run the command-line Kafka producer script against a non-existing topic:
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list < > --topic AutoCreateTopic_Test01 --security-protocol PLAINTEXTSASL
[2017-02-24 19:10:30,232] WARN Error while fetching metadata [{TopicMetadata for topic test4 -> No partition metadata for topic test4 due to kafka.common.TopicAuthorizationException}] for topic [test4]: class kafka.common.TopicAuthorizationException (kafka.producer.BrokerPartitionInfo)
[2017-02-24 19:10:30,706] ERROR Error in handling batch of 1 events (kafka.producer.async.ProducerSendThread)
kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:91)
at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:105)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:88)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:68)
at scala.collection.immutable.Stream.foreach(Stream.scala:547)
at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:67)
at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:45)
This is because topic creation is currently a cluster-level privilege; it therefore requires access privileges over all topics in the cluster, i.e. *.
Workaround: A simple workaround is to add a Ranger policy with create permission over all topics in the cluster, i.e. *. Create a new Ranger policy as shown above. PlaceHolderTopicName, as the name suggests, is just a placeholder topic name used to distinguish this Ranger policy from the default Ranger policy associated with topic "*". Add the users and groups and grant only the create permission. Once the policy is refreshed, users in this policy should be able to auto-create topics. An illustrative sketch of creating such a policy through the Ranger REST API is shown below.
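For illustration only (not part of the original steps): the same policy can be created through the Ranger public REST API. The Ranger host, admin credentials, Kafka service name (cl1_kafka) and user below are placeholders:
curl -u admin:<PASSWORD> -H "Content-Type: application/json" -X POST \
  http://<RANGER_HOST>:6080/service/public/v2/api/policy \
  -d '{
    "service": "cl1_kafka",
    "name": "PlaceHolderTopicName",
    "resources": { "topic": { "values": ["*"], "isExcludes": false, "isRecursive": false } },
    "policyItems": [ {
      "users": ["<USER_NAME>"],
      "groups": [],
      "accesses": [ { "type": "create", "isAllowed": true } ],
      "delegateAdmin": false
    } ]
  }'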
Roadmap items: Please find the Apache Kafka JIRAs related to addressing this limitation in future Kafka releases:
https://issues.apache.org/jira/browse/KAFKA-2945
https://issues.apache.org/jira/browse/KAFKA-2946
References: https://cwiki.apache.org/confluence/display/RANGER/Kafka+Plugin#KafkaPlugin-WhydoIhavetograntcreateaccesstoalltopics(via*)toallowforauto-creationtoworkforproducersand/orconsumers?
04-22-2016 04:57 PM
One way is to generate a hash from the date and one or two primary key columns and use it as the partition column, which should reduce the number of partitions. A rough HiveQL sketch follows.
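A minimal sketch, assuming a hypothetical source table events_raw, a target table events_part, and 128 hash buckets (all names and the bucket count are illustrative, not from the original question):
-- enable dynamic partitioning for the insert
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
-- hypothetical target table partitioned by a derived hash bucket
CREATE TABLE events_part (
  event_date STRING,
  customer_id BIGINT,
  payload STRING
)
PARTITIONED BY (part_key INT);
-- derive a bounded partition key from the date and one key column
INSERT OVERWRITE TABLE events_part PARTITION (part_key)
SELECT event_date, customer_id, payload,
       pmod(hash(event_date, customer_id), 128) AS part_key
FROM events_raw;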
02-22-2016 10:34 PM
22 Kudos
Kerberos cross-realm trust for distcp
This article demonstrates how to set up cross-realm trust for distcp between two secure HDP clusters, each with its own Kerberos realm (KDC).
Prerequisites
Both HDP clusters must be running JDK 1.7 or higher. JDK 1.6 has some known issues.
Let's assume the first HDP (DEV) cluster realm is HDPDEV.DEV.COM.
Let's assume the second HDP (QA) cluster realm is HDPQA.QA.COM.
Step 1:
To set up cross-realm trust between HDPDEV.DEV.COM and HDPQA.QA.COM, for example so that a client of realm HDPDEV.DEV.COM can access a service in realm HDPQA.QA.COM, both realms must share a key for the principal krbtgt/HDPQA.QA.COM@HDPDEV.DEV.COM, and both keys must have the same key version number (kvno) associated with them.
Cross-realm trust is unidirectional by default. So for clients in HDPQA.QA.COM to also have access to services in HDPDEV.DEV.COM, both realms must share a key for the principal krbtgt/HDPDEV.DEV.COM@HDPQA.QA.COM.
Add both krbtgt principals on both clusters:
#HDP DEV cluster
kadmin.local: addprinc krbtgt/HDPQA.QA.COM@HDPDEV.DEV.COM
kadmin.local: addprinc krbtgt/HDPDEV.DEV.COM@HDPQA.QA.COM
#HDP QA cluster
kadmin.local: addprinc krbtgt/HDPQA.QA.COM@HDPDEV.DEV.COM
kadmin.local: addprinc krbtgt/HDPDEV.DEV.COM@HDPQA.QA.COM
Note: On both clusters, verify that both entries have matching kvno and encryption types using kadmin.local: getprinc <principal_name>.
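For example, the entries can be checked quickly from the shell on each KDC (realm names as used in this article); confirm the kvno and key/enctype lines match on both sides:
# run on both the DEV and QA KDCs
kadmin.local -q "getprinc krbtgt/HDPQA.QA.COM@HDPDEV.DEV.COM"
kadmin.local -q "getprinc krbtgt/HDPDEV.DEV.COM@HDPQA.QA.COM"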
Step 2:
The next step is to set the hadoop.security.auth_to_local parameter in both clusters. This parameter maps principals to local users. One issue here is that the SASL RPC client requires that the remote server's Kerberos principal match the server principal in its own configuration. Therefore, the same principal name must be assigned to the applicable NameNodes in the source and the destination cluster. For example, if the Kerberos principal name of the NameNode in the source cluster is nn/host1@HDPDEV.DEV.COM, the Kerberos principal name of the NameNode in the destination cluster must be nn/host2@HDPQA.QA.COM, rather than nn2/host2@realm.
In the DEV cluster add:
<property>
<name>hadoop.security.auth_to_local</name>
<value>
RULE:[2:$1@$0](nn@.*HDPQA.QA.COM)s/.*/hdfs/
RULE:[2:$1@$0](rm@.*HDPQA.QA.COM)s/.*/yarn/
RULE:[1:$1@$0](.*@HDPQA.QA.COM)s/@.*//
RULE:[2:$1@$0](.*@HDPQA.QA.COM)s/@.*//
</value>
</property>
In the QA cluster add:
<property>
<name>hadoop.security.auth_to_local</name>
<value>
RULE:[2:$1@$0](nn@.*HDPDEV.DEV.COM)s/.*/hdfs/
RULE:[2:$1@$0](rm@.*HDPDEV.DEV.COM)s/.*/yarn/
RULE:[1:$1@$0](.*@HDPDEV.DEV.COM)s/@.*//
RULE:[2:$1@$0](.*@HDPDEV.DEV.COM)s/@.*//
</value>
</property>
To test the mapping, use org.apache.hadoop.security.HadoopKerberosName.
For example,
[root@localhost]$ hadoop org.apache.hadoop.security.HadoopKerberosName nn/localhost@HDPDEV.DEV.COM
Name: nn/localhost@HDPDEV.DEV.COM to hdfs
Step 3:
Configure the trust relationship. There are two ways to do this: one is to configure a shared hierarchy of names, which is the default and simpler method; the other is to explicitly define the capaths section in the krb5.conf file, which is more involved but more flexible.
Configure paths in krb5.conf:
Configure the [capaths] section of /etc/krb5.conf so that clients which have credentials for one realm can look up which realm is next in the chain, eventually leading to being able to authenticate to servers.
Edit the /etc/krb5.conf files on both clusters (all nodes) to map the domain to the realm.
For example,
In the DEV cluster:
[capaths]
HDPDEV.DEV.COM = {
HDPQA.QA.COM = .
}
In the QA cluster:
[capaths]
HDPQA.QA.COM = {
HDPDEV.DEV.COM = .
}
The value “.” is used if there are no intermediate realms.
Step 4:
Set the dfs.namenode.kerberos.principal.pattern parameter in hdfs-site.xml to *. This is a client-side regex that controls which realms the client is allowed to authenticate with.
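In hdfs-site.xml on the client side this looks like:
<property>
<name>dfs.namenode.kerberos.principal.pattern</name>
<value>*</value>
</property>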
If this parameter is not set, hdfs commands against the remote cluster fail with an error like:
java.io.IOException: Failed on local exception: java.io.IOException: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: nn/hdm1.qa.com@HDP.DEV.COM; Host Details : local host is: "sdw1.dev.com/10.181.22.130"; destination host is: "hdm1.qa.com":8020;
Step 5:
Test that the trust is set up by running hdfs commands from the DEV cluster against the QA cluster, and vice versa.
Example:
On the DEV cluster, kinit as userA@HDPDEV.DEV.COM and then issue hdfs commands:
hdfs dfs -ls hdfs://<NameNode_FQDN_forQACluster>:8020/tmp
hdfs dfs -put /tmp/test.txt hdfs://<NameNode_FQDN_forQACluster>:8020/tmp
Do a similar test on the QA cluster.
Step 6:
Run distcp to copy a file from the DEV cluster to the QA cluster:
hadoop distcp hdfs://<NameNode_FQDN_forDEVCluster>:8020/tmp/test.txt hdfs://<NameNode_FQDN_forQACluster>:8020/tmp/