Member since: 09-24-2015
Posts: 10
Kudos Received: 16
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 10113 | 01-23-2017 02:24 AM |
| | 2398 | 11-08-2016 07:16 PM |
| | 1296 | 04-20-2016 01:41 PM |
02-09-2017 07:19 PM
6 Kudos
https://youtu.be/-HMyEpDJeGg Configuring Ambari 2.4.2 and HDP 2.5 for Kerberos using AD as the KDC. Bonus coverage: adding a new DataNode to a secured HDP cluster.

Prerequisites:
- Empty OUs are created in AD to store Hadoop principals/Hadoop nodes (HadoopServices).
- The hadoopadmin user has administrative credentials with delegated control of "Create, delete, and manage user accounts" on the above OU.

Delegate OU permissions to hadoopadmin for OU=HadoopServices. In the 'Active Directory Users and Computers' app: right-click HadoopServices > Delegate Control > Next > Add hadoopadmin > Check Names > OK > select "Create, delete, and manage user accounts" > OK.

KDC:
- KDC host: ad01.prod.hortonworks.net
- Realm name: PROD.HORTONWORKS.NET
- LDAP url: ldaps://ad01.prod.hortonworks.net
- Container DN: OU=HadoopServices,DC=prod,DC=hortonworks,DC=net
- Domains: prod.hortonworks.net

Kadmin:
- Kadmin host: ad01.prod.hortonworks.net
- Admin principal: hadoopadmin@PROD.HORTONWORKS.NET
- Admin password: xxxxxx
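As a quick sanity check before running the Ambari Kerberos wizard, you can verify the admin credentials against AD from a cluster node (a minimal sketch using the values above; assumes the krb5 client tools are installed and krb5.conf points at the realm):

```bash
# Authenticate as the delegated admin and confirm AD issues a TGT
kinit hadoopadmin@PROD.HORTONWORKS.NET   # prompts for the admin password
klist                                    # should show krbtgt/PROD.HORTONWORKS.NET@PROD.HORTONWORKS.NET
```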
01-24-2017 08:57 PM
Yes, as the kafka user.
01-23-2017 02:24 AM
When you use a script, command, or API to create a topic, an entry is created under ZooKeeper. The only user with access to ZooKeeper is the service account running Kafka (by default, kafka). Therefore, the first step toward creating a Kafka topic on a secure cluster is to run kinit, specifying the Kafka service keytab; the second step is to create the topic.

1. Run kinit, specifying the Kafka service keytab. For example:

kinit -k -t /etc/security/keytabs/kafka.service.keytab kafka/c6401.ambari.apache.org@EXAMPLE.COM

2. Create the topic. Run the kafka-topics.sh command-line tool with the following options:

/bin/kafka-topics.sh --zookeeper <hostname>:<port> --create --topic <topic-name> --partitions <number-of-partitions> --replication-factor <number-of-replicating-servers>

For example:

/bin/kafka-topics.sh --zookeeper c6401.ambari.apache.org:2181 --create --topic test_topic --partitions 2 --replication-factor 2
Created topic "test_topic".
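To verify the new topic (standard kafka-topics.sh flags, same host and topic as the example above):

```bash
# Describe the topic to confirm partition count and replica assignment
/bin/kafka-topics.sh --zookeeper c6401.ambari.apache.org:2181 --describe --topic test_topic
```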
11-08-2016 07:16 PM
@Mike Garris Yes, you can use LUKS as disk-level encryption. It encrypts the data blocks at the Linux level; it does not encrypt the data at the HDFS filesystem level. Many people have easily and successfully deployed HDFS on LUKS-encrypted disks. The preference would be to install and configure Linux and LUKS at the same time, and then just install HDFS afterward as you would with a normal HDP install.
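For reference, a minimal sketch of preparing a LUKS-encrypted data disk before the HDP install (the device /dev/sdb and mount point /hadoop/hdfs/data are hypothetical; substitute your own device and dfs.datanode.data.dir):

```bash
# Encrypt the raw disk, open the mapping, and mount it where the DataNode will store blocks
cryptsetup luksFormat /dev/sdb            # prompts for a passphrase; destroys existing data
cryptsetup luksOpen /dev/sdb hdfsdata     # creates /dev/mapper/hdfsdata
mkfs -t ext4 /dev/mapper/hdfsdata
mkdir -p /hadoop/hdfs/data
mount /dev/mapper/hdfsdata /hadoop/hdfs/data
```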
09-19-2016 03:25 PM
3 Kudos
The user that holds the Kerberos ticket will be the authenticated user. You can confirm this: run kdestroy, kinit as hr1, then klist to check, then connect with beeline -u 'jdbc:hive2://localhost:10000/default;principal=hive/securityLab02@XXX.local'. All actions will be performed as the user authenticated via Kerberos. Please see this article: https://community.hortonworks.com/questions/22897/kerberos-principal-should-have-3-parts.html
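The same check as a command sequence (hr1 and the JDBC URL come from the example above; hr1 must have a password or keytab available for kinit):

```bash
kdestroy    # drop any existing ticket
kinit hr1   # authenticate as hr1
klist       # confirm the active principal is hr1
beeline -u 'jdbc:hive2://localhost:10000/default;principal=hive/securityLab02@XXX.local'
```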
08-04-2016 04:16 PM
1 Kudo
Check your Linux box and see whether it has connectivity to your repo; yum is failing to get the package from the repo. Confirm your proxy is working as you intended (-DhttpProxyHost and -DhttpProxyPort). The Ambari URL failure might also be because your desktop is not proxied; the URL check is initiated from the browser. Please post screenshots of the errors. Run grep -i baseurl /etc/yum.repos.d/*.repo; each result is a URL that all of your Linux boxes need access to over the network. Then run yum clean all and yum repolist -v (this will show failures if you have a network issue), followed by yum install unzip.
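Collected into a checklist (standard yum commands; repo definition files end in .repo):

```bash
grep -i baseurl /etc/yum.repos.d/*.repo   # every URL listed must be reachable from all nodes
yum clean all                             # flush cached repo metadata
yum repolist -v                           # verbose output surfaces network/proxy failures
yum install unzip
```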
08-04-2016 03:56 PM
Check your Linux box and see whether it has connectivity to your repo; yum is failing to get the package from the repo. Run grep -i baseurl /etc/yum.repos.d/*.repo; each result is a URL that all of your Linux boxes need access to over the network. Then run yum clean all, yum repolist -v, and yum install unzip.
04-20-2016 01:41 PM
5 Kudos
Does Ranger create unix groups during AD/LDAP sync? No. Usersync just brings in the users and groups for you to see, so you can create Ranger policies based on the known users and groups. It does not create them; it only reads from your defined source, be it unix or AD/LDAP.

Are the unix groups (based on sync) used for authorization, or the native AD/LDAP groups? You create policies, and the policies control access. The underlying Linux filesystem still needs SSSD or winbind/Samba set up so that the same groups appear on the filesystem, and the group names need to match. Ranger usersync will not create these groups in Linux or HDFS.
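A quick way to confirm the OS and HDFS resolve the same groups that usersync pulled in (the user hr1 and group name sales are hypothetical; hdfs groups is a standard HDFS CLI subcommand):

```bash
id hr1               # groups the OS (SSSD/winbind) resolves for the user
getent group sales   # confirm a synced AD/LDAP group is visible to the OS
hdfs groups hr1      # groups HDFS resolves for the user; should match what Ranger shows
```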
02-05-2016 07:04 PM
1 Kudo
Kafka stores the broker.id in a meta.properties file created at install time, and the broker keeps that number until you delete the file. It is located under Kafka's logs.dir as meta.properties:

#Thu Feb 04 11:50:40 EST 2016
version=0
broker.id=1003

Stop Kafka, delete the file, and it should be recreated.
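A minimal sketch of the reset (the path assumes log.dirs=/kafka-logs, the HDP default; adjust to your broker's log.dirs setting, and stop the broker first, e.g. via Ambari):

```bash
cat /kafka-logs/meta.properties   # shows version and the persisted broker.id
rm /kafka-logs/meta.properties    # remove the persisted id while the broker is stopped
# On restart, Kafka writes a fresh meta.properties with the broker.id from its configuration
```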