Member since: 05-16-2016
Posts: 76
Kudos Received: 44
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1881 | 03-10-2016 08:52 PM
07-11-2016
03:28 PM
Thanks Sunile. The replication Oozie WF is defined in the target cluster, and the WF will be run by the RMs in the target cluster, so there should be no problem then.
06-27-2016
02:30 PM
1 Kudo
@ScipioTheYounger As of now, the only supported policies in Hive are with respect to "udf" and "tables". Schema read is not supported.
06-13-2016
06:02 AM
4 Kudos
@ScipioTheYounger, as described in the document you linked, you'd want to change ha.zookeeper.acl in core-site.xml to this:

<property>
  <name>ha.zookeeper.acl</name>
  <value>sasl:nn:rwcda</value>
</property>

Then, you'd want to run the following to reformat ZooKeeper for NameNode HA, which reinitializes the znode used by NameNode HA to coordinate automatic failover:

hdfs zkfc -formatZK -force

The tricky part, as you noticed, is getting that command to authenticate with SASL. The ZooKeeper and SASL guide in the Apache documentation discusses implementation and configuration of SASL in ZooKeeper in detail. For this particular command, you can use the following procedure.

First, create a JAAS configuration file at /etc/hadoop/conf/hdfs_jaas.conf:

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/nn.service.keytab"
  principal="nn/<HOST>@EXAMPLE.COM";
};

Note that <HOST> will be different depending on the NameNode hostnames in your environment. Likewise, you'll need to change EXAMPLE.COM to the correct Kerberos realm.

Next, edit /etc/hadoop/conf/hadoop-env.sh and add the following line to enable SASL when running the zkfc command:

export HADOOP_ZKFC_OPTS="-Dzookeeper.sasl.client=true -Dzookeeper.sasl.client.username=zookeeper -Djava.security.auth.login.config=/etc/hadoop/conf/hdfs_jaas.conf -Dzookeeper.sasl.clientconfig=Client ${HADOOP_ZKFC_OPTS}"

Then, run the "hdfs zkfc -formatZK -force" command.
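As a quick sanity check afterwards, you could inspect the ACL on the HA znode from the ZooKeeper CLI. This is a minimal sketch, not part of the procedure above: "mycluster" is a placeholder for your dfs.nameservices ID and zk1.example.com:2181 for one of your ZooKeeper servers, it assumes zkCli.sh is on your PATH, and depending on your ZooKeeper version reading the ACL may require the same SASL client settings.

zkCli.sh -server zk1.example.com:2181

# Inside the ZooKeeper shell, inspect the ACL on the HA coordination znode;
# "mycluster" is a placeholder for your dfs.nameservices value:
getAcl /hadoop-ha/mycluster

# If the reformat applied the new ACL, the output should list the SASL
# identity from ha.zookeeper.acl, e.g.:
#   'sasl,'nn
#   : cdrwa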
05-18-2016
12:33 AM
1 Kudo
@ScipioTheYounger By default, only the Hive user (the user the job actually runs as) has the privileges to kill the job. If you want the user who submitted the job to be able to kill it, set hive.server2.enable.doAs=true so the job runs as the submitting user.
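For reference, the change being described would look like this in hive-site.xml; this is a minimal sketch, with only the property itself coming from the answer above and the rest of your HiveServer2 configuration assumed unchanged:

<property>
  <!-- Run queries as the submitting end user instead of the hive service
       user, so that user also has the privileges to kill their own job. -->
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
</property>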
06-13-2016
06:13 AM
@ScipioTheYounger, I expect this is similar to another question you asked.
https://community.hortonworks.com/questions/35574/switch-namenode-ha-zookeeper-access-from-no-securi.html

I'll repeat the same information here for simplicity. Change ha.zookeeper.acl in core-site.xml to this:

<property>
  <name>ha.zookeeper.acl</name>
  <value>sasl:nn:rwcda</value>
</property>

Then, you'd want to run the following to reformat ZooKeeper for NameNode HA, which reinitializes the znode used by NameNode HA to coordinate automatic failover:

hdfs zkfc -formatZK -force

The tricky part, as you noticed, is getting that command to authenticate with SASL. The ZooKeeper and SASL guide in the Apache documentation discusses implementation and configuration of SASL in ZooKeeper in detail. For this particular command, you can use the following procedure.

First, create a JAAS configuration file at /etc/hadoop/conf/hdfs_jaas.conf:

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/nn.service.keytab"
  principal="nn/<HOST>@EXAMPLE.COM";
};

Note that <HOST> will be different depending on the NameNode hostnames in your environment. Likewise, you'll need to change EXAMPLE.COM to the correct Kerberos realm.

Next, edit /etc/hadoop/conf/hadoop-env.sh and add the following line to enable SASL when running the zkfc command:

export HADOOP_ZKFC_OPTS="-Dzookeeper.sasl.client=true -Dzookeeper.sasl.client.username=zookeeper -Djava.security.auth.login.config=/etc/hadoop/conf/hdfs_jaas.conf -Dzookeeper.sasl.clientconfig=Client ${HADOOP_ZKFC_OPTS}"

Then, run the "hdfs zkfc -formatZK -force" command.
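Once the ZKFCs have been restarted after the reformat, you can also confirm that automatic failover is healthy again. A minimal sketch, not from the original answer: nn1 and nn2 are placeholders for the NameNode IDs from dfs.ha.namenodes.<nameservice> in your hdfs-site.xml.

# Check which NameNode currently holds the active role; one should
# report "active" and the other "standby".
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2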
05-17-2016
12:40 AM
Looks like you've resolved your question, but for other readers interested in IPC for Hive queries there's a new diagram in the Spark Guide: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_spark-guide/content/ch_accessing-spark-sql.html.
04-25-2016
07:03 PM
@wayne2chicago Thanks. Also, if my answer matches your expectations, could you please accept it as the answer?
04-13-2016
06:55 PM
Thanks. The backtick works.
11-08-2016
02:28 PM
@Alejandro Fernandez Does this work the same for Ranger 0.6.0, or is that specific to Ranger 0.4.0?
06-27-2016
06:09 PM
http://hortonworks.com/blog/best-practices-for-hive-authorization-using-apache-ranger-in-hdp-2-2/