Member since: 09-11-2015
Posts: 115
Kudos Received: 126
Solutions: 15
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3049 | 08-15-2016 05:48 PM
 | 2858 | 05-31-2016 06:19 PM
 | 2397 | 05-11-2016 03:10 PM
 | 1857 | 05-10-2016 07:06 PM
 | 4746 | 05-02-2016 06:25 PM
10-30-2015 06:21 PM
Fixed, thank you
10-30-2015 05:51 PM
Now that you mention my omission of HIVE_CONF_DIR, I realize it's simpler to override the hive.server2 properties than to duplicate the entire config directory. "Two HS2 instances on a single host" has been updated accordingly.
10-30-2015 05:34 PM
Good catch, thanks. It's corrected now.
10-30-2015 05:42 AM
2 Kudos
Glad to hear that HIVE-5312 will allow a single HS2 instance to run both modes simultaneously. In the meantime you have a couple of options:

- Two HS2 instances on a single host, different modes on different ports
- Two HS2 instances on different hosts, different modes on different or the same port

Two HS2 instances on a single host

Note: the second instance will not be managed by Ambari.

Start HS2 manually, and override the transport mode and port properties:

```
su - hive
/usr/hdp/current/hive-server2/bin/hiveserver2 \
  -hiveconf hive.metastore.uris=' ' \
  -hiveconf hive.server2.transport.mode=http \
  -hiveconf hive.server2.thrift.http.port=10001 \
  > /var/log/hive/hiveserver2.out 2> /var/log/hive/hiveserver2.log &
```

Alternatively, you may duplicate the config directory[1] and set the environment variable HIVE_CONF_DIR instead of overriding the hive.server2 properties with -hiveconf.

[1] HDP 2.3+: /etc/hive/conf/conf.server | HDP < 2.3: /etc/hive/conf.server

Two HS2 instances on different hosts

Note: using Ambari is preferable; however, you can apply the manual steps from the previous section for clusters managed by Ambari 1.x or without Ambari.

1. Add an HS2 instance to the desired host using Ambari
2. Add a new Hive config group for the host where the new HS2 instance was deployed
3. Modify the config group properties: hive.server2.transport.mode & hive.server2.thrift.http.port
4. Manage the new HS2 component using Ambari

Standard values:

- hive.server2.transport.mode=binary & hive.server2.thrift.port=10000
- hive.server2.transport.mode=http & hive.server2.thrift.http.port=10001
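To verify that each instance is listening in the expected mode, you can connect with Beeline. A minimal sketch, assuming the HTTP-mode instance was started as above and hs2-host stands in for your server's hostname:

```
# Binary (Thrift) mode instance on the default port
beeline -u "jdbc:hive2://hs2-host:10000/default"

# HTTP mode instance on port 10001; httpPath defaults to cliservice
beeline -u "jdbc:hive2://hs2-host:10001/default;transportMode=http;httpPath=cliservice"
```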
10-30-2015 04:42 AM
2 Kudos
What other services are best to colocate on a host with ZooKeeper, and how does this change as the number of hosts increases? Does it make sense not to run it on a host with HA services, since those are what it protects? If running on a NodeManager host, what adjustments should be made to the memory available for YARN containers?
Labels:
- Apache YARN
10-30-2015 04:27 AM
6 Kudos
Authorization Models applicable to the Hive CLI
Hive provides a few different authorization models plus Apache Ranger, as described in the Hive Authorization section of the HDP System Administration Guide. Hive CLI is subject to the following two models:

- Hive default (insecure): any user can run GRANT statements - DO NOT USE
- Storage-based (secure): authorization at the level of databases/tables/partitions, based on HDFS permissions (and ACLs in HDP 2.2.0+)
Frequently Asked Questions about Hive CLI Security
Can I set restrictive permissions on the hive executable (shell wrapper script) and hive-cli jar?

No, components such as Sqoop and Oozie may fail. Additionally, a user can run their own copy of the hive client from anywhere they can set execution privileges. To avoid this limitation, migrate to the Beeline CLI and utilize HiveServer2, and restrict access to the cluster through a gateway such as Knox.

Can Ranger be used to enforce permissions for Hive CLI users?

HDFS policies can be created in Ranger, and the Hive Metastore Server can enforce HDFS permissions (and ACLs in HDP 2.2+) using storage-based authorization. However, the user executing hive-cli can bypass authorization mechanisms by overriding properties on the command line, so the Ranger Hive plugin does not enforce permissions for Hive CLI users.
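For reference, storage-based authorization is enabled on the metastore side through a handful of hive-site.xml properties. A minimal sketch using the standard Hive class names; verify the exact values against your HDP release's documentation:

```
<!-- Enforce storage-based authorization in the Hive Metastore Server -->
<property>
  <name>hive.metastore.pre.event.listeners</name>
  <value>org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener</value>
</property>
<property>
  <name>hive.security.metastore.authorization.manager</name>
  <value>org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider</value>
</property>
<property>
  <name>hive.security.metastore.authenticator.manager</name>
  <value>org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator</value>
</property>
```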
Related Tutorials:
- Secure JDBC and ODBC Clients’ Access to HiveServer2 using Apache Knox
- Manage Security Policy for Hive & HBase with Knox & Ranger
10-30-2015 04:13 AM
5 Kudos
A datanode is considered stale when:

dfs.namenode.stale.datanode.interval < last contact < (2 * dfs.namenode.heartbeat.recheck-interval)

In the NameNode UI Datanodes tab, a stale datanode will stand out by having a larger Last contact value among live datanodes (also available in JMX output). When a datanode is stale, it is given the lowest priority for reads and writes.

Using default values, the namenode will consider a datanode stale when its heartbeat is absent for 30 seconds. After another 10 minutes without a heartbeat (10.5 minutes total), a datanode is considered dead.

Relevant properties include:

- dfs.heartbeat.interval - default: 3 seconds
- dfs.namenode.stale.datanode.interval - default: 30 seconds
- dfs.namenode.heartbeat.recheck-interval - default: 5 minutes
- dfs.namenode.avoid.read.stale.datanode - default: true
- dfs.namenode.avoid.write.stale.datanode - default: true

This feature was introduced by HDFS-3703.
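If you'd rather check from the command line than the NameNode UI, the same last-contact data is exposed over the NameNode's JMX endpoint. A quick sketch, where namenode-host is a placeholder and 50070 is the default HTTP port on HDP 2.x:

```
# LiveNodes in the NameNodeInfo bean lists each datanode along with its
# "lastContact" value (seconds since the last heartbeat)
curl -s "http://namenode-host:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo"
```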
10-30-2015 03:02 AM
1 Kudo
This bug is fixed in all HDP releases after (but not including) HDP 2.2.8; in particular, it is fixed in HDP 2.3.0 and later.
10-29-2015 09:10 PM
4 Kudos
container-executor.cfg

In a secure cluster, YARN uses operating system facilities to provide execution isolation for containers. Secure containers execute under the credentials of the job user, and the operating system enforces access restrictions for the container. Because the container must run as the user that submitted the application, it is recommended never to submit jobs from a superuser account (HDFS or Linux) when LinuxContainerExecutor is used.

To prevent superusers from submitting jobs, the container executor configuration (/etc/hadoop/conf/container-executor.cfg) includes the properties banned.users and min.user.id. Attempting to submit a job that violates either of these settings will result in an error indicating the AM container failed to launch:
```
INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl:
Application application_1234567890123_4567 failed 2 times due to AM
Container for appattempt_1234567890123_4567_000002 exited with exitCode: -1000
```

Followed by one of these two diagnostic messages:

```
Diagnostics: Application application_1234567890123_4567 initialization failed (exitCode=255) with output:
Requested user hdfs is not whitelisted and has id 507, which is below the minimum allowed 1000
```

```
Diagnostics: Application application_1234567890123_4567 initialization failed (exitCode=255) with output:
Requested user hdfs is banned
```

Although it is possible to modify these properties, leaving the default values is recommended for security reasons.
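For context, a typical container-executor.cfg looks something like the sketch below. The banned.users list shown is an assumption and can vary by release, so treat it as illustrative rather than authoritative:

```
# Illustrative /etc/hadoop/conf/container-executor.cfg (values are assumptions)
yarn.nodemanager.linux-container-executor.group=hadoop
banned.users=hdfs,yarn,mapred,bin
min.user.id=1000
```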
yarn-site.xml

yarn.nodemanager.linux-container-executor.group - a special group (e.g. hadoop) with executable permissions for the container executor, of which the NodeManager Unix user is a member and no ordinary application user is. If any application user belongs to this special group, security will be compromised. This special group name should be specified for this configuration property.

Learn more about YARN Secure Containers in the Apache Hadoop docs.