07-07-2017
03:36 PM
5 Kudos
Hybrid Procedural SQL On Hadoop (HPL/SQL) is a tool that implements procedural SQL for Hive. Lately, many people have tried running HPL/SQL on HDP 2.6 and have hit problems. These instructions show how to work around those problems so that you can experiment with HPL/SQL. In HDP 2.6, HPL/SQL is considered a "technical preview" and is not covered by Hortonworks support subscriptions. There are known limitations in HPL/SQL that may make it unsuitable for your needs, so test thoroughly before deciding HPL/SQL is right for you. These instructions require cluster changes that are not appropriate for production clusters and should only be made on development clusters or sandboxes. We'll cover two ways of using HPL/SQL:
1. Using HiveServer Interactive (preferred)
2. Using an embedded metastore

In either approach, you need to edit /etc/hive2/conf/hive-env.sh and change line 30 from:

export HIVE_CONF_DIR=/usr/hdp/current/hive-server2-hive2/conf/conf.server

to:

export HIVE_CONF_DIR=${HIVE_CONF_DIR:-/usr/hdp/current/hive-server2-hive2/conf/conf.server}

Again, do not do this on a production cluster. Note that hive-env.sh is overwritten every time you restart HiveServer Interactive, so this modification must be repeated each time before HPL/SQL can be used.

Option 1 (Preferred): Using HPL/SQL with HiveServer Interactive:
First, start HiveServer Interactive through Ambari and edit hive-env.sh as mentioned above. After editing hive-env.sh, place this configuration into /usr/hdp/current/hive-server2-hive2/conf/hplsql-site.xml:

<configuration>
<property>
<name>hplsql.conn.default</name>
<value>hiveconn</value>
</property>
<property>
<name>hplsql.conn.hiveconn</name>
<value>org.apache.hive.jdbc.HiveDriver;jdbc:hive2://ambari.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-hive2</value>
</property>
<property>
<name>hplsql.conn.convert.hiveconn</name>
<value>true</value>
</property>
</configuration>
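Because hive-env.sh is overwritten on every HiveServer Interactive restart, both edits can be scripted so they are easy to reapply. A minimal sketch, demonstrated against scratch paths (the JDBC URL below is a placeholder; on a real node, target /etc/hive2/conf/hive-env.sh and /usr/hdp/current/hive-server2-hive2/conf, and paste the URL shown in Ambari):

```shell
# Scratch paths for demonstration; substitute the real conf locations.
ENV_FILE=/tmp/hive-env.sh
CONF_DIR=/tmp/hive2-conf
mkdir -p "$CONF_DIR"

# Stand-in for the stock line 30 of /etc/hive2/conf/hive-env.sh.
printf 'export HIVE_CONF_DIR=/usr/hdp/current/hive-server2-hive2/conf/conf.server\n' > "$ENV_FILE"

# 1. Wrap the assignment in ${VAR:-default} so hplsql can override it.
sed -i 's|^export HIVE_CONF_DIR=\(.*\)$|export HIVE_CONF_DIR=${HIVE_CONF_DIR:-\1}|' "$ENV_FILE"

# 2. Generate hplsql-site.xml from the JDBC URL shown in Ambari
#    (placeholder value below; use your cluster's URL).
JDBC_URL='jdbc:hive2://ambari.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-hive2'
cat > "$CONF_DIR/hplsql-site.xml" <<EOF
<configuration>
  <property>
    <name>hplsql.conn.default</name>
    <value>hiveconn</value>
  </property>
  <property>
    <name>hplsql.conn.hiveconn</name>
    <value>org.apache.hive.jdbc.HiveDriver;$JDBC_URL</value>
  </property>
  <property>
    <name>hplsql.conn.convert.hiveconn</name>
    <value>true</value>
  </property>
</configuration>
EOF
```

Rerunning the script after a HiveServer Interactive restart restores both files in one step.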
You will need to replace the value for hplsql.conn.hiveconn with the HiveServer2 Interactive JDBC URL, as shown on the Hive service page in Ambari. Proceed to the Validation Phase below.

Option 2: Using Embedded Metastore
To use an embedded metastore, HPL/SQL clients need access to the database backing the metastore (e.g. MySQL), so they need a hive-site.xml that contains credentials for that database. Ambari sets up two hive-site.xml files: one without passwords in /etc/hive2/conf, and one with passwords in /etc/hive2/conf/conf.server that is only visible to certain users. You will need the one with credentials.
Because of this security exposure, use this approach only if you cannot use HiveServer Interactive for some reason.
Run these commands to clone the Hive configurations, including passwords:

sudo cp -r /etc/hive2/conf/conf.server conf
sudo chmod -R 755 conf
sudo cp /etc/hive2/2.6.1.0-129/0/hive-env.sh conf
Edit conf/hive-site.xml and change the value of hadoop.security.credential.provider.path to jceks://file/home/vagrant/conf/hive-site.jceks
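This edit can also be scripted with sed. A sketch against a stand-in file (on a real node, target the cloned conf/hive-site.xml from the previous step; the /home/vagrant path matches the example above, substitute your own):

```shell
# Stand-in hive-site.xml fragment for demonstration only; on the cluster,
# operate on the cloned conf/hive-site.xml instead.
mkdir -p /tmp/conf
cat > /tmp/conf/hive-site.xml <<'EOF'
<property>
  <name>hadoop.security.credential.provider.path</name>
  <value>jceks://file/etc/hive2/conf/conf.server/hive-site.jceks</value>
</property>
EOF

# Point the credential provider at the jceks file inside the clone
# (substitute your actual cloned directory for /home/vagrant/conf).
sed -i 's|jceks://file[^<]*|jceks://file/home/vagrant/conf/hive-site.jceks|' /tmp/conf/hive-site.xml
```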
Then point HIVE_CONF_DIR at the cloned directory:

export HIVE_CONF_DIR=/home/vagrant/conf

(substitute your actual path here). Finally, place this configuration in /home/vagrant/conf/hplsql-site.xml (again, substituting your actual path):

<configuration>
<property>
<name>hplsql.conn.default</name>
<value>hiveconn</value>
</property>
<property>
<name>hplsql.conn.hiveconn</name>
<value>org.apache.hive.jdbc.HiveDriver;jdbc:hive2://</value>
</property>
<property>
<name>hplsql.conn.convert.hiveconn</name>
<value>true</value>
</property>
</configuration>

If you were considering the embedded metastore route, hopefully these instructions have convinced you that the HiveServer Interactive route is the better choice.

Validation Phase:
To confirm your setup, run:

/usr/hdp/current/hive-server2-hive2/bin/hplsql -e 'select "hello world";'

If your setup is correct, you will see hello world printed to your console. For more information, HPL/SQL includes excellent documentation (http://www.hplsql.org/doc); consult it for most questions.
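Once the hello-world check passes, a slightly larger script exercises the procedural features HPL/SQL adds on top of HiveQL. A sketch (FOR and PRINT are standard HPL/SQL statements; the hplsql path is the HDP 2.6 location used above):

```shell
# Write a small HPL/SQL script using a FOR loop and PRINT.
cat > /tmp/hello.sql <<'EOF'
FOR i IN 1..3 LOOP
  PRINT 'iteration ' || i;
END LOOP;
EOF
# On a cluster node, run it with:
#   /usr/hdp/current/hive-server2-hive2/bin/hplsql -f /tmp/hello.sql
```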
07-01-2017
03:53 PM
2 Kudos
This failure to start HiveServer Interactive / Hive LLAP is due to a known problem in Ambari 2.5 where certain keytab files are not generated if you enable LLAP after your cluster is Kerberized. The Ambari Kerberos wizard generates keytabs for all services that are present at the time the wizard is run. If HiveServer Interactive is not enabled when the wizard runs, certain essential keytabs will not be present when you try to enable HiveServer Interactive / LLAP, nor are they generated at that time. There are two options for resolving this problem:

1. Regenerate keytabs using the Ambari Kerberos wizard; refer to the Ambari documentation for this process.
2. On all cluster nodes, copy hive.service.keytab to hive.llap.zk.sm.keytab. If your keytabs are stored in the default location:

cp /etc/security/keytabs/hive.service.keytab /etc/security/keytabs/hive.llap.zk.sm.keytab
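If you take the copy route, it is worth verifying the result. A minimal sketch against a scratch directory (on a real node the directory is /etc/security/keytabs, and the commands need root on every cluster node):

```shell
# Demonstrated against a scratch directory; substitute /etc/security/keytabs
# on an actual cluster node.
KEYTAB_DIR=/tmp/keytabs
mkdir -p "$KEYTAB_DIR"
printf 'placeholder keytab contents' > "$KEYTAB_DIR/hive.service.keytab"

cp "$KEYTAB_DIR/hive.service.keytab" "$KEYTAB_DIR/hive.llap.zk.sm.keytab"

# Sanity check: the two keytabs must be byte-identical.
cmp "$KEYTAB_DIR/hive.service.keytab" "$KEYTAB_DIR/hive.llap.zk.sm.keytab"
```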
09-13-2016
08:50 AM
4 Kudos
If you're not using Tez UI to debug your Hive queries, you need to start today. Ambari installs Tez UI by default, but some people need a standalone option. Tez UI is easy to install and can go anywhere, so there's no reason to keep wondering why that Hive query won't run faster.
Why You Should Use Tez UI:

Whether in Ambari or standalone, here are some of the things you're missing if you don't use Tez UI:

1. An inventory of all queries executed, with search and filter.
2. Drill into a query and see the actual query text. Great if a BI tool is generating long-running queries.
3. Vertex Swimlane. This one is new in HDP 2.5, and it helps you quickly pinpoint the long-running parts of your query and optimize them.

How To Install Tez UI Standalone:

The Tez UI Installation Instructions give the details on installing Tez UI, but it can be a bit hard to put the pieces together. In this How-To we will cover installing Tez UI into Apache httpd on an HDP 2.5 installation. The installation instructions don't change much for different web servers, or for app servers like Tomcat. These steps assume a CentOS base OS.

1. Install Tez and httpd: yum install httpd tez
2. Create a directory for the UI: mkdir /var/www/html/tez-ui
3. Change into this new directory: cd /var/www/html/tez-ui
4. Extract the Tez UI WAR into this new directory: unzip /usr/hdp/current/tez-client/ui/tez-ui-0.7.0.2.5.0.0-1245.war
5. Open /var/www/html/tez-ui/scripts/configs.js in a text editor and modify these values. Set timelineBaseUrl to your Application Timeline Server (ATS) endpoint, usually something like http://ats.example.com:8188. Set RMWebUrl to your Resource Manager (RM) endpoint, usually something like http://rm.example.com:8088.

Configuring Tez:

You need to tell Tez to log its history to ATS using the logging service class. You should also tell Tez where the UI is located, so the Resource Manager UI can cross-link to the Tez UI. All together, these are needed in your tez-site.xml:

<property>
<description>Enable Tez to use the Timeline Server for History Logging</description>
<name>tez.history.logging.service.class</name>
<value>org.apache.tez.dag.history.logging.ats.ATSHistoryLoggingService</value>
</property>
<property>
<description>URL for where the Tez UI is hosted</description>
<name>tez.tez-ui.history-url.base</name>
<value>http://httpd.example.com:80/tez-ui/</value>
</property>

That's it. Now you can load Tez UI in your browser.

Example:

Here's an example configs.env and a screenshot of the resulting UI. Notice that timeline and rm have both been modified from the original:

[vagrant@hdp250 scripts]$ pwd
/var/www/html/tez-ui/config
[vagrant@hdp250 scripts]$ cat configs.env
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
ENV = {
hosts: {
/*
* Timeline Server Address:
* By default TEZ UI looks for timeline server at http://localhost:8188, uncomment and change
* the following value for pointing to a different address.
*/
timeline: "http://hdp250.example.com:8188",
/*
* Resource Manager Address:
* By default RM REST APIs are expected to be at http://localhost:8088, uncomment and change
* the following value to point to a different address.
*/
rm: "http://hdp250.example.com:8088",
/*
* Resource Manager Web Proxy Address:
* Optional - By default, value configured as RM host will be taken as proxy address
* Use this configuration when RM web proxy is configured at a different address than RM.
*/
//rmProxy: "http://localhost:8088",
},
/*
* Time Zone in which dates are displayed in the UI:
* If not set, local time zone will be used.
* Refer http://momentjs.com/timezone/docs/ for valid entries.
*/
//timeZone: "UTC",
/*
* yarnProtocol:
* If specified, this protocol would be used to construct node manager log links.
* Possible values: http, https
* Default value: If not specified, protocol of hosts.rm will be used
*/
//yarnProtocol: "<value>",
};

This configuration was used to generate the first screenshot in this document, above.

CORS:

On a multi-node cluster, chances are you will need to enable CORS in ATS to view data properly. If you connect and see an error about CORS not being enabled, see the Apache documentation for the relevant configurations; most importantly, you will need to enable yarn.timeline-service.http-cross-origin.enabled.

Note On Secure Clusters:

On secure clusters, your client web browser will need to be configured to talk to the Kerberized cluster. Depending on your browser, you may need to kinit on your client or use a plugin. A good way to confirm connectivity is to first check whether your browser can connect to the Resource Manager UI. If it can, you should also be able to connect to the standalone Tez UI.
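For reference, the CORS switch mentioned above lives in yarn-site.xml on the Timeline Server host. A minimal sketch (the allowed-origins wildcard is an illustrative permissive default; tighten it for production):

```xml
<property>
  <name>yarn.timeline-service.http-cross-origin.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.timeline-service.http-cross-origin.allowed-origins</name>
  <value>*</value>
</property>
```

Restart the Timeline Server after changing these properties.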