07-07-2017 03:36 PM · 5 Kudos
Hybrid Procedural SQL On Hadoop (HPL/SQL) is a tool that implements procedural SQL for Hive. Lately, many people have tried running HPL/SQL on HDP 2.6 and have run into problems. These instructions show how to work around those problems so that you can experiment with HPL/SQL.

In HDP 2.6, HPL/SQL is considered a "technical preview" and is not covered by Hortonworks support subscriptions. HPL/SQL has known limitations that may make it unsuitable for your needs, so test thoroughly before you decide HPL/SQL is right for you. These instructions require cluster changes that are not appropriate for production clusters; perform them only on development clusters or sandboxes.

We'll cover 2 ways of using HPL/SQL:
1. Using HiveServer Interactive (preferred)
2. Using an embedded metastore

In either approach, you need to edit /etc/hive2/conf/hive-env.sh and change line 30 from:

export HIVE_CONF_DIR=/usr/hdp/current/hive-server2-hive2/conf/conf.server

to:

export HIVE_CONF_DIR=${HIVE_CONF_DIR:-/usr/hdp/current/hive-server2-hive2/conf/conf.server}

Again, do not do this on a production cluster. Note that hive-env.sh is overwritten every time you restart HiveServer Interactive, so this modification will need to be repeated each time for HPL/SQL to be used.

Option 1 (Preferred): Using HPL/SQL with HiveServer Interactive
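The hive-env.sh edit described above can be rehearsed on a scratch copy before touching the real file; a minimal sketch, where the /tmp path and the sed invocation are illustrative and not part of the original instructions:

```shell
#!/bin/sh
# Work on a scratch copy so nothing under /etc/hive2 is modified.
demo=/tmp/hive-env.sh.demo
cat > "$demo" <<'EOF'
export HIVE_CONF_DIR=/usr/hdp/current/hive-server2-hive2/conf/conf.server
EOF
# Rewrite the assignment so a caller-supplied HIVE_CONF_DIR wins
# (this is the change the article asks for on line 30):
sed -i 's|^export HIVE_CONF_DIR=.*|export HIVE_CONF_DIR=${HIVE_CONF_DIR:-/usr/hdp/current/hive-server2-hive2/conf/conf.server}|' "$demo"
grep HIVE_CONF_DIR "$demo"
```

Once you are happy with the result, apply the same sed to the real /etc/hive2/conf/hive-env.sh (on a non-production cluster only), remembering that a HiveServer Interactive restart will undo it.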
First, start HiveServer Interactive through Ambari and edit hive-env.sh as described above. After editing hive-env.sh, place this configuration into /usr/hdp/current/hive-server2-hive2/conf/hplsql-site.xml:

<configuration>
<property>
<name>hplsql.conn.default</name>
<value>hiveconn</value>
</property>
<property>
<name>hplsql.conn.hiveconn</name>
<value>org.apache.hive.jdbc.HiveDriver;jdbc:hive2://ambari.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-hive2</value>
</property>
<property>
<name>hplsql.conn.convert.hiveconn</name>
<value>true</value>
</property>
</configuration>
Replace the value of hplsql.conn.hiveconn with the HiveServer2 Interactive JDBC URL shown on the Hive service page in Ambari. Then proceed to the Validation Phase below.

Option 2: Using an Embedded Metastore
To use an embedded metastore, HPL/SQL clients need access to the database backing the metastore (e.g. MySQL), and therefore need a hive-site.xml that contains credentials for that database. Ambari sets up two hive-site.xml files: one without passwords in /etc/hive2/conf, and one with passwords in /etc/hive2/conf/conf.server that is visible only to certain users. You will need the one with credentials.
Because of this security concern, use this approach only if you cannot use HiveServer Interactive for some reason.
Run these commands to clone the Hive configurations, including passwords:

sudo cp -r /etc/hive2/conf/conf.server conf
sudo chmod -R 755 conf
sudo cp /etc/hive2/2.6.1.0-129/0/hive-env.sh conf
Edit conf/hive-site.xml and change the value of hadoop.security.credential.provider.path to jceks://file/home/vagrant/conf/hive-site.jceks
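That hive-site.xml edit can also be scripted; a minimal sketch against a tiny stand-in file in /tmp (the fixture's original jceks value is invented for illustration, and the replacement path is the example path from the text, which you should substitute with your own):

```shell
#!/bin/sh
# Stand-in for the cloned conf/hive-site.xml; only the one property matters here.
f=/tmp/hive-site.xml.demo
cat > "$f" <<'EOF'
<property>
  <name>hadoop.security.credential.provider.path</name>
  <value>localjceks://file/etc/hive2/conf/conf.server/hive-site.jceks</value>
</property>
EOF
# Point the credential provider at the cloned conf directory:
sed -i 's|<value>.*hive-site.jceks</value>|<value>jceks://file/home/vagrant/conf/hive-site.jceks</value>|' "$f"
grep value "$f"
```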
Then set:

export HIVE_CONF_DIR=/home/vagrant/conf

(you will need to substitute your actual path here). Finally, place this configuration in /home/vagrant/conf/hplsql-site.xml (again, substituting your actual path):

<configuration>
<property>
<name>hplsql.conn.default</name>
<value>hiveconn</value>
</property>
<property>
<name>hplsql.conn.hiveconn</name>
<value>org.apache.hive.jdbc.HiveDriver;jdbc:hive2://</value>
</property>
<property>
<name>hplsql.conn.convert.hiveconn</name>
<value>true</value>
</property>
</configuration>

If you were considering the embedded metastore route, hopefully reading these instructions has convinced you that the HiveServer Interactive route is the better choice.

Validation Phase
To confirm your setup, run:

/usr/hdp/current/hive-server2-hive2/bin/hplsql -e 'select "hello world";'

If your setup is correct, you will see hello world printed to your console. For more information, HPL/SQL includes excellent documentation (http://www.hplsql.org/doc); consult it for most questions.
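Beyond the one-liner, you may want a smoke test that exercises HPL/SQL's procedural side. A small sketch that writes a script using the FOR ... LOOP and PRINT statements from the HPL/SQL reference (the /tmp path is illustrative; the hplsql invocation is shown as a comment since it needs the cluster):

```shell
#!/bin/sh
# Write a tiny procedural script: print three lines from a range loop.
cat > /tmp/loop_demo.sql <<'EOF'
FOR i IN 1..3 LOOP
  PRINT 'iteration ' || i;
END LOOP;
EOF
# On the cluster, run it with:
#   /usr/hdp/current/hive-server2-hive2/bin/hplsql -f /tmp/loop_demo.sql
```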
07-01-2017 03:53 PM · 2 Kudos
This failure to start HiveServer Interactive / Hive LLAP is due to a known problem in Ambari 2.5 where certain keytab files are not generated if you enable LLAP after your cluster is Kerberized. The Ambari Kerberos wizard generates keytabs for all services present at the time the wizard is run. If HiveServer Interactive is not enabled when the wizard runs, certain essential keytabs will not be present when you try to enable HiveServer Interactive / LLAP, nor are they generated at that time.

There are two options for resolving this problem:

1. Regenerate keytabs using the Ambari Kerberos wizard; refer to the Ambari documentation for this process.
2. On all cluster nodes, copy hive.service.keytab to hive.llap.zk.sm.keytab. If your keytabs are stored in the default location:

cp /etc/security/keytabs/hive.service.keytab /etc/security/keytabs/hive.llap.zk.sm.keytab
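On a cluster of any size you will want to run that copy on every node; a minimal sketch of a loop that does so (the host names are placeholders, and DRY_RUN defaults to 1 so the script only prints what it would do until you are ready):

```shell
#!/bin/sh
# Plan (or perform) the keytab copy on each node. With DRY_RUN=1 (default)
# nothing is executed remotely; the commands are written to a plan file.
DRY_RUN="${DRY_RUN:-1}"
KEYTAB_DIR=/etc/security/keytabs
PLAN=/tmp/keytab_copy_plan.txt
: > "$PLAN"
for host in node1.example.com node2.example.com; do
  cmd="sudo cp $KEYTAB_DIR/hive.service.keytab $KEYTAB_DIR/hive.llap.zk.sm.keytab"
  if [ "$DRY_RUN" = "1" ]; then
    echo "$host: $cmd" | tee -a "$PLAN"
  else
    ssh "$host" "$cmd"
  fi
done
```

Set DRY_RUN=0 (and substitute your real host list) once the printed plan looks right.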
09-06-2017 02:38 AM
Hi Carter, Tez UI doesn't seem to be working with a Kerberized cluster: https://issues.apache.org/jira/browse/TEZ-3418. Can you share any specific procedure to make it work? Also, with Tez UI 0.7.0 I couldn't find the scripts/configs.js file to update timelineBaseUrl and RMWebUrl. Are we supposed to use /var/www/html/tez-ui/config/configs.env instead?