Member since: 10-07-2015
Posts: 21
Kudos Received: 32
Solutions: 3

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5279 | 03-16-2017 02:27 PM |
| | 4714 | 12-22-2016 10:41 PM |
| | 1349 | 10-29-2015 01:39 AM |
07-07-2017
03:36 PM
5 Kudos
Hybrid Procedural SQL On Hadoop (HPL/SQL) is a tool that implements procedural SQL for Hive. Lately, many people have investigated running HPL/SQL on HDP 2.6 and have encountered problems. These instructions tell you how to work around those problems so that you can experiment with HPL/SQL.

In HDP 2.6, HPL/SQL is considered a "technical preview" and is not supported by Hortonworks support subscriptions. There are known limitations in HPL/SQL that may make it unsuitable for your needs, so test thoroughly before you decide HPL/SQL is right for you. These instructions require cluster changes which are not appropriate for production clusters and should only be made on development clusters or sandboxes.

We'll cover two ways of using HPL/SQL:
- Using HiveServer Interactive (preferred)
- Using an embedded metastore

In either approach, you need to edit /etc/hive2/conf/hive-env.sh and change line 30 from:

export HIVE_CONF_DIR=/usr/hdp/current/hive-server2-hive2/conf/conf.server

to:

export HIVE_CONF_DIR=${HIVE_CONF_DIR:-/usr/hdp/current/hive-server2-hive2/conf/conf.server}

Again, do not do this on a production cluster. Note that hive-env.sh will be overwritten every time you restart HiveServer Interactive, so this modification will need to be repeated for HPL/SQL to be used.
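If you prefer to script that edit, here is a minimal sketch. It assumes line 30 still contains exactly the stock export shown above and that you have sudo access on the HiveServer Interactive host; back the file up first and verify the result.

# Back up, then rewrite the stock HIVE_CONF_DIR export in place (sketch only)
sudo cp /etc/hive2/conf/hive-env.sh /etc/hive2/conf/hive-env.sh.bak
sudo sed -i 's|^export HIVE_CONF_DIR=/usr/hdp/current/hive-server2-hive2/conf/conf.server$|export HIVE_CONF_DIR=${HIVE_CONF_DIR:-/usr/hdp/current/hive-server2-hive2/conf/conf.server}|' /etc/hive2/conf/hive-env.sh
# Confirm the change took effect
grep HIVE_CONF_DIR /etc/hive2/conf/hive-env.sh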
Option 1 (Preferred): Using HPL/SQL with HiveServer Interactive:

First, start HiveServer Interactive through Ambari and edit hive-env.sh as mentioned above. After editing hive-env.sh, you will need to place this configuration into /usr/hdp/current/hive-server2-hive2/conf/hplsql-site.xml:

<configuration>
<property>
<name>hplsql.conn.default</name>
<value>hiveconn</value>
</property>
<property>
<name>hplsql.conn.hiveconn</name>
<value>org.apache.hive.jdbc.HiveDriver;jdbc:hive2://ambari.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-hive2</value>
</property>
<property>
<name>hplsql.conn.convert.hiveconn</name>
<value>true</value>
</property>
</configuration>
You will need to replace the value of hplsql.conn.hiveconn with the HiveServer2 Interactive JDBC URL shown on the Hive service page in Ambari (the hostname above is only an example). Once the file is in place, proceed to the Validation Phase below.
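Before wiring the URL into hplsql-site.xml, you can optionally confirm it works on its own. A minimal sketch, assuming Beeline from the same Hive2 install and the example ZooKeeper discovery URL above (substitute your own; a Kerberized cluster may need additional URL parameters):

# Connect through the same JDBC URL HPL/SQL will use and run a trivial query
/usr/hdp/current/hive-server2-hive2/bin/beeline \
  -u 'jdbc:hive2://ambari.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-hive2' \
  -e 'select 1;'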
Option 2: Using Embedded Metastore

To use an embedded metastore, HPL/SQL clients will need access to the database backing the metastore (e.g. MySQL), so they will need a hive-site.xml that contains credentials for that database. Ambari sets up two hive-site.xml files: one without passwords in /etc/hive2/conf, and one with passwords in /etc/hive2/conf/conf.server that is only visible to certain users. You will need the one with credentials.
Because of this security problem, use this approach only if you can't use HiveServer for some reason.
Run these commands to clone the Hive configurations, including passwords:

sudo cp -r /etc/hive2/conf/conf.server conf
sudo chmod -R 755 conf
sudo cp /etc/hive2/2.6.1.0-129/0/hive-env.sh conf
Edit conf/hive-site.xml and change the value of hadoop.security.credential.provider.path to jceks://file/home/vagrant/conf/hive-site.jceks
Then set:

export HIVE_CONF_DIR=/home/vagrant/conf

(you will need to substitute your actual path here). Finally, place this configuration in /home/vagrant/conf/hplsql-site.xml (again, substituting your actual path):

<configuration>
<property>
<name>hplsql.conn.default</name>
<value>hiveconn</value>
</property>
<property>
<name>hplsql.conn.hiveconn</name>
<value>org.apache.hive.jdbc.HiveDriver;jdbc:hive2://</value>
</property>
<property>
<name>hplsql.conn.convert.hiveconn</name>
<value>true</value>
</property>
</configuration>

If you decided to look at the Embedded Metastore route, hopefully you read these instructions and decided the HiveServer Interactive route is a better choice.

Validation Phase:
To confirm your setup, run:

/usr/hdp/current/hive-server2-hive2/bin/hplsql -e 'select "hello world";'

If your setup is correct, you will see hello world printed to your console. For more information, HPL/SQL includes excellent documentation (http://www.hplsql.org/doc) and you should consult this for most questions.
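Once the hello-world check passes, you can try a small procedural script to see what HPL/SQL adds over plain HiveQL. This is only a sketch; the loop and PRINT syntax follow the HPL/SQL reference, and the /tmp path is an arbitrary choice:

# Write a tiny HPL/SQL script and run it with the same hplsql launcher
cat > /tmp/hplsql_loop.sql <<'EOF'
FOR i IN 1..3 LOOP
  PRINT 'iteration ' || i;
END LOOP;
EOF
/usr/hdp/current/hive-server2-hive2/bin/hplsql -f /tmp/hplsql_loop.sql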
07-01-2017
03:53 PM
2 Kudos
This failure to start HiveServer Interactive / Hive LLAP is due to a known problem in Ambari 2.5 where certain keytab files are not generated if you enable LLAP after your cluster is Kerberized. The Ambari Kerberos wizard generates keytabs for all services that are present at the time the wizard is run. If HiveServer Interactive is not enabled when the wizard runs, certain essential keytabs will not be present when you later try to enable HiveServer Interactive / LLAP, nor are they generated at that time.

There are two options for resolving this problem:

- Regenerate keytabs using the Ambari Kerberos wizard; refer to the Ambari documentation for this process.
- On all cluster nodes, copy hive.service.keytab to hive.llap.zk.sm.keytab. If your keytabs are stored in the default location, run the command below (a fuller sketch, including ownership and permissions, follows this list):

cp /etc/security/keytabs/hive.service.keytab /etc/security/keytabs/hive.llap.zk.sm.keytab
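If you go the manual-copy route, here is a slightly fuller sketch for each node. It assumes the default keytab directory and the usual Ambari defaults of hive:hadoop ownership and mode 400 for Hive keytabs; adjust to match your cluster:

# Copy the existing Hive service keytab and give the copy matching ownership and permissions
sudo cp /etc/security/keytabs/hive.service.keytab /etc/security/keytabs/hive.llap.zk.sm.keytab
sudo chown hive:hadoop /etc/security/keytabs/hive.llap.zk.sm.keytab
sudo chmod 400 /etc/security/keytabs/hive.llap.zk.sm.keytab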
07-01-2017
03:17 PM
The right way to think about LATERAL VIEW is that it allows a table-generating function (UDTF) to be treated as a table source, so that it can be used like any other table, including selects, joins and more.
LATERAL VIEW is often used with explode, but explode is just one UDTF of many; a full list is available in the documentation.
To take an example:
select tf1.*, tf2.*
from (select 0) t
lateral view explode(map('A',10,'B',20,'C',30)) tf1
lateral view explode(map('A',10,'B',20,'C',30)) tf2;
This results in:
| tf1.key | tf1.value | tf2.key | tf2.value |
|---|---|---|---|
| A | 10 | A | 10 |
| A | 10 | B | 20 |
| A | 10 | C | 30 |
| B | 20 | A | 10 |

(5 rows were truncated)
The thing to see here is that this query is a cross product join between the tables tf1 and tf2. The LATERAL VIEW syntax allowed me to treat them as tables. The original question used "AS" syntax, which automatically maps the generated table's columns to column aliases. In my view it is much more powerful to leave them as tables and use their fully qualified table correlation identifiers.
These tables can be used in joins as well:
select tf1.*, tf2.*
from (select 0) t
lateral view explode(map('A',10,'B',20,'C',30)) tf1
lateral view explode(map('A',10,'B',20,'C',30)) tf2
where tf1.key = tf2.key;
Now we get:
| tf1.key | tf1.value | tf2.key | tf2.value |
|---|---|---|---|
| A | 10 | A | 10 |
| B | 20 | B | 20 |
| C | 30 | C | 30 |
09-06-2017
02:38 AM
Hi Carter, Tez UI doesn't seem to be working with a Kerberized cluster. https://issues.apache.org/jira/browse/TEZ-3418 Can you share any specific procedure to make it work? Also, with Tez UI 0.7.0, I couldn't find the scripts/configs.js file to update timelineBaseUrl and RMWebUrl. Are we referring to /var/www/html/tez-ui/config/configs.env?
07-11-2017
10:26 PM
1 Kudo
If you are using Hive 2 or later (including Hive LLAP), you no longer need the dummy table; statements like the following will work:

INSERT INTO table test_array SELECT 1, array('a','b');
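For context, here is a hypothetical end-to-end check via Beeline. The column layout of test_array and the JDBC URL are assumptions; adjust both to your environment:

# Create the table, insert without any dummy source table, and read it back
beeline -u 'jdbc:hive2://localhost:10000/default' -e "
  CREATE TABLE IF NOT EXISTS test_array (id INT, vals ARRAY<STRING>);
  INSERT INTO TABLE test_array SELECT 1, array('a','b');
  SELECT * FROM test_array;
"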
10-27-2015
05:41 PM
@carter@hortonworks.com
Yes, the only way it worked was when I used the -D settings. However, I have since been told that in order for Hadoop to use the cert, we should import it into $JAVA_HOME/jre/lib/security/cacerts instead of /etc/pki/java/cacerts, which we thought was the default. So apparently, if you are using any trustStore besides $JAVA_HOME/jre/lib/security/cacerts, you need the -D settings. I haven't had a chance to test this, as the folks I am working with got it to work with the -D settings using /etc/java/cacerts and do not want to make any further changes.
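For anyone following along, a sketch of the import approach (keytool ships with the JDK; the alias, certificate path, and the default changeit store password are assumptions to adjust for your environment):

# Import the CA certificate into the JVM's default truststore
sudo keytool -importcert -alias my-ca -file /path/to/ca-cert.pem \
  -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit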