Member since: 09-11-2015
Posts: 23
Kudos Received: 25
Solutions: 2
My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
| | 2511 | 06-02-2017 10:59 AM |
| | 1668 | 12-22-2016 04:20 PM |
10-29-2017
11:14 AM
Is Postgres started? If you su to user postgres and then run 'psql' and '\l' to list the databases, do you get an error?
06-11-2017
10:34 AM
Problem: A Phoenix query using PERCENTILE_CONT fails with a NullPointerException. For example:

CREATE TABLE IF NOT EXISTS P_C (
COL1 INTEGER NOT NULL PRIMARY KEY,
COL2 INTEGER
);
SELECT PERCENTILE_CONT (0.99) WITHIN GROUP (ORDER BY COL2 ASC) FROM P_C;
java.lang.NullPointerException
at org.apache.phoenix.expression.aggregator.PercentileClientAggregator.evaluate(PercentileClientAggregator.java:82)
at org.apache.phoenix.schema.KeyValueSchema.toBytes(KeyValueSchema.java:112)
at org.apache.phoenix.schema.KeyValueSchema.toBytes(KeyValueSchema.java:93)
at org.apache.phoenix.expression.aggregator.Aggregators.toBytes(Aggregators.java:109)
at org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:44)
at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)
at sqlline.BufferedRows.<init>(BufferedRows.java:37)
at sqlline.SqlLine.print(SqlLine.java:1650)
at sqlline.Commands.execute(Commands.java:833)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)

Solution: This is BUG-82044; there is no fix available yet.
06-11-2017
10:27 AM
1 Kudo
Problem: hbase hbck fails with the following messages:

zookeeper.ClientCnxn: SASL configuration failed: javax.security.auth.login.LoginException: Zookeeper client cannot authenticate using the 'Client' section of the supplied JAAS configuration: '/usr/hdp/current/hbase-client/conf/hbase_regionserver_jaas.conf' because of a RuntimeException: java.lang.SecurityException: java.io.IOException: /usr/hdp/current/hbase-client/conf/hbase_regionserver_jaas.conf (No such file or directory)
2017-06-11 10:11:36,293 ERROR [main] master.TableLockManager: Unexpected ZooKeeper error when listing children
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /hbase-secure/table-lock
at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
Solution: Identify the location of the JAAS file, make sure it exists, then set the HBASE_SERVER_JAAS_OPTS parameter accordingly:

grep -i jaas /etc/hbase/conf/hbase-env.sh
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xmx512m -Djava.security.auth.login.config=/usr/hdp/current/hbase-client/conf/hbase_master_jaas.conf $JDK_DEPENDED_OPTS"
# ls -altr /usr/hdp/current/hbase-client/conf/hbase_master_jaas.conf
-rw-r--r-- 1 hbase root 209 May 8 01:53 /usr/hdp/current/hbase-client/conf/hbase_master_jaas.conf
export HBASE_SERVER_JAAS_OPTS=-Djava.security.auth.login.config=/usr/hdp/current/hbase-client/conf/hbase_master_jaas.conf
Now re-run hbase hbck.
06-11-2017
10:07 AM
Problem: When running a query in Hive View in Ambari, rows 1 to 100 are shown, but the next page starts at row 102, skipping row 101. (The original post illustrated this with screenshots of the first and second pages.) Resolution: This is bug AMBARI-19666, which is fixed in Ambari 2.4.3 and later.
06-11-2017
09:55 AM
Performing a sqoop import using 'hive-import' into a char() or varchar() column fails with:

17/06/05 14:04:44 INFO mapreduce.Job: Task Id : attempt_1496415095220_0016_m_000002_0, Status : FAILED
Error: java.lang.ClassNotFoundException: org.apache.hadoop.hive.serde2.SerDeException
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
Working example. Create the Teradata and Hive tables as follows:

Teradata> create table td_importme_into_hive (col1 int not null primary key, col2 char(30));
Hive> create table td_import (col1 int, col2 char(30));
Execute:

sqoop import --connection-manager {connection info} \
--table td_importme_into_hive --hive-import --hive-table td_import \
-m 1 --split-by col1
This will fail because char/varchar are not supported Hive data types for a Sqoop import from Teradata. Instead, create the Hive table with the string data type in place of char() or varchar().
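For example, recreating the Hive table from the working example above with a string column (a sketch; table and column names are the ones used above) allows the same import command to succeed:

```sql
-- Hive: use string in place of char(30) / varchar(n)
DROP TABLE IF EXISTS td_import;
CREATE TABLE td_import (col1 INT, col2 STRING);
```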
06-11-2017
09:50 AM
Problem: Certain arithmetic operations return the wrong result in Phoenix. For example:

0: jdbc:phoenix:> select 2/4 ;
+----+
| 0 |
+----+
| 0 |
+----+
1 row selected (0.014 seconds)
Cause: The result is returned as an int rather than a decimal. See PHOENIX-3312.
Workaround: Provide decimal values within the SQL. For example:

0: jdbc:phoenix:> select 2.0/4.0 ;
+------+
| 0.5 |
+------+
| 0.5 |
+------+
1 row selected (0.011 seconds)
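When the operands are columns rather than literals, casting one operand to DECIMAL should have the same effect (a sketch, assuming Phoenix's standard CAST syntax; not verified against every Phoenix version):

```sql
-- Cast one operand so the division is done in decimal arithmetic
SELECT CAST(2 AS DECIMAL) / 4;
```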
06-11-2017
09:48 AM
Question: Are there any functions which allow date arithmetic with Phoenix?
Answer: Date arithmetic in Phoenix is done by adding a fraction of a day.
For example, to add 12 hours to a date:
> select now(), now() + (0.5) ;
+---------------------------------+---------------------------------+
| DATE '2017-06-04 16:55:07.989' | DATE '2017-06-05 04:55:07.989' |
+---------------------------------+---------------------------------+
| 2017-06-04 16:55:07.989 | 2017-06-05 04:55:07.989 |
+---------------------------------+---------------------------------+
To add 15 minutes you would use 15/1440 of a day, i.e.:
0: jdbc:phoenix:> select now(), now() + (0.010416666666667) ;
+---------------------------------+---------------------------------+
| DATE '2017-06-04 16:56:31.492' | DATE '2017-06-04 17:11:31.492' |
+---------------------------------+---------------------------------+
| 2017-06-04 16:56:31.492 | 2017-06-04 17:11:31.492 |
+---------------------------------+---------------------------------+
1 row selected (0.024 seconds)
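The fraction can also be written as a division rather than a pre-computed decimal, but note that, per the integer-division behaviour described in the post above, the literals must be decimal; an integer 15/1440 would evaluate to 0 (a sketch, not verified against every Phoenix version):

```sql
-- 15 minutes = 15.0/1440.0 of a day; integer 15/1440 evaluates to 0
SELECT now(), now() + (15.0 / 1440.0);
```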
See also:
RMP-9148 "Provide Date arithmetic functions for Phoenix"
06-02-2017
10:59 AM
2 Kudos
Can you check the permissions on your Ambari files, especially the repo file?
03-28-2017
08:44 AM
3 Kudos
Assuming you start with a Kerberized HDP cluster with HBase installed, first check what your HBase service principal is:

klist -kt /etc/security/keytabs/hbase.service.keytab
Keytab name: FILE:hbase.service.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
2 12/20/16 13:51:21 hbase/hdp253k1.hdp@HDP.COM
2 12/20/16 13:51:21 hbase/hdp253k1.hdp@HDP.COM
2 12/20/16 13:51:21 hbase/hdp253k1.hdp@HDP.COM
2 12/20/16 13:51:21 hbase/hdp253k1.hdp@HDP.COM
2 12/20/16 13:51:21 hbase/hdp253k1.hdp@HDP.COM
In Ambari, head to HBase -> Configs -> Advanced -> Custom hbase-site.xml and add the following new parameters, with the keytab and principal substituted for your environment. If they already exist for your cluster, set the values as indicated:

hbase.rest.authentication.type=kerberos
hbase.master.kerberos.principal=hbase/_HOST@HDP.COM
hbase.master.keytab.file=/etc/security/keytabs/hbase.service.keytab
hadoop.proxyuser.HTTP.groups=*
hadoop.proxyuser.HTTP.hosts=*
hbase.security.authorization=true
hbase.rest.authentication.kerberos.keytab=/etc/security/keytabs/spnego.service.keytab
hbase.rest.authentication.kerberos.principal=HTTP/_HOST@HDP.COM
hbase.security.authentication=kerberos
hbase.rest.kerberos.principal=hbase/_HOST@HDP.COM
hbase.rest.keytab.file=/etc/security/keytabs/hbase.service.keytab
In Ambari -> HDFS, confirm that the following are set, and if not, add them to 'Custom core-site.xml':

hadoop.proxyuser.HTTP.groups=*
hadoop.proxyuser.HTTP.hosts=*
Restart the affected HBase and HDFS services. On the command line on the HBase master, kinit with the service keytab and start the REST server:

su - hbase
kinit -kt hbase.service.keytab hbase/hdp253k1.hdp@HDP.COM
/usr/hdp/current/hbase-master/bin/hbase-daemon.sh start rest -p 17000 --infoport 17050
Test the REST server first without and then with a ticket, as follows:

# kdestroy
# klist
klist: No credentials cache found (ticket cache FILE:/tmp/krb5cc_0)
# curl --negotiate -u : 'http://hdp253k1.hdp:17000/status/cluster'
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
<title>Error 401 Authentication required</title>
# kinit -kt hbase.service.keytab hbase/hdp253k1.hdp@HDP.COM
# curl --negotiate -u : 'http://aw253k1:17000/status/cluster'
3 live servers, 0 dead servers, 10.6667 average load
3 live servers
hdp253k1.hdp:16020 1490688381983
requests=0, regions=11
heapSizeMB=120 maxHeapSizeMB=502
03-17-2017
03:58 PM
It depends on whether we are talking about HDFS or the OS level, but the answer to both is yes.

For accounts under HDFS, the home directory is determined by the value of dfs.user.home.dir.prefix; see: https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

For OS users, you can pre-create your service users before install, set ignore_groupsusers_create=true in cluster-env.xml, and then launch the installer. The service users and groups are listed here: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_command-line-installation/content/create-system-users-and-groups.html
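For reference, the HDFS prefix can be overridden in hdfs-site.xml; a sketch based on the hdfs-default.xml documentation linked above, where the default prefix is /user:

```xml
<!-- hdfs-site.xml: a user's home directory becomes <prefix>/<username> -->
<property>
  <name>dfs.user.home.dir.prefix</name>
  <value>/user</value>
</property>
```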