Member since: 09-10-2015
Posts: 32
Kudos Received: 29
Solutions: 3
My Accepted Solutions
Views | Posted
---|---
3401 | 10-04-2015 10:36 PM
1233 | 09-30-2015 04:59 PM
7538 | 09-26-2015 05:24 PM
10-24-2021
08:37 PM
How do I install the Docker version of the sandbox on a Mac for HDP Sandbox 2.6.5? I keep getting "502 Bad Gateway" when I try to open the Ambari page. Even when I do reach the Dashboard page, I end up with many red flags.
... View more
01-23-2017
10:00 PM
I did, but it looks like the jar is not being automatically copied to the other nodes when Hive is restarted.
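In the meantime, a manual workaround sketch is to push the jar into Hive's auxlib directory on each node yourself and then restart Hive; the hostnames and jar name below are hypothetical:
# Copy the UDF jar to every Hive node's auxlib directory (hosts and jar are examples)
for host in node1 node2 node3; do
  scp /tmp/my-udf.jar ${host}:/usr/hdp/current/hive-client/auxlib/
done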
... View more
11-19-2015
12:25 AM
@Andrew Watson I understood Hortonworks is going to support 1.5.1 in December, not 1.5.2; that would be the reason to use 1.5.1 instead of 1.5.2.
... View more
03-15-2017
04:03 PM
Hi @Saptak Sen, I want to do the same thing: execute programs from Eclipse against my HDP Sandbox 2.5. Please let me know, step by step, all the configurations to apply in my Windows environment. Thanks
... View more
11-21-2015
01:39 AM
1 Kudo
This happens if you were previously on Hive 0.12 and the metastore database was created by autoCreateSchema, as @Deepesh noted. So, to start with, set datanucleus.autoCreateSchema to false. If you are doing this in production, contact Hortonworks Support (support.hortonworks.com) first, and make sure you have backed up the Hive metastore database before proceeding. I have faced this issue many times during past upgrades, and I resolve it by performing the steps below.
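Two preliminary sketches, with illustrative paths and credentials. First, a minimal metastore backup with mysqldump (assuming the metastore database is named hive, as below):
# Dump the hive metastore database to a file before touching the schema
mysqldump -u hive -p hive > /tmp/hive_metastore_backup.sql
Second, the autocreation switch in hive-site.xml (on an Ambari-managed cluster, set the same property through the Hive configs in Ambari rather than editing the file by hand):
<property>
  <name>datanucleus.autoCreateSchema</name>
  <value>false</value>
</property>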
###### Modify/correct table schemas and indexes
###### Note: These are empty tables with the wrong schema in the Hive 0.12.0 metastore schema created by autocreation. (This example is from HDP 2.2.8 and assumes MySQL as the metastore database.)
mysql -u hive -p
Enter Password:
mysql> use hive;
Database changed
DROP INDEX PCS_STATS_IDX ON PART_COL_STATS;
DROP TABLE TAB_COL_STATS;
DROP TABLE PART_COL_STATS;
###### Recreate these tables and the index
-- Table structure for table `TAB_COL_STATS`
--
CREATE TABLE IF NOT EXISTS `TAB_COL_STATS` (
`CS_ID` bigint(20) NOT NULL,
`DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`COLUMN_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`COLUMN_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`TBL_ID` bigint(20) NOT NULL,
`LONG_LOW_VALUE` bigint(20),
`LONG_HIGH_VALUE` bigint(20),
`DOUBLE_HIGH_VALUE` double(53,4),
`DOUBLE_LOW_VALUE` double(53,4),
`BIG_DECIMAL_LOW_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin,
`BIG_DECIMAL_HIGH_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin,
`NUM_NULLS` bigint(20) NOT NULL,
`NUM_DISTINCTS` bigint(20),
`AVG_COL_LEN` double(53,4),
`MAX_COL_LEN` bigint(20),
`NUM_TRUES` bigint(20),
`NUM_FALSES` bigint(20),
`LAST_ANALYZED` bigint(20) NOT NULL,
PRIMARY KEY (`CS_ID`),
CONSTRAINT `TAB_COL_STATS_FK` FOREIGN KEY (`TBL_ID`) REFERENCES `TBLS` (`TBL_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
--
-- Table structure for table `PART_COL_STATS`
--
CREATE TABLE IF NOT EXISTS `PART_COL_STATS` (
`CS_ID` bigint(20) NOT NULL,
`DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`COLUMN_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`COLUMN_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`PART_ID` bigint(20) NOT NULL,
`LONG_LOW_VALUE` bigint(20),
`LONG_HIGH_VALUE` bigint(20),
`DOUBLE_HIGH_VALUE` double(53,4),
`DOUBLE_LOW_VALUE` double(53,4),
`BIG_DECIMAL_LOW_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin,
`BIG_DECIMAL_HIGH_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin,
`NUM_NULLS` bigint(20) NOT NULL,
`NUM_DISTINCTS` bigint(20),
`AVG_COL_LEN` double(53,4),
`MAX_COL_LEN` bigint(20),
`NUM_TRUES` bigint(20),
`NUM_FALSES` bigint(20),
`LAST_ANALYZED` bigint(20) NOT NULL,
PRIMARY KEY (`CS_ID`),
CONSTRAINT `PART_COL_STATS_FK` FOREIGN KEY (`PART_ID`) REFERENCES `PARTITIONS` (`PART_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
CREATE INDEX PCS_STATS_IDX ON PART_COL_STATS (DB_NAME,TABLE_NAME,COLUMN_NAME,PARTITION_NAME) USING BTREE;
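Optionally, a quick sanity check from the same mysql session that both tables exist again, and a look at the VERSION table (the same one the upgrade scripts update later) to see the currently recorded schema version:
mysql> SHOW TABLES LIKE '%COL_STATS';
mysql> SELECT SCHEMA_VERSION, VERSION_COMMENT FROM VERSION;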
###### Now, edit the file '/usr/hdp/2.2.8.0-3150/hive/scripts/metastore/upgrade/mysql/upgrade-0.12.0-to-0.13.0.mysql.sql' to change the relative paths to absolute paths for the following files; otherwise the metastore upgrade will fail because the file paths will not resolve:
replace '016-HIVE-6386.mysql.sql' with /usr/hdp/2.2.8.0-3150/hive/scripts/metastore/upgrade/mysql/016-HIVE-6386.mysql.sql
replace '017-HIVE-6458.mysql.sql' with /usr/hdp/2.2.8.0-3150/hive/scripts/metastore/upgrade/mysql/017-HIVE-6458.mysql.sql
replace '018-HIVE-6757.mysql.sql' with /usr/hdp/2.2.8.0-3150/hive/scripts/metastore/upgrade/mysql/018-HIVE-6757.mysql.sql
replace 'hive-txn-schema-0.13.0.mysql.sql' with /usr/hdp/2.2.8.0-3150/hive/scripts/metastore/upgrade/mysql/hive-txn-schema-0.13.0.mysql.sql
###### On the Hive metastore node:
cd /usr/hdp/2.2.8.0-3150/hive/scripts/metastore/upgrade/mysql/
vi /usr/hdp/2.2.8.0-3150/hive/scripts/metastore/upgrade/mysql/upgrade-0.12.0-to-0.13.0.mysql.sql
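Alternatively, instead of editing by hand in vi, a sed sketch that makes the same four replacements in place (this assumes the same HDP version directory as above; the -i.bak option keeps a backup copy of the original script):
cd /usr/hdp/2.2.8.0-3150/hive/scripts/metastore/upgrade/mysql/
sed -i.bak \
  -e "s|016-HIVE-6386.mysql.sql|$PWD/016-HIVE-6386.mysql.sql|" \
  -e "s|017-HIVE-6458.mysql.sql|$PWD/017-HIVE-6458.mysql.sql|" \
  -e "s|018-HIVE-6757.mysql.sql|$PWD/018-HIVE-6757.mysql.sql|" \
  -e "s|hive-txn-schema-0.13.0.mysql.sql|$PWD/hive-txn-schema-0.13.0.mysql.sql|" \
  upgrade-0.12.0-to-0.13.0.mysql.sql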
###### Upgrade the Hive metastore database
mysql -u hive -p
Enter Password:
mysql> use hive;
Database changed
mysql> source /usr/hdp/2.2.8.0-3150/hive/scripts/metastore/upgrade/mysql/upgrade-0.12.0-to-0.13.0.mysql.sql
..............
..............
..............
mysql>exit;
###### Then run schematool to finish the schema upgrade (0.13.0 to 0.14.0)
/usr/hdp/2.2.8.0-3150/hive/bin/schematool -upgradeSchema -dbType mysql -userName hive -passWord gacluster -verbose
Metastore connection URL: jdbc:mysql://gpgbyhppn02.srv.gapac.com/hive
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive
Starting upgrade metastore schema from version 0.13.0 to 0.14.0
Upgrade script upgrade-0.13.0-to-0.14.0.mysql.sql
Looking for pre-0-upgrade-0.13.0-to-0.14.0.mysql.sql in /usr/hdp/2.2.8.0-3150/hive/scripts/metastore/upgrade/mysql
Connecting to jdbc:mysql://hive.srv.test.com/hive
Connected to: MySQL (version 5.1.66)
Driver: MySQL-AB JDBC Driver (version mysql-connector-java-5.1.17-SNAPSHOT ( Revision: ${bzr.revision-id} ))
Transaction isolation: TRANSACTION_READ_COMMITTED
0: jdbc:mysql://hive.srv.test.com/hiv> !autocommit on
Autocommit status: true
0: jdbc:mysql://hive.srv.test.com/hiv> SELECT 'Upgrading MetaStore schema from 0.13.0 to 0.14.0' AS ' '
+---------------------------------------------------+--+
| |
+---------------------------------------------------+--+
| Upgrading MetaStore schema from 0.13.0 to 0.14.0 |
+---------------------------------------------------+--+
1 row selected (0.015 seconds)
0: jdbc:mysql://hive.srv.test.com/hiv> CREATE TABLE IF NOT EXISTS `PART_COL_STATS` ( `CS_ID` bigint(20) NOT NULL, `DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `COLUMN_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `COLUMN_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `PART_ID` bigint(20) NOT NULL, `LONG_LOW_VALUE` bigint(20), `LONG_HIGH_VALUE` bigint(20), `DOUBLE_HIGH_VALUE` double(53,4), `DOUBLE_LOW_VALUE` double(53,4), `BIG_DECIMAL_LOW_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin, `BIG_DECIMAL_HIGH_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin, `NUM_NULLS` bigint(20) NOT NULL, `NUM_DISTINCTS` bigint(20), `AVG_COL_LEN` double(53,4), `MAX_COL_LEN` bigint(20), `NUM_TRUES` bigint(20), `NUM_FALSES` bigint(20), `LAST_ANALYZED` bigint(20) NOT NULL, PRIMARY KEY (`CS_ID`), CONSTRAINT `PART_COL_STATS_FK` FOREIGN KEY (`PART_ID`) REFERENCES `PARTITIONS` (`PART_ID`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0.002 seconds)
0: jdbc:mysql://hive.srv.test.com/hiv> CREATE INDEX PCS_STATS_IDX ON PART_COL_STATS (DB_NAME,TABLE_NAME,COLUMN_NAME,PARTITION_NAME) USING BTREE
No rows affected (0.298 seconds)
0: jdbc:mysql://hive.srv.test.com/hiv> UPDATE VERSION SET SCHEMA_VERSION='0.14.0', VERSION_COMMENT='Hive release version 0.14.0' where VER_ID=1
1 row affected (0.101 seconds)
0: jdbc:mysql://hive.srv.test.com/hiv> SELECT 'Finished upgrading MetaStore schema from 0.13.0 to 0.14.0' AS ' '
+------------------------------------------------------------+--+
| |
+------------------------------------------------------------+--+
| Finished upgrading MetaStore schema from 0.13.0 to 0.14.0 |
+------------------------------------------------------------+--+
1 row selected (0.002 seconds)
0: jdbc:mysql://hive.srv.test.com/hiv> !closeall
Closing: 0: jdbc:mysql://hive.srv.test.com/hive
beeline>
Completed upgrade-0.13.0-to-0.14.0.mysql.sql
schemaTool completed
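As a quick confirmation, schematool also has an -info option that prints the schema version the metastore now records; a sketch, substituting your own credentials:
/usr/hdp/2.2.8.0-3150/hive/bin/schematool -info -dbType mysql -userName hive -passWord <password>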
###### Now, start the Hive service from the Ambari UI
... View more
10-07-2015
08:58 AM
Why are the dates in the log from 7/24/2014? Is this an old issue that hasn't been solved and you are reposting it, or is your clock incorrect? If your clock is incorrect, then you will have Kerberos issues, since time is a big factor in determining the validity of credentials. The clocks on the hosts need to be within 5 minutes of the host that contains the KDC, or else bad things will happen. If this is an old issue and you are using HDP 2.1, then I assume you are using Ambari 1.6.x. In that version of Ambari, you must have set up Kerberos manually. Since there is a lot of room for error, you should go back and make sure you didn't miss a step or incorrectly create a keytab file. Unless you create the keytab file for a particular principal using kadmin.local, the password for the account will get regenerated. This causes issues if you create multiple keytab files for the same principal: the 2nd time you generate a keytab file, the 1st keytab file becomes obsolete; the 3rd time you generate a keytab file, the 2nd keytab file becomes obsolete, and so on. Also, make sure all of the configs were set properly. An incorrectly set principal name or keytab file location will cause one or more services to fail to authenticate. Finally, check the ACLs on the keytab files to make sure that the relevant service(s) can read them. If a service is running as the local hdfs user, but the keytab file is only readable by root, then the service cannot read the keytab file and authentication will fail.
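A few shell checks for the points above (the KDC hostname, keytab path, and principal here are only examples; substitute your own):
# Compare this host's clock against the KDC host
ntpdate -q kdc.example.com
# Confirm who can read the keytab (ownership and mode)
ls -l /etc/security/keytabs/nn.service.keytab
# List the principals and key versions stored in the keytab
klist -kt /etc/security/keytabs/nn.service.keytab
# Try to authenticate the way the service would
sudo -u hdfs kinit -kt /etc/security/keytabs/nn.service.keytab nn/$(hostname -f)@EXAMPLE.COM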
... View more
04-10-2019
09:05 AM
@Saptak Sen Sorry, but do you already know how to build an RPM from the HDP open-source Git repositories? I would be glad for your reply.
... View more
10-04-2015
10:36 PM
Before Ranger was integrated with the Sandbox, dfs.permissions in the Sandbox was set to false. The reason was to allow Hue and some other use cases to create databases and tables.
After Ranger was integrated, we emulated the same behavior by creating a global policy that allows everyone. If you go through the Sandbox security tutorials, the first step is to disable the global policy (for each component). If you disable the global HDFS policy in Ranger that allows everyone, then you should see what you expect from HDFS security permissions.
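For example, once the global HDFS policy is disabled, a write by an ordinary user into a directory owned by hdfs should be rejected; the user and path below are just an illustration:
# Expected to fail with "Permission denied" after the allow-everyone policy is gone
sudo -u ambari-qa hdfs dfs -touchz /user/hdfs/should_fail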
... View more