Member since 09-18-2015
Posts: 216
Kudos Received: 208
Solutions: 49

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1067 | 09-13-2017 06:04 AM |
| | 2143 | 06-27-2017 06:31 PM |
| | 2065 | 06-27-2017 06:27 PM |
| | 9004 | 11-04-2016 08:02 PM |
| | 9226 | 05-25-2016 03:42 PM |
11-21-2015 01:48 AM
hadoop-httpfs is required to make Hue work with NameNode HA, but it looks like it is broken in HDP 2.2.8.
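For context, the piece of Hue configuration involved is the WebHDFS/HttpFS URL; a minimal hue.ini excerpt would look like this (the hostname is a placeholder, and the file location may differ in your install):

# /etc/hue/conf/hue.ini (excerpt; hostname is hypothetical)
[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      # With NameNode HA, Hue must go through HttpFS (default port 14000)
      # instead of a single NameNode's WebHDFS endpoint
      webhdfs_url=http://httpfs-host.example.com:14000/webhdfs/v1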
11-21-2015 01:39 AM
1 Kudo
This happens if you were previously on Hive 0.12 and the metastore database was created by autoCreateSchema, as @Deepesh mentioned. So, to start with, set datanucleus.autoCreateSchema to false. Contact Hortonworks Support (support.hortonworks.com) before doing this in production, and make sure you have backed up the Hive metastore database first. I have faced this issue many times in the past during upgrades, and I resolve it by performing the steps below.
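To make those two precautions concrete, here is a minimal sketch, assuming MySQL as the metastore database; the dump path is just an example:

# Back up the Hive metastore database before any schema surgery
# (example dump location; adjust host/credentials for your environment)
mysqldump -u hive -p hive > /tmp/hive_metastore_backup.sql

And in hive-site.xml, disable schema auto-creation:

<property>
  <name>datanucleus.autoCreateSchema</name>
  <value>false</value>
</property>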
###### Modify/correct table schemas and indexes
Note: These are empty tables with the wrong schema in the Hive 0.12.0 metastore schema created by auto-creation. (This example is from HDP 2.2.8 and assumes MySQL as the metastore database.)
mysql -u hive -p
Enter Password:
mysql> use hive;
Database changed
DROP INDEX PCS_STATS_IDX ON PART_COL_STATS;
DROP TABLE TAB_COL_STATS;
DROP TABLE PART_COL_STATS;
###### Recreate these tables and index
-- Table structure for table `TAB_COL_STATS`
--
CREATE TABLE IF NOT EXISTS `TAB_COL_STATS` (
`CS_ID` bigint(20) NOT NULL,
`DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`COLUMN_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`COLUMN_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`TBL_ID` bigint(20) NOT NULL,
`LONG_LOW_VALUE` bigint(20),
`LONG_HIGH_VALUE` bigint(20),
`DOUBLE_HIGH_VALUE` double(53,4),
`DOUBLE_LOW_VALUE` double(53,4),
`BIG_DECIMAL_LOW_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin,
`BIG_DECIMAL_HIGH_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin,
`NUM_NULLS` bigint(20) NOT NULL,
`NUM_DISTINCTS` bigint(20),
`AVG_COL_LEN` double(53,4),
`MAX_COL_LEN` bigint(20),
`NUM_TRUES` bigint(20),
`NUM_FALSES` bigint(20),
`LAST_ANALYZED` bigint(20) NOT NULL,
PRIMARY KEY (`CS_ID`),
CONSTRAINT `TAB_COL_STATS_FK` FOREIGN KEY (`TBL_ID`) REFERENCES `TBLS` (`TBL_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
--
-- Table structure for table `PART_COL_STATS`
--
CREATE TABLE IF NOT EXISTS `PART_COL_STATS` (
`CS_ID` bigint(20) NOT NULL,
`DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`COLUMN_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`COLUMN_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
`PART_ID` bigint(20) NOT NULL,
`LONG_LOW_VALUE` bigint(20),
`LONG_HIGH_VALUE` bigint(20),
`DOUBLE_HIGH_VALUE` double(53,4),
`DOUBLE_LOW_VALUE` double(53,4),
`BIG_DECIMAL_LOW_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin,
`BIG_DECIMAL_HIGH_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin,
`NUM_NULLS` bigint(20) NOT NULL,
`NUM_DISTINCTS` bigint(20),
`AVG_COL_LEN` double(53,4),
`MAX_COL_LEN` bigint(20),
`NUM_TRUES` bigint(20),
`NUM_FALSES` bigint(20),
`LAST_ANALYZED` bigint(20) NOT NULL,
PRIMARY KEY (`CS_ID`),
CONSTRAINT `PART_COL_STATS_FK` FOREIGN KEY (`PART_ID`) REFERENCES `PARTITIONS` (`PART_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
CREATE INDEX PCS_STATS_IDX ON PART_COL_STATS (DB_NAME,TABLE_NAME,COLUMN_NAME,PARTITION_NAME) USING BTREE;
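Before moving on, a quick sanity check that both tables and the index exist again (standard MySQL commands):

mysql> SHOW TABLES LIKE '%COL_STATS';
mysql> SHOW INDEX FROM PART_COL_STATS;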
###### Now, edit '/usr/hdp/2.2.8.0-3150/hive/scripts/metastore/upgrade/mysql/upgrade-0.12.0-to-0.13.0.mysql.sql' to change the relative paths of the following files to absolute paths; otherwise the metastore upgrade will fail because the file paths will not resolve:
- replace '016-HIVE-6386.mysql.sql' with /usr/hdp/2.2.8.0-3150/hive/scripts/metastore/upgrade/mysql/016-HIVE-6386.mysql.sql
- replace '017-HIVE-6458.mysql.sql' with /usr/hdp/2.2.8.0-3150/hive/scripts/metastore/upgrade/mysql/017-HIVE-6458.mysql.sql
- replace '018-HIVE-6757.mysql.sql' with /usr/hdp/2.2.8.0-3150/hive/scripts/metastore/upgrade/mysql/018-HIVE-6757.mysql.sql
- replace 'hive-txn-schema-0.13.0.mysql.sql' with /usr/hdp/2.2.8.0-3150/hive/scripts/metastore/upgrade/mysql/hive-txn-schema-0.13.0.mysql.sql
###### On the Hive Metastore node:
cd /usr/hdp/2.2.8.0-3150/hive/scripts/metastore/upgrade/mysql/
vi /usr/hdp/2.2.8.0-3150/hive/scripts/metastore/upgrade/mysql/upgrade-0.12.0-to-0.13.0.mysql.sql
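If you would rather script the edit than do it by hand in vi, a sed sketch along these lines can do the substitution; this assumes the upgrade script pulls in the helper files via SOURCE statements, as stock Hive releases do, so verify against your copy first:

cd /usr/hdp/2.2.8.0-3150/hive/scripts/metastore/upgrade/mysql/
# Keep a backup of the original script before rewriting it
cp upgrade-0.12.0-to-0.13.0.mysql.sql upgrade-0.12.0-to-0.13.0.mysql.sql.bak
# Prefix every sourced .sql file with the absolute directory path
sed -i 's|SOURCE \([0-9A-Za-z._-]*\.sql\)|SOURCE /usr/hdp/2.2.8.0-3150/hive/scripts/metastore/upgrade/mysql/\1|' upgrade-0.12.0-to-0.13.0.mysql.sql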
###### Upgrade the Hive Metastore database
mysql -u hive -p
Enter Password:
mysql> use hive;
Database changed
mysql> source /usr/hdp/2.2.8.0-3150/hive/scripts/metastore/upgrade/mysql/upgrade-0.12.0-to-0.13.0.mysql.sql
..............
..............
..............
mysql> exit;
###### Run schematool to complete the upgrade
/usr/hdp/2.2.8.0-3150/hive/bin/schematool -upgradeSchema -dbType mysql -userName hive -passWord gacluster -verbose
Metastore connection URL: jdbc:mysql://gpgbyhppn02.srv.gapac.com/hive
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive
Starting upgrade metastore schema from version 0.13.0 to 0.14.0
Upgrade script upgrade-0.13.0-to-0.14.0.mysql.sql
Looking for pre-0-upgrade-0.13.0-to-0.14.0.mysql.sql in /usr/hdp/2.2.8.0-3150/hive/scripts/metastore/upgrade/mysql
Connecting to jdbc:mysql://hive.srv.test.com/hive
Connected to: MySQL (version 5.1.66)
Driver: MySQL-AB JDBC Driver (version mysql-connector-java-5.1.17-SNAPSHOT ( Revision: ${bzr.revision-id} ))
Transaction isolation: TRANSACTION_READ_COMMITTED
0: jdbc:mysql://hive.srv.test.com/hiv> !autocommit on
Autocommit status: true
0: jdbc:mysql://hive.srv.test.com/hiv> SELECT 'Upgrading MetaStore schema from 0.13.0 to 0.14.0' AS ' '
+---------------------------------------------------+--+
| |
+---------------------------------------------------+--+
| Upgrading MetaStore schema from 0.13.0 to 0.14.0 |
+---------------------------------------------------+--+
1 row selected (0.015 seconds)
0: jdbc:mysql://hive.srv.test.com/hiv> CREATE TABLE IF NOT EXISTS `PART_COL_STATS` ( `CS_ID` bigint(20) NOT NULL, `DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `COLUMN_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `COLUMN_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `PART_ID` bigint(20) NOT NULL, `LONG_LOW_VALUE` bigint(20), `LONG_HIGH_VALUE` bigint(20), `DOUBLE_HIGH_VALUE` double(53,4), `DOUBLE_LOW_VALUE` double(53,4), `BIG_DECIMAL_LOW_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin, `BIG_DECIMAL_HIGH_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin, `NUM_NULLS` bigint(20) NOT NULL, `NUM_DISTINCTS` bigint(20), `AVG_COL_LEN` double(53,4), `MAX_COL_LEN` bigint(20), `NUM_TRUES` bigint(20), `NUM_FALSES` bigint(20), `LAST_ANALYZED` bigint(20) NOT NULL, PRIMARY KEY (`CS_ID`), CONSTRAINT `PART_COL_STATS_FK` FOREIGN KEY (`PART_ID`) REFERENCES `PARTITIONS` (`PART_ID`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1
No rows affected (0.002 seconds)
0: jdbc:mysql://hive.srv.test.com/hiv> CREATE INDEX PCS_STATS_IDX ON PART_COL_STATS (DB_NAME,TABLE_NAME,COLUMN_NAME,PARTITION_NAME) USING BTREE
No rows affected (0.298 seconds)
0: jdbc:mysql://hive.srv.test.com/hiv> UPDATE VERSION SET SCHEMA_VERSION='0.14.0', VERSION_COMMENT='Hive release version 0.14.0' where VER_ID=1
1 row affected (0.101 seconds)
0: jdbc:mysql://hive.srv.test.com/hiv> SELECT 'Finished upgrading MetaStore schema from 0.13.0 to 0.14.0' AS ' '
+------------------------------------------------------------+--+
| |
+------------------------------------------------------------+--+
| Finished upgrading MetaStore schema from 0.13.0 to 0.14.0 |
+------------------------------------------------------------+--+
1 row selected (0.002 seconds)
0: jdbc:mysql://hive.srv.test.com/hiv> !closeall
Closing: 0: jdbc:mysql://hive.srv.test.com/hive
beeline>
Completed upgrade-0.13.0-to-0.14.0.mysql.sql
schemaTool completed
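To double-check that the upgrade stuck, you can query the schema version recorded in the metastore (the same VERSION table updated in the log above):

mysql -u hive -p
mysql> use hive;
mysql> SELECT SCHEMA_VERSION, VERSION_COMMENT FROM VERSION;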
###### Now, start the Hive service from the Ambari UI
11-21-2015 01:02 AM
1 Kudo
It looks like Hue could not pick up the new component versions. The script that detects the versions is /usr/lib/hue/tools/fill_versions.sh. Note that the versions are only checked once, so the wrong information will not be updated automatically. Here is the workaround.
Go to /usr/lib/hue, move VERSIONS to VERSIONS.org, and then restart the Hue service:

service hue stop
cd /usr/lib/hue
mv VERSIONS VERSIONS.org
service hue start

Then you will see the following message:

[root@sandbox hue]# service hue start
Detecting versions of components...
HUE_VERSION=2.6.1-3150
HDP=2.2.8
Hadoop=2.6.0
Pig=0.14.0
Hive-Hcatalog=0.14.0
Oozie=4.1.0
Ambari-server=2.1-377
HBase=0.98.4
Knox=0.5.0
Storm=0.9.3
Falcon=0.6.0
Starting hue: [ OK ]

Once done, you will see the correct Hue UI.
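If you want to see what actually changed, compare the regenerated file against the copy you set aside; this assumes the startup step writes a fresh VERSIONS back into /usr/lib/hue, which the "Detecting versions of components..." output suggests:

# Compare the freshly generated VERSIONS with the stale copy we moved aside
diff /usr/lib/hue/VERSIONS /usr/lib/hue/VERSIONS.org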
11-20-2015 06:20 PM
Labels: Cloudera Hue
11-20-2015 06:05 PM
40 Kudos
To fix under-replicated blocks in HDFS, here is a quick procedure to use:

###### Fix under-replicated blocks

su - <$hdfs_user>
bash-4.1$ hdfs fsck / | grep 'Under replicated' | awk -F':' '{print $1}' >> /tmp/under_replicated_files
-bash-4.1$ for hdfsfile in `cat /tmp/under_replicated_files`; do echo "Fixing $hdfsfile :" ; hadoop fs -setrep 3 $hdfsfile; done
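A small variant of the same loop that avoids hard-coding the replication factor of 3 by reading the cluster's configured default instead (assumes the hdfs client is on the PATH):

# Look up the configured default replication factor
repl=$(hdfs getconf -confKey dfs.replication)
# Re-apply it to every under-replicated file collected by fsck
for hdfsfile in $(cat /tmp/under_replicated_files); do echo "Fixing $hdfsfile :"; hadoop fs -setrep "$repl" "$hdfsfile"; done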
11-12-2015 07:07 AM
3 Kudos
@Alex Miller Yes, it is supported with HDP. I have deployed it in production for HBase for a couple of large customers in the recent past.
11-10-2015 02:04 AM
4 Kudos
Below are the steps to replace a disk in a slave node or to perform maintenance on slave-node servers.
1. Decommission the DataNode and all services running on it (i.e. NodeManager, HBase RegionServer, DataNode, etc.). Refer to the docs below for this:
https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.1.0/bk_ambari-user-guide/content/decommissioning_masters_and_slaves.html
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_administration/content/ch_slave_nodes.html
2. Replace the disks or perform any other server-maintenance tasks.
3. Recommission the node.
4. Start all service components on the node.
5. Run the fsck utility to ensure that HDFS is in a healthy state (see the sketch after this list). FSCK reports usually show a few over-replicated blocks after a DataNode is recommissioned; these are automatically fixed over time.
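For step 5, the health check itself can be as simple as this (run as the HDFS superuser; the grep is just one way to narrow the output):

su - hdfs
# Full check; a healthy cluster reports "The filesystem under path '/' is HEALTHY"
hdfs fsck /
# Show only the replication-related lines from the report
hdfs fsck / | grep -iE 'under.?replicated|over.?replicated'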
11-10-2015 01:59 AM
@Neeraj Should we keep this answer or remove it? It looks like @vsomani@hortonworks.com changed the question. I have created an article out of it: http://community.hortonworks.com/articles/3131/replacing-disk-on-datanode-hosts.html
11-08-2015 09:05 AM
2 Kudos
@vsomani@hortonworks.com
The steps to replace a disk in slave nodes or to perform maintenance on slave-node servers remain the same irrespective of the Hadoop distribution. We don't have dedicated steps in our docs AFAIK, but the steps below should work.
1. Decommission the DataNode and all services running on it, i.e. NodeManager, HBase RegionServer, DataNode, etc. Below are references for this:
http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.0/bk_Ambari_Users_Guide/content/_decommissioning_masters_and_slaves_.html
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_Sys_Admin_Guides/content/ch_slave_nodes.html
2. Replace the disks or perform any other server-maintenance tasks.
3. Recommission the node.
4. Start all service components on the node.
5. Run fsck for HDFS to ensure that HDFS is in a healthy state. The FSCK report might show a few over-replicated blocks, which will automatically be fixed.
11-07-2015 02:24 AM
@Neeraj This issue is reproducible in customer environments; I have faced it a couple of times during upgrades in the past month. It happens when upgrading clusters with Hive 0.12 or earlier whose metastore was created using datanucleus.autoCreateSchema.