
upgrade hive schema failed

Contributor

Hi, I did a CDH upgrade from CDH 5.2 to CDH 5.10 with Cloudera Manager, and it failed at the "upgrade Hive schema" step with the error below:

Error: Specified key was too long; max key length is 1000 bytes (state=42000,code=1071)
Closing: 0: jdbc:mysql://*:3306/metastore?useUnicode=true&characterEncoding=UTF-8
org.apache.hadoop.hive.metastore.HiveMetaException: Upgrade FAILED! Metastore state would be inconsistent !!
org.apache.hadoop.hive.metastore.HiveMetaException: Upgrade FAILED! Metastore state would be inconsistent !!
	at org.apache.hive.beeline.HiveSchemaTool.doUpgrade(HiveSchemaTool.java:229)
	at org.apache.hive.beeline.HiveSchemaTool.doUpgrade(HiveSchemaTool.java:197)
	at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:468)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.io.IOException: Schema script failed, errorcode 2
	at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:358)
	at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:326)
	at org.apache.hive.beeline.HiveSchemaTool.doUpgrade(HiveSchemaTool.java:224)
	... 8 more
*** schemaTool failed ***

Could anyone help me?

Champion
I have seen this error on the HDP side: I had created the databases with the utf8 charset and the upgrade scripts did not like that. On CDH I still use the utf8 charset.

It seems that you have MySQL and the utf8 charset is in use.

My fix was to change the databases to latin1, but I was working with fresh databases and a fresh install.
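If you want to double-check what the database was created with, and switch it, something along these lines should work in MySQL (a sketch, assuming the metastore database is named metastore):

-- show the database's default charset
SELECT default_character_set_name FROM information_schema.SCHEMATA WHERE schema_name = 'metastore';

-- change the default going forward (existing tables keep the charset they were created with)
ALTER DATABASE metastore DEFAULT CHARACTER SET latin1;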

Contributor

I tried changing the character set to latin1, but it didn't work. I also tried changing the storage engine to InnoDB; that didn't work either.
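For reference, those changes are applied per table, and the result can be checked afterwards; a sketch, with TBLS standing in for whichever metastore table is affected:

-- rebuild the table under the InnoDB engine
ALTER TABLE TBLS ENGINE=InnoDB;
-- verify the Engine and Collation columns
SHOW TABLE STATUS FROM metastore LIKE 'TBLS';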

Champion
Try this out. I think you need to use the version you are trying to upgrade to, but you can only do that if the parcel for it has already been pushed to all of the nodes. This may help you get the exact Hive version needed for the next command:

schematool -dbType mysql -info

The dry run below will generate the upgrade scripts that the wizard is running. What I would try to do is create a copy of your metastore DB and then manually run the scripts against it to see exactly where it hits the error:

schematool -dbType mysql -upgradeSchemaFrom <old-hive-version> -dryRun
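Making that copy of the metastore DB is commonly done with mysqldump; a rough sketch (connection flags and credentials omitted):

# dump the live metastore, create a scratch copy, and load the dump into it
mysqldump metastore > metastore_backup.sql
mysql -e "CREATE DATABASE metastore_copy"
mysql metastore_copy < metastore_backup.sql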

I tried to find anything from Cloudera on whether you should be making a smaller jump, as that is sometimes the case, but I couldn't find anything. You could try that as well: see if you can upgrade to something lower like CDH 5.5.x first and then go to CDH 5.10.x.

It just hit me: the DB was changed but not the tables. They keep the original charset they were created with. Once you have found the problematic table(s), you can change those with this command:

ALTER TABLE tbl_name
  [[DEFAULT] CHARACTER SET charset_name]
  [COLLATE collation_name]
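Note that this form only changes the table's default charset, which applies to columns added later; converting the existing columns in place needs the CONVERT TO form. A sketch, with TBLS as a stand-in table name:

-- rewrites every character column in the table to latin1
ALTER TABLE TBLS CONVERT TO CHARACTER SET latin1 COLLATE latin1_bin;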

Contributor

With your command, it shows:

17/02/22 10:19:19 WARN conf.HiveConf: HiveConf of name hive.metastore.local does not exist
Metastore connection URL:        jdbc:derby:;databaseName=metastore_db;create=true
Metastore Connection Driver :    org.apache.derby.jdbc.EmbeddedDriver
Metastore connection User:       APP
Hive distribution version:       1.1.0
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
*** schemaTool failed ***


So we would go with Hive 1.1.0, and the old version is 0.13; I'm not sure whether that matters.


Besides, I can see from the log where it hit the error:


jdbc:mysql://10.214.1.244:3306/metastore> CREATE TABLE IF NOT EXISTS `PART_COL_STATS` (
  `CS_ID` bigint(20) NOT NULL,
  `DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
  `TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
  `PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
  `COLUMN_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
  `COLUMN_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL,
  `PART_ID` bigint(20) NOT NULL,
  `LONG_LOW_VALUE` bigint(20),
  `LONG_HIGH_VALUE` bigint(20),
  `DOUBLE_HIGH_VALUE` double(53,4),
  `DOUBLE_LOW_VALUE` double(53,4),
  `BIG_DECIMAL_LOW_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin,
  `BIG_DECIMAL_HIGH_VALUE` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin,
  `NUM_NULLS` bigint(20) NOT NULL,
  `NUM_DISTINCTS` bigint(20),
  `AVG_COL_LEN` double(53,4),
  `MAX_COL_LEN` bigint(20),
  `NUM_TRUES` bigint(20),
  `NUM_FALSES` bigint(20),
  `LAST_ANALYZED` bigint(20) NOT NULL,
  PRIMARY KEY (`CS_ID`),
  CONSTRAINT `PART_COL_STATS_FK` FOREIGN KEY (`PART_ID`) REFERENCES `PARTITIONS` (`PART_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
0: jdbc:mysql://10.214.1.244:3306/metastore> CREATE INDEX PCS_STATS_IDX ON PART_COL_STATS (DB_NAME,TABLE_NAME,COLUMN_NAME,PARTITION_NAME)
Exit code: 1

I think the CREATE INDEX statement at the end is where it hit the error.

Do you have any idea about the MySQL script? Thanks in advance.

Champion
Ok, had to get my MySQL DBA hat back out.

InnoDB's max key length is 767 bytes and MyISAM's is 1000. latin1 is one byte per character, while utf8 requires additional bytes per character. So the key that the script is trying to add to this table is, at a minimum, 1151 bytes.
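Spelling out the arithmetic from the PCS_STATS_IDX definition in your log (assuming 1 byte per latin1 character and up to 3 bytes per utf8 character in MySQL):

DB_NAME (128) + TABLE_NAME (128) + COLUMN_NAME (128) + PARTITION_NAME (767) = 1151 characters
latin1: 1151 x 1 byte = 1151 bytes
utf8:   1151 x 3 bytes = 3453 bytes

Either way that is over the 1000-byte limit in your error message.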

I just checked my CDH 5.8.2 metastore and I see the same index and the same column sizes, so I have no idea why it is an issue for you. Can you try upgrading to a lower CDH version first?

Contributor

That's probably not possible. As you know, the upgrade is almost done and I have already upgraded the HDFS metadata, and I think I can't roll back without the Cloudera Manager Enterprise version.