
Phoenix 4.6 on HDP 2.3 and 2.2

Rising Star

I would like to upgrade to the latest Phoenix version on two clusters, one running the 2.3 stack and the other 2.2. Is this supported, and what is the recommended upgrade method? Thanks

1 ACCEPTED SOLUTION


Upgrading any component outside of HDP is not recommended; however, you can upgrade to Phoenix 4.6 by replacing phoenix-server.jar on all nodes with the Phoenix 4.6 server jar and upgrading the client jar to Phoenix 4.6. Schema updates are handled automatically by the new phoenix-client jar.
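
A minimal sketch of what that swap could look like, assuming a default HDP layout; every path, version string, and artifact name below is illustrative and should be verified against your cluster (HDP 2.3 pairs with the HBase 1.1 build of Phoenix, HDP 2.2 with the HBase 0.98 build):

    # On every HBase Master/RegionServer node: back up the bundled server jar,
    # then drop in the 4.6 server jar under the same name (path is an assumption).
    cp /usr/hdp/current/hbase-regionserver/lib/phoenix-server.jar ~/phoenix-server.jar.bak
    cp phoenix-4.6.0-HBase-1.1-server.jar /usr/hdp/current/hbase-regionserver/lib/phoenix-server.jar

    # On client hosts: replace the client jar used by sqlline.py/psql.py.
    cp phoenix-4.6.0-HBase-1.1-client.jar /usr/hdp/current/phoenix-client/phoenix-client.jar

    # Restart HBase (e.g. via Ambari) so the RegionServers load the new server jar.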

Also, look at the upgrade option in bin/psql.py:

-u,--upgrade Upgrades tables specified as arguments by rewriting them with the correct row key for descending columns. If no arguments are specified, then tables that need to be upgraded will be displayed without being upgraded. Use the -b option to bypass the rewrite if you know that your data does not need to be upgraded. This would only be the case if you have not relied on auto padding for BINARY and CHAR data, but instead have always provided data up to the full max length of the column. See PHOENIX-2067 and PHOENIX-2120 for more information. Note that the phoenix.query.timeoutMs and hbase.regionserver.lease.period parameters must be set very high to prevent timeouts when upgrading.

Run the upgrade only if you have data affected by the above JIRA tickets.
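
A minimal sketch of that invocation, assuming a ZooKeeper quorum host named zk1 and a table named MY_TABLE (both placeholders); check bin/psql.py -h on your version for the exact argument order:

    # First raise the timeouts in hbase-site.xml (values are illustrative):
    #   phoenix.query.timeoutMs         = 3600000
    #   hbase.regionserver.lease.period = 3600000

    # List tables that need upgrading without rewriting them:
    bin/psql.py -u zk1

    # Rewrite a specific table:
    bin/psql.py -u zk1 MY_TABLE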


5 REPLIES

Master Mentor

HWX does not support upgrading individual components, except for Spark, and that is a recent change; you need to upgrade the whole stack. Phoenix 4.6 is not supported by HWX yet. That said, nothing stops you from upgrading Phoenix yourself. Refer to the Phoenix docs for upgrade information: https://phoenix.apache.org/upgrading.html

Phoenix 4.6 has been out for some time and 4.7 is going to be out really soon. If you don't want to lose support from HWX, I suggest you wait for the next maintenance and/or major release, which should include 4.6.


Expert Contributor

HDP follows a different folder structure and also creates various symlinks for jars and folders. If you have done this previously or have any idea how to go about it, can you please share the details?
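
For reference, on a typical HDP install the "current" directories are symlinks into the versioned stack directory, so you can trace which physical jars would need replacing; the paths below are assumptions for an HDP 2.3 layout:

    # Where the Phoenix client lives (typically a symlink chain):
    ls -l /usr/hdp/current/phoenix-client

    # Which Phoenix jars the HBase RegionServer actually picks up:
    ls -l /usr/hdp/current/hbase-regionserver/lib/ | grep -i phoenix

    # Following these links shows the physical files to back up and replace.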

Master Mentor
@Brenden Cobb

http://hortonworks.com/hdp/whats-new/

HDP 2.3 ships Phoenix 4.4, so if you want to test 4.6 you have to download it from the Phoenix site (not supported as of now):

http://apache.spinellicreations.com/phoenix/
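
A minimal sketch of fetching and unpacking a 4.6.0 binary release from that mirror; the exact directory and file names depend on the mirror layout and on the HBase line your HDP version runs, so treat them as placeholders:

    wget http://apache.spinellicreations.com/phoenix/phoenix-4.6.0-HBase-1.1/bin/phoenix-4.6.0-HBase-1.1-bin.tar.gz
    tar -xzf phoenix-4.6.0-HBase-1.1-bin.tar.gz

    # The unpacked directory contains the server jar, the client jar, and bin/psql.py.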

Expert Contributor

Adding to @Ankit Singhal's answer, we did try to replace the jars and it worked. I have a write-up here: https://superuser.blog/upgrading-apache-phoenix-hdp/