Created on 03-15-2016 12:00 PM - edited 09-16-2022 03:09 AM
Hello,
I downloaded the Cloudera QuickStart VM for CDH 5.5 and got it up and running.
Later I tried using Apache Phoenix parcel as mentioned in the blogpost here
https://blog.cloudera.com/blog/2015/11/new-apache-phoenix-4-5-2-package-from-cloudera-labs/
However, after choosing parcel version 1.2, at the distribution step I got the following error:
"Dependency not satisfied for release CLABS_PHOENIX(4.5.2-1.clabs_phoenix1.2.0.p0.774): CDH (at least 5.5)."
Any idea why this error is coming up, since I am already on CDH 5.5?
Created 03-15-2016 12:06 PM
The reason is that you can't mix a Linux package install with a parcel install. By default, the VM uses Linux packages so that you don't have to run Cloudera Manager unless you have the memory for it. There's a button on the desktop to migrate CDH to a parcel installation. After you've run that, the Phoenix parcel install should work.
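If it helps to confirm which install type you currently have, both are quick to check from a shell on the VM (these are just the standard CDH locations, so adjust if your image differs):

# A package-based install shows the CDH components as Linux packages (RPMs)
rpm -qa | grep -i hadoop | head

# A parcel-based install has the activated CDH parcel here instead
ls -l /opt/cloudera/parcels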
Created 03-15-2016 12:07 PM
Via CLI, that script can be invoked as `sudo /home/cloudera/parcels`.
Created 03-15-2016 12:30 PM
It's a button on the desktop in the QuickStart VM. I saw you were using Docker, so I added the CLI equivalent, `sudo /home/cloudera/parcels`, above - that should run the same script, which will install the parcel that matches the QuickStart image, remove the Linux packages, etc.
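Roughly, the command-line sequence would look like this (a sketch only; the script is the one mentioned above, and it can take a while to finish):

# Convert the QuickStart image from Linux packages to parcels
sudo /home/cloudera/parcels

# When it finishes, the activated CDH parcel should show up here
ls -l /opt/cloudera/parcels

# The Phoenix parcel from the blog post is then added, distributed and activated
# through Cloudera Manager (http://<host>:7180 -> Hosts -> Parcels)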
Created 03-15-2016 12:42 PM
Thanks for the quick feedback.
I am still not sure what exactly needs to be done.
I have a docker container that is running QuickStart VM for CDH 5.5.
Now I have added a parcel, but it seems it "needs" CDH itself to be installed as a parcel rather than packages. You have given me a CLI command, thanks for that, but I am not sure what exactly I am supposed to do.
I am referring to http://www.cloudera.com/documentation/enterprise/5-5-x/topics/cm_ig_migrating_parcels_to_packages.ht... but it does not seem to be applicable to the Dockerized QuickStart VM for CDH 5.5.
So do I download the parcel and then go to the CLI and issue the command that you specified?
Can you please help me by detailing what needs to be done?
Appreciate your help.
Created 03-15-2016 10:33 PM
Hi, thanks for your feedback.
I was able to parcel-ify CDH and then able to accept, distribute, and activate the Phoenix parcel via Cloudera Manager. However, after that, when I try to run phoenix-sqlline.py,
[root@quickstart conf]# /usr/bin/phoenix-sqlline.py localhost:2181
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:localhost:2181 none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:localhost:2181
16/03/16 05:28:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/03/16 05:28:48 WARN impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-phoenix.properties,hadoop-metrics2.properties
It comes here and simply hangs.
I tried a few suggestions, such as:
a. phoenix-sqlline.py localhost:2181:/hbase
b. Setting the salt buckets to 0 (as suggested in https://community.cloudera.com/t5/Cloudera-Labs/phoenix-sqlline-not-working/td-p/28787), but no luck.
Any idea how do I proceed?
Created 03-16-2016 08:11 AM
Can you check that ZooKeeper (and HDFS and HBase, for that matter) are running in Cloudera Manager? Port 2181 is ZooKeeper, and it seems like the client isn't able to connect to it. Because running every service requires quite a lot of memory for a VM, when you migrate to Cloudera Manager or switch to parcels, it won't start every service for you. If you go to Cloudera Manager and log in, the home screen should show a table of all the services in the cluster. Make sure ZooKeeper, HDFS, and HBase are marked with a green dot. Otherwise, they may need to be started or restarted. If they're marked with a question mark, that usually means one of the "Management Services" (really, these are just parts of CM represented as separate services) needs to be restarted.
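If you'd rather check from the command line, here are a couple of quick sanity checks (ruok is a standard ZooKeeper four-letter command; a healthy server answers imok):

# Ask ZooKeeper whether it is up and healthy - expect "imok" back
echo ruok | nc localhost 2181

# Check that the HBase daemons are actually running
ps -ef | grep -E 'HMaster|HRegionServer' | grep -v grep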
Created on 03-16-2016 10:45 AM - edited 03-16-2016 10:48 AM
Hi, I can see both HDFS and ZooKeeper being green. HBase comes up for a few seconds and then reports the Region Server as "Unexpectedly Exited"; I am not sure what I need to do. Also, even when it manages to stay green for a longer duration, as soon as I issue the phoenix-sqlline.py localhost:2181 command, it turns RED, indicating that it exited. How do I proceed?
Created 03-16-2016 12:25 PM
Your best bet to figure out why it's failing is to check the log for the RegionServer role. Click on the HBase service, and down the left-hand side you'll see the RegionServer. You'll want to open that, go to the "Processes" tab, and then click "See Role Log Details". The most recent messages will be at the bottom, and my guess is the error should be in the last few entries. (I might be missing a link or tab or something in that navigation - hopefully this is clear enough for you to find it!)
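If the CM navigation is awkward, you can also read the same log from a shell on the VM. On a CM-managed QuickStart image the HBase logs normally end up under /var/log/hbase; the exact file name is a guess here, hence the glob:

# List the HBase log files, newest first
sudo ls -lt /var/log/hbase/

# Tail the RegionServer log (adjust the glob if the file is named differently)
sudo tail -n 100 /var/log/hbase/*REGIONSERVER*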
Created 03-16-2016 12:27 PM
One possibility to keep in mind is memory. The VM is a very compact environment, and it only gets tested with fairly small demo datasets. If you've loaded other data into HBase before trying to access it via Phoenix, you might need to tweak HBase's memory configuration or add more memory to the VM to get it to work as reliably as it ordinarily would.
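A couple of generic checks that can confirm a memory problem (same log-path assumption as in the previous post):

# How much memory is actually available in the VM/container?
free -m

# Look for an OutOfMemoryError in the RegionServer log, or an OOM kill in the kernel log
sudo grep -i 'OutOfMemory' /var/log/hbase/*REGIONSERVER* 2>/dev/null | tail
dmesg | grep -i 'killed process'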
Created 03-16-2016 04:43 PM
I do not see any errors in the logs; the Region Server logs are full of just INFO statements, with no ERROR, FATAL, etc. There were a couple of benign WARNs.
I tried to follow the steps mentioned in
http://search-hadoop.com/m/9UY0h2NwOxFS0iIc&subj=Re+Re+CANNOT+connect+to+Phoenix+in+CDH+5+5+1
and kept having issues with this command:
hbase(main):001:0> disable 'SYSTEM.SEQUENCE'
ERROR: Connection refused
Here is some help for this command:
Start disable of named table:
hbase> disable 't1'
hbase> disable 'ns1:t1'
Please help.
Created 03-20-2016 10:13 PM
Hello,
This is what worked for me.
I increased the Region Server heap size to 500 MiB, redeployed the client configuration, and restarted HBase.
After that, I was able to get the phoenix-sqlline.py script working.
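In case it helps anyone verifying the same fix, a quick sanity check after the restart could look like this (just a sketch; Cloudera Manager may express the heap flag in bytes rather than as -Xmx500m, and process names can vary):

# Confirm the RegionServer picked up the larger heap (look for a -Xmx value around 500 MiB)
ps -ef | grep HRegionServer | grep -o -- '-Xmx[^ ]*'

# Then try Phoenix again
/usr/bin/phoenix-sqlline.py localhost:2181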