Member since
03-23-2017
41
Posts
5
Kudos Received
3
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1215 | 01-19-2018 08:05 AM |
| | 5963 | 12-01-2017 06:46 PM |
| | 5630 | 04-19-2017 06:32 AM |
12-01-2017
06:46 PM
1 Kudo
You may want to increase the value of `phoenix.coprocessor.maxServerCacheTimeToLiveMs`, the maximum living time (in milliseconds) of server caches. A cache entry expires after this amount of time has passed since its last access. Consider adjusting this parameter when a server-side IOException ("Could not find hash cache for joinId") occurs. Warnings like "Earlier hash cache(s) might have expired on servers" can also be a sign that this value should be increased. http://phoenix.apache.org/tuning.html
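As a sketch, the property is set in `hbase-site.xml` on the region servers; the 60000 value below is just an illustrative increase, not a recommendation for any particular workload:

```xml
<!-- hbase-site.xml on the region servers; value is in milliseconds.
     60000 is an illustrative bump; check the default in your Phoenix
     version against http://phoenix.apache.org/tuning.html -->
<property>
  <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
  <value>60000</value>
</property>
```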
11-23-2017
02:11 AM
Thanks all for the help! I carried out the steps as described in the question. Please include the `chown -R` operation before starting the services, as mentioned by @rmaruthiyodan. We did it with roughly 5 minutes of downtime, though; if anyone carries this out without downtime / in a rolling fashion, please let the community know.
11-21-2017
07:34 AM
@Karthik Palanisamy I am trying to figure out a rolling approach, and in my comment above I suggest repeating steps 2-4 for each ZooKeeper node one after the other. Is that the correct way to go about this?
11-21-2017
06:54 AM
Thanks @rmaruthiyodan , So let me confirm the steps:
1. Change the dataDir config in Ambari (dataLogDir is not separately configured).
2. Shut down the ZooKeeper node.
3. Copy the contents to the new directory and change the ownership of the folder (myid and version-2/).
4. Start ZooKeeper.
5. Repeat steps 2-4 for the other two ZooKeeper nodes.

Yes, we have 3 ZooKeeper nodes. I wanted to ask whether the above steps can be executed while HBase is running. (They should be.)
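The per-node part of the steps above might look roughly like this; the paths, the `zookeeper:hadoop` ownership, and the service commands are assumptions to adapt to your layout and management tool (Ambari normally handles the start/stop):

```shell
# Hedged sketch of migrating one ZooKeeper node's dataDir.
OLD_DIR=/dfs/1/hadoop/zookeeper
NEW_DIR=/usr/lib/zookeeper

# 1. Update dataDir in Ambari first, then stop ZooKeeper on this node.
# 2. Copy the data, preserving myid and version-2/ (cp -a keeps perms/times).
mkdir -p "$NEW_DIR"
cp -a "$OLD_DIR/." "$NEW_DIR/"
# 3. Make sure the zookeeper user owns the new directory.
chown -R zookeeper:hadoop "$NEW_DIR"
# 4. Start ZooKeeper on this node, then repeat for the other nodes.
```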
11-20-2017
07:42 AM
1 Kudo
Hello, currently our ZooKeeper dataDir is at `/dfs/1/hadoop/zookeeper/`, but unfortunately `/dfs/1/` is the HDFS disk mount. So in the current setup we cannot swap disks for HDFS, because ZooKeeper is also using that disk. We want to move the ZooKeeper dataDir somewhere else, such as `/usr/lib/zookeeper`, but I am not quite sure what steps need to be taken. Here's what I think should work:

1. Create the new directory.
2. Stop ZooKeeper and HBase.
3. Copy the data from the old ZooKeeper dataDir to the new dataDir.
4. Change the ZooKeeper config to point dataDir to the new directory.
5. Start ZooKeeper and HBase.

What I'm unsure of is whether copying the data is the correct way to do this. We do not have a staging cluster, hence seeking help from the community 🙂 Much thanks! Sanket.
11-18-2017
06:02 AM
Adding to @Ankit Singhal's answer, we did try to replace the jars and it worked. I have a write-up here: https://superuser.blog/upgrading-apache-phoenix-hdp/
11-18-2017
05:59 AM
I managed to upgrade it to 4.10, as 4.7 had some serious bugs that made it unusable. https://superuser.blog/upgrading-apache-phoenix-hdp/ UPDATE: we were on HDP 2.5 and the bug was related to the count() function. It may have been resolved in the newer patched version of Phoenix available with the latest HDP 2.6.
05-30-2017
08:04 AM
Copied from the ZooKeeper docs:
tickTime: the length of a single tick, which is the basic time unit used by ZooKeeper, as measured in milliseconds. It is used to regulate heartbeats, and timeouts. For example, the minimum session timeout will be two ticks.
So the minimum session timeout is twice the tickTime and the maximum is 20 times the tickTime. And yes, I agree that for most systems the default of 40 s should suffice, but in exceptional cases one needs to increase it (as also recommended in the HBase book: http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired).
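The arithmetic above can be sketched in a few lines; the 2x/20x multipliers come from the ZooKeeper docs quoted above, and the 2000 ms default tickTime is what yields the 40 s maximum:

```python
# Sketch of how ZooKeeper derives session timeout bounds from tickTime:
# min = 2 * tickTime, max = 20 * tickTime (per the ZooKeeper docs).

def session_timeout_bounds(tick_time_ms: int) -> tuple[int, int]:
    """Return the (min, max) negotiable session timeout in milliseconds."""
    return 2 * tick_time_ms, 20 * tick_time_ms

# With the default tickTime of 2000 ms:
low, high = session_timeout_bounds(2000)
print(low, high)  # 4000 40000 -> the 40 s default maximum mentioned above
```

So to allow a session timeout beyond 40 s, tickTime itself has to be raised; requesting a larger timeout from the client alone is clamped to the 20x bound.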
05-29-2017
07:05 AM
Adding to @Josh Elser's answer: if you choose to increase the ZooKeeper session timeout by increasing tickTime, adding those values to the HBase config won't work. ZooKeeper takes its own tickTime into account for the calculation (tickTime * 20). I was facing the same problem and later wrote about it here: https://superuser.blog/hbase-dead-regionserver/