
Why does the region server go down?




I'm totally new to both Hadoop & HBase, and have just started playing around with them...

I want to store time series (parameter values, like temperature, pressure, ...) in HBase, and I imagined the following schema:

One parameter will be stored in a single row, and its values will be stored in columns (one per timestamp) in this row.

Given the current volume of data for each parameter, this design could lead to several million columns (up to roughly 100 million) for a single row (and to several million rows, but I guess that is not a problem...).

I tried to write such data using the Hortonworks Sandbox (by default, one region server); basically, here is the code I execute:


	// All samples for one parameter go into a single Put, i.e. a single row
	Put put = new Put(Bytes.toBytes("MyParam"));
	for (Sample sample : samples) {
	    put.addColumn(Bytes.toBytes("singleFam"), Bytes.toBytes(sample.getTimestamp()), Bytes.toBytes(sample.getValue()));
	}
	table.put(put);



The problem is that when the number of columns exceeds 1 or 2 million, the region server usually stops during the write operation, and I don't understand why... Is there a limit on the size of a Put operation? On the number of columns in a row?
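For reference, one way to avoid a single giant Put carrying millions of cells is to split the samples into fixed-size batches and issue one write per batch. A minimal sketch of the chunking logic (the batch size, the generic helper, and the `table` variable hinted at in the comment are my own assumptions, not from the thread):

```java
import java.util.ArrayList;
import java.util.List;

public class PutBatcher {
    // Split a large list into fixed-size chunks so that no single write RPC
    // carries millions of cells. With HBase this would be used to build many
    // small Puts (and call table.put(batch) per chunk) instead of one Put
    // holding every column.
    static <T> List<List<T>> batches(List<T> items, int batchSize) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            out.add(new ArrayList<>(items.subList(i, Math.min(i + batchSize, items.size()))));
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> samples = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) samples.add(i);
        List<List<Integer>> chunks = batches(samples, 1_000);
        System.out.println(chunks.size()); // 10 batches of 1000 samples each
    }
}
```

The HBase client also ships a `BufferedMutator` that buffers and flushes writes for you, which may be a better fit than hand-rolled batching.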

Any help would be greatly appreciated ...


Re: Why does the region server go down?

@Sebastien Chausson

The way you are storing data looks a little unusual. Can't you make the timestamp the row key and the parameters the columns? In your code snippet the parameter name is the row key and the timestamp is the column name, which is not ideal.


	List<Put> puts = new ArrayList<Put>();
	for (Sample sample : samples) {
	    // One row per timestamp, one column per parameter
	    Put put = new Put(Bytes.toBytes(sample.getTimestamp()));
	    put.addColumn(Bytes.toBytes("singleFam"), Bytes.toBytes(sample.getParamName()), Bytes.toBytes(sample.getValue()));
	    puts.add(put);
	}
	table.put(puts);

Or you can use Phoenix for your use case, with a schema where the timestamp is your primary key and the parameters are columns.
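For reference, the kind of Phoenix DDL being suggested might look like this (the table and column names and the bucket count are made-up illustrations; `SALT_BUCKETS` and `ROW_TIMESTAMP` are real Phoenix options):

```sql
-- Hypothetical time series table: one row per timestamp, one column per parameter
CREATE TABLE IF NOT EXISTS samples (
    ts    DATE NOT NULL,
    param VARCHAR NOT NULL,
    val   DOUBLE
    CONSTRAINT pk PRIMARY KEY (ts ROW_TIMESTAMP, param)
) SALT_BUCKETS = 16;
```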

For time series data there is a chance that one region server becomes a hotspot and may shut down. You can use the salting feature provided by Phoenix.
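If you are not on Phoenix, the same idea can be hand-rolled by prefixing each row key with a small salt byte derived from the key, so consecutive timestamps spread across several regions instead of hammering the latest one. A minimal sketch (the bucket count of 16 is an arbitrary assumption):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class SaltedKey {
    static final int BUCKETS = 16; // assumed number of salt buckets

    // Prefix the row key with a one-byte salt computed from the key itself.
    // The salt is deterministic, so reads can recompute it; scans must fan
    // out over all BUCKETS prefixes. This mimics Phoenix's SALT_BUCKETS.
    static byte[] salted(byte[] rowKey) {
        int salt = (Arrays.hashCode(rowKey) & 0x7fffffff) % BUCKETS;
        byte[] out = new byte[rowKey.length + 1];
        out[0] = (byte) salt;
        System.arraycopy(rowKey, 0, out, 1, rowKey.length);
        return out;
    }

    public static void main(String[] args) {
        byte[] key = "1465376400000".getBytes(StandardCharsets.UTF_8);
        byte[] s = salted(key);
        System.out.println(s[0]); // salt byte in [0, 16)
    }
}
```

The trade-off is that a time-range scan now touches every bucket, which is why salting is usually paired with a layer (like Phoenix) that handles the fan-out for you.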

The row timestamp feature is also useful and can be applied to your use case.



Re: Why does the region server go down?

Hello Sebastien,

At a very high level you can think of HBase as a hash table. In order to get throughput, HBase distributes your row keys into smaller chunks, "regions", and these N regions are served by region servers. Your schema should take this into account. By this I mean: if your row keys are timestamps, they will continuously increase, so you will always write to the latest region, only get the throughput of one region, and thus not benefit from the distribution. Do read up on region splits.
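One common alternative to a pure-timestamp key is a composite row key with the parameter first and a big-endian timestamp second (similar in spirit to what OpenTSDB does): writes for different parameters land in different regions, while a scan over one parameter's time range stays contiguous. A rough sketch with illustrative names:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class CompositeKey {
    // Composite row key: parameter name first, timestamp second.
    // The big-endian long from putLong() keeps rows for one parameter
    // sorted by time. Note: production systems like OpenTSDB use
    // fixed-width parameter ids rather than raw variable-length names.
    static byte[] rowKey(String param, long epochMillis) {
        byte[] p = param.getBytes(StandardCharsets.UTF_8);
        return ByteBuffer.allocate(p.length + Long.BYTES)
                .put(p)
                .putLong(epochMillis)
                .array();
    }

    public static void main(String[] args) {
        byte[] rk = rowKey("temperature", 1465376400000L);
        System.out.println(rk.length); // 11 name bytes + 8 timestamp bytes = 19
    }
}
```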

Back to your problem: millions of columns or rows is not an issue. That being said, you need to help the system a little bit. When you write to HBase you first write to a memory buffer, the memstore, which flushes regularly. When that happens, maintenance can take place behind the scenes, like compactions. If not accounted for, this can put enough load on the machine that the region server fails to heartbeat its state. HBase may then assume the region server is dead and take it down. Look in your logs for things like ZooKeeper timeouts.
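For reference, the heartbeat timeout and flush threshold mentioned here are governed by settings like the following in hbase-site.xml (the values shown are the common defaults, not a tuning recommendation):

```xml
<!-- Sketch: relevant knobs in hbase-site.xml; values are common defaults -->
<property>
  <name>zookeeper.session.timeout</name>
  <value>90000</value> <!-- ms of missed heartbeats before a region server is declared dead -->
</property>
<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <value>134217728</value> <!-- flush the memstore to disk once it exceeds 128 MB -->
</property>
```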

Here are some resources that can help you understand the overall picture:

If you share your schema and the log errors, there are probably some architecture or config elements to tweak to help you out.


Re: Why does the region server go down?


Thanks for your answer,

If I understand the first part of your post properly, you "confirm" that using the parameter name as the row key could be a better design than using the timestamp as the row key. Great.

Now for the second part, I need additional help: which logs should I consider here? I took a look at the ZooKeeper logs, as well as the HBase master & region server logs, but nothing appeared in them while running my code...?

How can I share my schema ?

Sorry for dumb questions...

Re: Why does the region server go down?


Hi Rajeshbabu,

Thanks for your answer.

About your first suggestion: yes, I can use the timestamp as the row key, but what will that change exactly? Since I may have millions of parameters as well, I'm afraid I will end up in the same situation, no?

And about using Phoenix, I don't see why it would help. To me, Phoenix is "only" an additional layer for running SQL queries over HBase storage, so it does not answer my schema design question... but maybe I'm wrong...

Thanks again for your help