Best option for data node sizing: 24 x 2 TB disks vs 12 x 10 TB disks

Contributor

Hi,

I'm having a hard time deciding which option would be better for sizing our hot data nodes:

A) 12 x 10 TB 7.2k drives, single 12G RAID controller

Pros: lower total cost per GB.

Cons: fewer data nodes for the same capacity; re-replicating data after a node failure takes longer.

B) 24 x 2 TB 7.2k drives, single 12G RAID controller

Pros: more data nodes; re-replicating data after a node failure takes less time.

Cons: a single disk controller might become a bottleneck for 24 disks, especially when each drive is configured as a single-drive RAID-0 to take advantage of the controller's cache; total cost per GB would be higher.
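
For what it's worth, here is the back-of-envelope math behind my controller worry; the per-drive and controller numbers below are my rough assumptions, not vendor specs:

```python
# Back-of-envelope check on the single-controller concern in option B.
# Assumed numbers, not vendor specs: a 7.2k drive streams roughly
# 150-200 MB/s sequentially, and a typical 12G RAID controller sits in
# a PCIe 3.0 x8 slot (~7900 MB/s theoretical host-side ceiling).

DRIVE_MB_S = 180        # assumed sequential throughput per 7.2k drive
CONTROLLER_MB_S = 7900  # assumed host-side ceiling of the controller

for disks in (12, 24):
    aggregate = disks * DRIVE_MB_S
    print(f"{disks} disks: ~{aggregate} MB/s aggregate, "
          f"{aggregate / CONTROLLER_MB_S:.0%} of the assumed ceiling")
```

So on paper even 24 drives streaming sequentially leave headroom; my concern is more about cache contention and mixed random I/O on a single controller.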

I was also wondering: how bad would it be to use a data node with 56 x 10 TB disks and 2 RAID controllers for cold storage?


@Ali

Going with larger drive sizes is often tied to a density requirement. In other words, you need a specific amount of storage space, but are limited in the number of servers you can deploy. As a general rule, you will get better performance from 1-2TB drives.

However, using larger drives specifically for cold storage is a reasonable approach. If you use storage tiers and proper labeling, larger drives should work fine. Just remember that cold storage typically doesn't have the same performance requirements as warm or hot storage.
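
As a rough sketch of those mechanics (the paths below are placeholders for your own mount points; the [ARCHIVE] tag, the COLD policy, and the mover are standard HDFS heterogeneous-storage features):

```sh
# hdfs-site.xml on the cold-tier data nodes: tag the large drives as
# ARCHIVE in dfs.datanode.data.dir, e.g.
#   [ARCHIVE]/grid/0/hdfs/data,[ARCHIVE]/grid/1/hdfs/data

# Pin the cold data set to the ARCHIVE tier, then migrate existing blocks
hdfs storagepolicies -setStoragePolicy -path /data/cold -policy COLD
hdfs mover -p /data/cold
```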

As you already highlighted, recovering from disk failures will take longer with larger drives. I would not recommend a data node with 56 x 10 TB drives; you would be better off with 3 data nodes of 24 x 8 TB drives each, which gives you roughly the same raw capacity (576 TB vs 560 TB) spread across three failure domains.

Contributor

@Michael Young

Thank you very much. Unfortunately, the hardware vendor we are married to does not offer anything like 24 x 8 TB. I have to choose between 12 x (10/8/6 TB) and 56 x (10/8/6 TB). What do you think? Should I go with 12 x 10 TB for the cold storage?

@Ali

I think it depends on your tolerance for disk-failure recovery times and on the overall performance of the cold storage. It /will/ work for you. I'm not a fan of very large drives, but that doesn't mean you can't use them. You have to balance your needs against your cost and environmental constraints.

Super Guru

@Ali

I would suggest going with 24 x 2 TB. Think about how Hadoop works and what happens in the case of a failure.

1. More drives mean less data per spindle, which gives you better data distribution and better I/O performance. If a disk fails, you lose only 2 TB of data instead of 10 TB. That brings me to the second, more important point.

2. When a node fails, you theoretically lose 48 TB of data (in practice less). Hadoop will automatically try to re-replicate the lost blocks. If you lose a node holding 48 TB, re-replicating that data can easily take 8-10 hours; if you lose 120 TB in a single node failure, re-replication will probably take more than a day, and your node will likely be back online before it finishes. (See the rough estimate after this list.)
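
A minimal sketch of that estimate in Python, assuming each surviving node contributes a modest sustained re-replication rate (my assumption; the real rate depends on your network, disks, and throttles such as dfs.namenode.replication.max-streams):

```python
# Rough re-replication time after losing one data node: the surviving
# nodes must re-create every block replica the dead node held.

def rereplication_hours(lost_tb, surviving_nodes, per_node_mb_s=75):
    """Hours to re-replicate lost_tb, assuming each surviving node
    contributes per_node_mb_s of sustained background bandwidth."""
    total_mb = lost_tb * 1024 * 1024            # TB -> MB (binary)
    return total_mb / (surviving_nodes * per_node_mb_s) / 3600

# Example: a 20-node cluster loses one node (19 survivors)
print(f"48 TB node (24 x 2 TB):   ~{rereplication_hours(48, 19):.0f} h")
print(f"120 TB node (12 x 10 TB): ~{rereplication_hours(120, 19):.0f} h")
```

With those assumptions, a 20-node cluster lands right around the 8-10 hour mark for a 48 TB node and past a day for a 120 TB node.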

I wrote the following article to explain this second point when considering hardware for Hadoop:

https://community.hortonworks.com/content/kbentry/48878/hadoop-data-node-density-tradeoff.html
