
Lazuran
Gallente Time And ISK Sink Corporation
|
Posted - 2007.12.28 12:10:00 -
[1]
RAID-1 ftw. ...
Disclaimer: I do not speak for the fanbois. |

Lazuran
Gallente Time And ISK Sink Corporation
|
Posted - 2007.12.28 12:51:00 -
[2]
Originally by: Brackun I would've thought RAID-5 would be better, it's what most NAS boxes seem to default to anyway - and they're dedicated storage devices.
RAID-5 needs 3 drives or more and is kinda expensive to do (parity calculations). It "wastes" only 1 drive's worth of capacity, so it's a good default for a NAS.
RAID-6 needs 4 drives or more and is even more expensive (double parity). It is also rather "new", as in it hasn't been widely available for very long yet.
RAID-1 is extremely cheap to implement and only needs 2 drives; most current PCs support it out of the box at the BIOS level. Quick capacity numbers below.
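To put rough numbers on the capacity overhead, here's a quick sketch (Python; the drive counts and 500GB size are made up purely for illustration, not tied to any particular controller):
# Usable capacity for equal-sized drives at different RAID levels (rough sketch).
def usable_gb(level, drives, size_gb):
    if level == "RAID-1":            # mirror: every write goes to both drives
        return size_gb
    if level == "RAID-5":            # loses one drive's worth to parity
        return (drives - 1) * size_gb
    if level == "RAID-6":            # loses two drives' worth to double parity
        return (drives - 2) * size_gb
    raise ValueError(level)

for level, n in [("RAID-1", 2), ("RAID-5", 4), ("RAID-6", 5)]:
    print(level, n, "x 500GB ->", usable_gb(level, n, 500), "GB usable")
# RAID-1 2 x 500GB -> 500 GB usable
# RAID-5 4 x 500GB -> 1500 GB usable
# RAID-6 5 x 500GB -> 1500 GB usable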
Disclaimer: I do not speak for the fanbois. |

Lazuran
Gallente Time And ISK Sink Corporation
|
Posted - 2007.12.28 13:18:00 -
[3]
Edited by: Lazuran on 28/12/2007 13:19:27
Originally by: Chribba anyone know if RAMdrives will fail after a while too? since the problem with db-drives is the massive write/reads, making a RAMdrive would go poof on the RAM after a while too or anyone know?
If you mean those Gigabyte i-ram drives, the answer is probably no: no mechanical parts and RAM can be rewritten for a long time (no limits known to me). But i-ram is rather small.
If you mean SSDs, they have a finite lifetime, and I doubt the manufacturers' MTBF claims of 2m hours (vs. 500k hours for mechanical drives). These must be extrapolated failure rates from short operation intervals, i.e. 2000 drives running for 1000 hours => ~1 broken = 2m hours MTBF (simplified). However, Flash memory has a finite lifespan (measured in the number of times a block can be written), so such an extrapolation is not correct. It's also probably based on assumptions about writes/hour for laptops and such, so nothing like what you'd be doing for database work.
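The arithmetic behind that kind of extrapolation, as a toy sketch (numbers invented just to mirror the example above):
# Toy MTBF extrapolation: total drive-hours divided by observed failures.
drives = 2000
hours_each = 1000                      # short test window, nowhere near wear-out
failures = 1
print(drives * hours_each / failures)  # 2,000,000 hours "MTBF", says nothing about flash wear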
Your best tradeoff for cost/space/lifetime is probably a RAID-1 of two 32GB ATA/SATA SSDs. They are very fast for DB work, but make sure you do some ageing on one of the 2 SSDs before you set up the RAID, or they might both fail at the same time (identical writes, finite number of total writes ...).
Disclaimer: I do not speak for the fanbois. |

Lazuran
Gallente Time And ISK Sink Corporation
|
Posted - 2007.12.28 14:39:00 -
[4]
Edited by: Lazuran on 28/12/2007 14:41:37
Originally by: Dark Shikari
Originally by: Lazuran Edited by: Lazuran on 28/12/2007 13:19:27
Originally by: Chribba anyone know if RAMdrives will fail after a while too? since the problem with db-drives is the massive write/reads, making a RAMdrive would go poof on the RAM after a while too or anyone know?
If you mean those Gigabyte i-ram drives, the answer is probably no: no mechanical parts and RAM can be rewritten for a long time (no limits known to me). But i-ram is rather small.
If you mean SSDs, they have a finite lifetime, and I doubt the manufacturers' MTBF claims of 2m hours (vs. 500k hours for mechanical drives). These must be extrapolated failure rates from short operation intervals, i.e. 2000 drives running for 1000 hours => ~1 broken = 2m hours MTBF (simplified). However, Flash memory has a finite lifespan (measured in the number of times a block can be written), so such an extrapolation is not correct. It's also probably based on assumptions about writes/hour for laptops and such, so nothing like what you'd be doing for database work.
I'm pretty sure that SSD tech has gotten to the point where solid state drives last longer overall than ordinary hard drives. The "finite lifespan" crap is basically nothing more than FUD at this point, I think.
That is basically what manufacturers claim, as well as those reviewers who look at the MTBF numbers without asking themselves whether any of these drives have been running for 2 million hours yet. ;-)
It is a fact that Flash-based SSDs have a finite number of write cycles for each flash cell. This limit has increased dramatically in the past few years, but it still varies greatly between manufacturers. These drives now remap heavily used blocks (wear levelling) and use other neat tricks to stretch durability, but in the end the MTBF is a fictional number based on assumptions made by the manufacturer.
When Samsung claims 2m hours MTBF for a product aimed at the laptop market rather than the server market, they will assume typical laptop usage, which is several orders of magnitude less write-intensive than DB server usage. So you cannot compare a server SCSI disk with an MTBF of 800k hours to a laptop SSD with an MTBF of 2m hours and conclude that the latter is 2.5 times more durable. It isn't. A lot of reviewers also make the mistake of assuming that MTBF is based on continuous use; it isn't.
Anandtech writes about this: "This means an average user can expect to use the drive for about 10 years under normal usage conditions or around five years in a 100% power-on state with an active/idle duty cycle at 90%. These numbers are subject to change depending upon the data management software algorithms and actual user patterns."
That is no higher than current server disks, whose typical usage patterns are much more write-intensive...
PS: the finite lifespan is of course a technological limit and cannot be engineered away ... current flash SSDs apparently offer around 100k write/erase cycles per block.
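Back-of-the-envelope endurance estimate, just a sketch: the drive size and daily write volume are my own assumptions, it ignores write amplification, and it assumes perfect wear levelling (real drives of this generation do worse):
# Naive flash wear-out estimate assuming perfect wear levelling.
capacity_gb = 32                # one of the 32GB SSDs mentioned above
cycles_per_block = 100000       # claimed write/erase cycles per block
db_writes_gb_per_day = 2000     # assumed heavy DB write load (~23 MB/s sustained)
total_writable_gb = capacity_gb * cycles_per_block
print(total_writable_gb / db_writes_gb_per_day / 365, "years")  # ~4.4 years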
Disclaimer: I do not speak for the fanbois. |

Lazuran
Gallente Time And ISK Sink Corporation
|
Posted - 2007.12.29 00:23:00 -
[5]
Originally by: Chribba
Originally by: Cergorach [curious]How big is this database?[/curious]
Only some 1GiB so not big at all.
With such a tiny DB you should have no performance issues at all with a decent RDBMS and a server with more RAM than that...
Disclaimer: I do not speak for the fanbois. |