
Chribba
Otherworld Enterprises Otherworld Empire
|
Posted - 2007.12.28 11:20:00 -
[1]
The database drive of EVE-Files went poof again. I'm on my way over to replace the disk and restore the database(s), but decided to have EVE-Files (and other sites) remain offline until it's all been fixed. Sorry for the trouble everyone.
/c
Secure 3rd party service ■ the Love project |
|

Pilk
Axiom Empire
|
Posted - 2007.12.28 11:28:00 -
[2]
We still love you.
I just wish you'd love(ship) me back. 
--P
Kosh: The avalanche has already started. It is too late for the pebbles to vote. Tyrrax's bet status: UNPAID. |

Wiggy69
Silver Aria
|
Posted - 2007.12.28 11:28:00 -
[3]
I'm sure we'll cope; you provide a fantastic service for nothing, so it's inevitable that things will break at one time or another.
GO CHRIBBA! TO THE CHRIBBAMOBILE!  -----
Wiggy's Bad Spelling and Grammar Complaints Department |

Sadayiel
Caldari Dragon Highlords Ministry Of Amarrian Secret Service
|
Posted - 2007.12.28 11:36:00 -
[4]
AWWW poor chribba, it seems that now piwats even gank his harddrives
well, be happy, we'll keep some veldspar for you hun
|

Lazuran
Gallente Time And ISK Sink Corporation
|
Posted - 2007.12.28 12:10:00 -
[5]
RAID-1 ftw. ...
Disclaimer: I do not speak for the fanbois. |

Dave White
Beagle Corp
|
Posted - 2007.12.28 12:25:00 -
[6]
Originally by: Lazuran RAID-1 ftw. ...
Was thinking this 
Originally by: Illyria Ambri Goonie posts are like coke... sure its entertaining in the beginning.. but the more you get the lower your IQ becomes.
|

Cergorach
Amarr The Helix Foundation
|
Posted - 2007.12.28 12:31:00 -
[7]
Originally by: Lazuran RAID-1 ftw. ...
RAID-6 is even safer (you can lose two drives in the array without a problem), but it is a bit on the expensive side (not many RAID cards support it, and those that do are expensive).
|

Iva Soreass
Personal Vendetta
|
Posted - 2007.12.28 12:43:00 -
[8]
GL Mr C hope you can sort it.
|

Brackun
the united
|
Posted - 2007.12.28 12:45:00 -
[9]
I would've thought RAID-5 would be better, it's what most NAS boxes seem to default to anyway - and they're dedicated storage devices.
|

Blafbeest
Gallente North Eastern Swat Pandemic Legion
|
Posted - 2007.12.28 12:46:00 -
[10]
its my fault, can't stop downloading so i blew ur hdd sorry ><
|
|

Lazuran
Gallente Time And ISK Sink Corporation
|
Posted - 2007.12.28 12:51:00 -
[11]
Originally by: Brackun I would've thought RAID-5 would be better, it's what most NAS boxes seem to default to anyway - and they're dedicated storage devices.
RAID-5 needs 3 drives or more and is kinda expensive to do (parity calculations). It "wastes" only 1 drive so it's a good default for NAS.
RAID-6 needs 4 drives or more and is even more expensive to do (more parity calculations). It is also rather "new" (as in not widely available for a long time yet).
RAID-1 is extremely cheap to implement and requires 2 drives; most current PCs support it out of the box at the BIOS level.
Disclaimer: I do not speak for the fanbois. |
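For the curious: the parity that makes RAID-5 cheap on drives is plain XOR, which is also where its extra write work comes from. A toy Python sketch (not tied to any real controller) of how a dead drive's block gets rebuilt from the survivors:

[code]
# Minimal RAID-5 parity sketch: the parity block is the XOR of the data blocks.
# Any single lost drive can be rebuilt by XOR-ing all the remaining blocks.

def xor_blocks(blocks):
    """XOR a list of equal-sized byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # one stripe across three data drives
parity = xor_blocks(data)            # stored on the fourth drive

# Drive 2 dies: rebuild its block from parity plus the surviving drives.
rebuilt = xor_blocks([parity, data[0], data[2]])
assert rebuilt == data[1]
[/code]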
|

Chribba
Otherworld Enterprises Otherworld Empire
|
Posted - 2007.12.28 13:03:00 -
[12]
anyone know if RAMdrives will fail after a while too? since the problem with db-drives is the massive write/reads, making a RAMdrive would go poof on the RAM after a while too or anyone know?
Secure 3rd party service ■ the Love project |
|

Mobius Dukat
Merch Industrial GoonSwarm
|
Posted - 2007.12.28 13:10:00 -
[13]
Edited by: Mobius Dukat on 28/12/2007 13:11:37
Originally by: Chribba anyone know if RAMdrives will fail after a while too? since the problem with db-drives is the massive write/reads, making a RAMdrive would go poof on the RAM after a while too or anyone know?
Going by the basic principle that RAM drives are just RAM sticks in a box:
No. They have no moving parts and very little heat build-up...
Ask yourself "how often does a RAM stick fail in your server?" It isn't often, I'll bet :)
However, if you lose power to the server, you lose your memory-resident database [unless you have one of those s****y alternate power supply drives].
You'd have to have some process to back up said memory-resident database every few minutes. But that said, if they work as they do in principle, then unless you shut the machine down it shouldn't be a problem :)
|

Franga
NQX Innovations
|
Posted - 2007.12.28 13:12:00 -
[14]
Originally by: Chribba anyone know if RAMdrives will fail after a while too? since the problem with db-drives is the massive write/reads, making a RAMdrive would go poof on the RAM after a while too or anyone know?
Are you talking about the 'solid-state' drives?
Originally by: Rachel Vend ... with 100% reliability in most cases ...
General Aesthetics Changes Thread |
|

Chribba
Otherworld Enterprises Otherworld Empire
|
Posted - 2007.12.28 13:17:00 -
[15]
Originally by: Franga
Originally by: Chribba anyone know if RAMdrives will fail after a while too? since the problem with db-drives is the massive write/reads, making a RAMdrive would go poof on the RAM after a while too or anyone know?
Are you talking about the 'solid-state' drives?
I am talking about either SSDs or RAM drives, e.g. Gigabyte's or virtual ones. IIRC SSDs are not that good for databases.
Secure 3rd party service ■ the Love project |
|

Lazuran
Gallente Time And ISK Sink Corporation
|
Posted - 2007.12.28 13:18:00 -
[16]
Edited by: Lazuran on 28/12/2007 13:19:27
Originally by: Chribba anyone know if RAMdrives will fail after a while too? since the problem with db-drives is the massive write/reads, making a RAMdrive would go poof on the RAM after a while too or anyone know?
If you mean those Gigabyte i-ram drives, the answer is probably no: no mechanical parts and RAM can be rewritten for a long time (no limits known to me). But i-ram is rather small.
If you mean SSDs, they have a finite lifetime and I doubt the manufacturers' MTBF claims of 2m hours (vs. 500k hours for mechanical drives). These must be extrapolated failure rates for short operation intervals, i.e. 2000 drives running for 1000 hours => ~1 broken = 2m hours MTBF (simplified). However, Flash memory has a finite lifespan (measured in the number of times a block can be written), so such an extrapolation is not correct. It's also probably based on assumptions of writes/hour for laptops and such, so not nearly what you'd be doing for database work.
Your best tradeoff for cost/space/life is probably a RAID-1 of two 32GB ATA/SATA SSDs. They are very fast for DB work, but make sure you do some ageing on one of the two SSDs before you set up the RAID or they might both fail at the same time (identical writes, finite number of total writes ...).
Disclaimer: I do not speak for the fanbois. |
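To put numbers on the contrast Lazuran is drawing, here is a sketch of both calculations. The fleet figures are the ones from the post; the capacity, the 100k-cycle endurance (mentioned later in the thread), and the write rates are illustrative assumptions only:

[code]
# MTBF extrapolated from a short fleet test (the post's simplified example):
drive_hours = 2000 * 1000        # 2000 drives x 1000 hours each
failures = 1
print(drive_hours / failures)    # 2,000,000 hours "MTBF"

# Endurance is a separate ceiling. With perfect wear leveling, an assumed
# 32GB drive at 100k erase cycles per block and 50GB written per day lasts:
capacity_gb, cycles, writes_per_day_gb = 32, 100_000, 50
print(capacity_gb * cycles / writes_per_day_gb / 365)   # ~175 years

# Without wear leveling, one hot block (say a DB index page) rewritten
# once per second burns through 100k cycles in about a day:
print(100_000 / 3600)            # ~28 hours
[/code]

The spread between those last two numbers is exactly why the wear-leveling quality of early SSDs mattered so much for database work.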

Kastar
Paragon Horizons Intergalactic Brotherhood
|
Posted - 2007.12.28 13:36:00 -
[17]
Thanks for your commitment Chribba
-----------------------------------------------
|

Franga
NQX Innovations
|
Posted - 2007.12.28 13:40:00 -
[18]
Originally by: Chribba
Originally by: Franga
Originally by: Chribba anyone know if RAMdrives will fail after a while too? since the problem with db-drives is the massive write/reads, making a RAMdrive would go poof on the RAM after a while too or anyone know?
Are you talking about the 'solid-state' drives?
I am talking about either SSDs or RAM drives, e.g. Gigabyte's or virtual ones. IIRC SSDs are not that good for databases.
Okay then, in that case I echo Mobius Dukat's post regarding them. Please note, however, that I have never used them and have no first-hand experience; my knowledge on this subject comes from magazines and/or reviews I have read.
Originally by: Rachel Vend ... with 100% reliability in most cases ...
General Aesthetics Changes Thread |

Meirre K'Tun
Nuclear Halo Insurgency
|
Posted - 2007.12.28 13:57:00 -
[19]
aren't ram drives rather small compared to normal ones?
|

Lanu
Caldari The Black Rabbits The Gurlstas Associates
|
Posted - 2007.12.28 14:16:00 -
[20]
Chribba <3 __________________
:CRY: FIX IT :CRY: |
|

Dark Shikari
Caldari Imperium Technologies Firmus Ixion
|
Posted - 2007.12.28 14:26:00 -
[21]
Originally by: Lazuran Edited by: Lazuran on 28/12/2007 13:19:27
Originally by: Chribba ...
... If you mean SSDs, they have a finite lifetime and I doubt the manufacturers' MTBF claims of 2m hours (vs. 500k hours for mechanical drives). These must be extrapolated failure rates for short operation intervals, i.e. 2000 drives running for 1000 hours => ~1 broken = 2m hours MTBF (simplified). However, Flash memory has a finite lifespan (measured in the number of times a block can be written), so such an extrapolation is not correct. It's also probably based on assumptions of writes/hour for laptops and such, so not nearly what you'd be doing for database work.
I'm pretty sure that SSD tech has gotten to the point where solid state drives last longer overall than ordinary hard drives. The "finite lifespan" crap is basically nothing more than FUD at this point, I think.
23 Member
EVE Video makers: save bandwidth! Use the H.264 AutoEncoder! (updated) |

Lazuran
Gallente Time And ISK Sink Corporation
|
Posted - 2007.12.28 14:39:00 -
[22]
Edited by: Lazuran on 28/12/2007 14:41:37
Originally by: Dark Shikari
Originally by: Lazuran ...
I'm pretty sure that SSD tech has gotten to the point where solid state drives last longer overall than ordinary hard drives. The "finite lifespan" crap is basically nothing more than FUD at this point, I think.
That is basically what manufacturers claim, as well as those reviewers who look at the MTBF numbers without asking themselves whether any of these drives have been running for 2 million hours yet. ;-)
It is a fact that Flash-based SSDs have a finite number of write cycles for each flash element. This limit has seen a dramatic increase in the past few years, but it still varies greatly among manufacturers. These drives now remap overly used blocks to increase durability and use other neat tricks, but in the end the MTBF is a fictional number based on assumptions made by the manufacturer.
Now when Samsung claims a 2m-hour MTBF for a product aimed at the laptop market rather than the server market, they will assume typical laptop usage, which is several orders of magnitude less write-intensive than DB server usage. So you cannot compare a server SCSI disk with an MTBF of 800k hours to a laptop SSD with an MTBF of 2m hours and conclude that the latter is 2.5 times more durable. It isn't. And a lot of reviewers make the mistake of assuming that MTBF is based on continuous use; it isn't.
Anandtech writes about this: "This means an average user can expect to use the drive for about 10 years under normal usage conditions, or around five years in a 100% power-on state with an active/idle duty cycle at 90%. These numbers are subject to change depending upon the data management software algorithms and actual user patterns."
This is not higher than current server disks, where typical usage patterns are much more write-intensive...
PS: the finite lifespan is of course a technological limit and cannot be removed... current flash SSDs apparently offer around 100k write/erase cycles per sector.
Disclaimer: I do not speak for the fanbois. |
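The block remapping mentioned above is wear leveling. A toy sketch of the idea, assuming nothing about real firmware (which is far more involved):

[code]
# Toy wear-leveling: a logical-to-physical map steers each write to the
# least-worn physical block, so no single block absorbs all the rewrites.

class ToyFlash:
    def __init__(self, nblocks):
        self.wear = [0] * nblocks   # erase count per physical block
        self.map = {}               # logical block -> physical block
        self.free = set(range(nblocks))

    def write(self, logical, data):
        if logical in self.map:               # release the old physical block
            self.free.add(self.map[logical])
        phys = min(self.free, key=lambda b: self.wear[b])  # least-worn block
        self.free.remove(phys)
        self.wear[phys] += 1
        self.map[logical] = phys

flash = ToyFlash(8)
for _ in range(1000):
    flash.write(0, b"hot block")    # hammer the same logical block
print(max(flash.wear))              # ~125 rather than 1000: wear is spread out
[/code]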

Raaki
The Arrow Project Morsus Mihi
|
Posted - 2007.12.28 14:53:00 -
[23]
Everything breaks if you wait long enough.
It's redundancy you want, not something that is claimed to be unbreakable.
|

Bish Ounen
Gallente Omni-Core Freedom Fighters Ethereal Dawn
|
Posted - 2007.12.28 15:26:00 -
[24]
Edited by: Bish Ounen on 28/12/2007 15:26:17 Chribba,
I would check to see what kind of drives your DB servers are using. Ideally you should be running a RAID 5 array on SCSI drives, NOT SATA drives!
Many lower-end datacenters have been swapping out for SATA RAID systems due to the significant cost savings over SCSI and the much improved lifespan of the new SATA drives over older IDE and EIDE drives (now commonly called PATA). The problem with moving to a SATA drive array is that while the drives are fine for raw data storage, which involves low amounts of read-write operations, on a DB server with HIGH amounts of read-writes they can burn out very, very fast.
SCSI was designed for the extreme environment of a database server and the millions upon millions of read-write operations per day that a busy DB server can perform. Using a RAID5 array in SCSI is the very best long-term fault-tolerant setup you can get. It costs more, but it's better than losing the DB drives every 6 months.
----------------------------------- How much would it cost to roll back to RevII CCP?
|

Shar'Tuk TheHated
|
Posted - 2007.12.28 15:27:00 -
[25]
<3 Chribba
DRINK RUM It fights scurvy & boosts morale!
THE BEATINGS WILL CONTINUE UNTIL MORALE IMPROVES! |

Regat Kozovv
Caldari E X O D U S Imperial Republic Of the North
|
Posted - 2007.12.28 15:34:00 -
[26]
Originally by: Bish Ounen Edited by: Bish Ounen on 28/12/2007 15:26:17 Chribba,
I would check to see what kind of drives your DB servers are using. Ideally you should be running a RAID 5 array on SCSI drives, NOT SATA drives! ...
-----------------------------------
SCSI is an interface, it does not necessarily indicate an enterprise-class drive. However, I see your point, and I think anyone would be hard pressed to find a SCSI drive not designed for a server environment. In any case, SCSI is largely being supplanted by SAS anyway...
That being said, there are SATA drives designed for the enterprise, and I'm not necessarily referring to 10k Raptor drives. They are still much more cost effective than SCSI/SAS, and unless you really need the high performance, you can get away with SATA just fine. The key really is in how the disks are arranged. Most good enclosures should let you hotswap SATA disks in RAID 5 with no issues.
I'm sure Chribba has a decent solution in place. However, if that is not the case (and if he's having to take down the server to replace the drive, I suspect it might be), then perhaps we should all chip in for some newer hardware. That stuff ain't cheap! =)
|

Bish Ounen
Gallente Omni-Core Freedom Fighters Ethereal Dawn
|
Posted - 2007.12.28 16:07:00 -
[27]
Originally by: Regat Kozovv
SCSI is an interface, it does not necessarily indicate an enterprise-class drive. ...
Heh, yeah, I know it's an interface. (Small Computer Systems Interface, to be precise) But I don't know of any modern SCSI uses outside of hard drives. I suppose I should have been a bit more specific though.
You are also correct about SAS supplanting most SCSI implementations, much like SATA supplanted PATA. But it's still basically the same thing: a SCSI drive, just with an updated data transfer interface.
I do disagree with you on one point though. While SATA drives have improved greatly in the past few years, a SCSI/SAS drive will still outperform/outlast a SATA drive any day. The manufacturing tolerances and quality control are just simply higher for SCSI/SAS, even compared to the so-called "enterprise" SATA drives.
Ultimately, I wouldn't trust ANY SATA implementation on a DB server that my job depended on keeping running. For a general storage NAS, Yes. For a high-transaction DB server? No way in Hades.
On your other comment, I too wonder about his implementation if he has to take the entire server down to fix the DB. Unless he's just talking about the web front-end being offlined while he restores the DB from backup. That doesn't involve physically shutting down a server and popping open the chassis. Honestly, I would be VERY surprised if his DB runs on the same machine as his webserver.
I'm thinking Chribba needs to run a donation drive so he can rent out a slot at a proper datacenter with a fully redundant fiber-channel SAN and full virtualization of all OSes. That should keep him up and running almost indefinitely.
----------------------------------- How much would it cost to roll back to RevII CCP?
|

Dark Shikari
Caldari Imperium Technologies Firmus Ixion
|
Posted - 2007.12.28 16:16:00 -
[28]
Originally by: Lazuran
Originally by: Dark Shikari ...
That is basically what manufacturers claim, as well as those reviewers who look at the MTBF numbers without asking themselves whether any of these drives have been running for 2 million hours yet. ;-) ...
Better said, current consumer-grade SSDs last longer than consumer-grade hard drives. A good modern flash drive lasts two years under continuous read-write; an ordinary consumer-grade hard disk probably wouldn't survive a year under that.
23 Member
EVE Video makers: save bandwidth! Use the H.264 AutoEncoder! (updated) |

Talking Elmo
Gallente Hogans Heroes
|
Posted - 2007.12.28 16:16:00 -
[29]
RAID 0+1 for DBs, fellas; 5 and 6 are way too slow for writes.
And SATA drives for a DB server are just fine.
|
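The slow-writes complaint rests on the classic small-random-write penalty: a mirrored write costs 2 disk I/Os, a RAID-5 write 4 (read old data and parity, write new data and parity), a RAID-6 write 6. A back-of-the-envelope comparison, with the per-spindle IOPS figure as an illustrative assumption:

[code]
# Rough random-write throughput per RAID level for the same 4 spindles.
disks, iops_per_disk = 4, 100
raw_iops = disks * iops_per_disk

for level, penalty in [("RAID-0", 1), ("RAID-0+1", 2), ("RAID-5", 4), ("RAID-6", 6)]:
    print(f"{level}: ~{raw_iops // penalty} random write IOPS")
# RAID-0: ~400, RAID-0+1: ~200, RAID-5: ~100, RAID-6: ~66
[/code]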

Regat Kozovv
Caldari E X O D U S Imperial Republic Of the North
|
Posted - 2007.12.28 16:26:00 -
[30]
Edited by: Regat Kozovv on 28/12/2007 16:26:13
Originally by: Bish Ounen
Heh, yeah, I know it's an interface. (Small Computer Systems Interface, to be precise.) ...
-----------------------------------
I should have been a little clearer: when I said SCSI is just an interface, I really meant that no one makes consumer-grade SCSI drives. =)
And yes, SCSI/SAS will outperform SATA any day. However, for Chribba's budget, I do wonder if SATA would be more cost effective. Fibre Channel SANs and proper virtualization are great and would definitely be recommended, but who has the resources to do that? I guess we need Chribba to post his specs.
On a related note, I was reading this just the other day: http://www.techreport.com/discussions.x/13849 Maybe we should buy him one! =)
|
|

Bish Ounen
Gallente Omni-Core Freedom Fighters Ethereal Dawn
|
Posted - 2007.12.28 16:37:00 -
[31]
Originally by: Regat Kozovv
I should have been a little clearer: when I said SCSI is just an interface, I really meant that no one makes consumer-grade SCSI drives. =) ...
Ahhh. Well yes, I think we can absolutely agree on that point; there really isn't any consumer-grade SCSI/SAS equipment out there. I do wonder, though, whether Chribba hasn't simply outstripped the capabilities of most consumer-grade hardware. I mean, didn't he JUST replace this equipment about 2 months ago? That seems like an awfully short lifespan for any server, unless he just got a lemon hard drive.
I suspect it may very well be time for him to just rent space in a proper datacenter. I'm sure there must be companies that specialize in high-availability/high-uptime web and data solutions that don't cost an arm and a leg to rent from?
I'd also like to see some stats on the type of traffic and throughput he gets on a daily basis. It would be easier to suggest solutions that way.
Also, that Sun system is kinda neat, but with all SATA drives I still don't think it would take the thrashing that EVE-Files must take. He's absolutely going to need a SAS RAID setup for his DB, or he's going to be swapping out drives every 2-3 months, if the last drive's survival time is any indication of his transaction levels.
------------------------------------------------- How much would it cost to roll back to RevII CCP?
|

Regat Kozovv
Caldari E X O D U S Imperial Republic Of the North
|
Posted - 2007.12.28 16:51:00 -
[32]
Edited by: Regat Kozovv on 28/12/2007 16:52:04
Originally by: Bish Ounen
Also, that Sun system is kinda neat, but with all SATA drives I still don't think it would take the thrashing that EVE-Files must take. He's absolutely going to need a SAS RAID setup for his DB, or he's going to be swapping out drives every 2-3 months, if the last drive's survival time is any indication of his transaction levels.
-------------------------------------------------
Good point. =)
Maybe use something like the Sun system as general file storage, and move the DB itself to a small, but high performance SAS solution like you described. Either way it'll take some cash.
|

flashfreaking
LFC FreeFall Securities
|
Posted - 2007.12.28 17:42:00 -
[33]
Well, at my job we're currently using those Suns, and we're very happy with them, but as Chribba knows, the data traffic on his server is way higher than ours and the writing/reading is pretty intense, so SSDs seem to be the only way to save data without having to replace broken drives every month. Disallowed sig graphic. Send an e-mail to [email protected] when it meets the forum signature guidelines. ~Saint |

Lysander Memnos
The Graduates Brutally Clever Empire
|
Posted - 2007.12.28 18:55:00 -
[34]
Replacing hard drives is the cheapest option, so that's the easiest course.
There are a couple things you could do to reduce/mitigate the database load (which might help some on drive lifespans):
- Optimize your sites to reduce the number of database calls (easy to say, hard to do)
- Split your databases among different drives
- Round-robin DNS with database replication (boy this would be fun to implement...not)
- Beg CCP for a RAMSAN loaner

|
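The first suggestion in that list (cutting database calls) usually comes down to a read cache in front of the DB. A minimal sketch; query_db is a hypothetical stand-in for the real call, and a production site would more likely use memcached or similar:

[code]
import time

def query_db(sql):
    # hypothetical stand-in for the real database round-trip
    return f"result of {sql}"

_cache = {}   # sql -> (result, timestamp)
TTL = 60      # serve reads from RAM for this many seconds

def cached_query(sql):
    hit = _cache.get(sql)
    if hit and time.time() - hit[1] < TTL:
        return hit[0]                      # cache hit: no DB (disk) work at all
    result = query_db(sql)
    _cache[sql] = (result, time.time())
    return result

cached_query("SELECT * FROM files")        # first call hits the DB
cached_query("SELECT * FROM files")        # repeat is served from memory
[/code]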
|

Chribba
Otherworld Enterprises Otherworld Empire
|
Posted - 2007.12.28 19:02:00 -
[35]
Well, the hdd is replaced so all fine there, but for some reason the nightly backup of the db isn't really restoring OK - which is why everything is still not working as intended. I'm trying everything left and right to get the backup to restore properly.
Secure 3rd party service ■ the Love project |
|

Linerra Tedora
Amarr Boot To The Head Plunder-Bears
|
Posted - 2007.12.28 19:10:00 -
[36]
Hmm, couldn't it be possible to use a RAM drive to load up the database from a set of RAID drives?
That way accessing the database is no trouble for the hard drive, and the hard drive is only used when writing data into the base. I guess most of his hits come from people reading parts instead of adding new stuff to the database.
|
|

Chribba
Otherworld Enterprises Otherworld Empire
|
Posted - 2007.12.28 22:07:00 -
[37]
Holding my breath that it will hold a little while longer now (or at least until a new drive arrives). Things "should" be back and working; some files (about 5) were uploaded after the backup of the db was made - those files have been deleted and need to be re-uploaded.
Sorry for the downtime pilots, let me know of anything not working.
Secure 3rd party service ■ the Love project |
|

Cergorach
Amarr The Helix Foundation
|
Posted - 2007.12.28 22:30:00 -
[38]
[curious]How big is this database?[/curious]
|
|

Chribba
Otherworld Enterprises Otherworld Empire
|
Posted - 2007.12.28 22:36:00 -
[39]
Originally by: Cergorach [curious]How big is this database?[/curious]
Only some 1GiB so not big at all.
Secure 3rd party service ■ the Love project |
|

flashfreaking
LFC FreeFall Securities
|
Posted - 2007.12.28 22:39:00 -
[40]
You, sir, deserve a medal ingame, like the Tournament ones. Hmm, that would be a good suggestion anyway, for all active community members doing something awesome for the community. Disallowed sig graphic. Send an e-mail to [email protected] when it meets the forum signature guidelines. ~Saint |
|

Verite Rendition
Caldari F.R.E.E. Explorer Atrum Tempestas Foedus
|
Posted - 2007.12.28 23:18:00 -
[41]
Originally by: Chribba
Originally by: Cergorach [curious]How big is this database?[/curious]
Only some 1GiB so not big at all.
Great, that's tiny. The RAMdrive idea is a good one given the size you're working with; you don't even need a hardware device to do it. I believe there are a couple of good Linux apps that create a RAMdrive out of the memory pool, which would let you quickly go that route if you'd like. ---- FREE Explorer Lead Megalomanic EVE Automated Influence Map |
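On Linux, tmpfs (e.g. /dev/shm) provides exactly that kind of RAMdrive out of the memory pool; the missing piece is the periodic flush to a real disk that Mobius Dukat mentioned, so a power loss doesn't eat the database. A sketch of the pattern, with the paths and interval as pure assumptions:

[code]
import shutil, time

RAM_DB  = "/dev/shm/evefiles.db"     # assumed tmpfs copy: fast reads/writes
DISK_DB = "/var/backups/evefiles.db" # assumed persistent copy on real disk

def load_into_ram():
    shutil.copy2(DISK_DB, RAM_DB)    # on boot, pull the last good copy into RAM

def flush_loop(interval=300):
    # Copy RAM back to disk every `interval` seconds; at most that many
    # seconds of writes are lost if the box loses power.
    while True:
        time.sleep(interval)
        shutil.copy2(RAM_DB, DISK_DB + ".tmp")
        shutil.move(DISK_DB + ".tmp", DISK_DB)   # rename, so readers never
                                                 # see a half-written file
[/code]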

Lazuran
Gallente Time And ISK Sink Corporation
|
Posted - 2007.12.29 00:23:00 -
[42]
Originally by: Chribba
Originally by: Cergorach [curious]How big is this database?[/curious]
Only some 1GiB so not big at all.
With such a tiny DB you should have no performance issues at all with a decent RDBMS and a server with more RAM than that...
Disclaimer: I do not speak for the fanbois. |

Flinx Evenstar
Minmatar Omniscient Order
|
Posted - 2007.12.29 01:07:00 -
[43]
Good job getting it back online Chribba
You are a ******* hero, dude. CCP should throw a little bandwidth and some storage space your way tbh
I really can't imagine EVE without eve-files, keep up the good work xxx
|

Aleksi Maksimovich
Alternative Methods Research Group New Eden Research
|
Posted - 2007.12.29 01:15:00 -
[44]
o/
RL comp tech - have been for 5 years
ssd for servers would scare me
get 6x 80GB SATA HDDs (approx $50 CDN, for round #'s)
RAID 6 PCI card (approx $450 CDN)
replace drives as necessary - knowing consumer hardware, I would keep at least 2 spares around at any time so you can drop 2 and have them go through the RMA process without any downtime.
expensive, yes
downtime, no
considering the number of people who use this service, I'm betting that if you asked for donations to upgrade hardware you'd likely get it in spades m8
~Cheers
__________________________________________________ Originally by: CCP Nozh
It's going to cost double for Amarr and I'm taking away your ability to warp.
|

Yumis
Amarr Vengeance of the Fallen Knights Of the Southerncross
|
Posted - 2007.12.29 02:16:00 -
[45]
Originally by: Mobius Dukat They have no moving parts and very little heat build-up...
Ask yourself "how often does a RAM stick fail in your server?" It isn't often, I'll bet :)
I work in a data center with HPC clusters, and the DIMMs get really, really hot (too hot to touch). I suppose it's down to what you do with the server: a general office server will idle a lot of the day, but if you pound it with calculations, things break. We lose a stick of RAM every couple of months across a few racks.
Make sure if you buy a SAS drive that you have a SAS controller; they look very similar to SATA, and a SATA drive will work with a SAS controller but not the other way around.
I'm not even going to touch the RAID levels as I don't know what situation Chribba has, but I'm presuming (and presumptions are evil and lead to mistakes) that it's colocated? If so he may only have 1U servers, so the number of drives available will be low due to the small profile of the server, and a separate NAS/SAN would be out of the question as they charge heavily for size & power usage.
Good luck Chribba, keep up the amazing work you do.
|

Yumis
Amarr Vengeance of the Fallen Knights Of the Southerncross
|
Posted - 2007.12.29 02:21:00 -
[46]
Originally by: Aleksi Maksimovich
ssd for servers would scare me
Scary, but it's coming; I spoke to a few guys at Seagate recently and rumour has it they are stopping making 10K rpm SCSI drives and focusing development on SSDs and improving the 15K rpm SAS ones.
Although if you have a SAN with 200+ spindles you can max out the 4Gb fibre connection, so they can still perform very well.
|