| Pages: 1 [2] 3 :: one page |

Sable Schroedinger
Gallente Jericho Fraction The Star Fraction
|
Posted - 2006.09.12 08:02:00 -
[31]
I'm starting to suspect that the current performance issues are out of CCP's hands. I implemented SQL Server 2005 at work about a month ago - whereas we never really had a problem with blocks and deadlocks before, I'm seeing an increased occurrence of them. It's still early days, so I've got little to no information on it all yet, but it's starting to look to me like 2005 has some flaws in that area. It's that, or its so-called "online" operations are not quite as "online" as they would have you believe - transaction log backups and index reorganisation, to name two. Could be that if there's a problem, we're waiting on MS to release a patch.
p.s. please don't jump in with "implement a proper DB such as X" comments. They're redundant and puerile. --------------------------------------------
Nothing is as cruel as the righteousness of innocents |

Matthew
Caldari BloodStar Technologies
|
Posted - 2006.09.12 08:03:00 -
[32]
50K online really shouldn't be a problem; the problem is what'll happen if that 50K keeps the same proportion of people in Jita and similar systems as we currently see.
Take the reboot this weekend. Before it happened, the queue to get into Jita was being reported as over 750. Think about that - even if everyone currently in Jita were magically removed, not everyone in the queue could get in before the system filled again. You have to ask why so many people insisted on waiting in that obviously ridiculous queue rather than finding something else to do.
Combine the 750 in the queue with the likely 600+ actually in the system, and you've got 1350+ trying to use one system. So if we say there are 70 nodes and 30k online, that's 4.5% of the playerbase trying to cram onto 1.4% of the server. In a clustered system like TQ, that's never going to be pretty.
Expanding the capacity of TQ to handle a distributed load is easy, just add more nodes. It's increasing the single-node capacity that's the problem. ------- There is no magic Wand of Fixing, and it is not powered by forum whines. |
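Matthew's back-of-the-envelope numbers can be checked directly. Everything below comes from his post except that the node count and online figure are his own stated assumptions, not confirmed numbers:

```python
# Rough load-concentration check for Matthew's Jita example.
# All figures are forum estimates, not official CCP numbers.
queued = 750          # reported queue to enter Jita
in_system = 600       # players likely already inside Jita
online = 30_000       # assumed concurrent users on the cluster
nodes = 70            # assumed number of SOL nodes

trying_jita = queued + in_system            # 1350 players after one system
player_share = trying_jita / online         # fraction of the playerbase
node_share = 1 / nodes                      # fraction of the cluster

print(f"{player_share:.1%} of players on {node_share:.1%} of the server")
# → 4.5% of players on 1.4% of the server
```

The mismatch between those two percentages is the whole argument: distributed capacity can't help a load that refuses to distribute.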

SIlver Light
Minmatar 5punkorp Interstellar Starbase Syndicate
|
Posted - 2006.09.12 08:13:00 -
[33]
Originally by: Kalaan Oratay /me points at the Chinese shard
/me scratches head 
Last word I saw on that was that Serenity wasn't officially CCP's. What they've done is rent out the server and game code to Optic to allow them to run eve-china. There are a few devs over there as advisors, but that's the relationship as I understand it. ------ Proud Member of 5punkorp |

BurnHard
|
Posted - 2006.09.12 08:38:00 -
[34]
Originally by: Sable Schroedinger I'm starting to suspect that the current performance issues are out of CCP's hands. I implemented SQL Server 2005 at work about a month ago - whereas we never really had a problem with blocks and deadlocks before, I'm seeing an increased occurrence of them. It's still early days, so I've got little to no information on it all yet, but it's starting to look to me like 2005 has some flaws in that area. It's that, or its so-called "online" operations are not quite as "online" as they would have you believe - transaction log backups and index reorganisation, to name two. Could be that if there's a problem, we're waiting on MS to release a patch.
p.s. please don't jump in with "implement a proper DB such as X" comments. They're redundant and puerile.
Deadlocks are inevitable; it's how you handle them that makes a difference. You can always restructure your schema to make them less likely - but I wouldn't like to be the one doing that on a billion-record table ;). On a related note, I've noticed deadlocks and timeouts with my test code on 2005 that I didn't get with 2000. It wouldn't surprise me if it needs a patch or two before it's up to standard.
|
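A common way to "handle" deadlocks, as BurnHard suggests, is to retry the victim transaction with a short back-off. A minimal sketch of the pattern (the exception class here is a stand-in; real code would catch the database driver's deadlock-victim error, which SQL Server raises as error 1205):

```python
import random
import time

class DeadlockError(Exception):
    """Stand-in for the driver error raised when a session is chosen
    as the deadlock victim (SQL Server reports this as error 1205)."""

def run_with_retry(txn, max_attempts=3):
    """Run a transaction callable, retrying when it loses a deadlock."""
    for attempt in range(1, max_attempts + 1):
        try:
            return txn()
        except DeadlockError:
            if attempt == max_attempts:
                raise
            # Randomised back-off so the former victims don't collide again.
            time.sleep(random.uniform(0.01, 0.05) * attempt)

# Demo: lose the deadlock twice, then succeed on the third attempt.
attempts = {"n": 0}
def flaky_txn():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise DeadlockError("chosen as deadlock victim")
    return "committed"

result = run_with_retry(flaky_txn)
print(result, "after", attempts["n"], "attempts")  # → committed after 3 attempts
```

The retry must wrap the whole transaction, not a single statement, since the victim's work is rolled back in full when the deadlock is broken.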

Lazuran
|
Posted - 2006.09.12 08:48:00 -
[35]
Edited by: Lazuran on 12/09/2006 08:48:14
Originally by: Sentient Void we had like 20% of all users on at once... shows the higher level of dedication to the game on EVE's standpoint, if you ask me.
That shows more idling and more use of multiple accounts at the same time.
"The whole of NYC is not 1.0. Some back alley in the Bronx is deep 0.0, while right outside NYPD headquarters is 1.0." -- Slaaght Bana |

Beigehornet
Caldari Digital Fury Corporation
|
Posted - 2006.09.12 09:12:00 -
[36]
Hrmmm. Nice statement; shame it still can't handle the current user base. 24k users already pushes the server to the max, and when it's over that the game is almost unusable. Good to know CCP is flashing their IBM bling that's no more than plastic with gold plating.
GG CCP
|

Sable Schroedinger
Gallente Jericho Fraction The Star Fraction
|
Posted - 2006.09.12 09:28:00 -
[37]
Originally by: BurnHard Deadlocks are inevitable; it's how you handle them that makes a difference. You can always restructure your schema to make them less likely - but I wouldn't like to be the one doing that on a billion-record table ;). On a related note, I've noticed deadlocks and timeouts with my test code on 2005 that I didn't get with 2000. It wouldn't surprise me if it needs a patch or two before it's up to standard.
Whilst I agree that deadlocks are inevitable, the increase I'm seeing is from one every three months or so to one or two per day!
However, each time the blocked process is a system one - the aforementioned transaction log backup and index reorganisation (not rebuild). So the waters are a little muddy here. --------------------------------------------
Nothing is as cruel as the righteousness of innocents |

Garia666
Amarr adeptus gattacus Lotka Volterra
|
Posted - 2006.09.12 09:31:00 -
[38]
The hardware is impressive; however, the problem lies in the code. They don't code like they used to - not as in the old days.
The old days
After seeing this you will know what I mean...
|

Splagada
Minmatar Tides of Silence
|
Posted - 2006.09.12 11:27:00 -
[39]
"400,000 random I/Os per second."
Oh my... doesn't the smoke from the HDs cause problems in the offices? :p -
Tides of Silence recruiting miners and overall fun people |

Matthew
Caldari BloodStar Technologies
|
Posted - 2006.09.12 11:50:00 -
[40]
Originally by: Splagada "400,000 random I/Os per second."
Oh my... doesn't the smoke from the HDs cause problems in the offices? :p
That's why they use this instead. It's basically a big-ass box full of RAM that pretends it's a hard drive. Though I gather it isn't big enough to hold all the tables, and they had to move some tables off to a normal disk array to make room until they take delivery of another one (which Oveur confirmed they have ordered, but these things probably aren't available on next-day delivery!). Might be the reason why the DB isn't quite as nippy as it used to be, but the fix is in transit (hopefully quite literally) ------- There is no magic Wand of Fixing, and it is not powered by forum whines. |

Lazuran
|
Posted - 2006.09.12 12:03:00 -
[41]
Originally by: Matthew
That's why they use this instead.
It worries me that even this isn't enough anymore, when it is 1-2 orders of magnitude faster than what they had in late 2004 or so, when there was no lag and the number of concurrent users was roughly 20-30% of today's.
Looks like they need to fix more than just the hardware...
"The whole of NYC is not 1.0. Some back alley in the Bronx is deep 0.0, while right outside NYPD headquarters is 1.0." -- Slaaght Bana |

Matthew
Caldari BloodStar Technologies
|
Posted - 2006.09.12 12:15:00 -
[42]
Originally by: Lazuran
Originally by: Matthew
That's why they use this instead.
It worries me that even this isn't enough anymore, when it is 1-2 orders of magnitude faster than what they had in late 2004 or so, when there was no lag and the number of concurrent users was roughly 20-30% of today's.
Looks like they need to fix more than just the hardware...
Afaik the problem they're having with the RAMSAN at the moment isn't one of speed but of storage capacity, which means they have to put some tables on a "standard" fibre channel disk array, which is much slower. Once more capacity arrives and all the tables can go back on them, I would be very surprised if we managed to hit the max speed of the RAMSANs.
The trouble is that in a clustered system like TQ, a bottleneck at any level can cause problems, and you usually have different problems kicking in for different people at different times. For example, the fastest DB imaginable isn't going to help Jita, because that bottleneck is on the SOL node actually running the system simulation.
Without the server diagnostics, we can't do more than make educated guesses about what may be the problem at any given point. ------- There is no magic Wand of Fixing, and it is not powered by forum whines. |
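Matthew's bottleneck argument can be made concrete with a toy model: a serial request path is only as fast as its slowest stage, so a faster DB changes nothing for a system that is SOL-node bound. All stage names and capacities below are invented for illustration, not measured TQ figures:

```python
def cluster_throughput(stages):
    """End-to-end capacity of a serial request path is set by its slowest stage."""
    return min(stages.values())

# Hypothetical per-stage capacities in requests/sec for a busy system.
jita = {"network": 5000, "database": 3000, "sol_node": 800}
print(cluster_throughput(jita))  # → 800: the SOL node is the bottleneck

# Making the DB 10x faster does nothing for this particular system:
faster_db = dict(jita, database=30000)
print(cluster_throughput(faster_db))  # → 800
```

The same model also shows why different players hit different limits: a quiet system with a slow disk tier would be database-bound instead, so "the" bottleneck depends on which stage saturates first for that load.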

Raquel Smith
Ferengi Commerce Authority
|
Posted - 2006.09.12 12:21:00 -
[43]
Originally by: Par'Gellen
I was going to say something similar but you said it perfectly. It boils down to: 30k users is nice but if the game is lagging and crashing daily then it's not such a big accomplishment. It's actually worse due to more people seeing it lag and crash 
Actually, Second Life is as unstable as Eve. ;) My fiancée plays and is constantly complaining about how regions of SL are crashing or having to go down for a reboot.
|
|

Valar

|
Posted - 2006.09.12 12:28:00 -
[44]
Originally by: Matthew
Afaik the problem they're having with the RAMSAN at the moment isn't one of speed but of storage capacity, which means they have to put some tables on a "standard" fibre channel disk array, which is much slower. Once more capacity arrives and all the tables can go back on them, I would be very surprised if we managed to hit the max speed of the RAMSANs.
The trouble is that in a clustered system like TQ, a bottleneck at any level can cause problems, and you usually have different problems kicking in for different people at different times. For example, the fastest DB imaginable isn't going to help Jita, because that bottleneck is on the SOL node actually running the system simulation.
Without the server diagnostics, we can't do more than make educated guesses about what may be the problem at any given point.
You are right, the problem with the RAMSAN is not the bandwidth but the capacity. Most of the database is on a 30-disk fibre channel array, a few indexes are on a 10-disk fibre channel array, and the heavily used stuff - the items table, a few related tables and the indexes that come with 'em, the transaction log and tempdb - is on the RAMSAN.
Due to the lack of space on the RAMSAN recently, I've had to move objects from the RAMSAN to the disk arrays, and I've had to take measures so that if the transaction log grows too much it starts expanding onto the disk array as well.
The performance hit of moving the objects from the RAMSAN was not as much as we had feared, but when the transaction log has been on the disk array, performance has suffered quite a bit. That, however, has only happened in exceptional circumstances, and I always fix it as soon as I notice. ------ Valar Database admin - Server operations team CCP Games How to write a good bugreport |
|
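Valar's placement strategy above - hottest objects on the RAM-based tier, everything else on the fibre channel arrays - is essentially a greedy fit by access frequency under a capacity limit. A toy sketch of that idea (object names, sizes, and access rates are invented for illustration, not CCP's real figures):

```python
def place_objects(objects, fast_capacity_gb):
    """Greedily place the hottest DB objects on the fast tier until it fills.

    objects: list of (name, size_gb, accesses_per_sec) tuples. Anything
    that doesn't fit on the fast tier falls back to the disk arrays.
    """
    fast, disk, used = [], [], 0.0
    for name, size, _ in sorted(objects, key=lambda o: o[2], reverse=True):
        if used + size <= fast_capacity_gb:
            fast.append(name)
            used += size
        else:
            disk.append(name)
    return fast, disk

# Hypothetical object list, loosely modelled on Valar's description.
tables = [
    ("items", 60, 9000),
    ("transaction_log", 40, 8000),
    ("tempdb", 30, 7000),
    ("market_history", 120, 500),
    ("char_archive", 200, 50),
]
fast, disk = place_objects(tables, fast_capacity_gb=140)
print(fast)  # → ['items', 'transaction_log', 'tempdb']
print(disk)  # → ['market_history', 'char_archive']
```

Shrink `fast_capacity_gb` in this model and hot objects start spilling to `disk`, which is exactly the failure mode Valar describes when the transaction log overflows onto the arrays.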

Par'Gellen
Gallente Low Grade Ore
|
Posted - 2006.09.12 13:14:00 -
[45]
Originally by: Valar You are right, the problem with the RAMSAN is not the bandwidth but the capacity. Most of the database is on a 30-disk fibre channel array, a few indexes are on a 10-disk fibre channel array, and the heavily used stuff - the items table, a few related tables and the indexes that come with 'em, the transaction log and tempdb - is on the RAMSAN.
Due to the lack of space on the RAMSAN recently, I've had to move objects from the RAMSAN to the disk arrays, and I've had to take measures so that if the transaction log grows too much it starts expanding onto the disk array as well.
The performance hit of moving the objects from the RAMSAN was not as much as we had feared, but when the transaction log has been on the disk array, performance has suffered quite a bit. That, however, has only happened in exceptional circumstances, and I always fix it as soon as I notice.
Thanks for the info Valar. So when will the new RAMSAN arrive and be installed?
Starmaps - An Insta Solution |

Wild Rho
Amarr Imperial Shipment
|
Posted - 2006.09.12 13:24:00 -
[46]
Originally by: Valar
stuff
I am so bloody glad I don't have your job tbh 
WE ARE DYSLEXIC OF BORG. Refutance is systile. Your ass will be laminated. - Jennie Marlboro
|

Helison
Gallente Times of Ancar R i s e
|
Posted - 2006.09.12 13:29:00 -
[47]
Valar, about the problems in the last few days, where several regions lagged extremely: are these DB-related, or are they caused by a massive overload (or a bug) on the SOL servers?
|

Gariuys
Evil Strangers Inc.
|
Posted - 2006.09.12 13:33:00 -
[48]
Originally by: Wild Rho
Originally by: Valar
stuff
I am so bloody glad I don't have your job tbh 
100% agreed lol, fun but a nightmare most of the time.
|

Larshus Magrus
Caldari Provisions
|
Posted - 2006.09.12 13:33:00 -
[49]
Originally by: Valar
Originally by: Matthew
Afaik the problem they're having with the RAMSAN at the moment isn't one of speed but of storage capacity, which means they have to put some tables on a "standard" fibre channel disk array, which is much slower. Once more capacity arrives and all the tables can go back on them, I would be very surprised if we managed to hit the max speed of the RAMSANs.
The trouble is that in a clustered system like TQ, a bottleneck at any level can cause problems, and you usually have different problems kicking in for different people at different times. For example, the fastest DB imaginable isn't going to help Jita, because that bottleneck is on the SOL node actually running the system simulation.
Without the server diagnostics, we can't do more than make educated guesses about what may be the problem at any given point.
You are right, the problem with the RAMSAN is not the bandwidth but the capacity. Most of the database is on a 30-disk fibre channel array, a few indexes are on a 10-disk fibre channel array, and the heavily used stuff - the items table, a few related tables and the indexes that come with 'em, the transaction log and tempdb - is on the RAMSAN.
Due to the lack of space on the RAMSAN recently, I've had to move objects from the RAMSAN to the disk arrays, and I've had to take measures so that if the transaction log grows too much it starts expanding onto the disk array as well.
The performance hit of moving the objects from the RAMSAN was not as much as we had feared, but when the transaction log has been on the disk array, performance has suffered quite a bit. That, however, has only happened in exceptional circumstances, and I always fix it as soon as I notice.
Although this makes logical sense, the steps taken to fix these large tables do not. The STANDARD approach (and yes, I maintain DBs as large as, if not larger than, what EVE runs, with millions if not hundreds of millions of transactions per day) is simply to add more memory to the machine. Get the tables into memory and the problem goes away.
Ok, you say, but:
1) How do you keep the most used tables in memory? You don't. The OS does. That's its job. Any good OS worth its salt will have no problems (assuming the DB is tuned correctly) maintaining huge tables in memory. Memory access is magnitudes faster than accessing tables stored on a hard drive... no matter if it's solid-state storage or a traditional fibre RAID array.
2) How do you have 250+ gigs of memory? My Intel Xeon box only supports 16/32 gigs!?! Buy some real hardware. The transactions EVE is pushing need massive amounts of RAM to scale. Even the new 64-gig 51xx-series Xeon motherboards aren't going to cut it. Bite the bullet, buy a real piece of EXPANDABLE IBM/Sun hardware with a real OS and just be done with it. You are spending silly money on the RAMSANs, which don't FIX anything... they just push the problem out further.
The real limiting factor here is NOT the RAMSAN storage. It's RAM capacity, which CCP is trying to work around by pushing RAM into a solid-state array, because the architecture they are for some reason stubbornly married to cannot support the amount of RAM needed to run the current application efficiently. It's not just me saying this. Talk to any DB engineer who works with large data sets; he will tell you exactly the same thing.
|

Gariuys
Evil Strangers Inc.
|
Posted - 2006.09.12 13:41:00 -
[50]
Originally by: Larshus Magrus
Originally by: Valar
Originally by: Matthew
Afaik the problem they're having with the RAMSAN at the moment isn't one of speed but of storage capacity, which means they have to put some tables on a "standard" fibre channel disk array, which is much slower. Once more capacity arrives and all the tables can go back on them, I would be very surprised if we managed to hit the max speed of the RAMSANs.
The trouble is that in a clustered system like TQ, a bottleneck at any level can cause problems, and you usually have different problems kicking in for different people at different times. For example, the fastest DB imaginable isn't going to help Jita, because that bottleneck is on the SOL node actually running the system simulation.
Without the server diagnostics, we can't do more than make educated guesses about what may be the problem at any given point.
You are right, the problem with the RAMSAN is not the bandwidth but the capacity. Most of the database is on a 30-disk fibre channel array, a few indexes are on a 10-disk fibre channel array, and the heavily used stuff - the items table, a few related tables and the indexes that come with 'em, the transaction log and tempdb - is on the RAMSAN.
Due to the lack of space on the RAMSAN recently, I've had to move objects from the RAMSAN to the disk arrays, and I've had to take measures so that if the transaction log grows too much it starts expanding onto the disk array as well.
The performance hit of moving the objects from the RAMSAN was not as much as we had feared, but when the transaction log has been on the disk array, performance has suffered quite a bit. That, however, has only happened in exceptional circumstances, and I always fix it as soon as I notice.
Although this makes logical sense, the steps taken to fix these large tables do not. The STANDARD approach (and yes, I maintain DBs as large as, if not larger than, what EVE runs, with millions if not hundreds of millions of transactions per day) is simply to add more memory to the machine. Get the tables into memory and the problem goes away.
Ok, you say, but:
1) How do you keep the most used tables in memory? You don't. The OS does. That's its job. Any good OS worth its salt will have no problems (assuming the DB is tuned correctly) maintaining huge tables in memory. Memory access is magnitudes faster than accessing tables stored on a hard drive... no matter if it's solid-state storage or a traditional fibre RAID array.
2) How do you have 250+ gigs of memory? My Intel Xeon box only supports 16/32 gigs!?! Buy some real hardware. The transactions EVE is pushing need massive amounts of RAM to scale. Even the new 64-gig 51xx-series Xeon motherboards aren't going to cut it. Bite the bullet, buy a real piece of EXPANDABLE IBM/Sun hardware with a real OS and just be done with it. You are spending silly money on the RAMSANs, which don't FIX anything... they just push the problem out further.
The real limiting factor here is NOT the RAMSAN storage. It's RAM capacity, which CCP is trying to work around by pushing RAM into a solid-state array, because the architecture they are for some reason stubbornly married to cannot support the amount of RAM needed to run the current application efficiently. It's not just me saying this. Talk to any DB engineer who works with large data sets; he will tell you exactly the same thing.
Yes, and obviously the guys at CCP never thought of this. And they have no reason not to do it, because they're huge noobs at running a DB.
|

Lazuran
|
Posted - 2006.09.12 14:05:00 -
[51]
Originally by: Larshus Magrus
1) How do you keep the most used tables in memory? You don't. The OS does. That's its job. Any good OS worth its salt will have no problems (assuming the DB is tuned correctly) maintaining huge tables in memory.
An OS that does this better than an experienced DBA does not exist yet, as far as I know.
Quote:
2) How do you have 250+ gigs of memory? My Intel Xeon box only supports 16/32 gigs!?! Buy some real hardware. The transactions EVE is pushing need massive amounts of RAM to scale. Even the new 64-gig 51xx-series Xeon motherboards aren't going to cut it. Bite the bullet, buy a real piece of EXPANDABLE IBM/Sun hardware with a real OS and just be done with it. You are spending silly money on the RAMSANs, which don't FIX anything... they just push the problem out further.
Meh, there are at least two "cheap" x86 configurations out there with support for 128GB RAM and 16 CPU cores (e.g. Iwill's Opteron boxes). Perhaps you can go even higher (Opterons can address much more physical RAM; I don't know which existing chipsets support more than 128GB though), but the x86 architecture is somewhat limiting, and I'm not aware of an MS SQL port to other hardware. It does support clustering, though (but I don't know whether this improves performance or provides only failover capabilities). And no, I wouldn't want to wait for CCP to port the database to Oracle ;-).
"The whole of NYC is not 1.0. Some back alley in the Bronx is deep 0.0, while right outside NYPD headquarters is 1.0." -- Slaaght Bana |

Isyel
Minmatar Masuat'aa Matari Ushra'Khan
|
Posted - 2006.09.12 15:03:00 -
[52]
I still say I barely ever have any lag (even when there were 150 people in local, and with some rather big fights with lots of frig action going on lately). Now, I'm referring to pure server-client lag.
I do have issues with lag, but it's mainly graphical, or so it appears. The game stutters like crazy in any moderate fight or at a POS, while the ship and modules react normally. You turn off the UI and it's better, but it still stutters. Now, perhaps it has something to do with the normal lag, but since modules and commands respond normally, meh. (Yes, I have plenty of power to run EVE; Oblivion runs smoothly :P). THAT would be my main EVE problem.
|

Death Kill
Caldari direkte
|
Posted - 2006.09.12 15:22:00 -
[53]
Originally by: Larshus Magrus
Ok you say but:
2) How do you have 250+ gigs of memory?
I don't have a clue about all this tech stuff, but the server at work has over 280 gigs of RAM.
Recruitment |

Larshus Magrus
Caldari Provisions
|
Posted - 2006.09.12 15:27:00 -
[54]
Originally by: Gariuys
Yes, and obviously the guys at CCP never thought of this. And they have no reason not to do it, because they're huge noobs at running a DB.
Apparently, yes. I know your tone was sarcastic, but they do seem to be noobs at running very large databases. If you look around at other very large databases, no one does it the way CCP does. That does tend to call their approach into question.
|

Larshus Magrus
Caldari Provisions
|
Posted - 2006.09.12 15:32:00 -
[55]
Originally by: Lazuran Edited by: Lazuran on 12/09/2006 14:20:03
Originally by: Larshus Magrus
1) How do you keep the most used tables in memory? You don't. The OS does. That's its job. Any good OS worth its salt will have no problems (assuming the DB is tuned correctly) maintaining huge tables in memory.
An OS that does this better than an experienced DBA does not exist yet, as far as I know.
Quote:
2) How do you have 250+ gigs of memory? My Intel Xeon box only supports 16/32 gigs!?! Buy some real hardware. The transactions EVE is pushing need massive amounts of RAM to scale. Even the new 64-gig 51xx-series Xeon motherboards aren't going to cut it. Bite the bullet, buy a real piece of EXPANDABLE IBM/Sun hardware with a real OS and just be done with it. You are spending silly money on the RAMSANs, which don't FIX anything... they just push the problem out further.
Meh, there are at least two "cheap" x86 configurations out there with support for 128GB RAM and 16 CPU cores (e.g. Iwill's Opteron boxes). Perhaps you can go even higher (Opterons can address much more physical RAM; I don't know which existing chipsets support more than 128GB though), but the x86 architecture is somewhat limiting, and I'm not aware of an MS SQL port to other hardware. It does support clustering, though (but I don't know whether this improves performance or provides only failover capabilities). And no, I wouldn't want to wait for CCP to port the database to Oracle ;-).
Edit: how wrong I was... MS SQL apparently works on HP Superdomes with Itanium CPUs (they support 1TB RAM easily): http://www.tpc.org/results/individual_results/HP/hp_orca1tb_win64_ex.pdf
I was referring to CCP's tendency to only use Intel x86 hardware in their DB boxes. If you look at their current hardware and their past hardware, they have stuck to Xeons. My guess is that, benchmarking MS SQL, it might perform better on Xeon boxes than Opterons... that's just a guess though.
However, you are correct; there are 8-way Opteron boxes that can handle large amounts of buffered RAM. That said, I hear there are weird performance issues due to the large parallel signalling lanes. Intel is getting around this by going with serial FB-DIMMs, so we should see much larger-capacity Xeon boxes in the future.
As far as Itanium goes: a completely viable architecture, and moving to it would be a good thing. However, Microsoft cancelled Windows for Itanium, so they would have to move to an alternate OS.
|

Larshus Magrus
Caldari Provisions
|
Posted - 2006.09.12 15:33:00 -
[56]
Originally by: Death Kill
Originally by: Larshus Magrus
Ok, you say, but:
2) How do you have 250+ gigs of memory?
I don't have a clue about all this tech stuff, but the server at work has over 280 gigs of RAM.
Uh, that's not possible unless it's a big-boy box. Are you sure you don't mean 280 megs of RAM?
|

Abye
|
Posted - 2006.09.12 15:36:00 -
[57]
Originally by: Larshus Magrus
Originally by: Death Kill
Originally by: Larshus Magrus
Ok, you say, but:
2) How do you have 250+ gigs of memory?
I don't have a clue about all this tech stuff, but the server at work has over 280 gigs of RAM.
Uh, that's not possible unless it's a big-boy box. Are you sure you don't mean 280 megs of RAM?
Once you stop using x86 toys and use hardware that was designed for server work from the beginning, it's not that hard to pull off.
|

Splagada
Minmatar Tides of Silence
|
Posted - 2006.09.12 15:49:00 -
[58]
Originally by: Valar
You are right, the problem with the RAMSAN is not the bandwidth but the capacity. Most of the database is on a 30-disk fibre channel array, a few indexes are on a 10-disk fibre channel array, and the heavily used stuff - the items table, a few related tables and the indexes that come with 'em, the transaction log and tempdb - is on the RAMSAN.
Due to the lack of space on the RAMSAN recently, I've had to move objects from the RAMSAN to the disk arrays, and I've had to take measures so that if the transaction log grows too much it starts expanding onto the disk array as well.
The performance hit of moving the objects from the RAMSAN was not as much as we had feared, but when the transaction log has been on the disk array, performance has suffered quite a bit. That, however, has only happened in exceptional circumstances, and I always fix it as soon as I notice.
i just had a geekgasm -
Tides of Silence recruiting miners and overall fun people |

Jobie Thickburger
Gallente Intergalactic Mining
|
Posted - 2006.09.12 17:42:00 -
[59]
Originally by: Larshus Magrus
Originally by: Gariuys
Yes, and obviously the guys at CCP never thought of this. And they have no reason not to do it, because they're huge noobs at running a DB.
Apparently, yes. I know your tone was sarcastic, but they do seem to be noobs at running very large databases. If you look around at other very large databases, no one does it the way CCP does. That does tend to call their approach into question.
Sometimes it pays to think outside the box, though. Just because the "tried and true" method works doesn't mean you can't come up with a better way to do it...
Grats CCP on the record and all, but could someone, especially the OP, post a link to the source of this information? I hate it when people quote and don't say where it came from. It's an easy way to make fabricated information seem true...
Retired CEO, MGTTG
|

Matthew
Caldari BloodStar Technologies
|
Posted - 2006.09.12 17:51:00 -
[60]
Originally by: Larshus Magrus
Originally by: Gariuys
Yes, and obviously the guys at CCP never thought of this. And they have no reason not to do it, because they're huge noobs at running a DB.
Apparently, yes. I know your tone was sarcastic, but they do seem to be noobs at running very large databases. If you look around at other very large databases, no one does it the way CCP does. That does tend to call their approach into question.
But those databases aren't running at the heart of the TQ cluster, are they? Different uses call for different implementations.
Do you know the interface requirements of the rest of the cluster with the DB? Do you know the patterns of access, read/write ratios, the data structures used? Do you, in fact, know anything at all about the DB they are running that would allow you to make an even remotely informed evaluation of their implementation?
Unless you've somehow stolen Valar's DBA login, I seriously doubt it. ------- There is no magic Wand of Fixing, and it is not powered by forum whines. |