Pages: [1] :: one page |
|
Author |
Eleana Tomelac
Gallente Through the Looking Glass
|
Posted - 2008.02.14 21:13:00 -
[1]
Hi,
I found this company: http://www.gpucomputing.eu/ It sounds like they make some interesting stuff using GPUs instead of CPUs for mathematical calculations.
Wouldn't this be something that could help the servers with all the stuff they have to calculate? -- Pocket drone carriers (tm) enthusiast !
Assault Frigates MK II |
Dr Slaughter
Rabies Inc.
|
Posted - 2008.02.15 02:12:00 -
[2]
Nvidia do similar stuff, and there's even a Python port that runs on it. The problem is that the grid management code needs to be sequential (there's a big debate about ways round that), so it's hard to distribute over multiple GPUs.
Rewriting the grid management code in machine code so that it fits into the CPU's cache is about the only improvement available at the moment, but that's not really a sensible solution either.
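A toy sketch of the sequential problem described above (plain Python, all names and numbers made up): each simulation tick depends on the previous tick's output, so the ticks themselves can't run in parallel; only the work inside one tick can be spread over parallel hardware.

```python
def advance_tick(state):
    # hypothetical per-tick update: every ship drifts by its velocity
    return [(pos + vel, vel) for pos, vel in state]

state = [(0.0, 1.0), (10.0, -2.0)]  # (position, velocity) per ship
for _ in range(3):
    state = advance_tick(state)     # tick N needs tick N-1's output
print(state)  # [(3.0, 1.0), (4.0, -2.0)]
```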
I would link to relevant information in other threads but can't be bothered at the moment. Click my picture and see some of the posts if you like. CCP this is not the nerf you are looking for...
Eleana Tomelac
Gallente Through the Looking Glass
|
Posted - 2008.02.15 10:59:00 -
[3]
I know some things are a pain to fit onto multiple CPUs. I've programmed a bit in MPI, and I could see it's not easy at all once you get beyond the course examples that fit MPI perfectly but match no real case.
What I was thinking was more that a single GPU can be better than a CPU if properly used. Then, if you have multiple GPUs, it would be several grids or several systems, depending on how you cut down the calculations. So it may help, or not. And that might make for somewhat cheaper servers?
My main question is whether the devs have thought about using such a concept for the servers: did you study the opportunities of such systems? |
Hugh Ruka
Caldari Exploratio et Industria Morispatia
|
Posted - 2008.02.15 11:44:00 -
[4]
wikipedia this:
HavokFX, CUDA, GPGPU, Brook, AMD FireStream, Folding@Home, CTM ... just to get you started.
There are certain limitations, but most of the above is used to make general computation tasks (or a limited set of them) run on GPUs.
What I fail to understand is why CCP does NOT use some physics processing accelerators for collision detection, LOS calculations and such. These could massively speed up the server-side world processing and help with computation-induced lag. Maybe that is not the actual bottleneck, but it would free up the server CPUs for pure network I/O and hence speed things up.
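A hypothetical sketch of the kind of work meant here (plain Python, made-up numbers): a brute-force pairwise collision test. It is O(n^2), but every pair is independent of every other pair, which is exactly the shape of job a physics accelerator or GPU could batch.

```python
import math

def colliding_pairs(ships, radius=10.0):
    # brute-force O(n^2) sweep; every (i, j) pair is independent,
    # so all pairs could be tested at once on parallel hardware
    hits = []
    for i in range(len(ships)):
        for j in range(i + 1, len(ships)):
            (x1, y1), (x2, y2) = ships[i], ships[j]
            if math.hypot(x2 - x1, y2 - y1) < 2 * radius:
                hits.append((i, j))
    return hits

print(colliding_pairs([(0, 0), (5, 5), (100, 100)]))  # [(0, 1)]
```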
Originally by: Aravel Thon
Originally by: Nith Batoxxx Hi my alt just leanred to fly the ferox...............
I am so so terribly sorry...
|
brainball
GoonFleet GoonSwarm
|
Posted - 2008.02.15 13:38:00 -
[5]
Originally by: Hugh Ruka
What I fail to understand is why CCP does NOT use some physics processing accelerators for collision detection, LOS calculations and such. These could massively speed up the server-side world processing and help with computation-induced lag. Maybe that is not the actual bottleneck, but it would free up the server CPUs for pure network I/O and hence speed things up.
The main reason for not doing it client-side would be that it opens up a way for people to cheat. By modifying the calculations in some probably very ingenious way, it would be possible for people to do things they shouldn't be able to do, and there is nothing the GMs could do about it short of banning them.
|
Hugh Ruka
Caldari Exploratio et Industria Morispatia
|
Posted - 2008.02.15 13:56:00 -
[6]
Originally by: brainball
Originally by: Hugh Ruka
What I fail to understand is why CCP does NOT use some physics processing accelerators for collision detection, LOS calculations and such. These could massively speed up the server-side world processing and help with computation-induced lag. Maybe that is not the actual bottleneck, but it would free up the server CPUs for pure network I/O and hence speed things up.
The main reason for not doing it client-side would be that it opens up a way for people to cheat. By modifying the calculations in some probably very ingenious way, it would be possible for people to do things they shouldn't be able to do, and there is nothing the GMs could do about it short of banning them.
read the bold part
I never stated this should be done on the client, for 2 reasons:
1. folks would need to invest in a physics accel card
2. you cannot do vital processing client-side in an MMO; it would be exploited.
I know I did not state that clearly, but I am not so daft as to suggest an MMO client should do more than input/output :-)
|
Eleana Tomelac
Gallente Through the Looking Glass
|
Posted - 2008.02.15 15:46:00 -
[7]
Yes, never have anything affecting other players calculated on the client, or it all ends in cheating! (see Diablo II open mode vs Diablo II server mode: one was hosted on a player's machine, the other on the servers, and the player-hosted mode was all cheats!)
If something like adding powerful graphics cards to the servers and moving part of the calculations onto them could be done, it could be a huge boost to the computing power of a node.
But that's only IF part of the lag comes from mathematical calculations that take too long to run; other game logic might just not run faster on a GPU.
Anyway, we know there's a new cluster being planned, eh, come on, we want to hear about the new stuff you're preparing for us! |
Hugh Ruka
Caldari Exploratio et Industria Morispatia
|
Posted - 2008.02.19 09:48:00 -
[8]
Originally by: Eleana Tomelac Yes, never have anything affecting other players calculated on the client, or it all ends in cheating! (see Diablo II open mode vs Diablo II server mode: one was hosted on a player's machine, the other on the servers, and the player-hosted mode was all cheats!)
If something like adding powerful graphics cards to the servers and moving part of the calculations onto them could be done, it could be a huge boost to the computing power of a node.
But that's only IF part of the lag comes from mathematical calculations that take too long to run; other game logic might just not run faster on a GPU.
Anyway, we know there's a new cluster being planned, eh, come on, we want to hear about the new stuff you're preparing for us!
IMO the nodes are more I/O and memory limited than CPU limited. But physics accel could help achieve true LOS and collisions.
|
Matthew
Caldari BloodStar Technologies
|
Posted - 2008.02.19 12:32:00 -
[9]
Originally by: Eleana Tomelac The thing I was thinking was more that just one GPU can be better than a CPU if properly used.
Well, the thing is that GPUs are powerful for some applications because a single GPU houses a large degree of parallelisation in the first place, much more than a CPU can offer. The tasks really seeing the benefits of GPU acceleration are those that can be broken down into lots of parallel, low-complexity operations.
Maintaining causality (i.e. events happening in the right order) in the game world puts a significant limitation on how parallel the server task can be made, which would be the main obstacle to exploiting the strengths of GPUs. ------- There is no magic Wand of Fixing, and it is not powered by forum whines. |
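A plain-Python sketch of that contrast (all numbers made up). Distances from one ship to all others are independent, ideal for a GPU's many simple cores; applying damage is order-dependent, because whether the second hit matters depends on whether the first one already destroyed the target.

```python
import math

# Independent work: distance from one ship to all others -- each
# element can be computed in parallel, no ordering constraint.
origin = (0.0, 0.0)
others = [(3.0, 4.0), (6.0, 8.0), (5.0, 12.0)]
dists = [math.hypot(x - origin[0], y - origin[1]) for x, y in others]
print(dists)  # [5.0, 10.0, 13.0]

# Order-dependent work: hits must be applied in sequence, since a
# destroyed ship can't take (or deal) further damage.
hull = 100
for damage in (60, 60):
    if hull > 0:
        hull -= damage
hull = max(hull, 0)
print(hull)  # 0
```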
Eleana Tomelac
Gallente Through the Looking Glass
|
Posted - 2008.02.20 15:25:00 -
[10]
Originally by: Matthew
Originally by: Eleana Tomelac The thing I was thinking was more that just one GPU can be better than a CPU if properly used.
Well, the thing is that GPUs are powerful for some applications because a single GPU houses a large degree of parallelisation in the first place, much more than a CPU can offer. The tasks really seeing the benefits of GPU acceleration are those that can be broken down into lots of parallel, low-complexity operations.
There are things that should all happen simultaneously: distance calculations, accelerations for everyone; most of the physics engine should work in a parallel way. I agree that firing guns can't be parallelized easily, as you don't want dead ships to shoot! But there are methods to distribute the calculations properly over parallel units and gather the data afterwards that should never end up with a worse calculation time than a single calculation unit. But maybe that's too much reworking of the code and would freeze much of the stuff CCP would like to do (and we want to see!).
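A minimal scatter/gather sketch of that distribute-then-gather idea, using plain Python threads as stand-ins for parallel units (all names and numbers made up): split the independent physics work into chunks, compute them in parallel, then gather the results back before the sequential game logic runs.

```python
from concurrent.futures import ThreadPoolExecutor

def integrate(chunk):
    # per-ship (position, velocity) update; chunks are independent
    return [(pos + vel, vel) for pos, vel in chunk]

ships = [(0.0, 1.0), (10.0, -2.0), (5.0, 0.5), (-3.0, 2.0)]
chunks = [ships[:2], ships[2:]]                # scatter
with ThreadPoolExecutor(max_workers=2) as pool:
    parts = list(pool.map(integrate, chunks))  # compute in parallel
ships = [s for part in parts for s in part]    # gather, order preserved
print(ships)  # [(1.0, 1.0), (8.0, -2.0), (5.5, 0.5), (-1.0, 2.0)]
```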
Originally by: Matthew Maintaining causality (i.e. events happening in the right order) in the game world puts a significant limitation on how parallel the server task can be made, which would be the main obstacle to exploiting the strengths of GPUs.
Sure, you can't do everything in parallel, but evaluating which part of the calculations can be parallelized is the first thing to do. That's something we just can't do without knowledge of the server-side code; it has to be left to CCP.
My point in posting this was just to ask CCP: 'have you looked into this kind of stuff?'
|
|
Arcayan
|
Posted - 2008.02.21 09:36:00 -
[11]
Originally by: Eleana Tomelac If something like adding powerful graphic cards on the servers and moving a part of the calculations on them could be done...
I don't do graphics programming so I am very likely to be wrong here, but I didn't think graphics cards could return the results of their computations? Or is that something I read in a shaders article?
|
Hugh Ruka
Caldari Exploratio et Industria Morispatia
|
Posted - 2008.02.22 14:24:00 -
[12]
Originally by: Arcayan
Originally by: Eleana Tomelac If something like adding powerful graphic cards on the servers and moving a part of the calculations on them could be done...
I don't do graphics programming so I am very likely to be wrong here, but I didn't think graphics cards could return the results of their computations? Or is that something I read in a shaders article?
Basically you render a texture in 32-bit mode and interpret the results :-)
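A plain-Python mock of that pre-CUDA trick (everything here is illustrative, not a real graphics API): pack your numbers into a "texture", run the same tiny "pixel shader" on every texel, then read the "framebuffer" back as the result array.

```python
def pixel_shader(texel):
    # hypothetical per-texel program: square the input value
    return texel * texel

input_texture = [1.0, 2.0, 3.0, 4.0]  # data disguised as pixels
framebuffer = [pixel_shader(t) for t in input_texture]  # one "draw call"
print(framebuffer)  # [1.0, 4.0, 9.0, 16.0]
```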
|
Arcayan
|
Posted - 2008.02.23 02:52:00 -
[13]
Originally by: Hugh Ruka Basically you render a texture in 32-bit mode and interpret the results :-)
That's just cheating. lol.
I imagine you'd have to group similar calculations to really see an improvement in processing throughput?
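A small sketch of that grouping idea (plain Python, hypothetical operation names): sort the pending work by operation type so each batch runs one uniform calculation over many items, which is the shape GPUs like, one program over thousands of data points.

```python
# pending work items tagged by operation type (made-up ops)
pending = [("square", 3.0), ("double", 2.0), ("square", 4.0), ("double", 5.0)]

# group by operation so each batch is one uniform pass
batches = {}
for op, value in pending:
    batches.setdefault(op, []).append(value)

ops = {"square": lambda v: v * v, "double": lambda v: 2 * v}
results = {op: [ops[op](v) for v in vals] for op, vals in batches.items()}
print(results)  # {'square': [9.0, 16.0], 'double': [4.0, 10.0]}
```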
|
Lenus Daragio
Foundry.
|
Posted - 2008.02.24 08:36:00 -
[14]
Edited by: Lenus Daragio on 24/02/2008 08:38:00 GPUs are better at certain kinds of math, such as vector calculations and other heavy number-crunching, whereas CPUs are designed to be programmed flexibly and to do small, simple operations very quickly. Trying to do that kind of math on a CPU explodes the one problem into tons of separate jobs, while a GPU may handle it in relatively few.
Servers do better with CPUs because their jobs are tied to the programs they run, and especially to databases: usually many small pieces of information that combine into one result, whereas graphics are many large chunks of data that combine to make... multiple large things that look cool.
Suffice to say, the jobs CPUs and GPUs are designed for are entirely different, and using different processors for different tasks isn't unheard of. This is why servers usually use specialized server processors, such as Xeon and Opteron (as well as for their ability to not break like a piece of crap).
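A trivial plain-Python illustration of the "exploded into many jobs" point (made-up numbers): a whole-vector operation that wide hardware could treat as one job becomes one small scalar job per element on a CPU.

```python
a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]

# CPU view: one small add per element, issued one after another;
# wide hardware could treat the whole vector as a single job
result = []
for x, y in zip(a, b):
    result.append(x + y)
print(result)  # [11.0, 22.0, 33.0, 44.0]
```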
|