
Benilopax
Gallente Pulsar Combat Supplies
|
Posted - 2007.11.18 18:20:00 -
[1]
So CCP told us that the server will soon(TM) become a supercomputer with new supercomputer ways of sorting through all the data. (I was drifting in and out during the technical stuff.) All I could think is that at last one of those things will be used for something other than playing chess. (Or real-life DEFCON, if you believe the films.)
Could anyone who knows a bit about supercomputers explain in simple terms how it would reduce lag?
I also read about the latest "teleportation" of quantum states of atoms, which will lead to quantum computers.
Does that mean in 10 years' time we will be upgrading to one of those? Also, if you read the wiki article on EVE, the description of the races explains the use of quantum computers; life imitating art somewhat?
|

William Darkk
Gallente Vengeance of the Fallen Knights Of the Southerncross
|
Posted - 2007.11.18 18:39:00 -
[2]
Basically the upgrade they're doing means that processors that would currently be devoted to totally empty systems can now help out whatever poor CPU is handling Jita. -------------------------------------------------- <3 my Drones |

Peter VonThal
Raygun Technologies
|
Posted - 2007.11.18 18:51:00 -
[3]
Well, it's hard to know exactly what they are doing, because as they said, they are pretty much working to their own set of standards to make everything work for a real-time game instead of a simulation over a set time period. But from what I gathered, instead of having nodes that are physically limited to doing different functions for a certain number of star systems, the next-gen EVE cluster will have the ability not only to dynamically change the amount of hardware dedicated to star systems, but also to assign different processes in a star system to their own hardware.
So let's say a system is really getting bogged down with activity. Right now that solar system could be limited to sharing resources on a blade server with 10 other star systems. The supercomputer, on the other hand, could assign more hardware to the stressed star system alone. Then it could assign a CPU to just the market in that star system, and another CPU to just the combat calculations, and so on until the load is being handled properly.
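If that's roughly right, the scheduling idea is easy to picture in code. A minimal sketch in Python (purely illustrative; the names and the load threshold are mine, not CCP's): an overloaded star system gets spare workers dedicated to its individual services.

```python
# Hypothetical sketch of dynamic resource assignment, not CCP's actual code.
# A stressed star system is split so each heavy service (market, combat, ...)
# gets a dedicated worker instead of sharing one blade with 10 other systems.
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    assigned: list

def rebalance(system_load, workers, threshold=0.8):
    """Give each service of an overloaded system its own worker."""
    free = [w for w in workers if not w.assigned]
    for system, (load, services) in system_load.items():
        if load > threshold:
            for service in services:
                if not free:
                    break  # out of spare hardware; load stays shared
                worker = free.pop()
                worker.assigned.append((system, service))
                print(f"{worker.name} now dedicated to {system}/{service}")

workers = [Worker(f"cpu{i}", []) for i in range(4)]
# Jita at 95% load with three separable services; Hek is fine at 30%.
load = {"Jita": (0.95, ["market", "combat", "chat"]), "Hek": (0.30, ["all"])}
rebalance(load, workers)
```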
|

Asestorian
Domination. Cult of War
|
Posted - 2007.11.18 18:53:00 -
[4]
As I understand it, the technology CCP wants to use basically means that any resource in any part of their server can be dynamically assigned somewhere else that needs it, theoretically meaning fleet battles of 1000+ people being lag free, at least in terms of server performance. There are limits, of course. The one system with the 1000-man fleet battle isn't the only thing going on, so CCP would have to limit the amount of resources that any one system can take. Eventually, unless they have unlimited server power available, you would reach the lag cap for the system again and probably start suffering the same problems we get now. Of course, depending on how it works out, that could mean needing several thousand people in one system, but we don't know, I guess. I'm just rambling.
As for quantum computing, I doubt CCP will ever be using that for EVE. By the time it is available for commercial use at a sensible price, EVE will probably be gone 
---
MOZO
|

Irongut
M'8'S Frontal Impact
|
Posted - 2007.11.18 18:54:00 -
[5]
Quantum computers are being worked on, but they're far-future tech if they ever work at all. My physics teacher used to tell me (with an evil grin on his face) that optical computers would obsolete my knowledge of electronics within 10 years, and that was 18 years ago. They have never appeared, and I doubt quantum computers usable by you and me will appear in the next 20 years.
There's nothing special about supercomputers per se; you can make one yourself with a network of PCs (easier with Linux, but it can be done with Windows), and I think the EVE cluster already qualifies for the title. They are used for a lot of things other than playing chess: calculating prime numbers, cracking encryption schemes, calculating missile trajectories, working out chemical reactions, protein folding, modelling climate change, extraterrestrial signal processing, looking for oil, economic modelling, etc. Basically anything that involves a lot of number crunching and would take a normal PC years to achieve.
There is some tech used in high performance computing (as the field is called) that could help reduce lag. High-speed interconnects between CPUs and nodes will speed up the passage of data. Parallel processing techniques will help split systems across multiple nodes with dynamic load balancing. I don't know much about this myself, but a search on HP, IBM and Sun's websites should turn up some details.
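As a trivial illustration of that kind of number crunching farmed out over many CPUs, a throwaway Python sketch (the workload is arbitrary; the hard part for a game is the dynamic splitting, not this):

```python
# Minimal illustration of farming number-crunching out to several CPUs,
# the bread and butter of high performance computing.
from multiprocessing import Pool

def crunch(n):
    """Stand-in for real work: count primes below n by trial division."""
    return sum(all(i % d for d in range(2, int(i ** 0.5) + 1)) for i in range(2, n))

if __name__ == "__main__":
    with Pool() as pool:                      # one worker per CPU core
        results = pool.map(crunch, [10_000] * 8)
    print(results)
```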
Join M8S Racing Team sponsored by Frontier Technologies!
|

Yosarian
Hand Of The Tahiri Namtz'aar k'in
|
Posted - 2007.11.18 19:26:00 -
[6]
Regular clusters scale fine for non-realtime applications, but the problem is, the more machines you add, the more the network traffic between the machines becomes a bottleneck. For realtime applications that's a problem: your calculations are held up by the network.
Additionally, you are limited in how much load you can put on an individual server (hardware limits). This means that you can only do a certain amount of processing on a single machine. The problem comes when you need to do a massive series of calculations (e.g. a 500-person fleet battle). One machine can't handle it, but if you spread it over multiple machines, the network between them can't handle it. Hence lag in fleet battles, Jita, etc.
Supercomputers solve this by reducing the network latency between CPUs to almost nothing. The result is that you can keep adding processing power to any task without running into the limit I described above. It's like having a single machine with as many CPUs as you like. This has the potential to make a HUGE difference to lag.
The work needed is to make the computer realtime-oriented. Most supercomputers handle batched calculations, very big maths problems basically. This one will need to output data on time (i.e. no lag) for 30K+ players. It's possible, but the software will have to be tweaked.
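To put rough numbers on that trade-off, here is a toy model (all figures invented) of a fixed workload split over n nodes, where every node exchanges state with every other node each tick. Shrinking the per-message latency is exactly what moves the optimum node count up:

```python
# Toy scaling model (all numbers invented): total time for a fixed workload
# split over n nodes, where every node exchanges state with the other nodes
# each tick. Lower per-message latency raises the useful node count.

def tick_time(n_nodes, work_ms=100.0, msg_latency_ms=1.0):
    compute = work_ms / n_nodes               # perfectly parallel share
    network = msg_latency_ms * (n_nodes - 1)  # messages to the other nodes
    return compute + network

for latency in (1.0, 0.01):  # commodity Ethernet vs supercomputer interconnect
    best = min(range(1, 201), key=lambda n: tick_time(n, msg_latency_ms=latency))
    print(f"latency {latency} ms: optimum at {best} nodes, "
          f"tick = {tick_time(best, msg_latency_ms=latency):.2f} ms")
```

With the invented numbers, the slow interconnect stops paying off at 10 nodes (19 ms per tick) while the fast one keeps scaling to 100 nodes (under 2 ms per tick), which is the effect Yosarian describes.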
|

Firkragg
Blue Labs Knights Of the Southerncross
|
Posted - 2007.11.18 19:36:00 -
[7]
Yeah, as people have said above, the server is finally going to gain the ability to assign more power to where it's needed. Atm all they can do is put a system on its own server.
People who claim the servers can't cope with the number of people playing always look stupid, since they usually have no idea how computers work. Atm, if I remember correctly, the servers are only at about 40% utilisation, as not all systems are busy all the time.
|

Tycoon inc
|
Posted - 2007.11.18 20:04:00 -
[8]
A supercomputer is a network of computers that has a process strung out over the network and compiled back into the mainframe for display (I think I got that one right).
If CCP spent the money to run optical connections throughout their network (which isn't cheap), it would help information flow through to the mainframe. Then again, even if everything had an optical connection, the 1000+ fleet battles would only be somewhat easier to handle, since the computer still has to process the information at the average speed it receives it from around the world.
|

Taloic
Caldari Black Watch Regiment New Eden Conglomerate
|
Posted - 2007.11.18 20:35:00 -
[9]
So, any ideas as to when CCP activates Skynet?
I'd sort of like to be prepared, you know, the basics: MREs, extra ammo, a nice safe place to hide from the machine uprising. 
|

Plutoinum
German Cyberdome Corp Cult of War
|
Posted - 2007.11.18 20:41:00 -
[10]
Quantum computation already works. But I only remember the state of about 3 years ago, when I participated in seminars about it for 'fun'.
At that time it was at the beginning; simple algorithms had been run on quantum computers with maybe 20 qubits. It was, and probably still is, at an experimental stage, so a quantum computer didn't look like something you can buy anytime soon; it looked like a physics experiment in a laboratory with a huge pile of equipment.
Besides that, doing calculations on a quantum computer is quite different from what we have now. The algorithms are completely different, so forget the old tools we have and the old thinking about how a program works and looks.
Actually, the topic of my presentation in those days was quantum complexity and quantum Turing machines, and it took me about 3 weeks of work until I was into the math and had adopted a way of thinking that let me understand what's going on. After the first look I thought: 'Why the f*** did I sign up for this? I'll never get this.' It was the biggest challenge of my life. 
I'll take a look again later, but I still doubt that we will see PCs based on quantum computing anytime soon. Maybe in 15 or 20 years they'll be common. Don't know.
|

SiJira
|
Posted - 2007.11.18 20:42:00 -
[11]
Originally by: Taloic So, any ideas as to when CCP activates Skynet?
I'd sort of like to be prepared, you know, the basics: MREs, extra ammo, a nice safe place to hide from the machine uprising. 
Nintendo is more likely to achieve Skynet caliber, with the satellites and everything. ____ __ ________ _sig below_ Devs and GMs can't modify my sig if they tried! _lies above_ CCP Morpheus was here  Morpheus Fails. You need colors!! -Kaemonn |

Pitt Bull
Caldari
|
Posted - 2007.11.18 20:58:00 -
[12]
Nothing a Cray X1E can't handle!
|

Hireshi
|
Posted - 2007.11.18 22:09:00 -
[13]
Originally by: Pitt Bull Nothing a Cray X1E can't handle!
Crays use vector processing units.
On my incredibly limited understanding, you give the processor a list of operations to perform (say 20, 30, etc.) and an initial value, and the processor goes round each operation, applying it to the result from the preceding one.
It's difficult to see how this could be applied to a real-time system like this.
I think they are not talking about supercomputers in the traditional sense, but more a giant cluster of microprocessors with high-speed interconnects (like an IBM SP2 or whatever they call it now).
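Very roughly, that pattern looks like this in a plain-Python caricature (not how a Cray actually schedules anything; the stages are made up):

```python
# Caricature of a vector pipeline: a fixed list of operations is applied
# in order, each one elementwise over the whole vector, each stage
# consuming the result of the preceding one.
from functools import reduce

stages = [
    lambda v: [x * 2.0 for x in v],   # scale
    lambda v: [x + 1.0 for x in v],   # offset
    lambda v: [x * x for x in v],     # square
]

def run_pipeline(vector, ops):
    return reduce(lambda acc, op: op(acc), ops, vector)

print(run_pipeline([1.0, 2.0, 3.0], stages))  # [9.0, 25.0, 49.0]
```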
|

Semkhet
KR0M The Red Skull
|
Posted - 2007.11.18 22:29:00 -
[14]
Nowadays PCs have decent idle computing resources, especially since more and more visual apps finally make use of GPUs instead of clogging the main CPU.
Maybe distributed computing could be the way to go. In EVE it would basically mean adaptive free CPU capacity: more people connected = more CPU available... Did you ever check the number of folks docked or at POSes who aren't doing much?
|

Tarminic
Forsaken Resistance
|
Posted - 2007.11.18 22:31:00 -
[15]
Originally by: Semkhet Nowadays PCs have decent idle computing resources, especially since more and more visual apps finally make use of GPUs instead of clogging the main CPU.
Maybe distributed computing could be the way to go. In EVE it would basically mean adaptive free CPU capacity: more people connected = more CPU available... Did you ever check the number of folks docked or at POSes who aren't doing much?
An interesting idea, but unfortunately players' computers can't be trusted to perform any kind of actual logic, since hacking is possible. People could tamper with the client to have it send bad calculations/information back to the server, even if they have no idea who it's hurting. ---------------- Tarminic - 29 Million SP in pink Forum Warfare |

Tortun Nahme
Minmatar Heimatar Services Conglomerate
|
Posted - 2007.11.18 22:32:00 -
[16]
you mean the hacking skill would become useful? 
Originally by: Cecil Montague They should change that warning on entering low sec to:
"Go read Crime and Punishment for a few days then come back."
|

Semkhet
KR0M The Red Skull
|
Posted - 2007.11.18 22:48:00 -
[17]
Originally by: Tarminic
Originally by: Semkhet Nowadays PCs have decent idle computing resources, especially since more and more visual apps finally make use of GPUs instead of clogging the main CPU.
Maybe distributed computing could be the way to go. In EVE it would basically mean adaptive free CPU capacity: more people connected = more CPU available... Did you ever check the number of folks docked or at POSes who aren't doing much?
An interesting idea, but unfortunately players' computers can't be trusted to perform any kind of actual logic, since hacking is possible. People could tamper with the client to have it send bad calculations/information back to the server, even if they have no idea who it's hurting.
There are pretty robust ways to achieve coherence by including encryption and CRCs. I did that back in the day when designing distributed computing for a network of microcontrollers which had to work in a highly contaminated EM environment, so we had to implement a kind of "pertinence check" on the packets of data.
You just need to detect which packets are irregular and drop them. The protocol transparently handles any missing acknowledgment and creates a new instance of the failed task somewhere else. The system can also detect if a given node serves a statistically unexplainable amount of irregular data, which means that someone is tampering or the node is malfunctioning. Then you simply exclude said node from distributed processing for some time.
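A bare-bones sketch of such a scheme in Python (zlib's CRC-32 standing in for whatever check a real protocol would use; the failure threshold is invented):

```python
# Sketch of the validation scheme described above: drop packets whose CRC
# doesn't match, track each node's failure rate, and temporarily exclude
# nodes that return a statistically suspicious amount of bad data.
import zlib
from collections import defaultdict

FAILURE_THRESHOLD = 0.2   # invented cutoff
stats = defaultdict(lambda: {"ok": 0, "bad": 0})
excluded = set()

def receive(node_id, payload: bytes, claimed_crc: int):
    if node_id in excluded:
        return None
    if zlib.crc32(payload) != claimed_crc:
        stats[node_id]["bad"] += 1
        total = stats[node_id]["ok"] + stats[node_id]["bad"]
        if total >= 10 and stats[node_id]["bad"] / total > FAILURE_THRESHOLD:
            excluded.add(node_id)          # tampering or malfunction
        return None                        # drop; task gets reissued elsewhere
    stats[node_id]["ok"] += 1
    return payload

good = b"result: 8"
print(receive("node-a", good, zlib.crc32(good)))   # accepted
print(receive("node-b", good, 0xDEADBEEF))         # dropped
```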
|

Tarminic
Forsaken Resistance
|
Posted - 2007.11.18 22:53:00 -
[18]
Originally by: Semkhet There are pretty robust ways to achieve coherence by including encryption and CRCs. I did that back in the day when designing distributed computing for a network of microcontrollers which had to work in a highly contaminated EM environment, so we had to implement a kind of "pertinence check" on the packets of data.
You just need to detect which packets are irregular and drop them. The protocol transparently handles any missing acknowledgment and creates a new instance of the failed task somewhere else. The system can also detect if a given node serves a statistically unexplainable amount of irregular data, which means that someone is tampering or the node is malfunctioning. Then you simply exclude said node from distributed processing for some time.
While that idea is interesting, bad (but semantically valid) data from someone who modified the program would be extremely difficult to distinguish from valid data. For example:
Server sends: "I need the answer to 4+4." Hacked client responds: "7".
The server wouldn't be able to detect this (within reason) without doing the calculation itself, and if the server is already doing the calculations to ensure parity, why involve the clients at all? ---------------- Tarminic - 29 Million SP in pink Forum Warfare |

Semkhet
KR0M The Red Skull
|
Posted - 2007.11.18 23:17:00 -
[19]
Edited by: Semkhet on 18/11/2007 23:22:22
Originally by: Tarminic
Originally by: Semkhet There are pretty robust ways to achieve coherence by including encryption and CRCs. I did that back in the day when designing distributed computing for a network of microcontrollers which had to work in a highly contaminated EM environment, so we had to implement a kind of "pertinence check" on the packets of data.
You just need to detect which packets are irregular and drop them. The protocol transparently handles any missing acknowledgment and creates a new instance of the failed task somewhere else. The system can also detect if a given node serves a statistically unexplainable amount of irregular data, which means that someone is tampering or the node is malfunctioning. Then you simply exclude said node from distributed processing for some time.
While that idea is interesting, bad (but semantically valid) data from someone who modified the program would be extremely difficult to distinguish from valid data. For example:
Server sends: "I need the answer to 4+4." Hacked client responds: "7".
The server wouldn't be able to detect this (within reason) without doing the calculation itself, and if the server is already doing the calculations to ensure parity, why involve the clients at all?
Your objection is valid in principle. But believe me, these are problems that have long been solved through a variety of methods.
For example, the actual processing code (algorithm or whatever) is injected along with the data said code has to process (since the code is tightly dependent on the operation to execute), and you only allow a limited time for the whole process to take place before a valid reply must be back at the server.
Also, a scheme often implemented in electronics for very sensitive applications is the principle of redundant parallel operations: you let the same sequence of operations be executed by different processors and accept for further treatment only the most statistically occurring result. Typically you do that in electronics deployed in space.
Then, don't forget that these subroutines and datasets don't carry any meaning by themselves, since they are just fragments of a more complex calculation. Besides encryption, CRCs, timings, polymorphic code and a whole additional array of security measures, in the end only the server knows whether the received data makes sense in the particular context the calculation was executed in. And if there's one thing the client will never know, it's what will be done with the results of a given computation.
So it's quite difficult to achieve anything concrete by tampering with distributed computation, unless you completely reverse-engineer both the client and the server software and additionally fully understand the underlying data structures (which are often far more complex than the programs themselves).
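The redundant parallel part is simple to sketch, with the client behaviour simulated and Tarminic's hacked "7" client included as the outvoted minority:

```python
# Sketch of redundant parallel execution with majority voting: the same
# task is handed to several independent clients and only the most
# frequently occurring result is accepted for further treatment.
from collections import Counter

def majority_result(task, clients):
    """Run the same task on several independent clients, keep the majority answer."""
    answers = [client(task) for client in clients]
    value, votes = Counter(answers).most_common(1)[0]
    if votes <= len(answers) // 2:
        raise RuntimeError("no majority; reissue the task to other nodes")
    return value

honest = lambda t: t[0] + t[1]
tampered = lambda t: 7          # Tarminic's hacked client, always says 7

print(majority_result((4, 4), [honest, honest, tampered]))  # 8 wins 2-1
```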
|

Tarminic
Forsaken Resistance
|
Posted - 2007.11.18 23:27:00 -
[20]
Originally by: Semkhet Your objection is valid in principle. But believe me, these are problems that have long been solved through a variety of methods.
For example, the actual processing code (algorithm or whatever) is injected along with the data said code has to process (since the code is tightly dependent on the operation to execute), and you only allow a limited time for the whole process to take place before a valid reply must be back at the server.
Also, a scheme often implemented in electronics for very sensitive applications is the principle of redundant parallel operations: you let the same sequence of operations be executed by different processors and accept for further treatment only the most statistically occurring result. Typically you do that in electronics deployed in space.
Then, don't forget that these subroutines and datasets don't carry any meaning by themselves, since they are just fragments of a more complex calculation. Besides encryption, CRCs, timings, polymorphic code and a whole additional array of security measures, in the end only the server knows whether the received data makes sense in the particular context the calculation was executed in. And if there's one thing the client will never know, it's what will be done with the results of a given computation.
So it's quite difficult to achieve anything concrete by tampering with distributed computation, unless you completely reverse-engineer both the client and the server software and additionally fully understand the underlying data structures (which are often far more complex than the programs themselves).
That's actually very interesting and informative - I have to admit that my expertise is somewhat lacking in this area. Could you point me towards articles or references where I can read more about techniques used to solve these problems?
 ---------------- Tarminic - 29 Million SP in pink Forum Warfare |

Semkhet
KR0M The Red Skull
|
Posted - 2007.11.18 23:31:00 -
[21]
Originally by: Tarminic That's actually very interesting and informative - I have to admit that my expertise is somewhat lacking in this area. Could you point me towards articles or references where I can read more about techniques used to solve these problems?

Sure. It's late and I'm going to bed, but tomorrow I will send you an ingame mail with some links.
|

Krxon Blade
|
Posted - 2007.11.18 23:44:00 -
[22]
adding better hardware == more $$ down the drain == CCP adds more content == marketing lures more subscribers == heavier system load == more lag == adding better hardware == ... Moral? Players will come and go, but lag is persistent 
--
|

Yumis
|
Posted - 2007.11.18 23:59:00 -
[23]
Originally by: Benilopax All I could think is that at last one of those things will be used for something other than playing chess.
Actually, speaking of computer chess: the old computer chess champion Deep Junior is hosted on an Intel Caneland platform, with 4 sockets each holding 4 cores; that's 16 x 2.93GHz cores, or 46.88GHz of potential processing power. I know as I personally work with this system; that's pretty impressive for an "off the shelf" product.
|

Cyriel Longinus
XERCORE
|
Posted - 2007.11.19 05:49:00 -
[24]
Originally by: Semkhet
Originally by: Tarminic That's actually very interesting and informative - I have to admit that my expertise is somewhat lacking in this area. Could you point me towards articles or references where I can read more about techniques used to solve these problems?

Sure. It's late and I'm going to bed, but tomorrow I will send you an ingame mail with some links.
Actually I would like to be pointed to some articles and references as well. Thank you.
|

Adunh Slavy
Ammatar Trade Syndicate
|
Posted - 2007.11.19 08:16:00 -
[25]
Originally by: Asestorian ... meaning fleet battles of 1000+ people being lag free ...
Looking forward to being called primary by that, woo hoo! -AS
The Real Space Initiative (Forum Link) |

Semkhet
KR0M The Red Skull
|
Posted - 2007.11.19 08:29:00 -
[26]
@Cyriel Longinus, @Tarminic As you wish:
In most cluster topologies, load balancing methods act "a posteriori", meaning that load balancing occurs AFTER a load threshold has been reached. There is always a delay between the detection of a load condition and the moment the node receives additional resources. In highly variable load conditions, this can lead to "computational hysteresis": due to the latency of the load balancing mechanism versus the variable real-time load requirements, the node never gets the optimal amount of resources and oscillates between states where the resources are either insufficient (undersized) or wasted (oversized).
Implementing distributed computing (referred to as DC from now on) in EVE could in fact PREEMPT both network and CPU overload conditions, since each connected client could add two kinds of resources: computational power and network bandwidth.
Since each single EVE connection corresponds to a given character ingame, each DC instance would automatically be mapped to the solar system the character is interacting in. You get an intrinsic self-balancing CPU load mechanism following each character.
Since the server knows whether a given character needs high-priority or low-priority interaction (character in space or docked, for example), the amount and type of DC can be modulated from client to client.
DC increases raw CPU power but tends to increase network traffic as a side effect. Hence you must include a strategy where, from the perspective of the clients, snippets of data are sent both to other clients and to the server, while the server limits itself to validating the data in the clients once these snippets are promoted as valid.
It's a bit like comparing a classic download with the download strategy of a bittorrent system. In the former, the more people download, the slower the effective download speed becomes; in the latter, the more people download, the more snippets are made available and the faster the effective download speed becomes.
But to achieve flux efficiency in a secondary distributed network, your network needs what is called "a sense of direction". To make things clearer, let's take Jita. People all over the world interact in Jita, so you can't simply declare that all connections are suitable for DC, since a guy in Australia will suffer higher network latency than, say, a guy in London. So for each solar system you must categorize each separate connection into one of 3 subcategories (see the sketch after this list):
a) Clients where CPU & network conditions allow full DC.
b) Clients where CPU & network conditions only allow limited DC (non time-critical tasks).
c) Clients judged non-DC-able from an efficiency perspective.
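A sketch of that triage in Python (the latency and idle-CPU cutoffs are invented for illustration):

```python
# Sketch of sorting each connection in a solar system into the three DC
# tiers described above. Latency/CPU cutoffs are invented for illustration.
def classify(latency_ms, idle_cpu_fraction):
    if latency_ms < 80 and idle_cpu_fraction > 0.5:
        return "full DC"          # (a) time-critical tasks allowed
    if latency_ms < 250 and idle_cpu_fraction > 0.2:
        return "limited DC"       # (b) non time-critical tasks only
    return "no DC"                # (c) not worth using

for name, lat, idle in [("London", 40, 0.7), ("Sydney", 220, 0.8),
                        ("Reykjavik", 15, 0.1)]:
    print(name, "->", classify(lat, idle))
```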
Implementing DC could give CCP the unique opportunity to test all this stuff in real time under real conditions, since it could be implemented transparently in the production clients and used only for analysis, profiling and testing purposes until the system works reliably as intended, before going live.
Fault-Tolerant Simulation of Message-Passing Algorithms by Mobile Agents
Fractional Dynamic Faults
Computing on anonymous networks with sense of direction
Distributed Security Algorithms by Mobile Agents
Distributed Computing
Sanity Checking
List of distributed computing projects
BOINC
DCE
|

Grimpak
Gallente Trinity Nova KIA Alliance
|
Posted - 2007.11.19 08:37:00 -
[27]
People here are forgetting something.
If you upgrade the servers so that they can handle 1000-people fleet battles, soon we will have battles of 2000 people or more, making the lag monster appear again.
It's not an attempt to exploit. It's not being an ass.
It's just that people will pile in until lag makes them stop in their tracks. ---
planetary interaction idea! |

Semkhet
KR0M The Red Skull
|
Posted - 2007.11.19 08:48:00 -
[28]
Edited by: Semkhet on 19/11/2007 08:48:45
Originally by: Grimpak People here are forgetting something.
If you upgrade the servers so that they can handle 1000-people fleet battles, soon we will have battles of 2000 people or more, making the lag monster appear again.
It's not an attempt to exploit. It's not being an ass.
It's just that people will pile in until lag makes them stop in their tracks.
Correct, and that's why I believe the system should be designed so that each player compensates for the strain he adds to the central cluster by himself offering resources able to reroute part of the combined CPU, data & network load (which currently flows strictly bidirectionally between the server and the clients) into a topology where clients receive, process and emit data both from/to other clients and from/to the server.
IMHO it's the only way to preempt and cope with "bursts of strain" in the long term, regardless of specific conditions linked or not to the scalability of CCP's hardware.
|

cinderbrood
Caldari Dark Prophecy Inc. Knights Of the Southerncross
|
Posted - 2007.11.19 09:21:00 -
[29]
Well, even if the clients aren't used to handle CPU-related data, I see no reason a client couldn't pass on details it has already received via the server. It would still need checking to make sure it hadn't been modified, but it could save some bandwidth, especially if IP geolocation was factored in, although I assume that would screw those connecting via proxies.
See the last note of my last post for an example of what I'm on about.
|

Semkhet
KR0M The Red Skull
|
Posted - 2007.11.19 09:30:00 -
[30]
Edited by: Semkhet on 19/11/2007 09:33:50
Originally by: Yosarian I prefer the 'build a really big computer' approach.
Offloading tasks to clients is never going to be reliably fast enough for realtime (read: PvP) applications, due to the latency of the internet. And that's the area most in need of performance improvements, i.e. large battles. In addition, you have to build in a new layer of batch management: distribution and collection of data to and from the clients. This will also require encryption, adding yet more overhead. Bear in mind that many of these clients won't have many CPU cycles consistently available anyway, since they're already running the game client (plus who knows what else). It seems like a whole lot of work for not enough reward.
Bit torrent has its virtues, but latency is not one of them. Distributing an application is a whole different ball game from distributing a static file. Virtualization via a supercomputer seems a much safer bet for CCP.
Well, if you consider that even supercomputers internally follow the path of massively parallel architecture, it is not as unreliable as it seems. Let's just imagine:
20000 connected clients:
5000 clients unable to perform DC
10000 clients able to perform limited DC (non-critical tasks)
5000 clients able to perform full DC
Now that's still a significant, and FREE, CPU capability.
And in the exchange of non-critical data, just think about the following case: each time you enter a system, you get a bunch of data related to the local chat window. What if your client received all the related data from other connected clients, and the server limited itself to performing a simple CRC check to see whether you have received all of it?
Don't forget, for example, that in the case of jumping into a gate camp, it's not only the server which knows the nature and position of the ships, but the clients as well, since the server refreshes the data when opportune. What prevents other clients from collaborating in sending you most of the relevant data through multiple concurrent TCP/UDP connections, limiting the server to playing "orchestra director": just checking pertinence and validity, and injecting further data only when either not enough of the involved clients can serve it OR a new subset of data (new ship, etc.) has to be integrated?
You can find a multitude of cases where the data can be injected by other clients in parallel with the server.
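Following the bittorrent comparison, the "orchestra director" role could be as thin as publishing a digest per snippet and verifying what the peers delivered. A sketch (SHA-1 chosen arbitrarily; data invented):

```python
# Sketch of the "orchestra director" idea: peers serve the snippets, the
# server only publishes the expected digest of each piece and verifies
# what the client assembled, falling back to serving the piece itself.
import hashlib

def digest(piece: bytes) -> str:
    return hashlib.sha1(piece).hexdigest()

# Server side: splits the local-chat snapshot and publishes the hashes.
snapshot = [b"pilot list part 1", b"pilot list part 2"]
published = [digest(p) for p in snapshot]

# Client side: receives pieces from arbitrary peers, keeps only valid ones,
# and gets whatever the swarm failed to deliver from the server itself.
received = {0: b"pilot list part 1", 1: b"tampered data"}
assembled = []
for i, expected in enumerate(published):
    piece = received.get(i)
    if piece is not None and digest(piece) == expected:
        assembled.append(piece)          # peer-served data accepted
    else:
        assembled.append(snapshot[i])    # fall back to the server
print(b" ".join(assembled))
```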
|