|
Thread Statistics | Show CCP posts - 15 post(s) |
|
CCP Zymurgist
Gallente C C P
|
Posted - 2011.06.20 15:50:00 -
[1]
CCP Curt knows you love big technical blogs and brings us a great dev blog on CarbonIO and how we are improving EVE on the network level.
Zymurgist Community Representative CCP NA, EVE Online Contact Us |
|
Vincent Athena
|
Posted - 2011.06.20 16:02:00 -
[2]
Looking forward to the read. First
|
Lykouleon
Wildly Inappropriate Wildly Inappropriate.
|
Posted - 2011.06.20 16:22:00 -
[3]
CCP CURT, 010101110110100101101100011011000010000001111001011011110111010100100000011011010110000101110010011100100111100100100000011011010110010100111111
Seriously nice blog and some great info. And enough graphs to appease my graph-throne <3 Lykouleon > CYNO ME CLOSER SO THAT I CAN HIT THEM WITH MY SWORD |
Elaron
Jericho Fraction The Star Fraction
|
Posted - 2011.06.20 16:22:00 -
[4]
Awesome dev blog. I can't wait to see the results when the tech benefits of CarbonIO/BlueNet really start to be taken advantage of.
|
Ager Agemo
Caldari Care Factor
|
Posted - 2011.06.20 16:28:00 -
[5]
So basically EVE is finally getting truly multithreaded on the server side, sweet. Any chance the code will ever become fully multithreaded? And what about this being added on the client?
|
Kyoko Sakoda
Caldari Veto. Veto Corp
|
Posted - 2011.06.20 16:28:00 -
[6]
Confirming this went way over my head but was hella awesome!
___
Latest video: Future Proof (720p) 2D Animator |
Kandro Ashtear
|
Posted - 2011.06.20 16:31:00 -
[7]
I'm going to preface this post with a disclaimer. I've never used Python, never done any programming at the massive scale that you have, and many people here, especially CCP employees, are much smarter than I.
I have what some would consider a basic question... why Python? I've heard complaints about it left, right, and center, but people still use it.
Why not make all of the software in C++ or any of the other languages out there? What does Python offer that C++ doesn't, or even Visual C#?
Thought I'd start the flame war and my education.
|
Miss Modus
|
Posted - 2011.06.20 16:33:00 -
[8]
What was happening at the four peaks of the CPU% per user graph where the CarbonIO spiked much higher than the original StacklessIO?
|
Abramul
Gallente StarFleet Enterprises
|
Posted - 2011.06.20 16:35:00 -
[9]
And here we were, thinking Incarna would hurt performance. Nice work!
Should be interesting to see how much this affects large battles.
|
GateScout
|
Posted - 2011.06.20 16:38:00 -
[10]
Now that's just cool! Great dev blog!
Thanks for giving us a peek inside.
|
|
J Kunjeh
Gallente
|
Posted - 2011.06.20 16:41:00 -
[11]
Ooooh... two tech pron dev blogs in less than 7 days! I'll have to read this one again after my brain re-solidifies, but the first read-through was exciting. Can't wait to see what this does to TQ in the wild. ~Gnosis~ |
Ralitge boyter
Minmatar
|
Posted - 2011.06.20 16:42:00 -
[12]
Looking good, finally moving away from Python for code that does not "have" to be in Python. I have nothing against Python in and of itself, but for much of the work you guys have it do, it simply is not the best suited. Using C++ and optimizing the hell out of it will help a lot.
Now the only thing left is to see about getting Stackless to work efficiently on multiple cores. If you guys can manage that, EVE will finally be able to deal with a 10k fleet battle without flinching or having your poor hamsters go on strike while demanding Segways. ------------------------------------------- Should you disagree with me, well I guess that is because I disagree with you. If you have a problem with that please feel free not to tell me. |
Hexxx
Minmatar
|
Posted - 2011.06.20 16:44:00 -
[13]
Originally by: Kandro Ashtear I'm going to preface this post with a disclaimer. I've never used Python, never done any programming at the massive scale that you have, and many people here, especially CCP employees, are much smarter than I.
I have what some would consider a basic question... why Python? I've heard complaints about it left, right, and center, but people still use it.
Why not make all of the software in C++ or any of the other languages out there? What does Python offer that C++ doesn't, or even Visual C#?
Thought I'd start the flame war and my education.
As I understand it, it's a pretty simple and sensible reason... it's faster to develop in Python than it is in C++. This isn't a question of skillsets, but of dealing with the language itself. Python is a higher-level language, so many of the necessities of dealing with C++ don't exist in Python, making development a much easier thing.
C++ was also developed in the late 70s... there have been some advances in programming since then.
With that ease of development come some costs... these are likely the negatives you hear about. For CCP, and for others, the costs are worth the benefits.
|
Uncanny Valley
|
Posted - 2011.06.20 16:46:00 -
[14]
Is BlueNet going to be available for the Incarna release, or just CarbonIO? When are you going to "publish" BlueNet data?
Looking at the graphs of CarbonIO (red) versus StacklessIO (blue), there are some very significant spikes in the CarbonIO CPU usage (brief though they may be, they are far above the StacklessIO peaks). Have you investigated their causes? What effect on cluster performance would that cause (i.e. lagginess)?
|
Hylax Ciai
Cataclysm Enterprises Ev0ke
|
Posted - 2011.06.20 16:46:00 -
[15]
Originally by: Kandro Ashtear I'm going to preface this post with a disclaimer. I've never used Python, never done any programming at the massive scale that you have, and many people here, especially CCP employees, are much smarter than I.
I have what some would consider a basic question... why Python? I've heard complaints about it left, right, and center, but people still use it.
Why not make all of the software in C++ or any of the other languages out there? What does Python offer that C++ doesn't, or even Visual C#?
Thought I'd start the flame war and my education.
I think, back in the day when the development of EVE started, Python was chosen because it was easier to learn than C++. New employees would have fewer difficulties getting used to the code.
The reason they aren't just replacing the Python code with C++ code is that it would mean a tremendous amount of work to rewrite all those lines of code. Another point is that all the Python code works as it is right now. Rewriting it in C++ means that a lot of bugs would slip into the code. Basically, tested software is better than untested software, right?
So, they are only rewriting the Python code when they are required to change it anyway.
|
Taedrin
Gallente Zero Percent Tax Haven
|
Posted - 2011.06.20 16:57:00 -
[16]
Originally by: Hylax Ciai
Originally by: Kandro Ashtear I'm going to preface this post with a disclaimer. I've never used Python, never done any programming at the massive scale that you have, and many people here, especially CCP employees, are much smarter than I.
I have what some would consider a basic question... why Python? I've heard complaints about it left, right, and center, but people still use it.
Why not make all of the software in C++ or any of the other languages out there? What does Python offer that C++ doesn't, or even Visual C#?
Thought I'd start the flame war and my education.
I think, back in the day when the development of EVE started, Python was chosen because it was easier to learn than C++. New employees would have fewer difficulties getting used to the code.
The reason they aren't just replacing the Python code with C++ code is that it would mean a tremendous amount of work to rewrite all those lines of code. Another point is that all the Python code works as it is right now. Rewriting it in C++ means that a lot of bugs would slip into the code. Basically, tested software is better than untested software, right?
So, they are only rewriting the Python code when they are required to change it anyway, as detailed in the blog.
Most likely, Python wasn't chosen because it is easier to learn, but rather because it is easier and faster to write code in. A relatively simple task that requires only one line of code in Python may require hours of coding and debugging in C++.
The idea is that an hour of programming time is worth FAR more than a couple of CPU cycles. Higher-level programming languages are better business than lower-level programming languages. ----------
Originally by: Dr Fighter "how do you know when youve had a repro accident"
Theres modules missing and morphite in your mineral pile.
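Taedrin's "one line of Python" point can be made concrete with a toy example (mine, not from the dev blog): counting word frequencies, which a C++ version would need containers, iteration, and more setup to match.

```python
from collections import Counter

# Word-frequency counting: one expressive line in Python. A C++
# equivalent would need a std::map, explicit loops, and more boilerplate.
text = "pod pilot pod ship pilot pod"
freq = Counter(text.split())

print(freq.most_common(2))  # [('pod', 3), ('pilot', 2)]
```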
|
Ishina Fel
Caldari Terra Incognita Intrepid Crossing
|
Posted - 2011.06.20 16:59:00 -
[17]
Whoa, wait a moment! Where did that one come from? I heard no mention of this in any prior dev blogs, nor in the fanfest coverage. And yet, this is the kind of thing people will frantically gobble up, the kind of thing everyone wants to read.
When are which parts of this getting enabled? Is that an official feature of the Incarna expansion, or just something that happened to be ready at the same time? What are you hoping to do with this technology in the future?
- Signature? What signature? |
Trebor Daehdoow
|
Posted - 2011.06.20 17:07:00 -
[18]
Now this is my kind of tech-****! Nice!
|
Forest Hill
|
Posted - 2011.06.20 17:11:00 -
[19]
Awesome. I really like these techie blogs!
When will this be released to the rest of TQ?
|
BeanBagKing
Terra Incognita Intrepid Crossing
|
Posted - 2011.06.20 17:11:00 -
[20]
My nerd status has been humbled by this torrent of geek-speak :(
I love technical blogs though, keep them coming.
|
|
SirHarryPierce
|
Posted - 2011.06.20 17:15:00 -
[21]
Originally by: Kyoko Sakoda Confirming this went way over my head but was hella awesome!
This.
|
Peter Powers
FinFleet Raiden.
|
Posted - 2011.06.20 17:16:00 -
[22]
Awesome dev blog, always enjoy reading more about how you guys tackle such problems :)
Now, the test with the switched-out proxy.. was that done on TQ? Deblob! the Website with Statistics about the BFF vs. DRF+Friends. Conflict!
|
anvyl sky
|
Posted - 2011.06.20 17:16:00 -
[23]
Originally by: Ishina Fel Whoa, wait a moment! Where did that one come from? I heard no mention of this in any prior dev blogs, nor in the fanfest coverage. And yet, this is the kind of thing people will frantically gobble up, the kind of thing everyone wants to read.
When are which parts of this getting enabled? Is that an official feature of the Incarna expansion, or just something that happened to be ready at the same time? What are you hoping to do with this technology in the future?
qft
|
Randal Eirikr
|
Posted - 2011.06.20 17:17:00 -
[24]
I understood the colorful graph parts at least!
|
|
CCP Veritas
|
Posted - 2011.06.20 17:26:00 -
[25]
Originally by: Ager Agemo So basically EVE is finally getting truly multithreaded on the server side, sweet. Any chance the code will ever become fully multithreaded? And what about this being added on the client?
It's exceedingly unlikely that the server code will become fully multithreaded, as we do not have any intention of abandoning Python for game logic code and other high-level constructs. What BlueNet does allow us to do is take the lower-level packet-slinging systems and spread them wide - something Gridlock has on its plate to do in the not-too-distant future for flying in space, and something the Incarna guys have already been doing for the walking-around business.
CarbonIO has not been turned on for the client yet, but since the client doesn't sling packets for a living, that's not a big deal. It will need to be activated before BlueNet is leveraged at all, so you can expect it to happen before the multi-player Incarna release.
Originally by: Abramul Should be interesting to see how much this affects large battles.
It more-or-less won't until we over in Gridlock get around to leveraging it, as mentioned above.
Originally by: Taedrin The idea is that an hour of programming time is worth FAR more than a couple CPU cycles. Higher level programming languages are better business than lower level programming languages.
Quoted for truth.
Originally by: Forest Hill When will this be released to the rest of TQ?
CarbonIO has been live on TQ cluster-wide for a week.
Originally by: Peter Powers now, the test with the switched out proxy.. was that done on TQ?
Yes
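The division of labor Veritas describes, multithreaded packet-slinging feeding single-threaded game logic, is the classic producer/consumer pattern. A minimal Python sketch (illustrative only, with made-up names and payloads, not CCP's actual architecture):

```python
import queue
import threading

# Several network workers decode packets in parallel; one logic
# "thread" then consumes the results serially, so game state is only
# ever touched from a single place.
inbox = queue.Queue()  # thread-safe handoff point

def network_worker(packets):
    for raw in packets:
        inbox.put(raw.upper())  # stand-in for real packet decoding

workers = [
    threading.Thread(target=network_worker, args=(["move", "fire"],)),
    threading.Thread(target=network_worker, args=(["dock"],)),
]
for w in workers:
    w.start()
for w in workers:
    w.join()

# Single consumer: drain the queue in one place.
handled = sorted(inbox.get() for _ in range(inbox.qsize()))
print(handled)  # ['DOCK', 'FIRE', 'MOVE']
```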
|
|
Dalmont Delantee
Gallente Shiloh Technologies
|
Posted - 2011.06.20 17:27:00 -
[26]
Now this is the kind of Dev blog we like, not the ones saying you'll be selling ships or making people pay $99 :P
GRATZ :) Take comfort in knowing that its probably some pimply faced twit, or 40 year old virgin, who gleens everytime mommy offfers to take them to needle point lessons |
Antihrist Pripravnik
Scorpion Road Industry
|
Posted - 2011.06.20 17:38:00 -
[27]
Originally by: Dalmont Delantee Now this is the kind of Dev blog we like, not the ones saying you'll be selling ships or making people pay $99 :P
GRATZ :)
Players of many other games are happy to see new "content" that they would have to pay for, while EVE nerds, myself included, just love to see a technical wall of text like this one. It says something about the community, and about why we like spending time in a huge, confusing game with a steep learning curve --- The EVE 3rd-Party Shutdown Party |
Bagehi
Association of Commonwealth Enterprises
|
Posted - 2011.06.20 17:41:00 -
[28]
Graphs. So pretty. This leaves me wondering if someone, somewhere, in the near future is going to rewrite the GIL so that it can send processes to other cores/CPUs to be run, rather than keeping them all on the same core. But... I don't know much of anything about coding, so to whoever explains why this is a dumb idea: be nice, please.
This signature is useless, but it is red.
|
|
Chribba
Otherworld Enterprises Otherworld Empire
|
Posted - 2011.06.20 17:44:00 -
[29]
Coolies.
/c
Secure 3rd party service | in-game 'Holy Veldspar' Now /w voice |
|
Minsc
Gallente Alpha Empire
|
Posted - 2011.06.20 17:47:00 -
[30]
Originally by: CCP Veritas
Originally by: Ager Agemo So basically EVE is finally getting truly multithreaded on the server side, sweet. Any chance the code will ever become fully multithreaded? And what about this being added on the client?
It's exceedingly unlikely that the server code will become fully multithreaded, as we do not have any intention of abandoning Python for game logic code and other high-level constructs. What BlueNet does allow us to do is take the lower-level packet-slinging systems and spread them wide - something Gridlock has on its plate to do in the not-too-distant future for flying in space, and something the Incarna guys have already been doing for the walking-around business.
CarbonIO has not been turned on for the client yet, but since the client doesn't sling packets for a living, that's not a big deal. It will need to be activated before BlueNet is leveraged at all, so you can expect it to happen before the multi-player Incarna release.
Originally by: Abramul Should be interesting to see how much this affects large battles.
It more-or-less won't until we over in Gridlock get around to leveraging it, as mentioned above.
Originally by: Taedrin The idea is that an hour of programming time is worth FAR more than a couple CPU cycles. Higher level programming languages are better business than lower level programming languages.
Quoted for truth.
Originally by: Forest Hill When will this be released to the rest of TQ?
CarbonIO has been live on TQ cluster-wide for a week.
Originally by: Peter Powers now, the test with the switched out proxy.. was that done on TQ?
Yes
Wow, stealth server patch.
One question I have is whether the Python development that CCP has done is getting any recognition/use in the larger Python community. Are all of these advances only useful in EVE, or are they useful in other places too?
|
|
lisaaa
|
Posted - 2011.06.20 17:48:00 -
[31]
Originally by: CCP Veritas
Quote:
CarbonIO has not been turned on for the client as yet, but since the client doesn't sling packets for a living that's not a big deal. It will need to be activated before BlueNet is leveraged at all, so you can expect it to happen before the multi-player Incarna release.
Quote:
CarbonIO has been live on TQ cluster-wide for a week.
I don't get >.<
|
Flynn Fetladral
Caldari BlackSite Prophecy
|
Posted - 2011.06.20 17:52:00 -
[32]
Great blog! Will be nice to see multi threaded client coming to a computer near you sometime in the future.
Follow Flynn on Twitter |
Callic Veratar
|
Posted - 2011.06.20 17:53:00 -
[33]
Originally by: Bagehi Graphs. So pretty. This leaves me wondering if someone, somewhere, in the near future is going to rewrite the GIL so that it can send processes to other cores/CPUs to be run, rather than keeping them all on the same core. But... I don't know much of anything about coding, so to whoever explains why this is a dumb idea: be nice, please.
If I understood what was explained, the GIL is a transactional lock. Operations that require it must be done in the order they were received, to prevent memory trampling. Think of it like this: I buy a ship, then I sell a stack of minerals. My bank balance should be [balance] = [balance] - [ship] + [minerals]. But if you tried the transactions at the same time, you could end up with [balance] = [balance] - [ship] with your minerals missing, or [balance] = [balance] + [minerals] with a free ship.
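Callic's lost-update scenario can be simulated deterministically; a short Python sketch (the numbers are invented) showing why the two transactions must be serialized:

```python
# Two transactions both read the balance before either writes back,
# so the second write clobbers the first: the "free ship" problem.
SHIP, MINERALS = 300, 150

balance = 1000
read_a = balance              # buy-ship transaction reads 1000
read_b = balance              # sell-minerals transaction also reads 1000
balance = read_a - SHIP       # writes 700
balance = read_b + MINERALS   # overwrites with 1150: the ship charge is lost
lost_update = balance

# Serialized, as a transactional lock like the GIL enforces:
balance = 1000
balance = balance - SHIP
balance = balance + MINERALS
correct = balance

print(lost_update, correct)   # 1150 850
```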
|
|
Fien Silver
|
Posted - 2011.06.20 17:53:00 -
[34]
Originally by: Miss Modus What was happening at the four peaks of the CPU% per user graph where the CarbonIO spiked much higher than the original StacklessIO?
Egads, I was hoping that would slide by. I almost erased them, but thought it best not to fudge the data in any way.
These graphs are from the initial 24-hour deployment to a fully-loaded pair of proxy nodes on TQ. The "spikes" are the result of a spinlock bug I fixed the next day. Essentially, since the communications layer is now multithreaded, it's possible for one thread to call for the close of a socket at the same time another one is writing data to it. This condition is guarded against with a mutex, of course, but if it occurred at EXACTLY the same time (multi-core) there was a "hole" in the logic that spun a core until the socket was actually released. This had no effect on throughput, but it would tie down a core until the TCP/IP stack finally came back and said "yup, he's gone".
Spinlocks on multi-core systems are more efficient in cases of small hold times and infrequent contention, but in this case I needed to use a more conventional sleep/event type lock.
Now if you were really clever you'd ask about those little peaks that grow slowly over time toward the end of the run ;)
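The guarded-close idea above can be sketched with a sleep/event style lock; a toy Python version using a condition variable (illustrative only, with a made-up FakeSocket, not CCP's C++ code):

```python
import threading

# The closer must wait for an in-flight write to finish. Instead of
# spinning ("while self.writing: pass", which burns a core, as in the
# bug described), it sleeps on a Condition and is woken by the writer.
class FakeSocket:
    def __init__(self):
        self.cond = threading.Condition()
        self.writing = False
        self.closed = False

    def write(self, data):
        with self.cond:
            self.writing = True
        # ... actual socket I/O would happen here ...
        with self.cond:
            self.writing = False
            self.cond.notify_all()  # wake any thread waiting to close

    def close(self):
        with self.cond:
            while self.writing:     # sleep until signaled, don't spin
                self.cond.wait()
            self.closed = True

sock = FakeSocket()
writer = threading.Thread(target=sock.write, args=(b"payload",))
writer.start()
writer.join()
sock.close()
print(sock.closed)  # True
```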
Originally by: J Kunjeh Can't wait to see what this does to TQ in the wild.
Didn't I mention? You're soaking in it. It's been TQ-wide for the last week :) Was fun to watch our CPU usage drop off a cliff.
Originally by: Uncanny Valley Is BlueNet going to be available for the Incarna release, or just CarbonIO? When are you going to "publish" BlueNet data?
This was created to service the data requirements of Incarna, but in so doing, the door was opened for other systems to use it. CarbonIO was created so that something like BlueNet could function (i.e. send/receive packets off the GIL). BlueNet is now available for internal teams to start examining to see if they can leverage it.
Originally by: Ishina Fel Whoa, wait a moment! Where did that one come from? I heard no mention of this in any prior dev blogs, nor in the fanfest coverage. And yet, this is the kind of thing people will frantically gobble up, the kind of thing everyone wants to read.
When are which parts of this getting enabled? Is that an official feature of the Incarna expansion, or just something that happened to be ready at the same time? What are you hoping to do with this technology in the future?
Well.. the simple version is we didn't know if this would work at all, let alone gain any efficiency. CarbonIO was written to open up a data-path through MachoNet that was wide enough to allow Incarna data to flow without hurting cluster performance, and we can show that it does that. Other than that, we didn't really expect any efficiency gains. But apparently the ground-up rework with this new paradigm *did* result in some gains.. some big ones.. but we only discovered that once we ran it, and then only over the last week and a half or so.
Something this big and basic being rolled into TQ, we never thought it would work the first time, that's why we took it so slow to begin with. But.. it DID work the first time, and fast, surprising everyone (me most of all).
So to sum up, we didn't tell anyone because we didn't know if we would have anything to tell, and even if we did, we didn't know when it would be available.
Set the way-back machine to two weeks ago, with me huddled at my desk praying to the gods of no-crashy as this was rolled into Jita for the first time. Hoping that whatever went wrong, I'd be able to identify and fix it before Incarna had to ship.
Yeah, you didn't want to be me. "Reproduction steps: get 2000 users to perform trade actions randomly on Jita..."
But I'll be damned if it didn't work, and it lowered CPU by 10%.
|
|
|
CCP Curt
|
Posted - 2011.06.20 17:55:00 -
[35]
BAH. Defaulted to my throwaway alt. That last post was meant to be by CCP Curt.
|
|
lisaaa
|
Posted - 2011.06.20 18:00:00 -
[36]
What did Veritas mean by saying that CarbonIO isn't turned on for the client, but it has been live on TQ for a week now?
|
Callic Veratar
|
Posted - 2011.06.20 18:04:00 -
[37]
Originally by: lisaaa
What did Veritas mean by saying that CarbonIO isn't turned on for the client, but it has been live on TQ for a week now?
Exactly that, it's enabled on the server but not on the client yet. It doesn't need to be on both ends to work.
|
|
CCP Curt
|
Posted - 2011.06.20 18:07:00 -
[38]
Originally by: lisaaa
What did Veritas mean by saying that CarbonIO isn't turned on for the client, but it has been live on TQ for a week now?
He meant the TQ cluster, server-side. The client will see no benefit from this since it spends almost no time sending packets. We have not fully tested all the client systems with it yet, but will turn it on once we have. So far so good.
Originally by: Miss Modus What was happening at the four peaks of the CPU% per user graph where the CarbonIO spiked much higher than the original StacklessIO?
Egads, I was hoping that would slide by. I almost erased them, but thought it best not to fudge the data in any way.
These graphs are from the initial 24-hour deployment to a fully-loaded pair of proxy nodes on TQ. The "spikes" are the result of a spinlock bug I fixed the next day. Essentially, since the communications layer is now multithreaded, it's possible for one thread to call for the close of a socket at the same time another one is writing data to it. This condition is guarded against with a mutex, of course, but if it occurred at EXACTLY the same time (multi-core) there was a "hole" in the logic that spun a core until the socket was actually released. This had no effect on throughput, but it would tie down a core until the TCP/IP stack finally came back and said "yup, he's gone".
Spinlocks on multi-core systems are more efficient in cases of small hold times and infrequent contention, but in this case I needed to use a more conventional sleep/event type lock.
Now if you were really clever you'd ask about those little peaks that grow slowly over time toward the end of the run ;)
Originally by: Uncanny Valley Is BlueNet going to be available for the Incarna release, or just CarbonIO? When are you going to "publish" BlueNet data?
This was created to service the data requirements of Incarna, but in so doing, the door was opened for other systems to use it. CarbonIO was created so that something like BlueNet could function (i.e. send/receive packets off the GIL). BlueNet is now available for internal teams to start examining to see if they can leverage it.
Originally by: Ishina Fel Whoa, wait a moment! Where did that one come from? I heard no mention of this in any prior dev blogs, nor in the fanfest coverage. And yet, this is the kind of thing people will frantically gobble up, the kind of thing everyone wants to read.
When are which parts of this getting enabled? Is that an official feature of the Incarna expansion, or just something that happened to be ready at the same time? What are you hoping to do with this technology in the future?
Well.. the simple version is we didn't know if this would work at all, let alone gain any efficiency. CarbonIO was written to open up a data-path through MachoNet that was wide enough to allow Incarna data to flow without hurting cluster performance, and we can show that it does that. Other than that, we didn't really expect any efficiency gains. But apparently the ground-up rework with this new paradigm *did* result in some gains.. some big ones.. but we only discovered that once we ran it, and then only over the last week and a half or so.
Something this big and basic being rolled into TQ, we never thought it would work the first time, that's why we took it so slow to begin with. But.. it DID work the first time, and fast, surprising everyone (me most of all).
So to sum up, we didn't tell anyone because we didn't know if we would have anything to tell, and even if we did, we didn't know when it would be available.
Set the way-back machine to two weeks ago, with me huddled at my desk praying to the gods of no-crashy as this was rolled into Jita for the first time. Hoping that whatever went wrong, I'd be able to identify and fix it before Incarna had to ship.
Yeah, you didn't want to be me. "Reproduction steps: get 2000 users to perform trade actions randomly on Jita..."
But I'll be damned if it didn't work, and it lowered CPU by 10%.
|
|
tatsudoshi I
Gallente The Venus Project - Zeitgeist Movement
|
Posted - 2011.06.20 18:07:00 -
[39]
Originally by: CCP Curt Now if you were really clever you'd ask about those little peaks that grow slowly over time toward the end of the run ;)
I actually thought that was what Miss Modus was asking about.. So what are the ever-growing spikes, and when, or if, do they stop? ..................................................
May we all have the courage to believe. Long live Mifune! |
|
CCP Curt
|
Posted - 2011.06.20 18:14:00 -
[40]
Originally by: tatsudoshi I
Originally by: CCP Curt Now if you were really clever you'd ask about those little peaks that grow slowly over time toward the end of the run ;)
I actually thought that was what Miss Modus was asking about.. So what are the ever-growing spikes, and when, or if, do they stop?
They are stat crunching. The longer the server runs, the more data it sifts through when polled for information. Toward the end of the run it actually starts to be visible. CarbonIO manages idle time quite a bit differently than StacklessIO (not necessarily better, just differently), so while those spikes were previously lost in the noise, they became visible.
This is being addressed now, to deal with that data more efficiently.
|
|
|
Motriek
Navy of Xoc Wildly Inappropriate.
|
Posted - 2011.06.20 18:16:00 -
[41]
Please clarify what portions of the server stack this impacts and benefits. As I understand it, there are proxy servers, sol nodes/servers, and database servers. Does this primarily benefit the proxies or the sols?
|
|
CCP Curt
|
Posted - 2011.06.20 18:23:00 -
[42]
Originally by: Lykouleon CCP CURT, 010101110110100101101100011011000010000001111001011011110111010100100000011011010110000101110010011100100111100100100000011011010110010100111111
Seriously nice blog and some great info. And enough graphs to appease my graph-throne <3
$ cat nerd.cpp
#include <stdio.h>
//------------------------------------------------------------------------------
int main( int argn, char *argv[] )
{
    char in[] = "010101110110100101101100011011000010000001111001011011110111010"
                "100100000011011010110000101110010011100100111100100100000011011"
                "010110010100111111";
    int pos = 0;
    while( in[pos] )
    {
        char accumulator = 0;
        for( int i = 0; i < 4 && in[pos]; i++ )
        {
            accumulator += in[pos] == '1' ? 1 << i : 0;
            pos++;
        }
        printf( "[%d:%c]:", (int)accumulator, accumulator );
    }
    printf( "\n" );
    return 0;
}

$ gcc -o nerd nerd.cpp ; ./nerd
[10: ]:[14:]:[6:]:[9: ]:[6:]:[3:]:[6:]:[3:]:[4:]:[0:]:[14:]:[9: ]:[6:]:[15:]:[14:]:[10: ]:[4:]:[0:]:[6:]:[11: ]:[6:]:[8]:[14:]:[4:]:[14:]:[4:]:[14:]:[9: ]:[4:]:[0:]:[6:]:[11: ]:[6:]:[10: ]:[12: ]:[15:]:
I don't get it.
-Curt
|
|
Sakura Ren Fenikkusu
|
Posted - 2011.06.20 18:29:00 -
[43]
I definitely love to read dev blogs and articles like this.
I'm a hobbyist/indie developer, working on a few small games, and trying to learn everything I can, so that in the future I can make something as great as Eve Online.
It is one thing to read articles about how things could be done, and how they have been done in the past, another to see what live MMOs are doing to make their games better from a technical standpoint.
Keep up the good work.
EDIT: CCP Kurt broke the forums....
|
|
CCP Warlock
|
Posted - 2011.06.20 18:30:00 -
[44]
Originally by: CCP Curt
Originally by: Lykouleon CCP CURT,
//------------------------------------------------------------------------------
int main( int argn, char *argv[] )
{
    char in[] = "010101110110100101101100011011000010000001111001011011110111010"
                "100100000011011010110000101110010011100100111100100100000011011"
                "010110010100111111";
    int pos = 0;
    while( in[pos] )
    {
        char accumulator = 0;
        for( int i = 0; i < 4 && in[pos]; i++ )
        {
            accumulator += in[pos] == '1' ? 1 << i : 0;
            pos++;
        }
        printf( "[%d:%c]:", (int)accumulator, accumulator );
    }
    printf( "\n" );
    return 0;
}

$ gcc -o nerd nerd.cpp ; ./nerd
[10: ]:[14:]:[6:]:[9: ]:[6:]:[3:]:[6:]:[3:]:[4:]:[0:]:[14:]:[9: ]:[6:]:[15:]:[14:]:[10: ]:[4:]:[0:]:[6:]:[11: ]:[6:]:[8]:[14:]:[4:]:[14:]:[4:]:[14:]:[9: ]:[4:]:[0:]:[6:]:[11: ]:[6:]:[10: ]:[12: ]:[15:]:
I don't get it.
-Curt
It says "Will you marry me?" - I think you forgot that it's 8 bits per character...
|
|
Ambo
I've Got Nothing
|
Posted - 2011.06.20 18:30:00 -
[45]
Originally by: CCP Curt
I don't get it.
-Curt
http://www.roubaixinteractive.com/PlayGround/Binary_Conversion/Binary_To_Text.asp
Paste binary, click 'to text'
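For anyone who would rather skip the website, the same decoding is a couple of lines of Python (8 bits per character, most significant bit first):

```python
# Decode Lykouleon's binary message: split into 8-bit chunks and map
# each chunk to its ASCII character (most significant bit first).
bits = (
    "0101011101101001011011000110110000100000"
    "0111100101101111011101010010000001101101"
    "0110000101110010011100100111100100100000"
    "011011010110010100111111"
)
text = "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))
print(text)  # Will you marry me?
```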
Also, awesome dev blog and AWESOME work. --------------------------------------
|
tatsudoshi I
Gallente The Venus Project - Zeitgeist Movement
|
Posted - 2011.06.20 18:33:00 -
[46]
Originally by: CCP Curt
I don't get it.
-Curt
Drink more or less coffee, whichever is the opposite of what you are doing at this very moment! (Read with an over-excited speaker voice)
EDIT: Ambo, love your toon! ..................................................
May we all have the courage to believe. Long live Mifune! |
Bienator II
|
Posted - 2011.06.20 18:39:00 -
[47]
That's a lot of effort to avoid touching the GIL. (Python was probably a bad long-term choice for the server logic, but I guess you have good reasons why you can't migrate.)
Have you experimented with Jython (http://www.jython.org)? It is basically fully multithreaded Python on the JVM, but it was between 0.1x and 5x slower (single-threaded comparison) the last time I played with it. It is under active development and gets faster with every release.
So that might get interesting in the future if you want to run on 32+ cores, which isn't that uncommon for a Java server any more :)
|
|
CCP Curt
|
Posted - 2011.06.20 18:45:00 -
[48]
Originally by: CCP Warlock
It says "Will you marry me?" - I think you forgot that its 8 bits per character...
<mirage>[curt|/home/curt]$ cat nerd.cpp
#include <stdio.h>

//------------------------------------------------------------------------------
int main( int argn, char *argv[] )
{
    char in[] = "010101110110100101101100011011000010000001111001011011110111010"
                "100100000011011010110000101110010011100100111100100100000011011"
                "010110010100111111";
    int pos = 0;
    while( in[pos] ) {
        char accumulator = 0;
        for( int i = 0; i < 8 && in[pos]; i++ ) {
            accumulator += in[pos] == '1' ? 1 << (7 - i) : 0;
            pos++;
        }
        printf( "%c", accumulator );
    }
    printf( "\n" );
    return 0;
}

<mirage>[curt|/home/curt]$ gcc -o nerd nerd.cpp; ./nerd
Will you marry me?
Ah no, sorry married with 3 kids already.
Coffee? heck no I go right to the source- Diet Coke.
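[Editor's aside, not part of the original post: the same 8-bits-per-character decode that nerd.cpp performs can be written in a few lines of Python, using the bit string copied verbatim from the post above.]

```python
# Editor's sketch: decode the thread's binary string, 8 bits per character,
# most-significant bit first - the same logic as the corrected nerd.cpp.
bits = ("010101110110100101101100011011000010000001111001011011110111010"
        "100100000011011010110000101110010011100100111100100100000011011"
        "010110010100111111")
message = "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))
print(message)  # Will you marry me?
```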
|
|
Mike deVoid
Firebird Squadron Terra-Incognita
|
Posted - 2011.06.20 18:46:00 -
[49]
Always worth telling CCP employees and CCP teams when they do impressive work, to balance when I post in the rage threads.
GJ guys :). -----
Quote: The maximum acceptable limit of MT in EVE is vanity items + the ability to buy things already created legitimately by another player. PLEX can already be used to attain SP, ships an |
|
CCP Curt
|
Posted - 2011.06.20 18:47:00 -
[50]
Originally by: Bienator II That's a lot of effort to avoid touching the GIL. (Python was probably a bad long-term choice for the server logic, but I guess you have good reasons why you can't migrate.)
Have you experimented with Jython (http://www.jython.org)? It is basically fully multithreaded Python on the JVM, but was between 0.1x and 5x slower (single-threaded comparison) the last time I played with it. It is under active development and gets faster with every release.
So that might get interesting in the future if you want to run on 32+ cores, which isn't that uncommon for a Java server any more :)
I know that it gets examined from time to time along with other possible solutions, but that's way outside my paygrade. I just make the little box go *ping*
|
|
|
Korerin Mayul
Amarr
|
Posted - 2011.06.20 19:02:00 -
[51]
I like the smell of this. I hope this stuff gets some air-time with the 'high level languages in supercomputing' crowd.
|
Herr Nerdstrom
Caldari Havoc Violence and Chaos Merciless.
|
Posted - 2011.06.20 19:05:00 -
[52]
I have a couple questions:
- What is the percentage of operations that are affected by this offloading of GIL and non-GIL related code, and that therefore benefit from these changes? Optimizing code to reduce CPU usage of that code by 35% is great, but it's not that much of a benefit if that code only accounts for 3% of the load.
- It sounds like a lot of the efforts here are basically to reinvent nonblocking I/O, and perhaps splitting the network I/O (including compression and encryption) off to another thread, with the inclusion of completion ports. What am I missing?
- The final graph showed the difference between StacklessIO and CarbonIO CPU usage. However, the CPU being examined was never at 100% to begin with. Without testing these upgrades on a core that was maxed, i.e., during a fleet battle, how can we tell how much actual benefit will be produced? Providing stats of a core that was already functioning fine, and therefore without significant lag(TM), doesn't really tell us anything about the effect of the upgrades. If the changes will bring the CPU usage of a 1000 pilot fleet battle to something less than 100%, then I think CCP can say something about the effectiveness of the changes.
I still think this is CCP spending years trying to overcome the limitations of a 'convenient' language like Python. The better choice, in my opinion, still would have been to use a non-interpreted and fully-featured language like C or C++.
That being said, I appreciate the efforts of the various teams at CCP, and applaud their progress. This dev blog was also excellent...please keep them coming.
|
|
CCP Curt
|
Posted - 2011.06.20 19:27:00 -
[53]
Originally by: Herr Nerdstrom Edited by: Herr Nerdstrom on 20/06/2011 19:07:43 I have a couple questions:
- What is the percentage of operations that are affected by this offloading of GIL and non-GIL related code, and that therefore benefit from these changes? Optimizing code to reduce CPU usage of that code by 35% is great, but it's not that much of a benefit if that code only accounts for 3% of the load.
Sort of, but it can depend greatly on how you measure and define "load". I'm not going to play games, but it bears mentioning that if I define "load" as "time spent processing on a CPU" and that 3% is spent doing inefficient things that cause the other 97% to spin in circles, then clearly by eliminating it the overall efficiency gain can greatly surpass 3%.
I do think there was some of that going on: with the GIL being requested too often, overall system efficiency went down. CarbonIO requests it less.
But let's take that point at face value: clearly it is not the case, since the graphs show a very substantial gain, on the order of 50%, for proxies. So comm must have been taking up at least 50% of the load, yes? And most likely far more, since reducing the entire comm library to a single NOP would probably not work.
Quote:
- It sounds like a lot of the efforts here are basically to reinvent nonblocking I/O, and perhaps splitting the network I/O (including compression and encryption) off to another thread, with the inclusion of completion ports. What am I missing?
Not re-invent.. redeploy.. StacklessIO was using completion ports and nonblocking I/O, but it was tightly coupled to the GIL, i.e., there was built-in seriality. On a whiteboard or flowchart it would be a simple matter of erasing some lines and penciling in "a miracle occurs" and *poof*, decoupled, but in the real world it took a long development effort to remove.
The end effect is that not JUST communications occur off-GIL (as StacklessIO did) but that the code USING it can ALSO run off-GIL.
Quote:
- The final graph showed the difference between StacklessIO and CarbonIO CPU usage. However, the CPU being examined was never at 100% to begin with. Without testing these upgrades on a core that was maxed, i.e., during a fleet battle, how can we tell how much actual benefit will be produced?
You make a good point: if both graphs crossed 100% at 1000 users, then their shape would be irrelevant, as if that shape conferred no information. But they do. Load growth is fairly linear at every load we have tested live, and the laboratory tests (where we pushed big-iron blades to 100% routinely) predicted savings.
So far the lab data has been borne out on live loads, which can be taken as evidence that other lab predictions are likely to be true. We can't be sure until it's actually tried.. and we are standing by.. but a reasonable conclusion based on all available data is that there will be some savings on near-100% loads.
Which is irrelevant!
CarbonIO was never about reducing load directly, that's a side effect! BlueNet is about reducing serial load, when teams like Gridlock start using it to go-wide on the flying-in-space systems.
|
|
tatsudoshi I
Gallente The Venus Project - Zeitgeist Movement
|
Posted - 2011.06.20 19:35:00 -
[54]
Edited by: tatsudoshi I on 20/06/2011 19:35:37
Originally by: CCP Curt Coffee? heck no I go right to the source- Diet Coke.
I have heard from the health-heads that diet drinks, over the long term, store "something" in the bone marrow or something. Because there is no energy in the drink, the body does not "see" it as something that needs to be digested, so it does not know what to do and stores it.. or something. I have not researched this for health-fanaticism, so use as seen fit. Only FYI. I have switched to coffee as I can't drink liters of it as I could with soda. ..................................................
May we all have the courage to believe. Long live Mifune! |
pmchem
Minmatar GoonWaffe Goonswarm Federation
|
Posted - 2011.06.20 19:35:00 -
[55]
CarbonIO sounds great (nice work). If I may redirect a little bit, any chance you guys will be able to use an MPI implementation for Python (or... any underlying server code) anytime soon for taking _real_ advantage of having multicore CPUs?
This would be the ultimate solution for server-side scaling of fleet fights, as you're probably already aware.
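[Editor's aside: for readers unfamiliar with the idea above, process-based parallelism is the standard way to go wide across cores in Python without touching the GIL, since each worker is a separate interpreter with its own lock. This is a generic standard-library sketch, not CCP's code and not an MPI binding.]

```python
# Illustrative sketch only: each worker process runs its own CPython
# interpreter (and its own GIL), so CPU-bound work spreads across cores.
from multiprocessing import Pool

def square(n):
    return n * n

def parallel_squares(values, workers=2):
    # Distribute the computation over a small pool of worker processes.
    with Pool(processes=workers) as pool:
        return pool.map(square, values)

if __name__ == "__main__":
    print(parallel_squares(range(5)))  # [0, 1, 4, 9, 16]
```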
|
Chaotic Alacrity
|
Posted - 2011.06.20 19:50:00 -
[56]
Seriously, half the reason I play this game is so that I have context for the awesome devblogs. Great job.
Are the internal discussions regarding new optimizations made possible by CarbonIO centered on reorganizing the calls to existing modules or are people starting to look for ways that they could offload logic currently in python to external libraries (that can now run concurrently)?
|
Klandi
Science and Trade Institute
|
Posted - 2011.06.20 20:32:00 -
[57]
Great post - lots of detail and juicy bits
Looking forward to lots more - as much or as deep as you want to go
|
Sri Nova
|
Posted - 2011.06.20 20:38:00 -
[58]
I'm surprised the developers of Python have not come over there to give all of you a good beating for abusing their environment in ways that they could never have imagined.
It's amazing what CCP has pulled off with this engine, and it's fascinating reading about your accomplishments. I'm sure a lot of head-banging, hair-pulling, and sending off dear friends to the loony bin occurred during all this.
I bet all of you are elated when you make gains like this, and no words can adequately commend you guys for the hard work you put into this.
Awesome job guys, really. It's appreciated, especially when we are playing the game, enjoying it, and not realizing that it's your hard work enabling us to immerse ourselves in your universe so easily.
|
Glyken Touchon
Gallente Independent Alchemists
|
Posted - 2011.06.20 20:41:00 -
[59]
Great job!
Originally by: CCP Curt Egads I was hoping that would slide by. I almost erased them out but thought it best not to fudge the data in any way.
These graphs are from the initial 24-hour deployment to a fully-loaded pair of proxy nodes on TQ. The "spikes" are the result of a spinlock-bug I fixed the next day.
Good choice. Real data with an explanation comes across much better with this audience. ______
When the forums asked CCP for transparency, we didn't mean the HUD... |
Salene Gralois
K-2
|
Posted - 2011.06.20 21:29:00 -
[60]
That'll cure me of my geek-itch for a week. Thanks, i loved this type of blog in the past and (surprise :D) i still love them.
|
|
Jim Luc
Caldari Rule of Five Split Infinity.
|
Posted - 2011.06.20 22:13:00 -
[61]
So would this allow for a better server-side physX model in the flying-in-space portion of Eve? Also, the ability for twitch-based controls (fast frigates and fighters need the sensation of actually flying around with roll and yaw, and strafe left/right without pointing the nose in a new direction), and fleet formations?
|
Sister Megarea
Sisters of Agony
|
Posted - 2011.06.20 22:42:00 -
[62]
Edited by: Sister Megarea on 20/06/2011 22:42:14 Geek p0rn - SPROING!!!
I'm not a "high level" programmer (Perl/MySQL), but dangit, this is the type of dev blog I really enjoy (Well, this and the ones telling us we're getting free SP )
|
Tres Farmer
Gallente Federation Intelligence Service
|
Posted - 2011.06.20 23:03:00 -
[63]
Just fast in: sweet Dev Blog. Will read once back home in.. phew.. 12+ hours
Get rid of Rooms with Doors - Shortrange Jumpdrives for everybody! |
Robogod
Vires Quod Iunctum
|
Posted - 2011.06.20 23:07:00 -
[64]
Hello, I wondered about this for some time, knowing EVE ran on Python. There are ways around that blasted GIL though. They aren't well documented, but I've used them before. Python can be truly multi-threaded. Interestingly, with Python's true multi-threads the threads don't even have to be running on the same physical computer. You would have to worry more about asynchronous execution, but it can be done for the rest of the code while still maintaining the ease of use of Python.
Being a programmer myself, I understand the aim for the higher-level Python, but it's not the fastest when clock speeds don't go up, just the number of cores. Python can be that "need for speed" though in terms of writing it, and even execution if you lay down the right framework.
This sounds like a great improvement nonetheless. Network IO and transfer speeds have a huge impact on the performance of clusters and MMOs. Good job!
-Robogod "The Machine" |
Elyon Itari
|
Posted - 2011.06.21 00:22:00 -
[65]
Edited by: Elyon Itari on 21/06/2011 00:22:29
This - the improvements, as well as the devblogs detailing them - is why we stick with you through outrageous feature claims and horrendous policy changes.
A+ |
Raid'En
|
Posted - 2011.06.21 00:57:00 -
[66]
too much tech for me, the dark side got me and I didn't read it, only looked at the images. Well, anyway: less lag, right?
|
Komen
Gallente The Night Crew
|
Posted - 2011.06.21 01:00:00 -
[67]
Very impressive. I shall be less upset about the CarbonUI bugs because of this blog. But it doesn't completely make up for them, and I really, really want to see more bugsquashing going on, because it's completely annoying having my windows jump all over the screen, having my ship start moving when I really did not double-click anything, etc, etc, etc.
Carry on and quick-march.
|
Chiralos
Merchant Princes
|
Posted - 2011.06.21 01:13:00 -
[68]
In the recent video when Torfi said that bit about upgrading the F1 car during the race ... I see what he meant.
Good show. Amarr Victor. |
Dezolf
Minmatar DAX Action Stance
|
Posted - 2011.06.21 01:46:00 -
[69]
Great blog. Only missing one thing; pseudo-code. ;D
I do have a quick question, though. It might come from it being 3:40 am, and me not feeling like googling atm. With GPUs being used more and more for calculations etc. (and less for "sweet graphics"), have "you guys" looked into that possibility with some of the things being done server-side? If so, what were your findings? If not, why not? (--This, I assume, is because it's either impossible, improbable, or a waste of time)
Also, something else, I assume that, because you release so much data on how EVE is running server-side, it's potentially easier to create "hacks" or similar (since people don't have to guess so much on the structure of the server-side of things), have you considered this?
[Fakeedit:] Next time I comment on a dev-/techblog, I'll wait 'till I've had a good nights sleep. :P |
MotherMoon
Huang Yinglong
|
Posted - 2011.06.21 02:25:00 -
[70]
Edited by: MotherMoon on 21/06/2011 02:27:54 I'm just going to link this every time somebody says Incarna wasn't a lot of work and it's just "one room"
Thanks for all your work CCP
|
|
Vile Zurk
|
Posted - 2011.06.21 02:40:00 -
[71]
Have you guys tried running your Python code base on PyPy? Its just-in-time compilation can bring surprising performance improvements.
As I don't see CCP porting its Python code base to 3.x soon, PyPy might be the way to go to squeeze more performance (mainly on your game logic servers as I understand it). Plus, using an alternate interpreter is a much less elaborate undertaking than removing Python/PyPy's GIL (as that's what it takes to do true multithreading) or even porting your entire code base to C++ or whatnot.
|
Internet Knight
The Kobayashi Maru
|
Posted - 2011.06.21 04:12:00 -
[72]
Ultimately, CCP has come to realize what I've said all along:
Python sucks, even stackless python.
I don't want to say it, but... I told you so.
---
|
ihcn
|
Posted - 2011.06.21 04:17:00 -
[73]
Originally by: Internet Knight Ultimately, CCP has come to realize what I've said all along:
Python sucks, even stackless python.
I don't want to say it, but... I told you so.
we're all impressed at how much smarter you are than the people at ccp. no really.
|
Kariem Mahkasad
Minmatar Star Frontiers Ignore This.
|
Posted - 2011.06.21 04:58:00 -
[74]
Thanks CCP for the tech-****. I'm in a Game Development degree and this stuff gets me all jittery and excited for my future in the industry. Keep up the awesome; ignore the whiners, hurdle the dead.
|
|
CCP Spitfire
C C P C C P Alliance
|
Posted - 2011.06.21 05:37:00 -
[75]
Originally by: Komen Very impressive. I shall be less upset about the CarbonUI bugs because of this blog. But it doesn't completely make up for them, and I really, really want to see more bugsquashing going on, because it's completely annoying having my windows jump all over the screen, having my ship start moving when I really did not double-click anything, etc, etc, etc.
Carry on and quick-march.
Despite sharing a common name, CarbonIO and CarbonUI are developed by two different teams. (That's not to say that CarbonUI work does not bring performance improvements as far as the game client is concerned.)
Spitfire Community Representative CCP Hf, EVE Online |
|
Gnulpie
Minmatar Miner Tech
|
Posted - 2011.06.21 06:07:00 -
[76]
That is completely awesome! WOOOOOOOOT
But what I don't understand ... you seem to have invented a way to offload data processing (IO in this case) from the GIL and completely out of the usual Python-processing path, and even have the ability to run C++ modules on the fly without involving Python, thus harnessing all the available multicore/multiprocessor power.
Why can't that be done with the modules that handle the space simulation and create "the lag" in huge fleet battles? Offload the critical parts from Python and the GIL, calculate bits and pieces using multicores/multiprocessors, use a static copy of the battlefield until all calculations are done, upload the result back to Python - just the same way you do it with the IO now?
Or am I missing something here? Are the space simulation modules too complex to be untied from Python and GIL? |
Alain Kinsella
Minmatar
|
Posted - 2011.06.21 10:45:00 -
[77]
Great read, it reminds me of some software here at work...
Anyway, your post implies that there is now a backend routing network (BlueNet) helping to optimize your network latency. It also appears to have a caching ability for local data, if I've read it correctly.
That brings up a couple interesting questions/suggestions:
1) Have you considered applying redundant connections on the backend, and 'predictive' secondary connections on the client? Example of the second would be to have the client's node pre-connect to the system associated to a gate you're flying to, and begin loading up data ahead of time. This could make the renders more seamless, and the jump lag would be reduced considerably.
2) Does this make it easier to decouple the pure python modules such that they can run in another OS? I can see those parts possibly running better in a more compact space, like the Sparc T2/T3 chips. [Note this is pure curiosity; I think you're working well on windows boxes - better than I've seen Solaris in some cases.]
|
Ione Hawke
|
Posted - 2011.06.21 11:10:00 -
[78]
Edited by: Ione Hawke on 21/06/2011 11:14:10
Originally by: CCP Curt
CarbonIO was never about reducing load directly, that's a side effect! BlueNet is about reducing serial load, when teams like Gridlock start using it to go-wide on the flying-in-space systems.
Uhu, can you give a time & effort scale for when we can expect this to be applied to fleet battles? (as in, 'this is a piece of cake now and requires a week of diligent coding and careful testing and voila', or 'this is a serious effort and requires a complete rewrite of all in-space communication code, expect it summer 2013', or ... something in between :))
edit: bleh, my portrait background is screwed up
|
VE Vengeance
|
Posted - 2011.06.21 12:50:00 -
[79]
Originally by: Alain Kinsella Great read, it reminds me of some software here at work...
Anyway, your post implies that there is now a backend routing network (BlueNet) helping to optimize your network latency. It also appears to have a caching ability for local data, if I've read it correctly.
That brings up a couple interesting questions/suggestions:
1) Have you considered applying redundant connections on the backend, and 'predictive' secondary connections on the client? Example of the second would be to have the client's node pre-connect to the system associated to a gate you're flying to, and begin loading up data ahead of time. This could make the renders more seamless, and the jump lag would be reduced considerably.
2) Does this make it easier to decouple the pure python modules such that they can run in another OS? I can see those parts possibly running better in a more compact space, like the Sparc T2/T3 chips. [Note this is pure curiosity; I think you're working well on windows boxes - better than I've seen Solaris in some cases.]
Interesting point. The "prefetching" of nearby solar systems sounds interesting. The question is whether the majority of the jump process is getting information from the new node (the system you are jumping to) or the transport of your data to the new node. The second part would be something like "pre-send", so you send your basic data to the system nodes of nearby gates. But I guess it would be complicated, because pre-sent/fetched data is not accurate at the moment you jump through the gate. A lot of things could happen during warp to the gate :D
|
Ay Liz
Sacred Templars RED.OverLord
|
Posted - 2011.06.21 12:51:00 -
[80]
Very interesting. I look forward to any improvements Team Gridlock can achieve with this and Time Dilation.
|
|
Neolithic Man
|
Posted - 2011.06.21 12:53:00 -
[81]
WHOA
|
Yvan Ratamnim
|
Posted - 2011.06.21 13:41:00 -
[82]
Ok, so this was all server-side CarbonIO improvements that were astounding, but you say the BlueNet things you're seeing in testing are ASTONISHING or something of that nature, and the tests shown are without BlueNet...
What is the delay in releasing BlueNet onto TQ? Still not finished? Is there a timeline on when BlueNet will get switched on on TQ?
|
Consortium Agent
|
Posted - 2011.06.21 14:06:00 -
[83]
Very nicely done blog, CCP. This is the kind of detail we like to see, and isn't it great that this doesn't fly over the heads of many of your customers?! :P
I appreciate all the hard work you guys are doing with Eve from a back-end perspective - this sort of development frequently goes unnoticed and you end up with a lot of us asking for more this or more that without heed to the work you've already been doing. We still want more this or more that, of course <g>, but many of us do recognize what you're doing over there and why :)
I also understand the need to unify core code across multiple projects and I'd venture a guess that working with Sony's network for Dust514 may have assisted with understanding how to implement BlueNet?
In any case, I know I've been rough on some of you at CCP lately (and you deserve that) but really... this is awesome work and kudos to CCP developers for putting it together. This is going to make Eve lololfast(er) ;P
|
LLoyd Thomson
|
Posted - 2011.06.21 15:23:00 -
[84]
I understood everything before the first occurrence of GIL.
No, seriously, freakin' good job. Must have been hell to introduce.
|
Skogen Gump
The Star Fraction
|
Posted - 2011.06.21 15:48:00 -
[85]
Very good write-up, but I have a question:
I was led to believe that Python IO operations were not bound to the GIL:
Originally by: http://wiki.python.org/moin/GlobalInterpreterLock
The GIL is controversial because it prevents multithreaded CPython programs from taking full advantage of multiprocessor systems in certain situations. Note that potentially blocking or long-running operations, such as I/O, image processing, and NumPy number crunching, happen outside the GIL. Therefore it is only in multithreaded programs that spend a lot of time inside the GIL, interpreting CPython bytecode, that the GIL becomes a bottleneck.
Is this a particular implementation detail of Stackless?
Also, Python 3.2 has re-written the GIL and whilst it's not perfect it certainly sounds better than it did:
Originally by: What's new in Python 3.2
The mechanism for serializing execution of concurrently running Python threads (generally known as the GIL or Global Interpreter Lock) has been rewritten. Among the objectives were more predictable switching intervals and reduced overhead due to lock contention and the number of ensuing system calls. The notion of a "check interval" to allow thread switches has been abandoned and replaced by an absolute duration expressed in seconds. This parameter is tunable through sys.setswitchinterval(). It currently defaults to 5 milliseconds.
Just wondering if you've seen any performance gains by moving to Stackless 3.2?
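[Editor's aside: the wiki's claim that blocking operations happen outside the GIL is easy to demonstrate with the standard library. A generic sketch, nothing EVE-specific; time.sleep stands in for a blocking I/O call.]

```python
# Illustrative sketch: blocking calls release the GIL, so threads that
# spend their time blocked overlap almost perfectly. Four concurrent
# 0.2-second blocks finish in roughly 0.2 seconds, not 0.8.
import threading
import time

def blocking_io():
    time.sleep(0.2)  # the interpreter lock is released while blocked

def timed_run(n_threads):
    threads = [threading.Thread(target=blocking_io) for _ in range(n_threads)]
    start = time.monotonic()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.monotonic() - start

elapsed = timed_run(4)
print(round(elapsed, 2))  # close to 0.2, well under 4 x 0.2
```

CPU-bound bytecode, by contrast, serializes on the GIL, which is the distinction the quoted wiki passage is drawing.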
|
NoobPwn
|
Posted - 2011.06.21 16:23:00 -
[86]
I think this is just an async networking model, which has been widely used for about a decade; the difference is that your packet builder & parser is written in Python. You used to call network functions within the Python interpreter lock; now you access them via a queue and the send/recv process is done in other threads.
Good to know that you made such improvements, but what you really should get done is to get rid of the standard Python interpreter once and for all, and use something like LLVM to improve actual runtime performance.
Example: http://code.google.com/p/unladen-swallow/
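[Editor's aside: the queue-plus-I/O-thread model described in the post above can be sketched in a few lines. This is an illustrative reconstruction of the general pattern, not CCP's actual code; the transmit callback is a hypothetical stand-in for a real send function.]

```python
# Illustrative sketch: the interpreter thread enqueues outgoing packets;
# a dedicated sender thread drains the queue, so the send path runs off
# the main thread.
import queue
import threading

def start_sender(transmit):
    q = queue.Queue()
    def sender():
        while True:
            packet = q.get()
            if packet is None:  # sentinel value shuts the sender down
                break
            transmit(packet)
    thread = threading.Thread(target=sender, daemon=True)
    thread.start()
    return q, thread

sent = []
q, thread = start_sender(sent.append)  # stand-in for a real socket send
for msg in (b"hello", b"world"):
    q.put(msg)
q.put(None)
thread.join()
print(sent)  # [b'hello', b'world']
```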
|
NoobPwn
|
Posted - 2011.06.21 16:31:00 -
[87]
One more thing to say: the current client has issues with the compression library used; zlib 1.2.3 has bugs that can corrupt the heap. I encountered thread-safety issues too: blue.dll (your ultimate interface to Python objects) is referencing a class being freed elsewhere, a crash in blue.BlueWrapper::ConvertBlueToPython whose call stack shows references from the UI.
|
Kaelarian
Handsome Millionaire Playboys
|
Posted - 2011.06.21 17:09:00 -
[88]
So does this mean that it could be possible to implement direct piloting control (e.g. joystick) in the future? The main bottlenecks seem to be the tick rates required for the network IO and Carbon. Depending on the implementation specifics, it seems like the primary remaining obstacle is Carbon.
|
Levitikon
Constructive Influence Northern Associates.
|
Posted - 2011.06.21 17:37:00 -
[89]
Originally by: CCP Veritas
Originally by: Taedrin The idea is that an hour of programming time is worth FAR more than a couple CPU cycles. Higher level programming languages are better business than lower level programming languages.
Quoted for truth.
Nope, that's only true when you're writing a new iFart application for the iPhone.
For huge, realtime systems with tens or even hundreds of thousands of concurrent users, systems just like EVE, those few CPU cycles are worth much more than a single hour of a programmer.
For large enough systems, higher code efficiency is a make-or-die matter. Just ask Google, Facebook, Amazon or PayPal.
Besides, with the amount they pay you at CCP, it's not as if your work-hour is worth that many CPU cycles :P
|
Aineko Macx
|
Posted - 2011.06.21 17:56:00 -
[90]
Originally by: CCP Warlock ...
Hello Jacky, I remember our discussion at the dev pub crawl about your research into the GIL problem and possible solutions. Are the improvements mentioned in this devblog the result of this research (i.e. the GIL can't be solved, but can be circumvented), or do you still have something in the cooker? ________________________ CCP: Where fixing bugs is a luxury, not an obligation. |
|
Isabella Thresher
Fat Kitty Inc. Excuses.
|
Posted - 2011.06.21 19:14:00 -
[91]
good blog, thanks
|
|
CCP Atropos
|
Posted - 2011.06.21 22:27:00 -
[92]
Originally by: NoobPwn Good to know that you made such improvements, but what you really should get done is to get rid of the standard Python interpreter once and for all, and use something like LLVM to improve actual runtime performance.
Example: http://code.google.com/p/unladen-swallow/
Unfortunately Unladen Swallow is pretty much dead in the water these days.
If you're interested in other Python VM's, here is a good panel from PyCon this year covering most of the currently available VM's (CPython, PyPy, Jython and Iron Python).
Software Engineer Core Engineering |
|
|
CCP Curt
|
Posted - 2011.06.22 12:48:00 -
[93]
Quote: With GPU's being used more and more for calculations etc. (and less for "sweet graphics"), Have "you guys" looked into that possibility with some of the things being done server-side?
Short answer is no, long answer is hell no.
Three problems that I can think of off the top of my head:
- GPUs are *not* CPUs; they are specialized parallel vector processors. They do math very. damn. fast. (google MAC unit) But that's really all. Logic-wise they are pretty limited and highly proprietary, not the hybrid RISC/CISC beasts modern CPUs have become.
- Primarily because of the preceding, coding for them is very specialized. There is no good standard; crack a book on writing pixel shaders for an education on the mind-scrambling business of writing code that works on even most of them, let alone any kind of standard API for loading non-graphics work onto them.
- Even if all of that could be solved, there is still the fact that servers don't have *****in' graphics cards; if they have one at all, it's usually an on-board single-chip OEM something-that-gets-the-terminal-to-light-up. Servers tend to be stuffed with conventional CPUs.
If you were referring to speeding up the Client then its a flat-out "no" the GPU is busy doing what it needs to be doing, game logic/routing always takes a back seat to rendering.
Quote: You guys tried running your Python code base on PyPy? Its just-in-time compilation can bring surprising performance improvements.
The powers that be are very conscious of newer/faster/better platforms and research them more thoroughly than they are generally given credit for. Try to keep in mind that there is a vast difference between proof-of-concept code, or even a moderately complex piece of software, and the weapons-grade commercial monster that runs cluster-wide making EVE happen.
Quote: I think this is just an async networking model, which has been widely used for about a decade; the difference is that your packet builder & parser is written in Python
Close, most of the communications glue has been asynchronous since StacklessIO. What was (and is) in Python is the marshaling logic (aka Machonet) which can be thought of as a dynamic routing configure-er. Its interaction is minimal once the routes are computed, and was never a bottleneck. The difference now is the decoupling of the GIL from the actual transactions (of which there are about a kabillion per second) so they can deliver that data to *other* asynchronous-capable systems.
Which in turn frees up the Python interpreter to do [much]more work.
Quote: Nope. that's only true when you're writing a new iFart application for iPhone.
For huge, realtime systems with tens, or even hundreds of thousands of concurrent users, systems just like EVE, that few cpu cycles are worth much more than single hour of programmer.
I would so pay for iFart.
You are correct, CPU cycles can be VERY precious, but only "where it matters", and that's the rub: when does it matter? Less than most people think. For an excellent discussion of such things I refer you to an extremely relevant masterwork: http://www.amazon.com/Zen-Code-Optimization-Ultimate-Software/dp/1883577039
If you are interested in writing seriously fast code, skip the chapter on optimizing for the 286 and tattoo the rest on the inside of your eyelids.
|
|
Glyken Touchon
Gallente Independent Alchemists
|
Posted - 2011.06.22 14:19:00 -
[94]
Originally by: Gnulpie Why can't that be done with the modules that handle the space simulation and create "the lag" in huge fleet battles? Offload the critical parts from Python and the GIL, calculate bits and pieces using multicores/multiprocessors, use a static copy of the battlefield until all calculations are done, upload the result back to Python - just the same way you do it with the IO now?
If I'm reading it correctly (no guarantees), that's the sort of stuff Gridlock will be doing next - taking advantage of the new capabilities. ______
When the forums asked CCP for transparency, we didn't mean the HUD... |
WheatGrass
Gallente Silent but Friendly
|
Posted - 2011.06.22 19:03:00 -
[95]
Edited by: WheatGrass on 22/06/2011 19:06:19 - The geek in me really enjoyed the dev blog. Thank you.
Originally by: CCP Curt - Even if all of that could be solved, there is still the fact that servers don't have *****in' graphics cards....
May 24, 2011 - "Cray Unveils Its First GPU Supercomputer," Michael Feldman, HPCwire.com
Edit... I suppose you're going to want me to renew my subscription now. |
Alundil
Gallente Galactic Salvage Inc.
|
Posted - 2011.06.22 22:39:00 -
[96]
First, excellent dev blog CCP. Enjoyed reading about this. And I also enjoy knowing that there are real tangible plans (and implementations) to improve the backend.
Originally by: WheatGrass Edited by: WheatGrass on 22/06/2011 19:06:19 -The geek in me really enjoyed the dev blog. Thank you.
Originally by: CCP Curt - Even if all of that could be solved, there is still the fact that servers don't have *****in' graphics cards....
May 24, 2011 Cray Unveils Its First GPU Supercomputer Michael Feldman of HPCWire.com
Edit... I suppose you're going to want me to renew my subscription now.
I guess since you found an article showing "servers" with GPUs, you expect CCP to have already coded for, and deployed to, said "servers"? (even though the server you reference is a Cray SC and not a traditional "datacenter server") Who feels like upping their sub $$ to help acquire a couple of real-deal supercomputers?
|
Grimnir
|
Posted - 2011.06.22 22:58:00 -
[97]
Originally by: Alundil
I guess since you found an article showing "servers" with GPUs, you expect CCP to have already coded for, and deployed to, said "servers"? (even though the server you reference is a Cray SC and not a traditional "datacenter server") Who feels like upping their sub $$ to help acquire a couple of real-deal supercomputers?
We can pay for it in Aurum!
Sorry, couldn't resist it :)
|
|
CCP Curt
|
Posted - 2011.06.22 23:17:00 -
[98]
Quote:
I guess since you found an article showing "servers" with GPUs, you expect CCP to have already coded for, and deployed to, said "servers"? (even though the server you reference is a Cray SC and not a traditional "datacenter server") Who feels like upping their sub $$ to help acquire a couple of real-deal supercomputers?
Now now, be nice :)
Those supercomputers are one-trick ponies; they exist to do math (vector processing) and very little else. That article is a very interesting read in that the blades have to balance actual CPUs against GPUs depending on the job they are meant to do.
Very little an EVE server does would benefit from GPU offloading - remember, it's not just "math," it's a specific KIND of highly parallel vector math that those monsters do.
Oh, I'm sure if we slapped some network interfaces on those blades and re-implemented our codebase on Cray's Linux they would be very capable servers, but the bang-for-buck would be pretty disgusting. Remember, the whole point of parallel computing is to go wide. Supercomputers go wide *and* deep to do specialized high-availability (usually real-time) tasks such as weather, ballistics, particle models and such.
|
|
Christelle CoraFlorenza
|
Posted - 2011.06.22 23:30:00 -
[99]
Congratulations to programmers who were finally given a possibility to realize an elegant solution the way it should have worked from day one.
Also, marketing guys, don't cry. You'll be able to phuk those techies over again soon enough by rushing out the next features half-finished.
But seriously. You have our money, please let the techies do their thing and give it to us when it's awesome, not when it just barely works.
|
Jaik7
|
Posted - 2011.06.23 03:52:00 -
[100]
will this let me run CQ on low settings above 19 FPS? because i am getting tired of running into walls.
Originally by: CCP Shadow The trolls have been vanquished.
|
|
Eraggan Sadarr
|
Posted - 2011.06.23 13:39:00 -
[101]
Very nice read. A true dev blog :) ... and Python, because productivity is so high with this language.
Eve Market Scanner - Marketlog comparisons |
Jim Luc
Caldari Rule of Five Split Infinity.
|
Posted - 2011.06.23 16:03:00 -
[102]
I heard a few things about not offloading to GPU things and such, but perhaps I'm not understanding correctly.
It is possible to implement a PhysX server for the space physics calculations now, right? I know the GPU wouldn't be good for things like fleet fights or AI calculations, but for missile physics and ship physics (so we can finally get a better rigid body than just the bounding sphere), I would think this would help take the load off the regular CPU, or at least return the calculations per tick much faster.
I would like to see an eventual inclusion of twitch-based controls, even if they're only plausible for a frigate in MWD or for controlling drones & fighters - it would be fun to fly around like that.
|
Inspiration
|
Posted - 2011.06.23 16:09:00 -
[103]
Edited by: Inspiration on 23/06/2011 16:10:51
Originally by: CCP Curt You are correct, CPU cycles can be VERY precious, but only "where it matters" - and that's the rub: when does it matter? Less than most people think. For an excellent discussion of such things I refer you to an extremely relevant masterwork: http://www.amazon.com/Zen-Code-Optimization-Ultimate-Software/dp/1883577039
If you are interested in writing seriously fast code, skip the chapter on optimizing for the 286 and tattoo the rest on the inside of your eyelids.
I remember his earlier book, 'Zen of Assembly Language':
http://www.amazon.com/Zen-Assembly-Language-Knowledge-Programming/dp/0673386023/ref=pd_sim_b_1
I learned a great many things from it back in my young days :). If this book you refer to is half as good as that one was, it is easily worth its money!
|
Lederstrumpf
|
Posted - 2011.06.27 13:08:00 -
[104]
CCP PR folks should have advertised your blog rather than wasting time praising monocles!
|
Ma Talune
Minmatar DaZeD and ConFuseD Nabaal Syndicate
|
Posted - 2011.06.28 17:34:00 -
[105]
Originally by: Jim Luc I heard a few things about not offloading to GPU things and such, but perhaps I'm not understanding correctly.
It is possible to implement a PhysX server for the space physics calculations now, right? I know the GPU wouldn't be good for things like fleet fights or AI calculations, but regarding missile physics, ship physics (so we can finally get a better rigid body than just the bounding sphere, I would think this would help take the load off of the regular CPU, or at least return the calculations per tick much faster.
Nope - most of this stuff is done on the servers for cheat-security reasons and whatnot, i.e. not on your local CPU+GPU. And the servers gain nothing from GPGPU - never say never, but the stuff CCP does is way too general-purpose to work well on GPUs.
Originally by: Jim Luc
I would like to see an eventual inclusion of twitch-based controls, even if they're only plausible for a frigate in MWD or for controlling drones & fighters - it would be fun to fly around like that.
Think we all would - I beta tested a German game due for release soon, and it just sucked in most of the ways EVE is great. Generally speaking, you can't have both things and eat them too: the tech of a proper MMO like EVE doesn't work with real-time twitch gameplay. In fact, this blog is about CCP moving closer to the real-time-ish approach that WoW and many other land-based MMOs take - but those are just cheats to hide that it's still nowhere near real-time. Station-walking-style movement often cheats a little better, because we programmers can predict human movement further into the future. Every competitor is where EVE's systems are now, or worse. One of the main issues is that the round trip between you and CCP's London datacenter and back is way too long, in internet-world terms, for most of the real-time stuff we dream of. Even modern first-person shooters like Call of Duty or Battlefield use many of the same cheats as MMOs to hide that they still have really big problems with delay - that's why, for all the research into the topic, FPSs still lag unless you are very close by with a good, stable connection. And that's why we can't have cake.
As a note to people who love fixes (like me ;), be aware that CCP needs to keep pushing content first, since competitors are getting better at doing stuff that looks neat compared to EVE - at least if you are not a spreadsheet or sci-fi geek like me who can accept EVE's real-time limitations. Basically, CCP goes through periods of strong competition with new releases over new and old customers, until people realize the other games don't provide the same depth behind their differentiating features. EVE has the edge here: it can match anyone in money and staff, it has been around for a long time, and it has some very clever staff members. I've spoken to some of them when they were in Denmark, and even drunk they are neat people in general.
-----------
Hmmmm.... I remember back in the day when we in ISS....
long live the Minmatar people. |
Ma Talune
Minmatar DaZeD and ConFuseD Nabaal Syndicate
|
Posted - 2011.06.28 17:37:00 -
[106]
Originally by: Ma Talune
Nope - most of this stuff is done on the servers for cheat-security reasons and whatnot, i.e. not on your local CPU+GPU. And the servers gain nothing from GPGPU - never say never, but the stuff CCP does is way too general-purpose to work well on GPUs.
What I mean to say is that MMO physics != singleplayer physics. Heck, even multiplayer games like online FPSs don't integrate well with advanced physics. They just cheat a lot so you don't see the difference, and throw some extra CPU power at the server and such.
!= being "does not equal" -----------
Hmmmm.... I remember back in the day when we in ISS....
long live the Minmatar people. |
Xianthar
Vanishing Point. The Initiative.
|
Posted - 2011.07.05 22:58:00 -
[107]
Edited by: Xianthar on 05/07/2011 22:58:25
Originally by: Vile Zurk You guys tried running your Python code base on PyPy? Its just-in-time compilation can bring surprising performance improvements.
Stackless Python can't run on just any VM; it's a distribution of its own, though it can also be used as a module in CPython.
That being said, PyPy has included many of the features of Stackless, and they have been tossing around the idea of killing the GIL.
The GIL isn't a "feature" of Python but of the VM running the code.
For instance, Jython doesn't have a GIL, nor does IronPython.
PyPy, CPython (the 'default' VM) and Stackless (a fork of CPython) do have the GIL.
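For anyone curious what the GIL means in practice, here is a quick illustration (a sketch, not a benchmark): on a GIL-bound VM like CPython, two CPU-bound threads take turns holding the interpreter instead of running in parallel, so the threaded run takes roughly as long as running both workloads serially. On a GIL-free VM like Jython the threaded run could approach half the serial time on two cores.

```python
import threading
import time

def count(n):
    # Pure-Python busy loop: CPU-bound, so it holds the GIL while running.
    while n:
        n -= 1

N = 2_000_000

# Run both workloads back to back on one thread.
start = time.perf_counter()
count(N)
count(N)
serial = time.perf_counter() - start

# Run the same two workloads on two threads.
start = time.perf_counter()
t1 = threading.Thread(target=count, args=(N,))
t2 = threading.Thread(target=count, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()
threaded = time.perf_counter() - start

# Under the GIL, `threaded` comes out roughly comparable to `serial`
# (sometimes worse, due to lock contention), not serial / 2.
print(serial > 0 and threaded > 0)
```

Which is exactly why CarbonIO moves the transaction work off to native threads that never need the GIL at all, rather than trying to parallelize the Python layer itself.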
|
Subrahmaya Chandrasekhar
Amarr J0urneys End
|
Posted - 2011.08.29 16:19:00 -
[108]
Exceptional explanation CCP Curt. I know a lot of work went into that. Thank you!
|
Altzcher
|
Posted - 2011.09.01 11:27:00 -
[109]
Edited by: Altzcher on 01/09/2011 11:32:52 [...] How effective is this? We're not sure yet, but its somewhere between "unbelievably stunning" and "impossibly amazing" reductions in lag and machine load for systems where this is applicable. Seriously we can't publish findings yet, we don't believe them. [..]
Grats on ur achievement, sounds rather groundbreaking - wut about the technology itself? im no techie, but can other systems use ur work? and if they can, then a big gz on ur raise =P
In any case good luck and ty for the read.
D.
|
Electra Magnetic
|
Posted - 2011.09.01 18:39:00 -
[110]
You guys are way behind. I am highly disappointed that this was not implemented during the latest patch. The delay represents, as a whole, how I and the majority of your players and potential players feel about CCP. We are fed up with your lackadaisical approach to development and player experience. Multi-core, GPU, and monitor support were due months ago. Walking in stations with other people seems years away. What happened to Dust 514? Oh yea... You guys don't care enough about the money you are making now to improve the game and expand.
"No one wants to spend 3 weeks of their lives training Refinery Efficiency 5"
|
|
Peregrine Brockhouse
|
Posted - 2011.09.01 21:52:00 -
[111]
Originally by: Electra Magnetic You guys are way behind. I am highly disappointed that this was not implemented during the latest patch. The delay represents, as a whole, how I and the majority of your players and potential players feel about CCP. We are fed up with your lackadaisical approach to development and player experience. Multi-core, GPU, and monitor support were due months ago. Walking in stations with other people seems years away. What happened to Dust 514? Oh yea... You guys don't care enough about the money you are making now to improve the game and expand.
"No one wants to spend 3 weeks of their lives training Refinery Efficiency 5"
Clueless troll is clueless.
|
Omendor
Special Projects Experts
|
Posted - 2011.09.02 12:43:00 -
[112]
Edited by: Omendor on 02/09/2011 12:44:37 To be serious again...
That dev blog about CarbonIO was just amazing, and there are hints across the article that let me assume the client, too - or especially the client - can benefit greatly from this update. Can you tell us how this will impact our clients? Sometimes when loading grids, loading hangars at POSes, checking/searching assets, jumping with the assets window open, searching contracts and so on, we can all experience a pause in the client. Will these apparently I/O-heavy operations also benefit from CarbonIO? I do not expect another great and detailed blog from the devs, just a short glimpse of what we can expect from this. Thanks a lot!
And to everyone at CCP: Please keep up with all the great work! You give us great stuff to have fun with! All those trolls out there are just cute little teddy bears, which want to play with kids (or was it the other way around )
|
Sverre Haakonson
Gallente
|
Posted - 2011.09.06 08:40:00 -
[113]
A really nice blog!
|
|
|
|