
Eleana Tomelac
Gallente Through the Looking Glass
|
Posted - 2008.05.07 14:13:00 -
[31]
Originally by: Arana Tellen I can't stand the input lag with vsync. I have a fast-response monitor and mouse and no vsync; if you have a slow monitor (even modern, high-cost ones can be bad due to the type of tech they use), a slow mouse, and vsync... ugh.
Reducing buffering should help here; triple buffering adds even more input lag. I don't remember whether you can turn it off in EVE, but you can forbid it in the video drivers. After all, when you are capped at 30 fps with vsync, yes, it hurts having frames rendered ahead! I'd have to check how far the drivers let you limit the number of frames rendered ahead. A sketch of the mechanism is below.
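To make the "present interval" idea concrete, here is a minimal sketch of how a Direct3D 9 application chooses it (illustrative only: EVE's real code is not public, and the helper name here is made up):

#include <d3d9.h>

// Fill the presentation parameters for a windowed D3D9 device.
// vsync == true maps to what EVE calls "interval one".
void FillPresentParams(D3DPRESENT_PARAMETERS& pp, bool vsync)
{
    ZeroMemory(&pp, sizeof(pp));
    pp.Windowed         = TRUE;
    pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
    pp.BackBufferFormat = D3DFMT_UNKNOWN;
    // A single back buffer (double buffering) queues fewer frames ahead
    // than triple buffering, which is what trims the input lag.
    pp.BackBufferCount  = 1;
    pp.PresentationInterval = vsync
        ? D3DPRESENT_INTERVAL_ONE        // wait for one vertical retrace
        : D3DPRESENT_INTERVAL_IMMEDIATE; // present as fast as the card can
}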
For EVE this kind of lag isn't too annoying; you won't miss the target the way you would in an FPS! -- Pocket drone carriers (tm) enthusiast !
Assault Frigates MK II |

Arana Tellen
Gallente The Blackguard Wolves Black Star Alliance
|
Posted - 2008.05.07 14:22:00 -
[32]
It's the present interval control.
http://techgage.com/reviews/zalman/zm-mfc2/zalman_zmmfc2_11_thumb.jpg
It sits in between the wall and the PSU, so it includes the losses in the PSU; it comes through a backplate port and goes into the panel. ---------------------------------
Oh noes!
Originally by: CCP Greyscale *moderated - mother abuse - Mitnal*
|

Reecoh Soltar
Exotic Dancer Talent Agency Zeta Tau Epsilon
|
Posted - 2008.05.07 14:26:00 -
[33]
A few weeks ago I took delivery of a brand new gaming rig. It's running dual nVidia 8800GT cards in SLI. Eve was one of the first things I put on it.
Eve was running at 200+ FPS with this setup with all options at max, and the card temps were lower than what I was seeing in HL2 with the Cinematic Mod installed. But Eve would frequently cause Windows (XP SP2 32-bit) to crash, requiring a full reboot, or cause Eve itself to show weirdly distorted graphics (usually a crash in full-screen mode and distortion in windowed mode, I noticed). I had no problems with anything else I installed.
I solved this by doing 2 things: 1) set my Interval to 1, as folks have mentioned here, and 2) in the nVidia control panel forced all anti-aliasing options to OFF for Eve (after reading a dev post saying AA was not supported at all right now). So that would be my recommendation to start. I may have also turned SLI off; I don't recall now if I turned that back on. I'll check when I get home and update if it's still off.
|

Cpt Cosmic
|
Posted - 2008.05.07 14:31:00 -
[34]
Originally by: Eleana Tomelac stuff
That's why you have heatsinks: so they don't affect the other components.
|

Eleana Tomelac
Gallente Through the Looking Glass
|
Posted - 2008.05.07 14:34:00 -
[35]
Originally by: Reecoh Soltar I solved this by doing 2 things: 1) set my Interval to 1, as folks have mentioned here, and 2) in the nVidia control panel forced all anti-aliasing options to OFF for Eve. [snip]
I've never used SLI, but couldn't you do this: dedicate one board to the AA and the other to the rest? Wouldn't that be a good approach, since I don't remember EVE being very comfortable with either dual GPUs or dual CPUs?
It was just a thought... It may be totally dumb and not work at all, but you may want to test it.
PS: for the people having issues, are you all forcing the AA options from the drivers, or from the software mentioned in the HDR+AA thread that is around here? -- Pocket drone carriers (tm) enthusiast !
Assault Frigates MK II |

Eleana Tomelac
Gallente Through the Looking Glass
|
Posted - 2008.05.07 14:39:00 -
[36]
Originally by: Cpt Cosmic
Originally by: Eleana Tomelac stuff
That's why you have heatsinks: so they don't affect the other components.
The other components still get hot with most of the cooling mounted on the cards.
The heatsink is not a shield for the other components; the GPUs are not mounted in boxes insulated from the rest of the board...
So, other components can still get hot, and it happens with most of the cards sold these days. -- Pocket drone carriers (tm) enthusiast !
Assault Frigates MK II |

Cpt Cosmic
|
Posted - 2008.05.07 15:53:00 -
[37]
Originally by: Eleana Tomelac
The other components still get hot with most of the cooling mounted on the cards. [snip]
Yeah, that's true under normal circumstances, but I am a modder :) I am running a notebook with an 8600M GT. I am overclocking it to bring its performance close to its desktop counterpart, and I built three aluminium heatsinks into my notebook to shield it totally from the rest, hehe. It runs at 90 °C under heavy load, lol, but it works really well.
But back on topic, I forgot to mention something: even though I think EVE won't destroy your card, I have noticed FPS drops since the last patches. But maybe it's just my drivers, dunno.
|

Reecoh Soltar
Exotic Dancer Talent Agency Zeta Tau Epsilon
|
Posted - 2008.05.07 16:17:00 -
[38]
Originally by: Eleana Tomelac
I've never used SLI, but couldn't you do this : Dedicating one board for the AA and the other for the rest? Wouldn't it be a good way as I don't remember eve being very comfortable with either dual GPUs or dual CPUs.
It was just a thought... It may be totally dumb and not work at all, but you may try to test it.
PS : for the people having issues, are you all forcing the AA options from the drivers or from the software mentionned in the HDR+AA thread that is around here?
I'm not sure about splitting AA onto the second card, being new to SLI myself. However, until CCP supports AA I'm going to leave it off. See the comments from CCP Tanis here:
http://oldforums.eveonline.com/?a=topic&threadID=686575&page=5#149
|

Eleana Tomelac
Gallente Through the Looking Glass
|
Posted - 2008.05.07 16:44:00 -
[39]
Originally by: Reecoh Soltar
I'm not sure about splitting AA onto the second card, being new to SLI myself. However, until CCP supports AA I'm going to leave it off. See the comments from CCP Tanis here:
http://oldforums.eveonline.com/?a=topic&threadID=686575&page=5#149
That was the thread I was thinking of.
Maybe a dev should note in that one that AA+HDR may cause serious crashes?
As for splitting the AA onto the other card, I suggested it because the processing path could be different, so it could be a workaround for the AA+HDR bugs. But it might crash just as badly! Worth a try, I think... You should visit the NVIDIA site and check whether there are explanations; they have a section (or a subsite) dedicated to SLI. -- Pocket drone carriers (tm) enthusiast !
Assault Frigates MK II |

Jian Gi
Caldari Deadly Addiction Un-Natural Selection
|
Posted - 2008.05.07 17:29:00 -
[40]
It's not EVE's fault, or any other app's for that matter.
It's the designer's responsibility to specify safety margins for the safe usage of his design. If there is a problem (a limit) somewhere, it should be clearly stated in the spec sheet, or taken care of internally.
Keep in mind also that all ICs have some kind of PLL internally to generate their clock; scaling down this frequency in response to increased thermals is trivial. A sketch of the idea is below.
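To illustrate that thermally-driven clock scaling, a minimal sketch (all names and thresholds are made up for illustration; real parts do this in hardware or firmware):

// Hypothetical thermal-throttle tick, called periodically by firmware.
const int TEMP_LIMIT_C = 105;  // assumed maximum junction temperature
const int HYSTERESIS_C = 10;   // cool-down margin before restoring speed

void ThermalTick(int dieTempC, int& clockDivider)
{
    if (dieTempC >= TEMP_LIMIT_C)
        clockDivider *= 2;   // halve the effective clock: less switching, less heat
    else if (dieTempC < TEMP_LIMIT_C - HYSTERESIS_C && clockDivider > 1)
        clockDivider /= 2;   // restore speed once the die has cooled
}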
It could be that the coding in EVE aggravates some kind of problem, but it is still the IC designer's fault and not the other way around...
(I am an IC designer in RL, albeit an analog one. TDP is a ***** when talking about MIMO radios.) |

Anatal Klep
|
Posted - 2008.05.07 20:47:00 -
[41]
Originally by: Arana Tellen
Well this all depends on how the company works out TDP and whether they have put sufficient cooling on the card for very heavy loads, along with airflow conditions inside the case.
True, the vendor might screw up the cooling. Also, NVIDIA may have specced the card incorrectly and CCP is actually within spec. That does not seem to be the case here, though, since there were no problems prior to Trinity 1.1. IOW, something CCP did is causing the problem.
If the card is run harder than its specs allow, then the cooling that was designed for those specs may not cut it. Note that it is entirely possible that CCP was well aware of the specs and designed to them. However, a bug in the software could easily cause the card to be overdriven anyway.
In fact, I think this is very probably the case. I can run missions for hours on end with no problems; I can do anything in EVE except market trading without problems. But I am a trader, so I do a lot of trading. As I have reported in another thread, the problems start whenever I have done extensive Market activity, and the Market display will usually start acting flaky prior to the crashes and lockups.
It seems pretty obvious that an update thread in the Market is stepping on (or getting stepped on by) another thread. It is entirely possible that some state variable used to terminate a loop gets reset and the loop doesn't terminate (or runs 2 gig iterations); a sketch of that kind of race is below. If that loop has the wrong graphics driver calls in it, the card could be running exactly the same set of gates at maximum clock speed in an infinite loop. That will cause local overheating because the card is being run out of spec.
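Purely to illustrate the kind of race being speculated about (nothing here is from EVE's actual code; every name is invented):

#include <atomic>

std::atomic<bool> updateDone{false};  // termination flag shared by threads

void MarketUpdateLoop()
{
    updateDone = false;
    while (!updateDone) {
        // DrawMarketRow();       // hypothetical driver call per iteration
        // updateDone = true;     // set when the last row is drawn
    }
}

void RivalThread()
{
    updateDone = false;  // stepping on the flag mid-update: the loop above
}                        // never exits, running the same gates flat out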
[This is consistent because at least half the time the card resets (rather than a full system lockup), Windows decides that EVE is not responding. If an EVE thread is in an infinite loop writing to the card, EVE can't acknowledge, and Windows will see it as not responding.]
Obviously there is a lot of speculation here, but I would give pretty good odds this whole problem is a thread bug introduced in Trinity v1.1.
Quote: However this would NOT just affect 8800 cards. But I don't know what you mean by FPS-limiting devices on the silicon; I doubt this is the case. If it was, you would not get linear performance increases overclocking the cards; even the extreme overclockers have not run into said issues.
Other cards are affected if you look at the Windows forum threads; the 8700/8800s just seem to have more problems. Different cards and cooling systems might be more susceptible, or have better specs, if the overdriving is only marginally out of spec.
My point about FPS was just that when speccing maximum loads for the card, the vendor will probably define them in terms of general things like FPS, because those are things the client application can control. IOW, defining probability distributions over picoamps of leakage current wouldn't be very useful.
|

Anatal Klep
|
Posted - 2008.05.07 20:51:00 -
[42]
Originally by: Zytrel
Originally by: Anatal Klep Advanced settings ... where? I can't find anything to do it.
ESC -> Display & Graphics -> Display setup section -> check "advanced settings" -> set "present interval" (last option) to "interval one"
Ah, silly me. I was looking in the NVIDIA driver and Vista for direct card control. Thanks, I'll try that.
Quote: Alternatively, you could just force vsync via your gfx drivers.
I couldn't find any place to do that.
|

Arana Tellen
Gallente The Blackguard Wolves Black Star Alliance
|
Posted - 2008.05.07 21:59:00 -
[43]
Originally by: Anatal Klep
True, the vendor might screw up the cooling. Also, NVIDIA may have specced the card incorrectly and CCP is actually within spec. That does not seem to be the case here, though, since there were no problems prior to Trinity 1.1. IOW, something CCP did is causing the problem. [snip]
A gfx card/CPU should run without incident at 100% load for a reasonable amount of time. Trinity moved more of the work onto the GPU; before then most of it was done by the CPU. The better graphics are causing cards that are either poorly designed or in an environment without sufficient cooling to fail. That is the most logical explanation. ---------------------------------
Oh noes!
Originally by: CCP Greyscale *moderated - mother abuse - Mitnal*
|

Hugh Ruka
Caldari Exploratio et Industria Morispatia
|
Posted - 2008.05.07 22:10:00 -
[44]
Originally by: Zytrel
[snip] Alternatively, you could just force vsync via your gfx drivers.
If you have vsync forced off in the Windows driver, anything set in the application preferences (i.e. EVE settings) will be ignored. So you still have to look at the card driver.
It used to be in the Nvidia control panel, in the Direct3D section somewhere, but I haven't had an Nvidia card installed for a few years :-) --- SIG --- Goumindong for CSM. |

Kagura Nikon
Minmatar Infinity Enterprises Odyssey.
|
Posted - 2008.05.07 23:03:00 -
[45]
Originally by: Anatal Klep
Originally by: Niraia EVE isn't damaging your graphics card, your lack of cooling is. Whether that's due to a design flaw, you living in a warm climate, or an accumulation of fluff (and other suspicious-looking hair) on your heat sink, it certainly isn't a problem with EVE. That's silly. This is your problem, and you've already listed the only real effective solutions (namely increasing cooling and reducing the GPU clock rate).
How do you account for the fact that the same computers, cards, and environments had no problems prior to v1.1 without any adjustments?
In the end the heat produced on the card is a function of how much exercise the gates are getting. Clock rate and cooling specs for cards and computers are based on average sustained loads defined by the card vendors. Applications are not supposed to exceed those load specs. If the application exceeds those loads on a sustained basis, the card will overheat. 8700 and 8800 are high end cards so if EVE is exceeding the load specs for those cards, there is a serious problem.
And EVE is not even close to being a heavy application for a modern video card. One hour playing almost any other game will stress it more.
What can happen is a bug in the DRIVERS, triggered by using a certain set of features in a certain way. No piece of software can damage the hardware but the drivers themselves. ------------------------------------------------- If brute force doesn't solve your problem... you are not using enough
|

Kagura Nikon
Minmatar Infinity Enterprises Odyssey.
|
Posted - 2008.05.07 23:05:00 -
[46]
Originally by: Eleana Tomelac
I've never used SLI, but couldn't you dedicate one board to the AA and the other to the rest? [snip]
That is not how SLI works, and even less how AA works. ------------------------------------------------- If brute force doesn't solve your problem... you are not using enough
|

Grytok
moon7empler Ev0ke
|
Posted - 2008.05.07 23:42:00 -
[47]
EVE is not damaging your GPUs. It's YOU who damages your GPUs, by forcing AA through the driver, which is not supported by the client.
|

Arana Tellen
Gallente The Blackguard Wolves Black Star Alliance
|
Posted - 2008.05.08 00:06:00 -
[48]
Originally by: Grytok EVE is not damaging your GPUs. It's YOU who damages your GPUs, by forcing AA through the driver, which is not supported by the client.
? AA damaging a card? lol. ---------------------------------
Oh noes!
Originally by: CCP Greyscale *moderated - mother abuse - Mitnal*
|

Kagura Nikon
Minmatar Infinity Enterprises Odyssey.
|
Posted - 2008.05.08 00:47:00 -
[49]
Please, people, stop posting your guesswork. Wrong information hurts more than it helps. If you don't know how a graphics system works, please don't post theories.
EVE stresses a video card very little, VERY little. The only theoretical problem I can think of, based on how my 8800 behaves, is exactly that it operates too long at a LOW level. Such a low level that the card reduces the fan speed, but still enough that heat builds up if your case is badly ventilated. I noticed my 8800 gets a bit hot, but as soon as I add MORE load to it and the cooler kicks in, all goes back to normal. Hypothetically this could cause some issues in a very badly ventilated case.
Just keep your PC well ventilated, or use any of the several card setup tools around to change the threshold at which the card fan kicks in. The fan speed is controlled by the power draw of the card, not the temperature it is at, so if it's in 3D mode but under very low load, the fan will barely run; a toy contrast of the two policies is sketched below.
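To make the load-versus-temperature point concrete, a toy sketch (numbers and curve entirely invented; real fan controllers live in firmware):

// Load-based policy (what the post describes): barely spins at low load,
// even if heat is quietly building up in a poorly ventilated case.
int FanPercentFromLoad(int loadPercent)
{
    return loadPercent < 20 ? 30 : 100;
}

// Temperature-based policy (what you would rather have).
int FanPercentFromTemp(int tempC)
{
    if (tempC < 50) return 30;                    // cool: stay quiet
    if (tempC < 80) return 30 + (tempC - 50) * 2; // ramp from 30% to 90%
    return 100;                                   // hot: full speed
}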
------------------------------------------------- If brute force doesn't solve your problem... you are not using enough
|

Falkrich Swifthand
Caldari eNinjas Incorporated
|
Posted - 2008.05.08 09:07:00 -
[50]
Forums tend to concentrate problems. People who don't normally post on the forums will tend to when they want to complain.
From what I've seen, very few people have had problems with "EVE destroying video cards". I can't say that it's happened to me. Of the people that have had problems (crashes or overheating), for most of them it's turned out to be one of the following:
1: Forcing AA when it's not supported by EVE or the nVidia driver profile for EVE. This is just asking for a crash, but shouldn't damage the card.
2: They're using a laptop, which almost by definition has inadequate cooling, causing heat damage if you use it too much.
3: The graphics card or driver setting the card's fans to a speed based on the card's load instead of its temperature, allowing it to overheat. This one's just brain-dead, but hopefully up-to-date drivers sort it.
4: Over-clocking. Why is it so difficult for people to understand that over-clocking (and over-volting) produce more heat, make the component more susceptible to manufacturing defects, and may exceed the tolerances of the chip, any of which can cause anything from minor data corruption to crashes to in-chip shorting (permanent damage) to chip death? It invalidates your warranty for a reason.
5: A bad or under-specced PSU. Cheapskate. If your power supply can't handle the load placed on it by your system under full usage, then the voltages will fluctuate, doing all sorts of interesting things to your PC. See #4 (over-volting).
6: Bad luck. They were one of the <1% who get a legitimate card (or other PC component, e.g. CPU or RAM) failure.
I know it's a pain to check everything, but it really isn't EVE's fault, or the problems would be much more widespread.
My sig is not my sig. |

Koyama Ise
Caldari State War Academy
|
Posted - 2008.05.08 09:30:00 -
[51]
Originally by: Falkrich Swifthand 4: Over-clocking. Why is it so difficult for people to understand that over-clocking (and over-volting) produce more heat, make the component more susceptible to manufacturing defects, and may exceed the tolerances of the chip? [snip]
I believe over-volting reduces the heat generated but can break your hardware when it wants to jump. Also, I'm running premium graphics on my HD 3850 with bloom on high, and it's overclocked (GPU: 720 MHz, RAM: 870 MHz vs stock GPU: 600 MHz, RAM: 700 MHz), and it runs at around 50 °C (122 °F for all you backwards people) with stock cooling. -------- Yes, I know I'm an alt, what are you going to do about it? |

Kagura Nikon
Minmatar Infinity Enterprises Odyssey.
|
Posted - 2008.05.08 10:13:00 -
[52]
Originally by: Koyama Ise
I believe over-volting reduces the heat generated but can break your hardware when it wants to jump. [snip]
Overvolting increases power usage, and by the laws of thermodynamics this directly translates into increased temperature.
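For reference, the standard first-order CMOS dynamic-power relation (a textbook result, nothing EVE-specific):

P_dyn ≈ α · C · V² · f

where α is the switching activity, C the switched capacitance, V the core voltage, and f the clock frequency. Power, and therefore heat, grows with the square of the voltage, which is why over-volting runs hot.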
AA has NOTHING to do with it, people!!
A game's implementation has nothing to do with AA; AA is not implemented by the game, it's 100% in the drivers. When a game says it does not support AA, it means it uses a few techniques that usually rule out AA, since they can create awful visual effects; so the driver or the game developer usually disables the "hint AA level" feature that can be called from the application, just to be on the safe side. That's because 99% of users are dumb and will just blame everyone and their mother if they see graphical glitches from activating AA in-game. But AA cannot and will not ever break your card, no matter how your game is programmed.
FSAA is simply rendering the scene at a much higher resolution and then scaling it down. The card can and will do it without even informing the game that it is doing it. You can use AA even on games that were programmed before AA existed.
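To show how simple the scale-down step is, a minimal sketch of the box filter behind ordered-grid supersampling (illustrative only; real drivers do this, and smarter multisample variants, in hardware):

#include <cstdint>
#include <vector>

struct Pixel { uint8_t r, g, b, a; };

// Average each 2x2 block of the supersampled (2w x 2h) image down to
// one pixel of the final (w x h) image.
std::vector<Pixel> Downsample2x(const std::vector<Pixel>& hi, int w, int h)
{
    std::vector<Pixel> lo(w * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            int s[4] = {0, 0, 0, 0};
            for (int dy = 0; dy < 2; ++dy)
                for (int dx = 0; dx < 2; ++dx) {
                    const Pixel& p = hi[(2 * y + dy) * (2 * w) + (2 * x + dx)];
                    s[0] += p.r; s[1] += p.g; s[2] += p.b; s[3] += p.a;
                }
            lo[y * w + x] = { uint8_t(s[0] / 4), uint8_t(s[1] / 4),
                              uint8_t(s[2] / 4), uint8_t(s[3] / 4) };
        }
    return lo;
}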
Again, whether the card can do FSAA or not is 100% independent of the game. The only thing the way a game is made can cause is screwing up the image (if you use deferred rendering, for example, you will apply FSAA to the geometry buffer as well as the lighting info buffer, resulting in completely !@#!@#!@ colors and shadows). ------------------------------------------------- If brute force doesn't solve your problem... you are not using enough
|

ZelRox
Reikoku Band of Brothers
|
Posted - 2008.05.08 10:26:00 -
[53]
That aside, I still want to know why the login screen and character selection screen throw my CPU and GPU into a hissy fit. Usage goes to 100% on both, and so does the temp. Why...? ----------------------
BiH 4tw |

Disteeler
Segunda Fundacion T e r c i o s
|
Posted - 2008.05.08 12:46:00 -
[54]
Edited by: Disteeler on 08/05/2008 12:48:39
Maybe it's because EVE is one of the few games using all the shader processing power of modern cards (Shader Model 3.0), thus requiring better-cooled units. If the card burns under those conditions, it's the manufacturer's fault, not EVE's. It's the only explanation I can think of for this issue.
Also, see this:
Sig by Black Necris |

Seishi Maru
Infinity Enterprises Odyssey.
|
Posted - 2008.05.08 12:53:00 -
[55]
Originally by: Disteeler
Maybe it's because EVE is one of the few games using all the shader processing power of modern cards (Shader Model 3.0), thus requiring better-cooled units. [snip]
Wrong. Shader Model 3 has nothing to do with stressing the card. Even if you use only Shader Model 1.0 shaders, they will be processed by the same units that process the SM3 ones. SM3 just means that your card can do branches and a few other instructions that SM2 cards cannot. For an 8800 card, running an SM3 or SM2 shader is exactly the same usage of its gates and processing units.
|

OneSock
Crown Industries space weaponry and trade
|
Posted - 2008.05.08 13:10:00 -
[56]
Originally by: Kagura Nikon
And EVE is not even close to being a heavy application for a modern video card. One hour playing almost any other game will stress it more.
That's not exactly how it works, though. Each frame may be simple, but that does not mean the GPU is ever idle. When docked, my GFX card renders almost 200 fps.
Originally by: Kagura Nikon
What can happen is a bug in the DRIVERS, triggered by using a certain set of features in a certain way. No piece of software can damage the hardware but the drivers themselves.
A driver issue, or an EVE code issue. For example, on my rig, when I click and hold my mouse button (dragging an item or a window scroll bar) in Eve, I hear the fans on my PC (whether CPU or GFX, I'm not sure) spin faster. And when I let go of the mouse button it returns to normal. Isn't that more than a bit odd? And it only happens in Eve.
|

Seishi Maru
Infinity Enterprises Odyssey.
|
Posted - 2008.05.08 13:23:00 -
[57]
Originally by: OneSock
That's not exactly how it works, though. Each frame may be simple, but that does not mean the GPU is ever idle. When docked, my GFX card renders almost 200 fps. [snip] When I click and hold my mouse button (dragging an item or a window scroll bar) in Eve, I hear the fans on my PC spin faster, and when I let go of the mouse button it returns to normal. [snip]
The GPU might be running full-time but not fully using its resources. Its architecture is way more complicated than a CPU's, since it has dozens of parallel pipelines and different stages that run at different speeds.
2D interfaces made using OpenGL or DX are usually heavy on the raster side of the card (especially if you have a very high resolution) but very light on memory access, texture fetches, filtering, etc. So your card may hit a bottleneck there but still have most of its core almost idle.
The mouse cursor is ALWAYS handled by the video card; no, it's not your CPU that moves it across the screen. So continuous interaction with the mouse causes continuous activity in the card driver. Depending on what else the card is doing, this may cause it to kick into its high-power-consumption mode.
In the end, the fact remains that there is NO limit, not even a HINT of one, in any graphics API on what you can ask for. There is no way you can stress your card too much, because the thing that limits when you can ask for a new frame or action is the card itself. When you write a simple example application for a GPU feature, you stress the card much more than any game does, since in those tests you don't give the card any break between frames (in games you spend time doing other things; see the sketch below). EVE's interface may, in the end, be quite heavy on the rasterizer, since the client itself doesn't do much besides rendering. But even so, it's a light game in terms of using the GPU as a whole, which balances out to no noticeably high thermal generation.
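A minimal sketch of that "break between frames" idea: a frame-rate cap that idles the CPU and GPU instead of re-rendering a static scene at 200+ fps (illustrative only; the loop body is a stand-in):

#include <chrono>
#include <thread>

void RunFrameLoop()
{
    using clock = std::chrono::steady_clock;
    const auto frameBudget = std::chrono::microseconds(16667); // ~60 fps
    auto next = clock::now();
    for (;;) {
        // UpdateAndRender();               // hypothetical per-frame work
        next += frameBudget;
        std::this_thread::sleep_until(next); // sleep off the rest of the frame
    }
}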
Of course, cards that come with some minor defect in their rasterizers might break under stress so focused on that part. But again, there is no way a game developer can control that. If you set out to write software that would burn a card, you would fail; you would succeed, maybe, only in burning cards that already had defects.
|

Vicious Phoenix
|
Posted - 2008.05.08 17:03:00 -
[58]
I have never crashed while playing EVE (fleet battles excluded, of course) and I have an 8800GTX. The key is to have it properly supported: I use a full tower case with a ton of fans and a 750W power supply.
CFW (Certified Forum Warrior) I kill people ingame too.
Originally by: CCP Tuxford I prefer dew over pepsi. I prefer beer over most things. Damn now I want beer.
|

Azuse
The Brotherhood Of The Blade Pure.
|
Posted - 2008.05.08 18:50:00 -
[59]
I've never had a blue screen with Vista, not even once. However, resets are occurring (i.e. the display device stops responding, so Vista restarts it), and it's bloody annoying but playable; since the OS keeps it running, all I get is momentary blips where my screen becomes garbled/black. Weirder still, in the time this has been happening I can't find any pattern, so I have no clue what's going on or how to reproduce it.
The one positive is that CCP finally signed up to the MS error reporting service a while back, and last week I got the option to send a report (every other time I just get the check-for-solution/close message). The data it gathered took over two and a half hours to send, so if a GM actually sees this, just look for the honking big one.
Interestingly, in the past few days since the patch it's only occurred twice, although it was never anywhere near the frequency you're describing. I suspect the alt+tab bug (the one where the GPU stops responding and only displays the desktop; you know, the one the last patch "fixed").
Temperature has been a long-standing issue with EVE, though. Running EVE will make your GPU 5-10 degrees hotter (the highest difference I've actually seen anyone note was 18), but it should still be well within the card's spec; the default alarm temp is, what, 115 on nvidia cards? Anyway, EVE has always made the GPU run hotter than other games, most notably when there's the least on screen and no throttling, but this is hardly a new phenomenon, as you can easily see if you go back through the forums, seeing as it's been reported repeatedly for over three years. Sure, it won't harm a card that's being cooled properly, but EVE is doing (or not doing) something no other game does, and always has been. It's not new, so unless 1.1 actually changed something there in the code (nice thought, but considering the other underlying problems persist, I doubt it), I really don't see how this particular problem is related.
Unless EVE is finally overheating the cores with its old code, ofc... -------------------------
|

Emperor D'Hoffryn
No Quarter. Vae Victis.
|
Posted - 2008.05.08 19:56:00 -
[60]
Edited by: Emperor D'Hoffryn on 08/05/2008 19:58:56 I've been playing EVE on an 8800GTX for over a year now.
The current "official" drivers from Nvidia, released in December (come on, Nvidia, you can do better), have caused me problems in Vista when playing in windowed mode. It usually takes running multiple clients, and it doesn't always happen, but EVE will start to conflict with Aero, causing GPU resets, screen flicker, and **** poor FPS.
Disabling Aero permanently is easy: Control Panel -> Admin Tools -> Services, then find "Desktop Window Manager Session Manager", double-click it, and hit the Stop button. See if your situation improves; if so, choose "Disabled" from the startup type dropdown menu.
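If you'd rather do the same from an elevated command prompt, these should work too (UxSms is that service's short name on a stock Vista install; check with "sc query uxsms" first if unsure):
net stop uxsms
sc config uxsms start= disabled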
Generally I have found, with my 8800GTX and the current driver, that playing full-screen is ALWAYS more stable than windowed, even if you are only running a single client windowed. I can play for hours on end with no problems full-screen; windowed, I crash about once an hour, usually on warping in to a gate. However, I have never had a blue screen or forced reboot.
Just FYI, a blue screen in Vista is serious business; it means a serious hardware/driver-based error occurred. It does not mean EVE just mucked up.
Originally by: Meridius Dex I could actually fit a Thorax WITH LASERS and get better DPS, better speed, better tank and - wait for it - better cap stability
|