
Anatal Klep
|
Posted - 2008.05.06 16:17:00 -
[1]
In the Windows forums there are several threads describing graphics card resets, EVE crashes, and system lockups that started with the 1.1 patch. None of these problems were observed with significant frequency prior to the 1.1 patch. Nobody reports comparable problems playing other MMORPGs.
Most players reporting such chronic problems have GeForce 8700/8800 graphics cards. One fairly common complaint is that the problems seem to become more frequent over time. Reducing clock rate and increasing cooling sometimes resolves the problems for a while, but then they return a few days later.
Other players have replaced graphics cards and the problems go away for a couple of days, only to gradually return again.
In my own experience I have noticed the same increasing frequency. There has also been an increase in the frequency of the Blue Screens where Vista shuts down the system to prevent damage. (Presumably Vista is monitoring the temperature, which is critical in laptops like mine.)
I am now getting graphics card resets as frequently as every two minutes and the game is essentially unplayable. In the past couple of days I have started to see Blue Screens for NMI memory parity checks that are also increasing in frequency. When I run diagnostics after a reboot, everything passes, suggesting an intermittent temperature problem.
This all screams that some change made in v1.1 is over-taxing the graphics cards, causing them to overheat and, consequently, increasing the possibility of permanent incremental damage.
|

Aldelphius
Carbide Industries
|
Posted - 2008.05.06 16:58:00 -
[2]
Although I'm not going to say this problem isn't happening, I will say that the cards I have seen reporting this problem seem to be in laptops or midrange graphics cards. The people who say this is a card quality issue are usually using 8800 GTXs, or other equivalent high-end cards that have integrated heatsinks for the RAM. Disabling station environment seems to stop the crashes and reduce temps as well.
Just an observation I've made from reading all the threads.
|

Niraia
Gallente GREY COUNCIL Hydra Alliance
|
Posted - 2008.05.06 17:40:00 -
[3]
EVE isn't damaging your graphics card, your lack of cooling is. Whether that's due to a design flaw, you living in a warm climate, or an accumulation of fluff (and other suspicious-looking hair) on your heat sink, it certainly isn't a problem with EVE. That's silly. This is your problem, and you've already listed the only real effective solutions (namely increasing cooling and reducing the GPU clock rate). |

Chi Quan
Perkone
|
Posted - 2008.05.06 17:50:00 -
[4]
maybe you have a heat/power source issue. i run a 7600 with xp sp2 and rather mediocre hardware, but i have never ever had a bsod related to the graphics card that was NOT heat/power related (in fact i have not had a single bsod for a year, but that's ot).
the problems you describe sound like that. replacing the graphics card makes ppl "dust off all the other stuff in there", thus temporarily fixing the issue. maybe it's a good idea to have some graphics benchmark run continuously and see if it brings the pc down. ---- "i-r-l33t3r-than-u 'cause ju is a n00b" is not a valid argument, it just shows you don't have any |

Kazuo Ishiguro
House of Marbles Zzz
|
Posted - 2008.05.06 18:23:00 -
[5]
Turning on vsync is probably a good idea, as it eliminates the problems caused by the station environments. |

Draygo Korvan
Merch Industrial GoonSwarm
|
Posted - 2008.05.06 18:43:00 -
[6]
Originally by: Kazuo Ishiguro Turning on vsync is probably a good idea, as it eliminates the problems caused by the station environments.
I'll give you a quick idea of what vsync is and what it does.
Let's say person 1 is playing a game with vsync off and getting 320 frames a second (in theory), and person 1 is using a 75 Hertz monitor.
Then we have person 2, playing the same game with vsync on, getting 75 frames a second, also using the same monitor.
Now for real FPS: your framerate is capped by the ability of the monitor to display it. 75 Hz monitors can only display 75 frames per second, and 60 Hz monitors can only display 60 frames per second.
So essentially person 1 is viewing the game at 75 frames a second while his GPU is turning out frames at a rate of 320 frames a second.
Person 2's GPU is only outputting frames at 75 frames per second, doing significantly less work.
vsync caps the GPU's fps at your monitor's refresh rate (measured in Hertz). It also has the added benefit of preventing tearing (where the top half of your monitor displays a different frame than your bottom half). |
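To put rough numbers on the explanation above, here is a small Python sketch; the 320 fps and 75 Hz figures are the hypothetical ones from the example, not measurements.

# Illustrative arithmetic for the vsync example above (320 fps uncapped, 75 Hz monitor).
refresh_hz = 75          # frames the monitor can actually show per second
uncapped_fps = 320       # frames the GPU produces per second with vsync off

displayed = min(uncapped_fps, refresh_hz)
wasted = uncapped_fps - displayed        # frames rendered but never fully shown

print("Frames displayed per second:", displayed)        # 75
print("Extra frames rendered per second:", wasted)      # 245
print("With vsync on, the GPU renders only", refresh_hz, "frames per second.")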

Night Breed
Minmatar The Organisation.. Dominatus Phasmatis
|
Posted - 2008.05.06 19:15:00 -
[7]
Edited by: Night Breed on 06/05/2008 19:16:43 I for one am not having any problem at all. I just recently (about 3 days ago) turned on HDR lighting and shadow quality to high. I also tried the tweak of using HDR+AA on my 7900 GT. I am not having any problems at all with the recent patch. The only thing is that HDR+AA does not work because I have a 7900 GT.
But HDR lighting does. But like I have said, I'm not having any BSODs or skyrocketing temperatures. I stay logged in at least 6 hours a day non-stop and I'm having absolutely no problems at all. So it must be either your card series or the way the cooling in your system is set up.
My specs : AMD athlon xp X2 4200+ dual core (think thats the exact name) 2 Gigs of ram (unknown speed and freq) Creative X-FI extreme gamer. Nvidia Geforce 7900 GT (256 MB ram) Windows XP Pro
As I have said, I'm not having any problems at all and I have an outdated video card. So your problem again lies with your series of graphics card or your cooling.
P.S. For cooling I have one 80 mm fan in the back (think that's the size), the Zalman heatsink (the one with 2 copper rings) and the PSU fan. Plus my case is open for more ventilation (people might say that's a bad idea but I have not had any problems).
P.P.S. I normally have 2 clients running at the same time on my computer also.
|

Anatal Klep
|
Posted - 2008.05.06 20:42:00 -
[8]
Originally by: Niraia EVE isn't damaging your graphics card, your lack of cooling is. Whether that's due to a design flaw, you living in a warm climate, or an accumulation of fluff (and other suspicious-looking hair) on your heat sink, it certainly isn't a problem with EVE. That's silly. This is your problem, and you've already listed the only real effective solutions (namely increasing cooling and reducing the GPU clock rate).
How do you account for the fact that the same computers, cards, and environments had no problems prior to v1.1 without any adjustments?
In the end the heat produced on the card is a function of how much exercise the gates are getting. Clock rate and cooling specs for cards and computers are based on average sustained loads defined by the card vendors. Applications are not supposed to exceed those load specs. If the application exceeds those loads on a sustained basis, the card will overheat. 8700 and 8800 are high end cards so if EVE is exceeding the load specs for those cards, there is a serious problem.
|
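The claim that heat tracks how hard the gates are being exercised follows from the usual first-order CMOS dynamic power relation, P ~ alpha * C * V^2 * f (switching activity times switched capacitance times voltage squared times clock frequency). A minimal Python sketch with made-up numbers, purely to show the proportionality:

# First-order CMOS dynamic power: P ~ alpha * C * V^2 * f.
# alpha is the fraction of gates switching each cycle (the "exercise" above).
# All figures below are invented solely to illustrate the proportionality.
def dynamic_power_w(alpha, capacitance_f, voltage_v, freq_hz):
    return alpha * capacitance_f * voltage_v ** 2 * freq_hz

C = 1e-9      # switched capacitance in farads (illustrative)
V = 1.2       # core voltage in volts (illustrative)
f = 500e6     # 500 MHz core clock (illustrative)

idle = dynamic_power_w(0.10, C, V, f)    # light scene: few gates toggling
heavy = dynamic_power_w(0.60, C, V, f)   # demanding scene: many gates toggling
print("Heat produced, heavy vs idle: %.1fx" % (heavy / idle))   # 6.0x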

Anatal Klep
|
Posted - 2008.05.06 21:07:00 -
[9]
Originally by: Kazuo Ishiguro Turning on vsync is probably a good idea, as it eliminates the problems caused by the station environments.
I agree, that seems like a good idea. However, I can't find anywhere to set it. I've looked all over the Vista control panel and the NVIDIA control panel and can't find anything that looks like it would do the job. (Naturally there is no 'vsync' in any of the Help, or even 'station'.) Assuming it can be adjusted for my laptop, do you know the most likely place to look for it, or a likely synonym?
|

Eleana Tomelac
Gallente Through the Looking Glass
|
Posted - 2008.05.06 21:09:00 -
[10]
Originally by: Anatal Klep Clock rate and cooling specs for cards and computers are based on average sustained loads defined by the card vendors. Applications are not supposed to exceed those load specs. If the application exceeds those loads on a sustained basis, the card will overheat. 8700 and 8800 are high end cards so if EVE is exceeding the load specs for those cards, there is a serious problem.
Ok, find me those specs... Simple as this.
For the heating process, I know the basics, the gates switching causing more heat than when idle and so on.
So, eve makes the card work too much? There are a few questions that pop into my mind... Why don't the graphics drivers throttle this? Why doesn't the card bios throttle this?
Why would the graphics card designer trust the software enough to let this happen?
Haven't we seen in the past viruses burning displays, viruses wrecking hard disk drives, viruses clearing the bios? Failsafes have been added to everything to make sure the components won't receive damage or can be reset (for the bios data). Knowing this, would manufacturers have left a security hole allowing their products to be burned that easily? I haven't heard of video card eater viruses yet...
So, eve would be the first of this kind? Huge news...
Finally, I come back to this:
- It's nonsense
- It is bad design, mainly in cooling
-- Pocket drone carriers (tm) enthousiast !
Assault Frigates MK II |

Zytrel
Purgatorial Janitors Inc.
|
Posted - 2008.05.06 21:10:00 -
[11]
Originally by: Anatal Klep How do you account for the fact that the same computers, cards, and environments had no problems prior to v1.1 without any adjustments?
In the end the heat produced on the card is a function of how much exercise the gates are getting. Clock rate and cooling specs for cards and computers are based on average sustained loads defined by the card vendors. Applications are not supposed to exceed those load specs. If the application exceeds those loads on a sustained basis, the card will overheat. 8700 and 8800 are high end cards so if EVE is exceeding the load specs for those cards, there is a serious problem.
While I can confirm that EVE's performance since v1.1 has noticeably decreased - especially for the station environments/multiple clients - and the card also gets a bit hotter than it used to, the problem of your card overheating is most likely caused by the cheap and insufficient stock cooling which has sadly become the standard these days.
I can almost guarantee that any heat issues will go away by replacing the stock cooler with a proper one (Zalman, Thermalright, etc.).
Given you didn't do any manual overclocking, no application should be able to overheat your card, even under 100% load, ever. Period.
Don't get me wrong though, I also get the feeling that something went wrong with v1.1 and I would love for CCP to look into the issue, but for performance reasons.
regards, zytrel.
|

Anatal Klep
|
Posted - 2008.05.06 23:17:00 -
[12]
Originally by: Eleana Tomelac
Ok, find me those specs... Simple as this.
Every card manufacturer publishes them. I'm too lazy to go to the NVIDIA site to look for them, but I'm sure they are there.
Quote: Why don't the graphics drivers throttle this? Why doesn't the card bios throttle this?
They can't. Device drivers have to be reentrant (i.e., called a second time while still processing from the first call) and they tend to have a large number of essentially independent API functions that can be called in parallel. So an application is only limited by the main ALU clock rate in throwing stuff at the card. A tight loop invoking the same API function could easily cause local overheating. The best one can do is monitor the temperature and shut down if the card gets too hot.
That usually works but it is very indirect, especially if the client is looping over the same operations so the stress is localized in the card's gates. Every time the card overheats there is a small probability of at least a partial failure. For example, a land may partially melt. That increases the resistance slightly, so the next time it produces more heat. If the board is overstressed enough times, the gate can fail permanently. IOW, temperature cutoffs are usually sufficient for one-time unusual situations, but repeated abuse will eventually cause problems.
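As a concrete illustration of the "monitor the temperature and shut down" fallback, here is a minimal watchdog sketch in Python. It assumes a driver package that ships the nvidia-smi tool (which the 2008-era setups discussed in this thread would not have had), and the 95 °C threshold is illustrative, not a vendor figure.

# Minimal GPU temperature watchdog (sketch only; assumes nvidia-smi is available).
import subprocess
import time

THRESHOLD_C = 95  # illustrative cutoff, not a vendor specification

def gpu_temperature_c():
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"])
    return int(out.decode().strip().splitlines()[0])

while True:
    temp = gpu_temperature_c()
    if temp >= THRESHOLD_C:
        print("GPU at %d C - above %d C, stop the client and let it cool" % (temp, THRESHOLD_C))
        break
    time.sleep(5)   # coarse polling; the real cutoff lives in driver/firmware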
Quote: Haven't we seen in the past viruses burning displays, viruses wrecking hard disk drives, viruses clearing the bios?
We sure have, but doing it to a graphics card is tricky. How do you overstress the card without it being visible in the display? You also have to do it long enough for the card to overheat. Most people would reboot. If not, the temperature cutoff would probably save the board unless it was done many times. If one wants to be malicious, there are much less problematic ways.
|

Anatal Klep
|
Posted - 2008.05.06 23:27:00 -
[13]
Originally by: Zytrel
I can almost guarantee that any heat issues will go away by replacing the stock cooler with a proper one (Zalman, Thermalright, etc.).
I could buy that (i.e., that it is a computer vendor problem of underdesign) if there had always been problems. But these problems only started with v1.1.
Quote: Given you didn't do any manual overclocking, no application should be able to overheat your card, even under 100% load, ever. Period.
Sorry, but that is simply not true. The drivers have to be reentrant and support parallel invocation. A tight loop can easily overheat the card if run long enough. All they can do is provide a temperature cutoff.
Quote: EDIT: For the vsync option you'll need to enable "advanced settings" and set "present interval" to "interval one".
Advanced settings ... where? I can't find anything to do it.
|

Arana Tellen
Gallente The Blackguard Wolves Black Star Alliance
|
Posted - 2008.05.06 23:42:00 -
[14]
Software damaging cards would require code that affects the BIOS of the card. Executing any normal code should not damage the card. ---------------------------------
Oh noes!
Originally by: CCP Greyscale *moderated - mother abuse - Mitnal*
|

Arana Tellen
Gallente The Blackguard Wolves Black Star Alliance
|
Posted - 2008.05.06 23:43:00 -
[15]
Edited by: Arana Tellen on 06/05/2008 23:42:58
Originally by: Draygo Korvan
Originally by: Kazuo Ishiguro Turning on vsync is probably a good idea, as it eliminates the problems caused by the station environments.
I'll give you a quick idea of what vsync is and what it does.
Let's say person 1 is playing a game with vsync off and getting 320 frames a second (in theory), and person 1 is using a 75 Hertz monitor.
Then we have person 2, playing the same game with vsync on, getting 75 frames a second, also using the same monitor.
Now for real FPS: your framerate is capped by the ability of the monitor to display it. 75 Hz monitors can only display 75 frames per second, and 60 Hz monitors can only display 60 frames per second.
So essentially person 1 is viewing the game at 75 frames a second while his GPU is turning out frames at a rate of 320 frames a second.
Person 2's GPU is only outputting frames at 75 frames per second, doing significantly less work.
vsync caps the GPU's fps at your monitor's refresh rate (measured in Hertz). It also has the added benefit of preventing tearing (where the top half of your monitor displays a different frame than your bottom half).
Actually the software just throws away the extra frames it renders; it still does the same work. ---------------------------------
Oh noes!
Originally by: CCP Greyscale *moderated - mother abuse - Mitnal*
|

Katana Seiko
Gallente
|
Posted - 2008.05.07 00:23:00 -
[16]
Originally by: Night Breed
My specs : AMD athlon xp X2 4200+ dual core (think thats the exact name) 2 Gigs of ram (unknown speed and freq) Creative X-FI extreme gamer. Nvidia Geforce 7900 GT (256 MB ram) Windows XP Pro
It should be "AMD Athlon64 X2 4200+". There's no Athlon XP with two cores, and the X2 already says "dual core".
Well, a game like EVE isn't able to destroy anything but files. Software can't damage hardware. The BIOS is software, and it is the software that makes your hardware work. So if the BIOS is damaged, the hardware isn't necessarily damaged.
HDR is accessing the VRAM a little bit more often than Windows. That means that your VRAM will get a little bit warmer than usual. If you don't do anything to cool down your stuff, it's your fault. |

Arana Tellen
Gallente The Blackguard Wolves Black Star Alliance
|
Posted - 2008.05.07 00:29:00 -
[17]
Originally by: Katana Seiko
Originally by: Night Breed
My specs : AMD athlon xp X2 4200+ dual core (think thats the exact name) 2 Gigs of ram (unknown speed and freq) Creative X-FI extreme gamer. Nvidia Geforce 7900 GT (256 MB ram) Windows XP Pro
It should be "AMD Athlon64 X2 4200+". There's no Athlon XP with two cores, and the X2 already says "dual core".
Well, a game like EVE isn't able to destroy anything but files. Software can't damage hardware. The BIOS is software, and it is the software that makes your hardware work. So if the BIOS is damaged, the hardware isn't necessarily damaged.
HDR is accessing the VRAM a little bit more often than Windows. That means that your VRAM will get a little bit warmer than usual. If you don't do anything to cool down your stuff, it's your fault.
The game could technically be accessing the bios and messing with voltages/clocks but I doubt the 8800 bios has voltage options higher than standard... I don't think mine did. |

Draygo Korvan
Merch Industrial GoonSwarm
|
Posted - 2008.05.07 04:24:00 -
[18]
Originally by: Arana Tellen
Actually the software just throws away the extra frames it renders; it still does the same work.
Wrong, please try again. |

Anatal Klep
|
Posted - 2008.05.07 05:26:00 -
[19]
Originally by: Arana Tellen Software damaging cards would require code that affects the BIOS of the card. Executing any normal code should not damage the card.
Where is this rumor coming from??? Note that the graphics card may not even have a BIOS! If it does have its own BIOS in firmware, then the device driver accesses that BIOS routinely, so there is nothing magic about it.
Among other things, the host computer is running at a GHz clock rate while the graphics card is running at a few hundred MHz. The device driver API functions are reentrant and designed to be accessed in parallel by multiple threads. So an application can throw a lot more at the card than it can handle for a sustained period.
All you need is a WHILE TRUE loop calling the right API function; the function will be called again while it is still processing the calls from previous iterations. Since it is the same function with the same inputs, current will be flowing through the same gates essentially continuously and that will produce local overheating.
[In practice, though, most heating damage occurs on power rails on the chip that carry a cumulative current for many gates active at the same time. Those rails are all over the chip and locally serve bunches of gates. The manufacturer depends on the expectation that, on average, only half the gates in a group will be on at the same time. (That expectation is quite normal for almost all commercial digital electronics.) Moral: there is a probability, albeit very small, that your card can be damaged by the sheer coincidence of having all of the gates in an artwork cluster on at the same time for each of many consecutively executed driver functions while a pretty innocuous application is running. IOW, by bad luck.]
It would be a daunting combinatorial problem to try to anticipate all the possible hot spots among billions of gates for various combinations of API functions being called at different frequencies so that one could use counters and timers on API calls to check for overloads. (Not to mention the overhead of doing so if it is done in the driver or the artwork real estate used if it is done in the hardware.) So the card vendor provides a few generic limits on things like FPS to limit liability and depends on a temperature sensor to protect the card when an application screws up or there is just bad luck. |
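For what it is worth, the "counters and timers on API calls" idea dismissed above would, in its crudest form, look something like the Python sketch below; the hard part the post describes - knowing which call patterns map to which physical hot spots - is exactly what this toy version ignores. The function name is hypothetical.

# Toy per-function call-rate limiter (illustrative only).
import time
from collections import defaultdict, deque

class CallRateLimiter:
    def __init__(self, max_calls_per_sec):
        self.max_calls = max_calls_per_sec
        self.history = defaultdict(deque)   # function name -> recent call timestamps

    def allow(self, func_name):
        now = time.monotonic()
        calls = self.history[func_name]
        while calls and now - calls[0] > 1.0:   # forget calls older than one second
            calls.popleft()
        if len(calls) >= self.max_calls:
            return False                        # throttle: too many calls this second
        calls.append(now)
        return True

limiter = CallRateLimiter(max_calls_per_sec=10000)
if limiter.allow("draw_station_interior"):      # hypothetical driver entry point
    pass                                        # forward the call to the hardware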

Cpt Cosmic
|
Posted - 2008.05.07 09:11:00 -
[20]
i love those people that always have only one answer for every problem "HEAT HEAT MIMIMI!"
those are those people that actually have NO CLUE.
a typical nvidia gpu is designed to handle temps up to 110 °C; when it reaches 110 °C it will SHUT DOWN by itself. it even lowers the clock automatically to not reach this point. this is something you can read out of the gpu's bios. and the ram of the gpu normally doesn't have any cooling because it will handle 200 °C easily.
|

Zytrel
Purgatorial Janitors Inc.
|
Posted - 2008.05.07 09:40:00 -
[21]
Originally by: Anatal Klep Advanced settings ... where? I can't find anything to do it.
ESC -> Display & Graphics -> Display setup section -> check "advanced settings" -> set "present interval" (last option) to "interval one"
Alternatively, you could just force vsync via your gfx drivers.
|

Siigari Kitawa
Gallente The Aduro Protocol
|
Posted - 2008.05.07 11:19:00 -
[22]
Actually, I will be very honest here. I have noticed a trend in 6800s I have used.
At an internet cafe I used to play at they had a very nice computer with a 6800 I played EVE on. Over time, the game screen would start to mess up, and all the contents of the viewing window would be shoved to the top center of the screen. Sometimes zooming would fix this, but not always. It almost always required a ctrl-q. It became very chronic to the point that I finally switched to a different computer.
Now, I'm at home and have been using this 6800 for about 5 months. SAME PROBLEM just occurred a few days ago. If it starts to become chronic we'll know there's a problem.
|

Seetesh
Caldari Pixels Docks
|
Posted - 2008.05.07 11:37:00 -
[23]
I lost one of my 8800GTX cards 2 months back because the cooling inside the case failed. It's not EVE, I promise, just sort out some nice cooling gear, preferably liquid.
|

Eleana Tomelac
Gallente Through the Looking Glass
|
Posted - 2008.05.07 12:08:00 -
[24]
Originally by: Cpt Cosmic i love those people that always have only one answer for every problem "HEAT HEAT MIMIMI!"
those are those people that actually have NO CLUE.
a typical nvidia gpu is designed to handle temps up to 110 °C; when it reaches 110 °C it will SHUT DOWN by itself. it even lowers the clock automatically to not reach this point. this is something you can read out of the gpu's bios. and the ram of the gpu normally doesn't have any cooling because it will handle 200 °C easily.
Sounds like you have no clue either on this subject. No one has a clue yet; we're just exploring possibilities.
Also, while the GPU and memory design would support high temperatures, the cards are not made of just those. Heat could cause some other component to fail, and if something else fails, things can happen. It depends a lot on which other components on the board would fail at high temperature. So it's not just an nvidia issue; they just provide the GPU and a few other parts to card manufacturers.
For example, why is it that since I added cooling to my 7950GT to keep it under 60-65°C (typical temps with vsync are 55°C, and heavy environments like missions with many ships and large fleets in PvP get me higher), I no longer have crashes at all? Why, when it reaches 80°C, does it begin doing strange things and I can get freezes? So, in this case, cooling helps if added soon enough.
So, looking just at the GPU specs or memory specs or any single component's specs isn't helping. I guess someone cut costs on other components and they won't withstand such heat levels without help.
PS: most cases are badly cooled; proper case cooling often helps cool everything, including the things we couldn't pinpoint yet. -- Pocket drone carriers (tm) enthousiast !
Assault Frigates MK II |

McBrite
Clown College Clown Punchers Syndicate
|
Posted - 2008.05.07 13:25:00 -
[25]
So could somebody finally say definitively whether v-sync will reduce load or not?
|

Arana Tellen
Gallente The Blackguard Wolves Black Star Alliance
|
Posted - 2008.05.07 13:53:00 -
[26]
So it does, just checked on my zalman power meter. 197 -> 150W, you learn something every day. ---------------------------------
Oh noes!
Originally by: CCP Greyscale *moderated - mother abuse - Mitnal*
|

Arana Tellen
Gallente The Blackguard Wolves Black Star Alliance
|
Posted - 2008.05.07 13:55:00 -
[27]
Originally by: Anatal Klep
Originally by: Arana Tellen Software damaging cards would require code that affects the BIOS of the card. Executing any normal code should not damage the card.
Where is this rumor coming from??? Note that the graphics card may not even have a BIOS! If it does have its own BIOS in firmware, then the device driver accesses that BIOS routinely, so there is nothing magic about it.
Among other things, the host computer is running at a GHz clock rate while the graphics card is running at a few hundred MHz. The device driver API functions are reentrant and designed to be accessed in parallel by multiple threads. So an application can throw a lot more at the card than it can handle for a sustained period.
All you need is a WHILE TRUE loop calling the right API function; the function will be called again while it is still processing the calls from previous iterations. Since it is the same function with the same inputs, current will be flowing through the same gates essentially continuously and that will produce local overheating.
[In practice, though, most heating damage occurs on power rails on the chip that carry a cumulative current for many gates active at the same time. Those rails are all over the chip and locally serve bunches of gates. The manufacturer depends on the expectation that, on average, only half the gates in a group will be on at the same time. (That expectation is quite normal for almost all commercial digital electronics.) Moral: there is a probability, albeit very small, that your card can be damaged by the sheer coincidence of having all of the gates in an artwork cluster on at the same time for each of many consecutively executed driver functions while a pretty innocuous application is running. IOW, by bad luck.]
It would be a daunting combinatorial problem to try to anticipate all the possible hot spots among billions of gates for various combinations of API functions being called at different frequencies so that one could use counters and timers on API calls to check for overloads. (Not to mention the overhead of doing so if it is done in the driver or the artwork real estate used if it is done in the hardware.) So the card vendor provides a few generic limits on things like FPS to limit liability and depends on a temperature sensor to protect the card when an application screws up or there is just bad luck.
Well this all depends on how the company works out TDP and whether they have put sufficient cooling on the card for very heavy loads, along with airflow conditions inside the case.
However, this would NOT just affect 8800 cards. But I don't know what you mean by FPS-limiting devices on the silicon; I doubt this is the case. ---------------------------------
Oh noes!
Originally by: CCP Greyscale *moderated - mother abuse - Mitnal*
|

Eleana Tomelac
Gallente Through the Looking Glass
|
Posted - 2008.05.07 14:04:00 -
[28]
Originally by: McBrite So could somebody finally say definitively whether v-sync will reduce load or not?
Vsync can reduce load, but not in all cases.
It will work if more frames are being rendered than the limit your vsync option sets.
So, for light environments, where many cards can produce 200 fps, the workload will be limited to rendering only what your vsync asks for, which is from 50 to 120 (some LCD screens go as low as 50, but as they are not black between frames, it's not an issue for ocular fatigue), usually on interval one (the screen refresh rate).
You could set the vsync option to limit frames to something as low as 30 fps max; you will usually not notice the frames at that speed (note that showing the same image 3 times on a 90 Hz screen at 30 fps is a totally different thing from having a 30 Hz screen: the human eye catches the screen going black at 30 Hz, but it usually doesn't catch the tiny difference between two frames at 30 fps).
When you have more graphics-intensive situations, the framerate can drop under the vsync limit, making the graphics card work at 100% for a good part of the time you're in that environment. These situations would include missions with tons of ships and places with many ships (a PvP fleet or a popular station in empire). But for most of the time, including being in station, the graphics card will not work much and will be much cooler (and the fan speed should drop, easing your ears).
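A small Python sketch of the "not in all cases" point, with invented figures: a light scene the card could render at 200 fps and a heavy one it can only manage at 40 fps, both behind a 60 Hz vsync cap.

# Illustrative only: how much work vsync saves depends on the scene.
def gpu_duty_cycle(capable_fps, vsync_cap_fps):
    """Fraction of time the GPU is busy when capped at the refresh rate."""
    if capable_fps <= vsync_cap_fps:
        return 1.0          # scene is heavier than the cap: vsync saves nothing
    return vsync_cap_fps / capable_fps

for scene, capable_fps in [("light station scene", 200.0), ("large fleet fight", 40.0)]:
    duty = gpu_duty_cycle(capable_fps, vsync_cap_fps=60.0)
    print("%s: GPU busy %.0f%% of the time" % (scene, duty * 100))
# light station scene: GPU busy 30% of the time
# large fleet fight:   GPU busy 100% of the time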
Little advice: choose your shadow setting (extreme costs a great deal) and anti-aliasing (in the video drivers, not in eve) carefully, or your graphics card will never be in a situation where it could render much more than vsync asks, a situation where it would cool down between stress periods.
And finally, about a good thing vsync brings forth :
Originally by: From an nvidia book for graphics developers For some applications, i.e., visual simulations, VSync is an absolute requirement to avoid tearing.
Tearing is when the upper part of the screen doesn't show the same frame as the bottom part of the screen. It is very ugly and I highly dislike this. So, even if it isn't making your system cooler, you will at least have this advantage! -- Pocket drone carriers (tm) enthousiast !
Assault Frigates MK II |

Arana Tellen
Gallente The Blackguard Wolves Black Star Alliance
|
Posted - 2008.05.07 14:07:00 -
[29]
I can't stand the input lag with vsync. I have a fast-response monitor, a fast mouse, and no vsync; if you have a slow monitor (even modern high-cost ones can be bad due to the type of tech they use), a slow mouse, and vsync... ugh. ---------------------------------
Oh noes!
Originally by: CCP Greyscale *moderated - mother abuse - Mitnal*
|

Eleana Tomelac
Gallente Through the Looking Glass
|
Posted - 2008.05.07 14:07:00 -
[30]
Originally by: Arana Tellen So it does, just checked on my zalman power meter. 197 -> 150W, you learn something every day.
Is this built into your power supply or something you added? I find such a device interesting... (even if useless for 99% of people!) -- Pocket drone carriers (tm) enthousiast !
Assault Frigates MK II |