
BBQ
Gallente Federal Defence Union
Posted - 2009.03.18 20:36:00 -
I think a lot of the failures could be down to the lovely nVidia fiasco that befell their laptop GPUs, since a large percentage of those graphics chips are used on both laptop and desktop cards, with a BIOS that reports them as different versions depending on the market.
Basically, nVidia told the card and laptop makers that their chip would put out 50W of power and run at a specific temperature at maximum load, so the card / laptop designers added a heatsink that kept the chip a small bit below its maximum temperature at that power output. nVidia then supplied the chips, BUT they actually drew 60W at full load rather than the 50W specified, and they could not withstand the temperature they were rated for over an extended time. This means the chips overheat due to the extra power, which stresses internal connections that were already weaker than originally planned, and eventually something breaks, usually after a few months or a couple of years. The fix from the likes of Dell and HP was to run the fans faster, lowering the chip's running temperature to offset both the extra heat produced and its now lower maximum rated temperature.
A similar thing happened with Intel a few years back when they tried to ramp the very old P4 chips up to 3.6GHz: the chip's power draw sat right on the limit of its rated power output and caused more than one system to throttle itself to protect the CPU from death (http://www.tomshardware.com/reviews/p4,919.html). In that case Intel's chip design made sure the CPU came to no harm, as it did not overheat and stayed within its rated power.
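To put rough numbers on why a 10W shortfall matters (these figures are made up for illustration, not nVidia's actual specs): die temperature rises roughly linearly with power through the heatsink's thermal resistance, so a cooler sized for 50W has almost no margin left when the chip really draws 60W.

```python
# Rough illustration of why a 10 W under-spec matters. All numbers are
# illustrative assumptions, not real nVidia figures.

AMBIENT_C = 35.0         # air temperature inside the case
R_THETA_C_PER_W = 1.0    # heatsink thermal resistance, degC per watt
T_MAX_C = 95.0           # maximum rated die temperature

def die_temp(power_w):
    """Steady-state die temperature for a given power draw."""
    return AMBIENT_C + power_w * R_THETA_C_PER_W

for power in (50.0, 60.0):
    t = die_temp(power)
    margin = T_MAX_C - t
    print(f"{power:.0f} W -> {t:.0f} degC die, {margin:+.0f} degC of margin")

# 50 W -> 85 degC with 10 degC of headroom; 60 W -> 95 degC with none.
# Spinning the fans faster is just a way of buying that margin back.
```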
It is not possible to overload a GPU or CPU, as they have a maximum rated speed (MHz), voltage and power output (Watts). These are tested and confirmed (sometimes wrongly, see above), and the design specs are then fed to the designers so they can build cooling systems to cater for that heat. Some instructions stress the silicon more than others, and in this case CCP and DirectX seem to be using some that are pretty stressful, but by design they will still fall within the maximum power output of the chip; the same applies to CPUs. If the designers want to save a few pennies by using a slightly smaller heatsink and gambling on the end user not pushing their hardware to its limits, that can cause problems.
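For the curious, this is roughly how a cooling designer works back from the rated figures (the numbers below are assumptions for illustration, not any vendor's spec sheet): the cooler's thermal resistance has to be low enough to hold the die under its maximum temperature at the chip's full rated power.

```python
# Minimal heatsink-sizing check, using made-up example figures.

def max_thermal_resistance(tdp_w, t_junction_max_c, t_ambient_c):
    """Worst-case thermal resistance (degC/W) the cooler must achieve
    so the die stays under its rated temperature at full rated power."""
    return (t_junction_max_c - t_ambient_c) / tdp_w

# Chip rated for 50 W and 95 degC, case air assumed at 40 degC:
r_needed = max_thermal_resistance(tdp_w=50.0, t_junction_max_c=95.0,
                                  t_ambient_c=40.0)
print(f"Cooler must be <= {r_needed:.2f} degC/W")              # 1.10 degC/W

# If the chip really draws 60 W, the same cooler is suddenly too weak:
r_needed_real = max_thermal_resistance(60.0, 95.0, 40.0)
print(f"At 60 W it would need <= {r_needed_real:.2f} degC/W")  # 0.92 degC/W
```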
All modern GPU and CPU chips can lower their clock speed and voltage when the load on them drops; this reduces their heat output, lets them run cooler when not needed, and allows the fan to slow down and make less noise. When they are asked to do a lot of work they ramp back up to their design speeds, and the power they put out rises towards their maximum rated power output (unless overclocked, over-volted or wrongly tested, they cannot exceed it). If, for any reason, the cooling design relies on the chip not running at 100% load all the time, then eventually the chip will overheat as it saturates the heatsink and the heat cannot be transferred to the air quickly enough.
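That "saturating the heatsink" effect can be sketched with a simple lumped thermal model (again, all values are illustrative assumptions, not measurements of any real card): the heatsink's thermal mass soaks up heat for a while, so a cooler sized for bursty load looks fine for several minutes of gaming and only then drifts past the limit under sustained full load.

```python
# Lumped-RC thermal model: heat flows into the heatsink mass and leaks
# out to the case air through a fixed thermal resistance. All values
# are illustrative assumptions.

AMBIENT_C = 35.0
R_SINK_C_PER_W = 1.2     # heatsink-to-air thermal resistance (slightly undersized)
C_SINK_J_PER_C = 400.0   # thermal capacity of the heatsink
T_MAX_C = 100.0          # rated maximum die/heatsink temperature
POWER_W = 60.0           # sustained full-load power
DT_S = 1.0               # simulation step, seconds

temp_c = AMBIENT_C
for second in range(0, 1801):
    heat_in = POWER_W
    heat_out = (temp_c - AMBIENT_C) / R_SINK_C_PER_W
    temp_c += (heat_in - heat_out) * DT_S / C_SINK_J_PER_C
    if second % 300 == 0:
        print(f"{second:5d} s: {temp_c:6.1f} degC")
    if temp_c > T_MAX_C:
        print(f"Over the limit after {second} s of sustained load")
        break

# Steady state would be 35 + 60 * 1.2 = 107 degC, i.e. above the limit,
# but the thermal mass hides that for the first several minutes.
```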
The short version is that all chips inside a PC should operate and be cooled within their maximum rated abilities. Providing cooling that cannot cope with the chip's maximum heat output is a false economy: it saves a few bucks but can cause problems in the long run. The trouble is that the likes of HP and Dell cut the airflow around things like graphics cards and CPUs to the lowest amount they can get away with, to reduce noise and cost. The upshot is that if you install a bigger graphics card or CPU, the chances are you will need a bigger or faster intake or exhaust fan to shift the extra heat out of the case. Trying to cool a hot graphics card with warm air that is stuck inside the case is not good and will cause overheating.
----
God gave us a brain, he also gave us a voice.
Shame some people have yet to connect them.