LOL, nothing's changed in 25 years for ATI... theoretically the best hardware in the industry, totally crippled by the worst software/drivers.
Always had the worst software, apparently always will.
Crippled? How about this for crippled?:
http://www.tomshardware.com/reviews/geforce-gtx-680-review-benchmark,3161-14.html
"Moreover, Nvidia limits 64-bit double-precision math to 1/24 of single-precision, protecting its more compute-oriented cards from being displaced by purpose-built gamer boards. The result is that GeForce GTX 680 underperforms GeForce GTX 590, 580, and, to a much direr degree, the three competing boards from AMD."
Not all projects require double precision. But some do, like MW. For those projects, ATI will outperform nVidia by miles.
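Just to put that 1/24 ratio in perspective, here's a rough back-of-the-envelope sketch in Python. The single-precision numbers and the 1/8 and 1/4 ratios for the 580 and 7970 are approximate published specs from memory, so treat everything as ballpark only:

cards = {
    # name: (approx. peak single-precision GFLOPS, DP:SP ratio) -- ballpark specs, not measurements
    "GTX 680": (3090, 1 / 24),   # Kepler GK104, double precision capped at 1/24
    "GTX 580": (1580, 1 / 8),    # Fermi GF110, consumer boards capped at 1/8
    "HD 7970": (3790, 1 / 4),    # Tahiti, 1/4-rate double precision
}

for name, (sp_gflops, ratio) in cards.items():
    dp_gflops = sp_gflops * ratio   # theoretical peak double-precision throughput
    print(f"{name}: ~{sp_gflops} SP GFLOPS -> ~{dp_gflops:.0f} DP GFLOPS")

Even as a rough estimate, it shows why the 680 can lose to the 580 on a double-precision project while still winning easily on single precision.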
I tested Milkyway@Home with the GTX 680, and the 680 is indeed very bad at double-precision math; it performs significantly worse than the GTX 580. However, the GTX 580 is not great at double precision either. For Milkyway@Home, the AMD 7970 or Tesla cards with full double-precision support are the way to go.
The 680 does excel at single-precision math, however, and is able to outperform the 580, at least on this project. It does so with less power consumption as well.
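If anyone wants to try the same single-vs-double comparison on their own card, here's a minimal sketch using PyCUDA (assuming pycuda and numpy are installed; the kernels are just throwaway multiply-adds, not anything from the Milkyway app):

import time
import numpy as np
import pycuda.autoinit                      # creates a CUDA context on the default GPU
import pycuda.driver as drv
import pycuda.gpuarray as gpuarray
from pycuda.elementwise import ElementwiseKernel

N = 16 * 1024 * 1024                        # 16M elements

# One multiply-add per element, once in float and once in double.
mad_sp = ElementwiseKernel("float *x",  "x[i] = x[i] * 1.000001f + 0.000001f", "mad_sp")
mad_dp = ElementwiseKernel("double *x", "x[i] = x[i] * 1.000001 + 0.000001",   "mad_dp")

def bench(kernel, dtype, iters=200):
    x = gpuarray.to_gpu(np.ones(N, dtype=dtype))
    kernel(x)                               # warm-up (also triggers JIT compile)
    drv.Context.synchronize()
    t0 = time.perf_counter()
    for _ in range(iters):
        kernel(x)
    drv.Context.synchronize()
    elapsed = time.perf_counter() - t0
    return iters * N / elapsed / 1e9        # rough G-multiply-adds per second

print(f"single precision: {bench(mad_sp, np.float32):.1f} Gop/s")
print(f"double precision: {bench(mad_dp, np.float64):.1f} Gop/s")

With only one multiply-add per element this is mostly memory-bound, so it understates the real compute gap, but the single-vs-double trend shows up clearly enough.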
Well, I hear you all, and all the ATI/AMD/Nvidia stats, tests, arguments...
I have and hate them both, actually... LOL!
What I would like to see is someone design a non-GPU-based math co-processor on a PCIe card. In fact, that Intel 50-cores-on-a-chip part looks promising...
SOMETHING, ANYTHING that allows folks to perform HPC tasks without messing with the video area would be great!!!
For now, I have to be content with an HD6990 for Milkyway/Einstein/Poem and Nvidia in the other boxes... what a waste... Tesla might have been the way to go... but benchmarks on BOINC tasks with Tesla are not encouraging, and the cost is way out of bounds...
Sigh... if only...
8-)
Agreed Tex;
hate paying for the video "feature" when it's not even being used!!! Alas, when we constitute such a small percentage of users, we're left with the "scraps", for lack of a better word. VERY expensive scraps. Wish someone like Texas Instruments would develop something as you just discussed. Seems right up their alley. I'm sure they could use a boost in business.
hate paying for the video "feature" when it's not even being used!!!
As far as I understand it, we are benefiting from the fact that the GPU cards are being mass-produced for the video-gaming market. If not for that, our costs would be much higher. Economy of scale...
Well now that's very true. Here is a video on that subject that was posted on GPUgrid. Pretty funny, sad, and true all wrapped into one: http://www.youtube.com/watch?v=DmaYH1F6kho
I always find my best prices at TigerDirect.
I just got my nVidia 550 Ti Superclocked for real cheap.