Cem_Usakligil
You may be aware that some image-processing programs can use the graphics processor on your video card, so those programs get an additional performance boost. To see how much difference this makes in real life, I decided to upgrade my old card, an ATI Radeon HD 6850, to a brand-new NVIDIA GTX 960 2OC 4GB. I ran extensive benchmarks with the old card as well as the new one, so I could measure the added benefit of paying for a new top-of-the-line graphics card. Disclaimer: I don't game on my work PC, so the performance boost of the new card is measured purely for photography purposes. I used two common programs for the testing: Lightroom 6/CC and Capture One 8.3.2.
Just to give you an idea about the power of the old graphics card vs. the new one, here are some major specs of both.
ATI Radeon HD 6850: GPU Clock: 775 MHz, Memory Clock: 1000 MHz, Memory: 1 GB
NVidia GTX 960: GPU Clock: 1300 MHz, Memory Clock: 7010 MHz, Memory: 4 GB
On paper, the new card should run at least 2 to 3 times faster than the old one.
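To make that "on paper" estimate concrete, here is a quick back-of-the-envelope comparison of the spec-sheet numbers above. This is only a sketch: real-world gains depend on far more than clock speeds (shader count, architecture, driver support), so the ratios below are just the raw numbers turned into multipliers.

```python
# Spec-sheet figures quoted above; a rough ratio, not a real benchmark.
old = {"gpu_mhz": 775, "mem_mhz": 1000, "mem_gb": 1}    # ATI Radeon HD 6850
new = {"gpu_mhz": 1300, "mem_mhz": 7010, "mem_gb": 4}   # NVIDIA GTX 960

print(f"GPU clock ratio:    {new['gpu_mhz'] / old['gpu_mhz']:.2f}x")  # ~1.68x
print(f"Memory clock ratio: {new['mem_mhz'] / old['mem_mhz']:.2f}x")  # ~7.01x
```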
Before I give you the results of the benchmarks, let me explain a couple of things about LR6/CC, namely how much use it makes of the GPU. The short answer: not much. LR uses the GPU only while you are working in the Develop module; it is called into action when you zoom in or move the sliders. LR does not use the GPU in any of the other modules (such as Library). It certainly does not use the GPU when creating previews or when exporting the processed images. So it is very difficult to reliably measure the added value of the new GPU. I timed certain repetitive actions, such as zooming to 1:1 and scrolling through 50 raw images in the Develop module, waiting for each picture to be fully rendered and displayed before moving on to the next one.
LR6/CC benchmark results:
Scroll through 50 raw images (mixed bag from 4 different cameras; Canon, Nikon, Sony, Pentax):
Without the GPU support: 4 min 10 sec
With the old GPU: 4 min 4 sec
With the new GPU: 3 min 56 sec
As you can see, these results are neither statistically significant nor particularly accurate. Still, one can conclude that the added value of a new GPU for LR6/CC is negligible. I guess we will have to wait for the LR developers to improve their GPU support before any significant gains become visible.
Capture One 8.3.2 benchmark results:
Unlike LR, C1 uses the GPU extensively for everything (including exporting), and the resulting performance increase is tangible.
Export 50 raw images (mixed bag from 4 different cameras; Canon, Nikon, Sony, Pentax):
Without the GPU support: 4 min 25 sec
With the old GPU: 1 min 53 sec
With the new GPU: 1 min 50 sec
These timings are the average of 5 export runs, so statistically they are a bit more meaningful than the LR benchmarks. Moreover, the performance increase when using a GPU is more than twofold.
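The timings above can be turned into speedup factors with a few lines of Python. The numbers are copied straight from the benchmark runs in this post; the little helper function is mine, not part of either program.

```python
def speedup(baseline_s, run_s):
    """How many times faster the run is compared to the baseline (seconds)."""
    return baseline_s / run_s

# Lightroom 6/CC: scroll through 50 raw images (seconds)
lr_none, lr_old, lr_new = 4 * 60 + 10, 4 * 60 + 4, 3 * 60 + 56
# Capture One 8.3.2: export 50 raw images (seconds)
c1_none, c1_old, c1_new = 4 * 60 + 25, 1 * 60 + 53, 1 * 60 + 50

print(f"LR, new GPU vs none: {speedup(lr_none, lr_new):.2f}x")  # ~1.06x
print(f"C1, old GPU vs none: {speedup(c1_none, c1_old):.2f}x")  # ~2.35x
print(f"C1, new GPU vs none: {speedup(c1_none, c1_new):.2f}x")  # ~2.41x
print(f"C1, new vs old GPU:  {speedup(c1_old, c1_new):.2f}x")   # ~1.03x
```

The last line is the telling one: the expensive new card is only a few percent faster than the five-year-old one, while simply enabling any GPU more than doubles C1's export throughput.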
Conclusion:
If you are using LR6/CC, switch on GPU acceleration if LR supports your graphics card. But do not go out and buy a new, expensive GPU, since the added value is practically zero.
If you are using C1, you will definitely want to use GPU acceleration. However, even a five-year-old graphics card will provide 90% of the performance. To get the remaining 10%, one can buy a new and fast graphics card, but I would not recommend this investment: the added value is less than 10%, whereas the price of such a card will be above 200 Euro/$.
I hope this helps you decide, in case any of you were wondering.