
Why are RAW converters so slow?

Keith Wong

New member
This is not a criticism of RAW converters; I am just curious as to the answer. All RAW converters on the market that I have tried are like this, so I'm not singling out any particular one.

The cheapest sub-$1000 DSLR on the market can capture an image, apply white balance and noise reduction, apply a colour curve, sharpen, and then convert to JPG and save to a slow CF card. And it can do this 3 times per second.

My $4000 desktop PC, which has an AMD x64 4800+, 4 GB of RAM, and twin 10,000rpm HDDs in RAID 0, takes 10+ seconds to do the same conversion on the same RAW file. I am using Bibble Pro with the latest updates. Just look - the RAM in my computer alone is bigger than my CF card, and the 10,000rpm HDDs will easily outrun the fastest CF cards.

If I were to do no image adjustments and just let the RAW converter process the image using camera settings, the two files (in-camera JPEG and RAW converter JPEG) are virtually indistinguishable.

I suppose the answer is in the specialized imaging chip on the camera, and the additional overhead of running a bloated OS like Windows XP. But I would have thought that with such an overwhelming computing advantage, my computer should be just as quick if not quicker.

Anybody well-versed with computers and RAW converters like to give an answer?
 

Stan Jirman

New member
It's not so simple: your camera can't actually do it 3x per second - just look at how long it takes for the data to be written to the card. Some (but not all) of this time is spent doing processing, some time is spent doing the actual writing.

The results are different, too. An image converted in a raw converter will usually look better than an in-camera JPG, even if you only use the default settings. The magnitude of the difference depends on the camera as well: with the original 1Ds, the difference was huge, because the images were very large for the processor of the time, so the camera took lots of shortcuts along the way.

One other "unequal" aspect is the I/O: even a SATA drive will be running at ~35MB/s sustained rate, so it takes about 0.5s just to read the raw file; it adds up. The next step is decompressing the data (raw files are often compressed; only Nikon NEF offers an uncompressed option). This again takes time that the camera doesn't have to spend, because it already has the data in front of it.
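As a rough back-of-the-envelope check of that I/O figure (the ~15 MB raw file size below is just an assumed example, not a number from this thread):

raw_file_mb = 15.0          # assumed size of a typical raw file, in MB
sustained_mb_per_s = 35.0   # the sustained read rate mentioned above

read_time_s = raw_file_mb / sustained_mb_per_s
print(f"~{read_time_s:.2f} s just to get the bytes off the disk")   # ~0.43 s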

Finally, the cameras use dedicated circuits. They are built specifically to do one thing and only one thing - taking sensor data and making a JPG out of it. It's much easier to get fast performance with dedicated hardware. Just look at CPU vs. GPU speed: the GPU can't do much, or not very precisely, but boy does it do it fast! Meanwhile, the CPU has who knows how many cores and levels of cache and can do a lot more, but is slower. Remember the RISC vs. CISC discussion from 10+ years ago? Same thing.

But the most important thing is really the fact that an in-camera JPG is typically of lesser quality than a raw converter image. I think it's safe to assume that the in-camera logic is running purely at 8 bits because it knows that that's all that will be used eventually, whereas raw converters run internally at 16 bits and only convert to 8 as the last step - if so desired. That's one extra step the camera doesn't have to do, and a whole lot more complex math to deal with.
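To make the 8-bit-versus-16-bit point concrete, here is a toy sketch in Python/numpy: the same few tone adjustments applied twice, once rounding to 8 bits after every step and once keeping 16 bits until the end. The adjustment values are arbitrary; this illustrates rounding-error accumulation in general, not any camera's actual pipeline.

import numpy as np

rng = np.random.default_rng(0)
linear = rng.random(100_000)                 # pretend linear sensor values in 0..1

def pipeline(x, levels):
    # apply a few arbitrary tone tweaks, quantizing after each step
    for gamma in (1 / 2.2, 1.1, 0.95):
        x = x ** gamma
        x = np.round(x * (levels - 1)) / (levels - 1)
    return x

out_8bit = pipeline(linear, 256)             # 8-bit intermediate precision
out_16bit = pipeline(linear, 65536)          # 16-bit intermediate precision

# compare the two paths after a final 8-bit conversion
diff = np.round(out_8bit * 255) != np.round(out_16bit * 255)
print("pixels that land on a different 8-bit value:", np.count_nonzero(diff))

Even in this tiny example, a fair fraction of the values end up on a different final 8-bit level, purely because of where the rounding happened.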
 

Doug Kerr

Guest
Hi, Stan,

Stan Jirman said:
I think it's safe to assume that the in-camera logic is running purely at 8 bits . . .

Do we know this? Even if the output is to be a JPEG file, I would expect that using only an 8-bit platform for the raw-to-rgb (not RGB) demosaicing and the conversion to YCbCr (which is what JPEG works from) - steps in which a lot of quantizing error can take place - would be disadvantageous.

Or maybe that is your point.
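For reference, the RGB-to-YCbCr step mentioned above is, in its usual JFIF/BT.601 form, roughly the following; each rounding back to 8-bit integers is one of the places where quantizing error can creep in. (A small Python sketch for illustration only; it says nothing about what any camera actually does.)

import numpy as np

def rgb_to_ycbcr_8bit(rgb):
    # full-range JFIF/BT.601 conversion, rounded back to 8-bit integers
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    ycc = np.stack([y, cb, cr], axis=-1)
    return np.clip(np.round(ycc), 0, 255).astype(np.uint8)

# a single example pixel
print(rgb_to_ycbcr_8bit(np.array([[[120, 130, 110]]], dtype=np.uint8)))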
 

Stan Jirman

New member
Well, I don't think that anyone out here on the forum knows how the cameras do their work internally - probably one of the best-kept secrets. The only given thing about camera processing is that they do cut corners, that they know exactly what the desired output will be (8b JPG), and that they don't spend any time on anything more. Dedicated hardware is also a known quantity, as is what it can do for processing.

What I do know very well is how raw converters work and what all the steps are that they take before they spit out their data. For some of them I could tell you in G5 cycles per pixel :) I also know that it's possible to get an image with just 8 bits throughout, even though the result won't be "best" - but then again, few in-camera JPGs are (actually I have yet to see a camera that will deliver anything even near what, say, Photoshop can get with default settings). My favorite examples of pathetic in-camera JPG (in direct comparison to what can be done with raw) are the 1Ds and the 5D. Not that they are terrible (well, I think the 1Ds was), but they are pretty poor compared to their potential. And now that I come to think about it, the D2X also has a pretty bad quality ratio between in-camera and, say, PS JPGs. All I am saying is that they are cutting corners bigtime, and an entirely 8b pipeline is well within reason - along with other cheats.
 

Keith Wong

New member
If you say that the JPG output of the 5D is not as good as one converted from RAW, then I should go back and do some pixel peeping. I have a 5D, and I have never shot JPEG.

Thanks for the great info, Stan ... are you a software engineer or something?
 

Stan Jirman

New member
Let's say that I know something about raw decoding, and leave it at that :)

The 5D L JPG is quite fine (yet still not great), but the M and S varieties are quite sad. This surprised me when I looked at it, esp. in comparison with the 1Ds2 S JPG. I can understand that the M variants won't be solid, but S should be as good as or better than L (as it is with the 1Ds2).
 

Michael Tapes

OPF Administrator/Moderator
Regarding speed, there are ways to speed things up on a PC (or Mac I suppose). RawShooter exploits this to the max. In its FastProof mode it converts at about the rate of 3fps, albeit at up to 50% max size and with poor quality. But in FastProofHQ mode, also limited to 50% native size, the conversion only takes a second or two (on a MacBook Pro 17 running Windows XP SP2) with very good (but not perfect) quality.

So to answer the original question...cameras have dedicated hardware so are very fast; PCs use generic hardware, and it is up to the programming skills to get the speed with quality. Michael Jonnson, main author of RawShooter and original architect, author and coder of Capture One for over 10 years, is certainly one of the best out there.

As GPUs (Graphics Processing Units), now in vogue (actually a requirement) for the gaming set, become commonplace, the speed will skyrocket, as these chips are dedicated image processors. I am sure that we will see this soon. I believe that Aperture requires a GPU, and I would guess that RSP and ACR and others will begin to use them if they are there, just like they do with dual processors. JMHO
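A tiny illustration of the "programming skills" point: on exactly the same general-purpose hardware, the same per-pixel operation can be orders of magnitude apart depending on how the code is written. This toy Python/numpy comparison has nothing to do with any particular converter; the gamma value is arbitrary.

import numpy as np
import time

img = np.random.rand(2000, 3000)      # a fake 6-megapixel, single-channel image

# naive per-pixel loop: same machine, same math, interpreted one pixel at a time
t0 = time.time()
out_loop = np.empty_like(img)
for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        out_loop[i, j] = img[i, j] ** (1 / 2.2)    # a simple gamma adjustment
t_loop = time.time() - t0

# the same operation expressed as one vectorized call
t0 = time.time()
out_vec = img ** (1 / 2.2)
t_vec = time.time() - t0

print(f"per-pixel loop: {t_loop:.2f} s   vectorized: {t_vec:.3f} s")
assert np.allclose(out_loop, out_vec)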
 

Ray West

New member
Some things I'm not so sure about...

1) Does the camera actually convert a raw file to jpeg, or does it go straight from 'pixel map' to jpeg, in the same way it might go from 'pixel map' to raw? (By 'pixel map' I mean some other, more 'towards-sensor-level' arrangement of bits.)

2) Game processing has different requirements than image processing. For CAD work and so on (2D), the high-speed game processors do not help. I am not saying that such processors do not or will not exist, but it is a different requirement.

3) Is the jpeg embedded in some tiffs of any useful resolution? I thought it was a compressed thumbnail sort of thing, although tiff is pretty flexible, so I guess it could be anything.

4) Adobe owns tiff, but they are pushing their own standard (can't think of its name). I reckon we had better store images on film, or just print them.

Just some thoughts.

Best wishes,
 

Asher Kelman

OPF Owner/Editor-in-Chief
Mark my words,

"Stan is the man!" Reread his posts. Whenever he posts it's something you can take to the bank. Besides being a Czech, which already, IMHO, gets him 9.9 points out of a maximum of ten, he's as smart and as proficient as it gets. He's the master, mister, patron, un autre "Grand Monsieur" and, BTW, (totally schizophrenically apart), a fine photographer. I hope he'll share his work here!

Asher
 

Stan Jirman

New member
Oh Asher, don't flatter me like so :)

As for raymw's questions:

1) The camera has to do a certain level of raw conversion, because there's a huge difference between sensor data and a JPG. To name the biggest: the sensor data is Bayered, meaning each pixel has only one color, while JPG data is full RGB, so there has to be interpolation (a minimal sketch of this interpolation step follows after these answers). It so happens that the interpolation step is the most difficult and time-consuming one. There are other necessary steps, such as tone mapping: the sensor data is linear up to a point, while a JPG is "gamma'd"; not a huge operation, but still. Individual sensor colors max out at different levels of light, and this has to be accounted for as well - otherwise you'd have images with a severe color tint. And so on.

2) Video cards are already being used today for raw decoding and image processing. Aperture is one such example on the Mac, which is why Aperture has such high GPU requirements.

3) I don't know about tiffs, since only Nikon cameras offer writing tiffs (and they require the same level of processing as JPGs); but many raw files these days have a well-sized JPG in them. This is the case with all Canon cameras and CR2 files. These JPGs are on the order of 1500 pixels along the longer edge (I may be a bit off on this). But these JPGs don't have an embedded EXIF header, so you must rely on raw EXIF header parsing apps, which are usually less sophisticated / complete.

4) Adobe is pushing DNG, the "digital negative" format. It's getting there, but still doesn't address quite all issues - which is why the latest DNG converter actually embeds the original raw file in itself - "just in case". So we are looking at a 40-50MB "raw" file from a 1Ds2 - no, thank you. DNG will be viable once camera manufacturers actually use the DNG format to save raw images - and I think that will be "never", given their stubbornness.
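To give a feel for the Bayer interpolation mentioned in point 1, here is a minimal bilinear demosaic sketch in Python/numpy. It is only an illustration of the idea; real converters use far smarter (and far more expensive) edge-aware algorithms, and this assumes an RGGB layout.

import numpy as np

def demosaic_bilinear(bayer):
    # bayer: 2D array holding one sample per pixel, RGGB layout assumed
    h, w = bayer.shape
    rgb = np.zeros((h, w, 3), dtype=float)
    masks = np.zeros((h, w, 3), dtype=float)
    masks[0::2, 0::2, 0] = 1      # red sample positions
    masks[0::2, 1::2, 1] = 1      # green sample positions
    masks[1::2, 0::2, 1] = 1
    masks[1::2, 1::2, 2] = 1      # blue sample positions
    for c in range(3):
        sample = bayer * masks[..., c]
        # average the known samples of this colour in each 3x3 neighbourhood
        # (np.roll wraps around at the edges; good enough for a sketch)
        num = sum(np.roll(np.roll(sample, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        den = sum(np.roll(np.roll(masks[..., c], dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        rgb[..., c] = num / np.maximum(den, 1)
    return rgb

mosaic = np.random.rand(8, 8)                 # tiny synthetic mosaic
print(demosaic_bilinear(mosaic).shape)        # (8, 8, 3)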

As for printing or outputting to film - that I believe is a bit of a knee-jerk reaction. I think you'll be much better off "archiving" a Powerbook or something like that, a machine that you know can read and decode all images up to a certain point. Should the 1Ds2 for some reason no longer be supported by any raw converter in 10 years, you'll always have the archived Powerbook in the garage and can get a much better quality result than from ... er ... scanning a print.
 

Ray West

New member
Thanks for your reply, Stan.

I know very little about Macs, so I can't comment on how long such things may last. The earliest PC that I have is from about 1979 - based on the Z80. I don't expect it will boot now; I may try one day, just to see. It used to be that there was a ten-to-twenty-year life on ICs due to moisture penetration wicking up the legs and corroding the internal connections. On the Mac, does it have a boot ROM? What is its life?

I like paper tape; I think I calculated you could hold a 10MB file on pt and store it in a box 63 inches square (by just over an inch thick). With paper tape, it is trivial to build your own reader.

There are a lot of old photos in shoe boxes around the world, stuff well over 50 years old. I'm not sure if 50-year-old CDs will be of any use. (I wish I had a use for some just a few days old.) If digital cameras had been invented in 1910, say, and someone invented film now, we'd probably be thinking how wonderful it is.

Anyway, I'm dragging this off topic, so I'll leave it alone,

Best wishes,

Ray
 

Harvey Moore

New member
The Leica DMR uses the DNG format; it's the only DSLR I know of that uses it.

The Adobe DNG converter has the option of embedding the original raw file or not; the DNG JPG preview options are none, medium, or large. I haven't used the DNG converter, so I don't know the dimensions of the JPG preview image.
 

Michael Tapes

OPF Administrator/Moderator
If you maintain your RAW files, and the current and future DNG converters, there is no need to archive to DNG for "security reasons". There may be other reasons within your workflow to do it, but not for the ability to read RAW files in the future. A copy of PS, and/or other current RCs, and the DNG converter will keep your RAW files safe forever. If at any time your RAW files drop off the list of files supported by your camera's converter, or PS, or DNG, or any other RC, then THAT is the time to save a machine with the converters installed and working, plus a complete disk set or two of the orphaned files. JMHO.
 

Michael Tapes

OPF Administrator/Moderator
Asher Kelman said:
Mark my words,

"Stan is the man!" Reread his posts. Whenever he posts it's something you can take to the bank.

Asher

Asher,

Where can you take my posts? I use the same bank Stan does. I am confused :>)
 
DNG output is not so rare

My little Ricoh GR-D produces this, as do the Leica/Panasonics (do the other Panasonics, or just the Leica?). The stubborn ones are C and N. You may see others decide to forget about their proprietary formats soon. Olympus .orf is uncompressed and not very tricky, so I don't see why they would care. Sony is a pretty proud engineering outfit, but they are rapidly re-engineering their K-M acquisition, and might find this direction appealing.

The question I would be interested in knowing more about is whether DNG can accommodate all the useful stuff that now goes into the MakerNotes section of EXIF, protected by "security through obscurity".

scott
 

Stan Jirman

New member
Sorry if I offended someone by not including "somewhat more obscure" cameras in my discussion of raw formats above. I should have said that of the big two, Canon and Nikon, neither supports DNG and that I bet it's not going to change anytime soon. Same applies for TIFF.

As for maker notes: DNG can accommodate all the useful stuff, if for no other reason than that it also accommodates "maker notes"-like data. When you read the DNG spec you'll see that they thought of pretty much everything in an official way, so that it would not have to be obscured; however, I am sure that manufacturers will come up with features that the DNG dudes didn't think of, and until there's an official tag for those there will be only the proprietary way of storing them.
 

Mike Spinak

pro member
I don't have the expertise to even venture an opinion on why RAW converters are so slow, but I do have something loosely related to add, which may be of interest:

I was talking the other day to a good friend of mine, who is one of the main people involved with one of the most popular converters. I asked him specifically in what areas we could expect to see the most notable progress in the next generation of RAW converters. His first and foremost answer was: conversion speed.

=================================

Regarding Stan's assertion that the in-camera logic is running purely in 8 bits, and Doug's questioning of this:

Again, this is beyond my expertise, but shouldn't in-camera handling of the files in 8 bit produce a predictable pattern of differences (compared to in-computer handling of the files in 16 bit), and therefore, shouldn't this be a testable hypothesis?

What patterns of differences would one expect to see from processing by in-camera 8-bit logic, compared to processing by in-desktop-computer 16-bit? How difficult would it be to compare an in-camera jpeg of a picture to a RSE/C1/ACR/Bibble/whatever-made jpeg of a picture, and see the predicted differences?

Mike

www.mikespinak.com
 

Ben Rubinstein

pro member
My dual processor makes a joke of the single processor that preceded it with ACR; it literally halves the time. Well worth the investment.
 
Michael Tapes said:
So to answer the original question...cameras have dedicated hardware so are very fast, PCs use generic hardware, and it is up to the programming skills to get the speed with quality.

Yes, this is IMHO what really makes the difference, as far as the OP's question goes. A hardware (or, to a much lesser extent, firmware) implementation is usually very much faster than non-dedicated hardware/software on a personal computer platform, but not as flexible. That's why the leading manufacturers develop their ASICs (Application Specific Integrated Circuits), dedicated chips for a specific task.

Bart
 

Doug Kerr

Guest
Hi, Mike,
Mike Spinak said:
Again, this is beyond my expertise, but shouldn't in-camera handling of the files in 8 bit produce a predictable pattern of differences (compared to in-computer handling of the files in 16 bit), and therefore, shouldn't this be a testable hypothesis?

I think it would be very difficult, as we have no knowledge of the algorithms being practiced, and they are in any case very complicated (especially in the demosaicing stage).

It is not, for example, as if we could reconstruct the calculations in a very high precision simulation (32-bit arithmetic, perhaps) to get the "right" answer and then compare that with what the various processes (including in-camera) give us.

Unlike, say, calculating the orbit of a satellite, or the shape of the quasi-catenary curve of a suspended cable of tapering linear density, there is no "correct" result to the demosaicing process, as there is no deterministic path from the suite of sensor outputs to the suite of pixel colors - the algorithm uses a sophisticated (and certainly unknown to "us") "guessing" process.

Note also that not all JPEG encoding algorithms are the same.

Best regards,

Doug
 
Keith Wong said:
This is not a criticism of RAW converters; I am just curious as to the answer. All RAW converters on the market that I have tried are like this, so I'm not singling out any particular one.
I suspect it involves two factors (I could be wrong).

1) The camera uses an ASIC (Application Specific Integrated Circuit), so the silicon logic is highly optimized for the task at hand. That said, the same chip would likely make a 25 MHz 386 CPU seem insanely fast at web browsing.

2) Once you get the data onto a general purpose CPU with huge amounts of RAM, one can use more complex algorithms. This is especially true if lens distortion corrections are included in the conversion (a small sketch of such a correction follows below).

So in essence, it gets slower because of the general purpose CPU and the more complex RAW conversion method. The results are likely to be better (not necessarily because of the above, but also due to human intervention - selecting the proper conversion parameters).
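As a small illustration of what a lens distortion correction involves, here is a generic radial-distortion remap sketch in Python/numpy. The k1/k2 coefficients are made-up example values and this is not any converter's actual algorithm; it just shows that every output pixel needs its own bit of arithmetic plus a resampling step.

import numpy as np

def undistort_radial(img, k1=-0.05, k2=0.01):
    # inverse mapping: for every output pixel, find where it came from in the
    # distorted image and sample it (nearest neighbour, for brevity)
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    x = (xx - cx) / cx                        # normalised coordinates
    y = (yy - cy) / cy
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2        # classic polynomial radial model
    src_x = np.clip(np.round(x * scale * cx + cx), 0, w - 1).astype(int)
    src_y = np.clip(np.round(y * scale * cy + cy), 0, h - 1).astype(int)
    return img[src_y, src_x]

corrected = undistort_radial(np.random.rand(480, 640))
print(corrected.shape)                        # (480, 640)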

Keith Wong said:
If I were to do no image adjustments and just let the RAW converter process the image using camera settings, the two files (in-camera JPEG and RAW converter JPEG) are virtually indistinguishable.
That is the general purpose CPU versus ASIC issue. Same result (or very close) with a totally un-optimized architecture versus a specific one. Imagine drag-racing with a Volvo delivery truck (CPU) versus a fiberglass-bodied funny car (ASIC).

Keith Wong said:
I suppose the answer is in the specialized imaging chip on the camera, and the additional overhead of running a bloated OS like Windows XP. But I would have thought that with such an overwhelming computing advantage, my computer should be just as quick if not quicker.
I doubt the OS has enough overhead to merit notice in a 10-second process.

Keith Wong said:
Anybody well-versed with computers and RAW converters like to give an answer?
Yes. ;o)

some thoughts,

Sean (who has yet to read the whole thread)
 
Mike Spinak said:
Again, this is beyond my expertise, but shouldn't in-camera handling of the files in 8 bit produce a predictable pattern of differences (compared to in-computer handling of the files in 16 bit), and therefore, shouldn't this be a testable hypothesis?

What patterns of differences would one expect to see from processing by in-camera 8 bit logic, compared to processing by in-desktop-computer 16 bit. How difficult would it be to compare an in-camera jpeg of a picture to a RSE/C1/ACR/Bibble/whatever-made jpeg of a picture, and see the predicted differences?
The problem here is that with complex algorithms it may never be possible to precisely correlate lower-precision results. The reasons behind this can be found in a simple case called bifurcation. In essence, consider the concept of the square root of a number. The square root of a number is a number such that when it is multiplied by itself it yields the initial number. Hence the square root of 4 is 2, as 2*2 (2 multiplied by 2) is 4, and the square root of 9 is 3, as 3*3=9.

But something strange happens when we try to get the square root of a negative number, as multiplying any number by itself will never yield a negative number: -2*-2 (negative two times negative two) is 4. Hence, to deal with this, the concept of complex numbers must be introduced.

A complex number is a number with two parts: one part is real (i.e., the numbers we can count, like $1.50 in money) plus an imaginary part, which is a multiple of the newly introduced square root of negative one (-1), denoted by i. While you can never hold the imaginary part in your hands as a concrete thing, in physics and engineering it is a very real thing that affects your everyday life through the engineering underlying the design of things like airplanes.

... (more depth of introduction skipped to stay on topic). Please note I will expand this if it is so requested.

What is interesting here is that there are certain magical cusp points where our everyday real numbers (the ones we can hold in our hands, like 5 jelly beans or half a donut) cannot solve a problem while the complex numbers can. What this introduces near the values for such a cusp point is that there may be two answers. Why? Because, just as 2*2=4, so does -2*-2=4. Hence, in truth, both 2 and -2 are square roots of the number 4. And it is this fact - that two solutions to a formula can exist near one of these cusps - that gives rise to bifurcations.

What the heck am I saying here? Skipping all the math: have you ever held a 2-liter soda pop bottle? Have you ever popped a side in and had it stay? Have you ever popped it out and had it resume its original shape? What the 2-liter plastic soda pop bottle shows here is that equilibrium (a solution to the equation balancing the forces on the bottle) can have two possible solutions. This is what a bifurcation is. To see this in another form, look at this image (it may be huge, as I have yet to learn how to inject HTML into this forum):

[Bifurcation diagram of the logistic map: http://www.envisagement.com/opf/800px-LogisticMap_BifurcationDiagram_450.gif - from http://commons.wikimedia.org/wiki/Image:LogisticMap_BifurcationDiagram.png (public domain image)]

Anyway, what this image shows is the graph of an equation containing many bifurcations. This is like being able to take multiple routes in a drive across town to get to the same goal (a very rough image).

Stepping back to the original question: When an algorithm is complex enough, things like this can occur.

Suppose this occurs at a value of 5. Then in 8-bit precision you might hit one of these points, while in 16-bit precision you might barely miss one of these bifurcation points. The result is that you will end up at a different value depending on the precision you use in your calculations. If you introduce enough of these bifurcation points into a calculation, then 8-bit and 16-bit arithmetic may yield insanely different answers. This is called chaos, or chaotic dynamics, in mathematics. One of the earliest instances of this was found in weather prediction. A computer was used to model weather behavior and good results were found. Then more precise inputs were used (8.1 rather than 8) and suddenly the results were incredibly different.

In short, the accuracy and stability of a numerical algorithm (such as converting sensor data to an image format) can be greatly affected by both the precision of the input data and the precision at which the calculation is done; changing the precision can greatly change the answer, even for a minor change in the input.

This is less prevalent in generating an image file from a RAW sensor input than it is in weather prediction. But it can nonetheless have negative effects in an image (more ringing/halos about an edge).

And with the advances in computing power that are constantly occurring, we get more and more complex algorithms being used for digital photography all the time. Hence, as RAW conversions improve, correlating their results based on the precision of the data used in the conversion often becomes less and less reliable.
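A compact way to see the kind of precision sensitivity described above is the standard logistic-map demonstration (the textbook chaos example, nothing specific to raw conversion; the numbers are chosen only for illustration):

import numpy as np

# iterate x -> r*x*(1-x) from the same start in low and high precision;
# in the chaotic regime (r = 3.9) the two runs drift apart after a few
# dozen steps purely because of rounding
r = 3.9
x_lo = np.float16(0.4)
x_hi = np.float64(0.4)

for step in range(1, 61):
    x_lo = np.float16(r) * x_lo * (np.float16(1) - x_lo)
    x_hi = r * x_hi * (1.0 - x_hi)
    if step % 10 == 0:
        print(f"step {step:2d}: float16 = {float(x_lo):.4f}   float64 = {x_hi:.4f}")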

some thoughts but not answers,

Sean
 

Peter Ruevski

New member
Asher Kelman said:
Mark my words,

"Stan is the man!" Reread his posts. Whenever he posts it's something you can take to the bank. Besides being a Czech, which already, IMHO, gets him 9.9 points out of a maximum of ten, he's as smart and as proficient as it gets. He's the master, mister, patron, un autre "Grand Monsieur" and, BTW, (totally schizophrenically apart), a fine photographer. I hope he'll share his work here!

Asher

Stan Jirman said:
Oh Asher, don't flatter me like so :)

Stan,

You indeed are "the man" in this forum ;-)
You have my deep respect.

Best regards,
Peter Ruevski
 

Tom Yi

New member
Keith,
Have you tried either the RawShooter Premium or the Essentials version?
Also, I learned that turning off all the programs on the bottom right of the task bar frees up a lot of RAM for working on RAW images. Obviously, if you are hooked up to the net, don't turn off the virus protection and such, but if you have a lot of programs running in the background, turning them off may speed things up a bit.
 