The Canon sRaw and raw output formats

Doug Kerr

Introduction

For some while, most Canon EOS cameras (and a number of other Canon digital cameras) have offered a "Raw" format option for the output of a shot. This very nearly gives us the actual output of each sensel. From this we can, in postprocessing software, "develop" the actual image with a great deal of control over just how that is done, with many well-known advantages.

In 2007, in their EOS-1D Mark III digital SLR camera, Canon introduced a new output format option, the "sRaw" ("small raw") format. It has been offered in all subsequent middle- and upper-tier Canon dSLRs.

This format provides a Raw data suite (with which we can do all those dearly-beloved things) that describes an image with exactly half the pixel dimensions (one-fourth the pixel count) of the camera's "native" image (the one with one pixel per sensel).

And it delivers those data in a file half the size of the Raw file (which describes an image at the "native" pixel dimensions).

Is that a bargain or what!

But in doing that, there is a small compromise in our ability to recover, in the postprocessing software, the precise raw output values of each sensel.

A variant

Shortly after, Canon introduced another "(not quite so) small Raw" format, known at first as sRaw 1. The original sRaw format was then renamed sRaw 2.

But soon sRaw 2 was renamed back to sRaw, and sRaw 1 was renamed "mRaw" ("medium Raw", I suppose). (Yes, it sounds like a way to order steak cooked.)

Remember, this exercise in naming came from the folks who brought us the D30 digital camera, and later the quite different 30D.

Here, the data suite describes an image with pixel dimensions about 68% those of the "native" image (about 47% the pixel count), and the file size is about 75% that for a regular Raw file (which again describes an image with the "native" pixel count).

In order to accomplish the great bargain that sRaw and mRaw provide, a very clever thing had to be done. I'll summarize how it works for sRaw.

sRaw

We start by taking each four-sensel "Bayer cluster" on the sensor (RGGB) and using it as a true "tristimulus" pixel detector (for a "fat" pixel). (Thus we get an image with one-fourth the "native" pixel count.)
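
To make that concrete, here is a little sketch in Python (using numpy). The RGGB layout and the simple averaging of the two green sensels are assumptions made just for illustration; Canon does not publish exactly how the cluster is combined.

import numpy as np

def bayer_to_fat_pixels(raw):
    # Collapse each 2x2 RGGB Bayer cluster into one "fat" RGB pixel.
    # raw: 2-D array of sensel values, even dimensions, RGGB layout (assumed).
    r  = raw[0::2, 0::2]              # red sensel of each cluster
    g1 = raw[0::2, 1::2]              # first green sensel
    g2 = raw[1::2, 0::2]              # second green sensel
    b  = raw[1::2, 1::2]              # blue sensel
    g  = (g1 + g2) / 2.0              # combine the two greens (an assumption)
    return np.dstack([r, g, b])       # shape: (H/2, W/2, 3)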

We then transform the tristimulus output from each fat pixel detector into a "luminance-chrominance" form, reminiscent of the YCbCr transform used in regular JPEG encoding.

But there are two differences here:

• The three values going into the transform (the fat pixel's "R", "G", and "B") are not coordinates of any recognized RGB color space; the sensor works in its own color space.

• The YCbCr scheme used in regular JPEG encoding operates on the nonlinear color coordinates R', G', and B'; as a result, Y is not really luminance (we in fact often call it luma to make that clear), and the combination of Cb and Cr does not really represent chrominance (we often call the quantity they jointly describe chroma to make that clear).

In the scheme used in sRaw, the first coordinate (sometimes labeled "y") indeed essentially gives the luminance of the fat pixel, and the other two (often labeled "x" and "z") essentially give the chrominance.
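
To illustrate, here is a sketch of such a transform on one fat pixel. The matrix below just borrows the familiar Rec. 601-style weights and applies them to the linear camera-space values; Canon's actual sRaw coefficients are not published, so treat these numbers purely as stand-ins.

import numpy as np

# Illustrative luminance-chrominance matrix (Rec. 601-style weights,
# applied here to *linear* camera-space R, G, B; stand-in values only).
M = np.array([
    [ 0.299,     0.587,     0.114    ],   # y : essentially luminance
    [-0.168736, -0.331264,  0.5      ],   # x : one chrominance axis
    [ 0.5,      -0.418688, -0.081312 ],   # z : the other chrominance axis
])

def rgb_to_yxz(rgb):
    # Transform one fat pixel's (R, G, B) triple to (y, x, z).
    return M @ np.asarray(rgb, dtype=float)
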
In any case, we now record only one x-z pair for every two pixels. This is the same concept as the chrominance subsampling (there, more accurately, chroma subsampling) used in a regular JPEG file.

The rationale behind this, in both situations, is that the human eye can discern finer detail in the luminance aspect of the image than in the chromaticity aspect. Thus we can profitably use a lower resolution for chromaticity than for luminance.
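
Continuing the sketch above, the 2:1 subsampling might look like this (averaging the chrominance of each horizontal pair is itself an assumption; the camera might simply keep one of the two values):

import numpy as np

def subsample_chroma_2to1(yxz):
    # yxz: array of shape (H, W, 3) of fat-pixel (y, x, z) values, W even.
    # Keep y for every pixel; keep one x-z pair per horizontal pair of pixels.
    y  = yxz[:, :, 0]                                   # full-resolution luminance
    xz = yxz[:, :, 1:]
    xz_pairs = (xz[:, 0::2, :] + xz[:, 1::2, :]) / 2.0  # averaging is an assumption
    return y, xz_pairs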

Hey, is this a good idea before we have done such things as applying sharpening and so forth? Probably not.

mRaw

I am less clear about how mRaw works, since the tidy "Bayer cluster as a fat pixel" concept is not applicable.

This scheme also recodes the data into a luminance-chrominance form, but here only one x-z pair is recorded for every four pixels (1:4 chrominance subsampling).

Not really reversible

Of course, owing to the chrominance subsampling, the Raw development software must interpolate between the available x-z pairs to reconstruct pairs for the pixels that don't have their own, before reversing the transform to get the actual so-called "R", "G", and "B" values for each fat pixel. Thus the suite of raw data we have to work with may not be exactly what the camera started out with.
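
Just to show what is involved, here is a sketch that undoes the subsampling from the earlier sketch in the crudest possible way (duplicating each stored pair) and then inverts the illustrative matrix; a real converter presumably interpolates more carefully, but the principle is the same.

import numpy as np

def reconstruct_rgb(y, xz_pairs, M):
    # Spread each stored x-z pair back over its two pixels (crude
    # nearest-neighbour duplication), then invert the transform matrix M
    # to get back camera-space R, G, B for every fat pixel.
    xz_full = np.repeat(xz_pairs, 2, axis=1)
    yxz = np.dstack([y, xz_full[:, :, 0], xz_full[:, :, 1]])
    return yxz @ np.linalg.inv(M).T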

I have no idea just what mischief this can cause.

In practice

I have no idea how this is received in practice. I'd be pleased to hear from any members who have had experience with these format modes.

Best regards,

Doug
 