
ALPA 12 TC & SINAR Emotion

Hi Eric,

and thanks again for your thoughtful advice! After our brief conversation, I stumbled across Alpa, and boy oh boy do they have nice stuff!

Indeed, the 12 TC is a very intriguing camera, I think. Must check on Jack's fora about it.
 
Merci Thierry!

Maybe a basic question that I always stumble across: what is the difference between single-shot and multishot systems?

If I understand that correctly, there is no price difference with the Sinar DB.
 
Dear Georg,

It is rather a long one to answer! Basically, an image is made of the 3 primary colours RED, GREEN and BLUE, thus it NEEDS the colour information from the 3 primary colour channels R, G and B.

Each CCD is made of pixels. If one takes an image with these pixels as they are, one gets only B & W information. Therefore, these pixels have to be made "sensitive" to the 3 primary colours R, G and B. In reality, each pixel is fitted with a tiny little colour filter on top of it, according to a special pattern:

- the first line of pixels has R-G-R-G-R-G-... etc filters
- the second line of pixels has G-B-G-B-G-B-....etc filters
- the third line has again pixels with R-G-R-G-R-G-...etc filters
- the fourth line has then again pixels with G-B-G-B-G-B-....etc filters

.... and so on all along the CCD's surface.

One first remark: there are twice as many Green pixels as Red or Blue pixels.
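The pattern described above can be sketched in a few lines of Python. This is a hypothetical illustration only; real sensors may start the pattern in a different corner of the chip:

```python
# Sketch: build the Bayer colour-filter pattern described above and
# confirm that green filters outnumber red and blue two to one.

def bayer_filter(row, col):
    """Return the colour filter ('R', 'G' or 'B') at a given pixel."""
    if row % 2 == 0:                 # lines 1, 3, 5, ... : R-G-R-G-...
        return 'R' if col % 2 == 0 else 'G'
    else:                            # lines 2, 4, 6, ... : G-B-G-B-...
        return 'G' if col % 2 == 0 else 'B'

def count_filters(width, height):
    counts = {'R': 0, 'G': 0, 'B': 0}
    for r in range(height):
        for c in range(width):
            counts[bayer_filter(r, c)] += 1
    return counts

counts = count_filters(6, 6)   # a tiny 6x6 patch of the mosaic
print(counts)                  # {'R': 9, 'G': 18, 'B': 9} -> twice as many greens
```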

When capturing an image in single-shot mode, one can easily understand that EACH pixel ONLY captures the light information of the colour channel of the filter on top of it: the pixels with Red filters capture ONLY the red information, those with Blue filters ONLY the blue information, and those with Green filters ONLY the green information.

Does a bell ring now?!

YES, each pixel is missing the colour information for 2 basic colour channels out of 3! Pretty bad, isn't it?

Well, not that bad, although it comes with some problems: ever heard about interpolation?
The interpolation process uses a very "clever" and complicated algorithm to calculate the 2 missing colours for each pixel. It is difficult to explain in detail how this works, and each software has its own (secret) algorithm, but basically it looks at the neighbouring pixels and how much information they have captured (this mainly concerns the Chrominance information) to calculate and build up the 2 missing colours for each pixel. One reason for having twice as many Green pixels (remember?) is that the Luminance information for the 3 colour channels is calculated from those green pixels.

The result of this rather complicated and long interpolation process is .... a colour image, BUT WITH interpolated (calculated) colours. What does this mean? Although interpolation algorithms are today pretty good in many situations, giving nearly perfect (interpolated) colours, there are situations where the algorithm makes mistakes and creates artifacts or wrong colours (by interpolating/calculating wrongly). This makes sense, and one would be surprised by the contrary. Actually, one wonders how the resulting colour images are still that good!
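As a rough illustration of the interpolation idea (not any manufacturer's actual algorithm, which is secret), here is a minimal neighbour-averaging sketch in Python:

```python
# Minimal demosaicing sketch: each pixel has only one measured channel;
# the two missing channels are estimated by averaging the nearest
# neighbours that did measure them. Real converters use far more
# sophisticated (and proprietary) algorithms.

def bayer_filter(row, col):
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

def demosaic_pixel(mosaic, row, col):
    """Estimate (R, G, B) at one interior pixel of a single-shot mosaic
    (interior assumed, so all three filters occur in its 3x3 neighbourhood)."""
    h, w = len(mosaic), len(mosaic[0])
    seen = {'R': [], 'G': [], 'B': []}
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = row + dr, col + dc
            if 0 <= r < h and 0 <= c < w:
                seen[bayer_filter(r, c)].append(mosaic[r][c])
    return tuple(sum(v) / len(v) for v in (seen['R'], seen['G'], seen['B']))

# A uniform grey subject: every filter sees the same value, so the
# interpolated colour comes out neutral.
mosaic = [[100] * 4 for _ in range(4)]
print(demosaic_pixel(mosaic, 1, 1))   # (100.0, 100.0, 100.0)
```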

Here comes the multishot mode!

Today, all multishot backs take 4 images: after the first image, the CCD is moved by a very tiny distance of exactly 1 pixel in the horizontal direction. Then the second shot is taken. Now the CCD is moved downwards by exactly 1 pixel size, and the third shot is taken. Finally, the CCD is moved again horizontally (in the opposite direction to after the 1st shot) by exactly 1 pixel, and the 4th shot is taken. After that, the sensor is automatically driven back to its initial position.
But the result of this is something very interesting: this way one has captured, for EACH pixel, the colour information for all 3 colour channels, even the green information twice. It is now VERY easy (and MUCH faster than any interpolation algorithm) to calculate and assemble the 4 images together to get a ....

TRUE COLOUR image with no interpolation and no interpolation artifacts whatsoever. This image, created by means of 4 shots, is the ONLY way to get a 100% true reproduction of the subject, in full resolution.
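The 4-shot idea can be shown with a small Python sketch. The shift order below is an illustrative assumption following the right/down/left sequence described above:

```python
# Sketch of the 4-shot idea: moving the CCD by exactly one pixel between
# exposures slides the Bayer mosaic across the subject, so every subject
# point is eventually measured through an R, a B and two G filters.
# True-colour assembly is then simple, with no interpolation needed.

def bayer_filter(row, col):
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

# Sensor offsets for the 4 shots: start, right, right+down, down.
SHIFTS = [(0, 0), (0, 1), (1, 1), (1, 0)]

def filters_seen(row, col):
    """Which colour filter samples this subject point in each of the 4 shots."""
    return sorted(bayer_filter(row + dr, col + dc) for dr, dc in SHIFTS)

for point in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(point, filters_seen(*point))   # every point sees ['B', 'G', 'G', 'R']
```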

One "little" problem with the 4-shot mode: one CAN shoot ONLY static and non-movable subjects, obviously!
And the camera needs to be as stable as possible between the 4 shots, without any vibration (mirror slap, bad tripod, wooden floor, studio in a location with heavy car traffic outside, high located floors, etc ...).

Important remark: this also explains why the RAW file size of a single-shot image is 1/3 of the size of a multishot RAW, before interpolation!

Now, there is another possibility when using a multishot back: the 16-shot mode, also called MICROSCANNING. With this mode the CCD is moved 4 times in each direction, each time by 1/2 (HALF) of a pixel size. The result is an image with 4 times higher resolution (and obviously 4 times the file size) and much more detail captured.
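A quick back-of-envelope sketch of those numbers, using the 33 MPx / 7.2-micrometre figures quoted later in this thread (illustrative values only):

```python
# 16-shot (microscanning) arithmetic for a hypothetical 33 MPx back
# with 7.2 um pixels.

pixels_single = 33_000_000
pixel_pitch_um = 7.2

# Half-pixel steps double the sampling density along each axis...
linear_factor = 2
# ...so the assembled image has 2 x 2 = 4 times as many pixels.
pixels_16shot = pixels_single * linear_factor ** 2
sample_spacing_um = pixel_pitch_um / linear_factor

print(pixels_16shot)        # 132000000 -> a 132 MPx image
print(sample_spacing_um)    # 3.6 -> a sample every 3.6 um
```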

Why should one use this, is 4-shot not enough? Well, remember that the pixels sit beside each other with a very small gap between them. During the 4-shot process, the subject information falling "in between", in this gap, IS NOT captured. There are situations where one wishes to get the ultimate detail: that is what the 16-shot is for, to capture very high-definition images.
BUT, this 16-shot process requires the use of the best digital lenses available: it is recommended to use the so-called HR Digital lenses (High Resolution lenses).

Needless to say, the camera has to be ABSOLUTELY stable and vibration-free during this 16-shot process, even more than for the 4-shot mode.

So in other words, if one wishes absolutely true and ultimate quality in digital, the multishot process is the only option giving this result, with the necessity however to shoot static and non-moving subjects.

One last question: HOW can one move the CCD by exactly and only the distance of 1 pixel (1 pixel = 7.2 micrometres in the 33 MPx sensors!) in the 4-shot mode, and even by HALF a pixel for the 16-shot?

Here comes into play a very precise (and expensive) feature of these multishot backs: the piezo plate. The CCD is fixed on this piezo plate, which is "activated" (actually pushed/moved) on its 4 sides by a crystal. This crystal can expand by a certain amount, depending on the voltage one puts on it. It is very easy to calculate how many volts it takes to make this crystal expand by exactly the size of a pixel. When one now applies this voltage to the crystal, it will expand and push the piezo plate by exactly this distance, thus the CCD sensor will move AS WELL by exactly the distance of one pixel (or half a pixel for the 16-shot mode).
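The voltage calculation can be sketched as follows. The actuator constant below is purely hypothetical, since real piezo stages are calibrated individually with their drive electronics:

```python
# Hedged sketch of the piezo drive calculation. Over its working range,
# the expansion of a piezo actuator is roughly linear in the voltage:
#     displacement = k * voltage
# K_UM_PER_VOLT is a made-up illustrative constant, not a Sinar spec.

K_UM_PER_VOLT = 0.06        # hypothetical actuator constant (um/V)
PIXEL_UM = 7.2              # pixel size of the 33 MPx sensor

def drive_voltage(displacement_um, k=K_UM_PER_VOLT):
    """Voltage needed to expand the crystal by a given distance."""
    return displacement_um / k

print(round(drive_voltage(PIXEL_UM), 6))       # 120.0 V: full-pixel step (4-shot)
print(round(drive_voltage(PIXEL_UM / 2), 6))   # 60.0 V: half-pixel step (16-shot)
```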

This is roughly explained, but nevertheless, IMO, understandable.

Best regards,
Thierry

 
Oh my God.


LOLOLOLOL I had NO IDEA!

I thought I was asking just a harmless little question that probably has an obvious answer attached.

Thank you so much for that in depth explanation Thierry! Outstanding!

In reality, each pixel is fitted with a tiny little colour filter on top of it, according to a special pattern:
I guess that would be the equivalent to the Bayer Pattern on my Olympus! Right?

I promise my subsequent question will not require so much work to answer.


What is Sinar's policy on warranty for such sophisticated backs? How many years are standard? Can that be extended for an additional fee?

Is there something in place to cover the owner in case the back has to be sent in for repair for a longer time? Can he get a substitute if he has to complete work, urgent assignments, etc.? If so, at what cost?

I am just thinking about the time when I can afford such a back; I would certainly like to have a peace-of-mind solution.

This is roughly explained, but nevertheless, IMO, understandable.

Roughly explained? You gotta be kiddin me!

Seriously, very understandable indeed!

We have to go for a few Koelsch when we meet in Cologne; I owe you one! Hope you can make time then. Have you ever been to "Die Malzmuehle"? Best beer in town, well, 15 years ago that was, and hearty food! I am sure they are still around.

P.S. Was it correct to say there is no price difference between single-shot and multishot Sinar backs? - I think I read that somewhere on another forum. -

P.P.S

So in other words, if one wishes absolutely true and ultimate quality in digital, the multishot process is the only option giving this result, with the necessity however to shoot static and non-moving subjects.

Just out of curiosity: I understand this was developed mainly for studio/product applications; however, do you know by any chance whether some landscape shots exist that were taken with a multishot back? The odd totally windless day, a lake, reflections, something like this. I imagine this to be possible and would love to see such results! I saw a shot of a nautilus in an HP presentation that was made with a Sinar; I could not take my eyes off it, such an excellent execution, this picture was just outstanding! Now I wonder whether this was perhaps a multishot.
 
Oh my God.

LOLOLOLOL I had NO IDEA!

I thought I was asking just a harmless little question that probably has an obvious answer attached.

Thank you so much for that in depth explanation Thierry! Outstanding!

Yes, it was harmless! LOL. You're welcome!

I guess that would be the equivalent to the Bayer Pattern on my Olympus! Right?
Yes, it is the Bayer Pattern.

What is Sinar's policy on warranty for such sophisticated backs? How many years are standard? Can that be extended for an additional fee?
We have a contractual 2-year warranty period for all products, including digital backs, PLUS an additional FREE 1-year warranty as soon as one registers the product in our SOG (Sinar Owners' Group), which is done in 1 minute and free of charge. So, in total, a 3-year warranty. We also guarantee repairs and service for our products for 10 years after their discontinuation.
So I guess one can say that it is a pretty secure investment.

Is there something in place to cover the owner in case the back has to be sent in for repair for a longer time? Can he get a substitute if he has to complete work, urgent assignments, etc.? If so, at what cost?
This is dealt with by our distributors. Service/back-up units are certainly available.


Roughly explained? You gotta be kiddin me!
Yes, I was!

Seriously, very understandable indeed!

We have to go for a few Koelsch when we meet in Cologne; I owe you one! Hope you can make time then. Have you ever been to "Die Malzmuehle"? Best beer in town, well, 15 years ago that was, and hearty food! I am sure they are still around.
Thanks!

P.S. Was it correct to say there is no price difference between single-shot and multishot Sinar backs? - I think I read that somewhere on another forum. -
Prices can (and do) vary from one country to another. But basically, the Sinarback eMotion 75 single-shot (33 MPx) has the same price as the eVolution 75H multishot (33 MPx).
All multishot Sinarbacks of course also have the single-shot capability built in.


Just out of curiosity: I understand this was developed mainly for studio/product applications; however, do you know by any chance whether some landscape shots exist that were taken with a multishot back? The odd totally windless day, a lake, reflections, something like this. I imagine this to be possible and would love to see such results! I saw a shot of a nautilus in an HP presentation that was made with a Sinar; I could not take my eyes off it, such an excellent execution, this picture was just outstanding! Now I wonder whether this was perhaps a multishot.

There have been landscapes taken with multishot backs. In the early days of digital I even took a landscape with a Dicomed scanner and about a 20-minute exposure. But yes, of course, one has to make sure that absolutely nothing in the subject is moving.

Best regards,
Thierry
 
There have been landscapes taken with multishot backs. In the early days of digital I even took a landscape with a Dicomed scanner and about a 20-minute exposure.

WOW, can you post that 20 minute scan and the multishot pictures Thierry? ...

3 years' cover and 10 years of repairs and parts is reassuring. However, the operating instructions do not state the type of warranty; maybe it is in the terms and conditions. Does it cover 3 years' parts and labour globally, regardless of where the back was purchased? Say I purchased it in Asia and need service in Europe, I would expect that not to be a problem.

The combined operating instructions (rev00- 5.2.2008) for the 54LV/75 and 75LV state a few things that trigger questions:

The sensitivities that can be set on the Sinarback eMotion range from ISO 25 to ISO 800
depending on the model.

What are the steps up from ISO 25 and, most of all, which model can operate at ISO 25?

The technical data at the end of the document state ISO 50-400 for the 54LV and ISO 100-800 for the 75/75LV. The operating instructions for the eVolution 75h also speak of ISO 50-400. No mention of ISO 25.

When formatting the internal memory of the Sinarback eMotion the reference data
that are saved on the Sinarback will be deleted. This reference data, however, can be
recalled from the reference CD that is delivered with the digital back.

What does this reference data refer to?

Scouting the Internet to learn more about it, I read on the Alpa site that the Kodak chips are supposed to be more prone to colour cast than the Dalsa; however, the Dalsa would be more prone to "centerfold". I understand that there are special software tools available to compensate for/deal with it, but what on earth is centerfold?

... praying that this question does not require you to write another "rough explanation" <smile>

P.S.
Just a thought: I would think it is an idea to split this thread from post #4 on into a separate thread titled "Sinar digital backs explained!" or something like that. There is so much valuable information Thierry has given us here.
 
WOW, can you post that 20 minute scan and the multishot pictures Thierry? ...
Sorry, they are all with the colleague with whom I did the shots, and I've lost contact with him! But you can believe me, they were astonishingly detailed.

3 years' cover and 10 years of repairs and parts is reassuring. However, the operating instructions do not state the type of warranty; maybe it is in the terms and conditions. Does it cover 3 years' parts and labour globally, regardless of where the back was purchased? Say I purchased it in Asia and need service in Europe, I would expect that not to be a problem.
Yes, warranty is worldwide and covers parts and labour, under the normal circumstances (no wrongdoing by the customer).

The combined operating instructions (rev00- 5.2.2008) for the 54LV/75 and 75LV state a few things that trigger questions:

What are the steps up from ISO 25 and, most of all, which model can operate at ISO 25?

No, that's obsolete since the latest firmware update of the eMotion backs. This firmware changes the NOMINAL ISO settings from ISO 50 to 400 for the eMotion 22 and 54 LV (previously 25 to 200), and from ISO 100 to 800 for the eMotion 75 and 75 LV (previously 50 to 400). Please note that this has not changed the effective ISO sensitivity, only the nominal one. The reason is that our ISO rating was conservative, low by 1 f-stop.

The technical data at the end of the document state ISO 50-400 for the 54LV and ISO 100-800 for the 75/75LV. The operating instructions for the eVolution 75h also speak of ISO 50-400. No mention of ISO 25.
See my explanation above.

What does this reference data refer to?

Scouting the Internet to learn more about it, I read on the Alpa site that the Kodak chips are supposed to be more prone to colour cast than the Dalsa; however, the Dalsa would be more prone to "centerfold". I understand that there are special software tools available to compensate for/deal with it, but what on earth is centerfold?
The centerfold issue is something rather long to explain, and would need at least an explanation as long as the one about multishot! But I will skip that and try to make it short.

It comes from the way the sensor is built: as a result, it sometimes, under certain light conditions, shows a vertical centerline with colour casts on each side of it (sometimes even horizontal lines). This issue is automatically dealt with and corrected in Captureshop, so you don't have to worry about it (unlike with other manufacturers' backs).
 
Michel,

what I can tell you is that many tests have been done with some "developer kits", and for many years: up to now those tests have not been satisfactory, on several levels.

Best regards,
Thierry

Thierry,

Could it be possible for Sinar to follow the Foveon sensor path? Then all backs would be single shot?

Michel
 

Ray West

New member
Hi Thierry,

A very good explanation of a complicated method of overcoming what, on the face of it, is a simple problem.

So, if the exposure is calculated as, say, 1/200 second, how long does it take for the four or 16 shots to be made, and what are the in-camera processing time and buffer size? Does the time vary much in relation to the shutter speed? If you have mentioned it before, I'm sorry I missed it. The technique is sort of the reverse of image stabilisation, maybe micro stitching.

Some interesting 'sums' needed, since I guess the photo sites are not on a rectangular grid.

The raw file that is output: is it possible to get each image for each position from that, or are they completely added together? (Thinking that if something were moving relatively fast, could you get the movement as separate frames?)

Best wishes,

Ray
 
hi Ray,

I am not sure what your post is about and what it is referring to, whether it is a question to me, and if so, which question(s). Possibly I missed something, or else let me have my morning coffee first.

But to answer your question about "how long it takes for a multishot":

- basically, multishot is typically used in studios and with strobes (not exclusively, but mainly). The main factor slowing down the process is not the capture itself (related to the exposure time), but the recycling time of the power packs before they are ready for the next shot.
- there is no buffer/processing in the back itself, other than the readout of the data and A/D conversion
- depending on which computer (processor/RAM) you are using, the data is read out/transferred/stored to the computer a little faster or slower, but all in all, as said above, the main factor is the recycling time of the light power packs.
- typically, the readout/transfer/storage takes about 1 sec. per frame, which means the back is ready for the next shot after about 1 second. Usually, you will set your "flash delay" to a longer time than this. Most known power packs need longer than 1 second to reach full charge with full capacity: typically one sets a "flash delay" of about 3 seconds between the shots, to be sure the pack is really at 100%.
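Those figures can be put together in a small timing sketch (assuming, as above, that the flash delay rather than the ~1 s readout is the limiting interval between frames):

```python
# Rough timing sketch for a studio 4-shot sequence: a ~3 s "flash delay"
# between frames so the strobe pack is back at 100% before each exposure.

def multishot_duration(shots, exposure_s, flash_delay_s):
    """Total wall-clock time from first exposure to end of last,
    assuming the flash delay dominates the inter-frame interval."""
    return shots * exposure_s + (shots - 1) * flash_delay_s

# 4-shot at 1/125 s with a 3 s flash delay between frames:
print(round(multishot_duration(4, 1 / 125, 3.0), 3))   # 9.032 seconds
```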

Then, the technique to achieve the multishot does in fact use a piezo plate, moved by a tiny crystal on which one puts a certain voltage. As soon as one puts current on a crystal, it will expand: depending on the voltage put on the crystal, it expands more or less. It is pretty easy to calculate the exact voltage to get the amount of dilatation needed; e.g. for a sensor having pixels with a size of 9 microns, the crystal needs to expand and push the piezo plate by 9 microns (when using the 4-shot mode, where the sensor is moved by one pixel after each shot). That is how the sensor is moved: a very precise, well-mastered technique. However, it starts to become very tricky when the pixels get smaller, as is the case with the current sensors. That is the reason why you don't yet see any 16-shot with a 33 or 39 MPx sensor: it needs in this case a movement of 1/2 the pixel size, which is around 3 microns. Needless to say, this is much more difficult to obtain.

The 4 images (or 16, in the case of the 16-shot) can in theory be obtained separately, but in practice the software will add them together (making sure that they map onto each other and that there is no mis-registration).

I hope this answers your question(s)?

Best regards,
Thierry

 

Ray West

New member
Hi Thierry,

Thanks for the answer. I was thinking that if the total full exposure time, or whatever you call the multishot-mode time from start to finish, was short, then for landscape work out of doors, if each shot can be at a fast aperture, the total exposure time may not be much different from a few years ago, when slower films were used (a few years may equal 100 ;-)

So, keeping to simple figures, and assuming an exposure which takes 1/100 sec: we need four, so the total time for the object to be stationary is (4 * 0.01) + (3 * m) seconds (where m is some memory-writing delay, or similar, necessary between each shot). If m is 1 second, then the total exposure time is 3.04 seconds, that being the time that the subject needs to be relatively stationary. It seems that m is about a second, based on your reply, and I'm thinking that in many situations it is quite usable, in the same manner as the early film landscape photographers who had to use the T or B setting (or simply removed a lens cap and counted), although I suspect that the softer lenses of the day and other factors masked some of the motion blur.
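Ray's arithmetic above checks out; a short Python sketch confirms it:

```python
# Ray's figures, checked directly: four 1/100 s exposures, with a
# ~1 s write (m) in each of the three gaps between shots.

def total_stationary_time(shots, exposure_s, m_s):
    return shots * exposure_s + (shots - 1) * m_s

t = total_stationary_time(4, 1 / 100, 1.0)
print(round(t, 2))   # 3.04 -> the subject must hold still for about 3 seconds
```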

I would imagine that microstepping tests the resolving power of the lens.

Best wishes,

Ray
 
OK Ray, got you!

Yes, you are right, we are not far in multishot mode from the exposure times a "few" years back.

m = 1 sec. can be reduced with backs having a lower resolution (6 or 11 MPx) and a powerful Mac with maximum RAM.

Yes, you are right, microstepping only makes sense with the best lenses available, e.g. the HR lenses. In this respect, the Rodenstocks are outstanding lenses.

Best regards,
Thierry

 