
News: Sinar Added to Leica's Portfolio!

Asher Kelman

OPF Owner/Editor-in-Chief
It was attempted previously but never went through. Now it's official:


Leica Camera AG is specialist camera manufacturer Sinar Photography AG
The Leica Camera AG, Solms, has the Swiss manufacturer of professional cameras, Sinar Photography AG, Zurich taken on Monday. The details of the transaction, both companies have agreed not to disclose. The Sinar Photography AG is the leading manufacturer of cameras for professional professional photography. As the only supplier in this segment holds Sinar digital ready solutions for professional photographers from camera lenses, digital backs and closure systems to workflow software. Marketing, product management, support, development and production of Sinar remain in Switzerland. Sales will be handled in the future on the Leica Camera AG, as well as through its worldwide dealer network Sinar Photography AG.

The Leica Camera AG thus enhancing its offering in the high-end digital cameras segment and expand its position in the market for professional photography further. Leica offers the Leica S-System according to its own brand philosophy to a highly mobile, integrated medium format system, which now is the perfect complement to the modular and particularly for the studio, industrial, and architectural photography suitable Sinar system. Synergies also arise in the areas of digital technology, sales and customer care.

Leica now holds ready a complete range of medium format camera to digital professional camera, making it the only complete provider of digital camera solutions above the Leica format 35 mm. Both brands are known for each of the highest quality standards and their decades long tradition of professional photography, in which the best picture and the photographer in focus.




Sinar is too important a brand to let go out of business, and, as with Minolta's acquisition by Sony and Pentax's by Ricoh, this allows the heritage of a very fine camera maker to continue to thrive.

It's also feasible that the Hy6, the Sinar-Leaf-Rollei-designed MF body for digital backs, would be available to Leica, and that would mean a platform for a Leica digital back. It could even be that Leica would further advance the existing Sinar back with their latest technology as a shortcut to a new Leica MF camera!

asher
 

Doug Kerr

Well-known member
Hi, Asher,
It was attempted previously but never went through. Now it's official:

Very interesting.

That translation of the notice has a bit of "Throw Mama from the train a kiss" to it. This is a little less jarring:

On Monday, Leica Camera AG, Solms, finalized the takeover of Sinar Photography AG, Zurich, the Swiss manufacturer of view cameras. Both companies have agreed that the details of the transaction will not be disclosed. Sinar Photography AG is the leading manufacturer of equipment for professional view camera photography and the only provider in this segment to offer complete digital solutions for view camera photographers. The Sinar product portfolio offers everything from cameras and lenses to digital backs, shutter systems and workflow software. Marketing, product management, support, development and production will remain in the hands of Sinar in Switzerland. Sales and distribution of the product portfolio will be handled by Leica Camera AG and the worldwide dealer network of Sinar Photography AG.

With this takeover, Leica Camera AG completes the company’s portfolio in the high-end digital camera segment and further expands its position in the market for professional photographic equipment. With the Leica S-System, Leica offers a high-mobility, integrated, medium-format digital camera system that is completely in line with the company’s long-standing brand philosophy. This system is now ideally complemented by the modular Sinar system and its particular suitability for studio, industrial and architectural photography. Synergies will also develop in the fields of digital technology, sales and distribution and in customer care.

Leica now possesses a complete product portfolio from a medium format digital camera system to digital view cameras and, as a result of the takeover, is now the only full range provider of digital camera systems in formats larger than the 35 mm Leica full-frame format. Both brands are renowned for their highest quality standards and decades of commitment to the needs of professional photographers and the creation of the best possible images.

About Sinar Photography

The history of the company with its headquarters in Zurich, Switzerland, began in 1879 when the Swiss photographer Carl August Koch opened a studio in Schaffhausen. His grandson, Carl Hans Koch, designed and constructed the first Sinar camera and put it on the market in 1948. Thanks to its precise construction and a uniquely versatile concept of modular and standardised components, it quickly became an essential tool in leading photo studios around the world. The name ‘Sinar’ stands for Studio, Industrial, Nature, Architectural and Reproduction photography.

About Leica Camera

Leica Camera AG is an internationally operating, premium-segment manufacturer of cameras and sport optics products. The legendary status of the Leica brand is founded on a long tradition of excellence in the construction of lenses. And today, in combination with innovative technologies, Leica products continue to guarantee better pictures in all situations in the worlds of visualisation and perception. Leica Camera AG has its headquarters in Solms, in the state of Hesse in Germany, and a second production site in Vila Nova de Famalicão, Portugal. The company operates branch offices in England, France, Japan, Singapore, Switzerland, South Korea, Italy, Australia and the USA. New and innovative products have been the driving force behind the company's positive development in recent years.

Ref:

http://leicarumors.com/2013/11/26/l...-the-swiss-manufacturer-of-view-cameras.aspx/

Best regards,

Doug
 
It was attempted previously but never went through. Now it's official:

<snip>
This is absolutely great news Asher… what failed 5 years ago, is now a reality. There are some really important consequences of this "marriage":
1. The future survival of MFDBs is secured…
2. MF competition is secured, even if Hasselblad should crash…
3. Survival of the multishot MFDB technique is also secured
4. View camera digital photography will survive

It won't surprise me if the following also become true in the near future…
A. A new series of "self-contained" Sinarbacks will soon be released (currently they make only tethered backs, all MS).
B. A new Leica (or Sinar) MF camera with interchangeable-back capability, to better support Sinarbacks, will also be introduced.
 

Asher Kelman

OPF Owner/Editor-in-Chief
This is absolutely great news Asher… what failed 5 years ago, is now a reality.

<snip>


Theodoros,

How do you infer that multi-shot is also secured for the future?

Is that one of Sinar's strengths? Nicolas Claris had one, of course; maybe we'll hear from him.

Asher
 
How do you infer that multi-shot is also secured for the future?

<snip>
Asher, Sinar has discontinued ALL their self-contained (i.e. able to shoot untethered) MFDBs; their current line has only 3 MFDBs, all capable of multishot and all tethered-only. Nicolas used the Sinarback 75LV (probably the best single-shot MFDB I've ever tried), which was "self-contained" (with file storage card, battery and rear screen; it even has 6GB of internal memory and can shoot without a card). It is now discontinued, along with all the other "self-contained" Sinarbacks, because of the discontinuation of the Rollei Hy6 camera, which was the main "support" camera of the system…. That is why I think Sinar (or Leica) will reintroduce "self-contained" backs, and that it is now very possible we will also see a new Sinar or Leica MF camera taking interchangeable backs, so that there is a new "support" camera for the backs, just as Phase One and Leaf are using Mamiya at the moment.
The continuation of multishot backs is secured because Sinar has always been the strongest supporter of MS; they base their digital marketing on MS, and their leading MFDB has always been a multishot type (not to mention that currently they make only MS MFDBs)… They even made their latest camera, a special version of their view camera for copying artwork, to support their current MS MFDBs, because MS is essential for copying artwork.
 

Doug Kerr

Well-known member
Hi, Theodoros,

Am I correct that in "multishot" operation, a CFA sensor is used, but for each image, three or four exposures are taken, with the sensor shifted between them so that each pixel location of the image is captured in succession by each of three photodetectors (with different chromatic filters) or each of four photodetectors (two of them perhaps having nominally the same filter)?

Thus each "color aspect" of the image is directly captured at every pixel location.

Is that the actual significance of the term?

Thanks.

Best regards,

Doug
 
Am I correct that in "multishot" operation, a CFA sensor is used, but for each image, three or four exposures are taken, with the sensor shifted between them?

<snip>

Hi Doug,

There are two flavors of multi-shot.

One is a resolution enhancing feature, with a (2-dimensional) half sensel shift, thus effectively doubling the sample density, albeit with overlapping sampling apertures (which helps the MTF roll-off towards Nyquist).

The other is a color accuracy sampling feature, with a (2-dimensional) full sensel shift, thus allowing each sensel position to acquire full R/G/B samples despite a Bayer CFA equipped sensor array.

These two features can be combined, which requires a total of 16 exposures.

That obviously also requires a lot of time to acquire a full resolution image, and a rock solid setup to avoid camera and subject movement during the total capture cycle. Also, a very stable illumination is required to avoid brightness fluctuations between the sub-exposures.

The sensel offsets are piezo-element driven. When everything goes well, the result is an image with very high spatial and color resolution, a high photon count, and thus a high SNR.
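To make the color-accuracy flavor concrete, here is a minimal Python sketch of four full-sensel-shift exposures on a Bayer array; the tiny array, the wrap-around shift and the noiseless "scene" are illustrative assumptions, not any vendor's actual pipeline:

    import numpy as np

    H, W = 4, 6                          # tiny sensor, in sensel units
    rng = np.random.default_rng(0)
    scene = rng.random((H, W, 3))        # idealized RGB irradiance per site

    # Bayer CFA: channel sampled at each (row, col) parity (0=R, 1=G, 2=B)
    bayer = np.array([[1, 0],            # G R
                      [2, 1]])           # B G

    def expose(dy, dx):
        """One exposure with the sensor shifted by whole sensels
        (modelled as shifting the CFA; wrap-around for brevity)."""
        out = np.zeros((H, W))
        for y in range(H):
            for x in range(W):
                out[y, x] = scene[y, x, bayer[(y + dy) % 2, (x + dx) % 2]]
        return out

    # Four exposures at offsets (0,0), (0,1), (1,0), (1,1): every site is
    # eventually seen through R, B and (twice) G filters.
    rgb = np.zeros_like(scene)
    counts = np.zeros_like(scene)
    for dy in (0, 1):
        for dx in (0, 1):
            shot = expose(dy, dx)
            for y in range(H):
                for x in range(W):
                    ch = bayer[(y + dy) % 2, (x + dx) % 2]
                    rgb[y, x, ch] += shot[y, x]
                    counts[y, x, ch] += 1
    rgb /= counts                        # the two G samples are averaged

    assert np.allclose(rgb, scene)       # full RGB at every site, no demosaicing

The combined 16-exposure mode repeats these four color positions at four half-sensel offsets, landing additional samples between the original sites.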

Cheers,
Bart
 
Am I correct that in "multishot" operation, a CFA sensor is used, but for each image, three or four exposures are taken, with the sensor shifted between them?

<snip>
Hi Doug… you are correct: in multishot there is multiple capture in "true colour" of a (still) scene, because the sensor's Bayer pattern is used in positions that differ by one pixel, so that each pixel position records the lighting information twice in green, once in red and once in blue… More than that, there are a very few backs that do 16x multishot capture (Imacon calls it "microstep"), where the sensor is also moved by half a pixel, so the process is done 4x4=16 times in total.

This way,

A. A pixel is split into four true-colour pixels and real resolution is multiplied by four…

B. Even the spaces between pixels are recorded with true-colour photon information,

C. The "load" on the lens (the lens resolution requirement) is still that of the original sensor (since each capture is made on a given-resolution image area), so the problem of the lens being unable to satisfy the sensor's resolution is eliminated.

D. The MFDB's performance in multishot (this also applies to 4x MS captures) is independent of the processor's technology (in fact the MFDB's image processor isn't involved at all in image quality) and keeps up with modern technology as software advances over time… hence MS captures benefit from the latest image-processing software no matter how old the sensor or the MFDB is…

Just to give you an idea of how things compare (in case you don't have your own experience): if you were to compare Asher's 80mp back with my Imacon 528c (used in 16x microstep for an 88mp result) on a still shot, the result would have an obvious resolution difference even printed at A4 size, and the colour difference would be simply incomparable, no matter how superb you think the 80mp back's colour is… more than that, the DR difference is more than 2 stops… Simply put, trying to imagine from reading an article what high-resolution MS looks like is impossible… It's incomparable with anything; you have to see it to believe it, and you have to see it done correctly, because many are impressed by having seen "poorly done" MS…
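As a quick numeric check of the figures quoted above (a sketch; the 22 MP and 88 MP values for the 528c are the ones given in this post):

    sensor_mp = 22                   # native sensel count of the back, megapixels
    four_shot_mp = sensor_mp         # 4x MS: full-sensel shifts, same count, true colour
    sixteen_shot_mp = 4 * sensor_mp  # 16x: half-sensel shifts halve the sample
                                     # pitch in each axis, so 2 x 2 = 4x the samples
    print(four_shot_mp, sixteen_shot_mp)   # -> 22 88, matching the "88mp result"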
 

Doug Kerr

Well-known member
Hi, Theodoros,

Hi Doug… you are correct: in multishot there is multiple capture in "true colour" of a (still) scene, because the sensor's Bayer pattern is used in positions that differ by one pixel…

<snip>
Thank you so much. That is all very helpful to me.

Best regards,

Doug
 

Doug Kerr

Well-known member
Hi, Theodoros,

More than that, there are a very few backs that do 16x multishot capture (Imacon calls it "microstep"), where the sensor is also moved by half a pixel, so the process is done 4x4=16 times in total.

This way,

A. A pixel is split into four true-colour pixels and real resolution is multiplied by four…

Does that mean that the image output has four times as many pixels as there are sensels in the sensor?

Thanks.

Best regards,

Doug
 
Does that mean that the image output has four times as many pixels as there are sensels in the sensor?

<snip>
Exactly Doug! 16x is the only case in photography where you get REAL quadrupling of resolution; you can't have this with an 80mp back over a 22mp back (nowhere near, actually) or with 8x10 film over 4x5 film! The image with 16x is (as far as resolution is concerned) really four times better than 4x… now just imagine that any 4x capture (from a 22mp sensor and above) already out-resolves an 80mp back, because all kinds of aberrations are eliminated, as is any possibility of moiré, and per-pixel noise is drastically reduced to near zero!
In fact, it is a good test to check that you are doing everything right: you shoot a subject at 4x and then at 16x, and if you find that 16x is not four times better than 4x, you are doing something wrong! I say this because you'll find many posts on web forums where people claim that "16x is almost impossible to do", "you can't breathe when you are doing 16x", "road noise can ruin it", "4x is sharper because they get blur when doing 16x" …and other nonsense! They are doing something wrong! I get 100% success with my 16x shots (in fact I only shoot 16x when in MS mode), with music on the radio, and not only with the "quiet" Contax 645 but with the Fuji GX680 too, which many consider "impossible to do even 4x with" because you can't lock the mirror up permanently (you can't on the Contax either) and the huge mirror "slaps" the camera, creating a huge amount of vibration… Nonsense, I say… Maybe they should check their ultra-expensive "super-duper" tripod twice for quality… maybe it's not as great as they think!
 

Doug Kerr

Well-known member
Hi, Theodoros,

Exactly Doug!

Having confirmed that, then I note you say:

C. The "load" on the lens (the lens resolution requirement) is still the one of the original sensor (since each capture is made on a given resolution image area), thus the problem of having the lens unable to satisfy the sensor's resolution is eliminated.

Well, suppose we will generate an image 8000 px high by way of this technique (using a sensor we think of as 4000 px high).

I think you are suggesting that, to "fully support" this image (that is, so that the lens MTF is not limiting with regard to the resolution of the delivered digital image), the lens need only have the resolution it would need to have to fully support a 4000 px high image under "normal" technique.

If that's what you are suggesting, I don't think so.

If you are suggesting something else, help me to understand just what that is.
If the lens, by virtue of its MTF, generates an optical image that will (just) "support" a 4000 px high digital image, then if we sample that optical image at a pitch corresponding to 8000 px high, the resolution potential of that 8000 px high digital image is not "fully supported" by the lens (that is, by the optical image generated by the lens).

This is equally true if we sample 8000 times in the picture height:

• With a sensor 8000 px high, or

• In four stages, with a sensor that is 4000 px high.

If your model were true, then we could generate an 8000 px high digital image, with its resolution potential not limited by lens MTF, by using a lens whose MTF would only fully support an image (under normal technique) 800 px high, and sampling it in 100 stages.

Overlaying four "staggered" 4000 px high digital images sampled from an optical image with resolution limited by the lens MTF to support a 4000 px high image cannot create an 8000 px high image with resolution to match. That degree of "detail' is just not present in the optical image generated by the lens (which of course "has no idea" what we are going to do with it, how if at all we are going to capture it). We cannot "increase" the amount of detail present by sampling it in 16 passes.

Best regards,

Doug
 
Well, suppose we will generate an image 8000 px high by way of this technique (using a sensor we think of as 4000 px high).

<snip>
What you miss, Doug, is that the lens "also moves" (but without any movement (!!!) whatsoever). You do get 8000 lines on a sensor from a lens able to provide 4000 lines! You see, the sensor is not still but moves to another position while the lens records another 4000 lines. In practice, what happens is that another pixel is created in between the existing pixels, and the pixels in the "starting position" are, as far as the result is concerned, "shrunken" in size by half.
 

Asher Kelman

OPF Owner/Editor-in-Chief
What you miss, Doug, is that the lens "also moves" (but without any movement (!!!) whatsoever).

<snip>

Theodoros,

Let me clarify this matter. It's necessary to consider two separate sections of the information chain. The lens will generate an image focused on a plane, hopefully coincident with the film or sensor. Whatever data and detail the lens transmits there, can be discovered if we have a way of registering, recognizing and discriminating between the flux patterns all over that plane.

We have various ways of translating the shower of photons of so many wavelengths and intensities into an accurate representation of the object being photographed.

One method is to have the photonic energy displace an electron in the surface of the focus plane, and for that to be captured by silver halide crystals, reducing them to fine grains of elemental silver.

The other method is to have the electron counted as it builds up a charge in the sensel well that's proportional to the number of photons that arrive at that tiny area.

In each case, another procedure is required to develop an image from the consequences of electron capture.

The size of each well means that it cannot discriminate objects smaller than itself. So a procedure to create the equivalent of tinier pixels might allow more detail to be discovered.

However, this line of image MTF enhancement has a major limit. That's because one cannot do better than translate the spatially arranged photon effects on the sensor surface. One depends on the incoming stream of data and so, of course, one cannot translate more fine detail than the lens delivers!

Given the aperture, focus setting and positioning of classical optics, there's only one possible set of data, and no more can be transmitted. If that lens is stationary, the data is fixed too. We could discuss separately stepping a lens, but I won't touch that right now. Usually, the lens is not the limiting factor in getting the most detail in an image. More often, we degrade the image by the photonoc light waves scraping against the edges of the tiny diaphragm aperture, causing ripples which spread information over adjacent sensels. Also, of course, we lose a lot by having a filter to blur the image to prevent moire or else by having a Bayer array where colors and detail are cleverly reconstructed. It's this latter set of compromises that the multishot camera rescues us from having to accept. By moving at 0.5 sensels with a stepping motor, the incoming light on the focus plane is now sampled at close enough increments that we would be able to recreate much more accurately the actual true color and lost detail, as if we had perfect sigma Foveon RGB sensel wells of 100% accuracy.

We are not creating more physical resolution than the lens can provide from its native MTF, but instead, reversing the loss inherent in Bayer imaging.

So a 40 MP sensor will normally yield the equivalent of a 16 MP or so Foveon sensor. Pixel shift gives us the full 40 MP through the clumsy, slow but wonderfully effective correction one can make by shifting the sensor, in a rock-solid camera, 0.5 sensels in every direction.

Also, if one did this with a constant light source, constant in output, distribution and temperature, there would be a perfect reproduction with zero noise too.

However, as far as I can fathom, with classical optics, one can only approach but never exceed the MTF of the taking lens.
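A one-dimensional sketch of this last point, under idealized assumptions (periodic signal, point sampling, a brick-wall lens MTF): if the lens band-limits the optical image below the coarse Nyquist frequency, the denser "microshift" grid can be reconstructed exactly from the coarse samples, i.e. it contains nothing the lens did not deliver:

    import numpy as np

    # Build a band-limited "optical image": the lens passes bins 0..100 only.
    N = 1024
    rng = np.random.default_rng(1)
    spec = np.zeros(N // 2 + 1, dtype=complex)
    f_lens = 100
    spec[:f_lens + 1] = (rng.standard_normal(f_lens + 1)
                         + 1j * rng.standard_normal(f_lens + 1))
    spec[0] = spec[0].real
    optical = np.fft.irfft(spec, N)

    coarse = optical[::4]    # 256 samples; Nyquist bin 128 > 100, no aliasing
    dense = optical[::2]     # 512 samples: the denser "microshift" grid

    # Ideal sinc interpolation of the coarse grid: zero-pad its spectrum.
    cs = np.fft.rfft(coarse)
    pad = np.zeros(len(dense) // 2 + 1, dtype=complex)
    pad[:len(cs)] = cs
    recon = 2 * np.fft.irfft(pad, len(dense))   # factor 2 for the length change

    assert np.allclose(recon, dense)  # the dense samples held no new information

Raise f_lens above bin 128 and the assertion fails: the coarse samples alias, and the denser grid genuinely recovers more, which is where the benefit of microshift sampling actually lives.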

Asher
 

Doug Kerr

Well-known member
Hi, Bart,

Thanks for your clear description of multishot operation.

There are two flavors of multi-shot.

One is a resolution enhancing feature, with a (2-dimensional) half sensel shift, thus effectively doubling the sample density, albeit with overlapping sampling apertures (which helps the MTF roll-off towards Nyquist).

Resolution in the sense of pixel count. Whether the actual resolution increases or not depends on the MTF of the optical image.

A wider sampling aperture (in terms of the sampling period) causes greater rolloff of MTF (as seen in the digital image) as we approach the Nyquist frequency.

Best regards,

Doug
 
Let me clarify this matter. It's necessary to consider two separate sections of the information chain.

<snip>
I don't think this is the case here, Asher… MS is quite different from Foveon, and 16x is 4x MS(4x). By shifting the sensor (raising, lowering, or moving it horizontally) you shoot a new image area, so in reality you are expanding the image area by four, but without changing the AOV of the lens or of the imaging area! This means that the lens/image-area combination develops its full resolution ability for each of the four image areas it creates (all with true colour), only the resolution result is 4x what it would be if a single area were shot with the same lens in MS… What happens in reality is that you have shot a "theoretical" imaging area that is 4x the one the sensor bears, while keeping the actual lens-to-sensor resolution capability for that area constant. You get the same result as if you had an imaging area four times the size, served by a lens with "new" characteristics: the same AOV as at the original size, and the same MTF per unit of sensor area as before, but now over a "theoretical" area four times the size.
 
Resolution in the sense of pixel count. Whether the actual resolution increases or not depends on the MTF of the optical image.

<snip>
Doug, I think you approach things as if a single shot were made… see my reply to Asher above. In reality, with 16x you shoot four times the imaging-area size, while keeping the lens's per-real-area resolution capability and the pixel size. Additionally, you do that in "true colour"!
 

Asher Kelman

OPF Owner/Editor-in-Chief
Doug, I think you approach things as if a single shot were made… see my reply to Asher above. In reality, with 16x you shoot four times the imaging-area size, while keeping the lens's per-real-area resolution capability and the pixel size. Additionally, you do that in "true colour"!
Theodoros,

Imagine that bullets or gold or people were being delivered to the focal plane. All one is doing in multishot is moving the sensor to more accurately detect what arrives. One cannot alter the physics of what is traveling down the "supply chain".

What arrives at the imaging plane are millions of individual "sets" of photons, each set corresponding to a finite unit area on the subject. The light reflected or emitted from that particular minute area arrives at the front lens surface at a myriad of different points all over the surface of the lens. However, they are each refracted by a series of lenses so that (almost) all of them, in the best circumstances, arrive at exactly one minute, sharp point on the focal plane. Unfortunately, in all lenses, many adjacent sets of points get overlapped by imperfect focussing of their constituent photon members from all over the surface of the lens. The "better" the lens, the more perfectly the glass reconstitutes each point from the subject to the image on the focal plane.

No maneuver of the image plane can do more than more accurately sample the flux of photons arriving over a given time. What's needed is some methodology to provide more discrimination by the lens for "safe passage" and reconstitution of all the constituent photons for each point coming from the subject with less overlap. The sensor cannot do that no matter how many images one takes.

Asher
 

Doug Kerr

Well-known member
For clarity's sake, let's look at the microshift multishot principle with a "monochrome" sensor, so as to decouple the "true pixel color detection from a CFA sensor" issue from the "resolution" issue. This being the case, I will refer to the individual photodetectors on the sensor as "pixels" rather than "sensels".

The digital imaging process seeks to take a continuous variable (the variation in the illuminance of the optical image created by the lens) and represent it digitally by sampling (in a two-dimensional grid) and then digitization of the sample values.

To make the numbers simple, let's assume a "square" image and "square" sensors.

We will consider the generation of an optical image of fixed overall dimensions by a certain lens, having a certain MTF, and sensors of matching overall dimensions.

Three "sampling" situations are of interest:

Case A

We have a sensor with pixel dimensions 2000 × 2000. It generates 4,000,000 samples of the optical image, which become 4,000,000 pixels of the delivered digital image.

The highest spatial frequency within the optical image that can be represented by this collection of samples is just under 1000 cycles per picture height (the Nyquist limit).

If the MTF of the lens is "reasonably high" at a spatial frequency of 1000 cycles per picture height, but declines substantially beyond that, then the highest frequency of the scene that will be present in the optical image is almost 1000 cycles per picture height. Thus we may simplistically describe the resolution of the optical image as 1000 cycles per picture height.

Since frequencies of up to just under 1000 cycles per picture height in the optical image are "captured" by our sampling process, all frequencies in the optical image are represented by the collection of samples (and thus by the digital image). We may then describe the resolution of the "camera" to be a little less than 1000 cycles per picture height.

Case B

We have a sensor with pixel dimensions 4000 × 4000. It generates 16,000,000 samples of the optical image, which become 16,000,000 pixels of the delivered digital image.

The highest spatial frequency within the image that can be represented by this collection of samples is just under 2000 cycles per picture height (the Nyquist limit).

Again, we assume that the MTF of the lens is "reasonably high" at a spatial frequency of 1000 cycles per picture height, but declines substantially beyond that. The highest frequency of the scene that will be present in the optical image is almost 1000 cycles per picture height.

Although frequencies of up to just under 2000 cycles per picture height in the optical image (if present) are "captured" by our sampling process, there are not any frequencies of the "scene" above 1000 cycles per picture height present in the optical image. Thus there will not be represented in the digital image any frequencies of the scene at or above 1000 cycles per picture height.

We may then describe the resolution of the "camera" to be a little less than 1000 cycles per picture height. This is a result of the MTF of the lens, which is not fully suitable for this form of combat.

Case C

We have a sensor with pixel dimensions 2000 × 2000.

We "interrogate" it four times, shifting it with respect to the optical image by 1/2 the pixel pitch vertically and/or horizontally.

Each interrogation produces 4,000,000 samples of the optical image. These samples are digitized and consolidated into a data set of 16,000,000 samples. The points in the optical image they represent are at a spacing of 4000 samples per picture height.

The highest spatial frequency in the optical image (if present) that such a collection of samples (the overall digital image) can represent is just under 2000 cycles per picture height (the Nyquist limit).

Again, we assume that the MTF of the lens is "reasonably high" at a spatial frequency of 1000 cycles per picture height, but declines substantially beyond that. The highest frequency of the scene that will be present in the optical image is almost 1000 cycles per picture height.

Although frequencies of up to just under 2000 cycles per picture height in the optical image (if present) are "captured" by our sampling process, there are not any frequencies of the "scene" above 1000 cycles per picture height present in the optical image. Thus there will not be represented in the digital image any frequencies of the scene at or above 1000 cycles per picture height.

We may then describe the resolution of the "camera" to be a little less than 1000 cycles per picture height. This is a result of the MTF of the lens, which is not fully suitable for this form of combat.

An assertion

It has been asserted that the generation of the 16,000,000 samples of a certain optical image through four interrogations of a 2000 px × 2000 px sensor, at staggered locations (as described for Case C), will yield a higher overall resolution score for the camera than if the 16,000,000 samples are gotten by a single interrogation of a 4000 px × 4000 px sensor (as described for Case B).

I know of no phenomenon that would make that so.

On the sampling aperture

When we first study the representation of a continuous variable by sampling, we assume "point sampling": the sample value is the value of the variable at the instant/location defined for the sample.

In a digital camera sensor, this would imply a photodetector with an infinitesimal light acceptance window.

Of course, such would be very disadvantageous from a standpoint of signal-to-noise ratio. Accordingly, we typically arrange for the photodetector's light acceptance window to be larger, perhaps approaching in height and width the pixel pitch.

The light acceptance window, or the time window when sampling an electrical signal, is called the "sampling aperture".
Thus the sample value becomes more nearly the average of the value of the variable over one sampling "interval".

The effect of this is that the representations of different frequency components of the same actual amplitude, by the set of samples, declines with increasing frequency. The "fat" sampling aperture, like a wide point spread function, becomes a sort of low-pass filter in the pipeline. Thus the MTF of the entire system, up to the digital representation, declines with increasing spatial frequency.

We may be able to compensate for that "rolloff" of response in subsequent signal processing (aperture equalization).

Now let's look at our three cases above. We will assume that for both sensors involved, their sampling apertures (acceptance windows) have very nearly the height and width of the pixel pitch.

In case A, the aperture is about the pixel pitch of the sensor (and thus about the sampling pitch). A certain rolloff of response (with respect to the pertinent Nyquist frequency) occurs.

In case B, the aperture is about the pixel pitch of the sensor (and thus about the sampling pitch). The rolloff of response (with respect to the pertinent Nyquist frequency) is the same as in case A.

In case C, the aperture is about the pixel pitch of the sensor (and thus about twice the sampling pitch). The rolloff of response (with respect to the pertinent Nyquist frequency) is substantially greater than in cases A and B.

To the extent we do not compensate for aperture rolloff, this means that the potential resolution of Case C could be less than for Case B.
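A numeric companion to the three cases (a sketch assuming the uniform square apertures described above, with frequencies in cycles per picture height):

    import numpy as np

    def aperture_mtf(f_cph, width_ph):
        """MTF of a uniform sampling aperture: |sinc(f w)|,
        where np.sinc(x) = sin(pi x) / (pi x)."""
        return np.abs(np.sinc(f_cph * width_ph))

    # name: (samples per picture height, aperture width in picture heights)
    cases = {
        "A (2000 px, single pass)":  (2000, 1 / 2000),
        "B (4000 px, single pass)":  (4000, 1 / 4000),
        "C (2000 px, 4-pass shift)": (4000, 1 / 2000),  # pitch halves, aperture does not
    }
    for name, (samples, width) in cases.items():
        nyquist = samples / 2                 # cycles per picture height
        print(f"{name}: Nyquist {nyquist:.0f} c/ph, "
              f"aperture MTF there {aperture_mtf(nyquist, width):.2f}")
    # A: 1000 c/ph, 0.64   B: 2000 c/ph, 0.64   C: 2000 c/ph, 0.00

The lens MTF multiplies this aperture factor, so the Case C number shows only the extra rolloff contributed by sampling, which is the point of the paragraph above.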

Best regards,

Doug
 

Doug Kerr

Well-known member
I know that last article was rather lengthy. Let me synopsize it, using some great simplifications:

1. In the optical image formed by the lens, there are only present those frequencies from the scene that are accommodated by the MTF of the lens.

2. To digitize that optical image, we sample it at certain intervals (in both directions). The highest frequency from the image that is "captured" by that process is (a little less than) half the "sampling frequency". Of course, no frequency that is not in the image is captured.

3. We can collect the samples according to our chosen sampling plan several ways.

• We can collect them essentially all at once (as in normal digital cameras).

• We can collect them "one line at a time", as with a scanning back.

• We can collect them in perhaps four staggered passes, as in microshift operation of multishot backs.

All that matters to the "frequency limit" of the entire imaging chain is the final overall pattern of samples.

4. There cannot be any frequencies represented in the collection of samples (and thus in the digital image) that are not present in the optical image (where would they come from?). Thus the MTF of the lens is a limit on the resolution of the overall chain.

5. Collecting the set of samples in multiple passes cannot evade that. Doing such does not change the frequency composition of the image. (The lens has no idea that the image is to be sampled, or how.)

Best regards,

Doug
 
Resolution in the sense of pixel count. Whether the actual resolution increases or not depends on the MTF of the optical image.

Hi Doug,

Yes, and no. Denser sampling will allow us to extract more (high) spatial resolution from any lens that has decent performance. It will also produce a higher MTF for the lower spatial frequencies, a bit like this comparison:

SamplingDensity1.png


SamplingDensity2.png

A wider sampling aperture (in terms of the sampling period) causes greater rolloff of MTF (as seen in the digital image) as we approach the Nyquist frequency.

Correct, the larger sampling aperture of the denser multi-step sampling will start to roll off faster (which helps to reduce aliasing), but effective spatial resolution will still be larger than for the less dense single-step sampling (see chart below), and it will still maintain some of the higher MTF response at lower spatial frequencies, which adds micro-detail contrast and thus resolution (lower-contrast detail will remain resolvable).

SamplingDensity3.png
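For readers without the charts, here is a rough Python sketch of the comparison being made; the Gaussian lens MTF and the 3 µm pitch are illustrative assumptions, not Bart's actual model:

    import numpy as np

    sigma = 1.5e-6    # assumed Gaussian lens-blur sigma, metres
    pitch = 3e-6      # assumed sensel pitch, metres

    def lens_mtf(f):                        # Gaussian PSF -> Gaussian MTF
        return np.exp(-2 * (np.pi * sigma * f) ** 2)

    def system_mtf(f):                      # lens x one-sensel-wide aperture
        return lens_mtf(f) * np.abs(np.sinc(f * pitch))

    for name, sample_pitch in [("single-step", pitch), ("16-step", pitch / 2)]:
        nyquist = 1 / (2 * sample_pitch)    # cycles per metre
        f = np.linspace(0, nyquist, 5)
        print(name, np.round(system_mtf(f), 2))
    # 16-step doubles Nyquist, so response the single-step grid would have
    # aliased or discarded is retained, even though the (relatively) wide
    # aperture pulls the curve down near the new limit.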

Theodoros could assist in demonstrating the actual (real lens and sensor combined MTF) and quantifiable (using a slanted edge) benefit by shooting a print of my test target with single and 16-step capture (a distance of 50x the focal length would be useful, to give the highest possible test level for even the best lenses on earth), and posting an unsharpened Raw conversion (unsharpened to reduce converter effects).

Cheers,
Bart
 

Doug Kerr

Well-known member
Hi, Bart,

Thank you for that excellent information.

It now seems to me that there is an advantage of multi-step sampling (vs. single-step sampling with the same final sample density), in that the particulars of the aperture MTF associated with the larger relative sampling aperture in the multi-step case actually lead to a better overall MTF, in effect giving better exploitation of the lens MTF (for the kinds of MTF we are typically dealing with).

I had in fact missed that. Theodoros's discussions somehow failed to illuminate that for me.

I need to reflect on your excellent graphs to fully grasp how this happens.

Thanks again.

Best regards,

Doug
 

Doug Kerr

Well-known member
Hi, Bart,

Yes, and no. Denser sampling will allow us to extract more (high) spatial resolution from any lens that has decent performance. It will also produce a higher MTF for the lower spatial frequencies, a bit like this comparison:

SamplingDensity1.png

Is the response we see here (if we were to take out the effect of the lens MTF) primarily due to aperture rolloff?

Could you remind me of the meaning of describing lens blur in terms of a sigma multiplier?

SamplingDensity3.png

It seems that in the green case the pitch of the sensor is 3 µm, thus the sample pitch of the complete sample ensemble (under 16-step operation) is 0.75 µm.

I am surprised that the use of the 16 step technique with that sensor results in a worse overall MTF (even though 4x the pixel count).

What am I missing?

Best regards,

Doug
 

Doug Kerr

Well-known member
Hi, Bart,

Let me focus on the original concern I had with Theodoros' discussion of multishot and microshift multishot technique.

As I interpreted it, he tells us that when a certain pixel count is attained by microshift multishot technique, then for some desired image "resolution", the demands on lens MTF to "support" that are less severe than if the same pixel count is attained directly (with a finer-pitch sensor).

Can you help me find the path to that result?

So far as I can see, the only difference between the two techniques is the difference in aperture size (relative to the consolidated pixel pitch). The relative aperture width will be greater in the microshift multishot case. That would seem to lead to a steeper decline in the sampling (aperture) MTF.

I am still having trouble seeing how that would better exploit the MTF of the lens.

What am I missing here?

Best regards,

Doug
 
Imagine that bullets or gold or people were being delivered to the focal plane. All one is doing in multishot is moving the sensor to more accurately detect what arrives.

<snip>
Asher, it's not ONE focal plane, it's FOUR focal planes; that's what you don't take into account.
 

Doug Kerr

Well-known member
Hi, Bart,

This example synopsizes the conundrum with which I wrestle.

I will use the simplistic concept of "resolution" as a single numerical metric, recognizing that of course the matter is not that simple. (And the key to the dilemma may reside in that naïveté).

I will assume a monochrome sensor situation so as to avoid any complication over CFA considerations.

We want a square digital image, from a certain scene, with a resolution of 3000 cycles per picture height. Considering at the least the effect of the Kell factor, we decide to have an image 8000 px × 8000 px.

Our camera has an image 40 mm × 40 mm in size.

We consider two implementations.

Case X - conventional technique

We will use a 40 mm × 40 mm, 8000 px × 8000 px sensor. The pixel pitch is 5 µm, and we assume the sampling aperture to be rectangular, 5 µm × 5 µm (and uniform in sensitivity over its area). We will use conventional technique, and extract (with the sensor fixed) 64 M sample values from the sensor, which are digitized and become 64 M pixels in our output image. The sample pitch of those 64 M pixels referenced to the focal plane is 5 µm.

Case Y - microshift multishot technique

We will use a 40 mm × 40 mm, 4000 px × 4000 px sensor. The pixel pitch is 10 µm, and we assume the sampling aperture to be rectangular, 10 µm × 10 µm. We will use conventional microshift multishot technique, with 4 passes, the x and/or y offset of the sensor between passes being 5 µm.

In each pass, we will extract 16 M sample values from the sensor, a total of 64 M values, which are digitized and become 64 M pixels in our output image. The sample pitch of those 64 M pixels referenced to the focal plane is 5 µm.

Observation

Now, it seems to me that, for any given optical image on the sensor, the result in case X and Case Y would be identical except for the effect of difference in sampling aperture. The sampling aperture width in case Y is 2 times what it is in case X. Thus we would expect the sampling MTF in case Y (of the form sin(x)/x ) to be "compressed along the frequency axis" by a factor of 2 compared to case X.
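A quick check of this observation (a sketch under the ideal uniform-aperture assumption stated above):

    import numpy as np

    pitch = 5.0                        # final sample pitch, um, both cases
    nyquist = 1 / (2 * pitch)          # 0.1 cycles/um for the 64 M sample grid
    f = np.linspace(0, nyquist, 5)
    mtf_x = np.abs(np.sinc(f * 5.0))   # Case X: 5 um aperture
    mtf_y = np.abs(np.sinc(f * 10.0))  # Case Y: 10 um aperture
    for fi, mx, my in zip(f, mtf_x, mtf_y):
        print(f"f = {fi:.3f} c/um   X: {mx:.2f}   Y: {my:.2f}")
    # At Nyquist: X ~ 0.64, Y = 0.00, the factor-of-2 compression noted above.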

The question

Now, the question is, what properties of the lens (perhaps notably what MTF) do we need in order to attain our desired resolution, in the deliverable image, of 3000 cycles per picture height?

In particular:

• What properties do we need if using the technique of Case X?

• What properties do we need if using the technique of Case Y?

Will our needs for lens performance be "less demanding" (that is, perhaps met by a "less good" lens) in case Y than in case X? If so, why?


Thanks for your help with this.

Best regards,

Doug
 
Theodoros could assist in demonstrating the actual (real lens and sensor combined MTF) and quantifiable (using a slanted edge) benefit by shooting a print of my test target with single and 16-step capture…

<snip>

Hi Bart, I am sorry, I have better things to do than testing all the time to prove the obvious to Doug or Asher, but I hope this (shot at a little more than 6m using the 120mm Apo-Macro Zeiss on a Contax 645, with the 528c used at 16x as the MFDB) is good enough to assist you… I am sorry that I can't support your (correct) position further, but I am under pressure currently… Thanks for getting into the conversation and for supporting the obvious… I also think that this simple mechanical point, unique to current technology, could be explained by getting people to understand the mechanical difference, which is what makes the difference in this matter… It is obvious to me that Asher and Doug don't realise that there are FOUR imaging areas/focal planes that are shot… not one, …four!

picture_020dssitee.jpg


The above image is highly compressed for publication on my web site, so it can't be printed at anything more than a useless size; the one below is of a 90cm (three feet) high subject and was shot at approx 2.5m distance, using the same lens… the compression is likewise useless for anything beyond web publishing…

picture_027cssitex.jpg


I hope this helps… If Asher and Doug could see the "real thing" printed, though, it would help a million times more… Thanks, I won't be absent from the conversation, I just can't help with "testing" currently.
 

Jerome Marot

Well-known member
There are some examples on the Hasselblad web site if people are interested: here for product photography (a pair of shoes and a chronometer) and here for "architecture" photography (the inside of a library).

You'll need to download the following files for a meaningful comparison:
-the H4D-50 as a reference of single shot.
-the H4D-50MS is the same, but with color multishot
-the H4D-200MS improves the resolution by 4 with multishot (and also uses the improved color of the file above).

The samples are TIFF files. They need to be, since JPEG subsamples the color information and thus removes much of the advantage of multishot systems. Obviously, the file sizes are huge.
 
This example synopsizes the conundrum with which I wrestle.

<snip>
Doug, the pixel size IS STILL 9 µm… not 4.5, it's 9! (I use the actual size of the pixel that I use; I understand you used 10 and 5 as an example.) It's the imaging area that is multiplied by four in the result; the pixel is not shrunk, each capture is still on a full pixel and on the same image area… when I said earlier that "a pixel is split into four true-colour ones", I meant that you get four new pixels with the same characteristics as the original, each of the same size (9 µm) and at the same position (differing only by a half-pixel movement) on the same image area measuring 37x49mm... It's not one area of 37x49 shot at 88mp; it's FOUR areas of 37x49, i.e. an area of 74x98mm as a result, but not added together as we do with "merge". Instead, it's like one area of 74x98 shot with a lens that shows the same AOV as it does at 37x49 but has four times the resolution, serving an area of 74x98 with the same AOV… I hope this helps.
 