Pixels and Subpixels

Once I wondered: 'Why do the RAW files from my Nikon D700 camera weigh so little?' While looking for an answer, I found some very interesting information.

About pixel fraud

So, when shooting, I sometimes use the RAW file format (Nikon calls it NEF, Nikon Electronic Format). Nikon RAW files have several settings; I usually use 14-bit color depth with lossless compression or no compression at all. In general, NEF files with 14-bit color depth and no compression weigh about 24.4 MB. In the picture below I show the sizes of my files in bytes.

NEF file sizes on my Nikon D700 camera

As you can see, the files are almost the same size. Take, for example, the ARK-4820.NEF file: its size is 25 624 760 bytes, or about 24.4 MB. Bytes are converted to megabytes very simply:

25 624 760 / 1 048 576 ≈ 24.44 MB

I would like to draw your attention to the fact that RAW (NEF) files differ slightly in size because they carry not only the useful 'raw' information, but also a small preview picture and an EXIF data block. The preview image is used for quickly viewing the image on the camera monitor: the camera does not need to load the heavy 25 MB file, it simply pulls out the preview thumbnail and shows it on the display. These thumbnails are most likely encoded as JPEG, and the JPEG algorithm is very flexible, so each individual thumbnail needs a different amount of storage.

14-bit color depth means that each of the three primary hues is encoded with 14 bits of memory. For example, if you press the 'question mark' button on the corresponding menu item of the Nikon D700, you can read the following:

'NEF (RAW) images are recorded at 14-bit color depth (16384 levels). In this case, the files have a larger size and more accurate reproduction of shades'

Color is formed by mixing three primary hues: red R (Red), blue B (Blue) and green G (Green). Thus, with 14-bit color depth we can get any of 4 398 046 511 104 colors (four trillion three hundred ninety-eight billion forty-six million five hundred eleven thousand one hundred and four colors).

It is easy to calculate: 16384 (R) * 16384 (G) * 16384 (B)

In fact, 4 trillion colors is far more than is needed for normal color reproduction; such a huge reserve of colors mainly makes image editing easier. And to encode one 'pixel' of the image this way, you need 42 bits of memory:

14 bits R + 14 bits G + 14 bits B = 42 bits

My Nikon D700 creates maximum-quality images of 4256 by 2832 pixels, which gives exactly 12 052 992 pixels (about 12 million pixels, or simply 12 megapixels). If images from my Nikon D700 were encoded without any data compression and with 14-bit color depth, it turns out that 506 225 664 bits of information would be needed (42 bits per pixel multiplied by 12 052 992 pixels). That equals 63 278 208 bytes, or about 60 MB of memory.
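For those who like to double-check, all of this arithmetic fits in a few lines of Python (a minimal sketch; the only camera-specific numbers are the ones quoted above):

```python
bits_per_channel = 14
levels = 2 ** bits_per_channel            # 16 384 levels per channel
colors = levels ** 3                      # 16384 (R) * 16384 (G) * 16384 (B)
bits_per_pixel = 3 * bits_per_channel     # 42 bits for a full RGB pixel

width, height = 4256, 2832
pixels = width * height                   # 12 052 992

raw_bytes = pixels * bits_per_pixel // 8  # 63 278 208 bytes, ~60 MB
nef_bytes = 25_624_760                    # the actual ARK-4820.NEF size

print(colors)                             # 4 398 046 511 104
print(raw_bytes / 1_048_576)              # ~60.35 MB "expected"
print(nef_bytes / 1_048_576)              # ~24.44 MB actually on the card
```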

Question: why does the estimate say that about 60 MB of memory is needed for one image, while in reality I get files of only 24.4 MB? The secret is that the original RAW file does not store 'real' pixels at all, but information about the sub-pixels of the Nikon D700's CMOS sensor.

In the description for the camera you can see this:

Excerpt from the Nikon D700 manual

That is, the manual speaks about 'effective pixels' and about the 'total number' of pixels. It is very easy to check the number of effective pixels: shoot in JPEG L Fine mode and you get a picture of 4256 by 2832 pixels, which equals the 12 052 992 pixels calculated above. Rounding, we get the 12.1 MP declared in the manual. But what is this 'total number of pixels', which is almost a million higher, at 12.87 MP?

To understand this, it is enough to look at what the photosensitive sensor of the Nikon D700 looks like.

Bayer filter

If you look closely, the Bayer matrix does not produce any 'multicolored' image by itself. The matrix simply registers green, red and blue points, and there are twice as many green points as red or blue ones.

The 'not real' pixel

In fact, this matrix consists not of pixels ('in the usual sense') but of sub-pixels, or register cells. Usually a pixel is understood to be a point of an image that can display any color. On the Nikon D700 CMOS sensor there are only sub-pixels, each responsible for just one of the three basic hues, and from them the 'true', 'multicolored' pixels are formed. The Nikon D700 sensor has approximately 12.1 million of these sub-pixels, which the manual calls 'effective'.

There are no 'real' 12 MP on the Nikon D700 sensor. The 12 MP we see in the final image are the result of heavy mathematical interpolation of 12.87 million sub-pixels!

On average, each sub-pixel is converted by the algorithms into one 'real' pixel, using the data of its neighbors. This is where the 'pixel street magic' hides. Likewise, the 4 trillion colors are also the work of the debayering algorithm.
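To make this 'street magic' a little more tangible, here is a very rough sketch of what a demosaicing (debayering) algorithm does. It is only an illustration: an RGGB layout is assumed, missing values are filled with a naive neighborhood average, and edges are handled crudely; real converters use far smarter methods.

```python
import numpy as np

def demosaic_bilinear(raw):
    """Naive debayering of an RGGB mosaic (H x W single-channel array) into H x W x 3 RGB."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=np.float32)

    # where each colour was actually measured (RGGB pattern assumed)
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        plane = np.where(mask, raw, 0.0).astype(np.float32)
        weight = mask.astype(np.float32)
        nsum = np.zeros_like(plane)
        ncnt = np.zeros_like(plane)
        # sum the measured neighbours in a 3x3 window (wrap-around at the edges)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                nsum += np.roll(np.roll(plane, dy, axis=0), dx, axis=1)
                ncnt += np.roll(np.roll(weight, dy, axis=0), dx, axis=1)
        rgb[..., ch] = np.where(mask, raw, nsum / np.maximum(ncnt, 1))
    return rgb

mosaic = np.random.randint(0, 2**14, size=(4, 4))   # sixteen 14-bit sub-pixel readings
print(demosaic_bilinear(mosaic).shape)              # (4, 4, 3): 16 "real" RGB pixels appear
```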

The main idea of the article: sub-pixels and pixels. 12 million sub-pixels are sold to us as 12 million 'real' pixels.

To put it very roughly, marketers have called the sub-pixels of the Bayer filter 'pixels' and thereby substituted the meaning of the word. Everything hinges on what exactly is meant by the word 'pixel'.

Let us return to calculating the file sizes. In fact, the NEF file stores only 14-bit information for each sub-pixel of the Bayer filter, which is that same hue depth. Given that there are about 12.87 million such sub-pixels on the sensor (the approximate number is indicated in the manual), storing the information obtained from them requires:

12 870 000 * 14 bits = 180 180 000 bits, or about 21.48 MB

And yet that is still not the 24.4 MB I see on my computer. But if we add the EXIF data and the JPEG preview image to the 21.48 MB obtained, we arrive at the original 24.4 MB. It turns out that the RAW file additionally stores:

24.4 - 21.48 = 2.92 MB of additional data
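The same accounting in a few lines of Python (a sketch; the 12.87 million figure is the 'total pixels' value from the manual, and the split of the ~2.9 MB overhead between EXIF and the preview is my assumption):

```python
subpixels = 12_870_000            # "total pixels" from the manual
bits_per_subpixel = 14

payload_bytes = subpixels * bits_per_subpixel / 8   # 22 522 500 bytes
payload_mb = payload_bytes / 1_048_576               # ~21.48 MB of raw sensor data

nef_mb = 24.4                                         # typical uncompressed NEF size
print(round(payload_mb, 2))                           # 21.48
print(round(nef_mb - payload_mb, 2))                  # ~2.92 MB left for EXIF + JPEG preview
```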

Important: similar calculations can be made for cameras that use CCD sensors and uncompressed RAW files (Nikon D1, D1h, D1x, D100, D200), as well as JFET (LBCAST) sensors (Nikon D2h, D2hs). In fact, it makes no difference whether it is CCD or CMOS: they still use a Bayer filter and sub-pixels to form the image.

But Sigma cameras with Foveon sensors, which encode one real pixel with three sub-pixels of the primary colors (as you would expect), have a much larger RAW file size for the same 12 MP than a CMOS sensor with a Bayer filter, and this only confirms my reasoning. By the way, another interesting article has appeared on Radojuva, and one more.

Conclusions

In fact, cameras with Bayer-filter sensors (CCD or CMOS, it does not matter) do not have the declared number of real, 'true' pixels. The sensor carries only a set of sub-pixels (photosites) arranged in the Bayer pattern, from which 'real' image pixels are created by special, complex algorithms. In general, the camera does not see a color image at all: the camera's processor deals only with abstract numbers, each responsible for a particular shade of red, blue or green, and the creation of a color image is just mathematical tricks. Actually, this is why it is so difficult to achieve 'correct' color rendition on many digital cameras.

Materials on the topic

  1. Full frame mirrorless systems... Discussion, choice, recommendations.
  2. Cropped mirrorless systems... Discussion, choice, recommendations.
  3. Cropped mirrorless systems that have stopped or are no longer developing
  4. Digital SLR systems that have stopped or are no longer developing
  5. OVF or EVF (an important article that answers the question 'DSLR or mirrorless')
  6. About mirrorless batteries
  7. Simple and clear medium format
  8. High-speed solutions from Chinese brands
  9. All fastest autofocus zoom lenses
  10. All fastest AF prime lenses
  11. Mirrored full frame on mirrorless medium format
  12. Autofocus Speed ​​Boosters
  13. One lens to rule the world
  14. The impact of smartphones on the photography market
  15. What's next (smartphone supremacy)?
  16. All announcements and novelties of lenses and cameras

Comments on this post do not require registration. Anyone can leave a comment.

Material prepared by Arkady Shapoval.

 


Comments: 153, on the topic: Pixels and subpixels

  • anonym

    Still, to be precise )))
    million = 10^6
    milliard = 10^9 and billion = 10^9; the first name is the traditional one accepted in Russian, the second is the scientific, international one.
    trillion = 10^12
    so the number 4 398 046 511 104 is four trillion three hundred ninety-eight billion (milliard) forty-six million five hundred eleven thousand one hundred and four.

  • Anonymous 1

    Well, exactly: a show of math nerds! From FMTI straight to the photo enthusiasts! Soon mathematicians will build a model of the common cold and calculate the probable number of sneezes depending on the inert mass of the snot... And on the basis of this model they will deduce the optimal useful number of pixels in the matrix formed by a handkerchief... Let's live richly and happily!

    • Roman

      Without mathematicians and their works, I strongly suspect, we would still have to do rock painting. Well, or paint with oil paints.

      Bayer filters and interpolation are one of the few ways to get images from a matrix. If we count four subpixels as one pixel, we get an image that is half the height and width. But at the same time we will lose that insignificant part of the information that is obtained with mixed use of different channels, including the doubled brightness information from two green subpixels.

      • Nicholas

        Four sub-pixels cannot be counted as one pixel, since each one, even though there is a filter, contains different information.

        • Alexander

          Why then do you count the 3 RGB subpixels on a monitor as one pixel? Each of them also carries different information. With Bayer, the difference is just one extra green subpixel, which in theory should increase sharpness at the expense of color.

          • aLastMan

            In addition to what Alexander said.
            The extra green pixel is there to increase sensitivity to green.
            It so happens that a person is almost indifferent to blue, but really needs green.
            There is such a thing as spectral sensitivity: EY = 0.30·ER + 0.59·EG + 0.11·EB
            On computers, 16-bit color is actually encoded as 5-6-5 (RGB) bits per pixel; the extra bit gives a measurement range twice as large, which almost matches the formula above.
            In order not to bother too much with the mathematics, they added another green pixel, which is also technologically simpler: all the pixels can be made identical.

            Therefore, 4 sub-pixels (RGGB) can easily be converted to 0.30·ER + 0.59·EG + 0.11·EB, and then to RGB.
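A small sketch of the two things mentioned in this comment: the luminance weights and the 5-6-5 packing (standard BT.601-style weights assumed; the function names are just for illustration):

```python
def luminance(r, g, b):
    # the spectral-sensitivity weights quoted above: green dominates, blue matters least
    return 0.30 * r + 0.59 * g + 0.11 * b

def pack_rgb565(r, g, b):
    """Pack 8-bit R, G, B into a 16-bit 5-6-5 word: the spare bit goes to green."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

print(luminance(255, 255, 255))       # 255.0
print(hex(pack_rgb565(255, 128, 0)))  # 0xfc00: 5 bits of red, 6 bits of green, 5 bits of blue
```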

  • Anonymous 1

    Exactly: not photographers, but recipients of images, photo recorders, consumers of color information, pixel counters... Have you tried counting the number of brush strokes in Leonardo's paintings? Hello!! A sea of work, and the brain never dries out...

    • Alexander

      Young man, it is better to envy someone else's brain in silence; better still, study to narrow the gap, and the envy will fade

  • Vadim

    I really want to insert my “5 kopecks” :)

    1) Firstly, I hope that now that everyone has learned about the mechanisms of debayering, nothing will change in their lives: they will not begin to get rid of digital cameras en masse, erase their favorite photographs, or contemplate suicide (“Oh, how could I have endured such deception all this time?”), etc.

    2) Secondly, has anyone actually asked the camera manufacturers what THEY MEAN by the word “PIXEL”? Why is everyone gathered here so sure that they were deceived, cheated, ripped off? Did Nikon or Canon ever assure us that we were talking about RGB, RGGB or some other multi-channel pixels? We are told the total number of matrix cells (total pixels) and how many UNIQUE combinations (i.e. what will later become pixels on the screen) the camera and software assemble from these cells (effective pixels).

    3) Thirdly, we are not talking about filling some voids with non-existent averaged (interpolation) or predicted data (extrapolation). The point is that subpixels can be combined in different ways during debayering. And each combination is unique, which means it can pretend to be a screen pixel.

    4) Fourth, it's damn good that they haven't raised the topic of how these strangely appeared pixels are displayed differently on different screens, and even more so - how a printed raster is formed from them! :)

    That's it, I put in “5 kopecks”, I'll run to take pictures, otherwise I feel that I am wasting precious time.

    • Denis

      I subscribe to every "penny" :)

    • Lynx

      your “5 kopecks” came out to exactly 4.
      Do you also manufacture cameras? )))

      • Vadim

        I’m thinking about it :).
        And quotes are needed for this, so that one thing can be called another.

    • Vitaly

      Well, at least one educated person.
      What is described in the article is not news at all; for anyone interested, and especially for physicists, it is a long-known truth. The way it is done is the only correct solution. What would you like, for each pixel to be formed from an RGB triad? And then these same people declare that 36 megapixels on a crop sensor is too many? What size would the microlenses be if each pixel were formed from three sensitive elements? Learn the basics and then you will not be surprised at the real state of affairs. The manufacturer simply takes care of us by making the pixel "fat", and there is no deception here.

      • Oleg

        >> The way it is done is the only correct decision
        Correct for what? For making your photos weigh more and the inflated megapixel count look cooler?
        >> The manufacturer just cares about us, making the pixel "bold" and there is no deception here.
        Holy naivety. A manufacturer is a bourgeois who cares only about his profit and nothing more. For example, marketers artificially cripple some features in firmware, thereby forcing people to buy more expensive models. They substitute terms so that the numbers look bigger, and keep quiet about the fact that the term has been substituted.

        • Vitaly

          The pixels are not inflated at all, they are real ones!!! Before claiming the pixels are inflated, learn the basics first!!! I really wanted to say something nasty at these words, but I will refrain. You would be the first to howl about the diffraction threshold, the lack of sharpness and the huge noise caused by the minimal area of the photosensitive elements if each pixel were formed from three sensors. Remember: a larger photosensitive element collects more photons, hence less noise, a lower diffraction threshold and better sharpness.

          • Oleg

            You are right about the size of the element, but that does not make debayering anything other than blurring. Debayering is essentially a banal blur. From a 10*10 sensor I receive 100 bytes of information about the object. A 10*10 photo, however, is 300 bytes (3 bytes per pixel). Where did the 200 bytes of new information about the object come from? You averaged them from neighboring pixels, and that is a commonplace blur.
            A thought experiment. Take a photo, say 100x100 (preferably a crop). Following the Bayer grid, keep only 1 subpixel in each pixel and zero out the remaining 2. Can debayering of this photo recover the original image? Can you say what the brightness of the suppressed subpixels in each pixel was? You cannot say exactly; at best you will smear in values from the neighbors.
            As for the basics, you are just puffing out your cheeks. If you do not agree, give arguments; empty posturing is useless.
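Oleg's thought experiment is easy to run. A minimal sketch (the RGGB layout, the random 'photo' and the byte counting are my assumptions for illustration):

```python
import numpy as np

def bayer_sample(img):
    """Keep only one channel per pixel according to an RGGB pattern; the other two are never recorded."""
    h, w, _ = img.shape
    mosaic = np.zeros((h, w), dtype=img.dtype)
    mosaic[0::2, 0::2] = img[0::2, 0::2, 0]   # R sites
    mosaic[0::2, 1::2] = img[0::2, 1::2, 1]   # G sites
    mosaic[1::2, 0::2] = img[1::2, 0::2, 1]   # G sites
    mosaic[1::2, 1::2] = img[1::2, 1::2, 2]   # B sites
    return mosaic

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(100, 100, 3), dtype=np.uint8)   # the 100x100 test image
mosaic = bayer_sample(original)

print(original.nbytes, mosaic.nbytes)   # 30000 vs 10000 bytes actually captured
# No demosaicing can reconstruct `original` exactly from `mosaic`:
# two of the three channel values at every site were simply never measured.
```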

          • Vadim

            The same amount of information from a RAW file can be analyzed and interpreted in different ways. With different quality, speed, accuracy, depending on the tasks. And a lot will depend on this.

            Run some difficult RAW shot through different converters: a modern one and one from ten years ago. Feel the difference in noise, detail and color. Has the amount of initial information increased? No, we have simply learned to work with it better, and at the output we get a much more interesting result.

        • Vadim

          A manufacturer is a bourgeois who has good proofreaders, lawyers, attorneys, marketers and other smart people on staff. Therefore the manufacturer KNOWS EXACTLY what IT CALLS a “pixel”. And if that does not coincide with my opinion, yours or anyone else's, that is our problem.

          There is no deception, there is a word in which different people put different meanings.
          For you, for example, a pixel is a point visible on the screen, the characteristics of which are described by the brightness components of the RGB channels.
          For, say, a printer - a conditional point, which wretchedly tries to be displayed on the screen after conversion as an RGB pixel, but actually has 4 channels (CMYK).
          Manufacturers of specific cameras in relation to specific matrices have their own understanding of the pixel.

          There is no single correct solution to what should be called a “pixel” or “subpixel”. Everyone has their own opinion on this matter. It is important that we all, hotly discussing this topic, now understand the order of things and can call a spade a spade.

          • Oleg

            Both on the screen and in print, a pixel is an elementary part of the image that can be of any color, not just red or blue or green. Camera manufacturers interpret the term 'pixel' differently from everyone else because it is more profitable for them. Just as drive manufacturers count a gigabyte as 10^9 bytes rather than 1024 * 1024 * 1024, because that is more profitable for them.

          • Vitaly

            That's for sure - in the bull's eye

          • Vadim

            The ISO prefix "giga-" stands for 10 to the ninth power and nothing else.
            You are confusing “gibi-” with “giga-”.
            Read up on binary prefixes:
            http://ru.wikipedia.org/wiki/%D0%94%D0%B2%D0%BE%D0%B8%D1%87%D0%BD%D1%8B%D0%B5_%D0%BF%D1%80%D0%B8%D1%81%D1%82%D0%B0%D0%B2%D0%BA%D0%B8

          • Vadim

            I mean, according to SI and JEDEC :)

          • Vitaly

            Vadim, the truth is completely on your side. I, in turn, had no idea that so many people do not even represent the process of collecting information from the matrix. I thought that all digital photographers already understood this process. But it turns out ...

          • Vadim

            Well, of course, a pixel can be of any color. Only this color needs to be indicated somehow. Monochrome pixels (not necessarily white-gray-black) can be described by one channel, color pixels - by three or more, while they can be represented in different ways (HSL, LAB, RGB, CMYK, etc.).
            Camera manufacturers indicate the total number of monochrome pixels (which carry information in one channel, regardless of which channel it is).
            The pixels on the screen are multichannel (i.e. we perceive them as such). How many channels there are, whether there are subpixels, depends on the imaging technology.
            And in color printing, pixels are also multi-channel. And channels, usually at least one more than when displayed on the screen. And also it all depends on printing technology.

            I.e. pixels come in different kinds. Understanding this simple thesis makes life easier.

        • Vitaly

          Here read http://www.cambridgeincolour.com/ru/tutorials/camera-sensors.htm , only thoughtfully, very intelligibly painted.

          • Oleg

            http://arttower.ru/wiki/index.php?title=%D0%A4%D0%B8%D0%BB%D1%8C%D1%82%D1%80_%D0%91%D0%B0%D0%B9%D0%B5%D1%80%D0%B0
            Here is the goose example at the end. And at the end they write:
            As you can see in the picture, this image turned out to be more blurry than the original one. This effect is associated with the loss of some information as a result of the Bayer filter.

            I.e. the Bayer filter does not restore the original info 100%! It interpolates (smears) it from the neighboring pixels.

          • Vadim

            A killer example :)))
            And where did the original image come from, obtained without debayering? Or was it shot on film and then scanned?
            Nobody says: let's shoot on three or four… matrices at the same time, we get true-multi-channel pixels, compare this with the wretched “Bayer grids” - and, like, feel the difference.
            In this epic discussion, people are trying to convey just a few points:
            1) ANY information about ANY object restored from ANY source is not original and has discrete quality. Conclusion: go to live concerts, travel, watch live sunsets, smell flowers, enjoy every spoon of borscht, love your loved ones, etc.
            2) if you are not an engineer (by nature), but a photographer (by vocation, or as a hobby), worry less about the deep technical details of how a photo is obtained (how the sensor cells are charged, etc.). Let the engineers think about that. Conclusion: practice, improve your skill. The hardware will get better over time. Whether you get better depends only on you.
            3) in order not to feel cheated, specify the things you are interested in. People often call the same thing in different ways, or vice versa put different meanings in the same words. Conclusion: the ability to find a common language helps in the store, at work, and in family life.
            4) in disputes, truth is born. We're not here to flood Arkady's page or find out who's smarter. I've taken a fresh look at some aspects of image acquisition and analysis. This is interesting to me, because I am an ophthalmologist. Someone from this day will begin to correctly use the prefixes "kilo-", "giga-", "kibi", "gibi". Conclusion: constructive discussions make the world a better place :)

          • Oleg

            I didn't know about gibibytes. It turns out Windows doesn't display sizes according to the standard; who would have thought.

          • Vadim

            By the way, yes, because of Microsoft, many spat on compliance with standards. I also recently found out about this.

  • Andrei

    To be honest, I could not care less what they are called and how they work. I cannot influence it. And we are talking about the visible spectrum... with our flexible and at the same time very imperfect eyes. We all see and judge differently. Why stuff your head with this nonsense, then. Otherwise it will start now: your matrix is wrong, your 'grenades are of the wrong system'...
    I do not think that anyone deceived me or misled me... I am more concerned with buying another body or lens :), with trips to various interesting places... and with my wretched eyes having enough time to see all the beauty.

  • Oleg

    I think I understand the reason for the debate:
    1. For the manufacturer, a bigger megapixel number is more attractive; that much is understandable.
    2. Nowadays the requirement is for the camera to output a finished, high-quality JPEG photo, so that the user does not have to bother with filters. I.e. if some filter needs to be applied to improve the visible quality of the photo, the camera should do it itself rather than leave it to the user. So it got a matrix of 10*10 values. Of course, there are only 100 bytes of source information there, and whoever needs them has RAW. For whoever needs a ready-made photo, the camera increases the real resolution 2*2 = 4 times by interpolating the missing pixels (debayering), then adds sharpness, corrects the color, and so on. In the end you get a photo of the best visible quality. Yes, if you honestly make a 10*10 photo from a 5*5 matrix, it will contain no inflated information, but it will actually look worse. So I concluded for myself that of the pixels written on the camera, 1/4 are real and the rest are interpolated. Maybe that is not a bad thing: visible quality is what matters to people, and whether it was actually captured or interpolated does not matter to them.

    • Roman

      I don't understand. Take a RAW image, downscale it by half, and then scale it back up to its original size. Then compare the "inflated pixels" after debayering with your honest pixels. The example is not entirely correct, but certain conclusions can still be drawn.

      The Bayer filter is a smart engineering solution that allows optimal use of the sensor area. The “honest solution” would be to take a square, place three sensors R, G, B in it (leaving one corner empty), average everything down to one pixel, and get a 2000x1500 picture from a 4000x3000 sensor. It is enough to add one more green sensor in the “empty” corner and we get an “extra” 60% of the brightness information (the green channel contributes about 60%, red 30%, blue 10%). You can simply throw it away, or you can “mix” it into the picture with mathematical transformations (more complex than simple linear or even cubic interpolation) and obtain a much higher brightness resolution.

      • Oleg

        >> Take a RAW image
        Where would I get one? Any converter outputs already processed data.

        http://arttower.ru/wiki/index.php?title=%D0%A4%D0%B8%D0%BB%D1%8C%D1%82%D1%80_%D0%91%D0%B0%D0%B9%D0%B5%D1%80%D0%B0
        Here they took a picture of a goose. That is the original object, so to speak. From it they built the Bayer mosaic, i.e. what the matrix of an ideal camera with an ideal lens would capture. Then debayering was applied to this mosaic, i.e. what the camera would do. The resulting photo is not equal to the original! Not equal! Now, if the camera had honest RGB pixels, the original image would have come out without any debayering. So this is simple blurring, which is obvious when you compare the initial and final pictures of the goose.
        Yes, it is probably a somewhat more complicated transformation than simple linear or cubic interpolation, but that does not mean it is not interpolation. Debayering is real interpolation. From the wiki:
        Interpolation: in computational mathematics, a way of finding intermediate values of a quantity from an available discrete set of known values.
        And so it is.

        • Roman

          > The resulting photo is not equal to the original!
          Does anyone claim otherwise?

          If you act “honestly”, as you ask, then from each set of four subpixels you need to make one pixel, not four, as the debayering algorithm does. As a result, your image will be half the width and height. And if you then enlarge it to the size of the original image, the result after debayering will look SIGNIFICANTLY better, because it was obtained using the information that you propose to throw away.

          The Bayer filter allows you to arrange photosensitive elements in the form of a lattice, using two green pixels instead of one increases the resolution and gives extra information for better transmission of green shades (and indirectly brightness).

          You can arrange elements on top of each other, there are also such matrices. Waves of different lengths penetrate to different depths, so you can get three times more information from one “geometric pixel”, but not everything is smooth there either. You can split the luminous flux into three and direct the light to three different matrices, but this is expensive and cumbersome.

          • Oleg

            If you take almost any picture (especially a low-resolution one) and increase its resolution 2*2 times with bilinear interpolation, it will also look MUCH better, and without any 4th channel. But that is no reason to do this to all files, increasing their size by a factor of 4!
            Yes, as I wrote above, the requirement for cameras now seems to be this: the camera should output a ready-made JPEG photo of the highest possible quality, having applied all possible filters itself. The size of the photo does not matter. Whether it is derived from real pixels or inflated ones is not that important.

          • Vadim

            Here is something Oleg cannot understand in any way: how single-channel (monochrome) pixels differ from multi-channel (color) pixels. And nothing can be done about it...

            Accordingly, interpolation as averaging has nothing to do with it. Debayering is not averaging, but obtaining a multichannel pixel from a group of monochrome ones.

            In this case, of course, you need to use all possible combinations of groups, because this will give more REAL information than using each monochrome pixel only once, followed by upscaling.

            And that goose example is silly... People took an image made of RGB pixels, cut it down to monochrome, i.e. threw away 2 channels (2/3 of the information), and then say: "You see, the image is degrading." Of course it is degrading: only a third of the information remains! And it is only thanks to debayering that you can still look at that image at all.

          • Oleg

            Yes, people threw away 2/3 of the information. Well, the camera does not capture those 2/3 of the information either: for example, the green component of the light that hits a red or blue cell simply disappears. Everything is correct with the goose.
            And how, in your opinion, should the goose example have been made? Think for yourself: there is a goose photo, say 10*10*3 = 300 bytes; the Bayer matrix will take only 100 of them. You cannot get the missing 200 bytes back; you will not get the same 300. Don't agree? No problem, let's make the goose example the way you want. Explain how.

          • Vadim

            To throw something away and never to have captured it are two different things. These single-channel pixels carry information about only one channel. But they do not stop being pixels. And nobody promised us any particular kind of pixels. They promised pixels, and they gave us pixels. As for what kind of pixels they are and how they differ from screen pixels, thanks to you we have now found that out in great detail and "chewed it over".

          • Oleg

            Generally, yes. Nowhere is it written that these are three-channel pixels. If someone does not want the information in the file inflated, they can set a lower resolution in the camera and thereby get an honest translation of Bayer pixels into RGB, so the manufacturers do leave the choice to the user. Perhaps I am slandering them in vain.
            Well then, monitor resolution could also be quoted as three times larger; no one promised an RGB pixel there either :).

        • Vadim

          Camera pixels and screen pixels are like tiles. Some are cheaper, some more expensive, some bigger, some smaller. But both are tiles, and both are pixels.
          Since the pixels of the camera matrix are single-channel (monochrome), they contain three times less information than screen pixels. They are not equivalent. But they are pixels. Honest, real ones.

          How can I explain it any more simply?

          • Roman

            I don't know. The man stubbornly believes that the manufacturers are cheating him. A Masonic conspiracy, he says: everyone stupidly doubles the picture, sharpens it and sells it as the real thing. Then I will upset him: in JPEG, RGB is converted to YUV, one luminance channel and two color-difference channels. The luminance is encoded in full, while for the color-difference channels only two pixels out of four are kept. And that is after debayering! They fool the people, fool them...

          • Oleg

            I know for certain that from a 10*10 Bayer matrix it is impossible to exactly restore the original 10*10*3 picture by any means, debayering included. The final picture will be more blurred than the original, and the Masons have nothing to do with it: pure mathematics. I do not know how JPEG encodes, I will not argue (though I always thought something there was decomposed into a Fourier series). But your logic amounts to this: let's add 200% of inflated pixels to the image, since part of them will be lost after JPEG compression anyway. I note that JPEG always means a loss of picture quality (visible in contrasty pictures), and after JPEG not only the inflated pixels but also the real ones will be lost!
            The goose example above is very revealing. Or do you, Roman, disagree with it as well? Let's make the goose example the way you want and see whether we get the original image quality.

          • Vadim

            :)))
            There is no “original picture” of the goose. There is a goose :). This is the ORIGINAL SUBJECT. We can never get the perfect picture. We saw the contour - we want to see the body parts; we saw body parts, we want to see feathers; we saw the feathers we want to consider each "villi"; we saw every "villi", we want to see every tick sitting on it, etc. There is no limit to perfection.

            So the goose (and not its photo) is photographed with a camera on a matrix with a Baer lattice. This is the ORIGINAL PICTURE.

            Information when shooting does not become less. It is exactly as much as this type of matrix can physically initially get in this camera with these settings and these shooting conditions. Those. some monochrome pixels.
            Again, monochrome doesn't mean grayscale. It's just that each pixel carries information about one channel (in this case - red, blue, green, green # 2).

            If we initially had a multichannel (for example, multilayer) matrix, and for some reason we hung a Bayer lattice on top, we would, of course, lose about 2/3 of the information. But we have not lost anything, we initially have a less perfect sensor. Whatever you do with him, he will not "dig up" more information for us.

            Then the original information also does not disappear anywhere: it is analyzed, interpreted and converted into multichannel pixels that can be understood by the monitor screen. You can do this in one pass (each monochrome pixel is used once), or in several (all possible combinations of groups of 4 adjacent pixels are considered + possibly groups of larger sizes and irregular in shape). The more passes, the more chances you have to squeeze the most out of the available information, the more likely the resulting image will meet expectations. But the amount of initial information does not change in this case.

          • Oleg

            >> There is no "original picture" of the goose. There is a goose ...
            Wrong. We cannot describe the living goose endlessly, that is true. But we have taken an abstract goose that consists of squares. It is as if we are not photographing the goose itself but photographing a photo of the goose. Say we have a goose picture of 100*100*3. An ideal camera with an ideal lens and an honest 100*100*3 matrix, photographing such a goose photo, will obviously output the same goose picture. An ideal camera with a 100*100*1 Bayer matrix will output a Bayer mosaic, from which debayering does not recover the original goose photo but a more blurred one. Which is what that link shows.
            >> There is no less information when shooting. There is exactly as much of it as this type of matrix can physically initially acquire.
            Yes. This matrix is able to obtain, say, 10*10 = 100 bytes of information from the object; that does not become less, because we are not throwing anything away. It becomes more: we add 200 inflated bytes. As if we had actually received 300 bytes from the object, when in fact we got only 100.
            Look: we need to turn 1 pixel of the single-channel Bayer matrix into a 3-channel one. Let it be a green pixel. But we physically do not know the value of either the red or the blue color of the object at the location of this green pixel. The matrix did not capture it, it cut it off. Where do we get it? We take it from a neighboring pixel, naively assuming that the color values of neighboring pixels are roughly similar. But that is not always so. For example, the source object is a bright yellow (red + green) dot, 1 pixel in size, against a green background, and this dot fell entirely onto this single green-channel pixel. The neighboring pixels will contain no information about its red component; there is only information about the green background. Debayering will render this dot green, whereas an honest three-channel matrix would see that the dot has a red channel and in the end we would get the dot's yellow color.
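Oleg's yellow-dot example, put into code (a toy sketch with an assumed RGGB layout; it deliberately ignores the anti-aliasing filter that Denis brings up below, which is exactly his objection):

```python
import numpy as np

h, w = 6, 6
scene = np.zeros((h, w, 3), dtype=np.uint8)
scene[..., 1] = 200                 # green background
scene[3, 2] = (255, 255, 0)         # one bright yellow (R+G) dot, exactly one pixel in size

# RGGB sampling: position (3, 2) is an odd row / even column, i.e. a green-filtered site
mosaic = np.zeros((h, w), dtype=np.uint8)
mosaic[0::2, 0::2] = scene[0::2, 0::2, 0]
mosaic[0::2, 1::2] = scene[0::2, 1::2, 1]
mosaic[1::2, 0::2] = scene[1::2, 0::2, 1]
mosaic[1::2, 1::2] = scene[1::2, 1::2, 2]

print(mosaic[3, 2])        # 255: only the green part of the yellow dot was measured
print(mosaic[2:5, 1:4])    # the surrounding R and B sites recorded 0 (the background has no red or blue)
# Whatever the demosaicing does afterwards, the red component of that dot was never captured,
# so the dot will come out as some shade of green.
```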

          • Denis

            “For example, the original object is a bright yellow (red + green) 1-pixel dot on a green background. And this dot fell entirely onto this single green-channel pixel. The neighboring pixels will contain no information about its red component” - your reasoning only confirms, once again, a lack of knowledge of the hardware :)
            This situation is impossible, because in front of the matrix there is also a special filter.

          • Oleg

            They say that on some models there is no anti-aliasing filter in front of the matrix. And where did you get the idea that the blur radius of this filter is larger than the pixel size? Maybe it blurs by 0.1 pixel, and then a bright dot the size of a pixel is quite possible.

          • Denis

            Cameras without the filter amount to a model and a half. Moreover, the absence of this filter is presented as a tool for professionals, people who know why they need it.
            As for the degree of blurring: it is there precisely so that the blur reaches the neighboring pixels. Even at 0.1 pixel, that means part of the information already reaches the neighbors. In practice it is more. Once again: study the basics before drawing conclusions about inflation and interpolation.

            PS Arkady, please delete my similar comment above; the quoting broke there and everything shifted.

    • Roman

      Another "fair option" is to increase the number of pixels on the matrix, decreasing their linear dimensions. But here we run into a number of problems. Noises are growing (due to the dense arrangement of elements on the matrix, their heating is greater, the distortions introduced into the measurements are greater). Problems with diffraction begin (we wrote about this just above). The sensitivity decreases as the number of photons from the light source is still finite, and as the size of the photosensitive element decreases, the number of photons falling on it from the same scene will decrease. This is the road to nowhere.

      • Oleg

        >> Another "fair option"
        Not another one: the only honest way to increase resolution is to increase the number of pixels.
        >> This is the road to nowhere.
        And the matrix manufacturers apparently don't know that. The number of pixels on matrices keeps growing.

        • Roman

          > And the manufacturers of matrices do not even know. The number of pixels on the matrix is ​​more and more.

          If the area of the matrix grows at the same time, it is quite justified. As for the rest, it is mostly marketing at work. Okay, the noise problem is somehow being solved as technology improves. But diffraction has not been repealed, and the resolution of the optics must match the resolution of the matrix, otherwise there is no point in the gain.

  • Roman

    >> I firmly know that from a 10 * 10 Bayer matrix it is impossible to accurately reconstruct the original 10 * 10 * 3 image in any way, debayering included.

    This is absolutely certain. IF there were an ideal matrix, each physical pixel of which produced a three-component value (such matrices exist, I already wrote about them, but they are far from ideal), then by reading information from that matrix we would get a better-quality picture. But at this stage we can only measure the level of illumination of a sensor cell and generate some voltage in response, which we then digitize. If you take a 4000x3000 matrix and do not cover it with color filters, you get your honest 12 MP monochrome image. If we put a Bayer grid of color filters on top of the matrix, you also get color information. We lose part of the brightness information because of the filters. But, I repeat once more, this array carries more information than an array of 2000x1500 averaged pixels.

    • Oleg

      >> this array will carry more information than an array of 2000x1500 averaged pixels
      A 4000x3000 Bayer matrix carries 12 MB of information. Honestly compress it into a 2000x1500 RGB photo and you get 9 MB of information, i.e. a 25% loss (instead of 2 green channels only 1 remains). Or run debayering and stretch it into a 4000x3000 RGB photo and you get 36 MB of information. So you either lose 25% or inflate by 200%. In principle, we could inflate those 200% at any time, even in real time, right when viewing the photo.
      But 12 megapixels is much more attractive to a marketer than 3 megapixels. They want to write 12. Yet the marketers cannot translate the picture honestly either, because if the camera says 12 megapixels, it cannot output a 3-megapixel photo. Therefore real single-channel 12 megapixels are written on the camera, and at the output you get the same 12, but partially inflated, three-channel. 12 = 12, everything matches and the customers have no questions.
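The byte counting from this exchange, spelled out (a sketch assuming 8 bits per value, as in the comment; Roman's correction about 14-bit values follows below):

```python
w, h = 4000, 3000
bayer_bytes    = w * h * 1                  # 12 000 000: one 8-bit value per sensel
honest_binned  = (w // 2) * (h // 2) * 3    #  9 000 000: 2000x1500 true RGB pixels
debayered_full = w * h * 3                  # 36 000 000: 4000x3000 RGB after demosaicing

print(bayer_bytes, honest_binned, debayered_full)
print(1 - honest_binned / bayer_bytes)      # 0.25 "lost" by honest 2x2 binning
print(debayered_full / bayer_bytes)         # 3.0: two thirds of the output is interpolated
```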

      • Roman

        > Bayer Matrix 4000x3000 carries 12MB of information

        4000x3000x14 bit = 21 MB (not 12 already)

        > In principle, we can inflate these 200% at any time, even in real time, right when viewing a photo

        Have you ever processed RAW? Have you noticed how much a decent converter crops off and how much it "adds on"?

        In all other respects, you don’t want to understand anything, so I don’t even see the point of convincing the opposite.

        • Oleg

          The converter works for as long as the hardware allows it. The camera does it much more simply and quickly. In a computer the processor is more powerful, so the algorithm is fancier. Tomorrow they will make computers 100 times faster, programmers will write an even heavier filter, the computer will think for the same couple of seconds and make the photo 1% prettier. That is not an indicator.
          >> In all other respects, you don't want to understand anything, so I don't even see any reason to convince you otherwise
          Convince me of what? That debayering is not interpolation? That a 10*10 Bayer matrix can obtain more than 100 bytes of information from an object? Or that 300 bytes of real information can be made out of 100 bytes of real information? Of what?

        • Denis

          Roman, it is useless to prove anything :)
          Let the person keep shrinking his photos by a factor of 4 and sleep peacefully, thinking that he loses nothing and only saves space on the hard drive :) And we, the stupid ones, will keep taking full-size photos in which a third of the information is invented :)

  • Roman

    > I don't know how JPEG encodes

    Well, look into it then. It starts with converting RGB to YUV. We keep the luminance channel in full and thin out the two color-difference ones. You can remove half of the pixels from each of the U and V channels and the difference will be hard to see by eye. As a result, 33% of the information is discarded relatively painlessly. In video, by the way, they discard even more: about 50%.

    Well, then even more fun.
    We divide the received information into blocks, each is encoded separately. Each block is subject to FFT. Instead of a set of values ​​for each pixel, we get a set of frequencies. There we cut the upper frequencies depending on the compression ratio, coarsening the image more and more. And only then we use the Huffman compression itself.
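A sketch of the sample accounting Roman describes (the conversion matrix is the usual BT.601 one; 4:2:2-style and 4:2:0-style subsampling are shown just to reproduce his 33% and 50% figures):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Per-pixel RGB -> YCbCr (the 'YUV' used by JPEG), BT.601 coefficients."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.4187 * g - 0.0813 * b + 128
    return y, cb, cr

rgb = np.random.randint(0, 256, size=(480, 640, 3)).astype(np.float32)
y, cb, cr = rgb_to_ycbcr(rgb)

full = 3 * y.size                                          # all samples before subsampling
kept_422 = y.size + cb[:, ::2].size + cr[:, ::2].size      # half of each chroma channel kept
kept_420 = y.size + cb[::2, ::2].size + cr[::2, ::2].size  # a quarter of each kept (video-style)

print(1 - kept_422 / full)   # ~0.33 of the samples discarded
print(1 - kept_420 / full)   # 0.5 discarded, before the block transform and Huffman coding even start
```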

    • Vadim

      Here I re-read, Roman, your post and thought that questions of interpretation and transformation of information border on philosophy :).

      For example:
      - I take my .nef file of 14 623 438 bytes;
      - I try to compress it in WinRAR, best compression - I get 14 582 483;
      - okay, the result is not great; I open the .nef in Photoshop in sRGB, 8 bit/channel (how many bits did we have there?) and save it as .tif without compression, interleaved, without a color profile - I get 36 671 508 bytes;
      - the same, but using LZW (lossless) compression - 18 310 864 bytes;
      - the same, but instead of LZW I compress the uncompressed .tif with WinRAR - 13-odd million bytes (that's where the street magic begins!).

      Now I try to prepare the image for printing. To keep everything transparent, I manually convert the RAW file opened in Photoshop (sRGB, 8 bit/channel) to CMYK and save it as .tif without compression - 48 879 468 bytes.
      The same, but with LZW - about 30 million bytes.
      Similarly, but the uncompressed .tif compressed with WinRAR - 24 462 690 bytes.

      What's interesting: in fact, we didn't do anything special with the initial information. In the first case, it was transformed for adequate display on the screen. In the second, for printing. I don't even presume to judge where it is real, where it is bloated. I would say that these are different avatars of the same information (apart from the loss when converting to different color profiles).

      The question remains: how can a packaged, swollen 3-channel .tif take up LESS space than a similarly packaged .nef? Is there less REAL information?

      • Denis

        In RAW, you have uncompressed data from the matrix, without debayerization, when converting to TIFF you get an already debayered image, there are already three times more subpixels, which is why it weighs three times as much.

        • Vadim

          It’s clear, there are no questions

      • Denis

        But packing and compression is another question, it all depends on the algorithms and the allowable loss. RAR hardly compresses simple images, as you can see. Most likely, something was lost.

        • Vadim

          Bit depth was lost, channels were added. And then it suddenly turned out that, provided the information is packed correctly, a full-fledged three-channel raster can occupy less space than the miserable NOT-yet-debayered source.

          And the philosophy is this: what should be considered the REAL amount of saved visual information? The size of the original RAW? The size of the full-weight raster? Which of the variants I described is the REAL amount of information, which is COMPRESSED and which is INFLATED?
          After all, it turns out that the same information can occupy a different volume depending on the presentation method and environment of use.

      • Roman

        Well, consider that what you get at the output of the matrix is, in effect, an image encoded in roughly the same way as in image-compression formats. After all, when part of the color information is cut away in JPEG, nobody shouts that the image is being inflated. And there, after all, the simplest approach would be to shrink the image by half on each side and then stretch it back; yet they bother with more complex transformations.

    • Vadim

      In short, I tried very hard to express the idea that the amount of useful information and the volume it occupies are two different things.

      And recently, it has become increasingly difficult to understand whether it is in a native, compressed or bloated state.

      • Denis

        The compressed RAW and JPEG files from my Nikon D1X take up almost equal volume, which is why I shoot only in RAW :) Both of them are compressed, although it is clear that there are more losses in JPEG.

        • Vadim

          In general, thanks for the pleasant and informative discussion.

          And most importantly, leaving thoughts about megapixels here, I shot a lot of nice frames these days, thinking about completely different things :)

      • Roman

        Well, based on the definition of the amount of information taken from the sensors - there is some value. But the way it is presented can be different. The lower the entropy, the smaller the message size, it is all information. More entropy means more message.

        We can lose some information by adapting to the properties of the output system (monitor) and the system of perception (eyes). For example, a monitor cannot display more than 8 bits per pixel (mostly). And the eye does not distinguish shades of color in the shadows. Etc. This reduces the amount of information and, accordingly, the size of the message. But due to compression, we reduce the entropy, leaving the message the same in volume. Converted to 16 or 32 bit format - increased entropy (the notorious “bloated pixels”).
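Roman's point that the same information can sit in a 'bloated' or a compact container is easy to see with a toy experiment (zlib as a stand-in for any compressor; the exact sizes will vary from run to run):

```python
import os
import zlib

payload = os.urandom(1_000_000)                        # 1 MB of high-entropy data (noise)
padded = bytes(b for x in payload for b in (x, 0))     # same information in a 16-bit container

print(len(zlib.compress(payload)))   # ~1 MB: noise does not compress
print(len(padded))                   # 2 MB: the "inflated" representation
print(len(zlib.compress(padded)))    # well under 2 MB: the padding squeezes back out
```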

  • Dmitriy

    On the D7100, a 15.6 MP RAW takes from 20 to 26 MB. Supposedly uncompressed RAW is recorded, but judging by how much the final size varies, compression is clearly still used, though without loss of quality or clipping of any information. Alas, this once again proves that for a photographer the structure of a RAW file is nothing more than a "black box" that only a select few can work with.

    • Arkady Shapoval

      Amateur cameras, including the D7100, do not support RAW without compression.

      • Dmitriy

        you are right about compression. however, as stated on page 67 of the D7100 manual, in my case the compression is 20-40%, namely "by" and not "up to". i.e. uncompressed RAW should be at least 15.6 MP * 14 bits/channel * 3 channels / 8 bits per byte = 81.9 MB. say 40% compression: 81.9 * 0.4 = 32.8 MB. and that is without metadata and previews. so it is still a "dark forest".

        • Arkady Shapoval

          Everything is really simple. No need to multiply by 3 channels.

          • Dmitriy

            I cannot understand why it is not necessary to multiply by 3 channels. In total the matrix gives a color depth of 42 bits (i.e. 14 bits per channel). 42 bits, call it 6 bytes. i.e. digitizing one pixel (RGB) takes 6 bytes. i.e. 15.6 megapixels require 93.6 MB.

          • Denis

            Dmitry, you inattentively read the article, or did not read at all. On the matrix, 15,6 are not full-color megapixels, but single-color. Therefore, nothing needs to be multiplied.

  • Dmitriy

    I seem to have caught on to the trick of handling the data from the matrix. Suppose one pixel is RGGB. Then from two physical pixels (RGGB + RGGB) you can actually get three pixels, thanks to the "midpoints" of neighboring cells (the first and third pixels are independent separate cells, while the second is made of the second half of the first and the first half of the second). For such a model half as many physical pixels are needed, and the same trick works both horizontally and vertically across the frame. So it is physically enough to have 4 times fewer pixels; then my 15.6 MP turns into 3.9 MP, and that is 3.9 megapixels * 42 bits/pixel / 8 bits/byte = 20.5 MB. This is clearly close to what we have. + metadata + preview.

  • Dmitriy

    To put an end to the debate about whether a pixel on the matrix means three colors or one, I asked Nikon support:

    ”Dmitro Tishchenko: Good afternoon, Elena! I have a somewhat odd question about the D7100 matrix.
    Dmitro Tischenko: According to the user manual, its size is 24 megapixels. The question is: what does the manufacturer mean by a pixel? Is a pixel a full matrix cell (RGGB), or a single color (RGGB = 3 pixels)?
    Elena: A full cell”

    That is, one pixel is a full-color, three-color pixel.

    • Denis

      “That is, one pixel is a full-color three-color pixel” - this is nonsense, answered by someone clearly not versed in the subject. Read at least Wikipedia before asking such questions, and all the more so before believing such answers. Everything has long since been described and explained many times.
      The RawDigger program came up in the comments - it can put an end to the one-color-or-three debate. Open any RAW file, go into the settings, uncheck the 2x2 checkbox, and view it in RAW composite mode. This is what the matrix saw, without debayering; when zoomed in, the Bayer mosaic is clearly visible.

      • Dmitriy

        That was the support team's answer. I also asked them a follow-up question about the apparent discrepancy in the amount of data; the answer boiled down to the closed nature of the data-processing algorithms. A little higher up I suggested that the final pixels can be obtained from a much smaller number of initial Bayer-filter cells (nothing forbids the matrix manufacturers from staying silent about this - after all, what interests us is the final image size in pixels (RGB)). I will additionally try the suggested trick with RawDigger (although yesterday I studied the detailed metadata of one of the RAWs: there it also spoke of pixels, and they corresponded to the resolution of the final image - otherwise the real image resolution would have to be divided by 3). I wonder whether the program can show me the total number of cells that make up a color.
        http://ru.wikipedia.org/wiki/%D0%9F%D0%B8%D0%BA%D1%81%D0%B5%D0%BB%D1%8C - gives the answer that it is a color.
        http://ru.wikipedia.org/wiki/%D4%E8%EB%FC%F2%F0_%C1%E0%E9%E5%F0%E0 - there it is not about pixels but about elements (filters).

        • Denis

          RawDigger, with debayering disabled, will show exactly what the matrix registered (by the way, you can enable the display of non-effective pixels there; my Canon 350D has a black bar on the left and at the top). You can enlarge the image, go into the settings and tick the 2x2 box, and it becomes clearly visible how debayering works: the image resolution does not change, the pixel colors change from monochrome to full color.
          The link says: "Also, an element of a photosensitive matrix (a sensel, from sensor element) is mistakenly called a pixel."
          But the number of sensels tells mere mortals nothing (this article is proof of that: not everyone grasped the point), so they are called megapixels, in the expectation that users can thereby understand what resolution the photo will have (99% of photographers could not care less about this whole debayering kitchen).

          • Dmitriy

            RawDigger - I tried to study it in more detail, and I did figure things out further. Yes, on the matrix we are indeed talking about subpixels (each color component separately). I found the required 15.6 million of them there. I.e. 15.6 M subpixels * 14 bits/subpixel + partial lossless compression (20-40%) = 18.7-24.9 MB + a 90-95 KB preview + metadata, which looks very much like the truth. BUT! After converting to the same TIFF without preserving color channels we get the same 15.6 MP, but after debayering. BUT! The Bayer array allows you to calmly and honestly interpolate almost the same 15.6 megapixels. How?!
            Bayer filter fragment:
            RGRGRG ...
            GBGBGB ...
            RGRGRG ...
            GBGBGB ...
            ...
            Generates pixels (RGB) from subpixels (the simplest and most obviously dumb option):
            RG+GR+RG
            GB BG GB

            GB+BG+GB
            RG GR RG

            RG+GR+RG
            GB BG GB

            i.e. a 4x4 array yields 9 full "honest" pixels.
            and to compensate for the missing rows there are the additional pixels. so it looks like the truth, doesn't it?
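Dmitriy's sliding 2x2 window, written out in code (a toy sketch; real demosaicing is smarter, but the pixel count comes out the same way):

```python
import numpy as np

def sliding_rggb_pixels(mosaic):
    """From an H x W RGGB mosaic, build an (H-1) x (W-1) RGB image: every overlapping
    2x2 window contains exactly one R, one B and two G sub-pixels."""
    h, w = mosaic.shape
    out = np.zeros((h - 1, w - 1, 3), dtype=np.float32)
    for y in range(h - 1):
        for x in range(w - 1):
            g_sum = 0.0
            for dy in (0, 1):
                for dx in (0, 1):
                    i, j = y + dy, x + dx
                    v = float(mosaic[i, j])
                    if i % 2 == 0 and j % 2 == 0:
                        out[y, x, 0] = v       # the window's red sub-pixel
                    elif i % 2 == 1 and j % 2 == 1:
                        out[y, x, 2] = v       # the window's blue sub-pixel
                    else:
                        g_sum += v             # its two green sub-pixels
            out[y, x, 1] = g_sum / 2
    return out

mosaic = np.arange(16).reshape(4, 4)            # the 4x4 fragment from the comment
print(sliding_rggb_pixels(mosaic).shape)        # (3, 3, 3): nine "pixels" from sixteen sub-pixels
```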

            • Alexey

              read my answer below!
              My opinion is that an HONEST pixel is one whose color components DO NOT PARTICIPATE in ANY OTHER pixels!

              And here we are simply being fooled! Pure and simple interpolation.

  • establishment

    IMHO everyone has somehow fixated on color pixels and missed one point.
    A camera with a declared 12 megapixels really does take a 12-megapixel BLACK-AND-WHITE shot. Debayering is essentially the restoration of colors for that twelve-megapixel image. Creating one color pixel out of four would still lead to a loss of image detail.

    • Alex

      Interestingly, is it possible to programmatically restore the original black and white image?

      • BB

        RawDigger to help you

    • Andreykr

      A 12-megapixel camera does not give a 12-megapixel black-and-white image, because the subpixels are covered with different color filters; the result looks like a grid and has to be blended somehow (and simply correcting the brightness of the channels will not help, since each color filter changes the brightness of different shades differently), that is, debayering is still needed. And in the end you have to put up with noise (a consequence of debayering). Why? Because of displacement: at this point we shot through a red filter, and one step to the right through a green one. But these are different points of the image. By combining them we either lose resolution or introduce noise.
      There are experimenters who removed the color filters from the matrix and thus obtained REALLY black-and-white images. Google: Scratching the Color Filter Array Layer Off a DSLR Sensor for Sharper B&W Photos. After such a manipulation debayering is no longer needed. You can do this yourself.

  • Alexey

    But mine does not! I have a 600D: 18.1 mega-SUB-pixels and roughly 17.8 real megapixels.

    Imagine that the lens is focused so precisely that the grain of the image is the size of a single subpixel.
    And a wire against the sky has got into the frame.

    it would come out as a black streak one subpixel row wide. after debayering we get a 4-pixel gradient that changes smoothly from the color of the sky to half the brightness of that sky and back to the color of the sky. but we would never get a black pixel, because every debayered pixel has a pair of 100% black subpixels and a pair of 100% bright ones.
    if we debayered with a step of +2, that is, pixelated without carrying two subpixels over from the previous square to the new one, we would get a strictly even 75%-shaded pixel whose neighbors are the color of the sky and which is 75% darker than they are. That is the ideal case, but as you know there are no lenses that sharp, and an ordinary kit lens at f/11 covers about 2.5 subpixels with its maximum sharpness. so when debayering to 17 megapixels we get an even more blurred gradient. But if we debayer to 4.5 megapixels, we get a 100% black pixel where the wire passes and 100% sky color around it. This is the usual linear approximation, where the number of real pixels is doubled horizontally and vertically and the missing ones are obtained simply by interpolation. The problem is complicated by the fact that in Photoshop I could not get 4.5 megapixels out of the RAW with a clearly defined wire in the image!

    So the question is: has no one come across a program that can make a TIFF out of a RAW with real 4.5 megapixels that do not share any subpixels with each other?
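I do not know of a converter that does exactly this out of the box either, but the binning Alexey asks about is easy to sketch by hand once the mosaic is available as an array (RGGB layout assumed; actually reading the NEF/CR2 into that array is left out here):

```python
import numpy as np

def bin_rggb(mosaic):
    """Collapse each non-overlapping RGGB quad into one RGB pixel: no sub-pixel is shared
    between output pixels, at the cost of halving the resolution on each side."""
    r = mosaic[0::2, 0::2].astype(np.float32)
    g = (mosaic[0::2, 1::2].astype(np.float32) + mosaic[1::2, 0::2]) / 2
    b = mosaic[1::2, 1::2].astype(np.float32)
    return np.dstack([r, g, b])

mosaic = np.random.randint(0, 2**14, size=(3456, 5184))   # a 600D-sized mosaic, for illustration
quarter = bin_rggb(mosaic)
print(quarter.shape)   # (1728, 2592, 3): ~4.5 MP of pixels that share no subpixels
```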

  • Oleg

    Well, actually, a matrix pixel always meant a group of sensitive elements from which the final pixel is formed. By the way, the final pixel on the screen is also displayed not by one dot, but by the same group of multi-colored dots

    • Arkady Shapoval

      Yes, but if a monitor has 1024 * 720 pixels, everyone understands that 1024 * 720 * 3 RGB components are responsible for the display (i.e. a triad per real pixel); here the situation is different.

      • Oleg

        how is it different? the essence is the same: produce an image that matches the declared pixels. And by the way, a monitor is far from always 3 RGB subpixels per pixel.

  • Victor

    There is one more trick of the RGBG Bayer matrix: since there are twice as many green pixels in the Bayer pattern as the others, it is the green channel that has the highest sensitivity, the widest dynamic range and the lowest noise level. This creates certain problems with metering accuracy, which can sometimes be solved in a tricky way by setting the white balance manually with the difference in channel sensitivity of the Bayer pattern taken into account - the so-called UniWB.
    In general, it is a pity that camera manufacturers abandoned the RGBW matrix: https://ru.wikipedia.org/wiki/RGBW , where one pixel has no filter at all, i.e. is "white". Such a matrix admittedly produced paler, more "film-like" colors, but exposure metering, sensitivity, noise and dynamic range would have been much better.


Copyright © Radojuva.com. Blog author - Photographer in Kiev Arkady Shapoval. 2009-2024
