
Sensor


The sensor

 

The sensor (figure 01) is the heart of every digital camera, even if it is not the only important component. It converts the light that comes in through the lens into an electronic signal. This signal is then amplified and converted into image data by the processor.

Every sensor has its own sensitivity to light, and this sensitivity depends on the quality and type of sensor in the camera. If we look at CMOS sensors, for example (see below), it is important to note that not all of them are equal in terms of sensitivity. The sensors mounted on reflex cameras are usually costlier and more sensitive.

It is sufficient to try two cameras of different price ranges to realize that, all lighting conditions being equal (for example, testing the sensors in low light), the better sensor will allow shorter exposure times and lower ISO values.

01 CMOS sensor.

The sensor’s number of pixels

 

The sensor’s total number of pixels is a very important characteristic because it determines how many elementary points (photosites) its surface is subdivided into. The more pixels there are, the better the image resolution will be. The size of each pixel and the distance between pixels are also very important for electronic noise.

In order to reduce the noise generated by the sensor to a minimum, it is important that the pixels are large and well spaced. The ideal, therefore, would be a sensor with many large pixels. In compact digital cameras, for instance, many pixels are crowded into a small space because of the camera’s compactness. On reflex cameras we find even more pixels (depending on the model), but the sensors are notably larger, so the pixels are also larger and farther apart, which greatly improves the quality of the generated image. We should not confuse the resolution of an image with the sensor’s pixel density (photosites per inch): generally, the quality of the sensor improves as its pixel density decreases.

Both sensor and image resolution are measured in pixels over an area, but for the image a greater resolution means greater quality, while for the sensor the opposite is true. Therefore, in order to get a crisp and clean image, we need a sensor with not only many pixels but large ones.

Between two sensors of the same size, the one with a lower number of pixels is generally the best.
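
As a rough sketch of this trade-off, the pixel pitch (the width of a single photosite) can be estimated by dividing the sensor width by the number of pixels along that side. The Python example below uses illustrative numbers, not the specifications of any particular camera:

def pixel_pitch_um(sensor_width_mm, horizontal_pixels):
    # Approximate pixel pitch in micrometers (um).
    return sensor_width_mm * 1000 / horizontal_pixels

# A full frame sensor about 36mm wide with 6000 pixels across:
print(pixel_pitch_um(36.0, 6000))  # 6.0 um per pixel
# A compact-camera sensor about 6.2mm wide with the same pixel count:
print(pixel_pitch_um(6.2, 6000))   # ~1.03 um per pixel, hence more noise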

 

Format

 

Another characteristic of the sensor is its format. All sensors are rectangular and therefore have a long side and a short side. When the ratio between the two (the long side divided by the short one) is equal to 1.5, the format of the image is 3/2, while if the result is about 1.33, the format is 4/3. In photography, the 3/2 and 4/3 formats are used the majority of the time. Usually the 3/2 format is found on reflex cameras, while 4/3 is more prevalent in compact digital cameras (its ratio is closer to a square). In cinema the preferred format is 21/9, while home televisions in the last few years have adopted 16/9; cathode ray TV sets had a format of 4/3. You can see examples in the section “Image formats” on the “Resolution” page.
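
As a quick worked example, here is the same arithmetic in Python (the sensor dimensions are typical values, used here only for illustration):

# Aspect ratio = long side / short side.
print(36 / 24)    # 1.5   -> 3/2 format (typical reflex sensor)
print(17.3 / 13)  # ~1.33 -> 4/3 format (typical of many compacts)
print(16 / 9)     # ~1.78 -> 16/9 format (modern televisions)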

 

Size

 

Reflex camera sensors are also classified according to their dimensions. There are sensors of various sizes, ranging from just a few millimeters up to the 35mm format. The names APS-C and 35mm both derive from film formats. To identify the dimensions of a sensor, you usually use the length of the longer side of the rectangle (knowing that the format is 3/2, the shorter side can be deduced). APS-C corresponds to a longer side of roughly 23mm; each manufacturer has adopted slightly different dimensions and attributes its own proprietary name to the APS-C format. The full frame format corresponds to 35mm film (a sensor of about 36 x 24mm) and is currently the same for all manufacturers.

Being familiar with the dimensions of the sensor is very important because a lens with a focal length of 50mm, mounted on a reflex camera with an APS-C sensor, gives the same framing as a 75mm lens (its equivalent focal length). If that same lens is mounted on a camera with a full frame sensor, the focal length will be the true focal length of the lens: 50mm. This is important to understand because on a digital camera with an APS-C sensor a telephoto lens becomes more powerful, while a wide-angle lens loses part of its visual field; the reverse applies when moving back to 35mm sensors. It is also worth knowing that all full frame lenses can be mounted on both full frame and APS-C digital cameras (figures 02 and 03), while lenses designed for smaller formats cannot be mounted (with some exceptions) on digital cameras with full frame sensors (figure 04). To find the equivalent focal length of a lens mounted on a camera with an APS-C sensor, multiply its focal length by 1.5. In other words, an 18-105mm lens on an APS-C digital camera frames like a 27-157.5mm lens.
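
A minimal sketch of this multiplication in Python (the 1.5 crop factor is the approximate value used in the text; some manufacturers use slightly different factors):

def equivalent_focal_length(focal_mm, crop_factor=1.5):
    # Focal length a full frame lens appears to have on a smaller sensor.
    return focal_mm * crop_factor

print(equivalent_focal_length(50))   # 75.0  -> a 50mm lens frames like a 75mm
print(equivalent_focal_length(18))   # 27.0  -> wide end of an 18-105mm zoom
print(equivalent_focal_length(105))  # 157.5 -> tele end of the same zoom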


02a Full frame lens and full frame sensor. The image is upside down because it was generated beyond the focal plane.

02b Photograph shot with a full frame lens on a camera with a full frame sensor (Vendicari, Sicily).


03a Full frame lens and APS-C sensor. The image is upside down because it was generated beyond the focal plane.

03b Photograph shot with a full frame or APS-C lens on a camera with an APS-C sensor (Vendicari, Sicily).

04a APS-C lens and full frame sensor. The image is upside down because it was generated beyond the focal plane.


04b Photograph shot with an APS-C lens on a camera with a full-frame sensor.

In figure 02a, we can see how a full frame lens gives rise to a circular image; the rectangle of the full frame sensor is inscribed perfectly inside it. In figure 03a, we see how the circular image from the same lens is much larger than the APS-C sensor. In this case, the image quality may even improve, because the sensor captures only the central portion of the circle, avoiding the imperfections that occur at the edges of the lens. Looking at figure 03b, we can see how the smaller dimensions of the sensor make the image appear magnified by a factor of 1.5 (note the difference with figure 02b).

In figure 04a, on the other hand, it is easy to see that the cone of light generated by the APS-C lens is too small to produce a circular image large enough to cover the full frame sensor. In this case, we can see from figure 04b that the photo is not usable, because it comes out with black borders.

Having said this, I should add a clarification: some full frame cameras are able to mount APS-C lenses by giving up a portion of the sensor. In this case, the camera uses only the central part of the sensor and crops the image by a factor of 1.5.

 

CCD sensors

 

CCD sensors, or charge-coupled devices, are made with a grid of photosensitive pixels. Each pixel transforms the light that strikes it into an electrical charge that is transferred to the pixel next to it and so forth, running along the line and then passing to the next line. The charges of the single pixels are therefore read line by line and transformed into an electrical current (an analog signal). After amplification, the signal is converted to a digital one by the processor. This type of sensor is very sensitive but consumes a lot of energy. CCDs are also expensive, because their production requires highly specialized manufacturing techniques. Their other defect is that they are not very fast, because they must transfer the charge step by step. Until a short time ago, CCD sensors were the best sensors for digital cameras and the most widely used.
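
A toy sketch of this line-by-line readout order in Python (the 3x3 charge values are made up purely for illustration):

# Charges accumulated by a hypothetical 3x3 grid of pixels.
charges = [[12, 40, 7],
           [55, 30, 90],
           [3, 22, 61]]

signal = []
for line in charges:      # each line of pixels is shifted out in turn...
    for charge in line:   # ...and its charges are read one by one
        signal.append(charge)

print(signal)  # the serial stream that is then amplified and digitized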

 

CMOS sensors

 

CMOS sensors, or complementary metal-oxide semiconductors, transform light into electrical current without transferring the charge from one pixel to another like CCD sensors do. Because of this, they are much faster. Another difference is that the amplifier is integrated into the same chip, so the signal that comes out of it goes directly to the image processor. CMOS sensors are much more economical than CCD sensors, because they are produced with the same methods as the microchips found in computers. They also consume much less energy than CCD sensors. Until some years ago, however, they were less sensitive and suffered from a great deal of noise.

Around 2010, though, CMOS sensors were greatly improved, to the point that they appeared not only in reflex cameras but also in compact digital cameras. Their sensitivity has been notably increased, and their noise and dimensions reduced.

CMOS sensors consume much less power than CCDs, are cheaper to produce, faster at creating images, perform better with burst shots, are compact and very sensitive, and generate little electronic noise. There are two types of CMOS sensors: the main type, which uses a color filter array, and another type called Foveon. The first is the most widely used, while the second was used, as of December 2014, only in a few digital cameras.

 

Operation of a CMOS sensor with RGB color filter array (CFA)

 

The electronic sensor of a digital camera is a rectangle containing an array of photosites that capture the light and convert it into an electronic signal (figure 05).

Every photosite is equivalent to a photographic pixel. The photosite is sensitive to the intensity of the light, but not to its wavelength. This means that the photosites of the sensor are not able to distinguish colors.

05 CMOS sensor: the grid of photosites.

Therefore, the light signal that strikes each photosite is measured only by its intensity and transformed into an electronic signal without the information that communicates the color. The image that results at this point is in shades of gray. Each photosite, in fact, receives light of a different intensity, represented by different shades from white (100% brightness) to black (no brightness). If we imagine an image with trees and a sun in the sky, the photosite that receives light from the sun would register a brightness of 100% and would be white, while a photosite that receives light from the leaves of the trees would register a gray (between black and white). This is where the RGB color filter array comes into play on the surface of the sensor.

This filter has exactly the same dimensions as the sensor and is subdivided into as many parts as there are photosites (and pixels) in the camera. Each of these parts fits perfectly over the photosite below.

It is like a mosaic of filters assembled into a grid (figure 06). As we see in figure 07, the filter is mounted on the sensor so that each tessera lines up with the photosite below it.

06 RGB color filter array.

07 Position of the filter over the sensor. The tesserae of the mosaic fit perfectly with the photosites.

As seen in figure 06, each tessera of the mosaic-like filter can assume one of three colors: red, green or blue. Each tessera allows only the wavelengths of its own color to pass, so the photosite below it measures the intensity of the light corresponding to the color of the filter above it. But each pixel of the photograph needs information about all three colors, so this is not enough: what we have created is a “mosaic” where each tessera carries only one color component. This is where the “Raw format” of the image comes from.
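
A minimal sketch of such a mosaic in Python (a Bayer-style pattern with twice as many green tesserae as red or blue, which is the classic arrangement; the exact layout varies by manufacturer):

def filter_color(row, col):
    # One possible Bayer-style tiling: G R / B G, repeated.
    if row % 2 == 0:
        return "G" if col % 2 == 0 else "R"
    return "B" if col % 2 == 0 else "G"

for r in range(4):
    print(" ".join(filter_color(r, c) for c in range(6)))
# G R G R G R
# B G B G B G  ...each photosite records the intensity of one color only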

 

Demosaicing algorithm

 

In order to attribute to each pixel the information about its two missing colors, we need to infer it from the nearby pixels. To do this, we employ a demosaicing algorithm. The image generated up to this point is a mosaic composed of points, each of which has only one color and a specific brightness. The processor of the camera elaborates the data using the demosaicing algorithm, adding to each pixel the information on the two missing colors. In practice, the two missing colors of a pixel are estimated by taking into account the nearby photosites, each of which carries one of the other colors (figure 08).

08a/b Demosaicing. At the (red) photosite highlighted in this figure, the information on the green and blue colors is missing; the processor calculates it from the adjacent photosites.
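
A minimal sketch of this estimation in Python (a simplified neighbor average over a hypothetical 3x3 patch, not the exact algorithm of any camera):

# Filter color and measured intensity for each photosite in the patch.
colors = [["G", "R", "G"],
          ["B", "G", "B"],
          ["G", "R", "G"]]
values = [[120, 200, 110],
          [90, 130, 95],
          [115, 190, 105]]

def estimate(r, c, wanted):
    # Average the neighbors of (r, c) whose filter passes `wanted`.
    samples = [values[rr][cc]
               for rr in range(max(0, r - 1), min(3, r + 2))
               for cc in range(max(0, c - 1), min(3, c + 2))
               if (rr, cc) != (r, c) and colors[rr][cc] == wanted]
    return sum(samples) / len(samples)

# The central photosite measured only green; reconstruct red and blue:
pixel = (estimate(1, 1, "R"), values[1][1], estimate(1, 1, "B"))
print(pixel)  # (195.0, 130, 92.5) -> a complete RGB pixel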

There are different demosaicing algorithms, all along the same general lines. The difference is that the more refined ones require more computation and therefore more powerful, more expensive processors. A variant, for example, is RGB-E, which makes use of four colors; some algorithms make use of filters of different colors. However, the principle remains the same, and the number of colors never exceeds four.

 

Color depth

 

As already noted, the sensor converts light into an electronic digital signal. The light, being made up of photons, can be considered more or less an analog signal for the purposes of this discussion. An analog signal takes on continuous values, without any interruption between them. The electronic digital signal, on the other hand, is quantized; that is, of all the values that the intensity of the light can assume, it records only certain discrete values.

High quality sensors dedicate 8 bits to each color, which gives a color depth of 24 bits. 8 bits means 2 to the 8th power, and therefore 256 variations per color. If we multiply 256 x 256 x 256, we get around 16.7 million possible colors, identified by the 24 bit value known as “true color”. Often, in their calculations, processors use values of 12 or 16 bits per channel in order to avoid a loss of detail from mathematical rounding. Saving the file at 8 bits per channel is then acceptable, given that saving beyond 8 bits does not yield any appreciable improvement in the color depth.
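
The arithmetic, spelled out in Python:

levels_per_channel = 2 ** 8              # 256 levels for each of R, G, B
total_colors = levels_per_channel ** 3   # one value per channel combination
print(total_colors)                      # 16777216, about 16.7 million colors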

 

Operation of the CMOS Foveon

 

The Foveon is the other type of CMOS sensor. It uses a different technology that does not need a CFA filter, because its photosites also capture the wavelength information, and therefore the color.

In practice, for each point or pixel of the final image there are three dedicated photosites, each of which identifies the intensity of one of the three RGB colors in the light that hits it. Because of this, the three photosites generate only one pixel, but without the need for interpolation or demosaicing. This is an advantage, because the information received from the light is not wasted, but it also means that a 9 MP Foveon sensor creates a photograph of only 3 MP. Foveon has recently been acquired by Sigma, which is the only company producing cameras that use this kind of sensor. These sensors have a lower number of pixels than other models.
