Setting up successful video surveillance requires attention to many details. The ultimate goal is to provide images of sufficient quality to support your surveillance objective – but what is image quality, and how do you adjust your camera to achieve your goal?
The basic principle behind most photography is that you need light! Light emitted from a light source (artificial or natural) is reflected by an object and then enters the camera through the lens. An image of the object is captured when the light rays hit the image sensor (or photographic film).
Figure 1: Light is reflected by an object and enters the camera through the lens.
The essence of this principle is that without light, there will be no image, and poor light will result in a poor image. Anything reducing the light between the object and the sensor will impair image quality. Examples are windows that partially block light, smoked dome covers or lenses with poor optics and small apertures.
If the scene you wish to view is lacking in light, you may need to add some. Auxiliary lamps illuminating the object can often increase image quality considerably.
Also, consider the fact that a camera mounted and tested during daytime can give entirely different results at night, or as the seasons shift. Make sure you understand the entire range of light in your scenario, and set up your camera accordingly.
Basic camera settings
The opening or aperture of a lens, also known as the iris, greatly affects the amount of light reaching the sensor. The f-number of a lens is the quotient of its focal length and the diameter of the opening. For example, a 50 mm lens with a 25 mm aperture has an f-number of 2.0, as 50/25 = 2. The higher the f-number, the smaller the opening, and vice versa, so a lower f-number means that more light reaches the sensor.
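The f-number relationship described above is simple to express in code. The following sketch is purely illustrative and not tied to any camera API; it also shows that, because light gathering scales with aperture area, each change of the f-number changes the light by its square:

```python
def f_number(focal_length_mm: float, aperture_diameter_mm: float) -> float:
    """f-number = focal length divided by aperture diameter."""
    return focal_length_mm / aperture_diameter_mm

def relative_light(f_num_a: float, f_num_b: float) -> float:
    """How much more light lens A gathers than lens B.

    Light gathered scales with aperture area, i.e. with 1/f-number squared.
    """
    return (f_num_b / f_num_a) ** 2

# The example from the text: a 50 mm lens with a 25 mm opening.
print(f_number(50, 25))          # 2.0
# An f/2.0 lens admits four times the light of an f/4.0 lens.
print(relative_light(2.0, 4.0))  # 4.0
```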
The aperture also affects the depth of field, that is, how much of the scene is in focus at the same time. A wide-open lens has a very shallow depth of field. Objects slightly closer to or further from the camera than the set focus point will be out of focus. By increasing the f-number (thus closing the aperture), the depth of field increases, and the objects can be brought back into focus.
Figure 2: The aperture also affects the depth of field, that is, how much of the scene can be in focus at the same time.
Another camera setting directly connected to the amount of light available in the scene is the shutter speed. This is the length of time the shutter stays open, letting light reach the sensor and form an image, for example 1/50th of a second. When more light is available, the shutter does not need to stay open as long, so faster shutter speeds are possible. As the light decreases, the shutter speed must be slower, giving the sensor more time to gather enough light to form an image. At very slow shutter speeds, anything moving in the scene will appear blurred in the image, as the object's position changes during the capture. This is called motion blur, and it degrades both the image quality and the usability of the video.
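As a rough rule of thumb, the blur length is simply the distance an object travels during the exposure, converted to pixels. The sketch below is illustrative only; the pixels-per-metre figure is an assumed image scale, not a value from any specific camera:

```python
def motion_blur_px(speed_m_per_s: float, exposure_s: float, px_per_m: float) -> float:
    """Approximate blur length in pixels: distance travelled during the
    exposure, multiplied by the image scale (pixels per metre at the object)."""
    return speed_m_per_s * exposure_s * px_per_m

# A person walking at 1.5 m/s, imaged at an assumed 100 pixels per metre:
print(motion_blur_px(1.5, 1 / 50, 100))   # ~3 pixels of blur at 1/50 s
print(motion_blur_px(1.5, 1 / 500, 100))  # ~0.3 pixels at 1/500 s
```

A tenfold faster shutter gives a tenth of the blur, which is why bright scenes (allowing fast shutters) produce crisper video of moving objects.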
Many cameras employ an internal boost of the image signal, called gain. To enable image capture in low light without affecting the shutter speed or the depth of field, the weak sensor signal can be electronically amplified, resulting in a brighter image. A side-effect of this is that tiny imperfections in the image are also amplified and are reproduced as image noise.
Axis cameras automatically adjust the aperture, shutter speed and gain to produce an image that is always correctly exposed. It is also possible to specify a priority setting to favor either low noise or low motion blur, depending on your requirements.
Figure 3: Noise degrades image quality and generally increases the bandwidth needed for the video stream.
Advanced camera settings
All digital images are made up of small picture elements, called pixels. A pixel is the smallest individual component of an image, and each has a specific color and intensity. The total number of pixels in an image is referred to as the resolution. A resolution of 1920×1080 means there are 1920 columns and 1080 rows of pixels making up the image (2 073 600 pixels in total). Another term for this specific resolution is 2 megapixels, as there are roughly 2 million pixels in the image.
Figure 4: A pixel is a point in the image with a specific color and intensity.
At a higher resolution, the camera can capture finer details in the scene, but since the value of each pixel needs to be stored and transferred in a video stream, the bandwidth requirement also increases. Depending on your operational requirements, you should adjust the resolution to provide sufficient image detail without exceeding your available bandwidth.
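The resolution arithmetic above takes only a few lines to sketch. The numbers are the ones from the text; nothing here is tied to a particular camera model:

```python
def pixel_count(width: int, height: int) -> int:
    """Total number of pixels: columns times rows."""
    return width * height

def megapixels(width: int, height: int) -> float:
    """Resolution expressed in millions of pixels."""
    return width * height / 1_000_000

# The 1920x1080 example from the text:
print(pixel_count(1920, 1080))            # 2073600
print(round(megapixels(1920, 1080), 1))   # 2.1, i.e. roughly 2 megapixels
```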
Visible light is composed of a spectrum of wavelengths, where different wavelengths are perceived as different colors. Sunlight covers almost this entire spectrum and is generally considered to be white. Other light sources may have a bias toward longer or shorter wavelengths, causing whites to appear slightly red-tinted or blue-tinted. When the light is reflected by objects, this tint transfers to the image, causing an unnatural appearance.
The wavelength bias of a light source is called its color temperature and is measured in kelvin (K). If the camera knows the color temperature of the incoming light, it can adjust the image to keep white objects white – a function called white balancing. Many cameras try to automatically determine the color temperature and then set the white balance. You can also set the white balance to a fixed color temperature depending on the light fixtures in the scene, for example fluorescent lamps or tungsten bulbs.
Figure 5: If your image is unnaturally blue, check your white balance settings!
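To make the idea of automatic white balancing concrete, here is a minimal sketch of one classic heuristic, the "gray-world" assumption (the average color of a scene is neutral gray, so each channel is scaled toward the overall mean). Real cameras use considerably more sophisticated algorithms; this is only an illustration of the principle:

```python
def gray_world_balance(pixels):
    """Gray-world white balance on a list of (R, G, B) tuples.

    Scales each channel so its mean matches the overall mean,
    neutralising a uniform color cast.
    """
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    overall = sum(means) / 3
    gains = [overall / m for m in means]
    return [tuple(min(255.0, p[c] * gains[c]) for c in range(3)) for p in pixels]

# A gray scene with a blue cast (blue channel running high):
cast = [(100, 100, 140), (50, 50, 70)]
balanced = gray_world_balance(cast)
# After balancing, each pixel's channels are (near-)equal again: neutral gray.
```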
Fluorescent lighting is very common in stores, warehouses and office environments. In this kind of lighting, the lamp turns on and off at a rapid pace, although to the human eye this appears as a steady flow of light. At certain camera shutter speeds, however, this flickering creates an undesirable effect in the video stream. Enabling the flicker-free option in the camera allows it to adjust its shutter speed to avoid the flickering effect. Depending on the geographical location, the power frequency will be either 50 Hz or 60 Hz, and this value must also be set in the camera for the flicker-free setting to work properly.
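The reason the power frequency matters is that a fluorescent lamp pulses at twice the mains frequency (100 times per second on a 50 Hz grid, 120 on a 60 Hz grid), so flicker-free exposure times are whole multiples of that flicker period. A small sketch of the arithmetic, using exact fractions to keep the times readable (illustrative only, not how any camera firmware exposes this):

```python
from fractions import Fraction

def flicker_free_exposures(mains_hz: int, count: int = 4):
    """Exposure times (in seconds) that average out lamp flicker.

    Lamps pulse at twice the mains frequency, so safe exposures are
    integer multiples of that flicker period.
    """
    flicker_period = Fraction(1, 2 * mains_hz)
    return [n * flicker_period for n in range(1, count + 1)]

# At 50 Hz mains, safe exposures are multiples of 1/100 s:
# 1/100, 1/50, 3/100, 1/25 of a second, and so on.
print(flicker_free_exposures(50))
```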
To produce a balanced exposure, the camera adjusts the shutter speed and aperture according to the available light. In some scenes, there may be areas that are much brighter than others, caused by reflections, strong lights, or sunlight coming through a window. These overly bright areas may cause the camera to lower its exposure settings, thereby making most of the image too dark.
By enabling the backlight compensation setting, the camera will ignore any isolated bright areas and keep the exposure at a suitable level for the darker parts of the scene.
Figure 6: Overly bright areas can trick the camera into lowering its exposure settings, making most of the image too dark. This is solved by enabling the backlight compensation setting in the camera.
The difference between the darkest and brightest parts of a scene is called the dynamic range. If the dynamic range is wider than the capabilities of the camera’s sensor, the dark parts will be rendered as all black, and the bright parts will be all white.
Some cameras feature a Wide Dynamic Range (WDR) mode, which uses various techniques to compensate for extremes of brightness in the scene. Enable this setting if there are very dark and very bright areas in your scene. Where possible, however, position and aim your cameras so as to avoid extreme variations in brightness in the first place.
Figure 7: The first two images show how the dynamic range in the monitored scene causes parts of the image to be overexposed or underexposed. In the image to the right, WDR dynamic capture has been used, resulting in a balanced image with all areas visible.
Quality and compression
Digital video can be compressed to use less bandwidth for streaming and to save storage space. Compression involves applying a mathematical algorithm to the numerical values that make up the video stream. The output is considerably smaller than the uncompressed stream, but it must be expanded by a reversing algorithm before it can be viewed.
Most algorithms, or codecs (an abbreviation of compressor/decompressor), achieve this partly by discarding information of little significance. During decompression, the missing data is restored by approximation, making the end result slightly different from the original recording. This is called lossy compression, as it lowers the image fidelity. At low compression ratios, the human eye will not notice the loss, but at higher compression ratios (for lower bandwidth), the image quality deteriorates, with noticeable artifacts in the image.
Different scenes can be compressed with varying results. A busy scene with a lot of motion will be more complex to compress, which results in more bandwidth being required, or an increase in image artifacts. You will need to tweak your compression settings until you find an acceptable trade-off between file size and image quality.
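To see why compression matters at all, it helps to put numbers on the trade-off. The sketch below compares an uncompressed stream with a compressed one; the 24 bits per pixel and the 300:1 ratio are assumed, illustrative figures, not properties of any particular codec or camera:

```python
def raw_bitrate_mbps(width: int, height: int, fps: float,
                     bits_per_pixel: int = 24) -> float:
    """Uncompressed bitrate in megabits per second."""
    return width * height * bits_per_pixel * fps / 1e6

def compressed_bitrate_mbps(raw_mbps: float, ratio: float) -> float:
    """Bitrate after compression at the given (assumed) ratio."""
    return raw_mbps / ratio

raw = raw_bitrate_mbps(1920, 1080, 30)
print(round(raw, 1))                             # ~1493 Mbit/s uncompressed
# At a hypothetical 300:1 compression ratio:
print(round(compressed_bitrate_mbps(raw, 300), 1))  # 5.0 Mbit/s
```

A busy, noisy scene compresses less well, so at a fixed target bitrate it will show more artifacts; at a fixed quality it will need more bandwidth.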
Choosing the right camera and lens for the job will have the greatest effect on image quality, but you can also accomplish a lot through proper positioning and setup of your camera.
Figure 8: At a low compression ratio (left), the naked eye will not notice any loss, but at higher compression (right) there will be noticeable artifacts in the image.
The correct image quality for your surveillance video depends on the objectives of your project. In this article we have discussed some important parameters affecting image quality: the amount and type of light, and camera settings such as depth of field, gain, resolution, color temperature, backlight compensation, WDR and video compression.