All digital data is made up of ‘bits’, and that includes digital images. In computing, a ‘bit’ is either on or off, so a single bit has only two possible values. But combine several bits and the number of possible combinations multiplies, letting you record a much wider range of values. The number of bits used is the ‘bit depth’. This is especially relevant in photography, where you need to be able to render subtle differences in tone and color.
Here’s a table showing the number of different values for different bit depths:
| Bit depth | Number of tones | File format |
|-----------|-----------------|-------------|
| 1-bit | 2 | |
| 2-bit | 4 | |
| 3-bit | 8 | |
| 4-bit | 16 | |
| 5-bit | 32 | |
| 6-bit | 64 | |
| 7-bit | 128 | |
| 8-bit | 256 | JPEG |
| 9-bit | 512 | |
| 10-bit | 1,024 | HEIF (where offered) |
| 11-bit | 2,048 | |
| 12-bit | 4,096 | RAW (basic cameras) |
| 13-bit | 8,192 | |
| 14-bit | 16,384 | RAW (advanced cameras) |
| 15-bit | 32,768 | |
| 16-bit | 65,536 | TIFF (from a RAW file) |
| … | … | |
| 32-bit | 4,294,967,296 | Merged HDR images |
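The pattern in the table is simply powers of two: n bits give 2^n distinct values. A quick sketch in Python reproduces the key rows:

```python
# Number of distinct tonal values for a given bit depth is 2 ** bits.
for bits in (1, 8, 12, 14, 16, 32):
    print(f"{bits}-bit: {2 ** bits:,} tones")
# 8-bit gives 256 tones, 12-bit gives 4,096, 16-bit gives 65,536, and so on.
```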
Most images are shot in the JPEG format for convenience. This produces small, compressed files that don’t take up too much space, can be opened and displayed on any device and yet can still display the subtle tones needed for photographs.
Are JPEGs 8-bit or 24-bit?
They are both. JPEG is an ‘8-bit’ format in that each color channel uses 8 bits of data to describe the tonal value of each pixel. Since the three color channels that make up the photo (red, green and blue) each use 8 bits of data, these are sometimes also called 24-bit images (3 × 8-bit).
Each color channel in an 8-bit JPEG can record 256 different shades, and while that doesn’t sound like enough to give a smooth transition of tones, remember that tones in a picture are rarely composed of one channel alone. The three RGB color channels generally work in combination to produce a much subtler, wider range of tones.
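The arithmetic behind this is straightforward: 256 shades per channel, combined across three channels, gives millions of possible colors. A small sketch:

```python
# Each 8-bit channel holds 256 shades; the three RGB channels combine
# so the total number of representable colors is 256 cubed.
shades_per_channel = 2 ** 8            # 256
total_colors = shades_per_channel ** 3
print(f"{total_colors:,}")             # 16,777,216 possible RGB colors
```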
Even so, if you edit a JPEG photo to shift the color balance or change the contrast, you can start to see these tones separating visually. This can show up as digital artefacts like ‘banding’ effects or ‘posterisation’, particularly in skies or other areas of even tone.
The only way round this is to capture images with a higher bit depth, as these are far less likely to show any tonal separation, even with heavy manipulation.
And the only way to do this is by shooting RAW files instead of JPEGs.
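Banding can be sketched numerically. In this simplified illustration (the specific values are invented for the example), a flat area of sky occupies 56 adjacent 8-bit levels; a 4× contrast stretch keeps 56 distinct levels but spaces them 4 apart, so what was a smooth gradient becomes visible steps:

```python
# Simplified banding sketch: stretch a narrow band of 8-bit sky tones
# (levels 100-155) by a factor of 4 to increase contrast.
levels_in = range(100, 156)                          # 56 adjacent levels
stretched = sorted({min(255, (v - 100) * 4) for v in levels_in})

print(len(stretched))                 # still only 56 distinct levels...
print(stretched[1] - stretched[0])    # ...but now 4 apart: visible bands
```

The number of distinct tones hasn’t increased, but the gaps between them have, which is exactly what shows up as posterisation in areas of even tone.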
RAW files and bit depth
RAW files are captured at a much higher bit depth than JPEGs and therefore have much subtler tonal information. They contain the data captured by the sensor in an unprocessed state, so you do need to use RAW processing software to produce editable images.
Some older cameras captured 10-bit RAW files. This 2-bit advantage over JPEGs is still worth having, but is low by today’s standards. Now, even cheaper DSLRs and mirrorless cameras will capture 12-bit RAW files, which is enough to give a big step up in tonal subtlety and editing potential compared to JPEGs. More advanced cameras will capture 14-bit RAW files, which are better still. A few medium format cameras can even capture 16-bit RAW files.
Some photo editing programs will let you work with RAW files directly, but with others, including Photoshop, Affinity Photo and various plug-ins, you will need to process the RAW file into an editable image first.
You can create a JPEG from a RAW file, but then you’re back where you started. A better alternative is to produce a 16-bit TIFF image. This will be a much larger file, but far better for image editing. RAW processing software produces 16-bit TIFF images by ‘upsampling’ the RAW file, whether it’s a 12-bit or a 14-bit file. Even with this upsampling process, a 16-bit TIFF file will be a much better starting point for image manipulation than an 8-bit JPEG.
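One common way this upsampling is done (a simplified sketch, not any particular program’s implementation) is a straightforward bit shift: the 12-bit value is scaled up to occupy the 16-bit range, but no new tonal information is created:

```python
# Sketch: 'upsampling' a 12-bit RAW value into a 16-bit container.
# A left shift by 4 scales the value up (multiplies by 16), but the
# original 4,096 distinct levels are all the file ever contains.
raw_12bit = 2048                # mid-grey in the 12-bit range (0-4095)
tiff_16bit = raw_12bit << 4     # 32768: mid-grey in the 16-bit range (0-65535)
print(tiff_16bit)
```

This is why a 16-bit TIFF made from a 12-bit RAW file doesn’t contain 65,536 genuinely distinct tones per channel, yet still preserves far more tonal data than an 8-bit JPEG would.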
Bit depth versus dynamic range
There seems to be a lot of confusion around this. Some technical authors equate the bit depth of a camera’s RAW capture or processing pipeline with the range of tones – the dynamic range – that the sensor can capture, so that an increase of 1 bit in the capture system will yield an increase of 1EV in the dynamic range. This seems to be based on a spurious correlation between bit depth and dynamic range and a misunderstanding of how digital capture works.
Sensors do not capture data in bits. They capture analog light values. The camera’s A/D (analog-to-digital) converter turns these into digital values according to the camera maker’s own understanding of how to optimise the sensor’s performance. The bit depth it uses is not an indication of the range of tones the camera has captured, but the resolution (in data bit depth terms) used to distinguish between those tones. An 8-bit image can have the same dynamic range as a 16-bit image; the 16-bit image simply uses more data to describe the tones within that dynamic range more accurately.
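The distinction between range and resolution can be shown with a toy calculation. Here the captured light range is normalised to 0.0–1.0 purely for illustration; both encodings span the identical range, but the 16-bit one slices it into far finer steps:

```python
# Same dynamic range (0.0-1.0, an illustrative normalisation), different
# precision: the step between adjacent tones is what bit depth controls.
step_8bit = 1.0 / (2 ** 8 - 1)     # ~0.0039 between adjacent tones
step_16bit = 1.0 / (2 ** 16 - 1)   # ~0.000015 between adjacent tones

print(step_8bit / step_16bit)      # 16-bit steps are 257x finer,
                                   # over exactly the same range
```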
There are a few exceptions when it comes to image editing.
- RAW files often have additional highlight and shadow information that’s not used in the default in-camera processing but can be brought out later. This is due in part to the extra bit depth in a RAW file, or at least its capacity to store more tonal data.
- A 16-bit image can appear to have more dynamic range than an 8-bit image because extreme shadow and highlight details respond better to editing. In fact, they respond better because of the extra bit depth and tonal information, not because there is more inherent dynamic range.
- HDR software will typically produce 32-bit merged images with a huge range of brightness values available for editing and manipulation. In this instance the extra bit depth IS used for extended dynamic range, though it’s still not necessarily a linear relationship. In a 32-bit HDR image, the dynamic range is dependent on the range of tones captured by the camera, not the bit depth of the file format.