
Analogue to digital conversion

Because computer hardware is inherently digital, it cannot represent analogue signals with absolute fidelity. Before an analogue signal can be processed, it must first be converted to a digital approximation. Some data is lost in this process, but usually the approximation is sufficiently accurate that the small discrepancies are negligible. Microprocessor boards typically have a dedicated component called an analogue-to-digital converter, or ADC, to do this job.

Sampling

The first simplification that occurs in the conversion process depends on the rate at which measurements are made. This is usually referred to as the sampling frequency. Figure 2 illustrates an analogue signal sampled at 1Hz (one measurement per second).


Figure 2: Sampling

Anything that happens between the sampling times is lost. For example, the peak of the signal occurs between timesteps 5 and 6 but does not coincide with any sampled point. The highest value in the digitised signal will therefore be slightly lower than the actual maximum value. The greater the sampling rate, the more closely the digital approximation will match the real analogue signal, but the fit will never be perfect. The scale of the error is illustrated in Figure 3, where the blue curve represents the actual analogue signal and the orange curve is the digital approximation. The error is represented by the difference between the two curves. The figure shows that the digital approximation overestimates the actual value when the signal is falling and underestimates it when the signal is rising.


Figure 3: Errors due to sampling
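
To make the sampling step concrete, here is a minimal Python sketch; the sine-wave signal, its 0.1Hz frequency, and the 10-second window are illustrative assumptions rather than the exact signal plotted in the figures:

```python
import math

def analogue_signal(t: float) -> float:
    """Stand-in for the continuous analogue input (a 0.1 Hz sine wave)."""
    return math.sin(2 * math.pi * 0.1 * t)

sampling_frequency = 1.0               # Hz: one measurement per second
period = 1.0 / sampling_frequency      # seconds between samples

# Sample the signal at t = 0, 1, 2, ..., 10 seconds.
samples = [analogue_signal(n * period) for n in range(11)]

# The true peak of the sine wave is 1.0, but it falls between sample
# instants, so the sampled maximum comes out slightly lower (~0.951).
print(f"true peak: 1.0, sampled peak: {max(samples):.3f}")
```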

Quantisation and resolution

Quantisation is the process of fitting a measurement of an analogue signal to the closest available digital representation. The conversion process effectively assigns each measured point to a value band whose size, or quantisation interval, is determined by the digital hardware. Resolution is the number of bits (\(N\)) in the binary word that represents the input value. Consider a sensor that outputs an analogue voltage in response to changes in some physical quantity. If the full-scale (FS) input signal range is \(V_{REF}\), then the quantisation interval, \(q\), is given by:

\[q = \frac{V_{REF}}{2^N - 1}\]

where \(N\) is the resolution, or wordlength, in bits.
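
As a quick sketch, the interval can be computed directly; this helper function is illustrative rather than part of any ADC library:

```python
def quantisation_interval(v_ref: float, n_bits: int) -> float:
    """Quantisation interval q = V_REF / (2^N - 1), in volts."""
    return v_ref / (2 ** n_bits - 1)

# The 10-bit, 1.023 V case worked through below:
print(quantisation_interval(v_ref=1.023, n_bits=10))   # 0.001, i.e. 1 mV
```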

Assuming an ADC with a 10-bit resolution (\(N=10\)) and a reference voltage \(V_{REF}=1.023V\):

\[ \eqalign{ q &= \frac{1.023}{2^{10}-1} \cr &= \frac{1.023}{1023} \cr &= 0.001V = 1mV } \]

This means that measurements will be subject to a maximum quantisation error of ±0.5mV, i.e. half the quantisation interval.

The quantisation interval can also be specified as the change in input voltage required to produce a change of 1 bit in the binary output value. In the above example, this would be 1 millivolt/bit.
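
A minimal sketch of an idealised converter, assuming the same 10-bit, \(V_{REF}=1.023V\) configuration (the function name is hypothetical); rounding the input to the nearest code keeps the reconstruction error within half a quantisation interval:

```python
def adc_code(v_in: float, v_ref: float = 1.023, n_bits: int = 10) -> int:
    """Idealised ADC: map an input voltage to the nearest N-bit code."""
    q = v_ref / (2 ** n_bits - 1)               # quantisation interval (1 mV here)
    code = round(v_in / q)                      # nearest code, so error <= q/2
    return max(0, min(code, 2 ** n_bits - 1))   # clamp to the valid code range

v_in = 0.5117                                   # volts; arbitrary test input
q = 1.023 / (2 ** 10 - 1)
reconstructed = adc_code(v_in) * q              # the voltage the code represents
print(f"code = {adc_code(v_in)}, error = {abs(v_in - reconstructed) * 1000:.2f} mV")
```

Here an input of 0.5117V maps to code 512, which represents 0.512V: an error of 0.3mV, within the ±0.5mV bound.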

The accuracy of the quantised signal is therefore determined by the number of bits of resolution. For example, a 10-bit ADC provides \(2^{10} = 1024\) quantisation levels, and a 16-bit ADC offers \(2^{16} = 65536\) quantisation levels. Taking \(V_{REF} = 1V\) again, these correspond to quantisation intervals of approximately 1mV/bit and 15.3μV/bit respectively.

Resolution is often expressed as a percentage of the full-scale (FS) range. For example, a 12-bit converter can resolve an input signal with an accuracy of 1/4096 or 0.024%, and a 16-bit converter resolves to 1/65,536 or 0.0015%. Assuming a 10-bit ADC with an input range of 1V FS, the ADC can resolve input changes down to 1V/1024, approximately 1mV, or 0.1% of FS.
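
These percentages can be checked with a short loop (illustrative only; each figure is simply \(100/2^N\)):

```python
# Resolution as a fraction of full scale for common wordlengths.
for n_bits in (10, 12, 16):
    levels = 2 ** n_bits
    print(f"{n_bits}-bit: 1/{levels} = {100 / levels:.4f}% of FS")
```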