# 3 Z-transform and Processing Sampled Data.

The z Transform and Its Most Important Properties.

In the previous chapter we looked at what happens to a signal when we reduce it to a sequence of numbers via an ADC.  We will now move on to the next step in our original system, in which we develop the algorithms the processor will use to convert the samples coming in from the ADC into the sequence we send out to the DAC.

Figure 3.1 Basic Discrete Data System Layout.

# Section A) Z Transform

In chapter 2, we showed how effective the Nyquist model was at characterizing data that is a sequence of samples, and that the Fourier Transform (FT) of that modelled data elucidated some very interesting properties.  We will now take that sampled data and employ the Laplace transform.  Equation 3.1 is the Laplace transform of the input x(t) times the stream of impulses that create the Nyquist sampled signal.

$X_s (s) = \int_{0} ^{\infty} ( x(t) * \sum_{n=-\infty} ^{\infty} {\delta (t-n T)} * e^{-s t} ) dt$                   (3.1)

We will move x(t) under the sum and then based on the sifting property of impulse functions, we replace x(t) with x(n T), since t = n T is the only place that $\delta(t-n T)$ is not zero.

$X_s (s) = \int_{0} ^{\infty} ( \sum_{n=-\infty} ^{\infty} {x(n T) * \delta (t-n T) * e^{-s t} } ) dt$                   (3.2)

As was the case with the FT, we will somewhat dispense with mathematical rigor and assume the proper conditioning on the functions and interchange the summation and integral.  Do understand, this is done not to dismiss rigor, but to aid in understanding the process.

$X_s (s) = \sum_{n=-\infty} ^{\infty} (x(n T) * \int_{0} ^{\infty} ( \delta (t-n T) * e^{-s t} ) dt)$                   (3.3)

Employing the sifting property of the $\delta$ function, the integral disappears and is replaced with the value of the integrand at t = n T.  Note also that the terms for n < 0 vanish, since those impulses fall outside the range of integration, so the sum now starts at n = 0.  The result is

$X_s (s) = \sum_{n=0} ^{\infty} ( x(n T) * e^{-s n T} )$                                                 (3.4)

Then we will substitute in $z = e^{s T}$ and remove the redundant T from x(n T), and we will have equation 3.5

$X_s (z) = \sum_{n=0} ^{\infty} ( x(n) * z^{-n} )$                                                         (3.5)

This summation equation (3.5) is known as the z transform, and much like the Laplace transform it will be our tool to understand and design discrete systems.  Since the z transform is the Laplace transform of a sampled signal, it will provide us with similar properties and results as the Laplace, but with subtle differences.
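To make the definition concrete, equation 3.5 can be evaluated numerically by truncating the sum.  A minimal Python sketch (the function name and example values are our own, for illustration):

```python
def z_transform(x, z):
    """Truncated z transform: sum of x(n) * z^(-n) over the samples given."""
    return sum(xn * z**(-n) for n, xn in enumerate(x))

# Example: x(n) = 0.5^n for n = 0..49, evaluated at z = 2.
x = [0.5**n for n in range(50)]
X = z_transform(x, 2.0)
```

Because the samples here decay geometrically, 50 terms already give an accurate value of the infinite sum.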

# Section A.2) Examples and The Form of z Transforms

To get an understanding of what the z transform represents, let’s consider a generic signal that we will use to represent sequences.  Let

$x(n) = a^n * u_s(n)$                                             (3.6)

In equation 3.6, the term $u_s(n)$ is known as a unit step sequence and is there to make the sequence zero for negative values of n.  The shape of this sequence is shown in Figure 3.2.

Figure 3.2 Shape of Generic Sequence for Different Values of a.

As can be seen in the previous graph, $a^n$ has a shape similar to the dying exponentials common with linear systems.  Thus we expect it to have a similar action.  So we will compute the z transform of this signal, which starts by inserting it into the definition of the z transform.

$X_s (z) = \sum_{n=0} ^{\infty} ( a^n * z^{-n} )$                                                         (3.7)

Then we rewrite the sum as

$X_s (z) = \sum_{n=0} ^{\infty} ( (a z^{-1} )^n )$                                                         (3.8a)

or

$X_s (z) = \sum_{n=0} ^{\infty} ( (\frac{a}{z} )^n )$                                                         (3.8b)

In eq. 3.8a and 3.8b, we are computing the sum of a geometric sequence.  Although it may not come to you automatically, the sum of a geometric sequence is known and will be rederived here.

The generic form of a sum of a geometric sequence is

$\sum_{n=0} ^{N-1} ( a^n )$                                                         (3.9)

Note it is a finite sum, but we will later have $N \rightarrow \infty$.

Written out, the sum is

$\sum_{n=0} ^{N-1} ( a^n ) = a^0 + a^1 + ...+ a^{N-2} + a^{N-1}$                                                         (3.10)

If we multiply the sum by a, we have

$a * \sum_{n=0} ^{N-1} ( a^n ) = a^1 + a^2 + ... + a^{N-1} + a^{N}$                                                         (3.11)

Now we take the difference 3.10 minus 3.11 and note that the second term in 3.10 is the same as the first term in 3.11.  This trend continues until the last term in 3.10, which matches the next to last term in 3.11.  Thus the difference is

$\sum_{n=0}^{N-1} ( a^n ) - a * \sum_{n=0} ^{N-1} ( a^n ) = a^0 - a^N$                                       (3.12)

Factoring out the sum and noting that $a^0 = 1$ we have

$(1 - a) \sum_{n=0}^{N-1} ( a^n ) = 1 - a^N$                                       (3.13)

and solving for the sum

$\sum_{n=0}^{N-1} ( a^n ) = \frac{(1 - a^N)}{(1 - a)}$                                    (3.14)

So if we have $N \rightarrow \infty$

$\sum_{n=0}^{\infty} ( a^n ) = \lim_{N \to \infty} \frac{(1 - a^N)}{(1 - a)}$                                    (3.15)

Then as $N \rightarrow \infty$, $a^N \rightarrow 0$ provided |a| < 1, and the sum approaches $\frac{1}{1 - a}$.
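This result is easy to sanity check in Python; the values of a and N below are arbitrary choices:

```python
a, N = 0.8, 20

# Finite sum versus the closed form of equation 3.14.
finite = sum(a**n for n in range(N))
closed_finite = (1 - a**N) / (1 - a)

# For |a| < 1 the sum approaches 1/(1-a) as N grows (equation 3.15).
infinite = sum(a**n for n in range(2000))
closed_infinite = 1 / (1 - a)
```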

Returning to the z transform started previously we can see that

$X_s (z) = \sum_{n=0} ^{\infty} ( (\frac{a}{z} )^n ) = \frac{1}{1-\frac{a}{z}}$                                                 (3.16)

provided $| \frac{a}{z} |$ < 1

Consider what happens if z = a, or $\frac{a}{z} = 1$: the original sum would be an infinite sum of 1’s, which diverges to infinity.  This point will be called a pole, since the function extends up to infinity there.
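Equation 3.16 also holds for complex z as long as $| \frac{a}{z} | < 1$, which a short Python check confirms (the particular a and z are arbitrary):

```python
import cmath

a = 0.9
z = 1.2 * cmath.exp(0.3j)        # a point outside the unit circle, |a/z| = 0.75
partial = sum((a / z)**n for n in range(2000))   # truncated version of eq. 3.8b
closed = 1 / (1 - a / z)                          # eq. 3.16
```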

Now it should be noted that z can be a complex number, and this has important implications for its use.  Consider if “a” were a complex number and $x(n) = a^n$ were plotted out.  It would look like the following figure.

Figure 3.3 Sequence for $a^n$ Where a is Complex.

It should be noted that the oscillatory nature of the sequence is very similar to the dying exponentials that are commonly the solution to a linear circuit’s impulse response.  In fact if we take the response of a linear system $e^{(\alpha + j \omega) t}$ and replace t with n T, we have

$e^{(\alpha + j \omega) n T } = (e^{ ( \alpha + j \omega ) T } )^n$                 (3.17)

And these are equivalent if

$a = e^{(\alpha + j \omega) T }$                                                                          (3.18)

Recall that previously we stated that |a| < 1 for the z transform to converge.  This will relate to the stability of the system.  Consider the absolute value of 3.18

$| a | = | e^{(\alpha + j \omega) T} | = | e^{\alpha T} * e^{j \omega T} |$                 (3.19)

$= | e^{\alpha T} | * | e^{j \omega T} |$                                                                          (3.20)

Now since $| e^{j \omega T} | = 1$ we can see that

$| a | = | e^{\alpha T} |$                                          (3.21)

And as long as $\alpha$ is negative, we will have a dying exponential and |a| < 1.
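Equations 3.18 and 3.21 can be verified directly; in the Python sketch below (with arbitrarily chosen $\alpha$, $\omega$, and T), a negative $\alpha$ yields |a| < 1 and a dying sequence:

```python
import cmath, math

alpha = -200.0                # 1/s, negative for a decaying response
omega = 2 * math.pi * 1000    # rad/s
T = 1e-4                      # sample period in seconds

a = cmath.exp((alpha + 1j * omega) * T)   # equation 3.18
x = [a**n for n in range(100)]            # the sampled sequence a^n

# |a| depends only on alpha (equation 3.21), so the sequence magnitude decays.
```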

For this reason, the unit circle in the complex plane will play an important role in much of the analysis of the z transform.  The following video is meant as a visual or intuitive demonstration of this principle.

In the next section we will explore more about how to visualize and apply z transforms.

# Section B) The z Transform of a Delayed Sequence.

We begin by considering the effect of delaying the sampled sequence by one sample.  We will first note that the z transform is a summation and looks like

$\sum_{n=0} ^{\infty} ( x(n) * z^{-n} ) = x(0) + x(1) z^{-1} + x(2) z^{-2} + x(3) z^{-3} + ...$           (3.22)

Note the … indicates that the sequence continues on in the manner shown.  The z transform of the delayed version, x(n-1), is then

$\hat X(z) = \sum_{n=0} ^{\infty} x(n-1) z^{-n}$   (3.23)

Writing out the summation we would have

$\hat X(z) = x(-1) + x(0) z^{-1} + x(1) z^{-2} + x(2) z^{-3} + ...$   (3.24)

Next we separate out x(-1) and factor a single $z^{-1}$ out of the remainder.

$\hat X(z) = x(-1) + z^{-1} ( x(0) + x(1) z^{-1} + x(2) z^{-2} + ... )$         (3.25)

We can now observe that the sum on the right, in parentheses, is the z transform of x(n), or X(z), and thus

$\hat X(z) = x(-1) + z^{-1} X(z)$                                                 (3.26)

This property is analogous to the Laplace transform of the derivative of a function, and points to the Difference Equations (DE) that we will use to process discrete data.
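The delay property can be confirmed on a short finite sequence, where the truncated transforms match equation 3.26 exactly; a Python sketch (names our own):

```python
def ztrans(seq, z):
    # Truncated z transform of a sequence starting at n = 0.
    return sum(s * z**(-n) for n, s in enumerate(seq))

z = 1.5
x = [3.0, 1.0, 4.0, 1.0]   # x(0)..x(3)
x_m1 = 2.0                 # the initial condition x(-1)

xd = [x_m1] + x            # the delayed sequence x(n-1), for n = 0..4
lhs = ztrans(xd, z)              # z transform of the delayed sequence
rhs = x_m1 + ztrans(x, z) / z    # x(-1) + z^-1 X(z), equation 3.26
```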

# Section C) z Transform to Frequency Response

One of the primary approaches used to analyze systems and data is the frequency response.  So how might we go from the z transform to a frequency response?  We start by looking at a sampled cosine; applying the Euler identity for a cosine we have

$x(n) = cos( \omega_o n T) = \frac{1}{2} ( e^{j \omega_o n T} + e^{- j \omega_o n T})$       (3.27)

Factoring 3.27 we have

$x(n) = \frac{1}{2} ( (e^{j \omega_o T} )^n + (e^{- j \omega_o T})^n )$                       (3.28)

Looking closely at 3.28, we can see that it has the form of two sequences of the form $a^n$, where $a = e^{\pm j \omega_o T}$.  Thus the z transform of the signal would be

$X(z) = \frac{1}{2} ( \frac{1}{1 - z^{-1} e^{j \omega_o T}} + \frac{1}{1 - z^{-1} e^{- j \omega_o T}} )$            (3.29)

Thus the z transform would have two poles, at $z = e^{\pm j \omega_o T}$.  To make this more understandable, we convert to frequencies by $\omega_o = 2 \pi f_o$ and $T = \frac{1}{f_s}$, which gives

$X(z) = \frac{1}{2} ( \frac{1}{1 - z^{-1} e^{j 2 \pi \frac{f_o}{f_s}}} + \frac{1}{1 - z^{-1} e^{- j 2 \pi \frac{f_o}{f_s}}} )$            (3.30)

Another way to view this is to plot the location of the poles (z values where X(z) = $\infty$), which is at $z = e^{\pm j 2 \pi \frac{f_o}{f_s}}$, as the following video will display.

The prime thing to take away from this section is that the z transform evaluated on the unit circle represents the frequency response of the system.
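This can be checked numerically: evaluating the X(z) of equation 3.30 at points on the unit circle, the magnitude is large near the pole frequency $f_o$ and modest away from it.  A Python sketch (the particular frequencies are arbitrary):

```python
import cmath, math

fo, fs = 1000.0, 8000.0   # tone frequency and sampling frequency in Hz

def X(z):
    # Equation 3.30: z transform of the sampled cosine, poles at e^(+/- j 2 pi fo/fs)
    p = cmath.exp(2j * math.pi * fo / fs)
    return 0.5 * (1 / (1 - p / z) + 1 / (1 - p.conjugate() / z))

def on_circle(f):
    # The unit-circle point corresponding to frequency f
    return cmath.exp(2j * math.pi * f / fs)

near_pole = abs(X(on_circle(1001.0)))   # just next to fo
far_away = abs(X(on_circle(3000.0)))    # well away from fo
```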

# Section D.1) The z Transform of a Difference Equation

A difference equation will be the primary form used to process discrete data.  A basic, second-order example is shown here

$y(n) = b_0 x(n) + b_1 x(n-1) + b_2 x(n-2) - a_1 y(n-1) - a_2 y(n-2)$                      (3.27)

The basic structure of this equation is that we create an output sequence, y(n), by computing a weighted average of current and past inputs (x(n), x(n-1) & x(n-2)) and past outputs (y(n-1) & y(n-2)).

If we take the z transform of 3.27 we would have

$Y(z) = b_0 X(z) + b_1(x(-1) + z^{-1} X(z)) + b_2 (x(-2) + z^{-1} x(-1) + z^{-2} X(z)) - a_1 (y(-1) + z^{-1} Y(z)) - a_2 (y(-2) + z^{-1} y(-1) + z^{-2} Y(z))$                                                     (3.28)

It should be noted that we have applied the property in 3.26 recursively in this equation to find the z transform of x(n-2), as in

$Z\{ x(n-2) \} = x(-2) + z^{-1} Z\{ x(n-1)\} = x(-2) + z^{-1} ( x(-1) + z^{-1} X(z)) = x(-2) + z^{-1} x(-1) + z^{-2} X(z)$             (3.29)

where Z{x(n)} is the z transform of x(n) or X(z).

Since the primary goal is to solve for the output, y(n), we will now solve 3.28 by first moving all the Y(z)’s to the Left Hand Side (LHS) and reordering with like terms on the Right Hand Side (RHS).

$Y(z) + a_1 z^{-1} Y(z) + a_2 z^{-2} Y(z) = b_0 X(z) + b_1 z^{-1} X(z) + b_2 z^{-2} X(z) + (b_1 + b_2 z^{-1}) x(-1) + b_2 x(-2 ) - (a_1 + a_2 z^{-1}) y(-1) - a_2 y(-2)$                                 (3.30)

Factoring out the Y(z) on the left hand side, and taking a similar approach for X(z), we have

$Y(z) (1 + a_1 z^{-1} + a_2 z^{-2}) = X(z) ( b_0 + b_1 z^{-1}+ b_2 z^{-2} ) + (b_1 + b_2 z^{-1}) x(-1) + b_2 x(-2 ) - (a_1 + a_2 z^{-1}) y(-1) - a_2 y(-2)$       (3.31)

Dividing both sides by $(1 + a_1 z^{-1} + a_2 z^{-2})$ yields

$Y(z) = X(z) \frac{( b_0 + b_1 z^{-1}+ b_2 z^{-2} )}{(1 + a_1 z^{-1} + a_2 z^{-2})} + x(-1) \frac{(b_1 + b_2z^{-1})}{(1 + a_1 z^{-1} + a_2 z^{-2})} + x(-2 ) \frac{b_2}{(1 + a_1 z^{-1} + a_2 z^{-2})} - y(-1) \frac{(a_1 + a_2 z^{-1})}{(1 + a_1 z^{-1} + a_2 z^{-2})} - y(-2) \frac{a_2}{(1 + a_1 z^{-1} + a_2 z^{-2})}$                                (3.32)

Now if we were to insert X(z) (based on our input) and values for x(-1), x(-2), y(-1) and y(-2), we could merge all the terms on the RHS into a rational function that can be inverse transformed to the sequence y(n).  This solution is, however, little more than an academic exercise, and we will walk through it in a later example.  A more common application is to use the Y(z) to X(z) relationship to understand the effect of the difference equation.  To describe this relationship, it is common to set the initial conditions to zero.  This results in the classic form

$Y(z) = X(z) \frac{( b_0 + b_1 z^{-1}+ b_2 z^{-2} )}{(1 + a_1 z^{-1} + a_2 z^{-2})}$                                (3.33)

or

$Y(z) = X(z) H(z)$                                                                                      (3.34)

where H(z) is known as the transfer function.

# Section D.2) Example of difference equation using z transforms.

Consider the DE below

$y(n) = 0.25 x(n) + 0.5 x(n-1) + 0.25 x(n-2) + y(n-1) - 0.5 y(n-2)$                                           (3.35)

Taking the z transform of this DE and factoring it similarly to what was done previously, we have

$Y(z) = X(z) \frac{( 0.25 + 0.5 z^{-1}+ 0.25 z^{-2} )}{(1 - z^{-1} + 0.5 z^{-2})}$                                (3.36)

or

$H(z) = \frac{( 0.25 + 0.5 z^{-1}+ 0.25 z^{-2} )}{(1 - z^{-1} + 0.5 z^{-2})}$                                (3.37)

As an added point, note the change in sign in the denominator between 3.35 and 3.36/3.37.
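These results can be cross-checked numerically; the Python sketch below (helper names are ours) evaluates H(z) from equation 3.37 at specific points on the unit circle and also runs the difference equation 3.35 directly:

```python
b = [0.25, 0.5, 0.25]   # numerator coefficients from eq. 3.37
a = [1.0, -1.0, 0.5]    # denominator coefficients (note the sign flip vs. eq. 3.35)

def H(z):
    num = sum(bk * z**(-k) for k, bk in enumerate(b))
    den = sum(ak * z**(-k) for k, ak in enumerate(a))
    return num / den

dc_gain = H(1.0)     # z = 1 corresponds to f = 0
nyq_gain = H(-1.0)   # z = -1 corresponds to f = fs/2

# Running eq. 3.35 on a constant (DC) input; the output should settle to dc_gain.
x, y = [1.0] * 200, []
for n in range(200):
    y.append(0.25 * x[n]
             + 0.5 * (x[n-1] if n >= 1 else 0.0)
             + 0.25 * (x[n-2] if n >= 2 else 0.0)
             + (y[n-1] if n >= 1 else 0.0)
             - 0.5 * (y[n-2] if n >= 2 else 0.0))
```

The gain of 2 at z = 1 (DC) and the null at z = -1 ($f_s/2$) indicate a lowpass-style response, and the simulated output does settle to the DC gain.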

The following video will show how we can visualize and interpret the z-transform transfer function and its effect on a signal.

MATLAB Code from Video

```matlab
close all;
figure(1)
% show pole zero plot.
zplane( [0.25 0.5 0.25], [1 -1 0.5] );
xlabel( 'Real' );
ylabel( 'Imaginary' );
% Set up z plane as real and imaginary parts of z
[x,y] = meshgrid( -1.25:0.05:1.25, -1.25:0.05:1.25 );
z = x + 1i*y;
% Compute H(z) with anything outside of unit circle set to zero.
% (unit_c is a helper from the video that zeroes points outside the unit circle.)
Hz = min( ...
  abs( unit_c( 0.25*( z.*z + 2*z + 1 ) ./ ( z.*z - z + 0.5 ), z ) ...
  ), ... % end of absolute value (magnitude)
  15 ); % end of clipping maximum at 15
% show surface
figure(2);
mesh( -1.25:0.05:1.25, -1.25:0.05:1.25, Hz );
view( [65 45] );
xlabel( 'Real' );
ylabel( 'Imaginary' );
zlabel( 'Magnitude' );
title( 'H(z) Surface with Unit Circle Emphasized' );
% Compute the frequency response of H(z) from equation.
w = [0:pi/512:pi];
ejw = exp( 1i * w );
Hejw = 0.25*( ejw.*ejw + 2*ejw + 1 ) ./ ( ejw.*ejw - ejw + 0.5 );
% plot results.
figure(3);
fs = 10e3; % Set sampling frequency to 10 kHz.
subplot( 211 ),plot( fs*w/(2*pi), abs( Hejw ) );
title( 'Magnitude Response of H(z)' );
xlabel( 'Frequency in Hertz' );
ylabel( 'Magnitude Response (Gain)' );
xlim( [0 fs/2] ),grid;
subplot( 212 ),plot( fs*w/(2*pi), angle( Hejw ) );
title( 'Phase Response of H(z)' );
xlabel( 'Frequency in Hertz' );
xlim( [0 fs/2] ),grid;
```

# Section E.1) Graphical Representation of Difference Equations

In this section we will discuss a graphical representation of a difference equation (DE).  Although at first this may appear not to be that important, as we develop our understanding of DE’s we will find this representation is quite powerful.  In Figure 3.4 we can see the basic building blocks for describing a DE.

Figure 3.4 Basic Building Blocks for Graphically Representing Difference Equations.

Rather than walking through some type of generic process for developing these graphical descriptions, commonly called Signal Flow Diagrams (SFD), an example is more instructive.  For this we will implement the following simple first-order DE.

$y(n) = x(n) + x(n-1) + 0.75 y(n-1)$                                           (3.38)

We start with a delay of the input x(n) as shown in Figure 3.5.

Figure 3.5 First Part of DE

Conceptually we can look at the $z^{-1}$ element as a register that holds the value of x(n) and thus delays it by one sample.  We can now take the two versions of the input we have (x(n) and x(n-1)) and add them together as in Figure 3.6.

Figure 3.6 Implementation of The “Feed Forward” Portion of The DE.

Again the summation element is simply an adder that takes the current value on each input line and adds them together.  Also it should be noted that this portion of the system is called “Feed Forward”, since it only uses the inputs, which are fed forward from the input towards the output.

We now add in the feedback portion of the DE by delaying the output y(n), scaling it by 0.75, and adding it to the feed forward portion from Figure 3.6.  The result is shown in Figure 3.7.

Figure 3.7 Full Implementation of Difference Equation.

At this point we have a SFD of the DE in equation 3.38.  It is important to note that the equations inserted and circled in gray are meant to clarify the process and are not commonly added to the signal graph.
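The SFD in Figure 3.7 maps almost line for line onto code: each $z^{-1}$ element becomes a stored variable that is updated once per sample.  A Python sketch of equation 3.38 (the function and state names are our own):

```python
def make_de():
    state = {"x1": 0.0, "y1": 0.0}   # the two z^-1 registers in Figure 3.7
    def step(x):
        y = x + state["x1"] + 0.75 * state["y1"]   # equation 3.38
        state["x1"] = x   # registers capture the current values...
        state["y1"] = y   # ...so they appear delayed on the next call
        return y
    return step

step = make_de()
impulse_response = [step(v) for v in [1.0, 0.0, 0.0, 0.0]]
# -> [1.0, 1.75, 1.3125, 0.984375]
```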

# Section E.2) Example of How A Signal Flow Graph Can Be Used.

Consider the DE in equation 3.27 (replicated here).  This second order DE will be a standard building block for many of the filters we design.

$y(n) = b_0 x(n) + b_1 x(n-1) + b_2 x(n-2) - a_1 y(n-1) - a_2 y(n-2)$                      (3.27)

Following the process used for the first-order DE above, we can create the SFD shown in Figure 3.8.

Figure 3.8 Second Order DE As a Signal Flow Diagram

This form of implementation is known as a Direct Form I, as it follows the DE in a “direct” fashion.  Now since the feed forward and feedback sections are independent, they can be reordered as shown in Figure 3.9.

Figure 3.9 Signal Flow Graph With Sections Reordered.

Now this form is not that different from the Direct Form I.  However, since the delay elements are actually delaying the same signal, they can be merged as shown in Figure 3.10.

Figure 3.10 Signal Flow Graph With Delay Elements Merged.

This has the advantage that only two delay elements (or memory locations) are used; this structure is known as a Direct Form II.  Although that is of some help, there are other changes that can be applied to a SFD.  We will not work through these techniques here, but specialists in system theory have developed what is known as a transposed form for an SFD.

The rules for converting to a transposed form are rather esoteric and can be confusing.  Suffice it to say the input and output are switched, the directions of the components in the SFD are all reversed, and junctions are replaced with summations and vice versa.  The result is shown in Figure 3.11.

Figure 3.11 Transposed Signal Flow Diagram.

Now this form is actually helpful when implementing the system.  However, in order to validate that it implements the second-order DE we will walk through the various parts of the process.  These are laid out in Figure 3.12.

In 3.12, we can see that we start at the bottom: x(n) and y(n) are scaled by $a_2$ & $b_2$ and then added together (Eq 1).  That sum is then delayed through the bottom $z^{-1}$ component (Eq 2).  The next level up will take x(n) and y(n), scaled by $a_1$ & $b_1$, and add them to the previous sum (Eq 3).  That sum is then delayed through the upper $z^{-1}$ component (Eq 4).  Finally, x(n) is multiplied by $b_0$ and added in, arriving at the final equation (Eq 5), which is the DE we were working to implement.

Figure 3.12 Transposed Signal Flow Graph With Intermediate Signals Annotated.

# Section E.3) Effect On Code of The Direct and Transposed Forms.

Having described and validated these implementation forms for the DE’s, the question is “How does this help us?”  To address that, we present two versions of code that implement a second order system: one using the Direct Form I and one using the Transposed Direct Form II.

The first box here shows the code for the Direct Form I.  The basic structure of the code is a C function that is called as each sample ( x ) comes in.  It then computes the next output ( y ), based on the current input and the delayed copies of the input and output.  The most notable part of this code is that the output y is computed with 5 Multiply and ADD’s (MADD’s), then four lines of code are used to implement the delays needed in preparation for the next call of the function.  This extra step is one of the challenges with this form of implementation.  As a matter of reference, the coefficients set for the filter are those used in Section D.2.

A similar function is written to implement the Direct Form II Transposed form.  First off, only two floating-point variables are needed (Direct Form II).  However, if we look at the implementation, we see that the first step computes the output y using one MADD along with an addition, then the next line uses two MADD’s and an addition to update IIR2_y.  Similarly, two MADD’s update the other delay variable, in a sense doing the calculations and updating the delay elements in the same process.  Of these two approaches, the transposed form is often considered the preferred method for implementing a second order system.
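As a rough Python sketch of the two structures (the function and variable names are our own; the coefficients are those from Section D.2), both forms produce matching outputs:

```python
b0, b1, b2 = 0.25, 0.5, 0.25   # feed forward coefficients (Section D.2)
a1, a2 = -1.0, 0.5             # feedback coefficients, denominator sign convention

def direct_form_1(x):
    # Four delay variables; they are shuffled in a separate step each sample.
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for xn in x:
        yn = b0*xn + b1*x1 + b2*x2 - a1*y1 - a2*y2   # 5 MADDs
        x2, x1 = x1, xn   # delay updates in preparation for the next sample
        y2, y1 = y1, yn
        out.append(yn)
    return out

def direct_form_2_transposed(x):
    # Only two delay variables; updating them is part of the computation.
    s1 = s2 = 0.0
    out = []
    for xn in x:
        yn = b0*xn + s1            # one MADD plus the stored sum
        s1 = b1*xn - a1*yn + s2    # two MADDs update the first register
        s2 = b2*xn - a2*yn         # two MADDs update the second register
        out.append(yn)
    return out

sig = [1.0] + [0.0] * 19   # an impulse
df1_out = direct_form_1(sig)
df2t_out = direct_form_2_transposed(sig)
```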

# Section F) Conclusion

In this chapter we have established the format and uses of the z-transform, showing various ways we can visualize and interpret the z-transform.  Perhaps the most important feature is how it relates to the DE, our primary way of processing data, and its relationship to the frequency response of a DE.

In the next chapter we will be looking at ways we can design a filter, using two basic forms.  The first is the Finite Impulse Response (FIR) filter, or a feed forward filter, and the second uses feedback and has an Infinite Impulse Response (IIR).