
Signal Processing & Control Systems Guide: From Fourier Transform to PID Control


1. Signals and Systems Fundamentals

1.1 Signal Classification

A signal is a physical quantity that carries information. The primary classifications are:

Continuous-Time vs Discrete-Time Signals

Continuous-time (CT) signals are defined for all $t \in \mathbb{R}$. Example: an analog audio signal $x(t) = A\cos(\omega_0 t + \phi)$

Discrete-time (DT) signals are defined only at integer indices $n \in \mathbb{Z}$. Example: a digital audio sample sequence $x[n]$

Energy and Power Signals

Signal energy $E$ and average power $P$:

$$E = \int_{-\infty}^{\infty} |x(t)|^2 \, dt$$

$$P = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} |x(t)|^2 \, dt$$

If $E < \infty$, the signal is an energy signal. If $0 < P < \infty$, it is a power signal.
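These definitions can be checked numerically. A minimal sketch: $e^{-t}u(t)$ is an energy signal (analytic energy $1/2$), while $\cos(t)$ has infinite energy but finite average power (analytic power $1/2$).

```python
import numpy as np

# Energy E = ∫|x(t)|² dt for x(t) = e^{-t} u(t); analytic value 0.5
dt = 1e-4
t = np.arange(0, 20, dt)
x = np.exp(-t)
E = np.sum(np.abs(x) ** 2) * dt

# Average power of x(t) = cos(t) over a long window; analytic value 0.5
T = 1000.0
t2 = np.arange(0, T, dt)
P = np.sum(np.cos(t2) ** 2) * dt / T

print(f"Energy of e^-t u(t): {E:.4f}")   # ≈ 0.5
print(f"Power of cos(t):     {P:.4f}")   # ≈ 0.5
```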

1.2 Elementary Signals

Unit Impulse

$$\delta(t) = \begin{cases} \infty & t = 0 \\ 0 & t \neq 0 \end{cases}, \quad \int_{-\infty}^{\infty} \delta(t) \, dt = 1$$

Sifting property: $\int_{-\infty}^{\infty} x(t)\delta(t - t_0) \, dt = x(t_0)$

Unit Step

$$u(t) = \begin{cases} 1 & t \geq 0 \\ 0 & t < 0 \end{cases}$$

The relationship $u(t) = \int_{-\infty}^{t} \delta(\tau) \, d\tau$ holds.
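The discrete-time analogues are easy to verify directly: the unit step $u[n]$ is the running sum of the unit impulse $\delta[n]$, and the sifting property becomes $\sum_n x[n]\,\delta[n-n_0] = x[n_0]$. A minimal sketch:

```python
import numpy as np

n = np.arange(-5, 6)
delta = (n == 0).astype(float)   # δ[n]: 1 only at n = 0
u = np.cumsum(delta)             # u[n] = Σ_{k ≤ n} δ[k]: 0 for n < 0, 1 for n ≥ 0

print(u)

# Discrete sifting property: Σ x[n] δ[n - n0] = x[n0]
x = n ** 2
n0 = 3
shifted = (n == n0).astype(float)
print(np.sum(x * shifted))       # equals x[3] = 9
```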

1.3 Linear Time-Invariant (LTI) Systems

An LTI system satisfies two core properties:

  • Linearity: $\mathcal{T}\{ax_1(t) + bx_2(t)\} = a\mathcal{T}\{x_1(t)\} + b\mathcal{T}\{x_2(t)\}$
  • Time Invariance: $y(t) = \mathcal{T}\{x(t)\} \Rightarrow \mathcal{T}\{x(t-t_0)\} = y(t-t_0)$

1.4 Convolution

The output of an LTI system is the convolution of the input and the impulse response.

$$y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} x(\tau) h(t - \tau) \, d\tau$$

For discrete-time:

$$y[n] = x[n] * h[n] = \sum_{k=-\infty}^{\infty} x[k] h[n - k]$$

Convolution satisfies the commutative, associative, and distributive laws.
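The discrete convolution sum can be evaluated with `np.convolve` and cross-checked against a direct double loop; commutativity falls out for free:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
h = np.array([1.0, 1.0, 1.0])        # 3-point moving-sum impulse response

y = np.convolve(x, h)                # length len(x) + len(h) - 1 = 5

# Direct evaluation of y[n] = Σ_k x[k] h[n-k]
y_direct = np.zeros(len(x) + len(h) - 1)
for n in range(len(y_direct)):
    for k in range(len(x)):
        if 0 <= n - k < len(h):
            y_direct[n] += x[k] * h[n - k]

print(y)                             # [1. 3. 6. 5. 3.]
print(np.allclose(y, y_direct))      # True
print(np.allclose(np.convolve(x, h), np.convolve(h, x)))  # commutative
```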


2. Fourier Series

2.1 Fourier Series of Periodic Signals

A periodic signal with period $T_0$ can be expressed as a sum of complex exponentials with fundamental frequency $\omega_0 = 2\pi/T_0$:

$$x(t) = \sum_{k=-\infty}^{\infty} c_k e^{jk\omega_0 t}$$

Fourier coefficients:

$$c_k = \frac{1}{T_0} \int_{T_0} x(t) e^{-jk\omega_0 t} \, dt$$

2.2 Square Wave Example

For a square wave with amplitude A, period T, and 50% duty cycle:

$$c_k = \begin{cases} A/2 & k = 0 \\ \frac{A}{k\pi} \sin\left(\frac{k\pi}{2}\right) & k \neq 0 \end{cases}$$

Only odd harmonics are present, and amplitude decreases with increasing harmonic order. The Gibbs phenomenon appears near discontinuities.
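A minimal partial-sum reconstruction, assuming $A = 1$ and using the coefficients above, shows both the convergence and the Gibbs overshoot (which settles near 9% of the jump no matter how many harmonics are added):

```python
import numpy as np

# Partial Fourier series x_N(t) = Σ_{|k| ≤ N} c_k e^{jkω0 t} for the square
# wave with A = 1, T0 = 1, 50% duty cycle (c_k real and even in k here).
A, T0 = 1.0, 1.0
w0 = 2 * np.pi / T0
t = np.linspace(0, T0, 1000, endpoint=False)

def partial_sum(N):
    xN = np.full_like(t, A / 2, dtype=complex)        # c_0 = A/2
    for k in range(1, N + 1):
        ck = A / (k * np.pi) * np.sin(k * np.pi / 2)  # zero for even k
        xN += ck * (np.exp(1j * k * w0 * t) + np.exp(-1j * k * w0 * t))
    return xN.real

# Peak value stays ≈ 1.09 near the discontinuity even as N grows (Gibbs).
for N in (1, 5, 51):
    print(N, partial_sum(N).max())
```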

2.3 Parseval's Theorem

$$\frac{1}{T_0} \int_{T_0} |x(t)|^2 \, dt = \sum_{k=-\infty}^{\infty} |c_k|^2$$

The average power in the time domain equals the sum of squared coefficients in the frequency domain.


3. Fourier Transform

3.1 Continuous-Time Fourier Transform (CTFT)

The Fourier transform pair for aperiodic signals:

$$X(j\omega) = \int_{-\infty}^{\infty} x(t) e^{-j\omega t} \, dt \quad \text{(forward)}$$

$$x(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X(j\omega) e^{j\omega t} \, d\omega \quad \text{(inverse)}$$

3.2 Important Transform Pairs

| Signal | Fourier Transform |
|---|---|
| $\delta(t)$ | $1$ |
| $1$ | $2\pi\delta(\omega)$ |
| $e^{j\omega_0 t}$ | $2\pi\delta(\omega - \omega_0)$ |
| $\cos(\omega_0 t)$ | $\pi[\delta(\omega-\omega_0)+\delta(\omega+\omega_0)]$ |
| $u(t)$ | $\pi\delta(\omega) + 1/(j\omega)$ |
| $e^{-at}u(t), \ a>0$ | $1/(a+j\omega)$ |
| $\text{rect}(t/\tau)$ | $\tau \cdot \text{sinc}(\omega\tau/2)$ |

3.3 Fourier Transform Properties

  • Linearity: $ax_1(t) + bx_2(t) \leftrightarrow aX_1(j\omega) + bX_2(j\omega)$
  • Time Shift: $x(t-t_0) \leftrightarrow e^{-j\omega t_0} X(j\omega)$
  • Frequency Shift (Modulation): $e^{j\omega_0 t} x(t) \leftrightarrow X(j(\omega - \omega_0))$
  • Convolution Theorem: $x(t)*h(t) \leftrightarrow X(j\omega)H(j\omega)$
  • Multiplication Theorem: $x(t)h(t) \leftrightarrow \frac{1}{2\pi}X(j\omega)*H(j\omega)$
  • Differentiation: $\frac{dx}{dt} \leftrightarrow j\omega X(j\omega)$
  • Parseval's Theorem: $\int|x(t)|^2 \, dt = \frac{1}{2\pi}\int|X(j\omega)|^2 \, d\omega$

3.4 Computing Fourier Transforms with Python

```python
import numpy as np
from scipy.fft import fft, fftfreq, fftshift
from scipy.signal import square
import matplotlib.pyplot as plt

# Generate a square wave signal
fs = 1000       # Sampling frequency (Hz)
T = 1.0         # Signal duration (s)
t = np.linspace(0, T, int(fs * T), endpoint=False)
x = square(2 * np.pi * 10 * t)   # 10 Hz square wave

# Compute FFT
N = len(x)
X = fft(x)
freqs = fftfreq(N, 1/fs)

# Two-sided spectrum (normalized)
magnitude = np.abs(fftshift(X)) / N
freq_shifted = fftshift(freqs)

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(t[:100], x[:100])
plt.title('Square Wave (time domain)')
plt.xlabel('Time (s)')

plt.subplot(1, 2, 2)
plt.plot(freq_shifted, magnitude)
plt.xlim(-100, 100)
plt.title('Fourier Transform (frequency domain)')
plt.xlabel('Frequency (Hz)')
plt.tight_layout()
plt.show()
```

4. Discrete Fourier Transform (DFT) & FFT

4.1 Discrete-Time Fourier Transform (DTFT)

The DTFT of a discrete-time signal x[n]x[n]:

$$X(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x[n] e^{-j\omega n}$$

Inverse transform:

$$x[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(e^{j\omega}) e^{j\omega n} \, d\omega$$

The DTFT has a continuous frequency variable $\omega$ and is periodic with period $2\pi$.

4.2 DFT Definition

The N-point DFT is a uniform sampling of the DTFT at N frequencies:

$$X[k] = \sum_{n=0}^{N-1} x[n] e^{-j2\pi kn/N}, \quad k = 0, 1, \ldots, N-1$$

Inverse DFT:

$$x[n] = \frac{1}{N} \sum_{k=0}^{N-1} X[k] e^{j2\pi kn/N}, \quad n = 0, 1, \ldots, N-1$$
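The DFT definition translates directly into code. A minimal $O(N^2)$ matrix evaluation, checked against `np.fft.fft`:

```python
import numpy as np

# Direct evaluation of X[k] = Σ_n x[n] e^{-j2πkn/N} as a matrix product.
def dft(x):
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    W = np.exp(-2j * np.pi * k * n / N)   # W[k, n] = e^{-j2πkn/N}
    return W @ x

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
print(np.allclose(dft(x), np.fft.fft(x)))   # True
```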

4.3 The FFT Algorithm

Direct DFT computation has complexity $O(N^2)$, but the FFT (Cooley-Tukey algorithm) achieves $O(N \log N)$ using divide-and-conquer when $N = 2^m$.

Twiddle factor: $W_N^k = e^{-j2\pi k/N}$

4.4 Spectrum Analysis Example

```python
import numpy as np
from scipy.fft import fft, fftfreq
import matplotlib.pyplot as plt

# Generate a composite signal: 50 Hz + 120 Hz
fs = 1000           # Sampling frequency (Hz)
N = fs              # Number of samples (1 second)
t = np.linspace(0, 1, N, endpoint=False)

# Composite signal with noise
signal = (np.sin(2 * np.pi * 50 * t)
          + 0.5 * np.sin(2 * np.pi * 120 * t)
          + 0.2 * np.random.randn(N))

# Compute FFT
yf = fft(signal)
xf = fftfreq(N, 1/fs)

# Single-sided spectrum (positive frequencies only)
half = N // 2
amplitude = 2.0 / N * np.abs(yf[:half])
freq_pos = xf[:half]

plt.figure(figsize=(10, 4))
plt.plot(freq_pos, amplitude)
plt.xlabel('Frequency (Hz)')
plt.ylabel('Amplitude')
plt.title('Single-Sided Amplitude Spectrum')
plt.axvline(x=50, color='r', linestyle='--', label='50 Hz')
plt.axvline(x=120, color='g', linestyle='--', label='120 Hz')
plt.legend()
plt.grid(True)
plt.show()

# Frequency resolution = fs / N = 1 Hz
print(f"Frequency resolution: {fs/N} Hz")
print(f"Nyquist frequency: {fs/2} Hz")
```

4.5 Nyquist-Shannon Sampling Theorem

To reconstruct a signal without aliasing, the sampling frequency $f_s$ must be at least twice the maximum signal frequency $f_{max}$:

$$f_s \geq 2 f_{max} \quad \text{(Nyquist condition)}$$
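Violating this condition is easy to demonstrate: a 700 Hz sine sampled at 1000 Hz produces exactly the same samples (up to sign) as a 300 Hz sine, since $700 = 1000 - 300$ folds back below the Nyquist frequency.

```python
import numpy as np

# Aliasing: fs = 1000 Hz cannot represent 700 Hz (fs < 2·700).
fs = 1000
n = np.arange(32)
x_700 = np.sin(2 * np.pi * 700 * n / fs)
x_300 = np.sin(2 * np.pi * 300 * n / fs)

# sin(2π·700·n/1000) = sin(2πn − 2π·300·n/1000) = −sin(2π·300·n/1000)
print(np.allclose(x_700, -x_300))   # True: the two are indistinguishable
```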


5. Laplace Transform

5.1 Definition

Bilateral Laplace transform:

$$X(s) = \int_{-\infty}^{\infty} x(t) e^{-st} \, dt, \quad s = \sigma + j\omega$$

Unilateral (one-sided) Laplace transform:

$$X(s) = \int_{0^-}^{\infty} x(t) e^{-st} \, dt$$

5.2 Important Transform Pairs

| Signal $x(t)$ | Transform $X(s)$ | ROC |
|---|---|---|
| $\delta(t)$ | $1$ | entire s-plane |
| $u(t)$ | $1/s$ | $\text{Re}(s) > 0$ |
| $e^{-at}u(t)$ | $1/(s+a)$ | $\text{Re}(s) > -a$ |
| $te^{-at}u(t)$ | $1/(s+a)^2$ | $\text{Re}(s) > -a$ |
| $\sin(\omega_0 t)u(t)$ | $\omega_0/(s^2+\omega_0^2)$ | $\text{Re}(s) > 0$ |
| $\cos(\omega_0 t)u(t)$ | $s/(s^2+\omega_0^2)$ | $\text{Re}(s) > 0$ |
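One of these pairs can be sanity-checked numerically: the impulse response of the transfer function $1/(s+a)$ should equal $e^{-at}u(t)$. A minimal sketch with $a = 2$:

```python
import numpy as np
from scipy import signal

# Impulse response of H(s) = 1/(s + 2) should be e^{-2t} u(t).
a = 2.0
sys = signal.TransferFunction([1], [1, a])
t, h = signal.impulse(sys, T=np.linspace(0, 3, 300))

print(np.allclose(h, np.exp(-a * t), atol=1e-5))   # True
```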

5.3 Transfer Function

The input-output relationship in the s-domain:

$$H(s) = \frac{Y(s)}{X(s)} = \frac{b_m s^m + b_{m-1} s^{m-1} + \cdots + b_0}{a_n s^n + a_{n-1} s^{n-1} + \cdots + a_0}$$

Poles: values of $s$ where $H(s) \to \infty$ (denominator = 0)

Zeros: values of $s$ where $H(s) = 0$ (numerator = 0)

Pole locations determine system stability. A system is BIBO stable if all poles lie in the left half of the s-plane (LHP).

5.4 Standard Second-Order System

$$H(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$$

  • $\omega_n$: Natural frequency
  • $\zeta$: Damping ratio
    • $\zeta < 1$: Underdamped (oscillatory response)
    • $\zeta = 1$: Critically damped (fastest non-oscillatory response)
    • $\zeta > 1$: Overdamped (slow, non-oscillatory response)
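The three regimes are visible directly in the step response, computed here with `scipy.signal.step` for $\omega_n = 1$:

```python
import numpy as np
from scipy import signal

# Step responses of H(s) = ωn² / (s² + 2ζωn s + ωn²) per damping regime.
wn = 1.0
peaks = {}
for zeta in (0.2, 1.0, 2.0):
    sys = signal.TransferFunction([wn**2], [1, 2 * zeta * wn, wn**2])
    t, y = signal.step(sys, T=np.linspace(0, 30, 3000))
    peaks[zeta] = y.max()
    print(f"zeta = {zeta}: peak value {y.max():.3f}")
# ζ = 0.2 overshoots to ≈ 1.53 (overshoot = exp(-πζ/√(1-ζ²)) ≈ 53%);
# ζ ≥ 1 approaches the setpoint 1 with no overshoot.
```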

6. Z-Transform

6.1 Definition

The Z-transform of a discrete-time signal x[n]x[n]:

$$X(z) = \sum_{n=-\infty}^{\infty} x[n] z^{-n}, \quad z \in \mathbb{C}$$

Evaluating on the unit circle, $z = e^{j\omega}$, recovers the DTFT.

6.2 Relationship Between Z-Transform and Laplace Transform

When a continuous-time system is discretized with sampling period $T_s$:

$$z = e^{sT_s}$$

This maps the s-plane to the z-plane:

  • Imaginary axis of the s-plane ↔ unit circle in the z-plane
  • Left half of the s-plane ↔ interior of the unit circle in the z-plane

6.3 Stability Analysis

A discrete-time system is stable if and only if all poles lie inside the unit circle:

$$|z_i| < 1 \quad \text{for all poles}$$
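Both the $z = e^{sT_s}$ mapping and the stability condition can be verified by discretizing a simple stable plant. A minimal sketch using `scipy.signal.cont2discrete` with $H(s) = 1/(s+2)$:

```python
import numpy as np
from scipy import signal

# Discretize H(s) = 1/(s + 2) and check that the continuous pole s = -2
# maps to z = e^{sTs}, which lies inside the unit circle (stable).
a, Ts = 2.0, 0.1
num_d, den_d, _ = signal.cont2discrete(([1], [1, a]), dt=Ts, method='zoh')

poles_z = np.roots(den_d)
print("z-plane pole:", poles_z)              # ≈ exp(-2·0.1) ≈ 0.8187
print("exp(s·Ts)   :", np.exp(-a * Ts))
print("stable:", bool(np.all(np.abs(poles_z) < 1)))
```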

6.4 Important Z-Transform Pairs

| Signal $x[n]$ | Z-Transform $X(z)$ |
|---|---|
| $\delta[n]$ | $1$ |
| $u[n]$ | $z/(z-1)$ |
| $a^n u[n]$ | $z/(z-a)$ |
| $n \cdot a^n u[n]$ | $az/(z-a)^2$ |
| $\cos(\omega_0 n)u[n]$ | $z(z-\cos\omega_0)/(z^2-2z\cos\omega_0+1)$ |

7. Digital Filter Design

7.1 FIR vs IIR Filters

FIR (Finite Impulse Response) Filter

$$y[n] = \sum_{k=0}^{M} b_k x[n-k]$$

  • Finite-length impulse response
  • Always stable (no feedback)
  • Can achieve linear phase
  • Requires high order for sharp cutoff

IIR (Infinite Impulse Response) Filter

$$y[n] = \sum_{k=0}^{M} b_k x[n-k] - \sum_{k=1}^{N} a_k y[n-k]$$

  • Includes feedback, infinite impulse response
  • Sharp cutoff at lower order
  • Nonlinear phase (potential phase distortion)
  • Can become unstable if poorly designed
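The IIR difference equation above maps directly to code. A minimal sketch of the direct evaluation (with $a_0$ normalized), checked against `scipy.signal.lfilter`:

```python
import numpy as np
from scipy import signal

# Direct implementation of a0·y[n] = Σ b_k x[n-k] − Σ_{k≥1} a_k y[n-k]
def iir_filter(b, a, x):
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc / a[0]
    return y

b, a = signal.butter(2, 0.2)          # 2nd-order Butterworth LPF coefficients
rng = np.random.default_rng(1)
x = rng.standard_normal(100)
print(np.allclose(iir_filter(b, a, x), signal.lfilter(b, a, x)))   # True
```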

7.2 Filter Types

  • Low-Pass Filter (LPF): Passes low frequencies, attenuates high frequencies
  • High-Pass Filter (HPF): Passes high frequencies, attenuates low frequencies
  • Band-Pass Filter (BPF): Passes a specific frequency band
  • Band-Stop Filter (BSF/Notch): Attenuates a specific frequency band

7.3 Classical Filter Designs

Butterworth: Maximally flat response in the passband

Chebyshev Type I: Ripple in passband, sharper rolloff

Elliptic: Ripple in both passband and stopband, minimum order for given attenuation

7.4 Filter Design with scipy.signal

```python
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fs = 1000  # Sampling frequency (Hz)

# Butterworth low-pass filter (order 4, cutoff 100 Hz)
nyq = fs / 2
cutoff = 100 / nyq  # Normalized frequency (0 to 1)
b_butter, a_butter = signal.butter(N=4, Wn=cutoff, btype='low')

# Chebyshev Type I (order 4, 1 dB ripple, cutoff 100 Hz)
b_cheby, a_cheby = signal.cheby1(N=4, rp=1, Wn=cutoff, btype='low')

# Frequency response
w, h_butter = signal.freqz(b_butter, a_butter, fs=fs)
w, h_cheby  = signal.freqz(b_cheby,  a_cheby,  fs=fs)

plt.figure(figsize=(10, 5))
plt.subplot(1, 2, 1)
plt.plot(w, 20 * np.log10(np.abs(h_butter)), label='Butterworth')
plt.plot(w, 20 * np.log10(np.abs(h_cheby)),  label='Chebyshev I')
plt.xlabel('Frequency (Hz)')
plt.ylabel('Magnitude (dB)')
plt.title('Filter Frequency Response')
plt.legend()
plt.grid(True)
plt.ylim(-80, 5)

# Filter a noisy signal
t = np.linspace(0, 1, fs, endpoint=False)
clean  = np.sin(2 * np.pi * 50 * t)
noisy  = clean + 0.5 * np.sin(2 * np.pi * 300 * t)
filtered = signal.filtfilt(b_butter, a_butter, noisy)

plt.subplot(1, 2, 2)
plt.plot(t[:200], noisy[:200],    alpha=0.5, label='Noisy')
plt.plot(t[:200], filtered[:200],            label='Filtered')
plt.xlabel('Time (s)')
plt.legend()
plt.title('Filtering Result')
plt.tight_layout()
plt.show()
```

7.5 FIR Filter Design (Window Method)

```python
from scipy.signal import firwin, freqz

fs = 1000  # Sampling frequency (Hz)

# FIR LPF using Hamming window
# 51 taps, cutoff 100 Hz
numtaps = 51
cutoff_hz = 100
b_fir = firwin(numtaps, cutoff_hz / (fs / 2), window='hamming')

w_fir, h_fir = freqz(b_fir, [1.0], fs=fs)
print(f"FIR filter order: {numtaps - 1}")
print(f"Linear phase: True (symmetric coefficients)")
```

8. Control Systems Fundamentals

8.1 Open-Loop vs Closed-Loop Control

Open-Loop Control: The output does not feed back to affect the control action. Simple but sensitive to disturbances and model errors.

Closed-Loop Control: The output is measured and compared with the reference; the error drives the controller. Feedback provides robustness.

Basic feedback loop transfer function:

$$\frac{Y(s)}{R(s)} = \frac{G(s)}{1 + G(s)H(s)}$$

where $G(s)$ is the plant and $H(s)$ is the sensor transfer function.

8.2 Stability: Routh-Hurwitz Criterion

A system is stable if all roots of the characteristic equation $1 + G(s)H(s) = 0$ lie in the left half s-plane (LHP).

The Routh array determines stability from the characteristic polynomial's coefficients alone, without computing the roots: the number of sign changes in the first column equals the number of right-half-plane poles.
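A minimal Routh-array sketch (it assumes no zero appears in the first column, so the special cases are not handled), cross-checked against two polynomials whose roots are known:

```python
import numpy as np

def routh_first_column(coeffs):
    """First column of the Routh array for [a_n, ..., a_1, a_0]."""
    r1 = list(coeffs[0::2])
    r2 = list(coeffs[1::2])
    width = len(r1)
    r1 += [0.0] * (width - len(r1))
    r2 += [0.0] * (width - len(r2))
    rows = [r1, r2]
    for _ in range(len(coeffs) - 2):
        p2, p1 = rows[-2], rows[-1]
        new = [(p1[0] * p2[j + 1] - p2[0] * p1[j + 1]) / p1[0]
               for j in range(width - 1)] + [0.0]
        rows.append(new)
    return [row[0] for row in rows]

def rhp_count(coeffs):
    col = routh_first_column(coeffs)
    return sum(1 for u, v in zip(col, col[1:]) if u * v < 0)

# s³ + 2s² + 3s + 1: all roots in the LHP → 0 sign changes
print(rhp_count([1, 2, 3, 1]))   # 0
# s³ + s² + 2s + 8 = (s+2)(s² − s + 4): 2 RHP roots → 2 sign changes
print(rhp_count([1, 1, 2, 8]))   # 2
```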

8.3 Bode Plot

The frequency response $H(j\omega)$ plotted on a logarithmic scale:

  • Magnitude plot: $20\log_{10}|H(j\omega)|$ [dB] vs $\log_{10}\omega$
  • Phase plot: $\angle H(j\omega)$ [degrees] vs $\log_{10}\omega$

Stability margins:

  • Phase Margin (PM): How far the phase is from -180 degrees when the gain is 0 dB
  • Gain Margin (GM): How far the gain is from 0 dB when the phase is -180 degrees

PM > 45 degrees and GM > 6 dB are generally considered acceptable stability margins.

```python
from scipy import signal
import matplotlib.pyplot as plt
import numpy as np

# Second-order transfer function: H(s) = 10 / (s^2 + 2s + 10)
num = [10]
den = [1, 2, 10]
sys = signal.TransferFunction(num, den)

w, mag, phase = signal.bode(sys)

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 6))
ax1.semilogx(w, mag)
ax1.set_ylabel('Magnitude (dB)')
ax1.set_title('Bode Plot')
ax1.grid(True, which='both')

ax2.semilogx(w, phase)
ax2.set_ylabel('Phase (degrees)')
ax2.set_xlabel('Frequency (rad/s)')
ax2.grid(True, which='both')

plt.tight_layout()
plt.show()
```
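The stability margins themselves can be read off the open-loop frequency response. `scipy.signal` has no built-in margin helper (the python-control package provides `control.margin`), but a sketch over a dense frequency grid works; the open-loop plant $L(s) = 4/\bigl(s(s+1)(s+2)\bigr)$ used here is an illustrative choice:

```python
import numpy as np
from scipy import signal

# Open loop L(s) = 4 / (s(s+1)(s+2)); margins read from freqresp.
L = signal.TransferFunction([4], [1, 3, 2, 0])
w = np.logspace(-2, 2, 100000)
w, H = signal.freqresp(L, w=w)

mag_db = 20 * np.log10(np.abs(H))
phase = np.unwrap(np.angle(H)) * 180 / np.pi

# Gain crossover (|L| = 0 dB): phase margin = 180° + phase there
i_gc = np.argmin(np.abs(mag_db))
pm = 180 + phase[i_gc]

# Phase crossover (phase = -180°): gain margin = -|L| in dB there
i_pc = np.argmin(np.abs(phase + 180))
gm = -mag_db[i_pc]

print(f"Phase margin ≈ {pm:.1f} deg, gain margin ≈ {gm:.1f} dB")
```

For this plant both margins are positive but small (roughly 12 degrees and 3.5 dB), so the loop is stable yet poorly damped by the rules of thumb above.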

9. PID Controller

9.1 PID Control Action

A PID controller combines three control actions:

$$u(t) = K_p e(t) + K_i \int_0^t e(\tau) \, d\tau + K_d \frac{de(t)}{dt}$$

where $e(t) = r(t) - y(t)$ is the error signal.

  • Proportional (P): Control output proportional to current error. Fast response but leaves steady-state error.
  • Integral (I): Eliminates accumulated error to drive steady-state error to zero. Slows response.
  • Derivative (D): Reacts to rate of change of error, suppressing overshoot. Sensitive to noise.

Transfer function:

$$C(s) = K_p + \frac{K_i}{s} + K_d s = K_p\left(1 + \frac{1}{T_i s} + T_d s\right)$$

9.2 Ziegler-Nichols Tuning

Method 1 (Open-loop step response):

Determine the dead time $L$ and time constant $T$ from the process reaction curve:

| Controller | $K_p$ | $T_i$ | $T_d$ |
|---|---|---|---|
| P | $T/L$ | – | – |
| PI | $0.9T/L$ | $3L$ | – |
| PID | $1.2T/L$ | $2L$ | $0.5L$ |

Method 2 (Closed-loop ultimate gain):

Set $K_i = K_d = 0$. Increase $K_p$ until sustained oscillation occurs. Record the ultimate gain $K_u$ and period $T_u$:

| Controller | $K_p$ | $T_i$ | $T_d$ |
|---|---|---|---|
| PID | $0.6K_u$ | $0.5T_u$ | $0.125T_u$ |
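Converting the Method 2 table into parallel-form gains ($K_i = K_p/T_i$, $K_d = K_p T_d$) is a one-liner; the values $K_u = 8$, $T_u = 2$ below are made-up illustrative numbers:

```python
# Ziegler-Nichols Method 2: PID gains from ultimate gain Ku and period Tu.
def zn_pid(Ku, Tu):
    Kp = 0.6 * Ku
    Ti = 0.5 * Tu
    Td = 0.125 * Tu
    # Parallel form: Ki = Kp/Ti, Kd = Kp·Td
    return Kp, Kp / Ti, Kp * Td

Kp, Ki, Kd = zn_pid(Ku=8.0, Tu=2.0)
print(f"Kp={Kp}, Ki={Ki}, Kd={Kd}")   # Kp=4.8, Ki=4.8, Kd=1.2
```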

9.3 PID Simulation in Python

```python
import numpy as np
import matplotlib.pyplot as plt

class PIDController:
    def __init__(self, kp, ki, kd, dt):
        self.kp = kp
        self.ki = ki
        self.kd = kd
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        output = self.kp * error + self.ki * self.integral + self.kd * derivative
        self.prev_error = error
        return output

# First-order plant: dy/dt = -y + u (time constant = 1s)
def plant_step(y, u, dt):
    return y + dt * (-y + u)

dt = 0.01
t = np.arange(0, 10, dt)
setpoint = 1.0

pid = PIDController(kp=2.0, ki=1.0, kd=0.5, dt=dt)
y = 0.0
y_list = []

for _ in t:
    u = pid.update(setpoint, y)
    u = np.clip(u, -10, 10)   # actuator saturation
    y = plant_step(y, u, dt)
    y_list.append(y)

plt.figure(figsize=(10, 4))
plt.plot(t, y_list, label='Plant Output')
plt.axhline(setpoint, color='r', linestyle='--', label='Setpoint')
plt.xlabel('Time (s)')
plt.ylabel('Output')
plt.title('PID Control Simulation')
plt.legend()
plt.grid(True)
plt.show()
```

10. State-Space Representation

10.1 State Equations

State-space representation of a continuous-time LTI system:

$$\dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t) + \mathbf{B}\mathbf{u}(t)$$

$$\mathbf{y}(t) = \mathbf{C}\mathbf{x}(t) + \mathbf{D}\mathbf{u}(t)$$

  • $\mathbf{x}$: state vector ($n \times 1$)
  • $\mathbf{u}$: input vector ($m \times 1$)
  • $\mathbf{y}$: output vector ($p \times 1$)
  • $\mathbf{A}$: system matrix ($n \times n$)
  • $\mathbf{B}$: input matrix ($n \times m$)
  • $\mathbf{C}$: output matrix ($p \times n$)
  • $\mathbf{D}$: feedthrough matrix ($p \times m$)

Relationship to the transfer function: $H(s) = \mathbf{C}(s\mathbf{I} - \mathbf{A})^{-1}\mathbf{B} + \mathbf{D}$

10.2 Controllability and Observability

Controllability: Can the state be driven from any initial condition to any desired state in finite time?

The controllability matrix $\mathcal{C} = [\mathbf{B} \;\; \mathbf{A}\mathbf{B} \;\; \mathbf{A}^2\mathbf{B} \;\; \cdots \;\; \mathbf{A}^{n-1}\mathbf{B}]$ must have rank $n$ for full controllability.

Observability: Can the initial state be determined from the output alone?

The observability matrix $\mathcal{O} = [\mathbf{C}^T \;\; (\mathbf{C}\mathbf{A})^T \;\; \cdots \;\; (\mathbf{C}\mathbf{A}^{n-1})^T]^T$ must have rank $n$ for full observability.
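Both rank tests are a few lines of NumPy. A minimal sketch for a double-integrator plant with a position-only sensor:

```python
import numpy as np

# Build the controllability and observability matrices and check their ranks.
def ctrb(A, B):
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def obsv(A, C):
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

A = np.array([[0, 1], [0, 0]])   # double integrator
B = np.array([[0], [1]])
C = np.array([[1, 0]])           # measure position only

print("rank of C_matrix:", np.linalg.matrix_rank(ctrb(A, B)))   # 2 → controllable
print("rank of O_matrix:", np.linalg.matrix_rank(obsv(A, C)))   # 2 → observable
```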

10.3 Linear Quadratic Regulator (LQR)

Optimal state feedback that minimizes a quadratic cost function:

$$J = \int_0^{\infty} \left(\mathbf{x}^T \mathbf{Q} \mathbf{x} + \mathbf{u}^T \mathbf{R} \mathbf{u}\right) dt$$

Optimal control: $\mathbf{u}^* = -\mathbf{K}\mathbf{x}$

$\mathbf{Q}$ weights state error; $\mathbf{R}$ weights control energy.

```python
import numpy as np
from scipy import linalg

# Double integrator: x_ddot = u
A = np.array([[0, 1], [0, 0]])
B = np.array([[0], [1]])

# LQR weights: penalize position error more
Q = np.diag([10, 1])
R = np.array([[1]])

# Solve continuous algebraic Riccati equation
P = linalg.solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P

print("LQR Gain K:", K)
print("Closed-loop poles:", np.linalg.eigvals(A - B @ K))
```

11. AI/ML and Signal Processing

11.1 CNN for 1D Signals

1D convolutional neural networks excel at pattern recognition in time-series signals such as ECG, seismic data, and audio.

```python
import torch
import torch.nn as nn

class Signal1DCNN(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.conv_layers = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(8)
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8, 256),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes)
        )

    def forward(self, x):
        x = self.conv_layers(x)
        return self.classifier(x)

# Example usage
model = Signal1DCNN(num_classes=5)
# Input shape: (batch_size, channels=1, sequence_length=1000)
x = torch.randn(8, 1, 1000)
output = model(x)
print("Output shape:", output.shape)  # (8, 5)
```

11.2 STFT and Mel-Spectrogram

Speech AI systems commonly convert signals to time-frequency representations and process them with CNNs.

```python
import librosa
import numpy as np

# STFT (Short-Time Fourier Transform)
# D = librosa.stft(y)
# S_db = librosa.amplitude_to_db(np.abs(D), ref=np.max)

# Mel-Spectrogram
# mel_spec = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
# mel_db   = librosa.power_to_db(mel_spec, ref=np.max)

# MFCC (Mel-Frequency Cepstral Coefficients)
# mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

print("STFT: Hann window, hop_length=512, n_fft=2048")
print("Mel-filterbank: 128 mel bands, 0 Hz to Nyquist")
print("MFCC: 13 coefficients commonly used for speech recognition")
```

11.3 Signal Processing + Deep Learning Applications

  • ECG Arrhythmia Classification: 1D CNN + LSTM for detecting cardiac rhythm abnormalities
  • Anomalous Vibration Detection: FFT feature extraction followed by autoencoder-based anomaly detection
  • Speech Recognition: MFCC + Transformer architectures (Whisper, wav2vec 2.0)
  • Seismic Event Detection: Automatic P-wave/S-wave classification from seismometer signals
  • EEG Analysis: Frequency-band power spectrum for attention/sleep state classification

12. Quiz

Q1. Given a sampling frequency of 8000 Hz, what is the maximum signal frequency that can be represented without aliasing?

Answer: 4000 Hz

Explanation: By the Nyquist-Shannon sampling theorem, the maximum representable frequency is half the sampling frequency — the Nyquist frequency (4000 Hz). This is exactly why telephone voice codecs (8 kHz sampling) handle audio up to 4 kHz.

Q2. What is the most important advantage of FIR filters over IIR filters?

Answer: Linear phase response and unconditional stability.

Explanation: FIR filters have no feedback, so they are always BIBO stable. Symmetric coefficients guarantee linear phase, meaning all frequency components experience the same group delay. This is critical in data communications where phase distortion must be avoided. IIR filters, by contrast, achieve sharp cutoff at much lower filter orders.

Q3. What is the role of the derivative (D) action in a PID controller, and what is its main drawback?

Answer: The D term reacts to the rate of change of error to suppress overshoot, but it amplifies measurement noise.

Explanation: The D action generates a strong control output when the error changes rapidly, preventing the system from overshooting the setpoint. However, differentiating a noisy measurement amplifies high-frequency noise. Practical implementations add a low-pass filter on the derivative term, or use derivative-on-measurement (differentiating the output rather than the error) to mitigate this problem.

Q4. What is the stability condition for a digital filter stated in terms of the Z-transform?

Answer: All poles must lie strictly inside the unit circle in the z-plane.

Explanation: For a discrete-time system: poles inside the unit circle (magnitude less than 1) imply stability; poles on the unit circle imply marginal stability; poles outside the unit circle imply instability. This mirrors the continuous-time condition that all poles must be in the left half s-plane. Under the mapping z = exp(sT), the left half s-plane maps exactly to the interior of the unit circle.

Q5. If the phase margin of a closed-loop control system is negative, what can you conclude about the system?

Answer: The system is unstable.

Explanation: Phase margin is measured at the gain crossover frequency (where gain = 0 dB). A negative phase margin means the phase has already passed -180 degrees before the gain drops to 0 dB, causing the closed-loop system to be unstable. Well-designed systems typically target a phase margin of 45 to 60 degrees and a gain margin greater than 6 dB.
