[Computer Networking] 03. Network Delay, Loss, and Throughput

This post is based on the textbook Computer Networking: A Top-Down Approach (6th Edition) by James Kurose and Keith Ross.


1. Overview of Delay in Packet-Switched Networks

As a packet travels from source to destination, it experiences several types of delay at each node (router) along the path. The most important is nodal delay, which consists of four components.

Nodal delay (d_nodal) = d_proc + d_queue + d_trans + d_prop

  d_proc  : Processing Delay
  d_queue : Queuing Delay
  d_trans : Transmission Delay
  d_prop  : Propagation Delay
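The sum above can be expressed as a minimal Python sketch; the component values below are illustrative, not from the text:

```python
def nodal_delay(d_proc, d_queue, d_trans, d_prop):
    """Total nodal delay in seconds: sum of the four components."""
    return d_proc + d_queue + d_trans + d_prop

# Illustrative values (seconds): 1 us processing, 0.5 ms queuing,
# 1 ms transmission, 20 ms propagation
total = nodal_delay(1e-6, 0.5e-3, 1e-3, 20e-3)
print(round(total * 1000, 3), "ms")
```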

2. The Four Delay Components

2.1 Processing Delay

The time required to examine the packet header and determine where to direct the packet.

  • Bit-level error checking of the packet header
  • Forwarding table lookup
  • Typically on the order of microseconds (us) or less

Packet arrives -> [Header check] -> [Forwarding decision] -> Enter queue
                   Processing delay (d_proc)

2.2 Queuing Delay

The time a packet waits in the output link's queue before transmission.

  • Depends on the number of other packets waiting in the queue
  • Ranges from microseconds to milliseconds
  • The most complex and unpredictable of the four delays

Output queue:
  [pkt5][pkt4][pkt3][pkt2][pkt1] --> Output link
                                      (transmitting)
  <-- Wait time for these packets = queuing delay -->

2.3 Transmission Delay

The time required to push all of a packet's bits onto the link.

Transmission delay = L / R

  L: Packet length (bits)
  R: Link transmission rate (bps)

Example: L = 10,000 bits, R = 10 Mbps

d_trans = 10,000 / 10,000,000 = 0.001 seconds = 1 ms

Transmission delay depends on packet length and link transmission rate, and is independent of the distance between routers.
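The L/R formula and the example above translate directly into a one-line helper:

```python
def transmission_delay(L_bits, R_bps):
    """Time (seconds) to push all L bits of a packet onto a link of rate R bps."""
    return L_bits / R_bps

# The example above: L = 10,000 bits, R = 10 Mbps
print(transmission_delay(10_000, 10_000_000))  # 0.001 s = 1 ms
```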

2.4 Propagation Delay

The time it takes for a bit to physically propagate through the link.

Propagation delay = d / s

  d: Physical distance between two routers (meters)
  s: Propagation speed of the medium (approx. 2 * 10^8 m/s to 3 * 10^8 m/s)

Example: Distance between routers = 5,000 km, propagation speed = 2.5 x 10^8 m/s

d_prop = 5,000,000 / 250,000,000 = 0.02 seconds = 20 ms

Propagation delay depends on distance and is independent of packet size.
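Likewise, d/s is a single division; this sketch reproduces the 20 ms example:

```python
def propagation_delay(d_meters, s_mps):
    """Time (seconds) for a bit to travel distance d at propagation speed s."""
    return d_meters / s_mps

# The example above: 5,000 km at 2.5 * 10^8 m/s
print(propagation_delay(5_000_000, 2.5e8))  # 0.02 s = 20 ms
```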

2.5 Transmission Delay vs Propagation Delay Analogy

Highway tollbooth analogy:

Car caravan = bits in a packet
Tollbooth = router
Highway = link

Transmission delay: Time for all cars to pass through the tollbooth
                    (number of cars / tollbooth processing speed)

Propagation delay: Time for one car to travel from one tollbooth to the next
                   (distance / car speed)

3. Queuing Delay and Packet Loss

3.1 Traffic Intensity

The extent of queuing delay can be assessed using traffic intensity.

Traffic intensity = L * a / R

  L: Packet size (bits)
  a: Average packet arrival rate (packets/sec)
  R: Link transmission rate (bps)

Queuing Delay vs Traffic Intensity

Queuing
delay
  ^
  |          |
  |          |     /
  |          |    /
  |          |   /
  |          |  /
  |        __|_/
  |   ___/   |
  |__/       |
  +-----------+-------> Traffic intensity (La/R)
  0          1

  La/R -> 0 : Queuing delay nearly zero
  La/R -> 1 : Queuing delay increases dramatically
  La/R > 1  : Queue grows without bound (unstable system)

Key Rules

  Traffic Intensity      Queuing Delay Status
  -------------------    ------------------------------------------
  La/R close to 0        Nearly zero
  La/R close to 1        Increases dramatically
  La/R greater than 1    Grows without bound (effectively packet loss)

Golden rule of system design: Traffic intensity must not exceed 1.
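The stability check can be sketched in a few lines of Python; the packet size and arrival rate below are hypothetical:

```python
def traffic_intensity(L_bits, a_pkts_per_sec, R_bps):
    """La/R: fraction of link capacity consumed by arriving traffic."""
    return (L_bits * a_pkts_per_sec) / R_bps

# Hypothetical load: 1,500-byte packets arriving 100 times per second
# on a 2 Mbps link
rho = traffic_intensity(1500 * 8, 100, 2_000_000)
print(rho, "stable" if rho < 1 else "unstable")  # 0.6 stable
```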

3.2 Packet Loss

In reality, queue (buffer) sizes are finite.

When a new packet arrives at a full buffer:

  [pkt_n][...][pkt2][pkt1] --> Output link
  ^^^^^^^^^^^^^^^^^^^^^^^^
  Buffer capacity = n (full)

  pkt_new arrives -> Drop! (Packet loss)
  • Lost packets may be retransmitted by the previous node or the source
  • Or they may not be retransmitted at all (depends on the application)
  • Loss rates increase significantly in congested networks
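Drop-on-full behavior can be sketched with a bounded FIFO; the capacity and packet names are made up for illustration:

```python
from collections import deque

def enqueue(queue, capacity, packet):
    """Append the packet if the buffer has room; otherwise drop it."""
    if len(queue) >= capacity:
        return False  # buffer full -> packet loss
    queue.append(packet)
    return True

buf = deque()
# Five arrivals into a buffer that holds only three packets
results = [enqueue(buf, 3, f"pkt{i}") for i in range(5)]
print(results)  # [True, True, True, False, False]
```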

4. End-to-End Delay

Let us calculate the total delay from source to destination.

For N identical links (assuming negligible queuing delay):

d_end-to-end = N * (d_proc + d_trans + d_prop)
             = N * (d_proc + L/R + d/s)

Example: 3 routers (3 links), d_proc = 0.003ms, L = 1,500 bytes, R = 2 Mbps, d = 5,000 km, s = 2.5 * 10^8 m/s

Delay per hop:
  d_proc  = 0.003 ms
  d_trans = (1500 * 8) / 2,000,000 = 6 ms
  d_prop  = 5,000,000 / 250,000,000 = 20 ms

Per-hop delay = 0.003 + 6 + 20 = 26.003 ms
Total delay = 3 * 26.003 = 78.009 ms
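Putting the per-hop formula into code reproduces the worked example:

```python
def end_to_end_delay(N, d_proc, L_bits, R_bps, d_meters, s_mps):
    """N identical hops, no queuing: N * (d_proc + L/R + d/s), in seconds."""
    return N * (d_proc + L_bits / R_bps + d_meters / s_mps)

# The example above: 3 links, d_proc = 0.003 ms, L = 1,500 bytes,
# R = 2 Mbps, d = 5,000 km per link, s = 2.5 * 10^8 m/s
total_ms = end_to_end_delay(3, 0.003e-3, 1500 * 8, 2e6, 5e6, 2.5e8) * 1000
print(round(total_ms, 3), "ms")  # 78.009 ms
```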

4.1 Traceroute

The traceroute command (tracert on Windows) can measure the actual end-to-end path and per-hop delays.

traceroute www.example.com

Sample output:
  1  192.168.1.1     1.234 ms   1.123 ms   1.345 ms
  2  10.0.0.1        5.678 ms   5.432 ms   5.789 ms
  3  72.14.215.85   15.234 ms  14.987 ms  15.123 ms
  ...

Each row represents one router (hop) with three RTT measurements.

4.2 Other Types of End-to-End Delays

Intentional Delays

  • Media packetization delay: Time to collect voice data into packets in VoIP
  • Processing delay: Virus scanning at email servers, etc.

5. Throughput

5.1 Definition

Throughput is the number of bits per unit time delivered from source to destination.

  • Instantaneous throughput: Transmission rate at a specific point in time
  • Average throughput: Average transmission rate over the entire transfer duration

Transferring a file of F bits takes T seconds:

  Average throughput = F / T (bps)

End-to-end throughput is determined by the slowest link on the path.

Server --Rs--> Router --Rc--> Client

  Rs = Server-side link transmission rate
  Rc = Client-side link transmission rate

  Throughput = min(Rs, Rc)

Example 1: Server Side Is the Bottleneck

  Server --2 Mbps--> Router --10 Mbps--> Client

  Throughput = min(2, 10) = 2 Mbps
  Bottleneck link: Server-side link

Example 2: Client Side Is the Bottleneck

  Server --100 Mbps--> Router --1.5 Mbps--> Client

  Throughput = min(100, 1.5) = 1.5 Mbps
  Bottleneck link: Client-side link (access network)

10 server-client pairs sharing one core link (R):

  Server1  --Rs--+                    +--Rc--> Client1
  Server2  --Rs--+                    +--Rc--> Client2
  ...            +-- R (shared link) -+
  Server10 --Rs--+                    +--Rc--> Client10

Throughput per connection:

Throughput = min(Rs, Rc, R/10)

In the real Internet, core link capacity is very large, so the access network is usually the bottleneck.

Typical case:
  Core link >> Access network transmission rate

  -> Throughput = min(Rs, Rc)
  -> Bottleneck is almost always the access network
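The min() rule covers both the two-link examples and the shared-link case; the 15 Mbps core rate below is hypothetical:

```python
def throughput(link_rates_bps):
    """End-to-end throughput equals the rate of the slowest (bottleneck) link."""
    return min(link_rates_bps)

# Example 1 above: 2 Mbps server link, 10 Mbps client link
print(throughput([2e6, 10e6]) / 1e6, "Mbps")  # 2.0 Mbps

# Hypothetical shared core link: R = 15 Mbps split fairly among 10 pairs,
# so each connection sees min(Rs, Rc, R/10)
print(throughput([2e6, 10e6, 15e6 / 10]) / 1e6, "Mbps")  # 1.5 Mbps
```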

6. Relationship Between Delay and Throughput

Delay and throughput are distinct performance metrics:

  Delay: Time for a single packet to arrive
  Throughput: Amount of data delivered per unit time

  Pipe analogy:
  +-----------------------------+
  |  Water (data) flowing       |
  |  through a pipe             |
  +-----------------------------+
    <-- pipe length = delay -->
    pipe cross-section = throughput

7. Summary

  Delay Component       Depends On                           Magnitude
  ------------------    ---------------------------------    ----------
  Processing delay      Router performance                   us or less
  Queuing delay         Traffic intensity                    us to ms
  Transmission delay    Packet size / link rate              us to ms
  Propagation delay     Link distance / propagation speed    ms

Key formulas:

d_nodal = d_proc + d_queue + d_trans + d_prop
Traffic intensity = La/R (must be less than 1 for stability)
Throughput = min(transmission rates of all links on the path)

8. Review Questions

Q1. Explain the difference between transmission delay and propagation delay.
  • Transmission delay: The time to push all bits of a packet onto the link. Calculated as L/R and depends on packet size and link transmission rate.
  • Propagation delay: The time for a bit to physically travel through the link. Calculated as d/s and depends on distance and propagation speed.

Tollbooth analogy: Transmission delay is the time for all cars to pass through the tollbooth; propagation delay is the time for one car to drive to the next tollbooth.

Q2. What happens when traffic intensity exceeds 1?

The average packet arrival rate exceeds the transmission capacity of the link, causing the queue to grow without bound. In reality, since buffers are finite, massive packet loss occurs. Therefore, networks must be designed so that traffic intensity does not exceed 1.

Q3. What determines end-to-end throughput?

The slowest link on the path, the bottleneck link, determines end-to-end throughput. In the real Internet, the access network (home DSL, cable, etc.) is usually the bottleneck.