- Published on
[Computer Networking] 15. Link Layer: Error Detection and Multiple Access Protocols
- Authors
- Youngju Kim (@fjvbn20031)
Link Layer: Error Detection and Multiple Access Protocols
The Link Layer is responsible for safely delivering frames between two adjacent nodes. While the network layer handles packet delivery between hosts, the link layer handles communication on a per-hop basis.
In this post, we examine the link layer's services, error detection techniques, and Multiple Access protocols for shared media.
1. Link Layer Services
1.1 Basic Services
Services Provided by the Link Layer
======================================
1. Framing
- Encapsulates network layer datagrams into frames
- Includes MAC addresses in frame header
2. Link Access
- Access control on shared media (multiple access protocols)
- Frame delivery using MAC addresses
3. Reliable Delivery
- ARQ (Automatic Repeat reQuest) on error-prone links such as wireless
- Rarely used on low-error wired links
4. Error Detection and Correction
- Bit error detection: parity, CRC, etc.
- Error correction: FEC (Forward Error Correction)
1.2 Implementation Location
The link layer is primarily implemented in the NIC (Network Interface Card). The NIC's hardware chip handles framing, CRC computation, MAC protocol, and more.
Link Layer Implementation
===========================
[Application Layer]
[Transport Layer] CPU / Software
[Network Layer]
-----------------------
[Link Layer] NIC (Hardware + Firmware)
[Physical Layer]
2. Error Detection Techniques
2.1 Parity Bit
The simplest error detection method, adding a single parity bit to the data bits.
Single Parity Bit (Even Parity)
=================================
Data: 1011010
Parity bit: 0 (so that the count of 1s is even)
Transmitted: 10110100
Received: 10110110 (7th bit flipped in transit)
Count of 1s: 5 (odd) --> Error detected!
Limitation: An even number of bit errors goes undetected (the parity still checks out)
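The single-parity scheme above can be sketched in a few lines of Python (the function and variable names are illustrative):

```python
def even_parity_bit(bits):
    # Choose the parity bit so the total number of 1s (data + parity) is even.
    return sum(bits) % 2

data = [1, 0, 1, 1, 0, 1, 0]
sent = data + [even_parity_bit(data)]     # 10110100

received = sent.copy()
received[6] ^= 1                          # a bit flips in transit: 10110110
error_detected = sum(received) % 2 != 0   # odd count of 1s --> error detected
```

Flipping a second bit in `received` would restore even parity, which is exactly the limitation noted above.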
Two-Dimensional Parity: Adding parity bits to both rows and columns can even correct single-bit errors.
Two-Dimensional Parity
========================
Data + Row Parity:
1 0 1 1 | 1
0 1 1 0 | 0
1 1 0 1 | 1
-----------
0 0 0 0 | 0   (Column parity)

Error occurred:
1 0 1 1 | 1
0 1 0 0 | 0   <-- Row parity mismatch (2nd row)
1 1 0 1 | 1
-----------
0 0 1 0 | 0
    ^
    Column parity mismatch (3rd column)

2nd row, 3rd column --> Error located; flipping that bit corrects it
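The row/column cross-check above can be sketched in Python (names are illustrative; assumes exactly one flipped bit, per the single-error guarantee):

```python
def parity(bits):
    return sum(bits) % 2

def locate_single_error(rows, row_parity, col_parity):
    # Recompute parities: a single flipped data bit produces exactly one
    # row mismatch and one column mismatch, which pinpoint its position.
    bad_row = next((i for i, row in enumerate(rows)
                    if parity(row) != row_parity[i]), None)
    bad_col = next((j for j in range(len(rows[0]))
                    if parity([row[j] for row in rows]) != col_parity[j]), None)
    return bad_row, bad_col

rows = [[1, 0, 1, 1], [0, 1, 0, 0], [1, 1, 0, 1]]   # bit (row 1, col 2) flipped
row_parity = [1, 0, 1]
col_parity = [0, 0, 0, 0]
r, c = locate_single_error(rows, row_parity, col_parity)   # (1, 2)
rows[r][c] ^= 1   # single-bit errors are correctable, not just detectable
```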
2.2 Internet Checksum
A simple error detection method used at the transport layer (TCP, UDP) and for the IPv4 header checksum.
Internet Checksum Computation
================================
1. Divide data into 16-bit words
2. Add all words using one's complement addition
3. Take the one's complement of the result as the checksum
Example:
Word 1: 0110 0110 0110 0000
Word 2: 0101 0101 0101 0101
-------------------------
Sum: 1011 1011 1011 0101
Checksum: 0100 0100 0100 1010 (one's complement)
Verification: At receiver, adding all words including checksum
should yield 1111 1111 1111 1111
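The three steps above fit in a short Python sketch (function name is illustrative):

```python
def internet_checksum(words):
    # One's-complement sum of 16-bit words (carries wrap around),
    # then the one's complement of the result.
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry
    return ~total & 0xFFFF

words = [0b0110011001100000, 0b0101010101010101]
cksum = internet_checksum(words)   # 0b0100010001001010

# Receiver: the one's-complement sum of words + checksum is 0xFFFF,
# so re-running the checksum over everything yields 0 when no error occurred.
check = internet_checksum(words + [cksum])   # 0
```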
2.3 CRC (Cyclic Redundancy Check)
CRC is the most widely used and powerful error detection method at the link layer. It is based on polynomial division.
CRC Computation Process
=========================
1. Determine generator polynomial G (shared by sender and receiver)
Example: G = 1001 (x^3 + 1), r = 3 bits
2. Append r zeros to data D
D = 101110, D * 2^r = 101110 000
3. Compute remainder R by XOR division of D * 2^r by G
101110000 / 1001 (XOR at each step):

    101110000
XOR 1001
    ---------
    001010000
XOR   1001
    ---------
    000011000
XOR     1001
    ---------
    000001010
XOR      1001
    ---------
    000000011

Remainder R = 011
4. Transmitted bits: Concatenate D and R = 101110 011
5. Receiver: Divide received bits by G, remainder = 0 means no error
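The whole procedure, sender and receiver, can be checked with a small mod-2 division routine (names are illustrative):

```python
def mod2_div(bits, gen):
    # Mod-2 (XOR) long division; returns the r-bit remainder, r = len(gen) - 1.
    bits = list(bits)
    for i in range(len(bits) - len(gen) + 1):
        if bits[i]:                       # divide only where a leading 1 remains
            for j, g in enumerate(gen):
                bits[i + j] ^= g
    return bits[len(bits) - (len(gen) - 1):]

G = [1, 0, 0, 1]                  # x^3 + 1, so r = 3
D = [1, 0, 1, 1, 1, 0]
R = mod2_div(D + [0, 0, 0], G)    # remainder of D * 2^r: [0, 1, 1]
codeword = D + R                  # transmitted bits: 101110 011

# Receiver: dividing the received bits by G leaves remainder 0 --> no error.
assert mod2_div(codeword, G) == [0, 0, 0]
```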
2.4 Error Detection Capability of CRC
CRC's Strong Detection Capability
====================================
Errors detectable with r-bit CRC:
- All single-bit errors
- All 2-bit errors (with appropriate G selection)
- Odd number of bit errors (if G contains x+1 as factor)
- Burst errors of r bits or fewer
Standard CRC Polynomials:
CRC-8: x^8 + x^2 + x + 1
CRC-16: x^16 + x^15 + x^2 + 1
CRC-32: Used in Ethernet, WiFi, ZIP, etc.
Very strong detection with 32-bit remainder
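The CRC-32 polynomial used by Ethernet and ZIP ships in Python's standard `zlib` module, which makes detection easy to observe:

```python
import zlib

frame = b"hello, link layer"          # illustrative payload
fcs = zlib.crc32(frame)               # 32-bit frame check sequence

# Flipping even a single bit changes the CRC, so the receiver detects it.
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
assert zlib.crc32(corrupted) != fcs
```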
3. Multiple Access Protocols
When multiple nodes share a single broadcast link, collisions from simultaneous transmission must be resolved.
3.1 Classification
Multiple Access Protocol Classification
==========================================
1. Channel Partitioning
- TDMA, FDMA, CDMA
- No collisions but inefficient
2. Random Access
- ALOHA, Slotted ALOHA, CSMA, CSMA/CD
- Collisions allowed, detection/recovery mechanisms
3. Taking-Turns
- Polling, Token Passing
- Middle ground compromise
4. Channel Partitioning Protocols
4.1 TDMA (Time Division Multiple Access)
Time is divided into fixed-size slots, and each node is assigned a dedicated slot.
TDMA Operation
================
Time frame:
[Slot1][Slot2][Slot3][Slot4] [Slot1][Slot2][Slot3][Slot4]
 NodeA  NodeB  NodeC  NodeD   NodeA  NodeB  NodeC  NodeD
Advantage: No collisions
Disadvantage: Even if only Node A has data, it can only transmit in its slot
--> Only 1/N of channel bandwidth usable (waste)
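The fixed assignment above amounts to a one-line schedule (a minimal sketch; names are illustrative):

```python
def tdma_owner(nodes, slot_index):
    # Fixed round-robin assignment: slot i belongs to node i mod N,
    # whether or not that node has anything to send -- hence the 1/N waste.
    return nodes[slot_index % len(nodes)]

nodes = ["A", "B", "C", "D"]
schedule = [tdma_owner(nodes, t) for t in range(8)]
# ['A', 'B', 'C', 'D', 'A', 'B', 'C', 'D']
```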
4.2 FDMA (Frequency Division Multiple Access)
The frequency band is divided and each node is assigned a dedicated frequency.
FDMA Operation
================
Frequency
^
| [Node D band]
| [Node C band]
| [Node B band]
| [Node A band]
+--------------------> Time
Same pros/cons as TDMA
- No collisions but bandwidth waste
5. Random Access Protocols
5.1 ALOHA
The first random access protocol, developed at the University of Hawaii.
Pure ALOHA
===========
Operation:
1. If a frame is ready, transmit immediately
2. On collision, retransmit immediately with probability p; with probability (1-p), wait a frame transmission time and try again
Time -->
[A transmits]
      [B transmits]        <-- Overlaps with A: collision!
              [A retransmits]
                            [C transmits]   <-- No collision
Maximum efficiency: 1/(2e) = ~18%
(Only 18% of channel capacity used for actual transmission)
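The 1/(2e) figure follows from the classic throughput curve S = G * e^(-2G) for Poisson offered load G: a frame is vulnerable to collision for two frame times. A quick check (function name is illustrative):

```python
import math

def pure_aloha_throughput(G):
    # S = G * e^(-2G); the exponential is the probability that no other
    # frame starts during the two-frame-time vulnerable period.
    return G * math.exp(-2 * G)

peak = pure_aloha_throughput(0.5)   # maximum at G = 1/2: 1/(2e) ~= 0.184
```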
5.2 Slotted ALOHA
Time is divided into slots, and transmission is only allowed at slot boundaries, reducing the collision window.
Slotted ALOHA
===============
Time slots:
| Slot 1 | Slot 2 | Slot 3 | Slot 4 | Slot 5 |
| A sends | A+B col.| B sends | | C sends |
| Success | Failure | Success | Empty | Success |
Maximum efficiency: 1/e = ~37%
(Twice Pure ALOHA)
Assumptions:
- All frames are the same size
- Time is synchronized into slots
- Transmission starts only at slot boundaries
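The 1/e bound can be derived directly: with N ready nodes each transmitting in a slot with probability p, a slot succeeds when exactly one transmits. A small sketch (names are illustrative):

```python
def slotted_aloha_efficiency(N, p):
    # P(exactly one of N nodes transmits) = N * p * (1-p)^(N-1).
    # The best p is 1/N, and as N grows this approaches 1/e ~= 0.37.
    return N * p * (1 - p) ** (N - 1)

best = [slotted_aloha_efficiency(N, 1 / N) for N in (2, 10, 100)]
# [0.5, ~0.387, ~0.370] -- converging toward 1/e
```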
5.3 CSMA (Carrier Sense Multiple Access)
Before transmitting, the channel is listened to (carrier sense). If another node is transmitting, wait.
CSMA Operation
================
1. Carrier Sense (Listen before talk)
- Channel idle --> Transmit
- Channel busy --> Wait
2. Why collisions still occur: Propagation delay!
Time
|
| A starts transmitting
| [A's signal traveling toward B...]
| B senses channel: "Idle!" --> B also starts transmitting
| [Collision!]
v
The greater the propagation delay, the higher the collision probability
5.4 CSMA/CD (CSMA with Collision Detection)
Detects collisions and immediately stops transmission to reduce channel waste. This is the method used by Ethernet.
CSMA/CD Operation
====================
1. NIC receives a datagram from the network layer, builds a frame, and senses the channel
2. If channel is idle, start transmission
3. Collision detected during transmission:
a. Stop transmission immediately
b. Send Jam signal (48 bits)
c. Binary Exponential Backoff
Binary Exponential Backoff:
After nth collision, randomly choose from 0 ~ (2^n - 1) and wait
1st collision: Choose from 0 or 1
2nd collision: Choose from 0, 1, 2, 3
3rd collision: Choose from 0 ~ 7
...
10th collision and beyond: Choose from 0 ~ 1023 (n is capped at 10)
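The backoff rule above is a one-liner (a minimal sketch; names are illustrative):

```python
import random

def backoff_slots(n_collisions):
    # After the nth collision, pick K uniformly from 0 .. 2^m - 1 with
    # m = min(n, 10); Ethernet then waits K * 512 bit times.
    m = min(n_collisions, 10)
    return random.randrange(2 ** m)

# The candidate range doubles per collision and is capped after the 10th:
ranges = [(n, 2 ** min(n, 10) - 1) for n in (1, 2, 3, 10, 16)]
# [(1, 1), (2, 3), (3, 7), (10, 1023), (16, 1023)]
```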
5.5 CSMA/CD Efficiency
CSMA/CD Efficiency Formula
=============================
Efficiency = 1 / (1 + 5 * d_prop / d_trans)
d_prop: maximum propagation delay between two nodes (signal travel time across the farthest pair)
d_trans: time to transmit a maximum-size frame
Properties:
- d_prop -> 0: efficiency -> 1 (ideal)
- d_trans -> infinity: efficiency -> 1
- Efficient with large frames and short distances
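Plugging in some numbers makes the properties concrete (the link speed, frame size, and propagation delay below are assumed for illustration):

```python
def csma_cd_efficiency(d_prop, d_trans):
    # Kurose/Ross approximation: efficiency = 1 / (1 + 5 * d_prop / d_trans)
    return 1 / (1 + 5 * d_prop / d_trans)

# Assumed example: 10 Mbps link, 1500-byte frame, 25.6 us worst-case delay.
d_trans = 1500 * 8 / 10e6   # 1.2 ms to transmit the frame
d_prop = 25.6e-6
eff = csma_cd_efficiency(d_prop, d_trans)   # ~0.90: large frames help
```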
6. Taking-Turns Protocols
6.1 Polling
A master node designates each slave node in order, granting transmission opportunities.
Polling Protocol
==================
[Master]
|
|--> "Node A, your turn" --> [Node A] transmits
|
|--> "Node B, your turn" --> [Node B] transmits (no data, pass)
|
|--> "Node C, your turn" --> [Node C] transmits
|
|--> Back to Node A...
Disadvantages:
- Polling overhead (polling messages)
- Polling latency (waiting for turn)
- Single point of failure if master node fails
6.2 Token Passing
A special token frame circulates among nodes, and only the node holding the token can transmit.
Token Passing Protocol
========================
[A] --> [B] --> [C] --> [D]
 ^                       |
 +-----------------------+
     (Token circulation)
Operation:
1. Token arrives at A
2. If A has data: transmit data, then pass token to B
3. If A has no data: pass token directly to B
4. Repeat
Advantages: Fairness guaranteed, no collisions
Disadvantages:
- Token overhead
- Recovery needed if token is lost
- Network failure if one node fails
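The take-then-pass logic of one token circulation can be sketched as follows (queues and names are hypothetical):

```python
from collections import deque

def token_ring_round(queues):
    # One full token circulation: each node transmits everything it has
    # queued while holding the token, then passes the token to the next node.
    transmitted = []
    for node, q in queues.items():   # dict order = ring order
        while q:
            transmitted.append((node, q.popleft()))
    return transmitted

queues = {"A": deque(["a1"]), "B": deque(), "C": deque(["c1", "c2"])}
sent = token_ring_round(queues)
# [('A', 'a1'), ('C', 'c1'), ('C', 'c2')] -- B passes the token straight on
```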
7. Multiple Access Protocol Comparison
Protocol Comparison Summary
==============================
Protocol      | Efficiency | Delay       | Fairness | Complexity
--------------+------------+-------------+----------+-----------
TDMA          | Low (1/N)  | Guaranteed  | Fair     | Simple
FDMA          | Low (1/N)  | Guaranteed  | Fair     | Simple
Pure ALOHA    | ~18%       | Variable    | Fair     | Simple
Slotted ALOHA | ~37%       | Variable    | Fair     | Simple
CSMA/CD       | High       | Variable    | Fair     | Medium
Polling       | High       | Poll delay  | Variable | Medium
Token Passing | High       | Token delay | Fair     | High
8. Summary
| Concept | Key Points |
|---|---|
| Parity Bit | Simplest error detection, misses even-count errors |
| CRC | Polynomial division based, strong burst error detection |
| CSMA/CD | Sense then transmit, stop immediately on collision + backoff |
| Slotted ALOHA | Slot synchronization for 37% efficiency |
| Binary Exp. Backoff | Wait range doubles with each collision |
| Polling | Master assigns turns, single point of failure risk |
| Token Passing | Only token holder transmits, fair but has overhead |
In the next post, we will examine Ethernet, link-layer switches, and VLANs as representative implementations of the link layer.
References
- James F. Kurose, Keith W. Ross, "Computer Networking: A Top-Down Approach", 6th Edition, Chapter 5
- IEEE 802.3 - Ethernet Standard
- Andrew S. Tanenbaum, "Computer Networks", 5th Edition, Chapter 4