The UDP datagram consists of a header and an optional data field. The format of the datagram is as follows:
Let’s concentrate on the UDP header:
The UDP header overhead is always 8 bytes, since the header has a fixed size. You can read that off the UDP header figure below by multiplying the number of rows by the size of each row: there are 2 rows, and each row is 32 bits wide (from bit number 0 to bit number 31), i.e. 4 bytes. So 2 * 4 = 8 bytes.
– The UDP header contains four fields:
- source port,
- destination port,
- length: the size of the UDP header plus the data,
- checksum: covers the integrity of both header and data. If the sender does not compute a checksum, this field is all zeros.
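The four fields above, each 16 bits wide, can be sketched with Python's `struct` module. The function name and port values here are illustrative, not part of any standard API:

```python
import struct

def build_udp_header(src_port: int, dst_port: int, payload: bytes, checksum: int = 0) -> bytes:
    """Pack the four 16-bit UDP header fields in network byte order."""
    length = 8 + len(payload)  # the length field covers header (8 bytes) + data
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

# Example: a 5-byte payload gives length = 8 + 5 = 13
hdr = build_udp_header(5353, 53, b"query")
```

Unpacking `hdr` with `struct.unpack("!HHHH", hdr)` recovers the four fields, and `len(hdr)` confirms the 8-byte overhead.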
– provides multiplexing and demultiplexing of applications
– provides integrity verification (checksum)
– does not provide delay guarantees (neither does TCP)
– does not provide retransmission of lost segments or corrupted segments. It does not even know if a segment is lost!
– does not provide any mechanism to verify in-order or out-of-order segments (sequence numbers)
– does not provide flow control: the sender can send at any rate
– used by applications that rely on a request/response model, such as DNS, DHCP, and NTP, where the request is contained in a single UDP datagram
– used by IP Telephony applications, some routing protocols, broadcast and multicast communications,…
– can be seen as simply a wrapper and multiplexer/demultiplexer for application data
– UDP header overhead is 8 bytes, while TCP header overhead is typically 20 bytes
– to circumvent the lack of reliable transfer, some applications that run on top of UDP implement their own acknowledgment and retransmission mechanisms
– UDP checksum is calculated on both the sender and the receiver. The sender takes the one's complement of the one's complement sum of the segment's 16-bit words and places it in the checksum field. The receiver sums all 16-bit words, including the checksum: if the result is an "all-1" binary number, then there are "theoretically" no errors in the segment; if any bit is 0, an error has occurred. Once an error is detected, the segment is either discarded or passed with a warning to the application layer, depending on the UDP implementation
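The checksum arithmetic can be sketched in a few lines of Python. This is a minimal illustration of the one's complement sum over 16-bit words (in the style of RFC 1071), applied to raw bytes rather than a full UDP segment with its pseudo-header:

```python
def inet_checksum(data: bytes) -> int:
    """One's complement of the one's complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF                    # complement: sender stores this value

# Sender side: compute the checksum over the segment's words
csum = inet_checksum(b"\x12\x34\x56\x78")

# Receiver side: sum every word *including* the checksum; "all 1s" means no error
recv_total = 0
for word in (0x1234, 0x5678, csum):
    recv_total += word
    recv_total = (recv_total & 0xFFFF) + (recv_total >> 16)
```

If no bits were corrupted in transit, `recv_total` comes out as `0xFFFF`, the all-1 pattern described above; any zero bit in the result signals an error.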
Why does UDP provide an error detection mechanism when error detection mechanisms already exist at the Link layer? Because the Transport layer was designed on the assumption that some links lack error detection, since the Network layer can run over any Link layer.
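The request/response pattern and the application-level retransmission idea from the bullets above can be sketched with Python's `socket` module. This is a toy loopback exchange, with both ends in one process for simplicity; the message contents, port choice (0 lets the OS pick a free port), timeout, and retry count are all illustrative assumptions:

```python
import socket

# "Server" end: bind a UDP socket on loopback
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
addr = server.getsockname()

# "Client" end: a timeout stands in for detecting a lost datagram
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(0.5)

request = b"what time is it?"
reply = None
for attempt in range(3):                  # application-level retransmission loop
    client.sendto(request, addr)          # the whole request fits in one datagram
    data, peer = server.recvfrom(512)     # server receives the request...
    server.sendto(data.upper(), peer)     # ...and answers in a single datagram
    try:
        reply, _ = client.recvfrom(512)
        break                             # got a reply: stop retrying
    except socket.timeout:
        continue                          # no reply in time: retransmit

client.close()
server.close()
```

UDP itself never retries; the `for` loop is the application compensating for possible loss, which is exactly how DNS resolvers behave when a query goes unanswered.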