- most applications (web browsing, file transfer, …) are not sensitive to variations of the end-to-end delay. Real-time applications, by contrast, are affected by these variations (jitter)
- packet switches have buffers; end-host real-time applications have playback buffers, maintained in memory, which hold a reserve of received packets
- The purpose of the playback buffers is to absorb the variation of the end-to-end delay, not the delay itself.
- in the case of a real-time application (such as YouTube), the server streams at a fixed rate. The number of bytes received by the client, however, varies in time, because each byte experiences a variable network delay between the server and the client
- The client plays back data from the playback buffer at a fixed rate.
- If the playback buffer is very large, playback starts long after reception begins, so there is a long delay between the time the server streams the data and the time the client plays it back. If the buffer is too small, a spike in the network delay can drain it completely: no bytes are available to play and playback stalls (the "buffering" phenomenon).
- the variable queueing delays on a network path are experienced at the packet switches, not at the end hosts.
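The tradeoff above can be sketched with a small simulation (a toy model, not any real player's logic): packets are sent at a fixed rate, each suffers a random network delay, and playback of packet i is scheduled a fixed startup delay after its send time. The `startup_delay` parameter plays the role of the playback buffer's depth; all names and the uniform jitter model are assumptions for illustration.

```python
import random

def simulate(num_packets=200, interval=0.02, startup_delay=0.1,
             max_jitter=0.15, seed=1):
    """Toy model of a fixed-rate stream crossing a path with jitter.

    Packet i is sent at i*interval and arrives at send_time + delay,
    where delay varies per packet. Playback of packet i happens at
    startup_delay + i*interval (fixed playback rate). An underrun
    ("buffering") occurs whenever a packet arrives after its deadline.
    """
    rng = random.Random(seed)
    underruns = 0
    for i in range(num_packets):
        send_time = i * interval
        delay = rng.uniform(0.0, max_jitter)      # variable network delay
        arrival = send_time + delay
        deadline = startup_delay + i * interval   # fixed playback schedule
        if arrival > deadline:
            underruns += 1                        # data missing at play time
    return underruns

# A startup delay at least as large as the worst-case jitter absorbs
# all variation (no stalls), at the cost of a longer end-to-end lag;
# a small startup delay stalls whenever the jitter exceeds it.
print(simulate(startup_delay=0.2))   # buffer covers worst-case jitter
print(simulate(startup_delay=0.05))  # buffer too shallow, some stalls
```

Note that only the *variation* of the delay matters here: adding a constant to every packet's delay shifts both arrivals and the needed startup delay equally, which mirrors the point that the playback buffer absorbs jitter, not the delay itself.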