Source

This document was originally shared on the MoQ IETF mailing list. This is a verbatim copy of the original, preserved here so it does not depend on Google Docs.

Introduction

This is an attempt to document the issues that Twitch/Amazon IVS has encountered with various distribution protocols over the last 8 years.

HLS

We initially used RTMP for distribution but switched to HLS something like 8 years ago. We use MSE to feed the player buffer on web platforms. The issues below assume 2s segments.
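
For illustration, here is a minimal sketch of how a web player can feed fetched HLS segments into an MSE SourceBuffer. The codec string, segment names, and overall structure are assumptions for the sketch, not Twitch's actual player code.

```typescript
// Minimal MSE feeding sketch; codec string and segment URLs are assumptions.
const video = document.querySelector("video") as HTMLVideoElement;
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener("sourceopen", async () => {
  // fMP4 segments, assumed to be H.264 + AAC.
  const buffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001f, mp4a.40.2"');

  // Fetch each 2s segment listed in the playlist and append it once the previous append finishes.
  for (const url of ["init.mp4", "segment1.m4s", "segment2.m4s"]) {
    const segment = await (await fetch(url)).arrayBuffer();
    buffer.appendBuffer(segment);
    await new Promise((resolve) => buffer.addEventListener("updateend", resolve, { once: true }));
  }
});
```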

Congestion

Latency

Time to Video

Clients

LHLS

We made our own fork of HLS to address some of the issues mentioned above. Segments are advertised in the playlist ahead of time and delivered frame-by-frame using HTTP chunked transfer encoding.
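
A rough sketch of the client side, assuming fMP4 segments: the response body is read incrementally and each chunk is appended to MSE as it arrives, instead of waiting for the full segment to download. The appendChunk helper (append to a SourceBuffer, wait for updateend) is hypothetical.

```typescript
// Read an in-progress LHLS segment with fetch() and append chunks to MSE as they arrive.
// appendChunk() is a hypothetical helper that appends to a SourceBuffer and waits for updateend.
async function playLiveSegment(url: string, appendChunk: (chunk: Uint8Array) => Promise<void>) {
  const response = await fetch(url);
  const reader = response.body!.getReader();

  // The server flushes each frame as an HTTP chunk, so data becomes available
  // while the segment is still being produced.
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    await appendChunk(value);
  }
}
```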

AS COMPARED TO HLS

Congestion

Latency

Clients

Performance

LL-HLS

Apple went ahead and made their own low-latency HLS solution. Segments are split into sub-segments and updates are requested more frequently. We have not implemented this yet, so some of these points may be inaccurate or missing. These points assume 2s segments and 500ms sub-segments.
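
Since we have not implemented LL-HLS, the following is only a hypothetical sketch of the general shape: poll the playlist roughly once per sub-segment duration and fetch any newly advertised sub-segments. The fetchPlaylist, newPartsIn, and appendChunk helpers are assumptions.

```typescript
// Hypothetical LL-HLS-style polling loop: refresh the playlist roughly every
// sub-segment duration and fetch any newly advertised 500ms sub-segments.
// All helper functions are assumptions, not a real implementation.
async function pollSubSegments(
  playlistUrl: string,
  fetchPlaylist: (url: string) => Promise<string>,
  newPartsIn: (playlist: string) => string[],
  appendChunk: (chunk: Uint8Array) => Promise<void>,
) {
  const partDurationMs = 500; // assumed sub-segment duration from the text above

  while (true) {
    const playlist = await fetchPlaylist(playlistUrl);
    for (const partUrl of newPartsIn(playlist)) {
      const part = new Uint8Array(await (await fetch(partUrl)).arrayBuffer());
      await appendChunk(part);
    }
    await new Promise((resolve) => setTimeout(resolve, partDurationMs));
  }
}
```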

AS COMPARED TO LHLS

Congestion

Latency

Clients

Performance

WebRTC

We decided that the only way to further reduce latency was to use WebRTC. This project used WebRTC for last-mile delivery only; our existing video system (RTMP ingest, HLS distribution) handled everything prior. We tried twice: once using libwebrtc and once using a heavily forked pion.

Some of these issues would not have been present had we replaced our entire video pipeline with WebRTC instead of using this hybrid approach. That would have been a gigantic undertaking and was absolutely not feasible.

AS COMPARED TO LHLS

Congestion

Latency

Quality

Time to Video

Clients

Features

Performance

Frames over WebRTC data channels

When WebRTC was not working out, we tried switching to WebRTC data channels (SCTP over DTLS). Each frame was sent as a single data channel message, and these frames could then be fed into the player via MSE.

It didn’t work. SCTP deadlocks when messages are too large because they count towards flow control until fully received. The flow control limits in Chrome and Firefox are hard-coded and are often smaller than a single I-frame. SCTP cannot drop messages out of order.
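
For reference, here is a minimal sketch of the receive path just described, assuming an already-negotiated RTCPeerConnection and reusing the hypothetical appendChunk MSE helper from the earlier sketches.

```typescript
// Receive one frame per data channel message and feed it into MSE.
// `pc` is assumed to be an already-negotiated RTCPeerConnection;
// appendChunk() is the hypothetical MSE append helper from the earlier sketches.
function receiveFrames(pc: RTCPeerConnection, appendChunk: (chunk: Uint8Array) => Promise<void>) {
  pc.ondatachannel = (event) => {
    const channel = event.channel;
    channel.binaryType = "arraybuffer";

    channel.onmessage = (msg) => {
      // One message == one frame. Large I-frames are where this falls over,
      // since they exceed the hard-coded SCTP flow control limits described above.
      void appendChunk(new Uint8Array(msg.data as ArrayBuffer));
    };
  };
}
```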

RTP over WebRTC data(grams)

Since data channels weren’t working as intended, we decided to send each RTP packet as an unreliable message. The application then reassembled these packets and fed the result into the player.
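
A sketch of the receive side, assuming the unreliable messages are carried over a data channel configured with no retransmits and no ordering; depacketizeRtp is a hypothetical helper that reassembles RTP packets into complete frames.

```typescript
// Receive one RTP packet per unreliable, unordered data channel message,
// reassemble frames in the application, and hand them to the player.
// depacketizeRtp() and appendChunk() are hypothetical helpers.
function receiveRtp(
  pc: RTCPeerConnection,
  depacketizeRtp: (packet: Uint8Array) => Uint8Array | null, // returns a frame once complete
  appendChunk: (frame: Uint8Array) => Promise<void>,
) {
  // ordered: false + maxRetransmits: 0 gives lossy, unordered (datagram-like) delivery.
  const channel = pc.createDataChannel("rtp", { ordered: false, maxRetransmits: 0 });
  channel.binaryType = "arraybuffer";

  channel.onmessage = (msg) => {
    const frame = depacketizeRtp(new Uint8Array(msg.data as ArrayBuffer));
    if (frame) void appendChunk(frame);
  };
}
```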

AS COMPARED TO LHLS

Congestion

Latency

Time to Video

Features

Performance

Warp

Warp is conceptually similar to LHLS, but segments are pushed in parallel over QUIC/WebTransport. Stream prioritization keeps segments from fighting over bandwidth, so newer media (especially audio) is delivered first during congestion.
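
A sketch of what a Warp-style receiver could look like, assuming each segment is pushed on its own WebTransport unidirectional stream and the server handles prioritization; appendChunk is the same hypothetical MSE helper as before.

```typescript
// Warp-style receiver sketch: each segment arrives on its own unidirectional
// QUIC stream, so the client just reads incoming streams as they appear.
// Server-side prioritization decides which stream gets bandwidth under congestion.
// appendChunk() is the hypothetical MSE append helper from the earlier sketches.
async function receiveSegments(url: string, appendChunk: (chunk: Uint8Array) => Promise<void>) {
  const transport = new WebTransport(url);
  await transport.ready;

  const streams = transport.incomingUnidirectionalStreams.getReader();
  while (true) {
    const { value: stream, done } = await streams.read();
    if (done) break;

    // Each incoming stream carries one segment; append its chunks as they arrive.
    // A real player would also keep segments ordered before appending to MSE.
    void (async () => {
      const reader = stream.getReader();
      while (true) {
        const { value, done: streamDone } = await reader.read();
        if (streamDone) break;
        await appendChunk(value);
      }
    })();
  }
}
```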

AS COMPARED TO LHLS

Congestion

Latency

Time to Video

Clients

Features

Performance