What is Livestreaming?
Streaming is the transport of recorded or live digital media, such as video or audio, over the internet. Some common forms of streaming content are podcasts, TV shows, music videos, and webcasts.
Livestreaming is a form of streaming in which the media is captured and broadcast in real time, without first being recorded and stored, and without substantial delay. While on-demand streams are pre-recorded and can be edited, livestreams go out unedited. Livestreams are sent from servers to end users in real time using standard web protocols.
With advancements in streaming servers, cable modems, broadband networks, and compression algorithms for video and audio, livestreaming has become a popular method of delivering quality video and audio over the internet.
How Does Livestreaming Work?
Livestreaming involves broadcasting a live event as it happens. In livestreaming, a camera or another input device captures video. The video is then encoded and compressed using an appropriate codec, which reduces the data size without significantly compromising quality, and packaged into a format that can be streamed over the internet. The resulting stream is distributed over the internet for playback in media players.
This section explores the different stages of a livestreaming workflow.
Livestreaming workflows begin with a camera (the video source) and a microphone (the audio source). For short recordings, you can use a simple setup like a laptop’s integrated webcam and its built-in microphone, or you can plug a camera and microphone into your streaming PC. For large productions, you may need professional-grade cameras, camcorders, action cameras, and broadcast-quality microphones.
Video capture can also use multiple cameras. Cameras used to film live events need an HDMI or SDI output to send footage directly to a vision mixer. Vision mixers and switchers draw from different live video sources to create a single master output, and the switcher lets you cut between camera feeds in real time.
In addition to hardware switchers, you can use software switchers, which don’t require an HDMI or SDI connection. Once the switcher software is installed on your streaming device, configuring multiple cameras becomes easy. vMix, OBS Studio, and other modern encoding software have built-in switching capabilities and don’t require separate switching software.
IP or network cameras can also be used for livestreaming. These cameras can have a video encoder built in, eliminating the need for a separate encoder and reducing the hardware required in your workflow. Most IP cameras support H.264 (Advanced Video Coding) encoding and the Real-Time Streaming Protocol (RTSP).
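For illustration, here’s a minimal Python sketch that pulls frames from a network camera over RTSP using OpenCV, whose FFmpeg backend can open RTSP URLs. The camera address, path, and credentials are placeholders you’d replace with your camera’s own:

```python
import cv2  # pip install opencv-python

# Hypothetical RTSP URL; real cameras document their own path and credentials.
RTSP_URL = "rtsp://user:password@192.168.1.42:554/stream1"

cap = cv2.VideoCapture(RTSP_URL)  # OpenCV's FFmpeg backend handles RTSP
if not cap.isOpened():
    raise RuntimeError("Could not connect to the camera stream")

while True:
    ok, frame = cap.read()  # pull the next decoded frame
    if not ok:
        break  # stream ended or connection dropped
    cv2.imshow("IP camera preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to stop
        break

cap.release()
cv2.destroyAllWindows()
```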
Encoding and Compression
After video capture comes encoding and compression. The raw captured data is large and would consume significant storage space and bandwidth if streamed over the internet as-is, so it must be compressed to remove redundancies and improve transmission efficiency. Compression reduces the size of the digital stream by removing unnecessary information without significantly compromising quality.
Encoding is the process of compressing video files. It converts raw audio and video into a compact digital format that can be played on various devices, such as smartphones and laptops, using a codec that’s either built into or installed on the streaming device. During encoding, the compression algorithm searches for patterns that can be represented by a repeatable formula and for pieces of data that can be removed from the file altogether. Because some data is discarded, this is called lossy compression.
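As a rough sketch of this step, the following Python snippet shells out to FFmpeg (assuming it’s installed and on your PATH) to encode a captured file to H.264 video and AAC audio. The filenames and bitrates are illustrative:

```python
import subprocess

# Encode a captured file to H.264 (AVC) video and AAC audio.
# Input/output names and the bitrate figures are illustrative.
subprocess.run(
    [
        "ffmpeg",
        "-i", "capture.mov",  # raw or lightly compressed capture
        "-c:v", "libx264",    # H.264 video codec (lossy compression)
        "-b:v", "2500k",      # target video bitrate
        "-c:a", "aac",        # AAC audio codec
        "-b:a", "128k",       # target audio bitrate
        "encoded.mp4",
    ],
    check=True,
)
```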
To ensure that the compressed video can be played back on different devices and platforms, the digital file is converted into different formats via transcoding.
In transcoding, the encoder sends the encoded livestream to a transcoder, which modifies attributes such as bitrate, resolution, file container, and codec. While this step is not compulsory, it’s crucial in most streaming scenarios to ensure a large base of end users can receive the stream: by converting the livestream into different resolutions and bitrates, you accommodate users whose devices require different video formats.
Video segmentation involves breaking a video stream into smaller chunks, each containing several seconds of video. Segmentation is handled by the streaming protocol, and the segmented video is then ready to be distributed to the audience.
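To make transcoding and segmentation concrete, here’s a Python sketch that uses FFmpeg’s HLS muxer to transcode a source into three renditions and cut each into roughly six-second segments. It operates on a file for simplicity; a live workflow would read from the encoder’s output instead, and the rendition ladder below is an assumption, not a standard:

```python
import subprocess

# Illustrative rendition ladder: (name, resolution, video bitrate).
RENDITIONS = [
    ("1080p", "1920x1080", "5000k"),
    ("720p",  "1280x720",  "2800k"),
    ("480p",  "854x480",   "1400k"),
]

for name, size, bitrate in RENDITIONS:
    # Transcode to the target resolution/bitrate, then let FFmpeg's HLS
    # muxer cut the output into ~6-second segments plus a playlist.
    subprocess.run(
        [
            "ffmpeg", "-i", "encoded.mp4",
            "-c:v", "libx264", "-b:v", bitrate, "-s", size,
            "-c:a", "aac", "-b:a", "128k",
            "-f", "hls",                 # HLS segmenter/muxer
            "-hls_time", "6",            # target segment length in seconds
            "-hls_playlist_type", "vod",
            f"{name}.m3u8",              # playlist; segments land beside it
        ],
        check=True,
    )
```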
Content Delivery Network
Once a video is compressed and segmented, it is distributed to users in different locations using a content delivery network (CDN). A CDN ensures that end users can load the livestream; it offers far more bandwidth for distributing the stream than a single origin server. By caching and serving the content through a CDN, the highest video quality is maintained at minimal latency.
Decoding involves converting the encoded digital stream back into video, audio, titles, and closed captions for display on viewers’ devices. Whether you use a decoder blade, software, or a hardware device to decompress the digital stream, the option you choose must be fast enough to allow smooth video playback.
The livestreaming workflow (video capture, encoding, compression, transcoding, segmentation, distribution through a CDN, and decoding) ends with playback. Playback is enabled by a video player, which serves as the medium through which end users interact with the livestream. Video players should be user-friendly and work across many devices.
To work properly, livestreaming requires you to monitor and balance bandwidth, buffering, packet loss, and latency. If you don’t pay close attention to these things, your livestreams may lack clarity, have limited accessibility, and ultimately reach fewer viewers.
However, it’s challenging to strike a balance between these livestreaming requirements. For example, because video encoding uses lossy compression, reducing the file size aggressively inevitably causes some noticeable quality degradation. Similarly, fixing packet loss (for instance, by retransmitting lost packets) while maintaining low latency is not easy, since retransmission adds delay.
Fortunately, streaming software can help find a balance between these livestreaming concerns. For instance, streaming software can regulate video or audio resolution to suit the available network strength. This software is equipped with adaptive bitrate streaming functionality to eliminate buffering and relies on adaptive streaming protocols to protect against packet loss.
To avoid livestreaming interruptions due to bandwidth fluctuations, you should set the target bitrate to half your available upload bandwidth or lower. Bitrate is the number of bits encoded per unit of time, measured in bits per second (bps); video streams are commonly measured in kilobits or megabits per second, and audio streams in kilobits per second.
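As a quick worked example of that rule of thumb, this small Python function (the names and figures are illustrative) derives a target bitrate from the available upload bandwidth:

```python
def target_bitrate_kbps(upload_bandwidth_kbps: float, safety_factor: float = 0.5) -> float:
    """Apply the rule of thumb: stream at no more than half your upload bandwidth."""
    return upload_bandwidth_kbps * safety_factor

# Example: a 10 Mbps (10,000 kbps) upload connection.
upload = 10_000
print(f"Target bitrate: {target_bitrate_kbps(upload):.0f} kbps")  # 5000 kbps
```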
SD (standard definition) streams are used in areas with slow internet speeds, have a maximum resolution of 480p, and commonly use a 4:3 aspect ratio. The resolution for HD (high definition) streams ranges from 720p to 1080p, while the top option in video streaming, 4K streaming, has two standard resolutions of 4096 x 2160 and 3840 x 2160.
The physical distance between the audience and the media server influences how long it takes to deliver the livestream. If the audience is far from the media server, they may experience buffering and latency. Low latency is required to deliver content in near real time.
There are technologies and tools available to minimize latency. For example, a CDN can solve latency issues associated with the global delivery of content streaming by geographically distributing media servers. This avoids the bottleneck of traffic that comes with delivering live media from a single server.
Streaming software has adaptive bitrate streaming functionality that provides a variety of video and audio resolutions depending on the available processing power, connectivity, and display. This eliminates stream interruptions and buffering and allows even users with unstable internet connectivity to watch the stream. It also automatically adjusts the rendition upward as internet connectivity improves.
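The following simplified Python sketch illustrates the kind of decision an adaptive bitrate player makes; real players use far more sophisticated heuristics, and the rendition ladder and headroom factor here are assumptions:

```python
# Illustrative rendition ladder, sorted highest first: bitrate in kbps -> name.
LADDER = [
    (5000, "1080p"),
    (2800, "720p"),
    (1400, "480p"),
    (400,  "240p"),
]

def pick_rendition(measured_throughput_kbps: float, headroom: float = 0.8) -> str:
    """Choose the highest rendition whose bitrate fits under the measured
    throughput, keeping headroom to absorb bandwidth fluctuations."""
    budget = measured_throughput_kbps * headroom
    for bitrate, name in LADDER:
        if bitrate <= budget:
            return name
    return LADDER[-1][1]  # fall back to the lowest rendition

print(pick_rendition(4000))  # "720p": 2800 kbps fits within 4000 * 0.8 = 3200
```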
Every time you access a network or the internet, small units of data, also known as packets, are sent and received. The failure of a packet to reach the intended destination is called packet loss.
Streaming relies on real-time packet processing, and when packet loss degrades perceptual quality or causes loss of data in transit, the streaming process is disrupted. Techniques such as multiple description coding (MDC) maximize video or audio presentation quality in the presence of packet loss, alongside error-control mechanisms such as error concealment, error-resilient encoding, and retransmission.
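As a simplified illustration of how a receiver detects loss in the first place, the Python sketch below counts gaps in RTP-style sequence numbers, which is the raw signal that mechanisms like retransmission act on. It assumes in-order delivery and ignores the 16-bit sequence wraparound a real RTP receiver must handle:

```python
def count_lost_packets(sequence_numbers):
    """Count gaps in received RTP-style sequence numbers.

    Simplified sketch: assumes in-order delivery and ignores the 16-bit
    sequence number wraparound that a real RTP receiver must handle.
    """
    lost = 0
    for prev, curr in zip(sequence_numbers, sequence_numbers[1:]):
        if curr > prev + 1:
            lost += curr - prev - 1  # packets skipped between prev and curr
    return lost

# Packets 103 and 106 never arrived.
print(count_lost_packets([100, 101, 102, 104, 105, 107]))  # 2
```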
Video playback can be limited by the bandwidth available to the service or by other factors such as screen resolution or frame rate, and these limitations often cannot be adjusted at the time of broadcast. Livestreaming protocols enable video content providers and viewers to adapt their experience to factors outside their control. They also help ensure the security and privacy of streams.
Protocols allow communication between streaming servers and clients by handling functions such as session control, transport, and network addressing. Protocols are layered on top of one another so that each layer concentrates on a specific function.
The network layer handles basic network support functions such as addressing; the Internet Protocol (IP) is the network-layer protocol used in livestreaming. For end-to-end functionality, lower-layer transport protocols such as the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) run above IP, and upper-layer protocols such as the Real-Time Transport Protocol (RTP) and its companion Real-Time Control Protocol (RTCP) run on top of those transport protocols.
Livestreaming protocols can be HTTP-based, relying on regular web servers, or they can use dedicated streaming servers to optimize the streaming experience. Some popular livestreaming protocols include:
HTTP Live Streaming (HLS)
HLS is an HTTP-based protocol that supports adaptive bitrate streaming and delivers streams that can be played on most Windows, macOS, Linux, and Android devices, as well as in the Google Chrome browser.
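As an illustration, the short Python sketch below writes an HLS master playlist that advertises three variant streams; the tags follow the published HLS specification (RFC 8216), while the bandwidth figures, resolutions, and file names are assumptions matching the rendition ladder sketched earlier:

```python
# (peak bandwidth in bits/s, resolution, variant playlist URI) - illustrative.
VARIANTS = [
    (5_128_000, "1920x1080", "1080p.m3u8"),
    (2_928_000, "1280x720",  "720p.m3u8"),
    (1_528_000, "854x480",   "480p.m3u8"),
]

lines = ["#EXTM3U"]
for bandwidth, resolution, uri in VARIANTS:
    # Each EXT-X-STREAM-INF tag describes one variant the player can pick.
    lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={resolution}")
    lines.append(uri)

with open("master.m3u8", "w") as f:
    f.write("\n".join(lines) + "\n")
```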
Real-Time Messaging Protocol (RTMP) and Real-Time Streaming Protocol (RTSP)
RTMP is another communication protocol for multimedia data transmission over the internet. It streams media files in real time at low latency and with minimal buffering, and it allows multiple users to access the same stream simultaneously.
RTSP is a session control protocol that defines the messages and procedures that control the delivery of multimedia data during an established session.
Web Real-Time Communication (WebRTC)
WebRTC is a modern communication protocol. Although it was initially designed for chat-based applications, fully interactive streaming is one of WebRTC’s significant strengths. It delivers near-instant video and audio streaming to enable real-time communication. Users can achieve latency of around 500 milliseconds with WebRTC and play streams in popular browsers without a plugin.
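For a taste of WebRTC outside the browser, here’s a minimal sketch using the aiortc Python library to create a peer connection and generate an SDP offer. The signaling step, which exchanges the offer and answer between peers, is application-specific and omitted here:

```python
import asyncio
from aiortc import RTCPeerConnection  # pip install aiortc

async def make_offer():
    pc = RTCPeerConnection()
    # Ask to receive one video track from the remote peer.
    pc.addTransceiver("video", direction="recvonly")
    offer = await pc.createOffer()
    await pc.setLocalDescription(offer)
    # In a real app, this SDP would be sent to the peer over a signaling
    # channel (e.g., a WebSocket); that exchange is omitted here.
    print(pc.localDescription.sdp[:200], "...")
    await pc.close()

asyncio.run(make_offer())
```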
- Livestreaming is a form of real-time broadcasting video or audio over the internet.
- Livestreaming happens over a series of steps, starting with video capture and continuing through encoding, compression, transcoding, segmentation, and distribution through a CDN. The digital stream goes to clients for decoding, and the process ends with playback on the user’s device.
- There are various concerns in livestreaming, including bandwidth, buffering, packet loss, and latency.
- Livestreaming protocols ensure that video playback is adapted to the available bandwidth and latency and reaches a broad audience. They also ensure the stream is secure.
- WebRTC, RTMP and RTSP, and HLS are common livestreaming protocols. HLS assures compatibility with most browsers, devices, and platforms. RTMP allows for low latency streams, while WebRTC is suitable for interactive streaming.