Transmuxing, also referred to as repackaging or packetizing, is the process of repackaging audio and video files into different delivery formats without changing the files’ contents.
The primary function of transmuxing is to change the container or delivery format of video and audio while preserving the original content. Although transmuxing is similar to transcoding, it is generally considered to be more lightweight and requires less computing power because it does not decode and re-encode the compressed video or audio streams.
Container, or file, formats are often confused with video formats, which are compressed with video codecs. For example, Flash Video (.flv) is a container format that can carry video compressed with the H.264 codec and audio compressed with the AAC codec. When you convert a .flv file to a .mov file through transmuxing, you’re only changing the container format; the compressed streams inside are left untouched. Converting the video itself from H.264 to VP6, another video codec compatible with Flash files, would be considered transcoding.
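To make the distinction concrete, here is a minimal sketch of that .flv-to-.mov transmux with FFmpeg. The first command only generates a short synthetic test clip (the `testsrc` and `sine` sources are stand-ins so the example is self-contained; a real workflow would start from an existing .flv file). The second command is the actual transmux: `-c copy` copies the compressed streams into the new container without re-encoding.

```shell
# Generate a short H.264/AAC test clip in an FLV container
# (synthetic video and audio sources, used here only as sample input)
ffmpeg -f lavfi -i testsrc=duration=2:size=320x240:rate=30 \
       -f lavfi -i sine=frequency=440:duration=2 \
       -c:v libx264 -pix_fmt yuv420p -c:a aac -y input.flv

# Transmux: copy the compressed H.264/AAC streams into a QuickTime
# (.mov) container -- only the packaging changes, not the content
ffmpeg -i input.flv -c copy -y output.mov
```

Because no decoding or encoding takes place, the second command completes almost instantly even for large files, which is exactly why transmuxing is so much cheaper than transcoding.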
Transmuxing is particularly critical for live streaming to ensure that every viewer has the best experience possible. FFmpeg is popular open source software that is leveraged by many organizations for live stream transmuxing. It utilizes demuxers to read multimedia streams and muxers to write multimedia streams. For example, FFmpeg can receive an RTMP (Real Time Messaging Protocol) stream and convert it to an HTTP-based media streaming protocol, which allows for adaptive bitrate streaming.
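A sketch of that RTMP-to-HTTP conversion might look like the following. The ingest URL, segment length, and output path are illustrative placeholders, and the command assumes the incoming stream is already encoded in codecs the target container supports (e.g., H.264/AAC), so `-c copy` can repackage it without re-encoding.

```shell
# Sketch: receive an RTMP ingest and repackage it as HLS on the fly.
# rtmp://ingest.example.com/live/streamKey is a hypothetical ingest URL.
ffmpeg -i rtmp://ingest.example.com/live/streamKey \
       -c copy \
       -f hls -hls_time 4 -hls_list_size 5 \
       /var/www/stream/live.m3u8
```

Because the streams are only repackaged, a single modest server can handle this in real time, which is what makes transmuxing practical for live events.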
Let’s look at two different streaming protocols that FFmpeg can facilitate.
Smooth Streaming is an Internet Information Services (IIS) extension that enables adaptive streaming over HTTP. Using FFmpeg, video streams can be converted to MOV, MP4, and ISMV formats, and Smooth Streaming content can be pushed to a publishing point using a simple command:
ffmpeg -re -i inputFile.mp4 -movflags isml+frag_keyframe -f ismv http://server/publishingpoint.isml/Streams(Encoder1)
Microsoft Smooth Streaming has been used for major livestreaming events such as the 2008 Beijing Summer Olympic Games and the 2010 Winter Olympics in Vancouver.
HTTP Live Streaming (HLS) supports both live and video-on-demand content by breaking streams into small MPEG-2 Transport Stream (.ts) files, called video chunks, at varying bit rates. An index file with the .m3u8 extension is then produced that lists each chunk and its duration, acting as a playlist the player uses to request chunks in order. Video streams can be converted and broken into appropriate segments by FFmpeg with a command such as:
ffmpeg -i inputFile.mkv -c:v libx264 -flags +cgop -g 30 -hls_time 1 outputFile.m3u8
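The .m3u8 index itself is just a plain-text playlist. A minimal example looks roughly like this (segment names, durations, and the number of entries are illustrative; FFmpeg derives the default segment filenames from the playlist name):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:1
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:1.000000,
outputFile0.ts
#EXTINF:1.000000,
outputFile1.ts
#EXT-X-ENDLIST
```

Players fetch this index, then request each listed .ts chunk over plain HTTP, switching between playlists of different bit rates as network conditions change.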
HLS has been adopted by some of the largest video and audio broadcasters, such as Music Choice and Fox Networks, but is also used by companies you wouldn’t expect, such as the lifestyle trendsetter Hurley.
Repackaging video and audio files into different delivery formats gives you the flexibility to deliver content in the most effective format possible. Some cloud providers have built-in transmuxing services that remove the hassle of working with FFmpeg, but they often come with a large price tag.
Although StackPath’s edge compute products do not support built-in transmuxing at this time, our content delivery network can deliver transmuxed video and audio quickly and reliably at the edge, providing a superior end-user experience.