Taming Video Delivery Through HTTP Live Streaming

Written by: Shreyas Hirday


Swipe Night is an interactive, apocalyptic-themed in-app video experience that was available Sunday nights from 6pm to midnight in the U.S. throughout October. A significant part of the feature involved streaming high-production-value video content, on demand, with very little lag. Given that streaming was new to us and to Tinder, it was essential that we gained the required knowledge, used the right tools, and ran the correct tests to ensure we delivered a top-notch experience at launch.

Content Delivery Goals

There are multiple ways to deliver video content, each involving a different level of effort, so we needed a way to evaluate and eliminate potential strategies. We established goals that defined effective video content delivery in a way that served the needs of this specific experience:

1. Dynamic: We should be able to change content remotely, on the fly.

2. Efficient: Memory usage should be minimized — content not intended to be viewed is not stored on the device.

3. Seamless: There should be little to no interruption during playback.

The simplest strategy to deliver the content was to include the videos inside the app bundle, which would be downloaded when members installed and updated Tinder. However, doing so would have violated our first two goals: changing content on the device would require another app release, and the content would take up disk space on the device for people who never participated in the new feature. We also considered pre-downloading the video content, perhaps well before launch, but this, again, would waste storage on the device for members who did not participate. Additionally, people with subpar network conditions would suffer, and new members would be required to wait for the content to download before getting access to the feature, given that they had no opportunity to download the content ahead of time.

As a result of these considerations, we decided to use an adaptive bitrate protocol developed by Apple known as HTTP Live Streaming (HLS) to deliver our content.

The premise behind HLS is to have multiple copies of the same video at different levels of quality. Each level of quality is considered a “variant” or, as we like to call it, a “gear.” Although the goal is to show the highest-quality video possible, the member’s connection might not have enough bandwidth to stream that video in real time without buffering or playback errors. An HLS stream therefore provides a manifest to a video player, which includes the URL of each copy of the video as well as the bandwidth a member is expected to have in order to view that level of quality without issue. The video player can switch between the different gears as the member’s connection improves or worsens.
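For illustration, a master manifest for three hypothetical gears might look like the following (URLs, bitrates, and resolutions are made up for this sketch, not our production values); note that the variant listed first is the one a player tries first:

```
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-INDEPENDENT-SEGMENTS

#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=1200000,BANDWIDTH=1600000,RESOLUTION=640x360,CODECS="avc1.64001f,mp4a.40.2"
gear1/prog_index.m3u8
#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=2400000,BANDWIDTH=3200000,RESOLUTION=960x540,CODECS="avc1.64001f,mp4a.40.2"
gear2/prog_index.m3u8
#EXT-X-STREAM-INF:AVERAGE-BANDWIDTH=4500000,BANDWIDTH=6000000,RESOLUTION=1280x720,CODECS="avc1.64001f,mp4a.40.2"
gear3/prog_index.m3u8
```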

Transcoding

To transcode MP4 files into HLS streams, we leveraged FFmpeg, a free and open-source tool. Using the child_process module of Node.js, we developed a workflow that takes an MP4 file and a set of configurations as inputs and produces a directory containing the HLS stream as output. One configuration was provided for each gear we wanted to make available. The output directory was stored in an AWS S3 bucket and contained a master manifest (with a .m3u8 file extension), a subdirectory for subtitles, and one subdirectory for each gear.
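A minimal sketch of that workflow, assuming hypothetical gear configurations and file names (none of the values below are our production settings):

```typescript
// transcode.ts: sketch of the MP4-to-HLS workflow via FFmpeg (illustrative values)
import { execFile } from "child_process";
import { mkdir } from "fs/promises";
import { promisify } from "util";

const run = promisify(execFile);

// One configuration per gear; all values here are hypothetical.
interface GearConfig {
  name: string;         // becomes the gear's subdirectory name
  scale: string;        // e.g. "640:360", passed to -vf scale
  videoBitrate: string; // e.g. "1200k", passed to -b:v
  audioBitrate: string; // e.g. "96k", passed to -b:a
}

async function transcodeGear(input: string, outDir: string, gear: GearConfig): Promise<void> {
  const gearDir = `${outDir}/${gear.name}`;
  await mkdir(gearDir, { recursive: true }); // FFmpeg will not create directories itself
  await run("ffmpeg", [
    "-i", input,
    "-vf", `scale=${gear.scale}`,
    "-b:v", gear.videoBitrate,
    "-b:a", gear.audioBitrate,
    "-hls_time", "6",             // target segment length in seconds
    "-hls_playlist_type", "vod",
    `${gearDir}/prog_index.m3u8`, // media playlist plus segments land here
  ]);
}

// Example: produce two gears from one source file.
const gears: GearConfig[] = [
  { name: "gear1", scale: "640:360", videoBitrate: "1200k", audioBitrate: "96k" },
  { name: "gear2", scale: "960:540", videoBitrate: "2400k", audioBitrate: "128k" },
];
Promise.all(gears.map((g) => transcodeGear("episode1.mp4", "out", g)))
  .then(() => console.log("done"));
```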


Since we were aware that no single set of gears adequately supports every video use case, we invested time in understanding the different levers available when creating configurations and how they impacted the final output and experience:

  • Frame Rate: -framerate
  • Resolution: -vf scale
  • Desired Video & Audio Bitrate: -b:v & -b:a
  • Video Bitrate Variability: -maxrate & -bufsize
  • Segment Length: -hls_time, -hls_init_time, & -force_key_frames
  • Optimizations: -hls_playlist_type vod & -hls_flags independent_segments

Video content generally uses a standard frame rate of around 24 frames per second, so we used this for all gears. A higher frame rate can be used for HD / 4K movie content, but this was not required for our mobile experience.

The resolution was the most obvious way to differentiate gears. We supported a wide array of resolutions, which affected the amount of data being downloaded and the level of quality each member observed. Additionally, in HLS the first variant listed in the manifest is the one the client downloads and displays first while it gauges its available bandwidth, so we followed Apple’s guidelines and made the first variant the one whose average bitrate was closest to typical cellular bandwidth.

Keyframes are created at regular intervals, and they differ from other frames in that they carry the full set of data needed to render the frame. Other frames are rendered from the differential between the previous frame and the current one, an optimization that reduces the file size of the content. When there are significant changes between frames, as with fast action or explosions, the amount of data for that frame increases; this is why the bitrate differs between segments over the duration of the video. You can set the desired average bitrate across all segments with -b:v, and you can control the variability through -maxrate & -bufsize. A higher bitrate means more data supporting the content and therefore higher quality, but it takes more bandwidth or more time to download.

At times, though, we found a significant gap between the peak and average bitrates of some of our variants, so much so that the peak bitrate of one variant was greater than the average bitrate of the next-higher-quality variant. We knew this would be a problem for low-bandwidth members, who would struggle with the higher-bitrate segments. Setting -maxrate as a multiplier of -bufsize addressed this problem, as doing so constrains the bitrate of a given segment closer to the average bitrate, but overdoing it can negatively affect the desired quality.

Despite setting -hls_time and -hls_init_time to a specific duration, we noticed that our segment lengths were not conforming to our specification. Although FFmpeg accepts those parameters, it will only cut segments at a keyframe. Setting -force_key_frames with the value expr:gte(t,n_forced*${<hls_time_value>}) ensured that a keyframe was inserted at the interval we desired, allowing segments to be divided as we intended.
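Putting these flags together, a single-gear invocation could look like the following (the bitrates and the six-second segment length are illustrative, not our production numbers):

```
ffmpeg -i episode1.mp4 \
  -vf scale=1280:720 \
  -b:v 2000k -maxrate 2400k -bufsize 3000k \
  -b:a 128k \
  -force_key_frames "expr:gte(t,n_forced*6)" \
  -hls_time 6 -hls_init_time 6 \
  -hls_playlist_type vod \
  -hls_flags independent_segments \
  gear2/prog_index.m3u8
```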

Validation

As mentioned, the output directory has a master manifest, which contains information about each gear, such as the peak and average bitrate of all segments in that gear. The video player will use its algorithm and the information in the manifest to decide when it should switch gears and which gear it should switch to as the member’s network connection changes. It would be a waste of time and bandwidth for the video player to download all gears, analyze the bitrate of each gear, and then make a decision; so instead, the video player downloads the manifest, which summarizes the information necessary to make decisions. Therefore, an accurate manifest is required to make good decisions for the best possible experience.

To verify the accuracy of the manifest, we used Apple’s mediastreamvalidator tool, which tests the manifest by simulating a streaming experience. The tool reports its results in a validation_data.json file, and we used this information to update the manifest to ensure accuracy. The finalized manifest and the surrounding content were then published to our production S3 bucket.
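The validation step itself is a single command against the hosted stream (the URL below is a placeholder); the tool writes its findings to validation_data.json by default, and Apple’s companion hlsreport tool can render that JSON as a readable report:

```
mediastreamvalidator https://cdn.example.com/swipenight/master.m3u8
hlsreport validation_data.json
```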

Content Access & Delivery

Given that we needed this content to be downloaded with as little latency as possible to allow for a speedy, interactive experience, we used AWS CloudFront, a content delivery network (CDN). As members from different areas of the U.S. requested the content, the CDN copied the HLS stream directory from the S3 bucket into regional and local caches for a period of time, so members in similar locations could access the content more quickly.

We also used the CDN to ensure that the content was accessible only when we intended. Before getting access to the master manifest URLs on the CDN, the Tinder client called an endpoint to receive a set of signed cookies. The client used those cookies to access the files on the CDN, and the cookies were valid only within a certain time frame, giving us full control over when the content could be viewed.
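As a sketch, an endpoint like this could be built today with the @aws-sdk/cloudfront-signer package; the package choice, the Express handler, and the custom policy below are our assumptions for illustration, not a description of Tinder’s actual service:

```typescript
// access.ts: hypothetical endpoint issuing time-limited CloudFront signed cookies
import { getSignedCookies } from "@aws-sdk/cloudfront-signer";
import express from "express";

const app = express();

app.get("/v1/swipenight/access", (_req, res) => {
  // Custom policy: grant access to every file in the stream directory,
  // but only until the viewing window closes (one hour here, illustrative).
  const policy = JSON.stringify({
    Statement: [{
      Resource: "https://cdn.example.com/swipenight/*", // placeholder CDN path
      Condition: {
        DateLessThan: { "AWS:EpochTime": Math.floor(Date.now() / 1000) + 3600 },
      },
    }],
  });

  const cookies = getSignedCookies({
    policy,
    keyPairId: process.env.CF_KEY_PAIR_ID!,  // CloudFront key pair ID
    privateKey: process.env.CF_PRIVATE_KEY!, // PEM-encoded signing key
  });

  // Hand each CloudFront-* cookie back to the client for subsequent CDN requests.
  for (const [name, value] of Object.entries(cookies)) {
    res.cookie(name, value, { secure: true, httpOnly: true });
  }
  res.sendStatus(204);
});

app.listen(8080);
```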

Client Video Players

On Android, proper ExoPlayer configuration required us to experiment with different constructor values for some of its components, such as DefaultLoadControl, DefaultLoadErrorHandlingPolicy, AdaptiveTrackSelection.Factory, and DefaultHttpDataSourceFactory. We used the Player.EventListener#onPlayerStateChanged callback to update the UI as the video state changed. We were also able to reduce latency and buffering duration simply by setting flags like HlsMediaSource.Factory#setAllowChunklessPreparation and by using -hls_flags independent_segments in our transcoding configuration. Additionally, we used CacheDataSourceFactory so that every retrieved video was written to disk, allowing previously downloaded content to be readily available for returning members. Lastly, to keep overall storage light, we proactively deleted cached videos once the member finished watching them.

iOS only allows four AVPlayer instances in memory at once before the operating system begins disallowing content decoding, triggering playback errors. Consequently, we needed to be vigilant about freeing up existing video players so that new ones could be used in subsequent scenes of the episode. We used an AVPlayer for the entry screen as well, so we needed to make sure it was deallocated once members started the experience.

Understanding Video Performance

In order to test our transcoding configuration, content delivery, and client playback, we needed to stream video content before Swipe Night was launched. We were able to update a Tinder U entry modal to include a streamed background video, as well as interactive Swipe Life video content.

When we released each video, we measured video performance using five KPIs, inspired by Apple’s WWDC 2018 HLS session:

  1. Perceived start-up time for a member
  2. Number of stalls during playback
  3. Time spent in the stalled state
  4. % of video sessions with errors
  5. Average quality of the video measured by average bitrate

We measured these KPIs by injecting real-time ETL events that sent metadata about the state of the video, the quality, the device’s bandwidth, and so forth. These metrics are not independent; for example, improving quality could make the video files larger, thereby increasing startup time. Finding the right balance among the five metrics for our specific use case was key. On a traditional video streaming app like Netflix or YouTube, members might get value out of waiting five seconds on a bad connection to start watching an episode, but on a mobile-centric app like Tinder, five seconds is an eternity. Consequently, we were willing to sacrifice a certain level of quality for speed.
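For illustration, here is a hypothetical shape for those playback events, along with two of the KPI computations; the field names are ours, not Tinder’s actual schema:

```typescript
// video-events.ts: hypothetical playback event schema behind the five KPIs
interface VideoPlaybackEvent {
  sessionId: string;
  state: "loading" | "playing" | "stalled" | "ended" | "error";
  timestampMs: number;           // client time of the state change
  bitrateBps?: number;           // bitrate of the currently selected gear
  bandwidthEstimateBps?: number; // player's current bandwidth estimate
}

// KPI 1: perceived start-up time, i.e. first "playing" minus first "loading".
function startupTimeMs(events: VideoPlaybackEvent[]): number | undefined {
  const load = events.find((e) => e.state === "loading");
  const play = events.find((e) => e.state === "playing");
  return load && play ? play.timestampMs - load.timestampMs : undefined;
}

// KPI 2: number of stalls during playback.
function stallCount(events: VideoPlaybackEvent[]): number {
  return events.filter((e) => e.state === "stalled").length;
}
```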

Launch Day

After thorough testing, numerous calculations, and intensive research, we finally settled on a general configuration that was applied to all videos. We were confident in what we had chosen and in knowing that we were making the right tradeoffs that would lead to a great experience for our members. Still, when we launched Swipe Night, our eyes were anxiously fixated on our video analytics dashboard, as our video content was repeatedly streamed to our members over the course of nine hours across the continental U.S. We had backup URLs with tweaked configurations as solutions to possible problems that members might encounter, but in the end, we were delighted with our KPIs and received no complaints on video quality or performance.

Knowing that our video streaming success allowed Tinder members to seamlessly navigate the apocalypse was indeed an astronomical win for us, and we continue to strive to bring great experiences to the app for all to enjoy.

Acknowledgments: Nicholas Long (Senior iOS Engineer) for talking about AVPlayer, Josh Gafni (Engineering Manager) for reviewing
