AV1 vs. h265 (HEVC) vs. VP9: What Is the Difference Between These Compression Standards?


Oct 09, 2023


Different video codecs deliver different compression rates and video quality. But which should you be using?

Streaming in 4K is the new norm, but with information for more than 8.2 million pixels being sent roughly every 16 milliseconds, storing and transmitting 4K video over the internet is no easy task.

A two-hour-long movie would hog over 1.7 terabytes of storage when uncompressed. So, how do streaming giants like YouTube and Netflix manage to store and stream videos that take up so much space?

Well, they don't store the raw footage. Instead, they use video codecs to shrink the size of movies. But what is a video codec, and which one is the best?

Before diving deep into the complexities of video codecs, it's vital to understand how a video is created. Put simply, a video is nothing but a sequence of still images shown in quick succession.

Because the images change so quickly, the human brain perceives them as motion, creating the illusion of watching a video. So when you watch a video in 4K, you are really looking at a sequence of images with a resolution of 3840x2160. This high resolution is what lets 4K footage deliver a great viewing experience, but it also inflates the size of the video, making it impractical to stream over channels with limited bandwidth, such as the internet.

To solve this problem, we have video codecs. Short for coder/decoder or compressor/decompressor, a video codec compresses the stream of images into a much smaller stream of data. Depending on the algorithms used, this compression can either reduce the quality of the video or leave it untouched.

As the name suggests, the compression part of a codec reduces the size of each image. To do this, the compression algorithm exploits the limitations of the human eye, so viewers rarely notice that the videos they watch are compressed.

Decompression works in reverse, rebuilding the video from the compressed information so it can be displayed.

Although codecs do a great job of compressing information, doing so can be taxing for your CPU. Because of this, it's normal to see fluctuations in system performance when you run video compression algorithms on your system.

To solve this problem, CPUs and GPUs ship with dedicated hardware that can run these compression algorithms, leaving the CPU free to handle the tasks at hand while the dedicated silicon processes the video, improving efficiency.

Now that we have a basic understanding of what a video codec does, we can look at how a codec works.

As explained earlier, videos are made up of images, and the first technique, chroma subsampling, reduces the information stored for each image. It does this by discarding some of the color information in each image. But why doesn't the human eye notice the missing color?

Well, human eyes are great at detecting changes in brightness, but the same can't be said about color. That's because the human eye has far more rods (photoreceptor cells responsible for detecting changes in brightness) than cones (photoreceptor cells responsible for differentiating colors). This imbalance makes it hard for the eye to spot color changes when comparing compressed and uncompressed images.

To perform chroma subsampling, the compression algorithm converts each pixel's RGB values into separate brightness and color data. It then reduces the amount of color information it keeps, based on the compression level.
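To make that concrete, here is a minimal Python sketch of the idea, assuming the common BT.601 RGB-to-YCbCr conversion and 4:2:0 subsampling (one color sample per 2x2 block of pixels); real encoders use heavily optimized, standard-specific implementations.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Split an RGB image (H x W x 3, values 0-255) into brightness (Y)
    and color (Cb, Cr) planes using the BT.601 coefficients."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def subsample_420(chroma):
    """4:2:0 subsampling: keep one color sample per 2x2 block of pixels,
    shrinking each chroma plane to a quarter of its original size."""
    return chroma[::2, ::2]

# A random stand-in for one 4K frame.
frame = np.random.randint(0, 256, (2160, 3840, 3)).astype(float)
y, cb, cr = rgb_to_ycbcr(frame)
cb_small, cr_small = subsample_420(cb), subsample_420(cr)

# Full-resolution brightness plus quarter-resolution color = half the raw samples.
kept = y.size + cb_small.size + cr_small.size
print(f"Samples kept after 4:2:0 subsampling: {kept / frame.size:.0%}")
```

Keeping brightness at full resolution while quartering the color planes cuts the raw sample count in half, yet the result looks nearly identical to the eye.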

Videos are made up of many frames, and in most cases, consecutive frames contain much the same information. For example, imagine a video of a person speaking against a fixed background. In such a case, all the frames have a similar composition, so not every image is needed to render the video. All we need is a base picture containing the full information, plus the data describing what changes from one frame to the next.

Hence, to reduce the video size, the compression algorithm divides the video into I frames (intra frames) and P frames (predicted frames). The I frames act as the ground truth and are used to create the P frames. Each P frame is then rendered from the information in the I frame plus the change data for that particular frame. Using this methodology, a video is broken down into a set of I frames interleaved with P frames, compressing the video further.
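As a toy illustration of this idea (not how any real codec is implemented), the sketch below stores the first frame in full as the "I frame" and only the per-pixel differences for the frames that follow; the frames themselves are made-up placeholders.

```python
import numpy as np

def encode_sequence(frames):
    """Toy inter-frame coding: store the first frame in full (the 'I frame'),
    and for every later frame store only the difference from the frame before it."""
    i_frame = frames[0]
    p_deltas = [frames[k] - frames[k - 1] for k in range(1, len(frames))]
    return i_frame, p_deltas

def decode_sequence(i_frame, p_deltas):
    """Rebuild the sequence by starting from the I frame and applying each delta."""
    frames = [i_frame]
    for delta in p_deltas:
        frames.append(frames[-1] + delta)
    return frames

# A static background with a small square moving one pixel per frame:
# the deltas are almost entirely zeros, which is what makes them cheap to store.
frames = [np.zeros((64, 64), dtype=np.int16) for _ in range(5)]
for k, frame in enumerate(frames):
    frame[10:14, 10 + k:14 + k] = 255

i_frame, p_deltas = encode_sequence(frames)
assert all(np.array_equal(a, b) for a, b in zip(frames, decode_sequence(i_frame, p_deltas)))
print("Nonzero values in one delta:", np.count_nonzero(p_deltas[0]), "of", p_deltas[0].size)
```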

Now that the video is broken into I and P frames, we need to look at motion compensation, the part of the compression algorithm that helps create the P frames from the I frames. To do this, the algorithm breaks the I frame into blocks known as macroblocks. Each block is then assigned a motion vector, which describes the direction in which that block moves when transitioning from one frame to the next.

This motion information for each block helps the video compression algorithm predict each block's location in an upcoming frame.
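Here is a rough sketch of what such a search might look like, using a brute-force block match scored by the sum of absolute differences (SAD). Production encoders use far faster hierarchical searches with sub-pixel precision, and the frames here are synthetic stand-ins.

```python
import numpy as np

def find_motion_vector(reference, target, top, left, block=16, search=8):
    """Brute-force block matching: try every offset within +/- `search` pixels
    and return the (dy, dx) whose reference block best matches the target block,
    scored by the sum of absolute differences (SAD)."""
    target_block = target[top:top + block, left:left + block].astype(int)
    best_sad, best_vec = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > reference.shape[0] or x + block > reference.shape[1]:
                continue  # candidate block would fall outside the frame
            candidate = reference[y:y + block, x:x + block].astype(int)
            sad = np.abs(candidate - target_block).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_vec = sad, (dy, dx)
    return best_vec, best_sad

# Synthetic frames: the new frame is the reference shifted 3 pixels to the right.
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, (64, 64), dtype=np.uint8)
target = np.roll(reference, shift=3, axis=1)

vector, sad = find_motion_vector(reference, target, top=16, left=16)
print("Best motion vector (dy, dx):", vector, "with SAD", sad)  # expect (0, -3) with SAD 0
```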

Just as with color data, the human eye can't detect subtle changes in the high-frequency elements of an image. But what are high-frequency elements? The image rendered on your screen is made up of many pixels, and the values of those pixels vary depending on the image being displayed.

In some areas of the picture, the pixel values change gradually; such areas are said to have low-frequency data. In other areas where the pixel values change rapidly, the data is categorized as high frequency. Video compression algorithms use the Discrete Cosine Transform (DCT) to reduce the high-frequency components.

Here is how it works. The DCT runs on each macroblock and separates the block into low- and high-frequency components. The high-frequency components, which represent rapid changes in pixel intensity that the eye barely notices, are then discarded or heavily reduced, shrinking the size of the video.
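The sketch below illustrates the principle on a single 8x8 block using SciPy's DCT routines: transform the block, zero out the higher-frequency coefficients, and transform back. The cutoff rule (dropping coefficients by their row-plus-column index) is a simplification of the quantization matrices real codecs use.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, cutoff=4):
    """2D DCT on an 8x8 block, then zero every coefficient whose combined
    frequency index (row + column) reaches the cutoff: the high-frequency
    detail the eye is least sensitive to."""
    coeffs = dctn(block, norm="ortho")
    row, col = np.indices(coeffs.shape)
    coeffs[row + col >= cutoff] = 0
    return coeffs

def decompress_block(coeffs):
    """The inverse DCT rebuilds an approximation of the original block."""
    return idctn(coeffs, norm="ortho")

# A smooth gradient (low-frequency content) survives aggressive truncation well.
block = np.add.outer(np.linspace(0, 255, 8), np.linspace(0, 64, 8))
coeffs = compress_block(block)
restored = decompress_block(coeffs)

print("Coefficients kept:", np.count_nonzero(coeffs), "of", coeffs.size)
print("Mean absolute error after round trip:", round(np.abs(block - restored).mean(), 2))
```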

Now that all the redundant information in the video has been removed, the remaining data needs to be stored efficiently. To do this, the compression algorithm uses an encoding scheme such as Huffman encoding, which counts how often each value occurs in a frame and arranges the values in a tree so that frequent values get short bit codes and rare ones get longer codes. This encoded data is what gets stored, and it's what your device later decodes to render the video.
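Here is a minimal Huffman coder built on Python's standard heapq and Counter modules, to show why this pays off: data that is mostly zeros, which is typical after the earlier steps, ends up needing far fewer bits. The sample values are made up for illustration.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table: values that occur often get short bit strings,
    rare values get long ones, so the encoded stream shrinks overall."""
    heap = [[count, [value, ""]] for value, count in Counter(data).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        low = heapq.heappop(heap)       # the two least frequent subtrees...
        high = heapq.heappop(heap)
        for pair in low[1:]:
            pair[1] = "0" + pair[1]     # ...are merged under a new parent node,
        for pair in high[1:]:
            pair[1] = "1" + pair[1]     # extending the codes of their leaves
        heapq.heappush(heap, [low[0] + high[0]] + low[1:] + high[1:])
    return dict(heap[0][1:])

# Made-up quantized values: mostly zeros, which is typical after the DCT step.
data = [0] * 50 + [1] * 12 + [2] * 5 + [7] * 2 + [15] * 1
codes = huffman_codes(data)
encoded_bits = sum(len(codes[value]) for value in data)
print(codes)
print(f"{encoded_bits} bits encoded vs {len(data) * 8} bits at one byte per value")
```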

Different video codecs use different techniques to compress videos, but at a very basic level, they all rely on the five fundamental methods described above to reduce the size of videos.

Now that we understand how codecs work, we can determine which is the best out of AV1, HEVC, and VP9.

If you have a 4K video that is taking up a lot of space on your system and can't be uploaded to your favorite streaming platform, you might be looking for the video codec with the best compression ratio. However, you also need to consider that quality drops the further you compress a video. So, when selecting a compression algorithm, it's essential to look at the quality it delivers at a particular bitrate. But what is the bitrate of a video?

Simply put, the bitrate of a video is the number of bits needed to play one second of it. For example, a 24-bit uncompressed 4K video running at 60 frames per second has a bitrate of 11.9 Gb/s. So, to stream uncompressed 4K video over the internet, your connection would have to deliver 11.9 gigabits of data every second, exhausting your monthly data quota in minutes.
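That figure is easy to verify with a quick back-of-the-envelope calculation:

```python
# Raw bitrate of uncompressed 4K: pixels per frame x bits per pixel x frames per second.
width, height = 3840, 2160
bits_per_pixel = 24            # 8 bits each for red, green, and blue
frames_per_second = 60

bitrate = width * height * bits_per_pixel * frames_per_second
print(f"{bitrate / 1e9:.1f} Gb/s")   # ~11.9 Gb/s, the figure quoted above
```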

Using a compression algorithm, by contrast, brings the bitrate down to a target of your choice with little to no visible loss in quality.
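If you want to try this yourself, one common route is FFmpeg, which lets you request a target bitrate per codec. The sketch below is illustrative only: the file names are placeholders, and it assumes an FFmpeg build that includes the libx265, libvpx-vp9, and libaom-av1 encoders.

```python
import subprocess

# Hypothetical file names; assumes ffmpeg is on your PATH with the
# libx265, libvpx-vp9, and libaom-av1 encoders compiled in.
TARGET_BITRATE = "8M"   # a plausible target for a compressed 4K stream

encoders = {
    "out_h265.mp4": "libx265",
    "out_vp9.webm": "libvpx-vp9",
    "out_av1.mkv": "libaom-av1",
}

for output_file, encoder in encoders.items():
    subprocess.run(
        ["ffmpeg", "-i", "input_4k.mp4", "-c:v", encoder, "-b:v", TARGET_BITRATE, output_file],
        check=True,
    )
```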

When it comes to compression and quality, AV1 leads the pack, offering 28.1 percent better compression than H.265 and 27.3 percent better compression than VP9 while delivering similar quality.

Therefore, if you are looking for the best compression without a degradation in quality, AV1 is the codec for you. Thanks to its great compression-to-quality ratio, AV1 is used by Google in its video-conferencing application Google Duo and by Netflix when transmitting video over low-bandwidth connections.

As explained earlier, a video compression algorithm encodes a video once it's compressed. To play that video, your device needs to decode it. So, if your device lacks the hardware or software support to decode a video, it won't be able to play it.

Hence, it's important to consider a compression algorithm's compatibility, because what's the point of creating and compressing content that can't play on many devices?

So, if compatibility is what you are after, VP9 should be the codec for you, as it is supported on over two billion endpoints and runs on virtually every browser, smartphone, and smart TV.

The same can't be said for AV1, which uses newer, more complex algorithms to reduce file size and can't be played on older devices. As for browser support, Safari cannot play AV1, but browsers like Firefox and Chrome can play AV1 videos without any issues.

In terms of hardware support, newer SoCs and GPUs such as the Snapdragon 8 Gen 2, Samsung Exynos 2200, MediaTek Dimensity 1000 5G, Google Tensor G2, Nvidia's RTX 4000 series, and Intel Xe and Arc GPUs offer hardware-accelerated decoding for the AV1 codec. So, if you own a device powered by one of these chips, you can stream AV1-compressed content without exhausting your CPU's or GPU's power.

When it comes to the H.265 codec, most popular browsers, including Safari, Firefox, and Google Chrome, can play videos encoded with it without any issues. That said, unlike AV1 and VP9, H.265 is not open source, and licenses need to be procured to use it. For this reason, apps like Microsoft's Movies & TV video player, which ships with the operating system, cannot play H.265-encoded videos by default; users must install additional add-ons from the Microsoft Store to play them.

Video codecs reduce the size of a video substantially, but compressing the uncompressed footage takes processing time. Therefore, when choosing a compression algorithm, you should also look at how long it takes to encode a video.

In terms of encoding speed, VP9 leads the pack, compressing videos in far less time than H.265 or AV1. AV1, on the other hand, is the slowest and can take over three times longer than H.265 to encode a video.

When it comes to video codecs, finding the perfect one is very subjective, as each codec offers a different balance of features.

If you are looking for the best video quality, go for AV1. On the other hand, if you are looking for the most compatible video codec, VP9 would be the best fit for you.

Finally, the H.265 codec is a great fit if you need good quality and compression without the encoding overhead.

Nischay is an Electronics and Communication engineering graduate with a knack for simplifying everyday technology. He has been making tech easy to understand since 2020, working with publications like Candid.Technology, Technobyte, Digibaum, and Inkxpert. In addition, Nischay loves automotive technology and has been working as an engineer with Stellantis for the last two years. He has in-depth knowledge of the features that make today's cars safer and easier to drive.
