
So what is the AV1 codec? Basically, what distinguishes it from other codecs is that its prediction of pixel changes from frame to frame takes the kitchen-sink approach.

Almost all modern codecs make predictions. The idea, simplifying a bit, is that you have one frame you are trying to encode. You have the prior two frames, and from them you try to predict the current one. You don't have to store the third frame, or even the differences between the second and third; you only need to store the differences between your prediction and the third. Prediction is expensive, and that is why encoding video takes so long.
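
To make the residual idea concrete, here is a minimal sketch in Python. This is not AV1 itself; the frame values and the simple extrapolation predictor are made up for illustration. The point is only that the residual (prediction error) is all an encoder actually needs to store.

```python
import numpy as np

def predict_from_prior(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Crude linear extrapolation: assume pixels keep changing at the same rate."""
    return frame_b + (frame_b - frame_a)

# Toy frames: brightness drifts upward by 1 per frame (purely illustrative).
frame1 = np.random.randint(0, 200, (4, 4)).astype(np.int16)
frame2 = frame1 + 1
frame3 = frame2 + 1

prediction = predict_from_prior(frame1, frame2)
residual = frame3 - prediction          # this is all the encoder has to store

reconstructed = prediction + residual   # the decoder rebuilds the frame exactly
assert np.array_equal(reconstructed, frame3)
print("residual energy:", np.abs(residual).sum())  # 0 here, because the toy motion is perfectly linear
```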

There is an idea in prediction algorithms that, if computation doesn't matter, you can run every prediction method and average their outputs, tuning the weights for how much each one counts. You can also try one prediction model, measure how effective it is, then try another, and simply throw out the ones that don't help. You can get almost perfect predictions just by tuning this case by case.
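
Here is a hedged sketch of that "run every predictor and tune the weights" idea, in the same toy Python style. The three predictors and the brute-force weight grid are assumptions for illustration only, not anything taken from the AV1 spec; real encoders weigh bitrate against distortion rather than raw residual energy, but the exhaustive flavor is the point.

```python
import numpy as np

def pred_repeat_last(prev, prev2):
    return prev                          # "nothing moved"

def pred_extrapolate(prev, prev2):
    return prev + (prev - prev2)         # "motion continues at the same rate"

def pred_average(prev, prev2):
    return (prev + prev2) / 2            # "smooth between the two"

predictors = [pred_repeat_last, pred_extrapolate, pred_average]

def blended_prediction(prev, prev2, weights):
    preds = [p(prev, prev2) for p in predictors]
    return sum(w * p for w, p in zip(weights, preds))

def best_weights(prev, prev2, actual):
    """Brute-force the weight mix that minimizes residual energy."""
    best, best_err = None, float("inf")
    grid = np.linspace(0.0, 1.0, 11)     # weights in steps of 0.1
    for w0 in grid:
        for w1 in grid:
            w2 = 1.0 - w0 - w1
            if w2 < -1e-9:               # outside the simplex (allow float slack)
                continue
            w2 = max(w2, 0.0)
            err = np.abs(actual - blended_prediction(prev, prev2, (w0, w1, w2))).sum()
            if err < best_err:
                best, best_err = (w0, w1, w2), err
    return best, best_err

# Toy usage with steadily drifting brightness:
prev2 = np.zeros((4, 4)); prev = prev2 + 1; actual = prev + 1
print(best_weights(prev, prev2, actual))   # extrapolation wins: weights (0.0, 1.0, 0.0), error 0.0
```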

AV1 does exactly that, except it offers every method under the sun as an option for each frame (actually for each partition of a frame). It also does crazy things to find the way of partitioning the frame into blocks that is the most predictive. It can then apply gradients of prediction-model weights within a prediction block, and even solve for arbitrary hard edges within the block that turn a model off entirely. It's basically making near-perfect predictions by exhaustion.
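
Here is a caricature of that exhaustive block search: for each block, try every predictor and keep the cheapest, then check whether splitting the block into quadrants predicts even better. The block sizes, the toy predictors, and the plain residual-energy cost are all stand-ins; real AV1 partitioning, mode decisions, and rate-distortion costs are far more elaborate.

```python
import numpy as np

# Toy predictors (stand-ins, as above): each takes the two previous frames.
predictors = [
    lambda prev, prev2: prev,                   # repeat the last frame
    lambda prev, prev2: prev + (prev - prev2),  # assume motion continues
    lambda prev, prev2: (prev + prev2) / 2,     # blend the two
]

def best_mode(actual, prev, prev2):
    """Cheapest single predictor for one block, by residual energy."""
    costs = [np.abs(actual - p(prev, prev2)).sum() for p in predictors]
    best = int(np.argmin(costs))
    return costs[best], ("leaf", best)

def search_block(actual, prev, prev2, min_size=4):
    """Exhaustively decide: keep the block whole, or split it into quadrants?"""
    whole_cost, whole_plan = best_mode(actual, prev, prev2)
    h, w = actual.shape
    if h <= min_size or w <= min_size:
        return whole_cost, whole_plan
    hh, hw = h // 2, w // 2
    split_cost, children = 0.0, []
    for ys in (slice(0, hh), slice(hh, h)):
        for xs in (slice(0, hw), slice(hw, w)):
            c, plan = search_block(actual[ys, xs], prev[ys, xs], prev2[ys, xs], min_size)
            split_cost += c
            children.append(plan)
    if split_cost < whole_cost:
        return split_cost, ("split", children)
    return whole_cost, whole_plan
```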

The result is an encoding speed of roughly 1/40,000 of real-time, yet it plays back with no problem. It also reduces bandwidth by about 30% and raises quality by a ton.
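
Taking the 1/40,000 figure at face value (it comes from the video, not from any benchmark here), the wall-clock arithmetic looks like this:

```python
# What a 1/40,000 encoding ratio means in wall-clock terms, taking the
# figure above at face value.
ratio = 40_000                    # CPU-seconds per second of video
print(ratio / 3600)               # ≈ 11.1 hours to encode 1 second of video
print(ratio * 60 / 3600 / 24)     # ≈ 27.8 days to encode 1 minute
```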

Why does this suck for the little guy? Youtube (Google, a partner in this) can afford a lot of CPU per video to reduce bandwidth. If they push this as a standard and it becomes de facto, the little guy won't be able to encode all the different resolutions people want, in the format people start demanding, for videos that will be seen far less than they would be on Youtube. Netflix, Hulu, and Amazon will have no problem with this change in standard.
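
To make that concrete, here is a rough back-of-the-envelope: a hypothetical 10-minute upload, a six-rung resolution ladder, and the same 1/40,000 ratio applied at every rung (a simplification; the numbers are illustrative, not measured).

```python
# Back-of-the-envelope cost of encoding one upload at several resolutions,
# assuming the claimed 1/40,000 ratio applies at every rung (a simplification).
ratio = 40_000                                  # CPU-seconds per second of video
video_seconds = 10 * 60                         # one hypothetical 10-minute upload
ladder = ["2160p", "1440p", "1080p", "720p", "480p", "360p"]

cpu_hours = len(ladder) * video_seconds * ratio / 3600
print(f"{cpu_hours:,.0f} CPU-hours for a single upload")   # 40,000 CPU-hours
```

Whether that trade is worth it depends entirely on how many views each encode is amortized over, which is exactly the asymmetry being pointed out here.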

It's interesting that all these big companies joined together in a consortium to create the new open-source standard that solves all the ills of semi-proprietary codecs. The problem is that this supposedly awesome and free standard just happens to serve their interests exactly, and not those of anyone who might want to compete against them. Anyone who wants to compete with them, starting at small scale, will have to use older codecs that are perceived as outdated, because AV1 just isn't practical for a smaller-scale site. The perception by users will undoubtedly be, "this isn't youtube, why is it using janky old codecs, get your shitty alt-site out of here."

[42 minute video on the codec](https://invidio.us/watch?v=qubPzBcYCTw)



The AOMedia Video 1 codec, developed by the "Alliance for Open Media".

The governing members of which are Amazon, Apple, ARM, Cisco, Facebook, Google, IBM, Intel Corporation, Microsoft, Mozilla, Netflix, Nvidia and Samsung Electronics.

That pretty much says it all.


> The result is an encoding speed of roughly 1/40,000 of real-time

Wait. 1 second takes 11 hours to process? On what hardware?

This doesn't make sense to me. Yes, Google can afford a lot of processing power, but considering how many minutes' worth of new footage gets uploaded to Youtube every minute... Are you sure?

Or they could use Theora instead, since that is also open source.