Friday, August 22, 2025

The tech behind YouTube's real-time generative AI effects

The teacher and the student

Our approach revolves around a concept called knowledge distillation, which uses a "teacher–student" model training methodology. We start with a "teacher" — a large, powerful, pre-trained generative model that is an expert at creating the desired visual effect but is far too slow for real-time use. The type of teacher model varies depending on the goal. Initially, we used a custom-trained StyleGAN2 model, trained on our curated dataset for real-time facial effects. This model could be paired with tools like StyleCLIP, which allowed it to manipulate facial features based on text descriptions. This provided a strong foundation. As our project advanced, we transitioned to more sophisticated generative models like Google DeepMind's Imagen. This strategic shift significantly enhanced our capabilities, enabling higher-fidelity and more diverse imagery, greater creative control, and a broader range of styles for our on-device generative AI effects.
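The core of the teacher–student setup is the training objective: the student is optimized to reproduce the teacher's output for the same input frame, rather than being trained on raw data directly. A minimal sketch of that idea for an image-to-image effect, using a simple pixel-wise L2 loss (the function name and toy data are illustrative, not from the original system):

```python
import numpy as np

def distillation_loss(student_out: np.ndarray, teacher_out: np.ndarray) -> float:
    """Pixel-wise L2 distillation loss: the fast student learns to
    reproduce the slow teacher's rendering of the same frame."""
    return float(np.mean((student_out - teacher_out) ** 2))

# Toy example: the teacher's output for one 64x64 RGB frame, and a
# student prediction that is close but not identical to it.
rng = np.random.default_rng(0)
teacher_frame = rng.random((64, 64, 3))
student_frame = teacher_frame + 0.01 * rng.standard_normal((64, 64, 3))

loss = distillation_loss(student_frame, teacher_frame)  # near zero here
```

In practice such a loss is minimized over a large dataset of teacher outputs, and richer perceptual or adversarial terms are often added, but the principle is the same: the teacher's behavior is the training signal.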

The "student" is the model that ultimately runs on the user's device. It needs to be small, fast, and efficient. We designed a student model with a UNet-based architecture, which is excellent for image-to-image tasks. It uses a MobileNet backbone as its encoder, a design known for its efficiency on mobile devices, paired with a decoder that also uses MobileNet blocks.
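To make the shape of this architecture concrete, here is a minimal PyTorch sketch of a UNet-style image-to-image network built from MobileNet-style depthwise-separable convolution blocks. It is an assumption-laden toy (two scales, one skip connection, illustrative channel counts), not YouTube's actual student model:

```python
import torch
import torch.nn as nn

class MobileBlock(nn.Module):
    """Depthwise-separable convolution, the kind of block MobileNet
    uses to cut compute on mobile hardware (illustrative only)."""
    def __init__(self, cin: int, cout: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(cin, cin, 3, padding=1, groups=cin),  # depthwise 3x3
            nn.Conv2d(cin, cout, 1),                        # pointwise 1x1
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class TinyUNet(nn.Module):
    """UNet-style student: MobileNet-style encoder, one skip
    connection, MobileNet-style decoder, image in / image out."""
    def __init__(self):
        super().__init__()
        self.enc1 = MobileBlock(3, 16)
        self.down = nn.MaxPool2d(2)
        self.enc2 = MobileBlock(16, 32)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.dec = MobileBlock(32 + 16, 16)  # 16 extra channels from the skip
        self.out = nn.Conv2d(16, 3, 1)

    def forward(self, x):
        skip = self.enc1(x)                       # full resolution features
        y = self.enc2(self.down(skip))            # half resolution features
        y = torch.cat([self.up(y), skip], dim=1)  # skip connection
        return self.out(self.dec(y))

frame = torch.rand(1, 3, 64, 64)
out = TinyUNet()(frame)  # same spatial size as the input
```

The UNet skip connections preserve fine spatial detail (important for face effects), while the depthwise-separable blocks keep the parameter count and FLOPs low enough for on-device inference.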
