But how do AI videos actually work? | Guest video by @WelchLabsVideo

Video download information and details

Published on:

25/7/2025

Views:

78.1K

Duration:

39:48

Description:

0:00 - Intro: Stephen Welch introduces the topic of AI videos and outlines the key concepts that will be covered.
3:37 - CLIP: He explains how CLIP works and its role in connecting text and image embeddings.
6:25 - Shared Embedding Space: The video discusses how a shared embedding space allows for cross-modal interactions (a minimal sketch of this idea appears after this list).
8:16 - Diffusion Models & DDPM: Welch walks through the fundamentals of diffusion models and the Denoising Diffusion Probabilistic Model.
11:44 - Learning Vector Fields: He covers how vector fields are learned to guide the diffusion process.
22:00 - DDIM: The video explains Denoising Diffusion Implicit Models and their advantages.
25:25 - DALL·E 2: Welch describes the architecture and training of DALL·E 2.
26:37 - Conditioning: He explains how conditioning is applied in diffusion models.
30:02 - Guidance: The role of guidance in improving image quality is discussed.
33:39 - Negative Prompts: The concept of negative prompts and how they influence generation is covered (see the guidance sketch after this list).
34:27 - Outro: Welch summarizes the key takeaways and thanks the audience.
35:32 - About guest videos + Grant’s Reaction: The video concludes with a brief discussion about guest videos and Grant’s reaction.
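The CLIP and shared-embedding-space chapters (3:37, 6:25) describe mapping text and images into one space where matching pairs score high under cosine similarity. Below is a minimal NumPy sketch of that idea only; the random projection matrices stand in for CLIP's trained text and image encoders, and all dimensions and names are illustrative assumptions, not the actual model.

```python
# Toy sketch of a CLIP-style shared embedding space (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

d_text, d_image, d_shared = 512, 768, 256   # assumed toy encoder output sizes

# Stand-ins for the learned projection heads of the text and image encoders.
W_text = rng.normal(size=(d_text, d_shared))
W_image = rng.normal(size=(d_image, d_shared))

def embed(features, W):
    """Project encoder features into the shared space and L2-normalize."""
    z = features @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Pretend encoder outputs for a batch of 4 captions and 4 images.
text_features = rng.normal(size=(4, d_text))
image_features = rng.normal(size=(4, d_image))

text_emb = embed(text_features, W_text)     # (4, 256)
image_emb = embed(image_features, W_image)  # (4, 256)

# Cosine similarity between every caption and every image. Contrastive
# training pushes matching pairs (the diagonal) toward high similarity
# and mismatched pairs toward lower similarity.
similarity = text_emb @ image_emb.T
print(similarity.shape)  # (4, 4)
```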
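The conditioning, guidance, and negative-prompt chapters (26:37-33:39) describe steering a diffusion model's noise prediction with text embeddings. The sketch below shows the standard classifier-free guidance combination with a negative prompt; the stand-in denoiser, embedding shapes, and guidance scale are assumptions for illustration and not the specific models discussed in the video.

```python
# Toy sketch of classifier-free guidance with a negative prompt.
import numpy as np

rng = np.random.default_rng(1)

def fake_denoiser(x_t, t, cond_emb):
    """Stand-in for a trained noise-prediction network eps_theta(x_t, t, cond).
    (The timestep t is ignored here; a real network conditions on it.)"""
    return 0.1 * x_t + 0.01 * cond_emb.mean() * np.ones_like(x_t)

x_t = rng.normal(size=(3, 64, 64))          # noisy latent at timestep t
t = 500
prompt_emb = rng.normal(size=(77, 768))     # assumed embedding of the prompt
negative_emb = rng.normal(size=(77, 768))   # assumed embedding of the negative prompt

guidance_scale = 7.5                        # illustrative value

eps_cond = fake_denoiser(x_t, t, prompt_emb)
eps_neg = fake_denoiser(x_t, t, negative_emb)

# Classifier-free guidance: extrapolate away from the negative (or
# unconditional) prediction and toward the prompt-conditioned one.
eps_guided = eps_neg + guidance_scale * (eps_cond - eps_neg)
```

With a guidance scale above 1, samples are pushed toward the prompt and away from the negative prompt; using an empty-prompt embedding in place of negative_emb recovers plain classifier-free guidance.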