Category Essentials: AI for Video
Each week, I spotlight one category and suggest the three tools that are tried, tested, and trusted by Lead with AI members.
For this week: I know, it’s controversial. But I can’t help featuring this one! We’ve officially crossed into the era where you can turn a text prompt into a full-blown video. That means shooting, editing, VFX-ing… all handled by AI, starting with your idea alone.
This isn’t for everyone. But if it does fit your work—whether it’s pitching concepts, prototyping stories, or creating content fast—these tools are gold.
And with that, here are the top three that have already returned impressive results for our community:
#1 Veo 3
This is Google’s latest (and still limited-access) release, currently available only to select users in the US.
If you're lucky enough to have access, it's absolutely worth testing, especially with its native audio output and Flow, a tool that lets you stitch together shots into a longer narrative.
Datacamp has a tutorial for Veo 3 here. More demos are here and here.
#2 Sora
Sora is OpenAI's text-to-video model, and it has quickly made its mark with its visual fidelity.
One standout feature is the Storyboard mode. It lets you prompt multiple frames or scenes, essentially directing a short film one shot at a time.
From what we've seen, it handles abstract concepts and stylized visuals better than realistic (human) motion.
If you're on a ChatGPT Plus plan or higher, give Sora a try here.
#3 Runway
Runway has been around longer than the others, and it keeps evolving. Its AI video generator can turn text into video like the rest, but it also offers more ways to experiment.
Watch Runway examples here.
One standout feature of Runway is Act-One, which lets you animate a static character using your own facial expressions. That opens up playful, expressive ways to create avatar-led content, especially for creators who want more control and personality in their videos.
>> Try Runway here. (Freemium available)
Want me to cover a specific category and/or AI tool next? Reply and let me know here.