A screenshot of a Meta Movie Gen video, October 2024.

Meta Introduces Movie Gen Video Models

By Chris Ehrlich
What can the GenAI models do for creators?

MENLO PARK, California — Meta is developing its latest iteration of generative AI models for video creators.

Meta today shared a 92-page research paper on its multimodal Meta Movie Gen foundation models, which provide text-to-video, text-to-image, image-to-video and text-to-audio capabilities, the company said.

Meta plans to improve the models and move toward a "potential future release."

In the process, Meta will "work closely with filmmakers and creators to integrate their feedback."

"I’s important to note that generative AI isn’t a replacement for the work of artists and animators," Meta says.

Meta believes in "the power of this technology to help people express themselves in new ways and to provide opportunities to people who might not otherwise have them."

Meta noted that publishing its Movie Gen research is part of its "long and proven track record of sharing fundamental AI research with the community."

About the Author
Chris Ehrlich

Chris Ehrlich is the former editor in chief and a co-founder of VKTR. He's an award-winning journalist with over 20 years in content, covering AI, business and B2B technologies. His versatile reporting has appeared in over 20 media outlets. He's an author and holds a B.A. in English and political science from Denison University.

Main image: Via Meta.