Whenever Madonna sings the 1980s hit "La Isla Bonita" on her concert tour, moving images of swirling, sunset-tinted clouds play on the giant arena screens behind her.
To get that ethereal look, the pop legend embraced a still-uncharted branch of generative artificial intelligence: the text-to-video tool. Type some words, say, "surreal cloud sunset" or "waterfall in the jungle at dawn," and an instant video is made.
Following in the footsteps of AI chatbots and still image-generators, some AI video enthusiasts say the emerging technology could one day upend entertainment, enabling you to choose your own movie with customizable story lines and endings. But there's a long way to go before the tools can do that, and plenty of ethical pitfalls along the way.
For early adopters like Madonna, who's long pushed art's boundaries, it was more of an experiment. She nixed an earlier version of "La Isla Bonita" concert visuals that used more conventional computer graphics to evoke a tropical mood.
"We tried CGI. It looked pretty bland and cheesy and she didn't like it," said Sasha Kasiuha, content director for Madonna's Celebration Tour that continues through late April. "And then we decided to try AI."
ChatGPT-maker OpenAI gave a glimpse of what sophisticated text-to-video technology might look like when the company recently unveiled a new tool that's not yet publicly available. Madonna's team tried a different product from New York-based startup Runway, which helped pioneer the technology by releasing its first public text-to-video model last March. The company released a more advanced "Gen-2" version in June.
Runway CEO Cristóbal Valenzuela said that while some see these tools as a "magical device that you type a word and somehow it conjures exactly what you had in your head," the most effective uses come from creative professionals seeking an upgrade to the decades-old digital editing software they're already using.
He said Runway can't yet make a full-length documentary. But it could help fill in some background video, or b-roll 鈥 the supporting shots and scenes that help tell the story.
"That saves you perhaps like a week of work," Valenzuela said. "The common thread of a lot of use cases is people use it as a way of augmenting or speeding up something they could have done before."
Runway's target customers are "large streaming companies, production companies, post-production companies, visual effects companies, marketing teams, advertising companies. A lot of folks that make content for a living," Valenzuela said.
Dangers await. Without effective safeguards, AI video-generators could threaten democracies with convincing "deepfake" videos of things that never happened, or, as is already the case with AI image generators, flood the internet with fake pornographic scenes depicting what appear to be real people with recognizable faces. Under pressure from regulators, major tech companies have promised to watermark AI-generated outputs to help identify what's real.
There also are copyright disputes brewing about the video and image collections the AI systems are being trained upon (neither Runway nor OpenAI discloses its data sources) and to what extent they are unfairly replicating trademarked works. And there are fears that, at some point, video-making machines could replace human jobs and artistry.
For now, the longest AI-generated video clips are still measured in seconds, and can feature jerky movements and telltale glitches such as distorted hands and fingers. Fixing that is "just a question of more data and more training," and the computing power on which that training depends, said Alexander Waibel, a computer science professor at Carnegie Mellon University who's been researching AI since the 1970s.
"Now I can say, 'Make me a video of a rabbit dressed as Napoleon walking through New York City,'" Waibel said. "It knows what New York City looks like, what a rabbit looks like, what Napoleon looks like."
Which is impressive, he said, but still far from crafting a compelling storyline.
Before it released its first-generation model last year, Runway's claim to AI fame was as a co-developer of the image-generator Stable Diffusion. Another company, London-based Stability AI, has since taken over Stable Diffusion's development.
The underlying "diffusion model" technology behind most leading AI generators of images and video works by mapping noise, or random data, onto images, effectively destroying an original image and then predicting what a new one should look like. It borrows an idea from physics that can be used to describe, for instance, how gas diffuses outward.
"What diffusion models do is they reverse that process," said Phillip Isola, an associate professor of computer science at the Massachusetts Institute of Technology. "They kind of take the randomness and they congeal it back into the volume. That's the way of going from randomness to content. And that's how you can make random videos."
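The forward half of that process, gradually replacing a clean signal with random noise until almost nothing of the original survives, can be sketched in a few lines. This is a toy illustration with made-up parameters (the `beta` noise rate and step count are arbitrary), not anything Runway or OpenAI has published; a real diffusion model pairs this forward process with a trained neural network that learns to run the steps in reverse.

```python
import math
import random

def forward_diffuse(signal, steps, beta=0.05):
    """Forward diffusion: each step slightly shrinks the signal and
    mixes in fresh Gaussian noise, so after many steps the original
    content is effectively destroyed."""
    for _ in range(steps):
        signal = [math.sqrt(1 - beta) * v + math.sqrt(beta) * random.gauss(0, 1)
                  for v in signal]
    return signal

random.seed(0)
clean = [math.sin(i / 4) for i in range(32)]   # stand-in for an "image"
noisy = forward_diffuse(clean, steps=200)

# How much of the original signal's amplitude survives 200 steps:
retained = math.sqrt(1 - 0.05) ** 200
print(round(retained, 4))  # -> 0.0059, i.e. less than 1% remains
```

A generator learns the reverse mapping, predicting a slightly less noisy signal from a noisier one, which is what lets it "congeal" pure randomness back into a coherent image or frame.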
Generating video is more complicated than still images because it needs to take into account temporal dynamics, or how elements within the video change over time and across sequences of frames, said Daniela Rus, another MIT professor who directs its Computer Science and Artificial Intelligence Laboratory.
Rus said the computing resources required are "significantly higher than for still image generation" because "it involves processing and generating multiple frames for each second of video."
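As a rough back-of-envelope illustration of that scaling (the frame rate and clip length below are my own assumptions, not figures from Rus): a video generator must produce an entire stack of still frames, so per-clip cost grows with frame count even before the extra work of keeping frames consistent over time.

```python
# Illustrative numbers only; cost is measured in "one still image" units.
FRAME_COST = 1    # cost of generating a single still image (arbitrary unit)
FPS = 24          # assumed frame rate, standard for film
SECONDS = 4       # assumed clip length, typical of current short AI clips

still_cost = FRAME_COST
video_cost = FRAME_COST * FPS * SECONDS

# A 4-second clip is on the order of 96 image-generations, before any
# modeling of how elements move between frames.
print(video_cost // still_cost)  # -> 96
```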
That's not stopping some well-heeled tech companies from trying to keep outdoing each other in showing off higher-quality AI video generation at longer durations. Requiring written descriptions to make an image was just the start. Google recently demonstrated a new project called Genie that can be prompted to transform a photograph or even a sketch into "an endless variety" of explorable video game worlds.
In the near term, AI-generated videos will likely show up in marketing and educational content, providing a cheaper alternative to producing original footage or obtaining stock videos, said Aditi Singh, a researcher at Cleveland State University who has surveyed the text-to-video market.
When Madonna first talked to her team about AI, the "main intention wasn't, 'Oh, look, it's an AI video,'" said Kasiuha, the content director.
"She asked me, 'Can you just use one of those AI tools to make the picture more crisp, to make sure it looks current and looks high resolution?'" Kasiuha said. "She loves when you bring in new technology and new kinds of visual elements."
Longer AI-generated movies are already being made. Runway hosts an annual AI film festival to showcase such works. But whether that's what human audiences will choose to watch remains to be seen.
"I still believe in humans," said Waibel, the CMU professor. "I still believe that it will end up being a symbiosis where you get some AI proposing something and a human improves or guides it. Or the humans will do it and the AI will fix it up."
___
Associated Press journalists Joseph B. Frederick and Rodrique Ngowi contributed to this report.