Adobe and Luma AI jointly release innovative video generation model
September 19, 2025 | Zoey
On Thursday, AI video company Luma AI announced a partnership with Adobe to launch its new model, Ray3, staking out new competitive territory in cinematic-quality video generation for studios and filmmakers.
As of today, Ray3 is available first to Adobe Firefly users. Paid Firefly subscribers will have unlimited access for the next two weeks before the model moves to paid usage. Other groups will be able to subscribe as well, including Hollywood studios and, later, streaming platforms, for their directors and producers.
Luma AI hopes the partnership will extend its technology across a range of use cases and eventually establish a foothold in the market.
Luma AI, which is backed by investors including Amazon and a16z, is working to make AI-generated cinematic video more lifelike to gain an edge over rivals such as Runway AI and Google Veo.
The idea is that Hollywood studios and filmmakers can use AI to produce on-screen footage, bypassing on-location shoots while slashing production costs.
Before the software and hardware can progress to the next stage, the quality of AI-generated video still needs to be measured. It also remains an open question whether, and how, AI-generated video can begin to replace traditional live-action production at all.
Since introducing Dream Machine in early 2024, Luma has iterated relentlessly on new models, achieving breakthroughs in short-form video creation that let users quickly generate videos from a simple text prompt. The newly announced Ray3 keeps the same 10-second, dialogue-free short-video format but takes realism to the next level.
In a phone interview with THR, Luma AI CEO Amit Jain called Ray3 "the most intelligent video model available on the market," specifically underlining its reasoning capabilities. Reasoning, an often vaguely used term in the AI space, refers here to the model's ability to interpret a request and refine its output on its own, rather than requiring users to continuously re-adjust their prompts.
"Programmers use intelligent models, so why can't creators?" Jain asked in an interview.
He offered one example of a difficult prompt: have the model turn a character into a beam of light that changes color every second and then explodes. This six-step task, he said, would be very difficult for just about every existing model.
Ray3 also lets creators draw directly on an image: they can sketch a character's motion path on the screen and then have the tool generate a video from it.
Hannah Elsakr, the vice president of next-generation generative AI at Adobe, remarked in a statement: "With Ray3 now integrated into the Firefly app, Adobe customers will be the first to use this powerful new video model. It’s going to enhance creative imagination, and it’s also going to transform workflows. We are excited to see how users will leverage it to help bring their ideas to life."
Meanwhile, thanks to Google's extensive resources and technology, along with its vast library of instructional content (over 1,300 YouTube videos), Veo3 continues to lead the industry.