I’m going to dive into LumaLabs Dream Machine, an AI-powered app designed to transform the way we create artistic videos. I’ve spent some time testing its features, and I’m here to share an honest review of its capabilities, ease of use, and the overall quality of the videos it produces. At a time of promises and postponed release dates, I was genuinely interested in experiencing what Dream Machine could do. And when I said honest review, I meant honest.
Luma Labs’ Dream Machine is a great step forward in generative AI video. They’ve likely leveraged their expertise in generative 3D modeling to improve their understanding of motion in video, but it still feels like a stopgap on the road to truly capable AI video. Over the past two years, AI image generation has gone from eerie, low-resolution depictions of humans with extra fingers and faces that look more like something Edvard Munch might have painted than a photograph to images that are almost indistinguishable from reality.
AI video is more complex. It not only needs to replicate the realism of a photograph, but also needs to understand real-world physics and how that affects movement across scenes, people, animals, vehicles, and objects. For now, I believe the best AI-powered video tools are meant to be used alongside traditional filmmaking, rather than replacing it.
But we’re approaching what Ashton Kutcher predicts will be an era where everyone can make their own feature films. Luma Labs has created one of the most realistic motion-generation tools I’ve seen so far, but it’s still not up to scratch. I don’t think it’s on par with Sora, though I can’t base that comparison on videos I’ve made with Sora myself, only on what filmmakers and OpenAI have shared, which is likely carefully curated from hundreds of failures.
Abel Art, an avid AI artist who had early access to the Dream Machine, has created some impressive work. But he says he needs to create hundreds of generations of just one minute of video to make it coherent, and then discard unusable clips. With about 500 clips per minute of video, each about 5 seconds long, he discards 98% of the footage to create the perfect scene.
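For a sense of scale, here’s a quick sanity check of those numbers, a sketch assuming (as described above) roughly 500 five-second clips per finished minute and a 98% discard rate:

```python
# Back-of-the-envelope check of the workflow described above.
clips_generated = 500   # clips generated per finished minute of video
clip_length_s = 5       # each Dream Machine clip runs about 5 seconds
discard_rate = 0.98     # share of footage thrown away

kept_clips = clips_generated * (1 - discard_rate)
usable_seconds = kept_clips * clip_length_s

print(f"kept clips: {kept_clips:.0f}")            # 10
print(f"usable footage: {usable_seconds:.0f} s")  # 50, roughly one edited minute
```

Ten surviving clips at five seconds each yield about fifty seconds of footage, which squares with his "hundreds of generations per minute" claim once editing trims are accounted for.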
I suspect the percentage is even higher for Pika Labs and Runway, and reports from filmmakers who have used Sora indicate a similar discard rate.
Luma Labs, the AI company known for its generative 3D model, Genie, has ventured into AI video with Dream Machine, and the results have been impressive. The high demand for Dream Machine overloaded Luma’s servers, necessitating a queuing system. I spent all night waiting for my requests to be processed and converted into videos, but once I reached the front of the queue, the actual “dream” process took about two minutes.
Early access users shared videos on social media that looked too good to be true, suggesting that the best of the best was being picked. However, after trying it out myself, I can confirm its impressive capabilities. While it may not yet rival Sora or Kling, it stands out as one of the best in terms of instant engagement and movement understanding. Unlike Sora, the Dream Machine is available to everyone today.
Each generated video lasts about five seconds, which is nearly twice the length of Runway or Pika Labs’ base clips before extensions, and some videos even include multiple takes. I created several clips during my testing. One was ready in about three hours, while others took most of the night. Some clips exhibited questionable blending or blurring, but overall, the tool captured motion better than any other model I’ve tried.
My prompts included walking, dancing, and running sequences. Early models often rendered objects moving backward or mishandled certain actions, such as a moving cart. Dream Machine, however, accurately captured these movements without requiring detailed motion specifications. Running sequences were particularly well executed. Precision control is minimal, relying primarily on prompting.
This reliance on prompts may be due to the model’s newness, with the AI augmenting prompts using its language model. This approach, also used by Ideogram and Leonardo for image generation, helps produce a more descriptive visualization of the desired outcome. Transformer-based video models, such as Haiper, also perform better when prompts are the primary focus. Sora is likewise said to work with simple text prompts and minimal additional controls.
What is Dream Machine?
Dream Machine is an AI model designed to quickly generate high-quality, realistic videos from text and images. Positioned as a highly efficient and scalable transformer model, Dream Machine is trained directly on video, which supposedly allows it to generate accurate, consistent, and action-packed five-second clips. Described as the first step toward a universal imagination engine, Dream Machine is now available to everyone, aiming to revolutionize the way we approach video creation with AI.
Dream Machine, an offshoot of Luma Labs’ 3D modeling AI, is a powerful web-based video generator. Unlike OpenAI’s much-hyped Sora, it’s now available to the public after extensive beta testing. The familiar text and image interface promises high-quality, realistic videos, though no background audio or music is included, and NSFW content isn’t supported. While some of the creations others have shared are certainly impressive, how practical are they for the average user? I went hands-on with the latest model to see how it works, what it can accomplish, and whether it meets the standards of other generative AI tools like Runway and Kling.
Luma Dream Machine Pricing System
1- Free plan:
This introductory tier is ideal for those just starting or experimenting with AI video creation. It includes up to 30 generations per month at no cost, allowing users to explore the core features.
2- Standard plan:
At $29.99 per month, this plan increases the number of available generations to 120.
3- Professional plan:
For professional creators who need more powerful capabilities, the Professional plan offers 400 generations per month for $99.99.
4- Premier plan:
At $499.99 per month, this is the most advanced plan, offering a significant increase to 2,000 generations per month.
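Taking the listed figures at face value, the effective price per generation is essentially flat across the paid tiers. A quick sketch (plan labels here are shorthand for the tiers above):

```python
# Price per generation for each paid tier, using the figures listed above.
plans = {
    "Standard":     (29.99, 120),     # (monthly price in USD, generations)
    "Professional": (99.99, 400),
    "Top tier":     (499.99, 2000),
}

for name, (price_usd, generations) in plans.items():
    print(f"{name}: ${price_usd / generations:.3f} per generation")
# Each tier works out to roughly $0.25 per generation.
```

So the higher tiers buy volume and priority rather than a bulk discount.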
Luma Dream Machine Features Overview
The app has a user-friendly interface. Beginners can navigate its features easily. Simply type in your prompt or insert a reference image to get started. That’s it. You can view your video gallery and account plan. In the video gallery, you can see finished and queued videos. How long is the queue? According to the website, 120 frames in 120 seconds. However, if you plan your popcorn break around that estimate, you may end up hungry.
In practice, with more and more enthusiastic artists testing the limits of their free credits, the system feels a bit overloaded. The result? A wait time that feels like a thriller, constantly keeping you guessing. In my own experience, a video that was cheerfully promised in 9 minutes decided to take the scenic route, stretching those 9 minutes into a comfortable 2 hours. And just when you think you’ve nailed the timing, it’s a rollercoaster of expectation management, jumping from 10 minutes to 2 hours and then maybe back down.
1- Luma Dream Machine Quality:
When evaluating the quality of Dream Machine’s output, it’s clear that this AI tool stands above many others in the video creation space. The videos produced are impressive, especially when you consider the relative infancy of this technology. While the results may not be ready for the big screen (yet), they are undoubtedly transformative for the industry.
The app delivers solid performance, generating videos that often exhibit greater fidelity and consistency than what’s typically seen from current AI-powered tools. Textures, colors, and video dynamics hold up well across a variety of artistic styles, but perform best in realistic settings.
2- Text to video with Luma Dream Machine:
Let’s start with text-to-video prompts. This was one of my first generations, and I wanted to capture someone entering a room after a fire. I like the camera movement because you get a wobbly feeling, as if someone is entering the room, and you can feel the footsteps. I love the quick camera shot that gets the adrenaline pumping. Overall, it’s nice, but there’s a lot of artifacting that doesn’t fit, especially toward the end of the shot.
Next, I tried creating some animations, and I must say I was amazed. There’s a lot of color, a lot of movement, and it’s really cohesive. What I really don’t like is the inconsistent human anatomy; the changes happen too quickly and the bodies visibly morph. Another issue was the anatomy of the hands, a perennial AI problem: I got extra fingers and some oddly shaped hands. Other than that, the cohesion of the clip was pretty good.
I’ll always love landscape shots, so you can see one example of that. This one was a drone shot, and I like the way it turned out, but in some parts the image is overexposed, while the rocks and cliffs look too dark and lack any kind of texture, especially in the distance, where everything seems to blend into a big blur.
One of the elements I like to incorporate into my photography, and now videos, is the use of perspective. I’m a fan of Kubrick, especially how he flows through all those perspective shots and gives you an emotional feel. Here we have my attempt at creating horror, with a person running and then crossing into a parallel world. I was wondering if Luma would be able to keep up with a person running in a straight line from a single perspective to make it look more eerie. I can’t say I’m really satisfied with the result, because it’s choppy and the people blend together, but overall the feeling and atmosphere are very liminal, which is the effect I wanted to achieve.
3- Image to video with Luma Dream Machine:
I’ve seen a lot of users on X and Instagram using their photos and inserting them into Luma to get better results, subjects, and images they want. So, I put some Midjourney photos into Luma to see how it went. For the first one, I inserted a photo of an alien landscape, and I like the way it starts moving as if someone is hiding behind rocks and watching. It gives me a bit of a Star Wars vibe, which wasn’t my intention. However, I like the details—how Luma preserved the details and texture of the stones, buildings, and distant space.
I used one of my favorite images and fed it into Luma, and this one blew my mind. I didn’t invest in the prompt, so it wasn’t very descriptive. I like the way the reflections move and the way it sums up the story in 5 seconds. This is probably the result I’m most proud of. I tried to have a Victorian ghost descending the stairs, and it worked well because it maintained the tone and atmosphere of the image. However, you can see the jerky motion of the ghost descending the stairs, and at the end of the video the ghost’s facial expression changes, which I didn’t prompt for. This got me thinking about how such unplanned details might shape future stories.
Overall, I think Luma is a great tool in development with a lot of potential. It’s sure to expand significantly in the near future, and we can expect many improvements as the team listens to users’ needs. However, I don’t think AI-generated videos will completely replace cinematography anytime soon, because we still can’t control every movement, every logical aspect, and every detail we want. As long as we’re human and want to control every detail, AI won’t give us the results we want. That said, I believe Luma and similar tools will significantly reduce production costs and open up more creative opportunities, enabling the creation of things we never imagined would become reality.
Luma Dream Machine is a text-to-video model created by Luma Labs. Some examples of the fantastical and imaginative scenes featured on their website include a realistic polar bear strolling through a serene winter landscape, butterflies with vibrant wings fluttering around an antique TV in an enchanted forest, and a man walking along an otherworldly beach as the sun sets in a stunning array of colors.
Like OpenAI’s Sora, Dream Machine allows users to enter a text prompt to generate a video that matches the input. What happens behind the scenes? We’re not entirely sure, as details of the model’s training data aren’t public. But anyone can try it for free right now, so as one of our resident video experts, I decided to give it a try to see what it can do. Free users can generate up to 30 videos per month, but no more than 5 per day. Each generated video is 5 seconds long, but you can extend it for additional credits. At the time of testing (June 2024), the model took between 10 and 45 minutes to generate a single video.
Since the huge hype surrounding OpenAI’s Sora, hardly a month goes by without the announcement of a new AI video generator. This time, we’re taking a look at the Dream Machine from Luma AI. According to the product page, the newly launched model creates high-quality, realistic videos from text, and does so quickly. But what’s most exciting about this generator is that anyone can try it for free right now. Let’s give it a try, shall we?
This isn’t the first time we’ve written about Luma AI. I’m a big fan of the automated 3D scans that users can create from simple smartphone videos. In my opinion, this feature is particularly useful for site exploration (you can see the full workflow demonstrated in this video post). The developers even call themselves a “3D AI company,” so it was unexpected to see them join the video creation race. But then again, perhaps they can translate their knowledge and tons of scanned footage into a working model. You can’t know unless you try.
What Luma AI’s Dream Machine Promises
In the description, Luma AI presents the Dream Machine as a high-quality text-to-video (and image-to-video) model capable of generating accurate, consistent, and action-packed footage. They also praise its impressive speed: the neural network can allegedly generate 120 frames in 120 seconds (note: my tests showed this isn’t always the case, as some generations took up to 7 minutes). Another notable feature of this tool is its consistency, as the Dream Machine understands how people, animals, and objects interact with the physical world. This allows you to create videos with impressive character consistency and accurate physics.
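To put the "120 frames in 120 seconds" figure in context: a 5-second clip at 24 fps (my assumption; Luma doesn’t state the frame rate on the product page) is exactly 120 frames, so the claim amounts to about one second of compute per output frame:

```python
# Contextualizing Luma's "120 frames in 120 seconds" throughput claim.
clip_length_s = 5        # Dream Machine clips run about 5 seconds
fps = 24                 # assumed frame rate; not stated on the product page

frames_per_clip = clip_length_s * fps
claimed_seconds = 120    # advertised generation time

print(f"frames per clip: {frames_per_clip}")                            # 120
print(f"compute per frame: {claimed_seconds / frames_per_clip:.1f} s")  # 1.0 s
```

By that reading, my slowest test (about 7 minutes) ran at roughly three and a half times the advertised pace.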
Current Limitations of Luma AI’s Dream Machine
This AI video generator (like others already on the market) can have significant difficulty handling human or animal movement. Try creating a running dog, and you may notice that it doesn’t move its paws at all. In its current version, Luma AI’s Dream Machine cannot insert or generate coherent, meaningful text. Morphing is also a recurring problem: objects may change shape during complex movements or actions.
Flexibility is also currently limited: you can’t create clips longer than 5 seconds from the start, add negative prompts, or change the aspect ratio, at least for now. The developers mention in the FAQ section that they are working on additional controls for future versions of Dream Machine and are open to feedback on their Discord channel.
Luma AI’s Dream Machine is now available for trial
Overall, Luma AI’s Dream Machine seems more advanced than other AI-based video generation software I’ve tested so far. The consistency of results is higher, people’s faces look more realistic, and the motion isn’t bad either. However, it’s still a far cry from what OpenAI’s Sora promises to deliver. But as long as we can’t get our hands on Sora, promises remain promises. Currently, free users get 5 generations per day, up to 30 per month. There are also paid plans that offer more monthly generations (starting at 120) along with watermark-free, royalty-free downloads. You can access the tool at https://lumalabs.ai/dream-machine.
What are your first impressions of Luma AI’s Dream Machine? Have you tried it yet? We realize there’s a huge discussion surrounding AI-powered video generation software in our industry. What do you think? Let’s chat in the comments below and share the article anywhere. Learn more great tools here: https://tech.khutana.com.
Frequently Asked Questions about Luma AI’s Dream Machine
Are there any planned changes I should be aware of for Dream Machine from Luma AI?
Yes, we’re constantly updating Dream Machine with new features and improvements. You can stay up to date on all upcoming changes and announcements by visiting the Change History section of our website. This section provides details about recent updates and bug fixes, and a quick overview of upcoming updates on the roadmap.
How do I provide feedback or report a bug in Luma AI’s Dream Machine?
We value your feedback! You can share your ideas and suggestions or report any bugs directly to our team by emailing support+web@lumalabs.ai for web or support+ios@lumalabs.ai for iOS. We also welcome user suggestions and would love to hear what features you’d like. Contact us to help us shape the future of Dream Machine!
Where can I find Luma AI Dream Machine tutorials and guides?
We’re creating a comprehensive learning hub for Luma, including tutorials, guides, and demonstration videos for mobile and web users. You can access these resources at lumalabs.ai/learning-hub. The tutorials will be organized into sections, covering everything from beginner tips to advanced features, helping you master the Dream Machine at your own pace.
How long do generations take in Luma AI’s Dream Machine?
The time it takes to create content in Dream Machine may vary based on several factors:
Image generation: Typically takes 20-30 seconds for each batch of 4 images.
Video Creation: Typically takes 30-45 seconds for a set of two videos.
Extended videos: May take up to 60 seconds, depending on the length and complexity of the scene.
If content creation takes longer than expected, it may be due to heavy server load or a complex request. Users on higher plans (Plus and Unlimited) receive priority in the queue, which helps speed up the process.