How I Made a Blockbuster AI Short Film on a $125/Month Subscription
No crew. No studio. No film school. No permission.
I'm DAJAI.IO — an independent hip-hop artist from Las Vegas. On April 13, 2026, I released SIMULATION: A DARK Library Film. It's a 7-minute narrative short film about a man who discovers he's the only conscious being inside a simulation, and that the music he created is the only thing the system cannot replicate.
The entire film was made with AI video generation tools on a $125/month Higgsfield Creator subscription. Total spend across two months of production: roughly $250.
This is not a tech demo. This is a story. And this is exactly how I made it.
Why I Made This Film
SIMULATION is part of the DARK Library — an audiobook-album hybrid series I created that transforms classic texts into sonic experiences. Three albums dropped in ten days:
- TOO DARK: The Point of No Return — March 29, 2026
- DARK I: Outwitting the Devil — April 7, 2026
- Simulation — April 8, 2026
The Simulation album explores the idea that reality is a constructed layer and music is one of the few things that breaks through it. The album came first. Then I realized the concept was too visual to stay audio-only. The album became a film.
I didn't plan to make a movie. The music demanded it.
The Tools
Everything was done through Higgsfield's platform on their Creator plan at $125/month. Here's the full stack:
Video Generation Models
Seedance 2.0 (ByteDance) — The primary engine. Seedance consistently produced the most cinematic output. The motion quality, lighting comprehension, and temporal coherence were leagues ahead of what I'd seen six months earlier. About 70% of the final film is Seedance shots.
Kling 3.0 (Kuaishou) — Used for secondary shots and when I needed a different motion style. Kling excels at slower, more deliberate camera movements. I used it for the ambient sequences and some of the close-ups.
Veo 3.1 (Google) — Environmental and atmospheric sequences. Veo handles wide shots and environmental lighting better than anything else I tested. The opening frequency test sequence is Veo.
Cinema Studio 3.0 — Higgsfield's own scene composition tool. Used for assembling multi-element shots and adjusting composition after generation.
Character Consistency
Soul ID — This is the critical piece. Without consistent character faces across shots, you don't have a film — you have a slideshow. Soul ID let me lock character references so that DAJAI.IO, Miko Melts, BB Monroe, and Solana Conejo look like themselves in every single frame.
The workflow: generate a hero image of each character using Soul model with detailed prompts, lock it as a Soul ID reference, then use that reference for every subsequent generation featuring that character.
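That workflow is really just bookkeeping: one immutable hero reference per character, reused in every shot request. Here's a minimal sketch of how that bookkeeping could look — this is my own illustration, not Higgsfield's API, and the paths and prompts are hypothetical placeholders:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: once locked, the reference never changes
class CharacterRef:
    name: str
    hero_image: str   # path to the locked Soul ID hero image (hypothetical)
    base_prompt: str  # the detailed prompt the hero image came from

CAST = {
    "solana": CharacterRef(
        name="Solana Conejo",
        hero_image="refs/solana_hero.png",
        base_prompt="young woman, silent, sliding headphones across a desk",
    ),
}

def shot_request(char_key: str, action: str) -> dict:
    """Pair the locked character reference with a per-shot action."""
    ref = CAST[char_key]
    return {
        "reference_image": ref.hero_image,
        "prompt": f"{ref.base_prompt}, {action}",
    }
```

The `frozen=True` is the point: the hero image is the single source of truth, so every generation featuring that character points back to the same reference.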
Post-Production
DaVinci Resolve — All assembly, color grading, sound design, and export happened in Resolve. Free version. The color grade was critical — I needed visual cohesion across shots generated by three different AI models with different color science.
Pre-Production: Character Lock
Before generating a single video frame, I spent two days on character design.
The Cast
- DAJAI.IO — The Architect: The only conscious being in the simulation. Knows something is wrong.
- Miko Melts — The Loop: The simulation's first attempt at connection. Beautiful, present — but she repeats.
- BB Monroe — The Inversion: Everything about her is slightly wrong — the geometry is off, the timing is uncanny.
- Solana Conejo — The Signal: She never speaks. She slides headphones across a desk. She is the only one who is real.
Each character got a Soul ID hero image. I generated 15-20 variations of each and selected the one that felt most like who the character needed to be. Then that image became the immutable reference for every shot.
Production: Shot by Shot
The Multi-Model Strategy
Here's what nobody tells you about AI filmmaking: no single model is best at everything. Seedance does cinematic motion beautifully but sometimes struggles with hands. Kling handles slow camera movements better. Veo does environments that the others can't match.
I treated each model like a different lens in my camera bag. The right tool for the right shot.
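That "right tool for the right shot" rule can be written down as a simple routing table. This is my own rule of thumb expressed as code, not a platform feature — the shot-type labels are ones I made up for planning:

```python
# Routing table: shot style -> model. Illustrative, based on my testing notes.
MODEL_FOR_SHOT = {
    "cinematic_motion": "Seedance 2.0",   # primary engine, ~70% of final shots
    "slow_camera":      "Kling 3.0",      # slow, deliberate camera movement
    "wide_environment": "Veo 3.1",        # wide shots and environmental lighting
}

def pick_model(shot_type: str) -> str:
    """Default to Seedance, the workhorse, for anything unclassified."""
    return MODEL_FOR_SHOT.get(shot_type, "Seedance 2.0")
```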
Credit Management
- Average attempts per usable shot: 3-5 generations
- Total generations attempted: ~150
- Shots in final film: 40+
- Success rate: About 30% of generations were usable
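Those numbers hang together: at a ~30% hit rate, landing 40 usable shots takes on the order of 135 attempts, which lines up with the ~150 generations I actually ran. The budgeting math is one line:

```python
import math

def attempts_needed(shots: int, success_rate: float) -> int:
    """Expected generation attempts to land a target number of usable shots."""
    return math.ceil(shots / success_rate)

# 40 usable shots at a ~30% hit rate:
# attempts_needed(40, 0.30) -> 134
```

Run this before you start generating, not after your credits run out.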
Prompt Engineering for Narrative Film
AI video prompts for films are fundamentally different from prompts for standalone clips. You need:
- Consistent lighting language — I used "warm tungsten interior light, soft shadows, 35mm film grain" as a base for every indoor shot
- Camera language — "slow push-in," "static medium shot," "handheld close-up" — be specific about the camera behavior
- Emotional direction — "contemplative," "uneasy," "intimate but distant" — the model responds to emotional cues
- Negative prompts — "no text overlays, no watermarks, no sudden camera movement" — tell it what NOT to do
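In practice I composed those four layers the same way every time, so a prompt becomes a template with per-shot slots. A minimal sketch of that composition — the subject, camera, and emotion strings are examples, not a fixed vocabulary:

```python
def build_prompt(subject: str, camera: str, emotion: str) -> dict:
    """Compose a shot prompt from reusable layers (my own convention)."""
    # Base layers shared by every indoor shot, per the list above.
    BASE_LIGHTING = "warm tungsten interior light, soft shadows, 35mm film grain"
    NEGATIVE = "no text overlays, no watermarks, no sudden camera movement"
    return {
        "prompt": f"{subject}, {camera}, {emotion}, {BASE_LIGHTING}",
        "negative_prompt": NEGATIVE,
    }

shot = build_prompt(
    subject="man alone at a mixing desk",
    camera="slow push-in",
    emotion="contemplative",
)
```

Keeping the lighting and negative layers constant is what makes forty shots from three different models feel like one film.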
The Hardest Shots
Shot 13 — Solana's hands sliding headphones across the desk — took five attempts. Hands are still the hardest thing for AI video. The solution: generate the hand motion separately from the face, then composite the best of each in Resolve.
Post-Production
Color Grade
Three different AI models means three different color palettes. Without grading, the film looks like a compilation, not a story. My approach:
- Base LUT: Custom LUT with crushed blacks, warm midtones, slightly desaturated highlights
- Per-shot corrections: Matched skin tones across models, adjusted exposure for consistency
- Scene-specific looks: Miko's scenes got a cooler, clinical grade. Solana's got warmer tones. The dissolution sequence went hyper-saturated before cutting to black.
Sound Design
Beyond the album tracks, I added room tone for spatial depth, transition effects between scenes, and a low-frequency hum that builds throughout — the simulation's heartbeat.
Total Cost Breakdown
| Item | Cost |
|---|---|
| Higgsfield Creator Plan (Month 1) | $125 |
| Higgsfield Creator Plan (Month 2) | $125 |
| DaVinci Resolve | Free |
| Audio (self-produced) | $0 |
| Crew | $0 |
| Studio rental | $0 |
| Total | ~$250 |
No, I didn't use a $4,000/month enterprise plan. No, I didn't have a team. No, I didn't go to film school. I had a MacBook, a subscription, and a story that wouldn't let me sleep until I told it.
What I'd Do Differently
- Start with the shot list. I went in with a loose story and figured out shots as I went. Next time, every shot planned before generating.
- Dedicated character wardrobe. Soul ID locks the face, but clothing varies. Be more specific about wardrobe in every prompt.
- Generate at higher resolution from the start. Upscaling early low-res shots shows.
- Budget more credits for hand shots. Hands are genuinely difficult. Allocate 3x credits for any shot involving hand interaction.
What This Means
A year ago, making a narrative short film required tens of thousands of dollars, a crew, locations, permits, and months of post-production. Today, one person with a laptop and a story can produce something that stands next to traditionally produced content.
I'm not saying AI replaces filmmakers. I'm saying it removes the barriers that kept most people from ever becoming filmmakers in the first place.
I'm a rapper from Las Vegas. I make music on sovereign AI infrastructure I built in my apartment. And now I make films too.
The simulation is cracking.
Publisher: CODE BLACK CBA PUBLISHING (IPI: 773567992)