AI Filmmaking
April 13, 2026 · 9 min read

How I Made a Blockbuster AI Short Film on a $125/Month Subscription

No crew. No studio. No film school. No permission.

I'm DAJAI.IO — an independent hip-hop artist from Las Vegas. On April 13, 2026, I released SIMULATION: A DARK Library Film. It's a 7-minute narrative short film about a man who discovers he's the only conscious being inside a simulation, and that the music he created is the only thing the system cannot replicate.

The entire film was made with AI video generation tools on a $125/month Higgsfield Creator subscription. Total spend across two months of production: roughly $250.

This is not a tech demo. This is a story. And this is exactly how I made it.

Why I Made This Film

SIMULATION is part of the DARK Library — an audiobook-album hybrid series I created that transforms classic texts into sonic experiences. Three albums dropped in ten days.

The Simulation album explores the idea that reality is a constructed layer and music is one of the few things that breaks through it. The album came first. Then I realized the concept was too visual to stay audio-only. The album became a film.

I didn't plan to make a movie. The music demanded it.

The Tools

Everything was done through Higgsfield's platform on their Creator plan at $125/month. Here's the full stack:

Video Generation Models

Seedance 2.0 (ByteDance) — The primary engine. Seedance consistently produced the most cinematic output. The motion quality, lighting comprehension, and temporal coherence were leagues ahead of what I'd seen six months earlier. About 70% of the final film is Seedance shots.

Kling 3.0 (Kuaishou) — Used for secondary shots and when I needed a different motion style. Kling excels at slower, more deliberate camera movements. I used it for the ambient sequences and some of the close-ups.

Veo 3.1 (Google) — Environmental and atmospheric sequences. Veo handles wide shots and environmental lighting better than anything else I tested. The opening frequency test sequence is Veo.

Cinema Studio 3.0 — Higgsfield's own scene composition tool. Used for assembling multi-element shots and adjusting composition after generation.

Character Consistency

Soul ID — This is the critical piece. Without consistent character faces across shots, you don't have a film — you have a slideshow. Soul ID let me lock character references so that DAJAI.IO, Miko Melts, BB Monroe, and Solana Conejo look like themselves in every single frame.

The workflow: generate a hero image of each character using Soul model with detailed prompts, lock it as a Soul ID reference, then use that reference for every subsequent generation featuring that character.
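That workflow can be sketched as a small data structure. Higgsfield's actual API isn't shown here, so the `CharacterLock` class, the `soul_id_ref` field, and the `[soul_id:…]` prompt tag are all illustrative assumptions — the point is that the reference is frozen once and reused verbatim in every shot:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the hero reference never changes once locked
class CharacterLock:
    name: str
    soul_id_ref: str       # ID of the approved hero image (hypothetical field)
    base_descriptors: str  # prompt fragments that travel with every shot

def build_shot_prompt(lock: CharacterLock, shot_description: str) -> str:
    """Every generation featuring a character reuses the same locked reference."""
    return f"[soul_id:{lock.soul_id_ref}] {lock.base_descriptors}, {shot_description}"

dajai = CharacterLock(
    name="DAJAI.IO",
    soul_id_ref="hero_dajai_v7",  # picked from the 15-20 generated variations
    base_descriptors="warm tungsten interior light, 35mm film grain",
)

prompt = build_shot_prompt(dajai, "static medium shot, contemplative")
```

Because the dataclass is frozen, any attempt to swap the reference mid-production raises an error — the code enforces the "immutable reference" rule from the workflow above.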

This is the difference between a tech demo and a narrative film. Consistency is everything.

Post-Production

DaVinci Resolve — All assembly, color grading, sound design, and export happened in Resolve. Free version. The color grade was critical — I needed visual cohesion across shots generated by three different AI models with different color science.

Pre-Production: Character Lock

Before generating a single video frame, I spent two days on character design.

The Cast

Each character got a Soul ID hero image. I generated 15-20 variations of each and selected the one that felt most like who the character needed to be. Then that image became the immutable reference for every shot.

Production: Shot by Shot

The Multi-Model Strategy

Here's what nobody tells you about AI filmmaking: no single model is best at everything. Seedance does cinematic motion beautifully but sometimes struggles with hands. Kling handles slow camera movements better. Veo does environments that the others can't match.

I treated each model like a different lens in my camera bag. The right tool for the right shot.
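The "different lens" approach amounts to a routing table from shot type to model. The table below is my paraphrase of the strengths described above, not a property of the models themselves:

```python
# Minimal shot router. Routing rules reflect the article's observations:
# Seedance for cinematic motion, Kling for slow/deliberate moves and
# close-ups, Veo for wide environmental shots.
ROUTING = {
    "cinematic_motion": "Seedance 2.0",    # primary engine, ~70% of shots
    "slow_camera":      "Kling 3.0",       # deliberate, ambient movement
    "close_up":         "Kling 3.0",
    "wide_environment": "Veo 3.1",         # environments and atmosphere
    "composite":        "Cinema Studio 3.0",
}

def pick_model(shot_type: str) -> str:
    # Fall back to the primary engine when no specialist applies.
    return ROUTING.get(shot_type, "Seedance 2.0")
```

The default-to-Seedance fallback mirrors the fact that it carried most of the film.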

Credit Management

The key insight: don't regenerate blindly. When a shot fails, analyze WHY it failed. Was the prompt too vague? Was the character reference not matching? Diagnose, adjust, regenerate. You don't get lucky — you get specific.
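The diagnose-adjust-regenerate loop can be sketched as a retry wrapper that refuses to spend another credit until the failure has been named. The `generate` callable and the failure categories here are illustrative assumptions, not Higgsfield behavior:

```python
# Sketch of "don't regenerate blindly": every failed attempt must be
# diagnosed with a known cause before the prompt is adjusted and retried.
FAILURE_CAUSES = {"vague_prompt", "reference_mismatch", "bad_motion"}

def generate_with_diagnosis(generate, prompt, max_attempts=3):
    log = []
    for attempt in range(1, max_attempts + 1):
        result = generate(prompt)
        if result["ok"]:
            return result, log
        cause = result["cause"]
        if cause not in FAILURE_CAUSES:
            raise ValueError(f"undiagnosed failure: {cause!r} - name it first")
        # Adjust the prompt based on the diagnosis; don't just reroll.
        prompt = f"{prompt} (fix: {cause})"
        log.append((attempt, cause))
    return None, log

# Fake generator for illustration: fails once on a vague prompt, then succeeds.
attempts = {"n": 0}
def fake_generate(p):
    attempts["n"] += 1
    if attempts["n"] == 1:
        return {"ok": False, "cause": "vague_prompt"}
    return {"ok": True, "cause": None}

result, log = generate_with_diagnosis(fake_generate, "man at desk")
```

The log doubles as a record of what kinds of prompts waste credits, which is how "you get specific" compounds over a production.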

Prompt Engineering for Narrative Film

AI video prompts for films are fundamentally different from prompts for standalone clips. You need:

  1. Consistent lighting language — I used "warm tungsten interior light, soft shadows, 35mm film grain" as a base for every indoor shot
  2. Camera language — "slow push-in," "static medium shot," "handheld close-up" — be specific about the camera behavior
  3. Emotional direction — "contemplative," "uneasy," "intimate but distant" — the model responds to emotional cues
  4. Negative prompts — "no text overlays, no watermarks, no sudden camera movement" — tell it what NOT to do
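The four components above combine naturally into a prompt template. The base strings come straight from this list; the join format and the `--no` negative-prompt separator are my assumptions, since different models accept negatives in different ways:

```python
# Prompt assembler for the four components: lighting base, camera language,
# emotional direction, and negative prompts.
BASE_LIGHTING = "warm tungsten interior light, soft shadows, 35mm film grain"

def build_prompt(camera: str, emotion: str,
                 lighting: str = BASE_LIGHTING,
                 negatives=("no text overlays", "no watermarks",
                            "no sudden camera movement")) -> str:
    positive = f"{lighting}, {camera}, {emotion}"
    return f"{positive} --no {', '.join(negatives)}"

p = build_prompt("slow push-in", "contemplative")
```

Keeping the lighting base as a shared default is what makes every indoor shot start from the same visual language.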

The Hardest Shots

Shot 13 — Solana's hands sliding headphones across the desk — took five attempts. Hands are still the hardest thing for AI video. The solution: generate the hand motion separately from the face, then composite the best of each in Resolve.

Post-Production

Color Grade

Three different AI models means three different color palettes. Without grading, the film looks like a compilation, not a story. My approach:

  1. Base LUT: Custom LUT with crushed blacks, warm midtones, slightly desaturated highlights
  2. Per-shot corrections: Matched skin tones across models, adjusted exposure for consistency
  3. Scene-specific looks: Miko got cooler clinical grade. Solana got warmer. The dissolution went hyper-saturated before cutting to black.
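The base look — crushed blacks, warm midtones — corresponds roughly to a per-channel lift/gamma/gain operation, the standard primary correction model in Resolve. A minimal sketch; all numbers are illustrative, not the actual grade:

```python
# Per-channel lift/gamma/gain sketch of the base look: negative lift crushes
# blacks, red/green gamma pushed above blue warms the midtones.
def grade(value: float, lift: float, gamma: float, gain: float) -> float:
    """value in [0, 1]; standard lift-gamma-gain ordering."""
    v = value * gain + lift          # gain scales the signal, lift offsets blacks
    v = min(max(v, 0.0), 1.0)        # clamp before the power curve
    return v ** (1.0 / gamma)        # gamma > 1 brightens midtones

def grade_pixel(r, g, b):
    # Crush blacks slightly on all channels; warm midtones by raising
    # red/green gamma relative to blue.
    return (grade(r, -0.05, 1.10, 1.0),
            grade(g, -0.05, 1.05, 1.0),
            grade(b, -0.05, 0.95, 1.0))

black = grade_pixel(0.0, 0.0, 0.0)
mid = grade_pixel(0.5, 0.5, 0.5)
```

Applying one such base correction to every shot, then matching skin tones per model on top, is what pulls three color sciences into one look.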

Sound Design

Beyond the album tracks, I added room tone for spatial depth, transition effects between scenes, and a low-frequency hum that builds throughout — the simulation's heartbeat.
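The building hum can be sketched with the standard library alone: a low-frequency sine whose amplitude ramps linearly over the runtime. The 40 Hz frequency and linear ramp are my assumptions — the article only says "low-frequency hum that builds":

```python
import math

# The "simulation heartbeat": a low-frequency sine whose amplitude grows
# from silence to full level across the duration.
def hum_samples(duration_s=420.0, freq_hz=40.0, sample_rate=44100):
    total = int(duration_s * sample_rate)
    out = []
    for i in range(total):
        t = i / sample_rate
        envelope = t / duration_s            # linear build across the film
        out.append(envelope * math.sin(2 * math.pi * freq_hz * t))
    return out

samples = hum_samples(duration_s=1.0)  # short render for illustration
```

The resulting float samples can be scaled to 16-bit PCM and written out with the stdlib `wave` module, then dropped under the mix in Resolve.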

Total Cost Breakdown

Item                                   Cost
Higgsfield Creator Plan (Month 1)      $125
Higgsfield Creator Plan (Month 2)      $125
DaVinci Resolve                        Free
Audio (self-produced)                  $0
Crew                                   $0
Studio rental                          $0
Total                                  ~$250

No, I didn't use a $4,000/month enterprise plan. No, I didn't have a team. No, I didn't go to film school. I had a MacBook, a subscription, and a story that wouldn't let me sleep until I told it.

What This Means

A year ago, making a narrative short film required tens of thousands of dollars, a crew, locations, permits, and months of post-production. Today, one person with a laptop and a story can produce something that stands next to traditionally produced content.

I'm not saying AI replaces filmmakers. I'm saying it removes the barriers that kept most people from ever becoming filmmakers in the first place.

I'm a rapper from Las Vegas. I make music on sovereign AI infrastructure I built in my apartment. And now I make films too.

The simulation is cracking.

ASCAP Writer IPI: 773316238
Publisher: CODE BLACK CBA PUBLISHING (IPI: 773567992)