Here’s How I Finally Wrapped My 9th Grade Film Thanks to AI Video Generation

When I was in 9th grade, I joined a school project with some friends. We were going to shoot a science fiction mini movie around Central Park in New York City. We wrote part of our time travel script and discussed the many logistics and locations we’d shoot in.
Young Filmmakers on the Streets of New York?
I remember we were going to feature a tall, black obelisk that at the time stood at the entrance to Central Park at 59th Street and 5th Avenue. The sculpture would be the ‘time portal’ our characters would walk toward and disappear through. Clever editing would avoid the need for special effects.
We were in ‘preproduction’ that spring, and it would have been a spectacular time to film on the streets of New York. Though we were all inspired by the potential of our little project, most of us eventually realized just how complex making a movie is and how long it would really take to pull off. Still, I felt undeterred. But the others had a different (more realistic) view.
Our project started losing steam, and ultimately, our short flick never got out of development. It was simply too big a lift. A few months later, we all graduated, and that was it.
My Origin Story that Never Happened
This would have been my origin story as a fifteen-year-old filmmaker, but it was not to be. (Instead, a year later, I found a more structured opportunity to explore my video production interests in high school.)
But I’ve never forgotten about my first student movie short that never was. That obelisk scene is seared into my long-term memory. I really wanted to capture that shot. I saw it so clearly.
I still do.
AI Video Generation Can Bring Your Vision to Life
Over the decades, I’ve occasionally found myself returning to the nagging sadness that we never finished our movie. Heck, we never started it!
But if I could somehow go back to the future and capture that obelisk scene, maybe I could check it off my bucket list.
Well, now I can… from the comfort of my home office with a little text-to-video prompting and the power of AI video generation.
Yes, the magic of Gen AI is transforming our existence on a daily basis. And yes, it can now enable me to finally manifest my dusty vision out of thin air.
So that’s exactly what I decided to do.
There are multiple platforms that are up to the task. I decided to use Google’s Veo 3.1 and Flow/Scenebuilder. So, I signed up for the Google AI Pro plan for twenty bucks a month. I felt that would give me enough generative AI credits for what would be a 30-second scene.
Text to Image Prompting
First, I created still images of my three main characters using Google Whisk and its text-to-image generation powers:
The Leader

Second in Command

The Nerd

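To give a sense of the process, a Whisk prompt is just a short descriptive sentence. Something along these lines (an illustrative example, not my exact wording) is enough to generate a usable character still: “A confident leader in his thirties wearing a dark trench coat, standing at the edge of an autumn park, cinematic lighting, photorealistic.”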
Text to Video Prompting in Scenebuilder
Any remnants of our original script were long gone, but as I’ve said, the obelisk imagery remained clearly in my mind.
I’ve admittedly updated the characters (away from a few school kids) and added a few lines (current scriptwriter’s prerogative). Yes, these AI characters can talk!
Then, I uploaded the images of my AI actors and began typing in prompts for individual shots around this one scene. I relied on the ‘Scenebuilder’ mode to retain the same characters and background from shot to shot.
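For a sense of scale, a single shot prompt reads like a compressed slugline plus stage direction. An illustrative example (not one of my actual prompts): “Wide shot, late afternoon: the three characters approach a tall black obelisk at the park entrance. The Leader touches its surface and says, ‘This is it.’ Slow push-in, handheld, subtle film grain.”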
Veo 3.1 is impressive, but it also hallucinated a fair amount, adding new scripted lines, a few of which I ended up using.
“The Portal in Central Park,” My AI-Generated Movie Scene
And here’s my completed 30-second scene, “The Portal in Central Park”… finally ready for its premiere all these decades later.
Imperfect, Yet Simultaneously Stunning
Okay. This is not exactly going to win any awards, and it does look rather fake (though not entirely fake… it could easily serve as an early draft for a pitch to do a real shoot).
And I also found myself struggling to get precisely what I wanted. (Perhaps that’s due to the limitations in my basic text prompting skills.) Strangely, I felt like a director arguing with live actors who didn’t want to follow my direction.
As I mentioned, I ended up keeping a couple of those hallucinations as the actors’ improv. So, this scene isn’t exactly what I originally envisioned, but it’s close.
The background music is also AI-generated through Google’s MusicFX platform. I just typed in… “A cinematic feeling piece of music suggesting that time is running out. Exciting violins. Medium tempo.”
Click. One try is all it took.
That’s a Wrap!
Ultimately, I found it amazing what I was able to accomplish in just a few hours. That said, I edited the clips together manually in Final Cut Pro. This part still required (for now) nuanced timing and a human touch.
Each clip took about a minute to generate using Veo 3.1 Fast mode. And yes, there were many that ended up on the cutting room floor.
But as imperfect as the results were, I can still say I successfully brought my teenage cinematic vision ‘to life.’
The Future of Visual Storytelling
But I must admit there’s more to this exercise than completing the big scene from an old school project that I’m sure my former classmates have long forgotten about.
The truth is I’m back to where I started as a teenager. I still feel the creative passion to bring stories to life, but I again need to learn how to use the tools available to me.
And that’s exactly what I’m doing.
For twenty bucks, you and I can conjure up complete videos with stories and characters based on simple text prompts. It feels entirely like a fantasy. But it’s not.
The only part of the process that feels normal is this: the power of the written word is as strong as ever.
Keep It Real
We’re clearly in the middle of a creative revolution. If you want to keep up, there’s no time to lose.
Learn how to use these new AI-fueled creative tools, which will only continue to improve… There are countless reasons to start now.
…Or else you may find yourself eventually becoming the hallucination on the cutting room floor.