I tried AI and was too afraid to talk about it…

The title of this article pretty much sums up the main point of this post, and I think it’s important to talk about. If you don’t know me, I’m a machinima hobbyist. I have a normal job, a normal life. I’m not rich or even “well off,” but I am creative and resourceful, and I’ve been wanting to tell the story of NAMARA for many years now.

Like most hobby filmmakers, I don’t have millions of dollars to hire an animation studio or build a massive backlot. Our little “two-person” production company works with a tiny budget and the massive inventories we’ve carefully collected in Second Life. It may sound strange, but building and filming is what most of my SL experience looks like these days. I’ve been on and off the grid for almost two decades. Many of the friends I’ve made have come and gone. These days my IMs are quiet, and I spend a huge portion of my time shifting prims, adjusting textures, and trying to mold virtual spaces into places where stories can breathe. That reflects who I am in real life, too. I’m a builder and a creator. I like making beautiful things and sharing beautiful experiences. And when people are curious or adventurous, I love showing them what I’ve learned so they can make beautiful and adventurous things of their own.

During one of my breaks from SL (if you know, you know), I spent a few months diving into Unreal Engine. I wanted to see whether I could tell the story of NAMARA there, and honestly? It blew my mind. Especially the MetaHuman system. It's an industry standard - people use motion capture and even their phones to drive facial animation with eerily lifelike results. To stand in my living room, record a sentence on my iPhone, and then upload it to animate Romy's face - with emotion, with fear in her eyes, with a real smile - it was surreal.

And yes, a lot of that magic is powered by AI.

So when Gonje and I talked about coming back to Second Life to continue NAMARA, I'll admit I felt torn. Part of me didn't want to set aside Unreal Engine because I could see how fast it was evolving. But I also missed my 80,000-item inventory, my familiar building tools, my scripts, my comfort zone. Not to mention that Second Life is beautiful. Our avatars, overall, look better than most video games - and are insanely customizable.

Technology has always reshaped industries - especially creative ones. My real life has taught me the importance of watching technology evolve. I joke that I'm from the "Skynet era," but I've genuinely watched computers go from Oregon Trail to… whatever this new AI landscape is becoming. My last job was in tech - a job that was literally replaced by AI. I had to change industries and learn something entirely new because AI swept through faster than anyone predicted. AI is showing up everywhere, including creative fields many of us care deeply about. Not all of what it does is good, but some of it is undeniably impressive. Ignoring that doesn't protect us. Pretending it's not evolving also doesn't make it less real.

That’s part of why I hesitated to write this article.

I've watched this shift firsthand - even in my own career - as certain tasks that used to require a team can now be handled by a tool. That doesn't mean human creators stop mattering; it means the landscape shifts, just like it did when CGI replaced miniature sets, or when digital photography replaced film, or when motion capture replaced extras in large battle scenes. Yes, those transitions left some artists jobless - and yes, they also became part of how modern storytelling works.

I don’t pretend to know exactly what this shift will look like in five or ten years; I don’t think anyone truly does. That being said, I do think it’s important for creatives - especially those of us who work in digital spaces - to understand what these tools can and can’t do. Not because AI replaces artistry, but because awareness helps us protect and strengthen the parts of the process that are uniquely human. 

For me, exploring AI wasn’t about trying to bypass the craft that makes Second Life special, it was simply about understanding what new tools exist and deciding, with intention, which ones actually support my process and which ones don’t. That’s it. Just informed creativity - nothing more dramatic than that.

And I also fully respect that everyone’s relationship with AI is different. People have different histories, different livelihoods, different comfort levels, and different boundaries. My goal isn’t to convince anyone to use these tools or to change how they feel. My only goal here is to share my own experience honestly, the same way I would with any other part of the filmmaking process.

To be clear: I’m not out here generating AI meshes for NAMARA. What I did do was test a few tools that claimed to help with very specific filming hurdles I ran into during Episode 5 - MidJourney and ElevenLabs. If you're curious what worked (and what absolutely didn’t), I wrote separate posts about each one.

I hope you’ll still watch the episode when it comes out. It’s not “all AI” by any stretch. Romy and Lilith are still up to no good, and the scenes still reflect the quirks, limits, and beauty of filming inside Second Life. My goal wasn’t to replace the human parts of this process, but instead to understand the tools available and make thoughtful choices about what actually helps me tell a story I love.

Disclaimer:
Yes, I occasionally experiment with AI tools. No, NAMARA is not secretly being taken over by robo-actors or synthetic face puppets. I’m still a builder and machinima nerd at heart, and 95% of what you see is exactly what came out of Second Life - clipping, forgotten mouse cursors, stubborn animations and all. I respect that everyone has different comfort levels with AI, ranging from “Hooray robots!” to “Kill it with FIRE!!” My goal is simply to share my own experiences transparently, not to suggest what anyone else should or shouldn’t use.
