ElevenLabs - The Voice Experiment I Was Most Nervous to Try

Of all the AI tools I experimented with during production of Episode 5, ElevenLabs was the one I hesitated over the most. Not because it’s hard to use (it’s actually incredibly user-friendly), but because voice acting is such an intimate part of storytelling. A voice carries personality, emotion, timing, humor… all the little things that make a character feel alive.

Curiosity won out over the nerves. And… wow. o.0 It surprised me.

It’s dramatically more affordable than hiring voice talent would be for a hobby filmmaker, and the number of available voices is huge, ranging from “natural radio host” to “villain monologue” to “sweet elderly neighbor.” The beta V3 model genuinely shocked me; with good prompt writing, I was able to create realistic laughs, sighs, shouting, sarcastic tones, fear, even subtle emotional shading. Not perfectly - but impressively.
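For the technically curious: that kind of emotional steering can also be scripted. Below is a minimal sketch assuming the ElevenLabs Python SDK and V3’s bracketed audio tags - the API key, voice ID, and model name are placeholders you’d swap for your own, not something copied from my actual workflow.

```python
# A rough sketch, not an exact workflow: scripted TTS with V3-style audio tags.
# pip install elevenlabs -- the API key, voice_id, and "eleven_v3" model name
# below are placeholders/assumptions; check your own account for real values.
from elevenlabs.client import ElevenLabs
from elevenlabs import save

client = ElevenLabs(api_key="YOUR_API_KEY")

# Bracketed audio tags are how V3 prompts carry emotion: laughs, sighs,
# sarcasm, shouting, whispering, and so on.
line = (
    "[sarcastic] Oh, wonderful. Another brilliant plan. "
    "[sighs] Fine. [whispers] But if this goes wrong, it's on you."
)

audio = client.text_to_speech.convert(
    voice_id="YOUR_VOICE_ID",  # any voice from your library
    model_id="eleven_v3",      # the beta V3 model discussed above (assumed ID)
    text=line,
)
save(audio, "namara_line.mp3")
```

The bracketed tags are where the emotional shading actually happens; everything else is boilerplate.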

The part that made my stomach drop a little? As amazing as this is for creators on a budget, it raises very real questions for the acting community. If a tool can build a voice model based on recordings, what does that mean for voice actors in the long run? I’m a huge Critical Role fan, and I think about how much talent and training goes into performance. The idea of technology encroaching on that space genuinely concerns me. This is the only AI tool I’ve tried that made me pause and think,
“Okay… this one could actually change things in ways we aren’t ready for.”

But there was one part of ElevenLabs that eased some of that fear: They offer a payment structure for voice providers. People who choose to contribute their voices to the platform can earn royalties whenever their model is used. Yes, the payouts might be small - maybe pennies at a time - but they’re also passive income for work already done. Over time, those pennies can stack up. And most importantly, it shows an attempt (however early) to create a system where voice talent is acknowledged and compensated.

I’m not saying it’s perfect. I’m saying it’s something. And in a world where AI often feels like it’s taking without giving back, “something” matters.

I also think that if you are going to use an AI voice in a scene, you should try to pair it with a human voice actor. AI can sound ‘robot-y’… even when it doesn’t sound ‘robot-y’. Pairing it with a human softens the AI and makes the entire scene more believable overall.

For NAMARA, ElevenLabs let me explore vocal styles that would have been financially impossible otherwise. It didn’t replace the heart of my actors - but it did give me room to play, experiment, and shape emotional beats more intentionally. I was able to ‘cast’ parts I hadn’t had voices for, which let me keep the story moving forward. The tool isn’t magic; it still requires careful prompting, tuning, and plenty of listening.
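If you’re wondering what “tuning” looks like in practice, here’s a hedged sketch of the kinds of knobs the SDK exposes per request. The numbers are illustrative guesses, not recommended presets, and the voice ID is again a placeholder.

```python
# A hedged sketch of the "tuning" knobs: per-request voice settings in the
# ElevenLabs Python SDK. The values here are illustrative, not presets.
from elevenlabs.client import ElevenLabs
from elevenlabs import VoiceSettings, save

client = ElevenLabs(api_key="YOUR_API_KEY")

audio = client.text_to_speech.convert(
    voice_id="YOUR_VOICE_ID",
    model_id="eleven_multilingual_v2",
    text="You came back. I didn't think you would.",
    voice_settings=VoiceSettings(
        stability=0.4,         # lower = more expressive, less consistent
        similarity_boost=0.8,  # how closely output tracks the source voice
        style=0.3,             # exaggerates the voice's delivery style
        use_speaker_boost=True,
    ),
)
save(audio, "take_02.mp3")
```

In my experience the “plenty of listening” part is the real work: small settings changes can flip a line from flat to overacted, so expect to generate multiple takes.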

Am I switching to AI voices forever? No.
Am I open to using them strategically when it helps tell the story? Yes.

Just like with Midjourney, this is another tool in the toolbox — nothing more, nothing less. The soul of NAMARA still comes from the characters, the world, and the storytelling. Technology just helps me bring those pieces to life with the limited resources we have.

And as always, I respect that everyone feels differently about AI. My goal isn’t to convince anyone to use it — only to share what I learned and why I made the choices I did.

Disclaimer:
Yes, I occasionally experiment with AI tools. No, NAMARA is not secretly being taken over by robo-actors or synthetic face puppets. I’m still a builder and machinima nerd at heart, and 95% of what you see is exactly what came out of Second Life - clipping, forgotten mouse cursors, stubborn animations and all. I respect that everyone has different comfort levels with AI, ranging from “Hooray robots!” to “Kill it with FIRE!!” My goal is simply to share my own experiences transparently, not to suggest what anyone else should or shouldn’t use.
