I Tried Midjourney - Here’s What It Was Like
I first heard about Midjourney through the Second Life forums. I was trying to understand what generative AI actually was, and after a really long thread caught my eye, I figured, “Well, everyone’s talking about it - might as well see what the fuss is about.”
I started with Midjourney, and like most AI programs, it works on a credit system. Creating images or short video clips costs credits depending on your subscription level. There are a lot of ways to use it, including the “type a description and see what it gives you” method. I still don’t pretend to fully understand how the data behind it works (that’s a complex conversation for another day), but from a user standpoint, yes - you can describe something, and it tries to interpret that description visually.
For my own testing, I wasn’t trying to generate random content from scratch. I wanted to explore something more controlled, something that could help me with a production I was actively working on.
Could I use AI animation to help visualize moments that simply aren’t possible in SL using traditional methods?
I decided to try it mostly in the ‘Ho Ho Ho’ scene for Episode 5 of NAMARA. To start, I built and filmed everything in Second Life exactly as I normally do: Gonje did all the shopping for every avatar, and we carefully put together sets and adjusted environment settings. I edited the scenes in Premiere Pro, lined up the dialogue, and then, in a few specific moments, I grabbed still-frame screenshots from the footage, pinpointing the exact frame I wanted to use as the starting point for any AI-generated video extensions. Those stills, taken from my own footage, were the images I fed into Midjourney.
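(A quick aside for anyone curious about the frame-grabbing step: I just exported stills from Premiere Pro, but if you’d rather pull an exact frame out of your footage programmatically, a short Python/OpenCV sketch like the one below would do roughly the same job. The file name and timestamp here are made up for illustration, not taken from my actual project files.)

```python
# Minimal sketch: pull one still frame out of exported footage to use as an
# AI starting image. The file name and timestamp are hypothetical placeholders.
import cv2

SOURCE = "namara_ep5_hohoho.mp4"  # hypothetical export of the edited scene
TIME_MS = 83_500                  # hypothetical moment: 1 min 23.5 sec in

cap = cv2.VideoCapture(SOURCE)
cap.set(cv2.CAP_PROP_POS_MSEC, TIME_MS)  # seek to the desired moment
ok, frame = cap.read()                   # grab that single frame
if ok:
    cv2.imwrite("start_frame.png", frame)  # save the still for upload
cap.release()
```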
Second Life, for all its strengths, has limitations in movement: region crossings, animations, how objects interact with avatars. Faces don’t always animate the way we wish they would, and objects don’t always behave naturally in hands. Machinima makers have little tricks we use to ‘get the point across’ in the story a script is telling, but something as simple as passing an object from one hand to the other isn’t that simple inside Second Life. Anyone who’s watched enough SL machinima learns to spot the little tells that make it obvious the footage isn’t AI generated - hair that never moves, fingers that stay locked, the occasional “frozen smile”, or lips that move when the avatar talks but move in an unnatural way.
For marketing, this kind of thing can be misleading, which is why I’ve always avoided using AI to “sell” anything and why I think Second Life consumers get so upset when creators use AI to enhance their ads for the items they are selling. That feels dishonest to me. But NAMARA isn’t a product; it’s a passion project done by students of film and story crafting. Bay City Studios pays for almost everything we use, and we don’t answer to producers - we answer to the story.
For the sake of experimentation, I used Midjourney to help me explore some “impossible” shots - things like Romy slipping a mirror into her back pocket. In SL, there’s simply no animation for that. True, I could spend time teaching myself to rig animations in Blender, and then another few hours making the animations needed for a three-second clip. Instead, with Midjourney, I could upload the screenshot and gently nudge the image toward the action I needed using prompts.
It wasn’t perfect.
About half the time, things turned out strange - especially in scenes with multiple avatars close together. They’d ‘blob up’, or the AI would decide someone’s clothing was an animal and start doing things with it. Random people would wander into scenes, and at one point an extra pair of eyes got added to someone mid-shot. Occasionally, the characters looked a little waxy or too smooth, which didn’t suit every lighting setup.
That being said, some attempts were surprisingly expressive, especially when I kept the prompts simple and clear.
This shot where Romy is ‘looking around in the library’ while she talks to Inviktus on the mirror is one of my favorites. It really feels like she’s about to tell a secret, and that feeling was important to me for drawing the viewer in.
There is no way for Romy to actually ‘put a mirror in her back pocket’ in Second Life. Midjourney made it possible for me to grab a screenshot of Romy holding a mirror, and then ‘put the mirror in her back pocket’ with a text prompt.
There wasn’t anything crazy happening in this video aside from the flicker on the scroll towards the end of the clip. Everything else about it (the animation of her moving the scroll, opening the scroll, all of it) is 100% Second Life. Cool, huh?
I’m horrible at ‘panning the camera’. My hands can be shaky, and it often takes many attempts to get something reasonable. On top of that, grass movement in Second Life can be laggy, and textures can be slow to load in as your camera moves. This shot not only looks good; it saved me a lot of time and took the entire scene to the next level.
Midjourney also had… opinions… about Lilith.
She’s my scandalous fashionista witch with ‘generous anime proportions’, and Midjourney kept worrying that I was attempting to generate adult content. Poor Lilith broke the system. Honestly, it made me laugh harder than it should have.
One place I was genuinely curious to test was close-up facial animation. A couple of my actors on NAMARA told me about a pack of ‘move your mouth’ gestures that comes with every LeLutka Head (I never knew they existed), but I’ve also heard critiques that SL mouths don’t move enough, or don’t move the right way. I wondered if AI-generated motion could help sell expression in a few key shots, and after my tests, my feelings were mixed. Romy sometimes looked too polished, while Lilith carried attitude better. To my surprise, Inviktus came out the strongest of the three - his expressions translated beautifully across loads of tests, and his smile looked the most natural.
Will I use Midjourney again for NAMARA? Maybe — in very specific situations.
I used it as a tool, and for me, that’s all I think it will ever be - not a replacement for the process I love. Prompt-writing doesn’t give me the same sense of discovery that filming in SL does, and honestly, some of the most magical moments in machinima come from the unpredictable combination of multiple animations happening in real time.
That being said, when it comes to “impossible moments” - holding hands, a small magical effect without a greenscreen, a gesture SL simply can’t animate cleanly - I can see AI being an occasional helper in the toolbox. But for the vast majority of scenes, NAMARA will stay grounded in the SL environment where the project belongs.
Disclaimer:
Yes, I occasionally experiment with AI tools. No, NAMARA is not secretly being taken over by robo-actors or synthetic face puppets. I’m still a builder and machinima nerd at heart, and 95% of what you see is exactly what came out of Second Life - clipping, forgotten mouse cursors, stubborn animations and all. I respect that everyone has different comfort levels with AI, ranging from “Hooray robots!” to “Kill it with FIRE!!” My goal is simply to share my own experiences transparently, not to suggest what anyone else should or shouldn’t use.