L&D Nexus Business Magazine
Character Consistency Q&A: Follow-Up from My Webinar

January 27, 2026
in Learning & Development


During my recent webinar Character Consistency Solved: AI Image Generation Workflows, the chat was busy—too busy to answer every question in real time. I’m pretty good at following the chat while I present, especially when I have a producer like I always do with Training Mag Network. But 472 messages during a one-hour webinar is past the point where I can respond to everything live! This post pulls together some of the most common themes and questions that came up, along with my answers.

The questions fell into several recurring themes:

  • Which tools should I use (and when)?
  • Prompting and iteration tactics
  • Character consistency and reference images
  • Ethics, disclosure, and likeness risks

Even if you didn’t attend the webinar, you can still use these tips on creating consistent characters with AI. If you want to watch the whole session, the recording is available for free (registration required).

Theme 1: Which Tools Should I Use (And When)?

Q1: What are your go‑to tools for character images?

These are the tools I use most often for character work (from the handout):

  • Midjourney – Consistent characters and styles, photorealistic or illustrated. Big weakness: combining multiple characters in one image.
  • Flux – Realistic hands, accurate text, strong photorealism; good for editing and text+image prompting. Uses fewer credits than Nano Banana in tools like Flora and Freepik Spaces.
  • Nano Banana (Gemini) – Best free option I use; multimodal, can edit and combine images, good character consistency, can “rotate the camera.”
  • Flora – Node‑based canvas that lets you combine multiple models (e.g., Flux + Nano Banana) in a single workflow.
  • Freepik Spaces – Another node-based canvas that lets you combine multiple models in a single workflow.
  • Magnific & Topaz – Upscaling and enhancement.
  • Enhancor – Skin and detail enhancement.

I often start in Midjourney, then edit or combine in Nano Banana or Flora, and upscale or enhance with Magnific/Enhancor if needed.

Q2: If I can only pay for one or two tools, what should I choose?

If you can only pay for one tool, pick a node-based tool that gives you access to multiple AI image generation models in a single interface for one price. Flora and Freepik Spaces are the two I discussed in my webinar, but ImagineArt, Weavy, and ComfyUI are also options.

If you’re able to subscribe to a second tool, add Midjourney for strong, interesting, consistent character generation. It’s the tool I see many people in the AI media generation space using as their first step, regardless of which tools they switch to later.

I realize not everyone has the option of picking tools. Many people in the webinar mentioned that their organization uses Copilot (which has similar image generation capabilities to ChatGPT). While I don’t think you can get as good results with Copilot and ChatGPT as with Nano Banana, Midjourney, and other tools, you can still do quite a bit.

Q3: What do the tools cost? Are there any free options?

I shared this table during my webinar with a comparison of the primary tools I recommend. Note that sometimes additional discounts and deals are available.

Most AI image tools (except Midjourney) have a free level or trial. You can experiment with free versions before you pay for anything. I also recommend starting with a single month of a paid level to try it out before committing to any annual plans.

Tool                   Monthly            Yearly
Midjourney             $10                $96 ($8/month)
Flora                  $20                $192 ($16/month)
Freepik Essential      $9                 $69 ($5.75/month)
Freepik Premium        $20                $144 ($12/month)
Gemini (Nano Banana)   $20                $240 ($20/month)
Flux Playground        $10/1000 credits
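If you want to sanity-check the per-month figures in parentheses, they are just the annual price spread across twelve months. This is a small sketch using the published prices from the table; Flux Playground is left out because it is pay-per-credit rather than a subscription.

```python
# Derive the effective monthly rate for each annual plan in the table above.
yearly_prices = {
    "Midjourney": 96,
    "Flora": 192,
    "Freepik Essential": 69,
    "Freepik Premium": 144,
    "Gemini (Nano Banana)": 240,
}

def effective_monthly(yearly: float) -> float:
    """Annual price divided across 12 months, rounded to the cent."""
    return round(yearly / 12, 2)

for tool, price in yearly_prices.items():
    print(f"{tool}: ${effective_monthly(price):.2f}/month")
```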

Nano Banana (Gemini) is currently the best of the free options for getting started with character generation. I prefer to combine it with other tools, and you get better results with the paid Pro version, but the free level is enough to start practicing.

Theme 2: Prompting & Iteration Tactics

Q4: Why do you use “editorial photo” in your prompts?

In my initial prompts for reference images, I typically use “editorial photo” as the style. If you’ve looked at other sources for image prompting tips, it’s common to see phrases like “hyperrealistic,” “ultra realistic,” or “cinematic photo.” Personally, I find that those phrases end up giving me more dramatic and less realistic images. They work for advertisements and more artistic images, but not as well for training images. “Editorial photo” seems to be the right balance of a photorealistic image in a realistic setting that doesn’t look so much like a stock photo.

I use this image prompt structure for generating initial character reference images.
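As a rough illustration of that structure, here is a sketch that assembles a reference-image prompt from its parts. The function and field names are my own labels, inferred from the example prompts later in this post; they are not any tool’s API.

```python
# Hypothetical helper mirroring the reference-image prompt structure
# described above: style, subject, age, features, clothing, scene.
def build_reference_prompt(style, subject, age, features, clothing, scene):
    """Join the prompt elements in a fixed, comma-separated order."""
    return ", ".join([style, subject, age, features, clothing, scene])

prompt = build_reference_prompt(
    style="Editorial photo",   # the style keyword discussed in Q4
    subject="Latina woman",
    age="mid 50s",
    features="glasses",        # distinguishing feature to keep consistent
    clothing="green blouse",
    scene="working on a computer in a modern office",
)
print(prompt)
```

Keeping the order fixed makes it easier to spot (and reuse) the parts you want to reinforce when you later revise the prompt.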

Q5: How do you iterate and improve your image prompts?

One of the key points I made in my webinar is that AI image generation isn’t about prompting once and getting the ideal result the first time, every time. Plan to generate several images and then pick the best, especially for your initial reference image.

When I iterate and refine image prompts, I’m often focused on writing things in a more specific way or explicitly changing my wording to correct problems.

For example, I used a reference image in Flora to generate multiple expressions of this character talking in a meeting.

Screenshot of Flora with a character reference image, an enhanced and upscaled version, and six different expressions.

One of my changes ended up with a really cheesy expression. Flux did what I asked, but I described the pose and expression poorly.

The problem here was my prompt: “Change her pose to smiling, pointing up with one finger as she explains an important point.”

A woman looking at the camera and smiling while raising one finger to make a point.

The solution was to write a better prompt and regenerate the image. I did want this to be a happier reaction (as in a positive response in a scenario), but it needed to be a more realistic happy expression in context. “Change her pose to explaining while happy, pointing with one finger to emphasize a point.” This version looked more plausible.

A woman looking not directly at the camera but slightly to the side, as if she’s talking to someone. She has a slight smile, and her mouth is slightly open as if she’s explaining something. She is using one finger to emphasize a point while she speaks.

Sometimes, what you prompt won’t work, and it will take a few iterations like this to get what you want. I’ve seen some people say they keep about 30% of the images they generate. That’s a LOT of failures to get the right results!

But that’s part of working with generative AI. You’re not going to prompt once and get the best response every time. It’s not like a spreadsheet where you get the same results every time you use the same formula and data. That’s part of why gen AI can do image generation; you need that variability. However, it also means it takes some trial and error to get things right. It also takes practice prompting; you can and will get better with practice as you see how specific you need to be and learn how to coach the AI effectively.

Q6: How do I make multiple edits without my character “drifting” or the model ignoring changes?

Tips for reducing the drift:

  • Change one or two elements of an image at a time.
  • Prompt what to change and what to keep.
  • Reattach or re-upload the reference image. (This is one reason node-based tools like Flora and Freepik Spaces work well; they always tie back to the reference images.)
  • Start a new chat with the reference image. Nano Banana and ChatGPT both get “lost” sometimes when a chat continues too long, so it’s easiest to start with a fresh chat.

Theme 3: Character Consistency & Reference Images

Q8: How do I use reference images with Midjourney’s omnireference?

The process of using Midjourney’s omnireference is basically what I wrote about last year with the character reference feature.

  1. Generate a reference image and open it so you can access the menu of options.
  2. Click Prompt to copy the prompt (which you can then edit).
  3. Click Image to use the image as a reference. Move the image to the omnireference area rather than the default image reference area.
  4. Revise the prompt so it:
     • Describes your reference (to reinforce what should stay).
     • Specifies the changes (pose, expression, clothing, setting).
  5. Adjust the --ow (omni weight) value:
     • Try --ow 100 or --ow 150 to keep the character close to the reference.
     • Use a lower --ow for more changes (a looser match).
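The revise-and-weight steps above can be sketched as a small prompt builder. The helper itself is hypothetical, but --ow is Midjourney’s actual omni weight flag, and the prompt parts come from the example in this post.

```python
# Sketch of assembling a revised omnireference prompt: reinforce what
# should stay, specify what changes, then append the omni weight.
def omni_prompt(keep, changes, ow=100):
    """Build a prompt from kept elements, changed elements, and --ow."""
    return ", ".join(keep + changes) + f" --ow {ow}"

prompt = omni_prompt(
    keep=["Editorial photo", "Latina woman", "mid 50s", "glasses"],
    changes=["maroon button-down shirt",
             "standing in front of a whiteboard",
             "explaining"],
    ow=100,  # higher keeps the character closer to the reference
)
print(prompt)
```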

In this example, the reference image is on the left, generated from the first prompt below; the second prompt shows the revisions for the image on the right. As you can see, I kept the parts of the original prompt that I wanted to reinforce (the style, subject, age, and glasses) and changed her clothes, action, and setting.

Editorial photo, Latina woman, mid 50s, glasses, green blouse, working on a computer in a modern office

Editorial photo, Latina woman, mid 50s, glasses, maroon button-down shirt, standing in front of a whiteboard, explaining

Q9: How do I get multiple characters in one scene and keep them consistent?

  • Two characters are the easiest to manage.
  • Use separate reference images:
    • One for each character (if you’re having trouble, try putting them on simple or solid backgrounds).
    • You may also use a background image for the setting.
  • In tools like Nano Banana or Flora (with Flux + Nano Banana):
    • Attach or bring in both reference images.
    • Prompt to combine the images. Describe the positions and actions of the characters and the setting.

For example, this prompt combines two character reference images plus a conference room background. “Combine these images. The woman and man are sitting at the table in the conference room. He is talking and gesturing with his hands while she listens. Focus on the characters with a medium shot from the waist up.”

Q10: Can I rotate the camera or change the angle while keeping the same character?

Yes, especially in Nano Banana (Gemini); Flux and other tools may work too. Phrase it as a camera move or angle change: “over‑the‑shoulder view,” “three‑quarter view,” “profile,” “zoom in to a head‑and‑shoulders shot,” etc.

Example prompt: “Rotate the camera around to show the back of her head and an over‑the‑shoulder view of this woman writing an email.”

Theme 4: Ethics, Disclosure, and Likeness Risk

Q11: Can I use images of real people as reference images for my characters?

In general, I think it’s best to use an AI-generated image as your initial reference. Some stock libraries don’t allow AI iteration in the terms of use; check before you use them.

You can use photos of yourself. Nano Banana is especially good at working with real photos. Midjourney is more effective at consistency with AI-generated images, especially if the reference image was made in Midjourney.

Please don’t use images of other real people unless you have an explicit model release and are clear about how you’re editing and using the images. That goes for character images as well as just AI image editing overall. You shouldn’t upload someone else’s photo to AI unless you have explicit permission, not even for something fun for social media.

I have a bunch of ethical concerns about using images of real people, partly because I see too many examples of people being careless. I shouldn’t have to say that you shouldn’t generate an image of a Mormon drinking alcohol or a vegetarian eating meat, but apparently these things need to be spelled out explicitly.

Just start with an AI-generated image and avoid a lot of potential ethical landmines.

Q12: Do I need to disclose that my images are AI‑generated? How should I cite them?

Transparency is generally a good practice, although I think in the next few years people will get used to AI-generated images. When it’s more normalized, it may not be as critical to disclose it.

The chat discussion about disclosure and citation mostly said that a brief note like “Images created with AI tools including Midjourney and Gemini” would be sufficient. This is something that depends on your organization’s internal policy though. If your organization doesn’t have a policy yet, you might want to at least decide on a standard within your own team.

Your questions on character consistency and image generation?

Have a question about character consistency or AI image generation that I didn’t answer above? Let me know in the comments or send me an email.

Upcoming events

Generating Plausible Choices and Consequences for Scenarios using AI

Mini is More: Create One-Question Scenarios for Better Assessment

Beyond Right or Wrong: How to Craft Better Feedback for Scenarios



Copyright © 2025 L&D Nexus Business Magazine.
