Nearly a year after V6, Midjourney has released its latest model version: V7. This is not a routine update. It is an entirely new architecture that changes the way you work with AI image generation.

If you've been using V6 up to now, prepare yourself for a quality leap you'll notice from the very first prompt.

How to Switch to V7 (It's Easier Than You Think)

Good news: As of 17 June 2025, V7 is the default version in Midjourney. If you're starting a new project, you're already using it.

Want to specify the version explicitly?

  • On the website: Go to settings and select V7 from the model version list
  • On Discord: Add --v 7 to the end of your prompt
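For example, a full Discord prompt pinned to V7 could look like this (the subject text is purely illustrative):

```
/imagine prompt: a lighthouse on a rocky coast at dawn --v 7
```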

Important: Before you start, you need to unlock personalisation — this takes about 5 minutes. You need to rate roughly 200 pairs of images so the model can learn your aesthetic preferences.

What Actually Changes? (What You Need to Know)

1. Draft Mode — 10x Faster, Half the Cost

This is the real star of V7. Draft Mode generates images:

  • 10 times faster than standard mode
  • At half the GPU cost

Use it for:

  • Quickly testing ideas
  • Iterating on concepts
  • Brainstorming with clients

Quality is lower (think of it as a sketch), but if something appeals to you, a single click of "Enhance" will render it at full quality.

How to enable:

  • On the website: Click the "Draft Mode" button in the prompt bar
  • Discord: Add --draft to your prompt
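Put together, a Discord prompt in Draft Mode might look like the sketch below (the subject is illustrative):

```
/imagine prompt: concept sketches of a futuristic scooter --draft --v 7
```

If one of the drafts works, hit "Enhance" on it rather than re-running the prompt at full quality from scratch.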

2. Conversational Mode — Talk to the AI

This is a revolution in prompting. Instead of carefully constructing prompts, you simply... talk.

Imagine brainstorming with a client during a meeting. You switch on Conversational Mode, say "show me an office space in a cyberpunk style", see the result, and immediately add "more neons, change it to night time". Images generate in real time as you talk.

How it works:

  1. Enable Draft Mode
  2. Click the Conversational Mode button
  3. Type (or say out loud): "show me an art gallery in a cyberpunk style"
  4. After generation: "add more neon lights"
  5. Continue: "change the time to night"

With voice:

  • Click the microphone icon
  • Speak freely, as if to an assistant
  • The model creates and modifies prompts for you

This changes everything for team collaboration — you can think out loud during meetings and images generate in real time.

3. Omni Reference — Character and Object Consistency

Omni Reference replaces the old Character Reference (--cref) from V6, but is significantly more powerful.

Omni Reference lets you insert any element from a reference image into new generations:

  • Characters (human and non-human)
  • Objects
  • Creatures
  • Vehicles
  • Even everyday items

How to use:

  • On the website: Drag an image into the "Omni Reference" section
  • Discord: Add --oref [image_URL] to your prompt

Reference strength control: The --ow parameter (omni-weight) ranges from 0 to 1000 (default: 100)

  • --ow 25: Weak match, more creativity
  • --ow 100: Balanced
  • --ow 400-500: Strong detail preservation
  • --ow 800-1000: Maximum reference fidelity
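Combining these, a hypothetical Discord prompt that keeps a referenced character's details largely intact might look like this (the image URL is a placeholder, not a real reference):

```
/imagine prompt: the same character hiking through a pine forest --oref https://example.com/character.png --ow 400 --v 7
```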

Important: Omni Reference uses roughly twice the GPU time of a standard generation.

4. Quality That Stands Out

V7 fixes the biggest pain points of previous versions:

Hands and Bodies

Finally! Hands look realistic, with correct proportions and anatomy. This was the biggest problem in AI art — V7 solves it.

Textures and Details

  • Richer, more complex textures
  • Better material rendering (metal, fabric, skin)
  • Consistency of small elements

Prompt Understanding

  • The model better interprets long, complex prompts
  • More precise rendering of intent
  • Better multi-language support

Objects and Composition

  • Object consistency within a scene
  • Better compositional balance
  • Realistic lighting and shadows

Practical Differences When Working with Prompts

V6 vs V7 — What Changes?

In V6:

  • You needed very detailed, technical prompts
  • Styles had to be described precisely
  • A lot of trial and error

In V7:

  • The principle "if you want to see it, you need to say it" — the model is very literal
  • Conversational Mode allows natural language
  • You can iterate by voice instead of rewriting the entire prompt
  • Personalisation helps the model "understand" your style

New Parameters and Features

Work modes:

  • Turbo: 2x more expensive than V6, but the fastest
  • Draft: 10x faster, 0.5x cost
  • Relax: Cheaper, slower (returns to normal speed after the "Relax-athon" ends)

What doesn't (yet) work in V7:

  • Upscaling — uses V6
  • Inpainting/Outpainting — uses V6
  • Retexturing — uses V6

Midjourney has announced regular updates every 1–2 weeks for the next 2 months.

Pro Tips to Get Started

1. Make use of personalisation
Don't skip rating images. The more you rate, the better V7 will understand your style. You can turn it off with the "P" button, but give it a chance.

2. Start with Draft Mode
Test ideas in Draft, then render at full quality. You'll save time and money.

3. Experiment with Omni Reference + Style Reference
You can combine --oref (for objects/characters) with --sref (for style). This gives you extraordinary control.
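A sketch of such a combined prompt (both image URLs are placeholders for illustration):

```
/imagine prompt: a knight standing in a market square --oref https://example.com/knight.png --ow 300 --sref https://example.com/watercolour.png --v 7
```

Here --oref anchors who or what appears in the scene, while --sref anchors how the whole image is rendered.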

4. Use voice in creative sessions
If you're working with a client or brainstorming — Conversational Mode with voice is a game changer.

5. Remember the Omni Reference weights
Want to change style (photo → anime)? Use a low weight (--ow 25-50)
Want to preserve a face/clothing? Use a high weight (--ow 400-800)
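Side by side, the two ends of the weight range might look like this (again, the URL is a placeholder):

```
/imagine prompt: anime portrait of the referenced person --oref https://example.com/photo.png --ow 30 --v 7
/imagine prompt: the referenced person in a new pose, same face and outfit --oref https://example.com/photo.png --ow 600 --v 7
```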

Summary: Is It Worth Switching to V7?

Yes, if:

  • You work with clients and need fast iterations
  • You create series with consistent characters
  • You care about hand quality and complex objects
  • You want a more natural way of working (voice, conversations)

Wait, if:

  • You rely heavily on upscaling/inpainting (features still running on V6)
  • Your workflow is built around old SREF codes that have changed

What's Next? The Future of Midjourney

V7 is just the beginning. Midjourney is working on:

  • Video generation
  • 3D models
  • An improved image editor
  • A "Character Lock" tool (biometric character consistency)

💡 Got questions about V7? Leave a comment on Instagram — I'll reply to every one!

Follow our profile to stay up to date with the latest updates and AI art tutorials.

Article updated: October 2025 | All information based on official Midjourney documentation and community testing