Photography has always borrowed from technology. In the darkroom days, it was chemicals and enlargers. Today, it’s software, artificial intelligence, and digital imaging. If you’ve ever heard someone mention photobashing or ControlNet and wondered what on earth they’re talking about, you’re not alone.
This article explores some of the most common — and most confusing — digital, AI and CGI crossover terms. By the end, you’ll have a working knowledge of the language that sits where photography meets cutting-edge technology.
Photobashing
Photobashing is the art of blending multiple photographs with digital painting or 3D models to create a new image. It’s a technique often used in concept art for films and video games.
Visually, a photobashed piece might look like a surreal composite: a mountain landscape that never existed, a futuristic city, or a mythical creature. Unlike straightforward photo manipulation, photobashing usually involves layering photos as textures over sketches or renders.
Inpainting and Outpainting
Inpainting is when software intelligently fills in missing parts of an image. Think of it as a digital patch tool on steroids. You might remove a lamppost, and the programme recreates the background as if the object had never been there.
Outpainting, on the other hand, extends the frame. You can start with a photograph and let AI “imagine” what the surrounding scene could look like. Artists use it to create panoramic versions of photos or reinterpret famous artworks.
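To see the core idea behind inpainting without any AI at all, here is a minimal sketch: masked pixels are repeatedly replaced by the average of their known neighbours until the hole blends into its surroundings. Real generative fill uses trained models rather than simple averaging, so treat this purely as an illustration of "reconstructing missing pixels from context".

```python
# Toy inpainting: fill masked pixels by repeatedly averaging their
# neighbours. Real tools use AI models, but the core idea -- rebuilding
# missing pixels from the surrounding context -- is the same.

def inpaint(image, mask, iterations=50):
    """image: 2D list of grey values; mask: True where a pixel is missing."""
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]
    for _ in range(iterations):
        new = [row[:] for row in img]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    neighbours = [
                        img[y + dy][x + dx]
                        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= y + dy < h and 0 <= x + dx < w
                    ]
                    new[y][x] = sum(neighbours) / len(neighbours)
        img = new
    return img

# A flat grey patch with one "lamppost" pixel removed:
image = [[128, 128, 128], [128, 0, 128], [128, 128, 128]]
mask = [[False, False, False], [False, True, False], [False, False, False]]
result = inpaint(image, mask)
print(round(result[1][1]))  # the hole converges back to 128
```

Outpainting works the same way in reverse: instead of filling a hole inside the frame, the "missing" region is everything beyond the original edges.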
ControlNet
ControlNet is a system that guides AI image generation more precisely. Instead of giving the AI just a text prompt, you can feed it outlines, poses, or depth maps to control the output.
In practical terms, if you want a portrait in the same pose as your reference photo but in a different style, ControlNet makes that possible. It gives photographers and digital artists more control over otherwise unpredictable AI results.
LoRA (Low-Rank Adaptation)
LoRA is a way to “teach” an AI model something new without retraining it from scratch. For example, you could create a LoRA for a specific art style or even a person’s likeness.
For photographers, this matters because it means you can guide AI to replicate a lighting look or stylistic approach consistently. Instead of random outputs, you get more predictable results aligned to your vision.
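The trick that makes LoRA cheap is low-rank maths: rather than retraining a layer's full weight matrix, it trains two much smaller matrices whose product nudges the original weights. A back-of-the-envelope sketch (the layer size and rank here are made-up, illustrative numbers) shows why this is so much lighter:

```python
# LoRA in miniature: instead of retraining a full d x k weight matrix W,
# learn a low-rank update B (d x r) times A (r x k), with r kept small.
# The adapted weight is W + B @ A. Dimensions are purely illustrative.

d, k, r = 1024, 1024, 8  # hypothetical layer size, rank 8

full_params = d * k            # parameters a from-scratch retrain would touch
lora_params = d * r + r * k    # parameters a LoRA actually trains

print(full_params)                 # 1048576
print(lora_params)                 # 16384
print(full_params // lora_params)  # 64x fewer trainable parameters
```

Because only the small update is trained, a LoRA file can capture a style or likeness in megabytes rather than gigabytes.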
Photogrammetry
Photogrammetry is the process of turning real-world photographs into 3D models. It works by stitching together multiple overlapping photos of a subject from different angles.
It’s used in industries from archaeology to gaming — but also in photography projects where accurate scale models are needed. The results are textured 3D meshes that can then be lit or animated digitally.
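At its heart, photogrammetry is triangulation: the same point photographed from two positions shifts across the frame, and that shift (the "disparity") reveals its depth. A simplified two-camera sketch, with illustrative numbers, shows the relationship:

```python
# Photogrammetry boils down to triangulation: a point seen from two
# camera positions shifts by a "disparity" that reveals its depth.
# Simplified rectified-stereo case; all numbers are illustrative.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth (metres) of a point from its pixel shift between two photos."""
    return focal_px * baseline_m / disparity_px

# Camera with a 1000 px focal length, photos taken 0.5 m apart,
# and a feature that shifts 25 px between the two frames:
print(depth_from_disparity(1000, 0.5, 25))  # 20.0 metres away
```

Full photogrammetry software repeats this kind of calculation across thousands of matched points and dozens of photos, which is why overlapping coverage matters so much.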
NeRF (Neural Radiance Fields)
NeRF is a newer development where AI creates a 3D scene from a set of 2D photographs. It’s like photogrammetry’s younger, more flexible sibling. Instead of just surfaces, NeRF captures how light passes through a space, allowing for virtual fly-throughs.
In photography, it could change how we archive and revisit places. Imagine creating a 3D “memory” of your living room or a holiday location using nothing more than your camera.
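Under the hood, a NeRF renders each pixel by marching a ray through the scene, sampling density and colour along the way, and compositing them front to back — the standard volume-rendering sum. A toy version with two hand-picked samples (the densities and colours here are invented for illustration):

```python
import math

# NeRF renders a pixel by sampling density and colour along a camera ray
# and compositing front to back: each sample contributes colour in
# proportion to its opacity and to how much light survives to reach it.

def render_ray(samples, step=1.0):
    """samples: list of (density, colour) along the ray, front to back."""
    colour, transmittance = 0.0, 1.0
    for density, c in samples:
        alpha = 1.0 - math.exp(-density * step)  # how opaque this sample is
        colour += transmittance * alpha * c
        transmittance *= 1.0 - alpha             # light left after this sample
    return colour

# Thin bright haze (density 0.1) in front of a solid dark surface:
print(round(render_ray([(0.1, 1.0), (10.0, 0.2)]), 3))  # 0.276
```

Because the scene is stored as this continuous field rather than as fixed surfaces, the same data can be rendered from any viewpoint — hence the fly-throughs.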
Gaussian Splatting
A technical-sounding phrase, Gaussian splatting is about rendering 3D scenes quickly using soft, overlapping blobs of light and colour (the “Gaussians”) rather than detailed polygons. It makes virtual scenes look smoother and more realistic with less computer power.
You’ll likely hear this more in the coming years as it’s a breakthrough in how digital scenes are built from photos. For now, think of it as a faster way of creating photorealistic 3D from your images.
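A stripped-down, one-dimensional sketch shows why splatting is cheap and smooth: each point is a soft Gaussian blob of brightness, and rendering just evaluates and sums the blobs at every pixel. (Real Gaussian splatting works in 3D with millions of oriented, coloured blobs; the numbers here are illustrative.)

```python
import math

# A 1D toy of splatting: each point is a soft Gaussian blob of brightness
# rather than a hard-edged polygon. Rendering simply evaluates and sums
# the blobs at each pixel -- no geometry, just smooth overlapping falloff.

def splat(pixels, centre, sigma, brightness):
    for x in range(len(pixels)):
        pixels[x] += brightness * math.exp(-((x - centre) ** 2) / (2 * sigma ** 2))

pixels = [0.0] * 11
splat(pixels, centre=3, sigma=1.5, brightness=1.0)
splat(pixels, centre=7, sigma=1.5, brightness=0.5)
print(round(pixels[3], 2))  # brightest near the first splat's centre
```

Because each blob fades gradually instead of ending at a hard polygon edge, the blended result looks smooth without the heavy geometry a mesh would need.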
Micro FAQ
What is the difference between photobashing and photomontage?
Photobashing adds digital paint and 3D elements, whereas photomontage sticks to photos only.
Can I use inpainting in Lightroom or Photoshop?
Yes — Photoshop’s “Generative Fill” is a form of inpainting, and Lightroom’s “Generative Remove” brings similar AI-powered object removal to that app too.
Do I need special cameras for photogrammetry?
No. Any camera works as long as you capture many overlapping angles with good light.
Is NeRF available to the public?
Yes, but it currently requires technical software and is still experimental for consumer use.
Will AI replace traditional photography?
No. AI is a tool. While it can generate or extend images, photography is about real-world capture and experience.
Conclusion
Digital, AI, and CGI terms may feel overwhelming at first. But knowing what they mean gives you confidence to explore modern creative tools without losing sight of traditional photography.
Whether you’re removing distractions, creating new composites, or even turning your photos into 3D worlds, these ideas all expand what’s possible.