How Deepfakes Are Shaping Hollywood’s Future — AI in Cinema

Updated January 2026

Black and white film noir image of a vintage car parked in front of a classic Hollywood cinema, with a crowd queued up waiting for entry.

Cinema has always been about pulling you into a convincing illusion. In the last few years, that illusion has started to shift in a more literal direction, as artificial intelligence becomes part of how faces, voices, and performances are constructed on screen.

The term “deepfake” gets used loosely, but what Hollywood is really adopting is a growing set of AI-assisted tools for digital likeness, face replacement, and performance reconstruction. These techniques are changing how films are made, raising new ethical questions, and quietly redrawing the boundary between what is recorded and what is generated.

This update looks at how that technology actually works in modern production, how it is being used in mainstream cinema, where the ethical pressure points are forming, and what all of this means for storytelling going forward.

What are video deepfakes, really?

At a technical level, deepfakes are a subset of synthetic media created using machine-learning models trained on large volumes of visual and audio data. Early systems relied heavily on generative adversarial networks, but modern pipelines increasingly use diffusion-based and transformer-driven models for face synthesis, voice cloning, and lip-sync.

In practical terms, the technology allows a person’s likeness or voice to be reconstructed or altered in ways that can look convincingly real. In entertainment, these systems rarely operate alone. They are layered into existing visual effects workflows alongside motion capture, 3D modelling, compositing, and manual animation.
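The adversarial idea behind early deepfake systems can be shown with a deliberately crude toy. This is an illustrative sketch only, not how production tools work: real systems train neural networks with gradient descent on images, whereas this toy uses a one-line "generator" (a Gaussian sampler), a "discriminator" that just compares batch statistics, and random-perturbation updates. All names and numbers here are invented for illustration.

```python
import random
import statistics

# "Real" data the generator should learn to imitate (stand-in for real faces).
REAL_MEAN, REAL_STD = 5.0, 1.0

def real_batch(n):
    return [random.gauss(REAL_MEAN, REAL_STD) for _ in range(n)]

def fake_batch(params, n):
    mean, std = params
    return [random.gauss(mean, std) for _ in range(n)]

def discriminator_score(real, fake):
    # Crude discriminator: how far apart the two batches look statistically.
    # Lower score = fakes are harder to tell apart from the real samples.
    dm = abs(statistics.mean(real) - statistics.mean(fake))
    ds = abs(statistics.stdev(real) - statistics.stdev(fake))
    return dm + ds

def train(steps=600, batch=200):
    params = (0.0, 2.0)  # generator starts far from the real distribution
    for _ in range(steps):
        real = real_batch(batch)
        # Adversarial loop in miniature: propose a perturbed generator and
        # keep it only if the discriminator finds its output harder to
        # distinguish from real data.
        trial = (params[0] + random.gauss(0, 0.1),
                 max(0.1, params[1] + random.gauss(0, 0.1)))
        if discriminator_score(real, fake_batch(trial, batch)) < \
           discriminator_score(real, fake_batch(params, batch)):
            params = trial
    return params

random.seed(0)
mean, std = train()
```

The shape of the loop is the point: one component generates, another judges, and the generator improves only by fooling the judge. Scale that up to convolutional networks and millions of face images and you have the family of systems the article describes.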

That distinction matters. What audiences see on screen is not a raw deepfake in the internet sense, but a hybrid of AI and traditional VFX craft.

AI face replacement and digital likeness in Hollywood

Recent high-profile films are often described as “deepfake movies,” but the reality is more nuanced.

Indiana Jones and the Dial of Destiny (2023) used machine-learning-assisted de-aging and face replacement tools combined with traditional VFX techniques to create a younger version of Harrison Ford. Lucasfilm described the process as an internal system trained on decades of archival footage rather than a consumer-style deepfake tool.

Source: Behind the Magic | The Visual Effects of Indiana Jones and the Dial of Destiny (ILM, YouTube).
An official breakdown showing how machine-learning-assisted de-aging and face replacement were combined with traditional VFX to create a younger Harrison Ford.

Rogue One: A Star Wars Story (2016) recreated Peter Cushing’s likeness using CGI, performance capture, and extensive manual animation. No deepfake system was involved, but it remains a foundational reference point for modern digital-likeness debates.

Fast X (2023) included Paul Walker through a blend of archive footage, body doubles, and CGI face work. Again, this was not deepfake technology, but it sits on the same ethical and creative fault line.

What has changed since those films is not the concept, but the accessibility and speed of face-synthesis tools. What once required bespoke studio pipelines can now be rough-tested with off-the-shelf AI systems, even if final film work still demands high-end VFX supervision.

The ethical pressure points

This is where the conversation becomes more serious than visual novelty.

Consent and control

The 2023 SAG-AFTRA strike pushed digital likeness rights into the centre of contract negotiations. Since then, many new agreements include explicit clauses covering how digital replicas and likeness can be licensed and reused. This is now standard practice rather than a niche concern.

This shift also ties into a wider industry reckoning about ownership and leverage, something I explored in my analysis of how actors are increasingly being treated as long-term intellectual property rather than just performers.

Posthumous performances

Digitally recreating deceased performers remains legally complex and culturally divisive. While audiences often accept these appearances when handled carefully, they raise unresolved questions about who owns a legacy and where artistic homage ends and commercial exploitation begins.

Misinformation and reputational harm

Outside cinema, synthetic video and audio have already been used in scams and political misinformation. That broader context is one reason lawmakers in multiple countries have begun drafting or passing legislation targeting non-consensual or deceptive synthetic media.

For film studios, this creates a reputational risk. The deeper harm is not any individual fake; it is the shift it forces in audiences, who begin watching like investigators. Once suspicion becomes the default, the whole viewing experience changes, which is why proof of origin and clear lines of responsibility matter. The same tools that make production more flexible also increase the need for transparency and consent frameworks that audiences can trust.

AI as a collaborator, not a replacement

A human actor and a humanoid robot face each other on a film set, with the robot holding a clapperboard, symbolising AI assisting rather than replacing performance.

In practice, AI tools are being absorbed into filmmaking the same way digital compositing and CGI once were. They remove friction from certain tasks, but they do not replace the creative judgement that shapes a performance or a story.

Where they already make practical sense:

  • correcting continuity issues without reshoots

  • reducing the need for dangerous or physically demanding stunts

  • extending limited archival footage for flashbacks or historical scenes

Where they still fall short:

  • conveying subtle emotional shifts

  • improvisational nuance

  • the unpredictable energy of live performance

The result is not a replacement of actors or directors, but a quiet rebalancing of what is captured physically and what is constructed digitally.

This shift is less about spectacle and more about workflow. It changes how scenes are finished, how long fixes take, and how much of a performance can be shaped after the camera stops rolling.

There is also a quieter shift underway. What begins as AI acting as a collaborator on technical problems will, over time, edge toward creative influence. Not in the sense of replacing writers or actors, but in subtly shaping what performances are possible, what edits are practical, and what visual ideas survive the cost and time constraints of production.

The more invisible that influence becomes, the more it will matter. When audiences stop noticing whether a tear was captured on set or generated in post, the question is no longer about realism. It is about whether storytelling becomes more expressive and daring, or more uniform and risk-averse.

How audiences decide what still feels real in synthetic video sits at the centre of the whole conversation.

Outside entertainment contracts, lawmakers are also weighing proposed federal right-of-publicity protections against unauthorised synthetic likenesses, a sign of how seriously the issue is now being taken.

What has genuinely changed

Three structural shifts now define how synthetic media is actually being absorbed into film and media production.

First, digital likeness rights have moved from abstract principle into routine contract practice. Studios increasingly treat consent, scanning, reuse, and licensing terms as core legal infrastructure rather than optional add-ons negotiated at the margins.

Second, synthetic media detection has improved, with research groups and governments funding forensic tools designed to identify manipulated audio and video. While detection is still imperfect, it has moved beyond academic theory into applied use.

Third, the barrier to entry has collapsed. What once required a major visual effects house can now be rough-tested by small teams using off-the-shelf tools. That shift makes governance, consent, and transparency more important, not less.
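The second shift above, detection, can also be sketched in miniature. Real forensic tools are machine-learning systems trained on compression artifacts, lighting inconsistencies, and physiological cues; the toy below only shows the underlying idea of scoring a clip for statistical inconsistency and flagging outliers. The clips, threshold, and metric are all invented for illustration.

```python
import statistics

def flicker_score(frames):
    # Mean absolute frame-to-frame brightness change: a crude proxy for the
    # temporal instability that sloppy face-swap composites can introduce.
    diffs = [abs(b - a) for a, b in zip(frames, frames[1:])]
    return statistics.mean(diffs)

def classify(frames, threshold=2.0):
    # Flag clips whose frame-to-frame variation exceeds the threshold.
    return "suspect" if flicker_score(frames) > threshold else "plausible"

# Hypothetical data: a smoothly brightening camera clip vs. a jittery composite.
camera_clip    = [100 + i * 0.5 for i in range(30)]
composite_clip = [100 + (5 if i % 2 else -5) for i in range(30)]
```

A single hand-tuned statistic like this is exactly what applied detectors have moved beyond, which is the article's point: detection has become an arms race of learned features, not fixed rules.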

Beyond cinema, why this matters elsewhere

A corporate boardroom with a large presentation screen, pointing to how the same tools carry over into marketing, training, and communications.

Hollywood is effectively stress-testing technologies that will appear in marketing, training, and communications.

Possible uses already being explored in industry:

  • multilingual video localisation using cloned voices

  • historical reconstructions for education

  • digital presenters for internal corporate media

The risk is not technical failure. It is erosion of trust. When synthetic video becomes visually flawless, the audience’s first question stops being “is this impressive?” and becomes “is this real?”

The road ahead

Deepfakes are not a gimmick. They are part of a deeper shift in how reality itself is represented and mediated. For filmmakers, the challenge is not technical. It is ethical and narrative.

The real risk is not that AI will replace creativity. It is that it will make creative choices feel cheaper, safer, and more homogenised if used without intention. Tools are neutral. Direction is not.

The directors and studios who use AI to take bigger creative risks will define the next era of cinema. Those who use it only to reduce cost and friction will produce work that looks impressive and feels empty.

Deepfakes will not kill cinema.
Bland reliance on them might.

Nigel Camp

Filmmaker. Brand visuals done right.
