Bringing the Duke Back to Life
(through AI)

Client: INSP | Software: After Effects, MidJourney, Eleven Labs, LipDub, Wav2Lip | Type: Concept

Reviving John Wayne with AI:

Tasked by the board to digitally resurrect the legendary John Wayne, I spearheaded a groundbreaking project leveraging advanced AI techniques, including voice cloning with Eleven Labs and AI lip dubbing. This initiative aimed to blend cutting-edge technology with cinematic nostalgia, bringing one of Hollywood’s iconic figures back to life on screen.

Ethical and Legal Considerations:

Navigating the project’s ethical and legal landscape was as critical as the technical execution. We rigorously consulted with legal experts to ensure compliance and engaged with the John Wayne Foundation to secure their endorsement. This step was crucial in addressing the moral implications and securing the trust of John Wayne’s dedicated fanbase.

_________________

Technical Development:

Initial Attempts:

My first idea was to use old footage of John Wayne with AI-generated voice to create new content. But this raised another challenge: how do you make the mouth movements match the new voice? As of July 2024, more and more AI video generators offer lip syncing, but when we started this project it wasn't as common. It took a lot of research and testing to get the results I was looking for.

The first attempt at lip syncing was a little rough, but it was a start. Wav2Lip worked, but the result wasn't the most realistic.
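As a sketch of that early Wav2Lip workflow: the open-source repo is driven from the command line, so a small helper can assemble the call. The file paths below are hypothetical placeholders; the flags (`--checkpoint_path`, `--face`, `--audio`, `--outfile`) are the ones Wav2Lip's `inference.py` exposes.

```python
import shlex

def build_wav2lip_cmd(face_video, new_audio, outfile,
                      checkpoint="checkpoints/wav2lip_gan.pth"):
    """Assemble the Wav2Lip inference command as an argument list.

    Paths here are illustrative; the caller would run the result
    with subprocess from inside the Wav2Lip repo.
    """
    return [
        "python", "inference.py",
        "--checkpoint_path", checkpoint,  # pretrained Wav2Lip weights
        "--face", face_video,             # source footage of the speaker
        "--audio", new_audio,             # new (cloned) voice track
        "--outfile", outfile,             # lip-synced output video
    ]

cmd = build_wav2lip_cmd("john_wayne_clip.mp4", "cloned_voice.wav",
                        "results/synced.mp4")
print(" ".join(shlex.quote(part) for part in cmd))
```

This keeps the source clip and the generated voiceover as separate inputs, which made it easy to swap in new audio takes during testing.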

Advanced Lip Syncing:

The breakthrough came with LipDub, a tool that enabled more authentic mouth movements by training on specific source videos. This technology ensured that the lip movements were not generic but closely matched John Wayne’s unique speech patterns, greatly enhancing the visual fidelity.

LipDub produced a noticeably more realistic result. I wrote a script and used Eleven Labs to generate the voiceover.

Voice Cloning:

Parallel to perfecting the visuals, I explored voiceover alternatives to provide the board with multiple options. Eleven Labs’ voice cloning technology produced a voice that was subtly yet unmistakably John Wayne, effectively bridging the gap between novelty and authenticity and minimizing the uncanny valley effect.

The voiceover approach using Eleven Labs was more subtle, but still recognizable as the legendary John Wayne.
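For context, here is a minimal sketch of how an Eleven Labs text-to-speech request can be assembled. It only builds the request rather than sending it; the endpoint shape and `xi-api-key` header follow ElevenLabs' public REST API, but the voice ID, sample line, and settings are hypothetical placeholders.

```python
import json

API_BASE = "https://api.elevenlabs.io/v1"

def build_tts_request(voice_id, text, api_key,
                      stability=0.5, similarity_boost=0.8):
    """Assemble (url, headers, payload) for an ElevenLabs TTS call.

    Sending it is left to the caller, e.g.
    requests.post(url, headers=headers, json=payload).
    The settings values here are illustrative starting points.
    """
    url = f"{API_BASE}/text-to-speech/{voice_id}"
    headers = {
        "xi-api-key": api_key,  # your ElevenLabs API key
        "Content-Type": "application/json",
    }
    payload = {
        "text": text,
        "voice_settings": {
            "stability": stability,                # lower = more expressive
            "similarity_boost": similarity_boost,  # closeness to source voice
        },
    }
    return url, headers, payload

url, headers, payload = build_tts_request(
    "VOICE_ID_PLACEHOLDER",
    "Well, pilgrim, technology sure has come a long way.",
    "API_KEY_PLACEHOLDER",
)
print(url)
print(json.dumps(payload, indent=2))
```

Tuning `stability` and `similarity_boost` was the kind of knob-turning that let the voice stay recognizable without drifting into caricature.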

Final Insights and Innovations:

Although the project did not reach full deployment, the extensive research and iterative process significantly expanded my expertise in AI tools. The experience also inspired "Synced," a custom AI-driven lip sync system I am now building as an ongoing project, aimed at refining AI capabilities for future applications in digital media.

Using ChatGPT to help write the code, I am currently building a local lip sync system that trains on the existing video to learn its mouth movements, then lines the new audio up with the learned visemes. The sample shows the AI automatically placing and tracking markers on the face and mouth, and learning which movements align with which sounds.
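The marker-tracking idea above can be sketched in a few lines. Assuming each video frame yields mouth landmark points from some face tracker, a simple mouth-aspect-ratio per frame gives a crude openness signal that can then be matched against the audio's phoneme timing. All names and thresholds here are hypothetical, a heavy simplification of what "Synced" is working toward.

```python
def mouth_aspect_ratio(landmarks):
    """Lip gap divided by mouth width for one frame.

    `landmarks` maps point names to (x, y) pixel coordinates,
    as produced by any face-tracking library.
    """
    gap = abs(landmarks["lower_lip"][1] - landmarks["upper_lip"][1])
    width = abs(landmarks["right_corner"][0] - landmarks["left_corner"][0])
    return gap / width

def classify_viseme(ratio, open_threshold=0.35):
    """Crude two-class viseme label from mouth openness."""
    return "open" if ratio >= open_threshold else "closed"

# One tracked frame (hypothetical pixel coordinates).
frame = {
    "upper_lip": (100, 200),
    "lower_lip": (100, 240),
    "left_corner": (60, 220),
    "right_corner": (140, 220),
}
ratio = mouth_aspect_ratio(frame)   # 40 / 80 = 0.5
print(classify_viseme(ratio))       # prints "open"
```

A real system would track many more landmarks and many more viseme classes, but the core loop is the same: reduce each frame's markers to a shape signal, then learn which shapes co-occur with which sounds.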

Impact and Future Applications:

The project underscored both the potential and challenges of using AI for digital resurrection. It laid a robust foundation for future innovations and demonstrated AI’s transformative power in media and entertainment. By blending ethical considerations with technical advancements, this project serves as a template for responsible and innovative AI applications in recreating historical figures.
