Hi, I'm Cory D. Wright, a Senior AI Specialist, UX/UI Designer, and Animation & Production Guru. My passion is harnessing AI to create intuitive, data-driven designs that solve real-world problems with creativity and innovation.
When the board entrusted me with the ambitious task of digitally resurrecting the legendary John Wayne, I saw an opportunity to push the boundaries of artificial intelligence in media. I spearheaded a groundbreaking project that leveraged advanced AI techniques, including voice cloning via Eleven Labs and AI lip dubbing, to bring one of Hollywood's most iconic figures back to life on screen. Our initiative aimed to blend cutting-edge technology with cinematic nostalgia, creating a seamless and respectful homage to John Wayne's enduring legacy.
Ethical Considerations
Navigating the project's ethical and legal landscape was as critical as its technical execution. We consulted extensively with legal experts to ensure full compliance with intellectual property laws and engaged directly with the John Wayne Foundation to secure their endorsement. Addressing these moral implications was crucial in gaining the trust of John Wayne's dedicated fanbase and setting a precedent for responsible AI use in digital media.
First Attempts
My initial concept involved using archival footage of John Wayne combined with AI-generated voiceovers to create new content. However, synchronizing the mouth movements with the new audio presented a significant challenge. At the project's inception, AI video generators offering lip-syncing were not as prevalent as they are today. Extensive research and testing were necessary to achieve realistic results.
Our early attempts using Wav2Lip provided a starting point but lacked the desired realism. The lip-syncing was somewhat rough, highlighting the need for a more sophisticated solution.
Advanced Lip Syncing
The breakthrough came with LipDub, a tool that enabled more authentic mouth movements by training on specific source videos. This technology ensured that the lip movements were not generic but closely matched John Wayne's unique speech patterns, greatly enhancing visual fidelity. By writing custom scripts and employing LipDub, we significantly improved the realism of the digital recreation.
Voice Cloning
While refining the visuals, I explored voiceover alternatives to offer the board multiple options. Eleven Labs' voice cloning technology produced a voice that was subtly yet unmistakably John Wayne. This approach effectively bridged the gap between novelty and authenticity, minimizing the uncanny valley effect. Using AI, I separated the audio from an existing INSP promotional spot and replaced the original voiceover with the newly generated one, achieving a seamless integration.
The voiceover approach using Eleven Labs was more subtle, but still recognizable as the legendary John Wayne.
Innovations and Personal Contributions
Although the project did not reach full deployment, the extensive research and iterative process significantly expanded my expertise in AI tools. The experience inspired "Synced," a custom AI-driven lip-sync system I am now building. With ChatGPT assisting on the coding, I am developing a local lip-sync system that trains on existing video footage to capture mouth movements, then aligns new audio with the trained visemes. The AI uses markers to track facial and mouth movements, automatically correlating those markers with specific sounds to produce synchronized lip-sync.
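The core alignment step of a system like this can be sketched in a few lines. This is a simplified illustration, not the actual "Synced" code: it assumes phoneme timings already exist (for example, from a forced aligner) and uses a hypothetical phoneme-to-viseme map standing in for one learned from the source footage.

```python
# Hypothetical phoneme-to-viseme map (a trained system would derive this
# from the source footage rather than hard-coding it).
PHONEME_TO_VISEME = {
    "AA": "open", "AE": "open", "AH": "open",
    "B": "closed", "M": "closed", "P": "closed",
    "F": "teeth_lip", "V": "teeth_lip",
    "OW": "round", "UW": "round",
}

def align_visemes(phoneme_timings, fps=24):
    """Convert (phoneme, start_sec, end_sec) tuples into viseme keyframes.

    Returns a list of (frame, viseme) pairs, collapsing consecutive
    duplicates so the resulting animation curve stays sparse.
    """
    keyframes = []
    for phoneme, start, _end in phoneme_timings:
        viseme = PHONEME_TO_VISEME.get(phoneme, "neutral")
        frame = round(start * fps)
        if not keyframes or keyframes[-1][1] != viseme:
            keyframes.append((frame, viseme))
    return keyframes

# Example: "bomb" -> closed, open, closed at 24 fps
keys = align_visemes([("B", 0.0, 0.05), ("AA", 0.05, 0.25), ("M", 0.25, 0.3)])
```

The collapse of consecutive duplicate visemes is what keeps the mouth from "machine-gunning" between identical poses, one of the main advantages over naive per-phoneme keyframing.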
Impact and Future Applications
This project underscored both the potential and challenges of using AI for digital resurrection. It laid a robust foundation for future innovations and demonstrated AI's transformative power in media and entertainment. By blending ethical considerations with technical advancements, the project serves as a template for responsible and innovative AI applications in recreating historical figures.
The journey of bringing John Wayne back to the screen through AI was as enlightening as it was challenging. It pushed the boundaries of what's possible in digital media and opened new avenues for how we can honor and preserve cultural icons. Moving forward, I am excited to continue exploring the capabilities of AI in media, always with a mindful approach to the ethical considerations that accompany such powerful technology.
Generative AI
Introduction
Generative AI is revolutionizing the marketing and entertainment industries, among others. While some fear its disruptive potential and others might overindulge in its capabilities, the strategic use of generative AI can unlock unprecedented creative possibilities. The key in advertising is to utilize AI in a way that the output doesn't overtly appear AI-generated—unless that is the intentional aesthetic.
Concept Development
For this mock advertisement, I envisioned a nostalgic scene: a woman in a vintage setting attempting to use a record player as a mobile device. This concept juxtaposes past and present technologies to highlight the evolution of personal audio devices.
Leveraging MidJourney for Image Generation
To bring this idea to life, I employed MidJourney, a state-of-the-art generative AI model. After several iterations and prompt adjustments, I achieved an image that closely matched my vision. One challenge was guiding MidJourney to generate period-appropriate headphones, as it tended to produce more modern designs. This highlighted the limitations and the need for further refinement using additional tools.
The MidJourney Result
Enhancing with AI Upscaling
Post-generation, I processed the image through an AI-powered enhancement tool. This step amplified details, enriched colors, and upscaled the image resolution, making it suitable for high-quality applications.
AI-Powered Refinements in Photoshop
To address the anachronistic elements and add finer details, I imported the image into Adobe Photoshop. Utilizing Photoshop's Generative Fill and AI capabilities, I modified the headphones to a more vintage style and replaced the record player with an era-appropriate model. AI-assisted touch-ups helped in seamlessly blending these elements into the scene.
Refinements in Photoshop
Product Integration with AI Design
To complete the advertisement, I needed a product that symbolized cutting-edge technology. Introducing SoundIQ—the hypothetical, most advanced wireless earbuds featuring built-in AI, an integrated music player, and an astonishing two-week battery life.
Using AI design tools, I crafted a sleek image of the earbuds that encapsulated innovation and modernity.
A new set of earbuds.
Final Composition
Returning to Photoshop, I assembled all the elements. The final ad juxtaposes the cumbersome, vintage attempt at portable music with the sleek efficiency of SoundIQ earbuds, emphasizing the leaps in technology.
The final ad.
Conclusion
Creating this advertisement through traditional means would have required substantial resources: hiring models, sourcing vintage apparel and equipment, securing locations and permits, and coordinating a full production team. By harnessing generative AI and AI-enhanced tools, I was able to produce a unique, high-quality advertisement efficiently and cost-effectively.
For small businesses and designers, AI democratizes creative content creation, allowing for limitless possibilities without prohibitive costs. This project showcases how AI can be strategically used in advertising to produce compelling visuals that resonate with audiences while maintaining a professional standard.
Embracing the Future of Creative AI
Generative AI is not just a buzzword—it's a catalyst for innovation in advertising. By understanding how to integrate AI tools like MidJourney and Photoshop's AI features, we can push creative boundaries and deliver impactful messages that engage and inspire. The fusion of AI and human creativity opens doors to new realms of possibility in the ever-evolving landscape of digital marketing.
ShopINSP, the online store for the western-themed television network INSP, sought captivating visuals of families enjoying a summer road trip for their latest commercial campaign. As the UX Designer and AI Specialist, I was entrusted with creating these engaging images and bringing them to life using cutting-edge AI technologies.
Traditionally, INSP's marketing materials feature older models to align with their primary demographic. For this campaign, I proposed a fresh approach to broaden the network's appeal by incorporating younger families. This strategy aimed to evoke nostalgia while resonating with a wider audience.
Harnessing Generative AI for Image Creation
To bring this vision to life, I leveraged MidJourney, a state-of-the-art generative AI model powered by deep learning algorithms. Through meticulous prompt engineering, I guided the AI to produce high-quality images that encapsulated the essence of road trip life.
Images reflecting road trip life.
Advanced Image Refinement with AI Tools
Post-generation, the images were imported into Adobe Photoshop, where I utilized AI-powered features such as Neural Filters and Content-Aware Fill for further enhancement.
A wide assortment of images were created to show different couples and families on vacation.
Introducing Motion Through AI and Compositing
Recognizing that static images might not fully engage viewers in a commercial spot, I developed an innovative technique to infuse subtle motion using a combination of AI tools and motion graphics software.
3D Modeling and Website Recreation with Blender
To replicate the website experience within the commercial, I incorporated Blender for 3D modeling and rendering.
Final Production and Outcomes
The fusion of generative AI, AI-enhanced motion graphics, and 3D visualization resulted in a compelling commercial that captivated audiences.
AI Learning
As an early adopter of AI, I wanted to share knowledge with colleagues. What started out as just a few PDFs eventually turned into a full magazine. I am now in the process of turning it into a website anyone can access.
AEye Today evolved into telling the story of AI: what is new and how it can be used practically in business. Focusing on generative AI such as MidJourney, DALL-E, ChatGPT, and various audio and video generators, the magazine/website also explores what AI is and how much it has changed in such a short amount of time.
Generative AI
Instead of using stock footage, I wanted to carry the AI imagery style I had developed for the website into this spot, but I also wanted a bit more. Using MidJourney and Photoshop, I created the base images, then took them one step further by subtly animating the background with AI and zooming in on the character to create a unique moving image.
The "donut"
I then needed to create the middle of the donut. This was achieved in Blender by recreating the webpage, adding a camera move that mimics a user's scroll, and animating the featured products. Each product was modeled in 3D with the artwork applied, then animated pulling out of its listing image (which was created to match the 3D product). If you watch closely, you'll notice that once a product is pulled out, it no longer appears in its listing image.
Making things move
Since this was a commercial spot, still images with basic zooming or panning seemed a bit boring. I developed a technique using the AI images, Photoshop, Runway, and final composition in After Effects to give slight movement to the images. This was meant to be subtle using the current state of AI video generation in a practical manner.
AI in Development
As a long-time user of 3D software (dating back to the '90s), I am always looking for new ways to automate tasks or make them easier: perhaps a feature I wish were included in the software, or a different way of accessing an interface element to speed up my work. Enter Blender addons, plug-and-play additions to my 3D software of choice. But what I needed wasn't always available, and I wasn't versed enough in Python to write my own. That was, until I teamed up with AI!
I wanted to create my own Blender addons using my experience with the software, as well as my experience in User Interface and User Experience Design, to make the already incredible app even better. Using ChatGPT and Claude to assist me, I laid out my ideas for various addons and asked the large language models to help with the Python code to make them work. After much back and forth and testing, I have created almost a dozen addons so far, ranging from simple UI updates to advanced lip-syncing tools that use AI to create shape key-based mouth movements in sync with any audio uploaded, saving time and energy compared to keyframing each mouth movement manually.
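For readers unfamiliar with how Blender addons are wired up, here is a minimal, illustrative skeleton. The names and the operator's behavior are hypothetical stand-ins, not the actual ToonUps code; the structure (a `bl_info` dictionary plus `register()`/`unregister()` functions) is what Blender itself requires. The `bpy` import is guarded so the pure helper can run outside Blender.

```python
# Illustrative Blender addon skeleton (names are hypothetical, not the
# shipped ToonUps code). Blender discovers addons via bl_info and the
# register()/unregister() pair.

bl_info = {
    "name": "Toon Mouth Helper (example)",
    "author": "Cory D. Wright",
    "version": (1, 0, 0),
    "blender": (3, 6, 0),
    "category": "Animation",
}

try:
    import bpy  # only available inside Blender
except ImportError:
    bpy = None


def shape_key_name(viseme):
    """Map a viseme label to the shape-key naming convention the addon expects."""
    return f"mouth_{viseme.lower()}"


if bpy is not None:

    class TOONUPS_OT_sync_mouth(bpy.types.Operator):
        """Insert shape-key keyframes for a list of (frame, viseme) pairs
        stored as a custom property on the scene."""
        bl_idname = "toonups.sync_mouth"
        bl_label = "Sync Mouth to Audio"

        def execute(self, context):
            obj = context.active_object
            for frame, viseme in context.scene.get("viseme_keys", []):
                key = obj.data.shape_keys.key_blocks.get(shape_key_name(viseme))
                if key:
                    key.value = 1.0
                    key.keyframe_insert("value", frame=frame)
            return {"FINISHED"}

    def register():
        bpy.utils.register_class(TOONUPS_OT_sync_mouth)

    def unregister():
        bpy.utils.unregister_class(TOONUPS_OT_sync_mouth)
```

Keeping the phoneme-to-shape-key logic in plain functions like `shape_key_name` also makes the addon testable without launching Blender, which sped up the back-and-forth iteration with the LLMs considerably.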
These addons are now available for other Blender users on the Blender Marketplace under my ToonUps brand.
Some Examples
Generative AI
ShopINSP needed model shots, but had a very limited budget. Instead of hiring models, going on location, shooting for hours upon hours to get the right shot, and then spending days in post to finish them, I proposed to use AI, and within a short amount of time, a full virtual photoshoot could take place. This not only saved time, but saved the company money and allowed me to create hundreds of images in dozens of different locations.
With generative AI images, the sky is the limit when it comes to what you can create, but in most cases, you don't want to go too high. For the Shop at INSP, I wanted to maintain a high level of realism while still leaving room to get creative, such as the flashy Santa for a last-minute Christmas ad. Traditional photography would have required a full photoshoot: set setup, costumes, models, and post-production, none of which fit the budget or the timeline.
Know your audience
The primary audience of the INSP network is 60+, split almost equally between men and women. This became the basis for the shop's imagery, focusing mostly on showcasing people of that age group. AI allowed for the easy creation of such images, but since, by default, AI tends to generate 20-something, tall, dark, and handsome people, a lot of careful prompting was needed to get the right demographic.
Bring the whole family
While the primary demographic was of a particular age, we wanted to occasionally showcase family shots: fathers and sons, mothers and daughters, or an entire family. Multiple products and people in one shot can prove a challenge for AI, requiring specialized prompting to get each person to look correct and a lot of region editing to get everything perfect.
Find out how I use MidJourney, Photoshop, and other tools to create these shots.
Any good design, whether it be a photo, painting, or AI creation, needs an idea. With AI, that idea helps to form the basis of the prompt. The subject, location, and action must all be translated into a series of words that ideally resonate with the AI’s training data.
While there is no perfect prompt, I’ve found that starting with the subject and action, followed by location and keywords for lighting setup, camera type, and depth of field, tends to produce good results. I start each prompt with an overall concept and adjust it based on how the AI responds.
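That field order (subject and action, then location, then lighting, camera, and depth-of-field keywords) can be expressed as a small template. The sketch below is just my heuristic made explicit; the example values are hypothetical, not prompts from the actual ShopINSP shoots.

```python
def build_prompt(subject, action, location, lighting, camera, dof):
    """Assemble a MidJourney-style prompt: subject and action first,
    then location, then lighting / camera / depth-of-field keywords."""
    return ", ".join([f"{subject} {action}", location, lighting, camera, dof])

# Hypothetical example in the ShopINSP style:
prompt = build_prompt(
    subject="a retired couple",
    action="unpacking a picnic basket",
    location="roadside overlook at golden hour",
    lighting="warm natural light",
    camera="shot on 35mm film",
    dof="shallow depth of field",
)
```

Treating the prompt as structured fields rather than one long sentence makes it easy to swap a single variable (say, the location) across a batch of generations while keeping the lighting and camera language, and therefore the look, consistent.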
Once the scene is set and the subjects look right (hands included), there’s still more to do. Using the region tool in MJ, I often refine clothing, fix hands, or add details to the scene. Final touches are made in Photoshop with both traditional tools and AI enhancements. For ShopINSP, product artwork is added before the final image is processed through an AI enhancer for an even better look.
Generative AI
Some Examples