Vampire: The Masquerade – Bloodlines is a cult classic. A Stable Diffusion filter now shows what an AI remaster could look like.
In 2021, Intel researchers showed how artificial intelligence could one day apply a kind of photorealism filter to video games. In their example, the team gave Grand Theft Auto 5 a new look: a neural network re-rendered elements such as vegetation, asphalt, and reflective car surfaces directly on top of the game's rendered frames.
The team trained the network on image pairs: GTA 5 screenshots and real-world street photos with similar content, such as a bike or a car. Intel's system then learned to transfer the style of the real photos onto the game. Objects and scenes that were rare in the training dataset were rendered less convincingly by the method.
Nevertheless, the example showed the potential of AI filters for 3D content, and Nvidia and Tesla have presented similar methods, for instance to generate training data for autonomous driving.
This is what an AI remaster of “Vampire: The Masquerade – Bloodlines” could look like
A lot has happened since then: diffusion models have replaced older image synthesis architectures and can render almost any object or scene in a photorealistic or stylized way. Companies like Runway are already using these models to change the style of videos using image-to-image methods, and there are many projects in the open-source community doing similar things with video and 3D content.
One of them is TemporalKit, a tool for "adding Temporal Stability to a Stable Diffusion Render". In a recent Reddit post, a user shows with TemporalKit what an AI remaster of the 2004 classic Vampire: The Masquerade – Bloodlines might look like. (Yes, that was almost 20 years ago, sorry.)
Old video game characters become almost photorealistic people; street scenes and kitchen interiors look nearly real. As in other examples, however, Stable Diffusion still struggles with temporal stability.
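Pipelines like this essentially run Stable Diffusion's image-to-image mode on each video frame, and flicker arises because every frame is stylized independently. The toy sketch below illustrates one common mitigation, reusing a fixed random seed across frames so identical inputs produce identical outputs. The `stylize_frame` function is a hypothetical stand-in for a real diffusion model (which would need GPU and model weights), not TemporalKit's actual method:

```python
import random

def stylize_frame(frame, strength, seed):
    # Stand-in for a diffusion img2img step: blend the original pixels
    # with seeded pseudo-random "generated" detail. A real pipeline would
    # call a diffusion model here instead.
    rng = random.Random(seed)
    return [(1 - strength) * px + strength * rng.random() for px in frame]

def restyle_video(frames, strength=0.4, seed=42):
    # Reusing one seed for every frame keeps the "generated" detail
    # correlated across frames, which reduces (but does not eliminate)
    # the flicker seen in naive per-frame img2img remasters.
    return [stylize_frame(f, strength, seed) for f in frames]

video = [[0.1, 0.5, 0.9], [0.1, 0.5, 0.9]]  # two identical toy frames
out = restyle_video(video)
assert out[0] == out[1]  # identical inputs stay identical with a fixed seed
```

The `strength` parameter mirrors the img2img idea that a lower value preserves more of the original frame, which is the other main lever for temporal stability.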
More efficient architectures and hybrid approaches could enable AI filters
Bloodlines’ “remaster” is just a video, but it shows how studios or modding communities could visually remake old classics in the future. With the generative capabilities of models like Stable Diffusion, users could even choose their own remastering style.
One obstacle is performance: even on the fastest graphics cards available today, Stable Diffusion cannot yet generate 60 frames per second. More efficient architectures such as OpenAI's Consistency Models, or a hybrid approach that builds on Intel's 2021 work and fills the gap in training data with a generative AI model, could address this in the future.