At Augmented World Expo (AWE), Snap demonstrated an early version of a real-time, on-device image diffusion model that can generate vivid augmented reality (AR) experiences, and launched Lens Studio 5.0, a set of generative AI tools for AR creators. The announcements mark a new stage for AR content creation, promising users richer, more immersive experiences while lowering the barrier to entry for creators.

The model is small enough to run on a smartphone and fast enough to re-render frames in real time from text prompts, Snap co-founder and chief technology officer Bobby Murphy said at the event. While the emergence of generative image diffusion models is exciting, he noted, they must become far faster to have a meaningful impact on augmented reality, which is why Snap's team has focused on accelerating its machine learning models.
Snapchat users will begin to see Lens effects built on this generative model in the coming months, and Snap plans to make it available to creators by the end of the year. "This real-time, on-device generative machine learning model, and future models, mark an exciting new direction for augmented reality, making us rethink the way we render and create AR experiences," Murphy said.
Murphy also announced Lens Studio 5.0, which gives developers new generative AI tools to create AR effects faster than before, saving weeks or even months of work. AR creators can use the new tools to generate highly realistic, machine-learned face effects for selfies, produce custom style effects in real time that cover a user's face, body, and surroundings, and generate 3D assets in minutes to apply to their Lenses.
Creators can also use the company's face mesh technology to generate characters such as aliens and wizards from text or image prompts, and can produce masks, textures, and materials in minutes. The latest version of Lens Studio also includes an AI assistant that can answer creators' questions. Snap expects these tools to fundamentally change how AR content is created, allowing creators to realize their ideas faster.
Together, the real-time on-device image diffusion model and Lens Studio 5.0 signal rapid progress in AR technology, pointing toward richer augmented reality experiences for users and new momentum for the industry. More applications built on this technology are likely to follow in the near future.