Recently, I saw Johnny sharing the Paint3D project in the group. The project's full name is "Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models."
The project does not yet have a public demo or code, so for now we can only read through their description of the technical approach.
Paint3D is a novel coarse-to-fine generative framework that produces high-resolution, lighting-free, and diverse 2K UV texture maps for untextured 3D mesh models, conditioned on text or image inputs.
The paper's abstract lays out the approach in more detail. The key challenge it addresses is generating high-quality textures without embedded lighting information, so that the textures can be re-lit or re-edited within modern graphics pipelines. To achieve this, the method first leverages a pre-trained depth-aware 2D diffusion model to generate view-conditioned images and performs multi-view texture fusion to create an initial coarse texture map. However, since 2D models can neither fully represent 3D shapes nor suppress lighting effects, the coarse texture map may suffer from incomplete regions and lighting artifacts. To address this, the authors train specialized UV Inpainting and UVHD diffusion models for shape-aware refinement of incomplete regions and removal of lighting artifacts. Through this coarse-to-fine process, Paint3D generates high-quality, lighting-free 2K UV textures that maintain semantic consistency, significantly advancing the state of the art in 3D object texture generation.
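Since the official code is not public, here is only a minimal sketch of how the coarse-to-fine pipeline described in the abstract could be structured. Every function name, signature, and stub implementation below is a hypothetical placeholder inferred from the paper's description, not Paint3D's actual API.

```python
# Hypothetical sketch of the Paint3D coarse-to-fine texturing pipeline.
# All components are placeholder stubs standing in for the real models.
import numpy as np

UV_RES = 2048  # target 2K UV texture resolution


def render_depth(mesh, viewpoint):
    """Placeholder: render a depth map of the mesh from one camera view."""
    return np.zeros((512, 512), dtype=np.float32)


def depth_aware_diffusion(depth_map, prompt):
    """Placeholder: a pre-trained depth-conditioned 2D diffusion model
    (a ControlNet-style depth condition, for example) producing an RGB view."""
    return np.zeros((512, 512, 3), dtype=np.float32)


def backproject_to_uv(mesh, viewpoint, image):
    """Placeholder: project a rendered view back onto the mesh's UV chart.
    Returns a partial texture plus a mask of the texels this view covers."""
    tex = np.zeros((UV_RES, UV_RES, 3), dtype=np.float32)
    mask = np.zeros((UV_RES, UV_RES), dtype=bool)
    return tex, mask


def uv_inpaint(texture, hole_mask):
    """Placeholder: shape-aware UV Inpainting diffusion model that fills
    texels no camera view covered."""
    return texture


def uvhd_refine(texture):
    """Placeholder: UVHD diffusion model that removes residual lighting
    artifacts and sharpens detail in UV space."""
    return texture


def paint3d_coarse_to_fine(mesh, prompt, viewpoints):
    # Stage 1 (coarse): generate view-conditioned images, then fuse them
    # into an initial UV texture via multi-view back-projection.
    texture = np.zeros((UV_RES, UV_RES, 3), dtype=np.float32)
    covered = np.zeros((UV_RES, UV_RES), dtype=bool)
    for view in viewpoints:
        depth = render_depth(mesh, view)
        rgb = depth_aware_diffusion(depth, prompt)
        partial, mask = backproject_to_uv(mesh, view, rgb)
        new = mask & ~covered          # keep first-seen texels only
        texture[new] = partial[new]
        covered |= mask

    # Stage 2 (fine): inpaint the uncovered regions, then refine in UV space.
    texture = uv_inpaint(texture, ~covered)
    texture = uvhd_refine(texture)
    return texture


if __name__ == "__main__":
    tex = paint3d_coarse_to_fine(mesh=None,
                                 prompt="a rusty metal barrel",
                                 viewpoints=range(6))
    print(tex.shape)  # (2048, 2048, 3)
```

The split between the two stages mirrors the paper's framing: the coarse stage works in image space per view, while both refinement models operate directly on the UV map, which is why the result stays consistent across views.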
Paper: https://arxiv.org/abs/2312.13913
The project's capabilities include:
- Image-conditional texture generation
- Text-conditional texture generation