Passing Multi-Channel Material Textures to a 3-Channel Loss

05/27/2021
by Thomas Chambon, et al.

Our objective is to compute a textural loss that can be used to train texture generators with the multiple material channels typically used for physically based rendering, such as albedo, normal, roughness, metalness, and ambient occlusion. Neural textural losses often build on the feature spaces of pretrained convolutional neural networks. Unfortunately, these pretrained models are only available for 3-channel RGB data, which restricts neural textural losses to that format. To overcome this limitation, we show that passing random channel triplets to a 3-channel loss yields a multi-channel loss that can be used to generate high-quality material textures.
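The idea in the abstract can be sketched as follows: repeatedly draw a random triplet of material channels, apply the same triplet to both the generated and target texture stacks, and average an off-the-shelf 3-channel loss over the draws. This is a minimal illustration, not the paper's implementation; the function names, the `num_triplets` parameter, and the placeholder `rgb_loss` callable (standing in for, e.g., a VGG-based textural loss) are assumptions.

```python
import numpy as np

def triplet_loss(generated, target, rgb_loss, num_triplets=8, rng=None):
    """Multi-channel textural loss via random channel triplets (sketch).

    generated, target: arrays of shape (C, H, W) holding C material
    channels (e.g. albedo RGB, normal XYZ, roughness, metalness, AO).
    rgb_loss: any loss defined on 3-channel images, passed in as a
    callable taking two (3, H, W) arrays. All names here are
    illustrative, not taken from the paper.
    """
    if rng is None:
        rng = np.random.default_rng()
    num_channels = generated.shape[0]
    total = 0.0
    for _ in range(num_triplets):
        # Draw a random triplet of channel indices; the same triplet is
        # applied to both stacks so corresponding channels are compared.
        idx = rng.choice(num_channels, size=3, replace=False)
        total += rgb_loss(generated[idx], target[idx])
    return total / num_triplets

# Usage with a simple L2 stand-in for the 3-channel textural loss:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gen = rng.random((7, 32, 32))   # 7 material channels, hypothetical
    tgt = rng.random((7, 32, 32))
    l2 = lambda a, b: float(np.mean((a - b) ** 2))
    print(triplet_loss(gen, tgt, l2, rng=np.random.default_rng(1)))
```

Averaging over many random triplets means every channel combination is eventually compared, so gradients reach all material channels even though each individual evaluation sees only three of them.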

