December 5, 2022 · Computer Vision

Self-Supervised Post-Correction for Monte Carlo Denoising

  • 48 minutes
  • Jonghee Back, Binh-Son Hua, Toshiya Hachisuka, Bochang Moon

  • SIGGRAPH 2022

Abstract

Using a network trained on a large dataset is becoming popular for denoising Monte Carlo rendering. Such a denoising approach based on supervised learning is currently considered the best approach in terms of quality. Nevertheless, this approach may fail when the image to be rendered (i.e., the test data) has very different characteristics from the images included in the training dataset. A pre-trained network may not properly denoise such an image since it is unseen data from a supervised learning perspective. To address this fundamental issue, we introduce a post-processing network that improves the performance of supervised learning denoisers. The key idea behind our approach is to train this post-processing network with self-supervised learning. In contrast to supervised learning, our self-supervised model does not need a reference image in its training process. We can thus use a noisy test image and self-correct the model on the fly to improve denoising performance. Our main contribution is a self-supervised loss that can guide the post-correction network to optimize its parameters without relying on the reference. Our work is the first to apply this self-supervised learning concept in denoising Monte Carlo rendered estimates. We demonstrate that our post-correction framework can boost supervised denoising via our self-supervised optimization. Our implementation is available at https://github.com/CGLab-GIST/self-supervised-post-corr.
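The core idea of self-correcting a denoised estimate on the fly, without a clean reference, can be illustrated with a toy sketch. This is not the authors' actual loss or network (see their repository for that); it is a minimal, hypothetical example assuming a Noise2Noise-style setup: the Monte Carlo samples are split into two independent noisy half buffers, and one half serves as an unbiased pseudo-target for correcting a biased denoised estimate built from the other half.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "image": ground-truth radiance plus two independent
# noisy Monte Carlo half buffers (e.g., from disjoint sample sets).
clean = np.linspace(0.0, 1.0, 64)
half_a = clean + rng.normal(0.0, 0.1, 64)
half_b = clean + rng.normal(0.0, 0.1, 64)

# Pretend a pre-trained denoiser produced a biased (over-darkened)
# estimate from the first half buffer -- the "unseen data" failure case.
denoised = 0.8 * half_a

# Self-supervised post-correction: fit a tiny correction model
# (per-image scale and bias) so the corrected estimate matches the
# *independent* noisy buffer in the least-squares sense. Because
# half_b is an unbiased estimate of the clean image, no reference
# is needed for this fit.
A = np.stack([denoised, np.ones_like(denoised)], axis=1)
scale, bias = np.linalg.lstsq(A, half_b, rcond=None)[0]
corrected = denoised * scale + bias

# The reference is used here only to *evaluate* the toy example.
err_before = np.mean((denoised - clean) ** 2)
err_after = np.mean((corrected - clean) ** 2)
print(err_before, err_after)
```

The two-pixel-buffer split and the scale-and-bias model are illustrative assumptions; the paper optimizes a neural post-correction network with its own self-supervised loss, but the principle is the same: an independent noisy estimate of the same image can stand in for the missing reference.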

Bibtex

@inproceedings{10.1145/3528233.3530730,
author = {Back, Jonghee and Hua, Binh-Son and Hachisuka, Toshiya and Moon, Bochang},
title = {Self-Supervised Post-Correction for Monte Carlo Denoising},
year = {2022},
isbn = {9781450393379},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3528233.3530730},
doi = {10.1145/3528233.3530730},
abstract = {Using a network trained by a large dataset is becoming popular for denoising Monte Carlo rendering. Such a denoising approach based on supervised learning is currently considered the best approach in terms of quality. Nevertheless, this approach may fail when the image to be rendered (i.e., the test data) has very different characteristics than the images included in the training dataset. A pre-trained network may not properly denoise such an image since it is unseen data from a supervised learning perspective. To address this fundamental issue, we introduce a post-processing network that improves the performance of supervised learning denoisers. The key idea behind our approach is to train this post-processing network with self-supervised learning. In contrast to supervised learning, our self-supervised model does not need a reference image in its training process. We can thus use a noisy test image and self-correct the model on the fly to improve denoising performance. Our main contribution is a self-supervised loss that can guide the post-correction network to optimize its parameters without relying on the reference. Our work is the first to apply this self-supervised learning concept in denoising Monte Carlo rendered estimates. We demonstrate that our post-correction framework can boost supervised denoising via our self-supervised optimization. Our implementation is available at https://github.com/CGLab-GIST/self-supervised-post-corr.},
booktitle = {ACM SIGGRAPH 2022 Conference Proceedings},
articleno = {18},
numpages = {8},
keywords = {self-supervised loss, self-supervised learning, self-supervised denoising, Monte Carlo denoising},
location = {Vancouver, BC, Canada},
series = {SIGGRAPH '22}
}
