Publications

Procedural Multiscale Geometry Modeling using Implicit Surfaces

Date: 2025-10-11 Authors: Venu Bojja, Adam Bosak, Juan Raul Padron-Griffe

Materials exhibit geometric structures across mesoscopic to microscopic scales, influencing macroscale properties such as appearance, mechanical strength, and thermal behavior. Capturing and modeling these multiscale structures is challenging but essential for computer graphics, engineering, and materials science. We present a framework inspired by hypertexture methods, using implicit surfaces and sphere tracing to synthesize multiscale structures on the fly without precomputation. This framework models volumetric materials with particulate, fibrous, porous, and laminar structures, allowing control over size, shape, density, distribution, and orientation. We enhance structural diversity by superimposing implicit periodic functions while improving computational efficiency. The framework also supports spatially varying particulate media, particle agglomeration, and piling on convex and concave structures, such as rock formations (mesoscale), without explicit simulation. We demonstrate its potential in the appearance modeling of volumetric materials and investigate how spatially varying properties affect the perceived macroscale appearance. As a proof of concept, we show that microstructures created by our framework can be reconstructed from image and distance values defined by implicit surfaces, using both first-order and gradient-free optimization methods.
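The core synthesis loop described above, sphere tracing an implicit surface with a superimposed implicit periodic function, can be sketched in a few lines. Everything below (the gyroid term, its frequency and amplitude, the step damping) is an illustrative assumption, not the paper's actual model:

```python
import math

def sdf(p):
    """Signed distance to a unit sphere, perturbed by a gyroid-like implicit
    periodic function to add small-scale surface structure (illustrative field)."""
    x, y, z = p
    base = math.sqrt(x * x + y * y + z * z) - 1.0
    freq, amp = 8.0, 0.05
    gyroid = (math.sin(freq * x) * math.cos(freq * y)
              + math.sin(freq * y) * math.cos(freq * z)
              + math.sin(freq * z) * math.cos(freq * x))
    return base + amp * gyroid

def sphere_trace(origin, direction, max_steps=256, eps=1e-4, t_max=10.0):
    """March along the ray by the field value; damped because the perturbed
    field is no longer a true distance bound."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if abs(d) < eps:
            return t          # hit: surface reached within tolerance
        t += 0.5 * d          # damped step keeps the iteration stable
        if t > t_max:
            break
    return None               # miss

t_hit = sphere_trace((0.0, 0.0, -3.0), (0.0, 0.0, 1.0))
```

No precomputation is involved: the structure exists only through the field evaluated during marching, which is the hypertexture-style property the abstract refers to.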

Practical Inverse Rendering of Textured and Translucent Appearance

Date: 2025-07-27 Authors: Philippe Weier, Jérémy Riviere, Ruslan Guseinov, Stephan Garbin, Philipp Slusallek, Bernd Bickel, Thabo Beeler, Delio Vicini

Inverse rendering has emerged as a standard tool to reconstruct the parameters of appearance models (e.g., textured BSDFs) from images. In this work, we present several novel contributions motivated by the practical challenges of recovering high-resolution surface appearance textures, including spatially-varying subsurface scattering parameters.
First, we propose Laplacian mipmapping, which combines differentiable mipmapping and a Laplacian pyramid representation into an effective preconditioner. This seemingly simple technique significantly improves the quality of recovered surface textures on a set of challenging inverse rendering problems. Our method automatically adapts to the render and texture resolutions, only incurs moderate computational cost and achieves better quality than prior work while using fewer hyperparameters. Second, we introduce a specialized gradient computation algorithm for textured, path-traced subsurface scattering, which facilitates faithful reconstruction of translucent materials. By using path tracing, we enable the recovery of complex appearance while avoiding the approximations of the previously used diffusion dipole methods. Third, we demonstrate the application of both these techniques to reconstructing the textured appearance of human faces from sparse captures. Our method recovers high-quality relightable appearance parameters that are compatible with current production renderers.
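The Laplacian-pyramid idea behind the preconditioner can be illustrated with a minimal decomposition/reconstruction pair: the texture is optimized as per-scale detail bands plus a coarse base instead of raw texels. This sketch uses box downsampling and pixel replication for brevity; the paper's differentiable mipmapping is more involved:

```python
import numpy as np

def downsample(img):   # 2x2 box average
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(img):     # nearest-neighbour (pixel replication)
    return img.repeat(2, axis=0).repeat(2, axis=1)

def build_laplacian_pyramid(img, levels):
    """Split a texture into per-scale detail bands plus a coarse residual.
    Optimizing these bands instead of raw texels preconditions the problem:
    low frequencies become few, fast-moving degrees of freedom."""
    pyramid, cur = [], img
    for _ in range(levels):
        down = downsample(cur)
        pyramid.append(cur - upsample(down))  # detail band at this scale
        cur = down
    pyramid.append(cur)                       # coarse base
    return pyramid

def collapse(pyramid):
    """Reassemble the texture; exact inverse of the decomposition above."""
    cur = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        cur = upsample(cur) + detail
    return cur

tex = np.random.rand(16, 16)
pyr = build_laplacian_pyramid(tex, levels=3)
recon = collapse(pyr)
```

Because the decomposition is exactly invertible, swapping the parameterization changes only the optimization dynamics, not the representable textures.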

Perceived quality of BRDF models

Date: 2025-07-24 Authors: Behnaz Kavoosighafi, Rafał K. Mantiuk, Saghi Hajisharif, Ehsan Miandji, Jonas Unger

Material appearance is commonly modeled with Bidirectional Reflectance Distribution Functions (BRDFs), which must trade accuracy against complexity and storage cost. To investigate the current practices of BRDF modeling, we collect the first high dynamic range stereoscopic video dataset that captures the perceived quality degradation with respect to a number of parametric and non-parametric BRDF models. Our dataset shows that the loss functions currently used to fit BRDF models, such as the mean-squared error of logarithmic reflectance values, correlate poorly with the perceived quality of materials in rendered videos. We further show that quality metrics that compare rendered material samples give a significantly higher correlation with subjective quality judgments, and that a simple Euclidean distance in the ITP color space (ΔEITP) shows the highest correlation. Additionally, we investigate the use of different BRDF-space metrics as loss functions for fitting BRDF models and find that logarithmic mapping is the most effective approach for BRDF-space loss functions.
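A minimal sketch of the final observation above, that a logarithmic mapping makes BRDF-space losses better behaved than plain MSE, on made-up reflectance values (the exact loss variants evaluated in the paper differ):

```python
import numpy as np

def log_loss(rho_fit, rho_ref):
    """Squared error after logarithmic mapping of reflectance values;
    compresses the huge dynamic range of specular peaks so the fit is not
    dominated by a few highlight samples. Illustrative form only."""
    return np.mean((np.log1p(rho_fit) - np.log1p(rho_ref)) ** 2)

# A bright specular sample and a dim diffuse sample with the same *relative*
# error contribute comparably under the log mapping, but not under plain MSE:
rho_ref = np.array([1000.0, 0.1])   # hypothetical specular peak / diffuse tail
rho_fit = rho_ref * 1.1             # 10% relative error everywhere
plain_mse = np.mean((rho_fit - rho_ref) ** 2)
log_mse = log_loss(rho_fit, rho_ref)
```

Here `plain_mse` is driven almost entirely by the specular sample, while `log_mse` weights both samples on a comparable scale.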

From Words to Wood: Text‐to‐Procedurally Generated Wood Materials

Date: 2025-04-18 Authors: Mohcen Hafidi and Alexander Wilkie

In the domain of wood modeling, we present a new complex appearance model, coupled with a user-friendly NLP-based frontend for intuitive interactivity. First, we present a procedurally generated wood model that is capable of accurately simulating intricate wood characteristics, including growth rings, vessels/pores, rays, knots, and figure. Furthermore, newly developed features were introduced, including brushiness distortion, influence points, and individual feature control. These novel enhancements facilitate a more precise matching between procedurally generated wood and ground truth images. Second, we present a text-based user interface that relies on a trained natural language processing model that is designed to map user plain English requests into the parameter space of our procedurally generated wood model. This significantly reduces the complexity of the authoring process, thereby enabling any user, regardless of their level of woodworking expertise or familiarity with procedurally generated materials, to utilize it to its fullest potential.

Quantized FCA: Efficient Zero-Shot Texture Anomaly Detection

Date: 2025-01-01 Authors: Andrei Timotei Ardelean, Patrick Rückbeil, Tim Weyrich

Zero-shot anomaly localization is a rising field in computer vision research, with important progress in recent years. This work focuses on the problem of detecting and localizing anomalies in textures, where anomalies can be defined as regions that deviate from the overall statistics, violating the stationarity assumption. The main limitation of existing methods is their high running time, which makes them impractical for deployment in real-world scenarios, such as assembly line monitoring. We propose a real-time method, named QFCA, which implements a quantized version of the feature correspondence analysis (FCA) algorithm. By carefully adapting the patch statistics comparison to work on histograms of quantized values, we obtain a 10× speedup with little to no loss in accuracy. Moreover, we introduce a feature preprocessing step based on principal component analysis, which enhances the contrast between normal and anomalous features, improving the detection precision on complex textures. Our method is thoroughly evaluated against prior art, comparing favorably with existing methods.
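A simplified stand-in for the quantization idea: score each patch by comparing a histogram of quantized feature values against the image-wide histogram. This is not the QFCA algorithm (which builds on feature correspondence analysis), only an illustration of why fixed-size quantized histograms make patch-statistics comparison cheap; all parameters here are made up:

```python
import numpy as np

def quantize(features, bins=16):
    """Map scalar feature values to integer bin indices: patch statistics
    become fixed-size histograms instead of raw sample sets."""
    lo, hi = features.min(), features.max()
    q = ((features - lo) / (hi - lo + 1e-12) * bins).astype(int)
    return np.clip(q, 0, bins - 1)

def histogram(q, bins=16):
    h = np.bincount(q.ravel(), minlength=bins).astype(float)
    return h / h.sum()

def anomaly_map(features, patch=8, bins=16):
    """Chi-square distance between each patch histogram and the image-wide
    histogram: stationary texture -> low score, local deviation -> high score."""
    q = quantize(features, bins)
    ref = histogram(q, bins)
    H, W = q.shape
    scores = np.zeros((H // patch, W // patch))
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            h = histogram(q[i:i + patch, j:j + patch], bins)
            scores[i // patch, j // patch] = 0.5 * np.sum((h - ref) ** 2 / (h + ref + 1e-12))
    return scores

rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (64, 64))
img[24:32, 24:32] += 4.0          # synthetic anomaly in one patch
scores = anomaly_map(img)
hot = np.unravel_index(scores.argmax(), scores.shape)
```

The cost per patch is O(bins) regardless of patch content, which is the source of the speedup the abstract describes.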

Stable Sample Caching for Interactive Stereoscopic Ray Tracing

Date: 2025-01-01 Authors: Henrik Philippi, Henrik Wann Jensen, Jeppe Revall Frisvad

We present an algorithm for interactive stereoscopic ray tracing that decouples visibility from shading and enables caching of radiance results for temporally stable and stereoscopically consistent rendering. Building on interactive stable ray tracing, we construct a screen space cache that carries surface samples from frame to frame via forward reprojection. Using a visibility heuristic, we adaptively trace the samples and achieve high performance with few temporal artefacts. Our method also serves as a shading cache, which enables temporal reuse and filtering of shading results in virtual reality (VR). We demonstrate good antialiasing and temporal coherence when filtering geometric edges. We compare our sample-based radiance caching, which operates in screen space, with temporal antialiasing (TAA) and a hash-based shading cache that operates in a voxel representation of world space. In addition, we show how to extend the shading cache into a radiance cache. Finally, we use the per-sample radiance values to improve stereo vision by employing stereo blending with improved estimates of the blending parameter between the two views.

GOEmbed: Gradient Origin Embeddings for Representation Agnostic 3D Feature Learning

Date: 2024-11-08 Authors: Animesh Karnewar, Roman Shapovalov, Tom Monnier, Andrea Vedaldi, Niloy J. Mitra, David Novotny

Encoding information from 2D views of an object into a 3D representation is crucial for generalized 3D feature extraction. Such features can then enable 3D reconstruction, 3D generation, and other applications. We propose GOEmbed (Gradient Origin Embeddings), which encodes input 2D images into any 3D representation without requiring a pre-trained image feature extractor. This is unlike typical prior approaches, in which input images are either encoded using 2D features extracted from large pre-trained models or handled with customized features designed for particular 3D representations; worse, encoders may not yet be available for specialized 3D neural representations such as MLPs and hash-grids. We extensively evaluate our proposed GOEmbed under different experimental settings on the OmniObject3D benchmark. First, we evaluate how well the mechanism compares against prior encoding mechanisms on multiple 3D representations using an illustrative experiment called Plenoptic-Encoding. Second, the efficacy of the GOEmbed mechanism is further demonstrated by achieving a new SOTA FID of 22.12 on the OmniObject3D generation task using a combination of GOEmbed and DFM (Diffusion with Forward Models), which we call GOEmbedFusion. Finally, we evaluate how the GOEmbed mechanism bolsters sparse-view 3D reconstruction pipelines.

A Surface‐based Appearance Model for Pennaceous Feathers

Date: 2024-11-07 Authors: Juan Raúl Padrón‐Griffe, Dario Lanza, Adrián Jarabo, Adolfo Muñoz

The appearance of a real-world feather results from the complex interaction of light with its multi-scale biological structure, including the central shaft, branching barbs, and interlocking barbules on those barbs. In this work, we propose a practical surface-based appearance model for feathers. We represent the far-field appearance of feathers using a BSDF that implicitly represents the light scattering from the main biological structures of a feather: the shaft, barbs, and barbules. Our model accounts for the particular characteristics of feather barbs, such as their non-cylindrical cross-sections and internal scattering media, via a numerically-based BCSDF. To model the relative visibility between barbs and barbules, we derive a masking term for the differential projected areas of the different components of the feather’s microgeometry, which allows us to analytically compute the masking between barbs and barbules. As opposed to previous works, our model uses a lightweight representation of the geometry based on a 2D texture and does not require explicitly representing the barbs as curves. We show the flexibility and potential of our appearance model in representing the most important visual features of several pennaceous feathers.

Practical RGB Measurement of Fluorescence and Blood Distributions in Skin

Date: 2024-11-01 Authors: Emilie Nogué, Arvin Lin, Xiaohui Li, Giuseppe Claudio Guarnera, Abhijeet Ghosh

Biophysical skin appearance modeling has previously focused on spectral absorption and scattering due to chromophores in various skin layers. In this work, we extend recent practical skin appearance measurement methods employing RGB illumination to provide a novel estimate of skin fluorescence, as well as direct measurements of two parameters related to blood distribution in skin: blood volume fraction and blood oxygenation. The proposed method involves the acquisition of RGB facial skin reflectance responses under RGB illumination produced by regular desktop LCD screens. Unlike previous works that have employed hyperspectral imaging for this purpose, we demonstrate successful isolation of elastin-related fluorescence, as well as blood distributions in capillaries and veins, using our practical RGB imaging procedure.

NeuPreSS: Compact Neural Precomputed Subsurface Scattering for Distant Lighting of Heterogeneous Translucent Objects

Date: 2024-10-18 Authors: T. TG, J. R. Frisvad, R. Ramamoorthi, H. W. Jensen

Monte Carlo rendering of translucent objects with heterogeneous scattering properties is often expensive both in terms of memory and computation. If the scattering properties are described by a 3D texture, memory consumption is high. If we do path tracing and use a high dynamic range lighting environment, the computational cost of the rendering can easily become significant. We propose a compact and efficient neural method for representing and rendering the appearance of heterogeneous translucent objects. Instead of assuming only surface variation of optical properties, our method represents the appearance of a full object taking its geometry and volumetric heterogeneities into account. This is similar to a neural radiance field, but our representation works for an arbitrary distant lighting environment. In a sense, we present a version of neural precomputed radiance transfer that captures relighting of heterogeneous translucent objects. We use a multi-layer perceptron (MLP) with skip connections to represent the appearance of an object as a function of spatial position, direction of observation, and direction of incidence. The latter is considered a directional light incident across the entire non-self-shadowed part of the object. We demonstrate the ability of our method to compactly store highly complex materials while having high accuracy when compared to reference images of the represented object in unseen lighting environments. As compared with path tracing of a heterogeneous light scattering volume behind a refractive interface, our method more easily enables importance sampling of the directions of incidence and can be integrated into existing rendering frameworks while achieving interactive frame rates.
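The skip-connected MLP described above can be sketched as a plain forward pass over the 9-D input (position, view direction, light direction). The layer sizes and the skip scheme are illustrative assumptions, and the parameters below are random rather than trained:

```python
import numpy as np

def mlp_radiance(x, wo, wi, params, depth=4):
    """Toy forward pass of a skip-connected MLP mapping (position, view
    direction, light direction) to RGB radiance. `params` are random here;
    a real model would be trained against rendered references."""
    h_in = np.concatenate([x, wo, wi])                 # 9-D input
    h = np.maximum(params["W0"] @ h_in + params["b0"], 0.0)
    for k in range(1, depth):
        # skip connection: re-inject the raw input at every layer
        z = np.concatenate([h, h_in])
        h = np.maximum(params[f"W{k}"] @ z + params[f"b{k}"], 0.0)
    return params["Wout"] @ h + params["bout"]         # RGB radiance

rng = np.random.default_rng(1)
hidden, depth = 64, 4
params = {"W0": rng.normal(size=(hidden, 9)) * 0.1, "b0": np.zeros(hidden),
          "Wout": rng.normal(size=(3, hidden)) * 0.1, "bout": np.zeros(3)}
for k in range(1, depth):
    params[f"W{k}"] = rng.normal(size=(hidden, hidden + 9)) * 0.1
    params[f"b{k}"] = np.zeros(hidden)

rgb = mlp_radiance(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                   np.array([0.0, 0.0, 1.0]), params)
```

The compactness argument in the abstract follows from the parameter count: a few dense layers replace a full volumetric texture of scattering properties.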

Deep SVBRDF Acquisition and Modelling: A Survey

Date: 2024-09-16 Authors: Behnaz Kavoosighafi, Saghi Hajisharif, Ehsan Miandji, Gabriel Baravdish, Wen Cao, Jonas Unger

Hand in hand with the rapid development of machine learning, deep learning and generative AI algorithms and architectures, the graphics community has seen a remarkable evolution of novel techniques for material and appearance capture. Typically, these machine-learning-driven methods and technologies, in contrast to traditional techniques, rely on only a single or very few input images, while enabling the recovery of detailed, high-quality measurements of bi-directional reflectance distribution functions, as well as the corresponding spatially varying material properties, also known as Spatially Varying Bi-directional Reflectance Distribution Functions (SVBRDFs). Learning-based approaches for appearance capture will play a key role in the development of new technologies that will exhibit a significant impact on virtually all domains of graphics. Therefore, to facilitate future research, this State-of-the-Art Report (STAR) presents an in-depth overview of the state-of-the-art in machine-learning-driven material capture in general, and focuses on SVBRDF acquisition in particular, due to its importance in accurately modelling complex light interaction properties of real-world materials. The overview includes a categorization of current methods along with a summary of each technique, an evaluation of their functionalities, their complexity in terms of acquisition requirements, computational aspects and usability constraints. The STAR is concluded by looking forward and summarizing open challenges in research and development toward predictive and general appearance capture in this field. A complete list of the methods and papers reviewed in this survey is available at computergraphics.on.liu.se/star_svbrdf_dl/.

N-BVH: Neural ray queries with bounding volume hierarchies

Date: 2024-08-01 Authors: Philippe Weier, Alexander Rath, Élie Michel, Iliyan Georgiev, Philipp Slusallek and Tamy Boubekeur

Conference or journal: SIGGRAPH 2024

Neural representations have shown spectacular ability to compress complex signals in a fraction of the raw data size. In 3D …

Classifying Texture Anomalies at First Sight

Date: 2024-07-25 Authors: Andrei-Timotei Ardelean, Tim Weyrich

The problem of detecting and localizing defects in images has been tackled with various approaches, including what are now called traditional computer vision techniques, as well as machine learning. Notably, most of these efforts have been directed toward the normality-supervised setting of this problem. That is, these algorithms assume the availability of a curated set of normal images, known not to contain any anomalies. The anomaly-free images constitute reference data, used to detect anomalies in a one-class classification setting. While this kind of data is easier to acquire than anomaly-annotated images, it is still costly or difficult to obtain in-domain data for certain applications. We address the problem of anomaly detection and localization under a training-set-free paradigm and do not require any anomaly-free reference data. Concretely, we introduce a truly zero-shot method that can localize anomalies in a single image of a previously unobserved texture class. Then, we develop a mechanism to leverage additional test images, which may contain anomalies. Furthermore, we extend our analysis to also include a categorization of the anomalies in the given population through clustering. Importantly, we focus our attention on textures and texture-like images as we develop an anomaly detection method for structural defects, rather than logical anomalies. This aligns with the proposed setting, which avoids the supervisory signal generally needed for detecting logical and semantic anomalies. This poster summarizes our recent line of research on localization and classification of anomalies in real-world texture images.

Neural SSS: Lightweight Object Appearance Representation

Date: 2024-07-24 Authors: T. TG, D. M. Tran, H. W. Jensen, R. Ramamoorthi, J. R. Frisvad

We present a method for capturing the BSSRDF (bidirectional scattering-surface reflectance distribution function) of arbitrary geometry with a neural network. We demonstrate how a compact neural network can represent the full 8-dimensional light transport within an object including heterogeneous scattering. We develop an efficient rendering method using importance sampling that is able to render complex translucent objects under arbitrary lighting. Our method can also leverage the common planar half-space assumption, which allows it to represent one BSSRDF model that can be used across a variety of geometries. Our results demonstrate that we can render heterogeneous translucent objects under arbitrary lighting and obtain results that match the reference rendered using volumetric path tracing.

Practical Error Estimation for Denoised Monte Carlo Image Synthesis

Date: 2024-07-13 Authors: Arthur Firmino, Ravi Ramamoorthi, Jeppe Revall Frisvad, Henrik Wann Jensen

We present a practical global error estimation technique for Monte Carlo ray tracing combined with deep-learning-based denoising. Our method uses aggregated estimates of bias and variance to determine the squared error distribution of the pixels. Unlike unbiased estimates for classical Monte Carlo ray tracing, this distribution follows a noncentral chi-squared distribution under reasonable assumptions. Based on this, we develop a stopping criterion for denoised Monte Carlo image synthesis that terminates rendering once a user-specified error threshold has been achieved. Our results demonstrate that our error estimate and stopping criterion work well on a variety of scenes and that we are able to achieve a given error threshold without the user specifying the number of samples needed.
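The stopping criterion can be sketched from its ingredients: per-pixel bias and variance estimates aggregated into a squared-error estimate that is compared against a user threshold. This sketch uses only the mean of the error distribution (for a normal estimate with bias b and variance v, the expected squared error is b² + v); the paper models the full noncentral chi-squared distribution. All numbers below are hypothetical:

```python
import numpy as np

def should_stop(bias_est, var_est, threshold):
    """Stop rendering once the estimated mean squared error, aggregated
    over pixels, drops below the user threshold. Simplified sketch: the
    actual method works with the error *distribution*, not just its mean."""
    mse = np.mean(bias_est ** 2 + var_est)
    return mse <= threshold

# Variance of a Monte Carlo mean shrinks like 1/N with sample count N;
# denoising adds a bias term. Hypothetical per-pixel estimates:
N = 1024
var_est = np.full(16, 2.0 / N)     # sample variance divided by sample count
bias_est = np.full(16, 0.01)       # estimated denoiser bias per pixel
stop = should_stop(bias_est, var_est, threshold=1e-2)
```

In an adaptive renderer this check would run after each batch of samples, replacing a hand-picked sample budget.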

Practical Appearance Model for Foundation Cosmetics

Date: 2024-07-01 Authors: Dario Lanza, Juan Raúl Padrón-Griffe, Alina Pranovich, Adolfo Muñoz, Jeppe Frisvad, Adrian Jarabo

Conference or journal: Computer Graphics Forum (EGSR 2024)

We present an appearance model for cosmetics, in particular for foundation layers, that reproduces a range of …

Blind Localization and Clustering of Anomalies in Textures

Date: 2024-06-01 Authors: Andrei-Timotei Ardelean, Tim Weyrich

Conference or journal: Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 2024

Anomaly detection and localization in images is a growing field in computer vision. In this area, a seemingly understudied problem is …

Navigating the Manifold of Translucent Appearance

Date: 2024-04-01 Authors: Dario Lanza, Adrian Jarabo, Belen Masia

Conference or journal: Computer Graphics Forum (Eurographics 2024)

This work presents a perceptually-motivated manifold for translucent appearance, designed for intuitive editing of translucent materials by navigating through the manifold. Classic tools for …

Multidimensional Compressed Sensing for Spectral Light Field Imaging

Date: 2024-02-27 Authors: Wen Cao, Ehsan Miandji, Jonas Unger

This paper considers a compressive multi-spectral light field camera model that utilizes a one-hot spectral-coded mask and a microlens array to capture spatial, angular, and spectral information using a single monochrome sensor. We propose a model that employs compressed sensing techniques to reconstruct the complete multi-spectral light field from undersampled measurements. Unlike previous work where a light field is vectorized to a 1D signal, our method employs a 5D basis and a novel 5D measurement model, hence matching the intrinsic dimensionality of multispectral light fields. We mathematically and empirically show the equivalence of the 5D and 1D sensing models, and most importantly that the 5D framework achieves orders of magnitude faster reconstruction while requiring a small fraction of the memory. Moreover, our new multidimensional sensing model opens new research directions for designing efficient visual data acquisition algorithms and hardware.
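The claimed equivalence of the multidimensional and vectorized sensing models can be demonstrated in 2-D with a Kronecker-product identity (the paper uses five modes for spatial × angular × spectral data; the matrices below are random stand-ins):

```python
import numpy as np

# Applying small per-mode sensing matrices equals applying one huge
# Kronecker-product matrix to the vectorized signal, without ever
# materializing that matrix. Shown in 2-D for brevity.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 8))     # mode-1 sensing matrix
B = rng.normal(size=(4, 10))    # mode-2 sensing matrix
X = rng.normal(size=(8, 10))    # signal

y_1d = np.kron(A, B) @ X.ravel()    # vectorized 1D model: one 12x80 matrix
y_nd = (A @ X @ B.T).ravel()        # separable model: two small matmuls

# Memory: kron(A, B) stores 12*80 = 960 entries; A and B together store 64.
```

The gap between the two storage footprints grows multiplicatively with the number of modes, which is the source of the memory and speed advantage the abstract reports for the 5D framework.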

High-Fidelity Zero-Shot Texture Anomaly Localization Using Feature Correspondence Analysis

Date: 2024-01-01 Authors: Andrei-Timotei Ardelean, Tim Weyrich

Conference or journal: Winter Conference on Applications of Computer Vision 2024

We propose a novel method for Zero-Shot Anomaly Localization on textures. The task refers to identifying abnormal regions in an otherwise homogeneous image. …

FROST-BRDF: A Fast and Robust Optimal Sampling Technique for BRDF Acquisition

Date: 2024-01-01 Authors: Ehsan Miandji, Tanaboon Tongbuasirilai, Saghi Hajisharif, Behnaz Kavoosighafi, and Jonas Unger

Conference or journal: IEEE Transactions on Visualization and Computer Graphics

Efficient and accurate BRDF acquisition of real world materials is a challenging research problem that requires sampling …

Perceptual error optimization for Monte Carlo animation rendering

Date: 2023-11-01 Authors: Miša Korać, Corentin Salaün, Iliyan Georgiev, Pascal Grittmann, Philipp Slusallek, Karol Myszkowski, Gurprit Singh

Conference or journal: SIGGRAPH Asia 2023

Independently estimating pixel values in Monte Carlo rendering results in a perceptually sub-optimal white-noise distribution of error in …

HoloFusion: Towards Photo-realistic 3D Generative Modeling

Date: 2023-10-01 Authors: Animesh Karnewar, Niloy J. Mitra, Andrea Vedaldi, David Novotny

Diffusion-based image generators can now produce high-quality and diverse samples, but their success has yet to fully translate to 3D generation: existing diffusion methods can either generate low-resolution but 3D-consistent outputs, or detailed 2D views of 3D objects that may have structural defects and lack view consistency or realism. We present HoloFusion, a method that combines the best of these approaches to produce high-fidelity, plausible, and diverse 3D samples while learning only from a collection of multi-view 2D images. The method first generates coarse 3D samples using a variant of the recently proposed HoloDiffusion generator. Then, it independently renders a large number of views of the coarse 3D model, super-resolves them to add detail, and distills those into a single, high-fidelity implicit 3D representation, which also ensures view consistency of the final renders. The super-resolution network is trained as an integral part of HoloFusion, end-to-end, and the final distillation uses a new sampling scheme to capture the space of super-resolved signals. We compare our method against existing baselines, including DreamFusion, Get3D, EG3D, and HoloDiffusion, and achieve, to the best of our knowledge, the most realistic results on the challenging CO3Dv2 dataset.

A Biologically-Inspired Appearance Model for Snake Skin

Date: 2023-07-03 Authors: Juan Raúl Padrón Griffe, Diego Bielsa, Adrian Jarabo, Adolfo Muñoz

Simulating light transport in biological tissues is a longstanding challenge, given their complex multilayered structure. In biology, one of the most remarkable and studied examples of such tissues are the scales that cover the skin of reptiles, which present a combination of photonic structures and pigmentation. This is, however, a somewhat ignored problem in computer graphics. In this work, we propose a multilayered appearance model based on the anatomy of snake skin. Some snakes are known for their striking, highly iridescent scales resulting from light interference. We model snake skin as a two-layered reflectance function: the top layer is a thin layer producing a specular iridescent reflection, while the bottom layer is a diffuse, highly absorbing layer that results in a dark diffuse appearance maximizing the iridescent color of the skin. We demonstrate our layered material on a wide range of appearances and show that our model is able to qualitatively match the appearance of snake skin.
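The top-layer iridescence can be illustrated with a textbook two-beam thin-film interference term: reflectance oscillates with wavelength because of the phase accumulated on the optical path through the film. The parameters below are invented for illustration and are not the paper's fitted values:

```python
import numpy as np

def thin_film_reflectance(wavelength_nm, thickness_nm=400.0, n_film=1.55,
                          cos_theta_t=1.0, r1=0.2, r2=0.2):
    """Two-beam interference approximation for a thin layer: the wave
    reflected at the top interface interferes with the one reflected at the
    bottom, with a phase set by the optical path through the film.
    Illustrative only: r1, r2 and the film parameters are made up, and a
    full model would use Fresnel terms and multiple internal bounces."""
    phase = 4.0 * np.pi * n_film * thickness_nm * cos_theta_t / wavelength_nm
    return r1 ** 2 + r2 ** 2 + 2.0 * r1 * r2 * np.cos(phase)

lam = np.linspace(400.0, 700.0, 301)   # visible range, nm
R = thin_film_reflectance(lam)
# Reflectance oscillates across the visible spectrum, so the reflected hue
# shifts with viewing angle and film thickness: iridescence.
```

The dark, absorbing bottom layer in the model suppresses diffuse backscatter, which is why the interference color of the top layer dominates the perceived appearance.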

Neural Prefiltering for Correlation-Aware Levels of Detail

Date: 2023-07-01 Authors: Philippe Weier, Tobias Zirr, Anton Kaplanyan, Ling-Qi Yan, Philipp Slusallek

Conference or journal: SIGGRAPH 2023

We introduce a practical general-purpose neural appearance filtering pipeline for physically-based rendering. We tackle the previously difficult challenge of aggregating visibility across many …

Denoising-Aware Adaptive Sampling for Monte Carlo Ray Tracing

Date: 2023-07-01 Authors: Arthur Firmino, Jeppe Revall Frisvad, Henrik Wann Jensen

Conference or journal: SIGGRAPH 2023

Monte Carlo rendering is a computationally intensive task, but combined with recent deep-learning based advances in image denoising it is possible to achieve high quality …

Affordable method for measuring fluorescence using Gaussian distributions and bounded MESE

Date: 2023-07-01 Authors: Tomáš Iser, Loïc Lachiver, and Alexander Wilkie

Conference or journal: Optics Express

We present an accurate and low-cost method for measuring fluorescence in materials. Our method outputs an estimate of the material’s Donaldson matrix, which is a commonly …

The Visual Language of Fabrics

Date: 2023-07-01 Authors: Valentin Deschaintre*, Julia Guerrero-Viu*, Diego Gutierrez, Tamy Boubekeur, Belen Masia

Conference or journal: SIGGRAPH 2023

We introduce text2fabric, a novel dataset that links free-text descriptions to various fabric materials. The dataset comprises 15,000 natural language descriptions associated to …

HOLODIFFUSION: Training a 3D Diffusion Model Using 2D Images

Date: 2023-06-17 Authors: Animesh Karnewar, Andrea Vedaldi, David Novotny, Niloy J. Mitra

Diffusion models have emerged as the best approach for generative modeling of 2D images. Part of their success is due to the possibility of training them on millions if not billions of images with a stable learning objective. However, extending these models to 3D remains difficult for two reasons. First, finding a large quantity of 3D training data is much more complex than for 2D images. Second, while it is conceptually trivial to extend the models to operate on 3D rather than 2D grids, the associated cubic growth in memory and compute complexity makes this infeasible. We address the first challenge by introducing a new diffusion setup that can be trained, end-to-end, with only posed 2D images for supervision; and the second challenge by proposing an image formation model that decouples model memory from spatial memory. We evaluate our method on real-world data, using the CO3D dataset which has not been used to train 3D generative models before. We show that our diffusion models are scalable, train robustly, and are competitive in terms of sample quality and fidelity to existing approaches for 3D generative modeling.

SparseBTF: Sparse Representation Learning for Bidirectional Texture Functions

Date: 2023-06-01 Authors: Kavoosighafi, Behnaz and Frisvad, Jeppe Revall and Hajisharif, Saghi and Unger, Jonas and Miandji, Ehsan

Conference or journal: Eurographics Symposium on Rendering

We propose a novel dictionary-based representation learning model for Bidirectional Texture Functions (BTFs) aiming at compact …

Practical Temporal and Stereoscopic Filtering for Real-time Ray Tracing

Date: 2023-06-01 Authors: Henrik Philippi, Jeppe Revall Frisvad, Henrik Wann Jensen

Conference or journal: Eurographics Symposium on Rendering (2023)

We present a practical method for temporal and stereoscopic filtering that generates stereo-consistent rendering. Existing methods for stereoscopic rendering often reuse samples …

Automatic inference of an anatomically meaningful solid wood

Date: 2023-01-01 Authors: Thomas K. Nindel, Mohcen Hafidi, Tomáš Iser, Alexander Wilkie

Wood is a volumetric material with a very large appearance gamut that is further enlarged by numerous finishing techniques. Computer graphics has made considerable progress in creating sophisticated and flexible appearance models that allow convincing renderings of wooden materials. However, these do not yet allow fully automatic appearance matching to a concrete exemplar piece of wood and have to be fine-tuned by hand. More general appearance matching strategies are incapable of reconstructing anatomically meaningful volumetric information. This is essential for applications where the internal structure of wood is significant, such as non-planar furniture parts machined from a solid block of wood, the translucent appearance of thin wooden layers, or the field of dendrochronology. In this paper, we provide the two key ingredients for automatically matching a procedural wood appearance model to exemplar photographs: a good initialization, built on detecting and modelling the ring structure, and a phase-based loss function that allows us to accurately recover growth ring deformations and gives anatomically meaningful results. Our ring-detection technique is based on curved Gabor filters and works robustly for a considerable range of wood types.
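The (straight) Gabor filter at the heart of such ring detection is a Gaussian-windowed sinusoid whose response is large only where the local stripe period and orientation match. A minimal version, without the curvature term the paper adds, with made-up sizes and periods:

```python
import numpy as np

def gabor_kernel(size=21, wavelength=6.0, theta=0.0, sigma=4.0):
    """Oriented Gabor kernel: a cosine carrier windowed by a Gaussian.
    Its response peaks on stripe patterns of matching period and direction,
    which is why (curved variants of) these filters pick out growth rings."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the carrier
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * xr / wavelength)

# Growth rings locally look like stripes; the response is strongest when the
# filter's wavelength and orientation match the local ring pattern.
stripes = np.cos(2.0 * np.pi * np.arange(21) / 6.0)   # period-6 stripes
patch = np.tile(stripes, (21, 1))                     # stripes along x
k_match = gabor_kernel(theta=0.0)                     # oscillates along x
k_mismatch = gabor_kernel(theta=np.pi / 2)            # oscillates along y
resp_match = np.abs(np.sum(patch * k_match))
resp_mismatch = np.abs(np.sum(patch * k_mismatch))
```

A bank of such kernels over orientations and wavelengths gives, per pixel, the locally dominant ring direction and period, which is the information an initialization stage needs.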

Rendering Glinty Granular Materials in Virtual Reality

Date: 2022-12-01 Authors: Nynne Kajs, Mikkel Gjøl, Jakob Gath, Henrik Philippi, Jeppe Revall Frisvad, Andreas Bærentzen

Conference or journal: ICAT-EGVE 2022. Highly realistic rendering of grainy materials like sand is achievable given significant computational resources and a lot of time for …

Efficient Storage and Importance Sampling for Fluorescent Reflectance

Date: 2022-11-01 Authors: Q. Hua, V. Tázlar, A. Fichet, A. Wilkie.

Conference or journal: Computer Graphics Forum Vol. 42 (2023). We propose a technique for efficient storage and importance sampling of fluorescent spectral data. Fluorescence is fully described by a re-radiation …

EARS: Efficiency-Aware Russian Roulette and Splitting

Date: 2022-07-01 Authors: Alexander Rath, Pascal Grittmann, Sebastian Herholz, Philippe Weier, Philipp Slusallek

Conference or journal: SIGGRAPH 2022. Russian roulette and splitting are widely used techniques to increase the efficiency of Monte Carlo estimators. But, despite their …
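The mechanism the abstract refers to fits in a few lines: Russian roulette terminates a recursive estimator with some survival probability and divides surviving contributions by that probability, which keeps the estimator unbiased for any termination rate. A toy sketch under our own assumptions (a geometric series standing in for a light path's decaying throughput; names are ours):

```python
import random

def rr_series_sample(rng, survival_prob=0.8):
    """One Russian-roulette estimate of sum_{k>=0} 0.5**k = 2.
    Each bounce multiplies throughput by 0.5; continuing with probability
    survival_prob and reweighting by 0.5 / survival_prob keeps the expected
    contribution of bounce k equal to 0.5**k, so the estimator is unbiased."""
    total, weight = 0.0, 1.0
    while True:
        total += weight
        if rng.random() >= survival_prob:   # terminate the path
            return total
        weight *= 0.5 / survival_prob       # survive: compensate the kept paths

def estimate(n, survival_prob=0.8, seed=1):
    rng = random.Random(seed)
    return sum(rr_series_sample(rng, survival_prob) for _ in range(n)) / n
```

Any survival probability in (0, 1] gives the same expectation (2); what changes is variance versus cost per sample, and choosing that trade-off well is the efficiency question the paper studies.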

Polarization-imaging Surface Reflectometry using Near-field Display

Date: 2022-07-01 Authors: Emilie Nogué, Yiming Lin, Abhijeet Ghosh

Conference or journal: Eurographics Symposium on Rendering (EGSR) 2022. We present a practical method for measurement of spatially varying isotropic surface reflectance of planar samples using a combination of single-view polarization imaging and near-field display …

Efficiency-aware multiple importance sampling for bidirectional rendering algorithms

Date: 2022-07-01 Authors: Pascal Grittmann, Ömercan Yazici, Iliyan Georgiev, Philipp Slusallek

Conference or journal: SIGGRAPH 2022. Multiple importance sampling (MIS) is an indispensable tool in light-transport simulation. It enables robust Monte Carlo integration by combining samples from several …
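As a concrete reminder of what MIS does, the standard balance heuristic weights each strategy's samples by that strategy's share of the combined sampling density. A minimal sketch that is entirely our own example, not the paper's method (integrating x² on [0, 1], truth 1/3, combining a uniform and a linear strategy):

```python
import math
import random

def balance_weight(p_self, p_other):
    """Balance heuristic: this strategy's share of the total density."""
    return p_self / (p_self + p_other)

def mis_estimate(n, seed=1):
    """Combine a uniform strategy (pdf 1) and a linear strategy
    (pdf 2x, sampled by inverse CDF) to estimate the integral of x^2
    over [0, 1], whose true value is 1/3."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x1 = rng.random()                        # uniform sample, pdf 1
        total += balance_weight(1.0, 2.0 * x1) * x1 * x1 / 1.0
        x2 = math.sqrt(1.0 - rng.random())       # linear sample in (0, 1], pdf 2*x2
        total += balance_weight(2.0 * x2, 1.0) * x2 * x2 / (2.0 * x2)
    return total / n
```

The per-strategy weights sum to one at every point, so the combined estimator stays unbiased while each strategy is damped where the other samples more densely; the paper then asks how such weights should be chosen with estimator efficiency, not just density, in mind.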

Affordable Spectral Measurements of Translucent Materials

Date: 2022-07-01 Authors: Tomáš Iser, Tobias Rittig, Emilie Nogué, Thomas Nindel, Alexander Wilkie

Conference or journal: Eurographics Symposium on Rendering (EGSR) 2022. We present a spectral measurement approach for the bulk optical properties of translucent materials using only low-cost components. We focus on the translucent inks …

ReLU Fields: The Little Non-linearity That Could

Date: 2022-07-01 Authors: Animesh Karnewar, Tobias Ritschel, Oliver Wang, Niloy J. Mitra

Conference or journal: SIGGRAPH 2022. In many recent works, multi-layer perceptrons (MLPs) have been shown to be suitable for modeling complex spatially-varying functions including images and 3D scenes. Although the MLPs are able …

Wide Gamut Moment-based Constrained Spectral Uplifting

Date: 2022-07-01 Authors: Lucia Tódová, Alexander Wilkie, Luca Fascione

Conference or journal: Computer Graphics Forum Vol. 41 (2022). Spectral rendering is increasingly used in appearance-critical rendering workflows due to its ability to predict colour values under varying illuminants. However, directly modelling assets …

Progressive Denoising of Monte Carlo Rendered Images

Date: 2022-05-01 Authors: Arthur Firmino, Jeppe Revall Frisvad, Henrik Wann Jensen

Conference or journal: Computer Graphics Forum. Image denoising based on deep learning has become a powerful tool to accelerate Monte Carlo rendering. Deep learning techniques can produce smooth images using a low sample count. …

Differentiable Transient Rendering

Date: 2021-12-01 Authors: Shinyoung Yi, Donggun Kim, Kiseok Choi, Adrian Jarabo, Diego Gutierrez, Min H. Kim

Conference or journal: SIGGRAPH Asia 2021. Our general-purpose differentiable transient rendering framework allows computing derivatives of complex, multi-bounce transient sequences with respect to scene …

A Fitted Radiance and Attenuation Model for Realistic Atmospheres

Date: 2021-08-01 Authors: Alexander Wilkie, Petr Vévoda, Lukáš Hošek, Thomas Bashford-Rogers, Tomáš Iser, Monika Kolářová, Tobias Rittig, Jaroslav Křivánek

Conference or journal: Proceedings of SIGGRAPH, ACM Transactions on Graphics. We present a fitted model of sky dome radiance and attenuation for realistic …

An OpenEXR Layout for Spectral Images

Date: 2021-07-01 Authors: Alban Fichet, Romain Pacanowski, Alexander Wilkie

Conference or journal: Journal of Computer Graphics Techniques (JCGT). We propose a standardised layout to organise spectral data stored in OpenEXR images. We motivate why we chose the OpenEXR format as basis for our …

Moment-based Constrained Spectral Uplifting

Date: 2021-06-01 Authors: Lucia Tódová, Alexander Wilkie, Luca Fascione

Conference or journal: Eurographics Symposium on Rendering (EGSR).

A Compact Representation for Fluorescent Spectral Data

Date: 2021-06-01 Authors: Qingqin Hua, Alban Fichet, Alexander Wilkie

Conference or journal: Eurographics Symposium on Rendering (EGSR). We propose a technique to efficiently importance sample and store fluorescent spectral data. Fluorescence behaviour is properly represented as a re-radiation matrix: for a given input …

Perception of material appearance: a comparison between painted and rendered images

Date: 2021-05-01 Authors: Johanna Delanoy, Ana Serrano, Belen Masia, Diego Gutierrez

Conference or journal: Journal of Vision (JoV), 2021, Vol. 21(5). Painters are masters in replicating the visual appearance of materials. While the perception of material appearance is not yet fully understood, …

The joint role of geometry and illumination on material recognition

Date: 2021-02-01 Authors: Manuel Lagunas, Ana Serrano, Diego Gutierrez, Belen Masia

Conference or journal: Journal of Vision. Observing and recognizing materials is a fundamental part of our daily life. Under typical viewing conditions, we are capable of effortlessly identifying the objects …