Status and new results

These are the tasks I completed during the last two weeks:

  • I created a bash script that automatically runs PBRT and the RPF algorithm for every experiment (i.e., every combination of settings) I want to test.
  • I made the RPF implementation multithreaded using OpenMP.
  • I created a small Java application that calculates the MSE and PSNR between two images. This can be used to measure the change in MSE before and after the RPF filtering step, and between filter runs with different parameters.
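The MSE/PSNR comparison performed by the Java tool can be sketched as follows (a minimal sketch in C++ rather than Java, operating on flat arrays of channel values in [0, 1]; the real tool reads image files):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Mean squared error between two images stored as flat arrays of
// channel values in [0, 1]. Illustrative helper, not the actual tool.
double mse(const std::vector<double>& a, const std::vector<double>& b) {
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        double d = a[i] - b[i];
        sum += d * d;
    }
    return sum / a.size();
}

// PSNR in dB, assuming a peak signal value of 1.0.
double psnr(double mseValue) {
    return 10.0 * std::log10(1.0 / mseValue);
}
```

A lower MSE (and thus higher PSNR) after filtering, measured against the high-spp reference image, indicates that the filter moved the noisy image closer to the reference.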

I also generated more results and would like to share some of them with you, as I think they show that my implementation works as it should. The San Miguel scene provided with the pbrt scenes is used for the following images.

Results with images of size 512 by 512

1. The unfiltered scene (using 4 spp)



2.  Image 1 filtered using 4 iterations (box sizes 55, 35, 17 and 7)



3. The reference image (16 spp)


Results with images of size 256 by 256

1. The unfiltered scene (using 4 spp)



2. Image 1 filtered using 1 iteration with a box size of 7



3. The reference image (16 spp)



4. Image 1 filtered using three iterations (box sizes 35, 17 and 7). This image is clearly overblurred, which can be explained by the large box size (35) relative to the image size (256): the box overlaps several unrelated areas, so details are lost.



Results for each attribute

I updated the code so that each attribute can be turned on or off as desired. This makes it possible to show the contribution of each attribute to the filtering process. The results below show the same image filtered with 1: no scene features, 2: only the normal feature, 3: only the world-coordinate feature, and 4: all the features.

1. No scene features: the image is filtered with a plain bilateral filter, without extra features.


2. Only the normal feature:



3. Only the world coordinate feature:



4. All the features combined (same result as presented in the last post)



In the image filtered using only the world coordinates, edges between objects are clearly preserved, which is of course a property inherited from the bilateral filter. However, because the world coordinates change only slightly across the Killeroo, details such as the eyes are over-filtered and become almost invisible. The image filtered using only the normal feature shows almost no difference from the image filtered using all features combined, which means that for this image the normal feature is more important than the world-coordinate feature. The world-coordinate feature can only distinguish points that are far apart, whereas the normals capture small differences in regions where the world coordinates barely change: around the eyes of the Killeroo, for example, the normal varies a lot while the world coordinates do not.
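The effect of enabling or disabling individual features can be illustrated with the per-sample weight of a cross-bilateral filter (a simplified sketch: the Gaussian form and the sigma values are illustrative assumptions, and the actual RPF weights are derived differently, via mutual information):

```cpp
#include <cassert>
#include <cmath>

// Squared distance between two 3-vectors.
double dist2(const double a[3], const double b[3]) {
    double s = 0.0;
    for (int i = 0; i < 3; ++i) {
        double d = a[i] - b[i];
        s += d * d;
    }
    return s;
}

// Cross-bilateral weight between two samples. Each enabled extra
// feature (normal, world coordinate) multiplies in its own Gaussian
// term, so a large difference in ANY enabled feature suppresses the
// weight. The sigma values below are illustrative, not the RPF ones.
double weight(const double colorA[3], const double colorB[3],
              const double normalA[3], const double normalB[3],
              const double worldA[3], const double worldB[3],
              bool useNormal, bool useWorld) {
    double w = std::exp(-dist2(colorA, colorB) / (2.0 * 0.1 * 0.1));
    if (useNormal)
        w *= std::exp(-dist2(normalA, normalB) / (2.0 * 0.3 * 0.3));
    if (useWorld)
        w *= std::exp(-dist2(worldA, worldB) / (2.0 * 1.0 * 1.0));
    return w;
}
```

This is exactly why the eyes survive when normals are enabled: two nearby samples on the eye have almost identical world coordinates (their world term is close to 1), but very different normals, so only the normal term can push the weight down and keep them from being blurred together.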

Status update: first results are arriving

After more than a month of searching for the big bug in my algorithm, the first results are arriving. Because of these bugs, the RPF algorithm used to return the same image it was given as input. A big memory issue has also been resolved: the algorithm used up to 55% of my laptop's memory (8 GB), and now uses a constant 0.5%. The first filter result is given below. The first image shows the Killeroo scene with depth of field, unfiltered, using only 2 samples per pixel; the second image shows the same scene filtered with the RPF algorithm (using only the normals and world coordinates as extra filter features). The shadows have improved a lot through the filtering, and some noise on the body of the Killeroo has been filtered out. The heads and the edges of the Killeroos still show a lot of noise, because with only 2 samples per pixel a lot of noise is present and few samples are available for the RPF algorithm to filter with.


My next task is to alter the RPF algorithm so the scene features which are used in the RPF filtering can be turned on and off easily.

Setting number of samples per pixel in PBRT

I had some problems setting the number of samples per pixel for an image in PBRT.

I thought it would be as simple as adding the line

Renderer "sampler" "integer samplesperpixel" [32]

to the scene description file, but this line causes a warning at runtime because samplesperpixel is not used. It can be replaced with these lines:

Sampler "lowdiscrepancy" "integer pixelsamples" [256]
LightSource "envlight" "integer nsamples" [4] "string mapname" "grace.exr"

Thanks to Mr. Murat KURT for this answer on the Google Groups page of PBRT. My question and the answers can be viewed here.

I wrote this post because I am probably not the only one who has encountered this problem. I hope it is helpful.

Status update 5 Dec

I have been busy writing a renderer plugin for PBRT that should give me the correct output I need to perform the RPF algorithm.

I am still working on the basic output of the plugin: the image itself as a 2D array of pixel values (each with its r, g and b values). This turned out to be harder than planned, because the image is never stored as a whole in the renderer; instead, the samples are sent to the film, which writes the image directly to an image file.
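The 2D accumulation described above can be sketched like this (all names and types here are illustrative, not PBRT's): samples are summed per pixel as they arrive, and the averaged color is read out afterwards.

```cpp
#include <cassert>
#include <vector>

struct RGB { double r, g, b; };

// Minimal framebuffer that accumulates per-pixel samples, since the
// renderer never keeps the whole image in memory itself. Hypothetical
// sketch; PBRT's film classes work differently internally.
struct FrameBuffer {
    int width, height;
    std::vector<RGB> sum;    // running sum of sample colors per pixel
    std::vector<int> count;  // number of samples per pixel

    FrameBuffer(int w, int h)
        : width(w), height(h), sum(w * h, {0, 0, 0}), count(w * h, 0) {}

    void addSample(int x, int y, const RGB& c) {
        int i = y * width + x;
        sum[i].r += c.r; sum[i].g += c.g; sum[i].b += c.b;
        ++count[i];
    }

    RGB pixel(int x, int y) const {  // averaged color of one pixel
        int i = y * width + x;
        int n = count[i] > 0 ? count[i] : 1;
        return {sum[i].r / n, sum[i].g / n, sum[i].b / n};
    }
};
```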

Next step

– Link the bilateral filter algorithm with the output from the plugin.

– Output not only the image itself but also another scene feature.

Planning until Christmas

The goal to reach before Christmas is to be able to show a little demo of a working version of the algorithm to my colleagues, my promoter and mentor.

  • Create a renderer plugin for PBRT that gives back not only the colors at each pixel of the image, but also another scene feature (for example the world coordinates of the first intersection of a sample).
  • Implement a simplified version of the RPF algorithm using the already implemented bilateral filter.
  • Combine the two: use the output of the PBRT plugin as input for the filtering algorithm.

The goal will be to obtain a result that is at least better than the unfiltered image.