Shanna Koopmans
June 13, 2024

Introduction

Unlike traditional filmmaking, where physical lights alone illuminate the actors, virtual production works with two kinds of lighting: the physical lights on set and the virtual lighting in Unreal Engine. To make the final image look believable, the foreground and background need to blend, and matching the physical and virtual lighting plays a big role in that. This is often a trial-and-error process that costs valuable time. Because of this, the goal of this research was to find an existing technique for matching the physical and virtual lighting that could be implemented at the XR Stage at BUas.

Testing

To gather initial data on existing techniques for matching the physical lighting with the virtual lighting, Shanna investigated current literature and conducted interviews with industry experts. During these interviews she picked their brains about their approaches and gathered more in-depth information about lighting for virtual production. This was followed by testing, where the techniques were replicated and tried out at the XR Stage. The same environment, physical setting, and actor were utilised for both techniques, ensuring that these variables did not affect the efficacy of the techniques. All details of the tested techniques and the experiment are described in the research paper.

Figure: Set-up of the testing phase
Figure: Final result

Results

After the testing of the techniques, the data was analysed using a comparative analysis. Unfortunately, neither technique was deemed suitable for implementation at the XR Stage. Both took quite a long time, and the steps taken to match the physical and virtual lighting were mostly based on subjective judgement rather than measurable facts. On the other hand, a lot of valuable practical suggestions were given during the interviews and by other involved people during the data collection phase. Since one of Shanna's goals for this research was to make the process of matching the physical and virtual lighting easier for students, she created a document containing these practical suggestions. To gauge the usability and clarity of the document, another experiment took place in which multiple students with different levels of prior experience tested it out. Afterwards they filled in a survey, which gave Shanna more insight into the changes that still needed to be made to the best-practices document.

Recommendations

It is important to note that the outcome of this research would likely differ significantly if the scope were broader, with more techniques tested and in different studios. There are a few techniques that Shanna did not get to dive deeply into within her research, as they were not applicable at the XR Stage at the time due to equipment or time constraints. However, they are still very relevant and could be useful for virtual production enthusiasts and studios to explore.

Pixel Mapping

Pixel Mapping is the process of analysing an input image or video stream by defining areas of interest, computing the average pixel values within each area, and then sending those values out to another system via a protocol (such as DMX) so that they can be used to drive and control physical devices. The video below contains a more detailed explanation and shows the effect it has.

S1E6: Pixel Mapping – How to Create Interactive Lighting
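For those who want to experiment with the idea, the sketch below shows the core of pixel mapping in Python: averaging the pixels inside each area of interest of a frame and packing the results into consecutive DMX channel values. The region layout and the omitted DMX transport are assumptions for illustration, not part of any specific product.

    # A minimal sketch of pixel mapping: sample regions of a video frame,
    # average their colour, and emit the values as DMX channel data.
    # The regions and the transport below are illustrative assumptions.

    import numpy as np

    # Areas of interest as (x, y, width, height) in frame pixels.
    REGIONS = [
        (0,   0, 160, 90),   # e.g. a fixture to the actor's left
        (480, 0, 160, 90),   # e.g. a fixture to the actor's right
    ]

    def average_colour(frame: np.ndarray, region: tuple) -> tuple:
        """Mean RGB value of one area of interest (frame is H x W x 3, uint8)."""
        x, y, w, h = region
        patch = frame[y:y + h, x:x + w]
        r, g, b = patch.reshape(-1, 3).mean(axis=0)
        return int(r), int(g), int(b)

    def frame_to_dmx(frame: np.ndarray) -> list:
        """Flatten the averaged RGB values into consecutive DMX channels."""
        channels = []
        for region in REGIONS:
            channels.extend(average_colour(frame, region))
        return channels  # one byte per channel, 0-255

    # Handing the channel values to a DMX interface (for example over
    # Art-Net or sACN) is left out here, as it depends on the hardware.

Run per frame, this keeps the physical fixtures in step with the on-screen content, which is what creates the interactive lighting effect shown in the video.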

CyberGaffer

During the course of this research project, Shanna came across an application that was still in development at the time. CyberGaffer is a set of plugins for 3D software accompanied by a standalone application. The plugins capture light information from the virtual scene and transfer it to the application, which then tunes the physical lights accordingly. The most recent development update is that CyberGaffer now supports calibrating LED walls or TVs to match the colour space of the camera and light fixtures.
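As a rough illustration of this capture-and-transfer pattern (and explicitly not CyberGaffer's actual protocol), an engine-side plugin could sample a light value from the virtual scene and stream it to a separate controller application. The message format, port, and fixture name below are invented for this sketch.

    # A generic sketch of the capture-and-transfer pattern: the plugin
    # samples lighting from the virtual scene and streams it to an
    # application that drives the physical fixtures. All names and the
    # wire format here are assumptions made for illustration.

    import json
    import socket

    CONTROLLER_ADDR = ("127.0.0.1", 9000)  # assumed address of the control app

    def send_light_sample(fixture_id: str, rgb: tuple, intensity: float) -> None:
        """Send one sampled virtual-light value to the controller over UDP."""
        message = json.dumps({
            "fixture": fixture_id,
            "rgb": rgb,              # 0-255 per channel, from the virtual scene
            "intensity": intensity,  # 0.0-1.0
        }).encode("utf-8")
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(message, CONTROLLER_ADDR)

    # Example: a key light sampled as warm white at 80% intensity.
    send_light_sample("key_light_1", (255, 214, 170), 0.8)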

Figure: First batch of calibration spheres