Using the FSRad
Radiosity Processor

A brief overview of radiosity and
how to get the most from this tool

© 2001 Paul Nettle & Fluid Studios. All rights reserved.


Table of contents
Introduction
Standard computer lighting vs. Real Life
How real life lighting works
Patches and Elements – the guts of Radiosity
Lighting values
What radiosity doesn’t do
Options reference
         General options
         Database options
         Lightmap options
         Post processing options
Command line automation
Quick-start guide - jumping in and getting results
Get up and running quickly with FSRad in your own code
About the source code
Interested in taking a course on radiosity online?
Special thanks and credits
Copyright and legal information


Introduction
For those of you who are new to FSRad and just want to see some pretty pictures, click [here] to run the examples.

The first few sections cover the concept of radiosity, so you can better understand how it works and get the most out of it. If you refuse to read them, you will only frustrate yourself.

The rest of the document is a reference that covers the various options in the user interface, what they mean, how they interact and so on.

Do you have the latest version of this software? Find out [here].

[top]


Standard computer lighting vs. Real Life
Let’s start off in familiar territory: standard lighting. Usually, lighting a scene involves placing a series of lights (either point lights or spotlights), assigning a color, maybe giving them a limiting radius and fall-off values (for spotlights), maybe an intensity, and a few other properties. Oftentimes, these lights cast perfectly black shadows, unless you use multiple lights to cover the scene from a lot of angles, or use ambient light. These are a few of your standard lighting tools.

None of these tools exist in radiosity. Your world is about to change; hopefully for the better. These standard lighting tools come with a visual price. Shadows from point lights are razor-edge sharp, whereas real life never has such sharp shadows (even shadows from sunlight have a bit of a soft edge to them.) Multiple lights and ambient light are used to prevent perfectly black shadows. Finally, a lot of effort is required on the part of the artist to simulate all of those subtle details that fool the human eye into thinking it’s looking at something realistic.

Radiosity is, in all essence, a simulation of real life. So let’s consider how real life lighting works, and we’ll take our first step toward understanding the process of radiosity.

[top]


How real life lighting works
When you turn on a light bulb, it starts to emit lots (and lots!) of photons (particles of light.) These particles travel away from the light and land on various objects. Some of these photons will bounce off of those objects, while other photons will be absorbed by the object, based on the object’s color (a perfectly red object will absorb all colors except red.) Those photons that were not absorbed are reflected and the process starts all over again. Each time these photons bounce off an object, some of them make their way to your eye and you perceive the light.

Now, here’s the important part. If you look at a single spot on the wall, where did those photons come from? They all originated at the light source, but some of them have bounced off of the ceiling, the floor, another wall, a table – all to hit that one spot on the wall – where they were eventually reflected back into your eye. Every object contributes to the lighting of every other object in the room.

One of the subtle real life effects you can see is called “color bleeding” – where light from a green floor might be reflected onto a nearby white wall and then reflected into your eye, causing you to see a hint of green on the white wall. This is why you should never paint your floors green.

Some of the light you see may have been bounced around the room a few thousand times before it ever makes it to your eye. But since every surface absorbs some light (there is no such thing as a “perfect reflector”) the more the light is reflected, the less visual impact it makes.

This is why you can look behind the couch and find that missing sock – light will bounce around and eventually some of it will make its way back there. In computer graphics, ambient light was invented as a way to fake this effect. But in reality, the effect is much more complex. Once you start to get used to seeing radiosity, you’ll be able to spot ambient light because of its constant-color nature.

There’s one more topic to cover here, and that is the effect of area lights and how they cast soft shadows... Consider the standard fluorescent light. It’s a long tube, usually hidden behind a diffuser. That diffuser allows the light to act like it’s coming from the entire surface of the diffuser, rather than the single tube of the fluorescent light itself. Let’s assume there is a table under this fluorescent light and we’re a small insect on the floor beside the table. If we look up, we see the entire diffuser. So the area of the floor we’re standing in will get full illumination from the diffuser.

As we walk under the table, we can see less and less of the diffuser, so we enter into a darker and darker area. When we’re under the table completely, we cannot see any part of the diffuser, so any light we do see (and we’ll see some), is coming from the light that’s reflected from other surfaces in the room.
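
If you like seeing things in code, here’s a tiny 2D toy (not FSRad code, just an illustration of the idea) that shows why this produces a soft shadow: each spot on the floor is lit in proportion to how much of the diffuser it can actually see past the table.

  // A toy 2D illustration (not FSRad code) of why an area light casts a soft
  // shadow: each point is lit in proportion to how much of the emitter it can see.
  #include <cstdio>

  int main()
  {
      const double diffuserY = 2.0, diffuserX0 = -1.0, diffuserX1 = 1.0;  // the light
      const double tableY    = 1.0, tableX0    =  0.0, tableX1    = 1.5;  // the occluder
      const int    samples   = 200;

      for (int k = -4; k <= 4; ++k)                  // points along the floor (y = 0)
      {
          double px      = k * 0.5;
          int    visible = 0;

          for (int i = 0; i < samples; ++i)
          {
              // A sample point on the diffuser...
              double dx = diffuserX0 + (diffuserX1 - diffuserX0) * (i + 0.5) / samples;

              // ...and where the line from the floor point to it crosses table height.
              double t  = tableY / diffuserY;
              double cx = px + (dx - px) * t;

              if (cx < tableX0 || cx > tableX1)      // not blocked by the table
                  ++visible;
          }

          printf("floor x = %+.1f  sees %3d%% of the diffuser\n",
                 px, visible * 100 / samples);
      }
      return 0;
  }

Points far from the table see 100% of the diffuser, points fully underneath it see 0%, and the ones in between fall off gradually. That gradual falloff is the soft edge of the shadow.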

[top]


Patches and Elements – the guts of Radiosity
In order for radiosity to do what it needs to do, it must slice your scene up into lots of tiny pieces. A single wall might be sliced up into a grid of smaller polygons called elements and patches.

If you are familiar with what lightmaps are, then you’re already familiar with what FSRad considers an element. A lightmap is a grid of pixels that is applied to a polygon and each pixel in the lightmap is an element. When light bounces around the scene and lands on a surface, it actually lands within the pixels of the lightmap (elements) which are eventually used to give the surface the various lighting effects.

Patches are simply groups of elements. You can specify this grouping within the FSRad user interface (on the General tab, labeled “Patch subdivision factor”.) If you set this value to 1x1, then every 1x1 group of elements (i.e. every single element) becomes a patch. If you set it to 2x2, then there are four elements per patch. And so on...

Conceptually, patches and elements are very different. Light is received by the elements to illuminate a surface, and patches are used to reflect that light back into the scene.

To clarify, when light hits a surface, it ends up in the elements. Some of this energy is reflected back into the scene. So the energy from a group of elements is gathered up into a single patch and later reflected back into the scene. This confusing mess is important for performance. Bouncing light around from every element to every other element is expensive, and can take a lot of time. Patches are a way of reducing the accuracy of the radiosity solution (a little bit) in exchange for speeding things up considerably, a rather nice trade-off.
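
In code terms, the relationship looks something like the sketch below. These aren’t FSRad’s actual data structures (just an illustration): each patch owns a small group of elements, and the energy those elements reflect gets pooled into the patch before it is shot back into the scene.

  // Illustration only -- not FSRad's actual types.  A patch owns a group of
  // elements; the energy its elements reflect is pooled into the patch, and
  // it's the patch (not the individual elements) that later shoots that
  // energy back into the scene.
  #include <vector>

  struct RGB { float r = 0, g = 0, b = 0; };

  struct Element
  {
      RGB reflected;                     // energy this lightmap texel will bounce back
  };

  struct Patch
  {
      std::vector<Element *> elements;   // e.g. a 2x2 or 3x3 group of elements
      RGB unshotEnergy;                  // pooled energy waiting to be shot
  };

  // Pool every element's reflected energy into its owning patch.
  void gatherIntoPatches(std::vector<Patch> &patches)
  {
      for (Patch &p : patches)
      {
          for (Element *e : p.elements)
          {
              p.unshotEnergy.r += e->reflected.r;
              p.unshotEnergy.g += e->reflected.g;
              p.unshotEnergy.b += e->reflected.b;
              e->reflected = RGB();      // the patch owns that energy now
          }
      }
  }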

FSRad has an extra advantage over other radiosity processors, in that it uses an “adaptive degradation” technique. This is a fancy term for “reduce the quality when quality matters less.” I’ll explain...

As the radiosity process begins, the brightest source of energy is chosen and that energy is emitted from that light out into the scene where it is captured by the elements. We then gather the reflected energy from all the elements and move it to their patches. Then the brightest patch is chosen and the process begins again. After a while, much of the scene’s energy is absorbed, and the amount of energy in the brightest patch will be considerably less than it was on the first pass. Although this reflected energy is still important for those subtle details, it is less important than it was on the first pass (in terms of accuracy.) So at this point, we’re able to increase the size of our patches, which speeds things up. We can then continue to emit light from these larger patches, and if the energy for a patch drops too low, we can make the patch bigger still.
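
If it helps to see the flow, here’s a heavily simplified sketch of that loop. The names and the fake 50% reflectance are made up for illustration (the real thing lives in the go() routine in RadGen.cpp); only the order of operations matters here:

  // Hypothetical sketch of progressive refinement with adaptive degradation.
  // Not FSRad's actual code -- the structures and numbers are invented.
  #include <vector>
  #include <algorithm>

  struct Patch
  {
      float unshotEnergy = 0.0f;  // energy waiting to be reflected into the scene
      int   sizeU = 2, sizeV = 2; // how many elements this patch currently groups
  };

  // Stand-in for "emit the patch's energy into the scene's elements, then
  // gather each patch's reflected share back into its unshot energy".
  static void shootAndGather(std::vector<Patch> &patches, Patch &emitter)
  {
      for (Patch &p : patches)
          if (&p != &emitter)
              p.unshotEnergy += 0.5f * emitter.unshotEnergy / patches.size();
      emitter.unshotEnergy = 0.0f;
  }

  // Stand-in for "merge every 2x2 group of patches into one bigger patch".
  static void growPatches(std::vector<Patch> &patches)
  {
      for (Patch &p : patches) { p.sizeU *= 2; p.sizeV *= 2; }
  }

  void solve(std::vector<Patch> &patches,
             float convergence,        // stop when the brightest patch drops below this
             float adaptiveThreshold,  // grow the patches when it drops below this
             int   maxPatchSize)       // largest patch size allowed (elements per side)
  {
      for (;;)
      {
          // 1. Find the brightest source of unshot energy.
          Patch &brightest = *std::max_element(patches.begin(), patches.end(),
              [](const Patch &a, const Patch &b)
              { return a.unshotEnergy < b.unshotEnergy; });

          // 2. If even the brightest patch is below the convergence level, stop.
          if (brightest.unshotEnergy < convergence) break;

          // 3. Adaptive degradation: the energy left only adds subtle detail,
          //    so make the patches bigger (up to the limit) and go faster.
          if (brightest.unshotEnergy < adaptiveThreshold &&
              brightest.sizeU * 2 <= maxPatchSize)
          {
              growPatches(patches);
              continue;
          }

          // 4. Shoot that energy into the elements and gather the reflected
          //    portion back into the patches for the next iteration.
          shootAndGather(patches, brightest);
      }
  }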

[top]


Lighting values
If we go back to how standard lighting works, we find that we specify intensity and color in terms of RGB values, often in the range of 0 to 255. But real life has no such limitations. The amount of energy from a 100-watt light bulb is negligible when compared to the energy that the sun produces. That’s a pretty wide dynamic range.

Radiosity allows us to simulate this dynamic range, but at some point, we’ll want to display these colors on the computer, so we’ll need to temper them to something more reasonable. The human eye handles this dynamic range by adjusting the size of the iris. Film adjusts this with the f-stop. And video cameras handle this with gain.

On the computer we tend to use clamping (“saturation”). This just means that we take any value above 255 and make it 255. It turns out that this is a pretty shoddy way of doing things because an RGB color value of [650, 10000, 650] is a very green color, but if we clamp each color component individually, we get [255, 255, 255], which is white.

So FSRad offers two clamping methods, normal saturation (the shoddy kind) and saturation with color ratio retention. The latter allows us to retain the green of the light color, while still keeping it within the limits of the computer’s RGB values.
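
If you’re curious how those two methods differ, here’s a small sketch (hypothetical helper names, not FSRad’s internals) using the very green light from the example above:

  // Illustration of the two clamping modes; these are not FSRad's functions.
  #include <algorithm>
  #include <cstdio>

  struct RGB { float r, g, b; };

  // "Saturate to 0/255": clamp each component independently.  A very green
  // [650, 10000, 650] becomes plain white [255, 255, 255].
  RGB saturate(RGB c)
  {
      return { std::min(c.r, 255.0f), std::min(c.g, 255.0f), std::min(c.b, 255.0f) };
  }

  // "Saturate to 0/255 (retain color ratio)": scale the whole color down so the
  // largest component lands on 255.  [650, 10000, 650] becomes roughly
  // [16.6, 255, 16.6], which is still green.
  RGB saturateRetainRatio(RGB c)
  {
      float maxComponent = std::max({ c.r, c.g, c.b });
      if (maxComponent <= 255.0f) return c;
      float scale = 255.0f / maxComponent;
      return { c.r * scale, c.g * scale, c.b * scale };
  }

  int main()
  {
      RGB green = { 650.0f, 10000.0f, 650.0f };
      RGB a = saturate(green);
      RGB b = saturateRetainRatio(green);
      printf("saturate:      %.1f %.1f %.1f\n", a.r, a.g, a.b);
      printf("retain ratio:  %.1f %.1f %.1f\n", b.r, b.g, b.b);
      return 0;
  }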

Also, since we’re used to coloring our lights with RGB values of only 0...255, we find that those values translate to very (VERY) small fractions of light energy. So we need some form of multiplier to convert those initial RGB lighting values to something more useful for radiosity. For area lights, this number is usually pretty big (the default is a million.) You can think of this value (called the “Area Light Multiplier”) as a conversion factor from RGB values (used by the computer) to energy values (used by the simulation of light.)

Another important thing to understand about area lights in a radiosity processor is that the size of the surface that is emitting energy has a direct impact on how bright the light is. An emission surface that has twice the area of another emission surface will emit twice the energy. This is very important because this is something you’ll need to consider each and every time you create a light source for the radiosity process.
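
In rough code form (again hypothetical, and only meant to show the proportionality rather than FSRad’s exact formula), the energy an area light injects scales with both the multiplier and the emitting polygon’s area:

  // Hypothetical sketch: converting an area light's RGB color into the energy
  // it injects into the radiosity solution.  The exact formula FSRad uses may
  // differ; the point is that the result scales with both the multiplier and
  // the area of the emitting polygon.
  struct RGB { float r, g, b; };

  RGB emittedEnergy(RGB lightColor,            // the 0..255 RGB color given to the light
                    float areaLightMultiplier, // e.g. the default of one million
                    float surfaceArea)         // area of the emitting polygon, in world units
  {
      float scale = areaLightMultiplier * surfaceArea;
      return { lightColor.r * scale, lightColor.g * scale, lightColor.b * scale };
  }

  // A surface with twice the area therefore emits twice the energy, which is
  // exactly the behavior described above.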

Point lights in FSRad are treated very differently. Since they don’t exist in reality, they are faked in FSRad. Avoid using them if you can; they are only here for convenience, and don’t offer the true-to-life visual effects that area light sources do.

As an added note, because radiosity lights are polygons, it’s usually a good idea to actually model your lights into the scene. Light that comes from nowhere isn’t very realistic now, is it?

[top]


What radiosity doesn’t do
Mother Nature is one complex bitch and radiosity only simulates a portion of her effects.

In real life, photons are pretty small, so their interaction with surfaces happens at the atomic level. If you look at a small portion of your wall with an electron microscope, it will appear anything but flat, and that’s nowhere near the atomic level.

If you shine a red laser on a wall with a flat white painted surface, you’ll see a spot of red where the laser light is reflected pretty evenly back into the room (this is called “diffuse reflection”). However, if you shine your laser on a polished floor, you might be able to follow the reflection of that beam on a wall (this is called “specular reflection”). If the floor wasn’t polished very well, you might see that reflected beam appear distorted on the wall, which would be the combination of diffuse and specular reflection. Similarly, when you burn dry leaves with a magnifying glass, you see the effects of “caustics” (light that is refracted to a small area.) These are only a few of the complex effects that Mother Nature produces.

So radiosity generalizes all surfaces and makes the assumption that they are all “perfect diffuse reflectors.” In plain English, that means that radiosity assumes they reflect light equally in all directions. However, this isn’t necessarily a bad thing, because the majority of light we see is from diffuse reflection. Therefore, radiosity simulates the most common lighting effects, which is why it still produces appealing results.

[top]


Options reference
The following section covers the various options in the GUI. You should at least familiarize yourself with them all, because many of them have side-effects and changing one option might affect the performance of another.

General options

Patch subdivision factor: Specifies the grouping of elements into patches. Lower values provide better results at the cost of speed. It’s usually a good idea to keep both of these values equal (i.e. 2x2 and not 3x2).

Area light multiplier: Specifies the global intensity of all area lights, in terms of energy values. This value should usually be very large because lights are typically described in RGB values, which are usually too small for the energy values needed to fully illuminate a scene.

Point light multiplier: Similar to area light multiplier, but for point lights. Because point lights are “faked” (they don’t exist in the real world, and therefore have no analogous instance in a radiosity processor), this value doesn’t need to be quite as large as it does for the area light multiplier.

Convergence (energy): Determines when the radiosity process ends, in terms of the amount of energy being distributed. As the radiosity process progresses, more and more energy is absorbed, leaving less and less energy to be redistributed into the scene. Use this value to determine when "enough is enough". Specifically, this number means, "When the amount of energy being distributed in a single iteration becomes less than this value, stop processing." Note that the further along the process goes, the less energy is distributed during each iteration, so lower numbers here are better.

IMPORTANT: When adaptive subdivision is enabled, there is an important relationship between convergence and the adaptive subdivision threshold. Both values are based on the amount of energy being distributed on a per-iteration basis. If you set the convergence above the subdivision threshold, then the process will end before any subdivision is allowed to happen. In the reverse scenario (convergence less than the adaptive subdivision threshold), when the per-iteration energy drops below the threshold, the patches will be subdivided, and the amount of energy per iteration will go up. The process will eventually end when the adaptive subdivision reaches the maximum subdivision level and therefore can no longer subdivide, allowing the per-iteration energy to drop to the convergence level.

Limit total number of iterations: Use this checkbox (and value) to automatically stop the processing after a specified number of iterations. This is useful if you just want a quick pass and/or if you want an automated way to stop the process from taking too long.

Direct light only: Use this to illuminate the scene directly from the lights and then stop. This prevents any light from bouncing off of a surface back into the scene. This simulates how a standard ray-tracer works. Useful for quick results. If you use this for quick results, also see “Enable the use of the ambient term for estimation of unshot energy.”

Use Nusselt’s Analogy for more accurate form factors (slower): This invariably gives you better results at the expense of a 5% speed decrease for processing. It’s usually a good idea to leave this off for quick results, but turn it on for final results.

Enable the use of the ambient term for estimation of unshot energy: Use this for quick passes, when you don’t expect to run the entire process. This acts much like ambient light, in that it estimates how much energy was not redistributed and treats that as ambient light.

Exit when finished: Check this box to cause the program to terminate when the processing completes.

Leave results on screen when finished: Check this box if you want to study the statistics information (from the progress dialog) after processing completes.

Adaptive degradation: Check this to enable adaptive degradation (see the sections above for a discussion on this feature.)

Threshold: When the amount of energy being distributed in a single pass drops below this value, every 2x2 grid of patches is merged into a single patch. In other words, if the patch subdivision is set to 4x4, then the subdivision will grow to be 8x8. When this happens, all patches in the scene will be merged. Smaller values will retain the best quality and only improve speed a little, while larger values (say 0.5%) will improve speed considerably at the cost of accuracy. So use smaller values for final renders, and larger values for quick results. Note that you may need to process a scene for a few seconds to get a feel for the actual energy values before you can set this value properly.

Max size: Use these values to limit the adaptive patch subdivision and prevent it from merging patches into ones that are too big. These values specify the maximum patch size, in elements. Therefore, with an initial patch subdivision factor of 2x2, and this max size set to 8x8, the patches will only be (adaptively) merged a total of 2 times (from 2x2 to 4x4 to 8x8).

CLI Help: Get help on the command line interface

GO: Start the radiosity processing.

[top]


Database options

Input file: Duh...

Write RAW files of all lightmaps: Use this to store your lightmaps into individual files. These files will have a .RAW extension (RGB color values stored linearly in the files, 8 bits each.) In order to load these files, you’ll need to know the resolution of the lightmap, so the filename will be its index, followed by the resolution of the map itself. Use the edit box to specify the directory that these lightmaps should be placed into.
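
If you want to read these RAW files back in your own tools, a minimal loader might look something like the sketch below. It assumes you’ve already parsed the resolution out of the filename (as described above) and that the data is tightly packed 8-bit RGB with no header:

  // Minimal sketch of a .RAW lightmap loader (not part of FSRad).
  // Assumes width/height were parsed from the filename and that the file
  // is width*height RGB triplets, 8 bits per component, no header.
  #include <cstdio>
  #include <vector>

  struct RawLightmap
  {
      int width = 0, height = 0;
      std::vector<unsigned char> rgb;   // width * height * 3 bytes
  };

  bool loadRawLightmap(const char *filename, int width, int height, RawLightmap &out)
  {
      FILE *fp = fopen(filename, "rb");
      if (!fp) return false;

      out.width  = width;
      out.height = height;
      out.rgb.resize(static_cast<size_t>(width) * height * 3);

      size_t bytesRead = fread(out.rgb.data(), 1, out.rgb.size(), fp);
      fclose(fp);
      return bytesRead == out.rgb.size();
  }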

Write finished OCT file: Specify the output OCT file. OCT files include the lightmap data as well as all geometry. In the current version, note that the lights in the output OCT file will not be correct, so don’t try to use the output OCT file as the input of another radiosity process.

Default surface reflectivity (R/G/B): Use these values to specify the default reflectivity of a surface. Some file formats (such as the OCT file format) do not specify surface reflectivities, while other formats (ENT files) may. If a surface has no reflectivity specified by its file format, then it’s important that it have some value. Specify that default here.

Polygons per node (max): The scene is first built into an octree. However, in this current version, the octree is unused, so the entire scene must fit within the root node. Specify a very large number here (large enough to hold all the polygons in your largest scene, times a few hundred for good measure.)

Minimum node radius: Specifies the smallest octree node radius, in world units.

Max tree depth: Prevents the octree from growing too deep. This is important for scenes that have many overlapping polygons, which can cause infinite recursion.

Gaussian map resolution: Just use 8... it’s a good value, don’t you agree?

Fewest split search range: Just use 5. I like that number!

[top]


Lightmap options

U & V: These values specify the density of the lightmap. Specifically, they represent the size of each lightmap texel, in world coordinates. So 16x16 represents a lightmap texel that is 16 world-units wide and 16 world-units tall. Smaller values mean smaller texels, which in turn produce higher definition lightmaps, but use more memory. Note that cutting the size of both values in half (say from 16x16 to 8x8) will require exactly four times as much lightmap memory. These values have no effect on the size (resolution) of each lightmap, but do have a direct impact on the number of lightmaps produced. Finally, a side-effect to these values is that adjusting them will affect the overall scene lighting. So if you cut the size of them in half (say, from 16x16 to 8x8) you will need to adjust your lighting by doubling the multiplier (either the area light multiplier, the point light multiplier, or both.)

Width & Height: These values specify the resolution (in pixels) of the lightmaps that are produced.

[top]


Post processing options

Clamping: Specify how to treat the output energy levels when converting them to RGB values for the lightmaps. You probably don’t want to use “None” (this can produce some odd color artifacts). “Saturate to 0/255” will produce saturated lighting that may appear all washed out with very little variance, depending on the lighting used. Most will want to use “Saturate to 0/255 (retain color ratio)” because it maintains proper colors when saturating.

Gamma: Use this to adjust the gamma of the output. Since RGB values are linear (128 is half the intensity of 255) and most monitors are non-linear, gamma allows you to “remap” your RGB values to match that of your monitor. The most common value for monitor gammas is 2.2, so you should probably put that there and leave it there. However, you may play with this to help wash out the colors (higher values) or improve contrast (lower values) for specific artistic effects that you cannot achieve with proper lighting within your scene.
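
In code terms, the correction applied to each color component is the usual one (a sketch, not necessarily FSRad’s exact routine):

  // Standard gamma correction of a single 0..255 component (illustrative).
  // gamma = 2.2 maps linear lightmap values to a typical monitor's response.
  #include <cmath>

  unsigned char gammaCorrect(float linearValue, float gamma)
  {
      float normalized = linearValue / 255.0f;             // 0..1
      float corrected  = std::pow(normalized, 1.0f / gamma);
      return static_cast<unsigned char>(corrected * 255.0f + 0.5f);
  }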

Ambient: This is, by default, set to [0, 0, 0] and disabled so you can’t adjust it. It is this way because radiosity doesn’t use ambient, and if you use this, you’re a total loser and you suck donkey balls! Anyway, if you must use it (or if some total loser that sucks donkey balls set this to something and you want to turn it off), then make sure you hold down both keys CTRL and SHIFT when you switch over to the post processing tab. This only works the first time you switch to this tab, so you may need to close the application, then restart it, then hold down CTRL+SHIFT, and then click the Post processing tab. I made this tricky on purpose. If you use this at all, you have completely lost my respect and my support and I may just punch you in the nose on principle alone.


[top]


Command line automation
Here's an example:

FSRad lw=128 lh=128 infile=foo1.oct outfile=foo2.oct gamma=2.2


As you can see, it's parameter_name=value pairs. Parameter names are case sensitive and must not contain spaces (long filenames are okay, but not long filenames with spaces in them. Sorry.)

The software will first load all default parameters (as shown in the GUI normally) and then the command-line parameters will be used to override those defaults. The command-line will not modify the defaults stored in the registry -- only the UI affects what's stored in the registry.

The Commands:
lw                 Lightmap width
lh                 Lightmap height
utpu               Texels Per Unit (U)
vtpu               Texels Per Unit (V)
otthresh           Octree threshold
otmaxdepth         Octree max depth
otminradius        Octree minimum radius
reflectr           Default surface reflectivity (red)
reflectg           Default surface reflectivity (green)
reflectb           Default surface reflectivity (blue)
bspminsplit        BSP min split range
bspgaus            BSP Gaussian resolution
rawfolder          Folder for writing RAW lightmap files (enable writeraw, too)
writeraw           Set to 1/0 to enable/disable writing RAW lightmap files (set rawfolder, too)
writeoct           Set to 1/0 to enable/disable writing OCT files for final output (set octfile, too)
octfile            Filename for the output OCT file (enable writeoct, too)
infile             Input filename (OCT or ENT)
convergence        Convergence (amount of energy distributed)
usemaxiterations   Set to 1/0 to enable/disable the maximum iterations limiter (set maxiterations, too)
maxiterations      Maximum number of iterations (set usemaxiterations, too)
areamult           Area light multiplier (for ENT input files)
pointmult          Point light multiplier (for OCT input files)
subu               Initial patch subdivision (U)
subv               Initial patch subdivision (V)
useambterm         Set to 1/0 to enable/disable the use of the ambient term for treating unshot energy as ambient light
usenusselt         Set to 1/0 to enable/disable the use of the Nusselt Analogy (about 5% slower, but more accurate results; usually a good idea)
directonly         Set to 1/0 to enable/disable "direct light only" -- turning this on disables the iterative radiosity process
adaptmaxu          Set the maximum horizontal (U) size for adaptive patch subdivision (set useadapt, too)
adaptmaxv          Set the maximum vertical (V) size for adaptive patch subdivision (set useadapt, too)
useadapt           Set to 1/0 to enable/disable the adaptive patch subdivision (set adaptmaxu and adaptmaxv, too)
adaptthresh        Set the adaptive patch threshold (set useadapt, too)
gamma              Gamma correction (most commonly should be set to 2.2)
ambr               Standard ambient light (red) - if you use this, you are LAME!
ambg               Standard ambient light (green) - if you use this, you are LAME!
ambb               Standard ambient light (blue) - if you use this, you are LAME!
clampretain        Set clamping to retain color ratio - recommended
clampsaturate      Set clamping to saturation (simply clamps to black/white)
clampnone          Set clamping to none (may cause strange color-wrap-around effects)
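
For example, a more complete command line for a final-quality pass over an ENT file might look something like this (the parameter values are only illustrative):

FSRad infile=level.ent writeoct=1 octfile=level.oct lw=256 lh=256 usenusselt=1 useadapt=1 adaptmaxu=8 adaptmaxv=8 gamma=2.2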


[top]


Quick-start guide - jumping in and getting results
For the impatient or "just curious" folks:

If you look in this distribution package, you should find some example ENT files. Simply run those files through the FSRad program.

Doing this is simple enough: just visit the Database tab (in the GUI) and select the input ENT file from the examples, then enable the option "Write finished OCT file" and specify a name. When processing completes, the OCT file will contain the resulting geometry and lightmaps.

The OCT file can then be loaded by the Viewer program (also included.) In the viewer, use the mouse and buttons to navigate around.

[top]


Get up and running quickly with FSRad in your own code
If you want to make use of the results, here's the best way to do so:

  1. Hack the code in GeomDB.* to read/write your own file format.


- or -

  1. Convert your geometry into an ENT file (see GeomDB.* for information). Although the processor can read OCT files, ENT files are better because they contain materials that allow entire polygons to be emitters, rather than simple point lights.
  2. Run the FSRad tool on the file
  3. Have the tool write out an OCT file (contains the geometry and lightmaps along with the newly remapped polygons and UVs) -- note that the processor may need to split polygons that extend past a lightmap boundary. So the output geometry MAY NOT be the same as the input!
  4. Convert the OCT file back to your own file format.


Everything you need to compile/run it is here... To compile the project, go into Source/FSRad and double-click on the DSW file. The Run folder contains compiled binary files. When you compile the program, look for it there.

[top]


About the source code
There are three major pieces of code here:

  • FSTL - Fluid Studios Template Library
  • GEOM - 3D math & geometry library
  • FSRAD - Radiosity processor


FSTL is my own miniature version of the STL. For the most part, it's faster than the standard STL (with the exception of reference counting) and is more compact. There are a number of interface differences, but for the most part, they follow the same interface construct. There are no compiled files for this, only inline code in the headers. As a side-note, this is a ~really~ cool template library. ;)

GEOM is a 3D math and geometry library. This compiles into a library.

FSRad is the actual program. It uses the FSTL (headers) and the GEOM (headers and lib) to compile. All project files are there.

If you want a starting point, go into RadGen.cpp and look at the go() routine -- this routine is the main controller of the radiosity process.

[top]


Interested in taking a course on radiosity online?
I'm most likely going to be starting on a course (or a few) at [GameInstitute.com] that covers radiosity. This/these course(s) will discuss standard techniques, as well as the new innovations exposed by this project. If you want to know more or to be informed when the course becomes available, be sure to drop me an email at [midnight@FluidStudios.com] and include GI Course Notification in the subject, and I'll be sure to save your request and notify you when the time comes.

On a more personal note...

Those of you who know me know that, when not confined to an NDA, I have always shared as much information as I can. So why am I advertising here that you should pay for this information by signing up for an online course? Well, hear me out, please...

When [GameInstitute.com] and I first talked about this possibility, I expressed that I had every intention of freely distributing the source code and producing a few documents that explained the work. I mentioned this because of the obvious conflict of interest that it raises. Joe Meeneghan (CEO of GI, and an all-around-great-guy) was very open and willing to discuss options.

As it turns out, the conflict is minimal.

The documents I planned on producing would cover very specific topics, as they applied to this software. However, the [GameInstitute.com] courses would include many different techniques (including the more common hemicube approach, for example.) The course would include these techniques, along with implementations of them, and also cover the finer implementation details that I don't typically cover in my more general documents. Also, those that signed up for the course would find added value in my personal attention.

All in all, the [GameInstitute.com] course could be summed up as an advanced extension to the documents I typically produce, with a much finer level of information detail.

[top]


Special thanks and credits
I'm pretty lucky to be able to release this... you owe a bit of gratitude, too, and here's why:

This project was contracted by Jason Zisk of nFusion Interactive, LLC. Jason and nFusion were kind enough to allow me to maintain rights to the code, so that I could do with it what I wanted to... I chose to release it publicly. How many companies do you know that are _that_ cool? :)

You can visit their website here: [http://www.n-Fusion.com/]

This radiosity processor was initially used for processing the radiosity in the buildings of the game Deadly Dozen. It will probably be used for future nFusion games, and maybe others, thanks to their generosity and willingness to share.

Since the initial release of this tool, I've had help from others. Here's a list of those kind individuals that have helped out. I'm sure this list will grow with future releases...

  • Hadrien Nilsson [hadrien.nilsson@webmails.com] - for taking the time to point out a few bugs with very specific examples so they could be fixed quickly and efficiently.

[top]


Copyright and legal information
Everything you see in this distribution is Copyright (©) 2001, Paul Nettle and Fluid Studios. All rights are reserved.

This software is free for private and personal use. Use at your own risk, Yadda Yadda.

This software is NOT free for commercial use. Commercial users must pay for it with credit (i.e. credit to the author in your readme file or someplace.) Also, please send an email to me at: [midnight@FluidStudios.com] so I can track the software's usage.

Further copyright information will be found in all the source files.

[top]


© 2001 Paul Nettle & Fluid Studios. All rights reserved.