RAYTRACING AND POV


Eduardo Llaguno

ellaguno@softtek.com


The rendering process known as raytracing is based on a simulation of physical reality in which opaque and translucent objects and rays of light exist. Translucent objects can be simulated with different indices of refraction. The rays of light are emitted by mathematically described light sources, and, as in any other 3D image generation algorithm, a virtual camera must be defined.

The rays of light are followed along their paths until they "crash" into (or intersect) an object in the scene. At that moment, two new rays are calculated, taking into account the color of the incident ray of light, the normal vector of the surface at the intersection point, and the physical properties of the object's surface. These are called the reflected and the refracted rays (the latter, like rays entering water from air, are "bent"), and both their color and direction are calculated.
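
As a rough illustration of this step, the following sketch computes the directions of the two secondary rays from the incident direction and the surface normal; the function names and the vector arithmetic are our own assumptions, not taken from any particular renderer, and the refraction formula is the standard vector form of Snell's law.

    import math

    def reflect(d, n):
        # Reflected direction: d - 2(d . n)n, where d is the unit
        # incident direction and n is the unit surface normal.
        dot = sum(di * ni for di, ni in zip(d, n))
        return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

    def refract(d, n, eta):
        # Refracted direction via the vector form of Snell's law;
        # eta is the ratio of the two indices of refraction (n1/n2).
        cos_i = -sum(di * ni for di, ni in zip(d, n))
        k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
        if k < 0.0:
            return None  # total internal reflection: no refracted ray
        return tuple(eta * di + (eta * cos_i - math.sqrt(k)) * ni
                     for di, ni in zip(d, n))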

However, the final simulation of reflection and refraction is built up in the reverse order from how it happens in nature, because otherwise it becomes prohibitively expensive in terms of computing time, even for the simplest image one could imagine. Once a ray of light leaves a light source, we can calculate its collisions (intersections) with the objects in the scene and then continue this process with the reflected rays until one of them hits our virtual camera. Obviously, if we proceed this way, very few (if any) of the emitted rays will ever enter the virtual camera, and because of this the process is carried out inversely: it is taken as a fact that some ray must enter through each pixel of the rendered image, and the object that ray is coming from is calculated backwards. When a surface is found, the reflected and refracted rays at that point are traced in turn, and their colors are combined with that of the original ray according to the properties of the surface and the light sources defined in the scene.
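
A minimal sketch of this backward process appears below. The scene representation and the helpers closest_hit and surface_color are hypothetical names of ours, standing in for the intersection search and the local lighting calculation; the recursion reuses reflect and refract from above.

    MAX_DEPTH = 5  # crude recursion cutoff; see the ray-tree discussion below

    def mix(a, b, w):
        # Linear blend of two RGB triples by weight w in [0, 1].
        return tuple((1 - w) * x + w * y for x, y in zip(a, b))

    def trace(origin, direction, scene, depth=0):
        # Follow one ray backwards from the camera into the scene.
        hit = closest_hit(origin, direction, scene)  # assumed helper
        if hit is None or depth >= MAX_DEPTH:
            return scene.background_color
        # Direct lighting from the light sources defined in the scene.
        color = surface_color(hit, scene.lights)     # assumed helper
        # Combine in the two secondary rays according to the surface.
        if hit.reflectivity > 0:
            r = reflect(direction, hit.normal)
            color = mix(color, trace(hit.point, r, scene, depth + 1),
                        hit.reflectivity)
        if hit.transparency > 0:
            t = refract(direction, hit.normal, hit.eta)
            if t is not None:
                color = mix(color, trace(hit.point, t, scene, depth + 1),
                            hit.transparency)
        return color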

Even with the inverse process, the raytracing algorithm consumes a lot of time calculating one image (it is common to wait several hours for just a single image). That is why several optimization schemes exist. The most basic one uses the adaptive tree: starting from one of the rays that enters the camera, we associate in a tree diagram the rays that are "generated", in the inverse manner, each time an intersected surface is found. Continuing recursively with those rays, we obtain a ray tree in which the deepest rays contribute the least color to the final value of the associated pixel. A crude solution is to cut the tree at a certain height; a more elegant and precise solution is to prune branches (that is, to stop calculating the associated rays) that would contribute only a small percentage of color to the final result.
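
The more precise pruning can be sketched by carrying, in each recursive call, the weight with which that branch will be mixed into the pixel, and refusing to trace branches whose weight has become negligible. The threshold value and the structure (which reuses the assumed helpers from the sketch above) are our own choices.

    MIN_WEIGHT = 0.01  # assumed cutoff: prune branches below a 1% contribution

    def trace_pruned(origin, direction, scene, depth=0, weight=1.0):
        # 'weight' is the fraction of the pixel's final color this branch
        # can still contribute; it shrinks at every reflection bounce.
        if depth >= MAX_DEPTH or weight < MIN_WEIGHT:
            return scene.background_color  # prune this branch of the ray tree
        hit = closest_hit(origin, direction, scene)
        if hit is None:
            return scene.background_color
        color = surface_color(hit, scene.lights)
        if hit.reflectivity > 0:
            r = reflect(direction, hit.normal)
            color = mix(color,
                        trace_pruned(hit.point, r, scene, depth + 1,
                                     weight * hit.reflectivity),
                        hit.reflectivity)
        return color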

Another widely used optimization technique has to do with assigning a bounding box to each object, to avoid complex intersection calculations for rays that pass far from the object's surface. So the first thing you calculate is whether a ray intersects the box; if that intersection does not exist, you may assume the ray does not intersect the object anywhere. If the ray does intersect the box, you have to carry out all the necessary calculations with all the surfaces of the object. The proper placement and construction of these regions is very important: they should be as small as possible while still enclosing the whole object associated with them.
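
One common way to implement the box test (our choice here; POV's internal scheme may differ) is the so-called slab method, which clips the ray against the three pairs of axis-aligned planes bounding the box:

    def hits_box(origin, direction, box_min, box_max):
        # Slab method: the ray hits the box only if its entry/exit
        # intervals along the three axes overlap.
        t_near, t_far = float("-inf"), float("inf")
        for o, d, lo, hi in zip(origin, direction, box_min, box_max):
            if abs(d) < 1e-12:
                if o < lo or o > hi:
                    return False  # parallel to this slab and outside it
            else:
                t1, t2 = (lo - o) / d, (hi - o) / d
                if t1 > t2:
                    t1, t2 = t2, t1
                t_near, t_far = max(t_near, t1), min(t_far, t2)
                if t_near > t_far:
                    return False  # the intervals no longer overlap
        return t_far >= 0  # the box is in front of the ray, not behind it

Only when hits_box returns True does the renderer go on to intersect the ray with the object's actual surfaces.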

A common problem in the generation of computer images is the creation of "jaggies": diagonal lines take on a staircase look, and the same happens along the edges where two objects meet on lines that are neither vertical nor horizontal. This artifact is also called aliasing, and it arises simply because of the limited size of the pixels on a computer monitor. The process used to reduce this problem in raytracing is called supersampling: instead of calculating only one ray per pixel of the image, you calculate several, typically subdividing the pixel into nine regions on a three-by-three grid (considering that the pixel really is square), and finally taking the average of the colors returned by the rays as the color of the pixel.
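
In sketch form (the three-by-three grid matches the description above; trace_pixel_ray is an assumed helper that fires one ray through the given image coordinates and returns an RGB triple):

    def supersample(px, py, trace_pixel_ray):
        # Fire one ray through the center of each cell of a 3x3 grid
        # inside pixel (px, py) and average the nine resulting colors.
        colors = []
        for i in range(3):
            for j in range(3):
                x = px + (i + 0.5) / 3.0
                y = py + (j + 0.5) / 3.0
                colors.append(trace_pixel_ray(x, y))
        n = len(colors)
        return tuple(sum(c[k] for c in colors) / n for k in range(3))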

However, under certain conditions this technique does not solve all the aliasing problems, so a refinement of the method called stochastic supersampling is used: instead of sending the nine rays through a predefined grid inside the pixel, they are sent through randomly chosen points inside the pixel.
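
Changing the sketch above to its stochastic variant is a small difference: each ray simply goes through a random point inside the pixel instead of a fixed grid cell (plain uniform random points here; jittering each grid cell is another common variant).

    import random

    def supersample_stochastic(px, py, trace_pixel_ray, n=9):
        # Same averaging as before, but through n random points.
        colors = [trace_pixel_ray(px + random.random(),
                                  py + random.random())
                  for _ in range(n)]
        return tuple(sum(c[k] for c in colors) / n for k in range(3))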

These antialiasing techniques obviously introduce the need for more calculations, so a further optimization exists: calculate the color of a ray that intersects the pixel at its center and of four more rays at the corners of the pixel. If the colors of these rays do not differ much, their average is taken; otherwise, the pixel is subdivided into regions and more rays are calculated. This technique is called adaptive supersampling.
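
A recursive sketch of that idea follows; the color-difference threshold and the maximum subdivision depth are arbitrary values of ours.

    def adaptive_sample(x, y, size, trace_pixel_ray,
                        threshold=0.05, depth=2):
        # Sample the center and the four corners of a square region.
        center = trace_pixel_ray(x + size / 2, y + size / 2)
        corners = [trace_pixel_ray(x + dx, y + dy)
                   for dx in (0, size) for dy in (0, size)]
        spread = max(abs(a - b) for c in corners
                     for a, b in zip(center, c))
        if spread < threshold or depth == 0:
            # The five colors agree closely enough: average and stop.
            samples = [center] + corners
            return tuple(sum(s[k] for s in samples) / 5 for k in range(3))
        # Otherwise subdivide into four quadrants and recurse on each.
        half = size / 2
        quads = [adaptive_sample(x + dx, y + dy, half, trace_pixel_ray,
                                 threshold, depth - 1)
                 for dx in (0, half) for dy in (0, half)]
        return tuple(sum(q[k] for q in quads) / 4 for k in range(3))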

Among the effects that plain raytracing is not capable of simulating are the dispersion of light (when it breaks up into several colors, as in a prism) and diffuse ambient light effects (such as a colored light tinting a white wall, or a wall that is not a perfect reflector spreading light around a room). On the other hand, effects such as mirror images and the refraction of light in translucent objects come out very realistic.

The program called "Persistence of Vision Ray Tracer" (POV-Ray) is freeware that allows the creation of remarkably realistic scenes, better than those of many commercial packages, using raytracing techniques (combined with other techniques to enhance the realism). Scenes are generated from files that describe them, written in POV-Ray's own scene description language in any text editor and saved as ASCII files. If you want to build a scene graphically, you can use a modeling tool such as MORAY (shareware), which lets you compose the scene in a graphical environment (placing and editing objects, assigning lights and cameras). Another way to generate these scenes is to model in a package such as 3D Studio and use a converter called "3ds2pov", so that the scene is converted to a POV file.
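
To give a flavor of the scene description language, a minimal scene might look like the following; this is a hypothetical example of ours (a camera, one white light source and a shiny red sphere), not taken from POV's sample files.

    camera {
      location <0, 1, -5>    // position of the virtual camera
      look_at  <0, 0, 0>     // point the camera aims at
    }

    light_source {
      <10, 10, -10>          // position of the light
      color rgb <1, 1, 1>    // white light
    }

    sphere {
      <0, 0, 0>, 1           // center and radius
      texture {
        pigment { color rgb <1, 0, 0> }       // red surface
        finish  { phong 0.8 reflection 0.3 }  // shiny, partly mirror-like
      }
    }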

Even though this software is oriented toward creating still images, with adequate manipulation you can generate a sequence of images to build an animation. The final result is rendered with shadows, perspective, refractions and reflections of light that come very close to perfection. And when you start to use POV you do not start from scratch: it includes several sample scenes that can be studied and used as the starting point for new ones.

The technique used is of great fidelity, but even so the process of generating one image is very slow. The code was heavily optimized from version 1 to version 2, which now has the facility of automatically assigning bounding boxes, besides direct improvements to the existing code.

POV has become a standard for many artists, who continually generate new scenes and make them available to the public through the Internet and other electronic information services.


©All rights reserved, any copying or duplication of this material is prohibited without written authorization by its author.

