
The General Process of 3D Rendering


No single process can describe the variety of 3D rendering programs out there. My objective here is to explain several abstract rendering processes, with an emphasis on traditional polygon-based rendering.


1) Conventional Mesh-Based Rendering


Create Basic Model Elements: 

This means building simple 3D geometric shapes from primitives. These shapes can be spheres, cones, cubes, or any other basic 3D shape. The primitives (triangles, other polygons, or even some 3D elements) are manipulated through a fundamental graphics API, such as OpenGL or QuickDraw 3D, to form polygon meshes. To render curved objects, a spline mesh is used instead. Modeling is the first step to rendering objects in 3D. At the modeling stage, the system performs little calculation beyond simple geometric manipulation; surface textures and lighting are not yet rendered.
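As a concrete illustration, here is a minimal sketch of primitive construction using the legacy OpenGL immediate-mode API. It assumes a valid OpenGL context has already been created elsewhere (for example, through GLUT); the vertex values are arbitrary.

```cpp
#include <GL/gl.h>

// Build one triangle primitive in immediate mode.
// Assumes a current OpenGL context; a real mesh repeats
// this for every face (or uses vertex arrays instead).
void drawTriangle() {
    glBegin(GL_TRIANGLES);
    glVertex3f(-1.0f, 0.0f, 0.0f);  // left corner
    glVertex3f( 1.0f, 0.0f, 0.0f);  // right corner
    glVertex3f( 0.0f, 1.0f, 0.0f);  // apex
    glEnd();
}
```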


Assemble Complex Objects:  

The basic shapes are arranged to form a complex object, such as a hand, a car, or a chicken. The graphics API allows for spatial translations and geometric rearrangement using 3D matrices.
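In OpenGL, this kind of assembly is typically done with the modelview matrix stack. The sketch below positions two parts of a crude hand; drawPalm and drawFinger are hypothetical drawing routines (built from primitives as in the previous sketch), not real API calls.

```cpp
#include <GL/gl.h>

// Hypothetical part-drawing routines.
void drawPalm();
void drawFinger();

// Assemble a crude "hand" by positioning parts with
// 4x4 matrix transforms on the modelview stack.
void drawHand() {
    glPushMatrix();                       // save the current transform
    drawPalm();
    glTranslatef(0.0f, 1.0f, 0.0f);       // move up to the finger
    glRotatef(15.0f, 0.0f, 0.0f, 1.0f);   // tilt it slightly
    drawFinger();
    glPopMatrix();                        // restore the saved transform
}
```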


Arrange Objects in a 3D Scene:

A solitary object is not that interesting. Putting the complex object into a 3D environment composed of other basic model elements is often more desirable. Again, matrix transforms can be applied.
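Under the hood, placing an object in the scene amounts to multiplying its vertices by a 4x4 homogeneous matrix. Here is a self-contained sketch of that operation; the matrix values are arbitrary.

```cpp
#include <cstdio>

// Multiply a 3D point (as a homogeneous column vector) by a
// 4x4 matrix stored in row-major order.
void transformPoint(const float m[16], const float p[3], float out[3]) {
    float v[4] = {p[0], p[1], p[2], 1.0f};  // homogeneous coordinate w = 1
    float r[4];
    for (int i = 0; i < 4; ++i) {
        r[i] = 0.0f;
        for (int j = 0; j < 4; ++j)
            r[i] += m[i * 4 + j] * v[j];
    }
    out[0] = r[0]; out[1] = r[1]; out[2] = r[2];
}

int main() {
    // A translation by (2, 0, -5): identity with offsets in the last column.
    const float translate[16] = {
        1, 0, 0,  2,
        0, 1, 0,  0,
        0, 0, 1, -5,
        0, 0, 0,  1,
    };
    float p[3] = {1.0f, 1.0f, 0.0f}, q[3];
    transformPoint(translate, p, q);
    std::printf("(%g, %g, %g)\n", q[0], q[1], q[2]);  // prints (3, 1, -5)
    return 0;
}
```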


Composition:

Once the environment is set, the relative size and location of the object in the x, y, and z directions need to be set. This is done with 3D scaling and shearing along the different axes.
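Scaling and shearing are also plain 4x4 matrices, usable with the transformPoint helper sketched above; the values here are arbitrary.

```cpp
// Non-uniform scale: stretch x by 2, leave y and z alone.
const float scale[16] = {
    2, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    0, 0, 0, 1,
};

// Shear: each unit of y also pushes the point 0.5 units along x.
const float shearXY[16] = {
    1, 0.5f, 0, 0,
    0, 1,    0, 0,
    0, 0,    1, 0,
    0, 0,    0, 1,
};
```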


Choose Material:  

The graphics API supports different textures and materials, allowing the renderer to set color, roughness, surface bumps, and so on. The API takes care of texture mapping, the application of 2D paint elements or pictures to 3D surfaces. This is performed either as a 2D sheet wrapped over the 3D object or as a paint function in true 3D.
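A hedged sketch of what this looks like in legacy OpenGL: glMaterialfv sets the surface reflectance, and glTexCoord2f pins a corner of a 2D image to each vertex. Texture creation and image loading are assumed to happen elsewhere.

```cpp
#include <GL/gl.h>

// Give a quad a dull red material and wrap a 2D texture over it.
// Assumes a current GL context and a texture object already
// created and loaded elsewhere.
void drawTexturedQuad(GLuint textureId) {
    const GLfloat red[4] = {0.8f, 0.1f, 0.1f, 1.0f};
    glMaterialfv(GL_FRONT, GL_AMBIENT_AND_DIFFUSE, red);

    glBindTexture(GL_TEXTURE_2D, textureId);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
    glEnd();
}
```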


Arrange Lighting Conditions: 

All objects in the physical world interact with light, creating bright and dark areas. The lighting transform is one of the most critical and calculation-intensive areas of rendering. The first step of lighting is to choose among ambient light, localized point or area sources, and directional (distant) lights. These three lighting conditions can drastically affect the 3D scene.
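In legacy OpenGL the distinction between these light types is compact: the w component of a light's position selects between a point source (w = 1) and a directional source (w = 0), while the ambient term is set separately. A minimal sketch, assuming a current GL context:

```cpp
#include <GL/gl.h>

void setupLights() {
    glEnable(GL_LIGHTING);

    // Low-level ambient light that fills the whole scene.
    const GLfloat ambient[4] = {0.2f, 0.2f, 0.2f, 1.0f};
    glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambient);

    // Point source at (0, 5, 0): w = 1 means a position.
    const GLfloat pointPos[4] = {0.0f, 5.0f, 0.0f, 1.0f};
    glLightfv(GL_LIGHT0, GL_POSITION, pointPos);
    glEnable(GL_LIGHT0);

    // Directional source shining along -z: w = 0 means a direction.
    const GLfloat dir[4] = {0.0f, 0.0f, 1.0f, 0.0f};
    glLightfv(GL_LIGHT1, GL_POSITION, dir);
    glEnable(GL_LIGHT1);
}
```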


Set viewpoints: 

Once the environment is fully composed and ready, a viewpoint is chosen for the rendering. This process involves a matrix transformation of all of the elements in the scene. The interface for such a transform is usually a virtual camera with a line of sight that the renderer can manipulate.
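The virtual camera is usually exposed as a single call that builds the viewing matrix. In the GLU utility library that call is gluLookAt, which takes an eye point, a target point, and an up vector; the values here are arbitrary.

```cpp
#include <GL/gl.h>
#include <GL/glu.h>

// Place the camera at (0, 2, 10), aim it at the origin, and
// keep +y as "up". This multiplies the viewing transform onto
// the current modelview matrix.
void setCamera() {
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 2.0, 10.0,   // eye position
              0.0, 0.0, 0.0,    // look-at target
              0.0, 1.0, 0.0);   // up direction
}
```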


Computational Rendering: 

The most basic rendering is wireframe-based, in which the engine renders only the wireframe. This is of course very limited, because wireframes are not that pretty and all of the earlier textural and lighting work would go to waste. Physical rendering compiles all of the spatial, textural, and luminous data and renders it through a higher-level API or rendering engine. Such engines are usually based on light and ambient reflections. Two of the most popular methods are ray tracing and radiosity. A ray tracer casts synthetic rays of light into the 3D model; any point of reflection is caught and rendered as part of the object. The quality of the rendered object thus depends on the number of sampling rays. Radiosity takes a different approach, calculating the amount of light and thermal energy transferred between surfaces in a 3D environment to determine how each surface appears. Radiosity is thus more efficient for static, diffuse scenes, but less accurate for sharp reflections.
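To make the ray-tracing idea concrete, here is a minimal self-contained sketch: one ray per pixel is tested against a single sphere, and hits are printed as ASCII. A real ray tracer adds reflection, shadow, and refraction rays; all values here are arbitrary.

```cpp
#include <cmath>
#include <cstdio>

// Does a ray from `o` along unit direction `d` hit a sphere of
// radius r centered at `c`? Standard quadratic discriminant test.
bool hitSphere(const float o[3], const float d[3],
               const float c[3], float r) {
    float oc[3] = {o[0] - c[0], o[1] - c[1], o[2] - c[2]};
    float b  = 2.0f * (oc[0]*d[0] + oc[1]*d[1] + oc[2]*d[2]);
    float cc = oc[0]*oc[0] + oc[1]*oc[1] + oc[2]*oc[2] - r*r;
    return b*b - 4.0f*cc >= 0.0f;   // discriminant (a = 1 for unit d)
}

int main() {
    const float center[3] = {0, 0, -5}, eye[3] = {0, 0, 0};
    // Shoot one ray through each cell of a tiny 20x20 "screen".
    for (int y = 0; y < 20; ++y) {
        for (int x = 0; x < 20; ++x) {
            float d[3] = {(x - 10) / 10.0f, (y - 10) / 10.0f, -1.0f};
            float len = std::sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
            d[0] /= len; d[1] /= len; d[2] /= len;
            std::putchar(hitSphere(eye, d, center, 2.0f) ? '#' : '.');
        }
        std::putchar('\n');
    }
    return 0;
}
```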


2) Voxel-Based 3D Volumetric Sculpting:

This is a relatively new alternative to traditional polygon-based modeling. A voxel (volume pixel) is essentially a tiny image object with basic texture, color, and behavior characteristics. When massive fields of such "particle" graphic objects are arranged in a volume of space, a composite 3D design forms. Each voxel thus exists in space as an independent unit, as opposed to being part of a polygon. This modeling process can create extremely realistic terrain with rich texture, at the cost of computational efficiency and visual clarity. However, with the continued advancement of computer technology and the introduction of hybrid voxel/polygon modeling, voxels are becoming an attractive alternative to traditional polygon-based modeling.
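A minimal sketch of the underlying data structure: a dense 3D grid in which each cell stores occupancy and a color. The grid dimensions and the crude terrain rule used to fill it are arbitrary assumptions for illustration.

```cpp
#include <vector>
#include <cstdint>

// One volume element: a color plus an "is anything here" flag.
struct Voxel {
    std::uint8_t r = 0, g = 0, b = 0;
    bool solid = false;
};

// Dense N x N x N voxel volume with flat storage.
class VoxelGrid {
public:
    explicit VoxelGrid(int n) : n_(n), cells_(n * n * n) {}
    Voxel& at(int x, int y, int z) { return cells_[(z * n_ + y) * n_ + x]; }
    int size() const { return n_; }
private:
    int n_;
    std::vector<Voxel> cells_;
};

int main() {
    VoxelGrid grid(32);
    // Fill the bottom of the volume with a crude "terrain" slab:
    // every voxel below a fixed height becomes solid green.
    for (int z = 0; z < grid.size(); ++z)
        for (int y = 0; y < 8; ++y)
            for (int x = 0; x < grid.size(); ++x) {
                Voxel& v = grid.at(x, y, z);
                v.solid = true;
                v.g = 200;
            }
    return 0;
}
```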


Voxel-based terrain as seen in NovaLogic's Delta Force


3) Fractal Modeling:

Fractals are mathematical representations of chaotic series: complex, unpredictable series that nonetheless exhibit a set pattern. Fractals express the chaotic characteristics of self-similarity and the master-instance relationship, which means a fractal pattern is built by constant iteration of the same shape. This is a very convenient and powerful method for modeling natural phenomena and objects, as nature frequently exhibits fractal traits. Take, for example, a tree. Its branches and leaves divide in predictable patterns, patterns that can be defined by one or more fractal relationships augmented by random number generators. This means that the data for a whole tree, and indeed a whole forest, can be expressed in a single iterative expression.
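A minimal self-contained sketch of the idea: a 2D branching rule applied recursively, with each branch spawning two smaller copies of itself. The branch angle, shrink factor, and depth are arbitrary parameters; perturbing them with a random number generator is what makes each generated tree unique.

```cpp
#include <cmath>
#include <cstdio>

// Recursively emit the line segments of a self-similar tree.
// Each branch spawns two children, rotated +/- `spread` radians
// and scaled by `shrink`, until `depth` reaches zero.
void branch(float x, float y, float angle, float length, int depth) {
    if (depth == 0) return;
    float x2 = x + length * std::cos(angle);
    float y2 = y + length * std::sin(angle);
    std::printf("segment (%.2f, %.2f) -> (%.2f, %.2f)\n", x, y, x2, y2);

    const float spread = 0.5f;   // ~29 degrees between children
    const float shrink = 0.7f;   // children are 70% as long
    branch(x2, y2, angle + spread, length * shrink, depth - 1);
    branch(x2, y2, angle - spread, length * shrink, depth - 1);
}

int main() {
    // Trunk pointing straight up; 2^6 - 1 = 63 segments total.
    branch(0.0f, 0.0f, 3.14159265f / 2.0f, 1.0f, 6);
    return 0;
}
```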

A Pretty 3D Fractal Tree

Another 3D Fractal Tree

Now that's a tree! This was a real-time fractal rendering done by Nvidia to wow the crowd at the GeForce 256 launch at SIGGRAPH 99. The file is gigantic, but certainly worth the wait.

Also check out this neat 2D Fractal Tree Applet

