Displaying 3D polygon animations
Application programming interface, eg Direct3D, OpenGL
An API is a set of programming instructions, tools and standards that allows one piece of software to communicate with another. The developers of an API release it to the public so that other software developers can design products powered by its service. In terms of 3D graphics and games, programmers use APIs such as Direct3D and OpenGL to communicate with products such as the Unreal Engine or the Unity Engine, which in turn use the API to drive the graphics hardware. Using the tools, commands and coding instructions provided by the API, they can feed sequences and instructions to the game engine to determine what should happen and when. The result is the ability for a user to interact with and control the 3D game world, as the instructions from the game player and the computer are passed back and forth with designated results.
Graphics pipeline
The graphics pipeline refers to the sequence of steps used by the computer to create a 2D raster representation of a 3D scene. Once a 3D model has been created, the graphics pipeline is the process of turning that 3D model into what the computer displays on screen. Since the viewer is looking at the graphics on a flat 2D plane (the TV screen), the illusion of 3D is created by a constantly updating sequence of 2D images. The result is movement, animation and the belief that models can move ‘into’ and ‘out of’ the on-screen playing field.
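As a minimal sketch of the final projection step, the Python below maps a 3D point to a 2D pixel position, assuming a camera at the origin looking down the negative Z axis. Every name and number here is illustrative rather than taken from any particular engine.

```python
# A minimal sketch of the projection step of the graphics pipeline,
# assuming a camera at the origin looking down -Z. Names and screen
# dimensions are illustrative only.

def project(x, y, z, screen_w=640, screen_h=480, focal=1.0):
    """Project a 3D point onto a 2D screen (perspective divide)."""
    if z >= 0:
        return None              # points behind the camera cannot be drawn
    # Perspective divide: distant points move toward the centre.
    ndc_x = (focal * x) / -z
    ndc_y = (focal * y) / -z
    # Map from normalised device coordinates (-1..1) to pixel positions.
    px = int((ndc_x + 1) * 0.5 * screen_w)
    py = int((1 - (ndc_y + 1) * 0.5) * screen_h)  # flip Y for raster space
    return px, py

print(project(1.0, 1.0, -2.0))   # a nearby point
print(project(1.0, 1.0, -10.0))  # the same point further away, nearer the centre
```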
There are a number of steps that the computer must go through to interpret and recreate the stored model on the screen, including modelling, texturing and lighting.
Other factors must also be taken into account and recreated in accordance with where the 3D model is appearing at that particular moment. Depending on the light source and camera in the scene, additional steps can include viewing, projection, clipping (removing whatever falls outside the visible view, or handling a model overlapping or intersecting another model sharing the same space on screen), shading and display.
Rendering techniques
Radiosity is a method used by computer programs and 3D modellers to realistically interpret the way that light diffuses. This can vary according to the intensity, radiation and position of a light source, along with the surface that it is shining onto. The light emanating from a lightbulb in a room will gently diffuse as it gets further from the source. Radiosity in computer terms is an attempt to accurately replicate these natural occurrences.
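The physical rule behind that diffusion is the inverse-square law, which radiosity methods approximate. A small sketch with invented figures:

```python
# A small sketch of light falloff: a point light's power spreads over
# the surface of an ever-larger sphere as distance grows (the
# inverse-square law). The power value here is illustrative.
import math

def intensity_at(power, distance):
    """Inverse-square law: power spread over a sphere's surface area."""
    return power / (4 * math.pi * distance ** 2)

for d in (1.0, 2.0, 4.0):
    print(f"distance {d}: intensity {intensity_at(100.0, d):.2f}")
# Doubling the distance quarters the intensity.
```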
Ray tracing is an attempt by the computer to accurately simulate how light hits an object based on where the viewer is watching from. The idea of a view ray is based on a straight line from the camera’s viewfinder to the object. The light source can be illuminating an object from any point. These two factors combine to affect how light bounces and diffuses when hitting the object. The word tracing refers to an attempt to trace the path of light over and around the object and interpret where highlights and shadows would occur. ‘Ray’ means a ray of light, and ‘trace’ means following a path from a given start point to an end point; so the term ray tracing reads as ‘following a path of light’.
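The core of a ray tracer is an intersection test repeated for every view ray. Below is a minimal Python sketch of the classic ray-sphere test; the scene and all names are invented for illustration.

```python
# A minimal view-ray sketch: does a ray from the camera hit a sphere?
# This is the core test a ray tracer repeats for every pixel.
import math

def ray_hits_sphere(origin, direction, centre, radius):
    """Solve |origin + t*direction - centre|^2 = radius^2 for t."""
    ox, oy, oz = (origin[i] - centre[i] for i in range(3))
    dx, dy, dz = direction
    a = dx*dx + dy*dy + dz*dz
    b = 2 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4*a*c
    if disc < 0:
        return None                    # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2*a)
    return t if t > 0 else None        # distance along the ray to the hit

# Camera at the origin, looking down -Z at a sphere 5 units away.
print(ray_hits_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # ~4.0, a hit
print(ray_hits_sphere((0, 0, 0), (0, 1, 0), (0, 0, -5), 1.0))   # None, a miss
```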
Rendering engines
Rendering is the process of generating an image from a model (or models in what collectively could be called a scene file) by means of computer programs. A scene file contains objects in a data structure; this data structure can include factors relating to geometry, viewpoint, texture, lighting and shading information. Combined, these ‘describe’ a virtual scene.
The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file.
Rendering for interactive media, such as games and simulations, often uses a rendering engine. These engines calculate and display scene file descriptions in real time, usually at rates of approximately 20 to 120 frames per second.
In real-time rendering, the goal is to show as much information as the eye can process in a 30th of a second (or one frame, in the case of 30 frames-per-second animation). The goal here is primarily speed, not photo-realism.
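That time budget is easy to put into numbers. Using the 20 to 120 frames-per-second range quoted above (the figures are just the arithmetic, not engine specifications):

```python
# A back-of-the-envelope frame-budget sketch: how long may one frame
# take at a given frame rate?

for fps in (20, 30, 60, 120):
    budget_ms = 1000.0 / fps
    print(f"{fps:>3} fps -> {budget_ms:.1f} ms to render each frame")
# At 30 fps everything - geometry, lighting, textures, shading -
# must be computed and displayed within about 33 ms.
```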
Distributed rendering techniques
Distributed rendering splits the workload between processors in one of four main ways - frame distribution, pixel distribution, object distribution and hybrid distribution. In most cases relating to games, many things are happening on screen at once, with models at different stages of rendering according to when they were introduced into the game.
A technique called ‘parallel rendering’ allows the rendering engine to process many different files at once, whatever stage of rendering each has reached. Rendering lends itself well to parallel processing (meaning models can be rendered at the same time, in ‘parallel’ with each other) and has been the subject of much research. The computer must distribute its workload to successfully render multiple models at once rather than doing them one after another, which would certainly ruin performance and make for a sluggish, pretty much unplayable game.
There are two reasons for using parallel rendering - performance scaling and data scaling. Performance scaling allows frames to be rendered more quickly while data scaling allows larger data sets to be visualized. Different methods of workload distribution tend to favor one type of scaling over the other.
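As a toy illustration of frame distribution (a performance-scaling approach), the Python sketch below hands different frames to different worker processes; render_frame is a stand-in, not a real renderer.

```python
# A toy illustration of frame distribution: each worker process
# "renders" a different frame at the same time.
from multiprocessing import Pool

def render_frame(frame_number):
    # Stand-in for real work: pretend to rasterise one frame.
    return f"frame {frame_number} rendered"

if __name__ == "__main__":
    with Pool(processes=4) as pool:            # 4 workers running in parallel
        results = pool.map(render_frame, range(8))
    for line in results:
        print(line)
```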
Parallel rendering techniques also bring their own issues, such as latency (the time between stimulus and response) and load balancing (spreading the workload evenly).
Lighting
Rendered lighting refers to how a scene is lit and how that light affects any objects in the scene. Different lighting can create atmosphere, affect textures, radiosity and ray tracing, or even be used for gameplay.
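One of the simplest examples of how light affects an object is Lambert’s cosine law: a surface is brightest when it faces the light head-on. A minimal sketch with hypothetical vectors:

```python
# Lambert's cosine law: diffuse brightness depends on the angle
# between the surface normal and the direction to the light.
# The vectors below are hypothetical scene values.
import math

def lambert(normal, to_light):
    """Diffuse brightness = max(0, cos of the angle between the vectors)."""
    dot = sum(n * l for n, l in zip(normal, to_light))
    mag = (math.sqrt(sum(n*n for n in normal))
           * math.sqrt(sum(l*l for l in to_light)))
    return max(0.0, dot / mag)

print(lambert((0, 1, 0), (0, 1, 0)))   # light overhead -> 1.0 (full)
print(lambert((0, 1, 0), (1, 1, 0)))   # light at 45 degrees -> ~0.71
print(lambert((0, 1, 0), (0, -1, 0)))  # light below -> 0.0 (unlit)
```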
Textures
The term textures refers to how 2D surfaces are applied to 3D models. Rather than building high surface detail into a model and reducing speed, rendering and load times, textures can be wrapped around and stuck onto the model rather like transfers. They can often give the illusion that a model is more complex than it actually is, particularly when combined with lighting.
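In practice each point on the model carries a (u, v) texture coordinate, and the renderer looks up the matching point in the 2D image. The sketch below uses an invented 4x4 grid of grey values in place of a real texture.

```python
# A sketch of texture lookup: map a (u, v) coordinate in 0..1 onto
# a 2D image grid (nearest texel). The "texture" here is invented.

texture = [
    [ 10,  40,  70, 100],
    [ 40,  70, 100, 130],
    [ 70, 100, 130, 160],
    [100, 130, 160, 190],
]

def sample(u, v):
    """Return the texel nearest to the given (u, v) coordinate."""
    w, h = len(texture[0]), len(texture)
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]

print(sample(0.0, 0.0))   # top-left texel -> 10
print(sample(0.9, 0.9))   # bottom-right texel -> 190
```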
Fogging
Fogging is used in 3D graphics to simulate the perception of distance by introducing a fog. The further away things are, the more the computer shrouds them in digital fog, reducing texture detail, colour and contrast. Objects retaining higher texture detail, colour and contrast appear sharper and more vibrant, giving us the impression that we are standing closer to them.
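A minimal sketch of linear fogging, blending an object’s colour towards a fog colour as distance grows; the colours and distance ranges are invented:

```python
# Linear fog: the further the object, the more its colour is blended
# towards the fog colour. Ranges and colours are illustrative.

def apply_fog(colour, distance, fog_colour=(128, 128, 128),
              fog_start=10.0, fog_end=100.0):
    """Blend an object's colour towards the fog colour with distance."""
    f = (distance - fog_start) / (fog_end - fog_start)
    f = max(0.0, min(1.0, f))   # clamp the blend factor to 0..1
    return tuple(round(c * (1 - f) + fc * f)
                 for c, fc in zip(colour, fog_colour))

print(apply_fog((255, 0, 0), 5.0))    # close: pure red
print(apply_fog((255, 0, 0), 55.0))   # mid-distance: half faded into grey
print(apply_fog((255, 0, 0), 200.0))  # far: fully fogged
```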
Shadowing
Shadowing is a computer calculation that attempts to work out where light is hitting an object and where shadows are logically cast. Other factors taken into account include the darkness of the shadow, its length, its sharpness and how quickly it diffuses into the surrounding environment.
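At its core the calculation is another ray test: a point is in shadow when something blocks the straight line from it to the light. The sketch below uses a single sphere as the only possible blocker; all values are hypothetical.

```python
# A sketch of shadowing's core test: is the segment from a surface
# point to the light blocked by a sphere? All values hypothetical.
import math

def blocked(start, end, centre, radius):
    """Does the segment start->end pass through the sphere?"""
    d = [e - s for s, e in zip(start, end)]    # segment vector
    o = [s - c for s, c in zip(start, centre)]
    a = sum(x*x for x in d)
    b = 2 * sum(ox*dx for ox, dx in zip(o, d))
    c = sum(x*x for x in o) - radius*radius
    disc = b*b - 4*a*c
    if disc < 0:
        return False
    t = (-b - math.sqrt(disc)) / (2*a)
    return 0.0 < t < 1.0        # a hit between the point and the light

light = (0, 10, 0)
sphere_centre, sphere_radius = (0, 5, 0), 1.0
print(blocked((0, 0, 0), light, sphere_centre, sphere_radius))  # True: shadowed
print(blocked((8, 0, 0), light, sphere_centre, sphere_radius))  # False: lit
```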
Vertex and Pixel Shaders
A Pixel Shader is a GPU (Graphics Processing Unit) component that can be programmed to operate on a per-pixel basis, taking care of effects such as lighting and bump mapping.
A Vertex Shader is also a GPU component and is programmed using a similar assembly-like language, but it is oriented to the scene geometry and can do things like adding cartoon-style silhouette edges to objects.
One is not better than the other; they are both equally valuable. Vertex shaders tend to relate to the models themselves, whereas pixel shaders tend to deal with how those models relate to the environment.
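To give a rough feel for the per-pixel idea described above, here is a minimal sketch: one small function run independently for every pixel, just as a real pixel shader would be, except that real shaders run on the GPU in a dedicated shading language. The gradient effect and all names are invented for illustration.

```python
# A pure-Python stand-in for a pixel shader: one function evaluated
# independently per pixel. Real shaders run massively in parallel on
# the GPU; this is illustrative only.

WIDTH, HEIGHT = 8, 4

def pixel_shader(x, y):
    """Compute this pixel's brightness: a left-to-right gradient."""
    return int(255 * x / (WIDTH - 1))

image = [[pixel_shader(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]
for row in image:
    print(row)
```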
Level of detail
Levels of detail in modelling, texturing, shading and lighting affect how quickly and how easily a scene can render and how well a game can run. Many modern games use several models of the same thing at different points within the game. A low-to-mid polycount model is often used in-game to allow the game to run quickly and smoothly on the fly. Bear in mind the average game has many models on screen at once, so the collective polycount of all of them must be low enough for the computer to easily process all the data.
A high-polycount model is often used in pre-rendered video or cutscenes. As the video is pre-rendered it is not being generated on the fly; all lighting, textures and models are pre-set with no differing variables being added into the equation. The video therefore streams rather like a pre-recorded movie.
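As a sketch of how a game might pick between several versions of the same model, the following switches on camera distance; the polycounts and distance bands are invented for illustration.

```python
# Level-of-detail selection: swap to cheaper models as an object
# moves away from the camera. All figures are illustrative.

LODS = [
    (10.0, "high detail - 20,000 polygons"),
    (40.0, "medium detail - 4,000 polygons"),
    (float("inf"), "low detail - 500 polygons"),
]

def pick_lod(distance):
    """Return the first detail level whose range covers the distance."""
    for max_dist, model in LODS:
        if distance <= max_dist:
            return model

for d in (5.0, 25.0, 90.0):
    print(f"{d:5.1f} units away -> {pick_lod(d)}")
```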
Geometric theory
Vertices
A vertex is a single point placed on the screen, its position being plotted and calculated by the computer on an X, Y and Z basis. X, Y and Z make up a virtual 3D space or graph, rather as X and Y make up a point on a 2D graph. Therefore, using X, Y and Z variables, a vertex is a point plotted in virtual space.
Lines
Two vertices will have a path drawn between them, plotting a line through virtual 3D space. On screen, this line is drawn as a row of pixels, creating a visual representation of the line.
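As a sketch of how that pixel representation can be produced, below is Bresenham’s classic line algorithm, which steps through the grid squares that best approximate the straight path between two points; the endpoints are arbitrary.

```python
# Bresenham's line algorithm: the pixels that best approximate a
# straight line between two grid points.

def bresenham(x0, y0, x1, y1):
    pixels = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        pixels.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:            # step horizontally
            err += dy
            x0 += sx
        if e2 <= dx:            # step vertically
            err += dx
            y0 += sy
    return pixels

print(bresenham(0, 0, 5, 2))    # the pixels approximating this line
```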
Curves
A curve can be comprised of several (more than two) vertices in a line that together make up a computer-plotted representation. A line can be drawn that passes through all these vertices to create an arc or curve.
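One way a computer turns a handful of vertices into a smooth arc is interpolation. The sketch below uses a quadratic Bezier curve, which bends toward (rather than through) its middle control vertex; spline types that pass through every vertex exist as well. All coordinates are invented.

```python
# Evaluating a quadratic Bezier curve at evenly spaced parameter
# values, producing the plotted points of a smooth arc.

def bezier(p0, p1, p2, steps=5):
    points = []
    for i in range(steps + 1):
        t = i / steps
        x = (1-t)**2 * p0[0] + 2*(1-t)*t * p1[0] + t**2 * p2[0]
        y = (1-t)**2 * p0[1] + 2*(1-t)*t * p1[1] + t**2 * p2[1]
        points.append((round(x, 2), round(y, 2)))
    return points

print(bezier((0, 0), (1, 2), (2, 0)))  # arcs up and back down
```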
Edge
An edge is a line that connects two vertices and joins with other edges to outline a shape. For example, a simple cube is comprised of twelve equal-length edges that share vertices at the corners.
Face
A simple cube polygon is comprised of six equal faces, which in turn are made from twelve edges, which in turn are made from eight vertices.
Polygons
A polygon is a simple or complex collection of adjoining vertices, edges, lines or faces. These elements come together to form a shape greater than the sum of its individual parts.
Element
It is the elements in a polygon that the artist can manipulate to create a different resulting shape. Elements can be lines, curves, edges, faces or vertices. A collection of these elements makes up a polygon.
Primitives
A primitive is a very basic computer generated shape that the artist can begin to cut, distort, elaborate and embellish to create a more complex model. Examples of primitives include the cube, sphere, cylinder, plane and cone.
Meshes, for example wireframe.
A mesh is what a 3D model looks like when the texture maps and even the polygon faces have been removed to leave only the outlines of its component polygons, consisting of vector points connected by lines. A wireframe can also be called a wire mesh.
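As a concrete illustration of how vertices, edges and faces combine into a mesh, here is the simple cube described earlier expressed as plain data: eight vertices, twelve edges and six faces, with edges and faces referring back into the vertex list by index. The layout is illustrative, not a real file format.

```python
# A minimal mesh data structure for a unit cube.

vertices = [                      # 8 corners as (x, y, z)
    (0,0,0), (1,0,0), (1,1,0), (0,1,0),
    (0,0,1), (1,0,1), (1,1,1), (0,1,1),
]
edges = [                         # 12 edges as vertex-index pairs
    (0,1), (1,2), (2,3), (3,0),   # back face ring
    (4,5), (5,6), (6,7), (7,4),   # front face ring
    (0,4), (1,5), (2,6), (3,7),   # connecting edges
]
faces = [                         # 6 quad faces as vertex indices
    (0,1,2,3), (4,5,6,7), (0,1,5,4),
    (2,3,7,6), (1,2,6,5), (0,3,7,4),
]
print(len(vertices), "vertices,", len(edges), "edges,", len(faces), "faces")
```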
Coordinate geometry (two-dimensional, three-dimensional)
This relates to 2D and 3D space on the screen and how elements are plotted and represented within them. Points in a 2D space are plotted using X and Y: X equals the distance across and Y equals the distance up or down. A 3D space is much the same with one important addition - as well as X and Y, there is also Z, which equals the distance a point is plotted into or out of the screen. Therefore X equals width, Y equals height and Z equals depth.
For example, a single vertex could be plotted at the point X 0.33, Y 5.2, Z -10. An X, Y or Z value can be an integer (whole number) or a decimal fraction, and can be positive or negative.
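Stored as data, the example above is nothing more exotic than a tuple of numbers; a minimal sketch using the values from the text:

```python
# 2D and 3D coordinates as simple tuples.
point_2d = (0.33, 5.2)           # X and Y only
vertex_3d = (0.33, 5.2, -10)     # X, Y and Z: depth added
x, y, z = vertex_3d
print(f"x={x} (width), y={y} (height), z={z} (depth, into the screen)")
```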
Surfaces
Surfaces can be computer generated calculations that mimic light intensity, reflectivity, colour or surface materials. Surfaces can also be 2D rasterized graphics that can be applied to the faces on a polygon shape.
Mesh construction
Box modelling
Box modelling is a technique used by 3D developers to model a complex shape from a primitive. Selecting a simple primitive such as a sphere, cone, cube or cylinder, the modeller adds faces, uses the cut tool, deletes, stretches and deforms the object until it no longer resembles the primitive. A primitive can be elaborated on as much as the user wishes until it resembles the final complex polygon.
An example of box modelling is the construction of a human hand; the hand would actually begin as a cube primitive that is then squashed and stretched into a rectangular shape - believe it or not, this actually is the palm. From this shape four faces are cut into one side and extruded, eventually becoming the fingers. More and more faces are added and extruded until there is enough geometry to alter the shape of the primitive entirely. It becomes unrecognisable as the primitive it started out as.
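A toy version of that first squash-and-stretch step, assuming a unit cube and invented scale factors (real tools do this interactively, of course):

```python
# Non-uniformly scaling a unit cube's vertices to rough out the
# flattened palm shape described above. Values are illustrative.

cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

def scale(verts, sx, sy, sz):
    return [(x * sx, y * sy, z * sz) for x, y, z in verts]

palm = scale(cube, 1.0, 0.3, 1.2)   # flatten in Y, lengthen in Z
for v in palm:
    print(v)
```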
Extrusion modelling
Sometimes called inflation modelling, this second method differs hugely from box modelling. Extrusion modelling involves tracing the outline of a drawing or photograph, creating a curved line made of many vertices. Each line joining each vertex can be extruded (pulled) outwards, creating faces. Each face can then be extruded, divided up into more faces, deleted and modified until the initial single curve has become a 3D shape in X, Y and Z space.
A good example of extrusion modelling is the modelling of a human face. In Maya or something similar, drawings of the front and side views of a human head can be inserted onto the relevant axes as reference. Starting with the side view, the modeller traces the outline of the face and head. This should result in a single curve made of multiple vertices. Each point between these vertices is a line which can be extruded. Selecting all the initial lines in the curve and extruding them outwards will result in a group of attached faces that follow the contour of the head - it should look rather like a strip of paper that curves and bends along the face outline. Continuing the process with another set of extrusions, the ‘strip’ grows wider with more geometric faces and more geometry options. The vertices and edges making up these faces can be added to and deformed ad infinitum whilst switching between the front and side views. Eventually a wireframe mesh can be created that exactly follows the contours of the human head.
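A toy version of that first extrusion step: take a traced 2D outline (a curve of vertices) and extrude it along Z, producing one quad face per line segment - the ‘strip of paper’ described above. The outline points are invented for illustration.

```python
# Extruding a 2D outline along Z to create a strip of quad faces.

outline = [(0.0, 0.0), (1.0, 0.5), (2.0, 0.4), (3.0, 1.0)]  # traced curve
depth = 0.5                                                 # extrusion distance

front = [(x, y, 0.0) for x, y in outline]
back  = [(x, y, depth) for x, y in outline]

# Each pair of neighbouring vertices yields one four-sided face.
faces = [(front[i], front[i+1], back[i+1], back[i])
         for i in range(len(outline) - 1)]

print(len(faces), "faces created from", len(outline), "outline vertices")
for f in faces:
    print(f)
```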
Using common primitives, for example cubes, pyramids, cylinders, and spheres.
These are used for box modelling (see above). They can also be fused together to make more complex base shapes. These base shapes can be elaborated and refined until more complex results are achieved.
3D development software
Software
Examples include 3D Studio Max, Maya, Lightwave, AutoCAD, Cinema 4D. All these programs contain complete toolsets for building polygon meshes, adding surfaces and lighting details, creating animations and adding scripting for user interactivity. They give the user the ability to import 2D artwork to apply to their models, edit and cut movies together, add sound and music and much more. They are the industry standard for creating complex 3D graphics, movies, games and applications.
File formats
Examples of file formats include:
3ds
Used by the Autodesk 3ds Max 3D modeling, animation and rendering software. It was the original native file format of the old Autodesk 3D Studio DOS (releases 1 to 4), which was popular until its successor (3D Studio MAX 1.0) replaced it in April 1996.
.mb
MB files can be opened by Autodesk Maya. MB is a file extension for a binary scene file used with Autodesk Maya software; it stands for Maya Binary scene. MB files contain three-dimensional models, textures, lighting and animation data to be used with the Maya animation software.
.lwo
LWO is an object file format used by LightWave. LightWave is a program used for 3-D modeling, animation, and rendering. LWO files contain objects stored as meshes, and include polygons, points, and surfaces that describe the model’s appearance. They also might reference image files for textures.
.c4d
C4D is an object file format used by Cinema 4D. C4D is a three-dimensional model created with Cinema 4D, a professional 3D modelling and animation program. It can be exported to image-editing programs, such as Photoshop and Illustrator, as well as video-editing programs, like After Effects and Final Cut Pro.
Plug-ins
A plug-in is a software component that adds a specific feature to an existing software application. When an application supports plug-ins, it enables customization.
Constraints
Polygon count
The polycount is the total number of polygons within a mesh. The higher the polycount, the more detailed the model and the larger the file size.
File size
The file size takes into account the polycount, the resolution of the 2D raster surfaces, lighting, bump mapping and the collective number of models within the scene. The higher the level of detail of these, the greater the file size. The file size should in turn be taken into account in terms of rendering time.
Rendering time
A greater file size will result in a longer rendering time. This should be taken into account not only when creating in-game video or footage but also for 3D models within the game world. Models that take longer to render may slow the game engine down if the file size is too great. A trade-off between looks and performance should always be kept in mind.
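A rough capacity check ties the three constraints together: can the scene’s total polygon count fit a per-frame budget? Every figure below is invented for illustration.

```python
# A back-of-the-envelope check: total scene polygons vs an assumed
# per-frame budget. All numbers are illustrative.

polys_per_model = 4_000
models_on_screen = 60
engine_budget = 300_000          # polygons drawable per frame (assumed)

scene_total = polys_per_model * models_on_screen
print(f"scene total: {scene_total:,} polygons")
print("within budget" if scene_total <= engine_budget
      else "over budget - reduce detail or model count")
```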