Part B - Direct3D

Graphics and Cameras

Introduce the graphics primitive display paradigm
Add multiple viewpoint functionality to the framework
Store the graphic representation in a vertex buffer
Implement world, view, and projection transformations


Computer graphics uses the primitive display paradigm to define and render graphical representations.  Under this paradigm, we describe the surface of any object as a collection of primitives, where each primitive is a standard geometric shape completely defined by a finite set of vertices.  Idealizing the surface of a three-dimensional object using such standard shapes simplifies its representation and minimizes the time required to render it on graphics hardware.  This paradigm has endured because of its simplicity and its ability to support sufficiently accurate approximations that consume minimal resources. 

In this chapter, we create a three-dimensional world, using objects represented by sets of such primitives.  We introduce two different viewpoints on the scene with adjustable cameras, add perspective with a projection frustum, and accommodate window resizing by adjusting the frustum. 

Graphics Sample

The Graphics Sample displays three orthogonal grids meeting at the origin of world space, a red square, a green box, and a smaller blue box, all set against the photograph of Stonehenge and its travelling sprite in the background.  The green box rotates steadily about an axis through its centroid parallel to the world x axis.  The small blue box is attached to this green box and rotates with it.  The red square's centroid is at the origin of world space and the square can rotate about its centroid.

Graphics Primitives and Cameras

The user can detach and re-attach the child box.  If the user presses the roll key, the red square rotates about an axis through its centroid parallel to the world z axis.  If the user presses the pitch key, the green box spins about an axis through its centroid parallel to the world x axis. 

There are two cameras: a free one at world coordinates (-5, 0, -80) pointing in the direction of the positive z axis and one attached to the green box.  The user toggles between the two cameras by pressing the select camera key.  Using the camera control keys, the user moves the currently selected camera forwards/backwards, yaws right/left about an axis through its centroid and parallel to the world y axis, pitches up/down about an axis through its centroid and parallel to the world x axis, or rolls about an axis through its centroid and parallel to the world z axis. 

The default key mappings for user-controlled actions are:

  • '4' - speed up the sprite to the right
  • '5' - speed up the sprite to the left
  • '6' - speed up the sprite in the up direction
  • '7' - speed up the sprite in the down direction
  • 'R' - roll square about axis parallel to world z
  • 'T' - pitch green box about axis parallel to world x
  • 'P' - detach blue box
  • 'O' - attach blue box to green parent
  • 'PgUp/PgDown' - pitch the currently selected camera
  • 'Left/Right' - yaw the currently selected camera
  • 'W' - move the selected camera forward
  • 'S' - move the selected camera backward
  • 'Q' - roll the selected camera counter-clockwise
  • 'E' - roll the selected camera clockwise
  • 'A' - pan the selected camera left
  • 'D' - pan the selected camera right
  • '<' - pan the selected camera left (alternate)
  • '>' - pan the selected camera right (alternate)
  • 'Up/Down' - fly the selected camera up/down
  • 'X' - toggle between the two cameras

The , (comma) key toggles between filled and wire-frame modes.


Five components are upgraded to incorporate the graphics primitive display paradigm:

  • Object - manages lit graphics primitives as well as sprites
  • Graphic - manages and renders sets of graphics primitives as well as sprites
  • Coordinator - renders drawable objects and manages housekeeping of all design elements
  • Design - updates the positions and orientations of the objects in the scene
  • Display - implements the view and projection transformations

There are two new components:

  • Vertex - represents the defining points of the graphics primitives
  • Camera - represents the viewpoints on the scene

Graphic and Camera Components


The topics covered in this chapter include:

  1. the library of graphics primitives
  2. decomposition of object information into vertex and world data
  3. transformations from local object space to homogeneous clip space
  4. vertex buffering
  5. hardware capabilities
  6. rendering control

Graphics Primitives

The library of graphics primitives includes triangle sets, line sets, and point sets as illustrated below.  Strips and triangle fans possess a slightly higher degree of continuity than the other sets.


The formulas for the number of vertices needed to define a collection are listed in the table below.  n denotes the number of primitives in the collection.

Primitive Type    Number of Vertices
point set         n
line set          2n
line strip        n + 1
triangle set      3n
triangle strip    n + 2
triangle fan      n + 1
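The formulas in this table can be captured in a small helper function.  The sketch below uses illustrative names, not the framework's own enumeration:

```cpp
#include <cassert>

enum Primitive { PointSet, LineSet, LineStrip, TriangleSet, TriangleStrip, TriangleFan };

// number of vertices needed to define a collection of n primitives
unsigned vertexCount(Primitive p, unsigned n) {
    switch (p) {
        case PointSet:      return n;          // one vertex per point
        case LineSet:       return 2 * n;      // two vertices per line
        case LineStrip:     return n + 1;      // consecutive lines share a vertex
        case TriangleSet:   return 3 * n;      // three vertices per triangle
        case TriangleStrip: return n + 2;      // consecutive triangles share an edge
        case TriangleFan:   return n + 1;      // all triangles share the first vertex
    }
    return 0;
}
```

The strip and fan formulas are smaller than the set formulas because adjacent primitives share vertices, which is the higher degree of continuity noted above.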

Vertex Information

A vertex holds information about a single defining point of a primitive.  The information minimally includes the point's position in local space plus, here, the colour of the primitive at that point.  In other words, a vertex contains a position vector plus additional properties, and the type of vertex defines those properties.  In this simplest of samples, the only added property is colour.  We call such a vertex a lit vertex.
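In code, a lit vertex might look like the following sketch.  The layout here is illustrative only; the framework's actual LitVertex class appears in the Graphic component:

```cpp
#include <cassert>

// a sketch of a lit vertex: a position in the local frame of reference
// plus a packed 32-bit colour (illustrative layout, not the framework's)
struct LitVertex {
    float    x, y, z;  // position in local space
    unsigned colour;   // packed 32-bit ARGB colour
};
```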

In gaming, graphical representations of objects do not typically change.  Each object uses a representation that is approximated as perfectly rigid throughout its lifetime.  In such representations, the local positions of the vertices do not change.  This approximation lets us divide each object's data into constant and mutable parts:

  • constant = vertex information - defines the graphics primitives in a local frame of reference
  • mutable = world transformation - defines the object's current position and orientation in the world frame of reference

Throughout a game, the framework adjusts the object's position and orientation, leaving the local description of the vertices of its graphical representation unchanged.  The framework stores the local vertex information in a vertex buffer and the object's current position and orientation separately in the object's world matrix.  The drawing process converts a stream of local vertex data into a stream of coloured vertex data in clip space. 



The three transformations required to convert an object's local vector data into a vector in clip space are successively the world transformation, view transformation, and projection transformation as shown below.  The framework applies these transformations to each vertex of each graphics primitive.  It stores the world transformation that converts local space vertex data into world space data in the Frame part of each object.  It uses the position and orientation of the current Camera object to determine the view transformation that converts world space data into camera space data.  It stores the projection transformation that converts camera space data into homogeneous clip space data in the APIDisplay object. 
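The chain can be sketched numerically.  Direct3D's fixed function pipeline treats vertices as row vectors, so a local position p reaches clip space as p · World · View · Projection.  The following self-contained illustration uses a hypothetical matrix type, with identity view and projection matrices for brevity:

```cpp
#include <cassert>

struct Mat4 { float m[4][4]; };
struct Vec4 { float x, y, z, w; };

Mat4 identity() {
    Mat4 r = {};
    for (int i = 0; i < 4; i++) r.m[i][i] = 1.0f;
    return r;
}

// row-vector convention: the translation sits in the bottom row
Mat4 translation(float x, float y, float z) {
    Mat4 r = identity();
    r.m[3][0] = x; r.m[3][1] = y; r.m[3][2] = z;
    return r;
}

Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r = {};
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            for (int k = 0; k < 4; k++)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

// transform a row vector by a matrix: v' = v * m
Vec4 transform(const Vec4& v, const Mat4& m) {
    return {
        v.x * m.m[0][0] + v.y * m.m[1][0] + v.z * m.m[2][0] + v.w * m.m[3][0],
        v.x * m.m[0][1] + v.y * m.m[1][1] + v.z * m.m[2][1] + v.w * m.m[3][1],
        v.x * m.m[0][2] + v.y * m.m[1][2] + v.z * m.m[2][2] + v.w * m.m[3][2],
        v.x * m.m[0][3] + v.y * m.m[1][3] + v.z * m.m[2][3] + v.w * m.m[3][3]
    };
}
```

With an identity view and projection, a local origin vertex of an object whose world matrix translates by (1, 2, 3) lands at (1, 2, 3) in clip space.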

Local Frames of Reference

The fixed function pipeline of Direct3D applies all three transformations to each stream of vertices for each object in the scene.  Fixed function means that the API defines the details of these operations and the programmer has no input.  The framework passes each transformation matrix to Direct3D through the SetTransform() method on the display device.  The first argument in a call to this method identifies the transformation:

  • D3DTS_WORLD - the world transformation
  • D3DTS_VIEW - the view transformation
  • D3DTS_PROJECTION - the projection transformation

The second argument is the address of the matrix that holds the transformation data.  Direct3D accepts all matrix data as a D3DXMATRIX type.  The framework's calls to the SetTransform() method require casting from the platform-independent type (Matrix). 

The world, view, and projection matrices change under distinct influences.  The world matrix for each object changes as the object translates and rotates in world space.  The view matrix changes as the camera translates and rotates in world space.  The projection matrix changes as the window resizes. 

Vertex Buffering

The location of the vertex buffer on a host depends upon resource availability.  If the buffer is stored in video memory, the drawing process doesn't need to access system memory or pass vertex data through the CPU.  If, on the other hand, graphics processing already consumes most of video memory, storing the vertex buffer off video memory may improve performance. 

Direct3D lets us store the vertex buffer on or off video memory.  To use video memory, the framework turns on the D3DCREATE_HARDWARE_VERTEXPROCESSING behavior flag when creating the display device.  To let the system select the memory, the framework turns on the D3DCREATE_SOFTWARE_VERTEXPROCESSING behavior flag when creating the display device. 


The HAL, which sits between the hardware and Direct3D, allows both software and hardware vertex processing.  Software processing guarantees a fixed and comprehensive set of capabilities and fully supports programmable vertex shaders, in which the programmer controls the detailed operations (beyond the scope of these notes).  Hardware processing produces varied results across different hardware. 

Hardware Capabilities

The GetDeviceCaps() method on the Direct3D interface reports the capabilities of a display device.  This method populates an instance of a D3DCAPS9 struct with those capabilities.  The reported data takes the form of flags and integer values, depending upon the member of the struct. 

Rendering Control

The SetRenderState() method on the display device lets the programmer control selected aspects of rendering.  To access this method, the framework provides a wrapper set() method on the APIDisplay object.  The first argument to this method is an enumeration constant that identifies the state to be set and the second argument specifies the setting's boolean value. 

The render states for this sample cover lighting, alpha-blending and wireframe drawing.  Lighting is turned off since the vertices define the colour. 

Model Layer Topics

The platform-independent topics include:

  1. colour description for design elements
  2. mobility of design elements
  3. management of design elements
  4. key press latency


As noted in the chapter on background images, the framework describes colour using four components (red, green, blue, and alpha).  The framework uses a short format and a long format for storing these components.  The short format uses a 32-bit unsigned integer, which limits the data for each component to 8 bits.  The long format uses a 128-bit struct, which allocates 32 bits per component.  The members are floats, each with a range of [0.0f, 1.0f].
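The relationship between the two formats can be illustrated with a pair of hypothetical helpers; Direct3D's own macros, such as D3DCOLOR_ARGB, perform the equivalent packing:

```cpp
#include <cassert>

// short format: each component scaled to 8 bits of a 32-bit unsigned
// integer, in ARGB order (a sketch, not the framework's helpers)
unsigned packColour(float r, float g, float b, float a) {
    return (unsigned(a * 255.0f) << 24) | (unsigned(r * 255.0f) << 16) |
           (unsigned(g * 255.0f) << 8)  |  unsigned(b * 255.0f);
}

// long format: one float per component, each in the range [0.0f, 1.0f]
struct ColourValue { float r, g, b, a; };

// convert the short format back to the long format
ColourValue unpackColour(unsigned c) {
    return { ((c >> 16) & 0xFF) / 255.0f, ((c >> 8) & 0xFF) / 255.0f,
             (c & 0xFF) / 255.0f, ((c >> 24) & 0xFF) / 255.0f };
}
```

Note the quantization: the short format cannot distinguish component values closer together than 1/255.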

Direct3D provides macros for storing colour information in 32-bit unsigned integers and in instances of 128-bit structs, and for converting from one format to the other. 

Dynamic Design Elements

Design elements like Objects and Cameras can change position and orientation within a scene.  Each element has its own local frame of reference.  Each element's position, orientation, and scale, if any, can be stored in a single homogeneous matrix.  This matrix transforms any vector defined in the element's local frame of reference into a vector in world space. 

The framework stores the homogeneous matrix for each design element in the Frame part of that element's instance variables.  Each element's interface derives from the Frame base class, giving the element access to all of the Frame class' methods.

dynamic design elements

In this sample, the iObject and iCamera interfaces expose the Frame methods that change and report position and orientation. 

Management of Design Elements

To enable the creation and destruction of design elements on the fly (within the Design object), the framework delegates their management to the Coordinator object, using a loosely coupled registration system for element creation and destruction. 

The Coordinator class holds a vector of pointers to each set of design elements.  The class for each type of design element has access to the Coordinator object's address through a base class pointer.  The Coordinator class defines a pair of methods for adding and removing each type of design element from its vectors.  These add() and remove() methods respectively insert and remove the address of an instance of a design element to and from the appropriate vector.  Upon creation of a design element, the constructor for that element adds its address to the Coordinator object by calling the add() method for that element type.  Upon deletion of a design element, the element's destructor removes its address from the Coordinator object by calling the remove() method for that element type. 
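The self-registration pattern described above can be sketched in isolation.  The names below are hypothetical; the point is that construction adds the element's address to the manager and destruction removes it, mirroring the Coordinator's add()/remove() pair:

```cpp
#include <vector>
#include <cassert>

struct Element;

// stands in for the Coordinator's vector of element pointers
struct Registry {
    std::vector<Element*> elements;
    void add(Element* e)    { elements.push_back(e); }
    void remove(Element* e) {
        for (auto& p : elements) if (p == e) p = nullptr;  // vacate the slot
    }
};

struct Element {
    Registry& reg;
    Element(Registry& r) : reg(r) { reg.add(this); }    // register on creation
    ~Element()                    { reg.remove(this); } // deregister on deletion
};
```

Because registration lives in the constructor and destructor, no other code needs to remember to book-keep the element lists.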

coordinator primitive pattern

Key Press Latency

The duration of a user's key press can be a fraction of a second and span multiple renderings.  Because the frame rate is faster than the duration of a typical key press, the excess needs to be managed.  Since the framework polls a key's state before rendering every frame, it would typically interpret a single key press as a series of multiple presses of the same key.  In the case of any key that toggles a state, the result would be a series of toggles that turn the state on and off several times, leaving its final state outside user control.  To avoid reading a user key press as more than a single toggle, the framework tracks the time since the last toggle and processes a toggle only if sufficient time has elapsed.  The minimum elapsed time is defined as the key latency.
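The latency guard can be sketched as follows.  The names are hypothetical; time values are in the same units as the frame clock (and as the KEY_LATENCY macro defined later):

```cpp
#include <cassert>

// a toggle protected by a key latency interval (illustrative sketch)
struct LatencyToggle {
    int  lastToggle = 0;
    bool state      = false;
    // returns true if the press was accepted as a new toggle
    bool press(int now, int latency) {
        if (now - lastToggle > latency) {
            lastToggle = now;
            state = !state;
            return true;
        }
        return false;  // within the latency window: ignore the repeat
    }
};
```

A key held across several frames produces repeated press() calls, but only the calls separated by more than the latency interval flip the state.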

Parameter Settings

Translation Layer

Settings for the Translation Layer cover:

  • the macros for the
    • action descriptions
    • key-action mappings
  • the enumeration constants for the
    • user actions
    • primitive types
    • render state flags
 // Translation.h
 // ...
 typedef enum Action {
     // ...
 } Action;

     L"Sprite X +", \
     L"Sprite X -", \
     L"Sprite Y +", \
     L"Sprite Y -", \
     L"Roll Square",   \
     L"Pitch Box",\
     L"Detach Child Box", \
     L"Attach Child Box", \
     L"Pitch Camera Up",     \
     L"Pitch Camera Down",   \
     L"Yaw Camera Left",     \
     L"Yaw Camera Right",    \
     L"Move Camera Forward",     \
     L"Move Camera Backward",    \
     L"Roll Camera Left", \
     L"Roll Camera Right", \
     L"Pan Camera Left", \
     L"Pan Camera Right", \
     L"Pan Camera Left Alt", \
     L"Pan Camera Right Alt", \
     L"Fly Camera Up", \
     L"Fly Camera Down", \
     L"Select Camera", \
     L"Toggle Wire Frame Mode", \

    KEY_4, KEY_5, KEY_6, KEY_7, KEY_R, KEY_U, KEY_O, KEY_P, \

 // Primitive Types
 typedef enum PrimitiveType {
     POINT_LIST     = 1,
     LINE_LIST      = 2,
     LINE_STRIP     = 3,
     TRIANGLE_LIST  = 4,
     TRIANGLE_STRIP = 5,
     TRIANGLE_FAN   = 6
 } PrimitiveType;

 // Rendering State Flags
 typedef enum RenderState {
     ALPHA_BLEND    = 1,
     LIGHTING       = 2,
     WIRE_FRAME     = 3,
 } RenderState; 

Model Settings

Settings for the Model Layer cover the macros that define the directory for the file that holds the vertex data for the red square, the key-press latency, the speeds for converting time into distance travelled and radians swept, and the enumeration constants that identify object categories:

 // Model.h
 // ...
 // File Directories
 #define TEXTURE_DIRECTORY L"..\\..\\resources\\textures"
 #define ASSET_DIRECTORY   L"..\\..\\resources\\assets"
 // ...
 // latency - keystroke time interval
 #define KEY_LATENCY     (unitsPerSec / 2)

 // camera settings
 // camera speed control factors
 #define CAM_SPEED       0.02f
 #define TURNING_RADIUS  120.00f

 // frame motion parameters
 // factors applied to the time interval
 #define FORWARD_SPEED (10.0f / unitsPerSec)
 #define ROT_SPEED     (0.03f * FORWARD_SPEED)
 #define CONSTANT_ROLL (10.0f * ROT_SPEED)
 // ...
 // Object categories
 typedef enum Category {
     SPRITE,
     LIT_OBJECT
 } Category;


The upgraded Coordinator component incorporates perspective support, key latency support, and automated support for addition and removal of design elements. 

The iCoordinator interface includes pure virtual methods for adding and removing design elements of different type:

 // iCoordinator.h

 class iCoordinator : public Base {
     virtual void add(iObject* o)         = 0;
     virtual void add(iTexture* t)        = 0;
     virtual void add(iCamera* c)         = 0;
     virtual void add(iGraphic* v)        = 0;
     // ...
     virtual void remove(iObject* o)      = 0;
     virtual void remove(iTexture* t)     = 0;
     virtual void remove(iCamera* c)      = 0;
     virtual void remove(iGraphic* v)     = 0;
 // ...

The Coordinator class includes vectors of design element pointers, key latency variables, and projection parameters:

 // Coordinator.h

 class Coordinator : public iCoordinator {
     // ...
     std::vector<iGraphic*> graphic;          // points to graphics
     std::vector<iTexture*> texture;          // points to textures
     std::vector<iCamera*>  camera;           // points to cameras
     std::vector<iObject*>  object;           // points to objects
     unsigned               currentCam;       // index - current camera
     unsigned               lastCameraToggle; // most recent camera toggle
     unsigned               lastWFrameToggle; // most recent wire frame toggle
     // display device
     float                  nearcp;           // near clipping plane
     float                  farcp;            // far clipping plane
     float                  fov;              // field of view in radians
     Matrix                 projection;       // projection transformation
     bool                   wireFrame;        // wire frame mode
     // ...
     void render(Category category);

     void setProjection(float, float, float);
     // ...
     // ...
     void  add(iObject* o)  { ::add(object, o); }
     void  add(iTexture* t) { ::add(texture, t); }
     void  add(iCamera* c)  { ::add(camera, c); }
     void  add(iGraphic* g) { ::add(graphic, g); }
     // ...
     void  remove(iObject* o)  { ::remove(object, o); }
     void  remove(iTexture* t) { ::remove(texture, t); }
     void  remove(iCamera* c)  { ::remove(camera, c); }
     void  remove(iGraphic* g) { ::remove(graphic, g); }



The add() template adds the received address for the design element of type T to the list of addresses:

 template <class T>
 void add(std::vector<T>& v, T o) {
     bool rc = false;
     for (unsigned i = 0; i < v.size() && !rc; i++)
         if (!v[i]) {
             v[i] = o;
             rc = true;
         }
     if (!rc) v.push_back(o);
 }


The remove() template removes the received address for the design element of type T from the list of addresses:

 template <class T>
 bool remove(std::vector<T>& v, T o) {
     bool rc = false;
     for (unsigned i = 0; i < v.size(); i++)
         if (v[i] == o) {
             v[i] = 0;
             rc = true;
         }
     while (v.size() && !v[v.size() - 1]) v.pop_back();

     return rc;
 }
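Exercising these templates in isolation shows the slot-reuse behaviour: remove() vacates an interior slot rather than shrinking the vector, and a later add() refills the vacated slot.  The templates are reproduced here so the sketch is self-contained:

```cpp
#include <vector>
#include <cassert>

template <class T>
void add(std::vector<T>& v, T o) {
    bool rc = false;
    for (unsigned i = 0; i < v.size() && !rc; i++)
        if (!v[i]) {
            v[i] = o;   // reuse a vacated slot
            rc = true;
        }
    if (!rc) v.push_back(o);
}

template <class T>
bool remove(std::vector<T>& v, T o) {
    bool rc = false;
    for (unsigned i = 0; i < v.size(); i++)
        if (v[i] == o) {
            v[i] = 0;   // vacate the slot
            rc = true;
        }
    while (v.size() && !v[v.size() - 1]) v.pop_back();
    return rc;
}
```

This design keeps the indexes of the remaining elements stable, which matters for variables like currentCam that index into the camera vector.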


The constructor initializes the key latency timers, the current camera index, the wire frame flag, and the projection parameters:

 Coordinator::Coordinator(void* hinst, int show) {
     // ...
     lastCameraToggle = 0;
     lastWFrameToggle = 0;
     // current state
     currentCam = 0;
     wireFrame  = false;
     // projection parameters
     fov    = 0.9f;
     nearcp = 1.0f;
     farcp  = 1000.0f;
     // ...
 }

Get Configuration

The getConfiguration() method determines the projection matrix for the configured aspect ratio and passes the matrix to the APIDisplay object:

 bool Coordinator::getConfiguration() {
     bool rc = false;
     if (userInput->getConfiguration()) {
         // ...
             // ...
             if (display->setup()) {
                 projection = ::projection(fov, (float) width
                  / height, nearcp, farcp);
                 rc = true;
             }
             // ...
     }
     return rc;
 }

Set Projection

The setProjection() method resets the projection parameters to the received values:

 void Coordinator::setProjection(float angle, float n, float f) {
     fov    = angle;
     nearcp = n;
     farcp  = f;
 }


The update() method responds to requests for switching cameras and for toggling wire-frame mode, and updates the current camera:

 void Coordinator::update() {
     // toggle and update the current camera
     if (camera.size() && userInput->pressed(CAMERA_SELECT) &&
         now - lastCameraToggle > KEY_LATENCY) {
         lastCameraToggle = now;
         currentCam++;
         if (currentCam == camera.size()) currentCam = 0;
     }
     if (camera.size() && camera[currentCam])
         camera[currentCam]->update();
     // toggle and update the wire frame state
     if (userInput->pressed(WIRE_FRAME_SELECT) &&
         now - lastWFrameToggle > KEY_LATENCY) {
         lastWFrameToggle = now;
         wireFrame = !wireFrame;
     }
 }


The no-argument render() method draws the sprite objects, turns off the lighting, turns on alpha blending, renders the lit objects, turns off alpha blending, and turns on lighting.  The single-argument render() method draws the objects that belong to the specified category:

 void Coordinator::render() {
     display->set(WIRE_FRAME, wireFrame);
     // draw the sprite objects
     render(SPRITE);
     // draw the lit objects
     display->set(LIGHTING, false);
     display->set(ALPHA_BLEND, true);
     render(LIT_OBJECT);
     display->set(ALPHA_BLEND, false);
     display->set(LIGHTING, true);
 }

 void Coordinator::render(Category category) {
     // draw objects that only belong to category
     for (unsigned i = 0; i < object.size(); i++)
         if (object[i] && object[i]->belongsTo(category))
             object[i]->render();
 }


The resize() method recalculates the projection matrix for the resized window and passes it to the APIDisplay object:

 void Coordinator::resize() {
     if (active && userInput->getWindowMode()) {
         projection = ::projection(fov, (float)window->getClientWidth()
          / window->getClientHeight(), nearcp, farcp);
         // ...
     }
 }

Suspend, Restore, and Release

The suspend() method suspends the connection of the design elements to the display device in anticipation of loss of focus:

 void Coordinator::suspend() {
     for (unsigned i = 0; i < texture.size(); i++)
         if (texture[i]) texture[i]->suspend();
     for (unsigned i = 0; i < graphic.size(); i++)
         if (graphic[i]) graphic[i]->suspend();
     active = false;
 }

The restore() method resets the connection to the display device, recalculates the projection matrix for the window's aspect ratio, passes it to the APIDisplay object, and resets the key latency timers:

 void Coordinator::restore() {
     // ...
     projection = ::projection(fov, (float) window->getClientWidth()
      / window->getClientHeight(), nearcp, farcp);
     lastCameraToggle = now;
     lastWFrameToggle = now;
     // ...

The release() method releases the design elements' connections to their COM objects:

 void Coordinator::release() {
     for (unsigned i = 0; i < texture.size(); i++)
         if (texture[i]) texture[i]->release();
     for (unsigned i = 0; i < graphic.size(); i++)
         if (graphic[i]) graphic[i]->release();
     // ...


The destructor deletes all existing design elements in the application:

 Coordinator::~Coordinator() {
    for (unsigned i = 0; i < object.size(); i++)
        if (object[i]) object[i]->Delete();
    for (unsigned i = 0; i < texture.size(); i++)
        if (texture[i]) texture[i]->Delete();
    for (unsigned i = 0; i < graphic.size(); i++)
        if (graphic[i]) graphic[i]->Delete();
    for (unsigned i = 0; i < camera.size(); i++)
        if (camera[i]) camera[i]->Delete();
    // ...


The Design component implements the game design.  Here, the Design class contains the logic for the sample described above. 

The Design class defines four instance pointers to the dynamic design elements:

 // Design.h

 class Design : public iDesign {
     iObject* parent;  // points to the parent box
     iObject* child;   // points to the child box
     iObject* square;  // points to the square plate
     iObject* sprite;  // points to moving sprite
     // ...


The constructor initializes the instance pointers: 

 Design::Design(void* h, int s) : Coordinator(h, s) {
     parent = nullptr;
     child  = nullptr;
     square = nullptr;
     sprite = nullptr;
 }


The initialize() method defines the cameras and the drawable objects and translates and rotates them to their initial positions and orientations: 

 void Design::initialize() {

     sprite = CreateSprite(CreateGraphic(120, 120));
     sprite->translate(0, 20, 0);

     // cameras -----------------------------------------------------------
     CreateCamera()->translate(-5, 0, -80);
     // second camera attached to the box
     iCamera* objectCamera = CreateCamera();

     // model -------------------------------------------------------------
     Colour blue(0.1f, 0.1f, 0.9f);
     Colour green(0.1f, 0.8f, 0.1f);
     Colour red(.9f, .1f, .1f, 0.5f);
     Colour grey(.2f, .2f, .2f);

     // vertex lists
     iGraphic* boxg  = CreateBox(-10, -10, -10, 10, 10, 10, green);
     iGraphic* boxb  = CreateBox(-5, -5, -5, 5, 5, 5, blue);
     iGraphic* gridw = CreateGrid(-25, 25, 10, grey);
     iGraphic* plate = TriangleList(L"colouredsquare.txt", red);

     // objects
     square = CreateObject(plate);
     parent = CreateObject(boxg);
     child  = CreateObject(boxb);
     iObject* xz = CreateObject(gridw);
     iObject* xy = Clone(xz);
     iObject* yz = Clone(xz);

     // translate, rotate, and attach
     child->translate(0, 0, 10);
     parent->translate(-8, -20, 40);
     xz->translate(25, 0, 25);
     xy->translate(25, 25, 0);
     yz->translate(0, 25, 25);
     // ...
 }


The update() method calculates the effects of initiated actions on the drawable objects, updates their orientations, and attaches the child box to or detaches it from the green box: 

 void Design::update(int now) {

     static bool left = false, down = false;
     int delta = now - lastUpdate;
     float del = delta * SPEED;
     float dsx = left ? - del : del, dsy = 0;
     int dz = 0;  // roll  the square around || to world z axis
     int dx = 0;  // pitch the box    around || to world x axis

     // changes introduced through keyboard input
     if (pressed(SPRITE_MINUS_X)) dsx -= del;
     if (pressed(SPRITE_PLUS_X))  dsx += del;
     if (pressed(SPRITE_MINUS_Y)) dsy -= del;
     if (pressed(SPRITE_PLUS_Y))  dsy += del;
     if (pressed(ROLL_SQUARE))    dz  += delta;
     if (pressed(PITCH_BOX))      dx  += delta;

     // keep sprite within limits and reverse directions at limits
     Vector p = sprite->position();
     if (p.x + dsx <= 0) {
         dsx = - dsx - 2 * p.x;
         left = false;
     }
     else if (p.x + dsx + sprite->width() >= width) {
         dsx = 2 * (width - p.x - sprite->width()) - dsx;
         left = true;
     }
     if (p.y + dsy <= 0) {
         dsy = - dsy - 2 * p.y;
         down = false;
     }
     else if (p.y + dsy + sprite->height() >= height) {
         dsy = 2 * (height - p.y - sprite->height()) - dsy;
         down = true;
     }
     sprite->translate(dsx, dsy, 0);

     // adjust the orientations for user input
     if (parent) parent->rotatex(dx * ROT_SPEED + CONSTANT_ROLL);
     if (square) square->rotatez(dz * ROT_SPEED);

     // attach/detach child object
     if (pressed(MDL_ATT_CHILD) && child && parent) child->attachTo(parent);
     if (pressed(MDL_DET_CHILD) && child && parent) child->attachTo(nullptr);
 }
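The reflection arithmetic that keeps the sprite within the window can be isolated in one dimension and checked directly.  This is a sketch with hypothetical names, not the framework's code: the sprite occupies [p, p + size] and must stay within [0, limit]:

```cpp
#include <cassert>

// reflect a displacement d at the boundaries of [0, limit - size];
// movingLeft records the direction after any bounce
float reflect(float p, float d, float size, float limit, bool& movingLeft) {
    if (p + d <= 0) {                   // bounce off the lower edge:
        d = - d - 2 * p;                // overshoot past 0 is mirrored back
        movingLeft = false;
    }
    else if (p + d + size >= limit) {   // bounce off the upper edge:
        d = 2 * (limit - p - size) - d; // overshoot past the limit is mirrored
        movingLeft = true;
    }
    return d;
}
```

For example, a sprite at p = 10 moving by -30 would land at -20; mirrored about 0, it lands at +20 instead, so the adjusted displacement is +10.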


The Camera component manages the different viewpoints on the scene.  Each Camera object represents a separate viewpoint. 

The iCamera interface derives from the Frame and Base classes and exposes two virtual methods to the Coordinator and Design classes:

  • update() - updates the Camera object
  • Delete() - deletes the Camera object
 // iCamera.h

 class iCamera : public Frame, public Base {
     virtual void update()          = 0;
     virtual void Delete() const    = 0;
     friend class Coordinator;
     friend class Design;
 iCamera* CreateCamera();

The Camera class defines two class variables:

  • a pointer to the current Camera object
  • the view matrix for the current camera
 // Camera.h

 class Camera : public iCamera {
     static iCamera* current; // points to the current camera
     static Matrix   view;    // view transformation for the current camera
     virtual ~Camera();
     static Frame** getCurrent() { return (Frame**)&current; }
     static void*   getView()    { return &view; }
     Camera(const Camera& c);
     void* clone() const         { return new Camera(*this); }
     bool pressed(Action a);
     void update();
     void Delete() const         { delete this; }



The constructor adds the current object's address to the Coordinator object and orients the camera to head into the screen (along the negative z axis when the world z axis points out of the screen):

 Camera::Camera() {
     coordinator->add(this);
     current = this;
     // ...
 }


The pressed() method reports the state of the specified action for use in the update() method:

 bool Camera::pressed(Action a) { return coordinator->pressed(a); }


The update() method adjusts the position and orientation of the camera and stores its view transformation:

 void Camera::update() {
     int delta = now - lastUpdate;
     int dx = 0, // pitch up/down
         dy = 0, // yaw left/right
         dz = 0; // advance/retreat
     int rx = 0, // rotate about local x axis
         ry = 0, // rotate about local y axis
         rz = 0; // rotate about local z axis

     // keyboard input
     if (pressed(CAM_STRAFE_LEFT) || pressed(CAM_STRAFE_LEFT_ALT))
         dx -= delta;
     if (pressed(CAM_STRAFE_RIGHT) || pressed(CAM_STRAFE_RIGHT_ALT))
         dx += delta;
     if (pressed(CAM_FLY_DOWN))   dy -= delta;
     if (pressed(CAM_FLY_UP))     dy += delta;
     if (pressed(CAM_ADVANCE))    dz += delta;
     if (pressed(CAM_RETREAT))    dz -= delta;
     if (pressed(CAM_PITCH_UP))   rx -= delta;
     if (pressed(CAM_PITCH_DOWN)) rx += delta;
     if (pressed(CAM_YAW_LEFT))   ry -= delta;
     if (pressed(CAM_YAW_RIGHT))  ry += delta;
     if (pressed(CAM_ROLL_LEFT))  rz -= delta;
     if (pressed(CAM_ROLL_RIGHT)) rz += delta;

     // adjust camera orientation
     if (rx || ry || rz) {
         // yaw left/right
         if (ry) rotate(orientation('y'), ry * ANG_CAM_SPEED);
         // pitch up/down
         if (rx) rotate(orientation('x'), rx * ANG_CAM_SPEED);
         // roll left/right
         if (rz) rotate(orientation('z'), rz * ANG_CAM_SPEED);
     }
     // adjust camera position
     if (dx || dy || dz) {
         Vector displacement =
          (float) dx * CAM_SPEED * orientation('x') +
          (float) dy * CAM_SPEED * orientation('y') +
          (float) dz * CAM_SPEED * orientation('z');
         translate(displacement.x, displacement.y, displacement.z);
     }

     current = this;

     // update the view transformation
     Vector p = position();
     Vector h = ::normal(orientation('z'));
     Vector u = ::normal(orientation('y'));
     view = ::view(p, p + h, u);
 }
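The final line builds a look-at view matrix from the camera's position, its heading, and its up vector.  A sketch of how ::view(p, p + h, u) might be implemented, using the left-handed, row-vector convention of D3DXMatrixLookAtLH (all names here are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3  sub(Vec3 a, Vec3 b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
Vec3 normal(Vec3 a) {
    float n = std::sqrt(dot(a, a));
    return { a.x / n, a.y / n, a.z / n };
}

struct Mat4 { float m[4][4]; };

// left-handed look-at matrix, row-vector convention
Mat4 lookAtLH(Vec3 eye, Vec3 at, Vec3 up) {
    Vec3 z = normal(sub(at, eye));   // camera heading
    Vec3 x = normal(cross(up, z));   // camera right
    Vec3 y = cross(z, x);            // camera up
    return {{ { x.x, y.x, z.x, 0 },
              { x.y, y.y, z.y, 0 },
              { x.z, y.z, z.z, 0 },
              { -dot(x, eye), -dot(y, eye), -dot(z, eye), 1 } }};
}

// transform a point (row vector, w = 1) by the matrix
Vec3 apply(Vec3 v, const Mat4& m) {
    return { v.x * m.m[0][0] + v.y * m.m[1][0] + v.z * m.m[2][0] + m.m[3][0],
             v.x * m.m[0][1] + v.y * m.m[1][1] + v.z * m.m[2][1] + m.m[3][1],
             v.x * m.m[0][2] + v.y * m.m[1][2] + v.z * m.m[2][2] + m.m[3][2] };
}
```

The defining property is that the matrix maps the camera's own position to the origin of camera space and points the camera's heading along the positive z axis.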


The destructor removes the object's address from the Coordinator's list:

 Camera::~Camera() { coordinator->remove(this); }


The Object component manages all categories of drawable objects, including two-dimensional sprites and three-dimensional objects represented by graphics primitives, and passes world transformation data to the Graphic component.  In this sample, there are two categories:

  • SPRITE - a two-dimensional drawable object
  • LIT_OBJECT - a three-dimensional object represented by lit graphics primitives

The iObject interface exposes a new virtual method that reports whether or not the current object belongs to the specified category:

 // iObject.h

 class iObject : public Frame, public Base {
     // ...
     virtual bool belongsTo(Category category) const = 0;
     // ...
     friend class Coordinator;
     friend class Design;
 };

 iObject* CreateObject(iGraphic*, unsigned char a = 0);
 iObject* CreateSprite(iGraphic*, unsigned char a = 0);
 iObject* Clone(const iObject*);

The Object class receives the category upon instantiation and retains it in an instance variable:

 // Object.h

 class Object : public iObject {
     Category category; // category
     // ...
     Object(Category, iGraphic*, unsigned char);
     // ...
     bool belongsTo(Category c) const { return c == category; }
     // ...
 };



The constructor receives the category and stores it in an instance variable: 

 Object::Object(Category c, iGraphic* v, unsigned char a) :
  category(c), graphic(v), texture(nullptr), alpha(a ? a : TEX_ALPHA) {
     // ...
 }

The render() method renders the current object's graphic representation in either of two ways depending upon the object's category:

  • SPRITE - uses sprite technology
  • LIT_OBJECT - sets the world transformation and uses graphics primitive technology

 void Object::render() {
     if (graphic) {
         if (category == SPRITE) {
             Vector pos = position();
             texture->attach(graphic->width(), graphic->height());
             graphic->render((int)pos.x, (int)pos.y, alpha);
         }
         else {
             // set the world transformation and draw the primitive set
             // ...
         }
     }
 }

The Graphic component manages the graphic representations throughout the framework.  It includes a VertexList template for building sets of graphics primitives and three functions for building VertexList objects from graphics primitives:

  • CreateBox() - creates a brick-like box with six faces
  • CreateGrid() - creates a grid of equally spaced lines
  • TriangleList() - creates a set of graphics primitives from a list of vertices on file

The Graphic component makes the appropriate API calls to draw both sprite and graphic primitive representations on the backbuffer.  It includes an APIVertexList template, which implements the VertexList functionality in the Translation Layer by building vertex buffers, and a LitVertex class, which defines the structure of a single lit vertex.

Graphic Class

The iGraphic interface derives from the Base class and exposes a method that sets the world transformation on the fixed function pipeline:

 // iGraphic.h

 class iGraphic : public Base {
     virtual void setWorld(const void*) = 0;
     // ...
     friend class Coordinator;
     friend class Design;
     friend class Object;
 };

 iGraphic* CreateGraphic(int = 0, int = 0);
 iGraphic* CreateBox(float, float, float, float, float, float, const Colour&);
 iGraphic* CreateGrid(float, float, int, const Colour&);
 iGraphic* TriangleList(const wchar_t*, const Colour&);

The Graphic class, which implements sprite representations, defines setWorld() as a stub:

 class Graphic : public iGraphic {
     // ...
     void setWorld(const void*) {}
     // ...
 };

VertexList Template

The VertexList template, which implements graphic primitive representations, derives from the Graphic class and defines the structure of classes built for any vertex type (T): 

 template <class T>
 class VertexList : public Graphic {
     APIVertexList<T>* apiVertexList; // points to the API Primitive Set
     virtual ~VertexList()                { if (apiVertexList)
                                            apiVertexList->Delete(); }
     VertexList(): apiVertexList(nullptr) { }
     VertexList(PrimitiveType, int);
     VertexList(const VertexList& src)    { apiVertexList = nullptr;
                                           *this = src; }
     VertexList& operator=(const VertexList&);
     void* clone() const                  { return new VertexList(*this); }
     virtual unsigned add(const T& v);
     Vector position(unsigned) const;
     void setWorld(const void* w)         { apiVertexList->setWorld(w); }
     void render(int, int, unsigned char) { apiVertexList->draw(); }
     void suspend()                       { apiVertexList->suspend(); }
     void release()                       { apiVertexList->release(); }
     void Delete() const                  { delete this; }
 };

The CreateVertexList function creates a VertexList object on dynamic memory: 

 template <class T>
 iGraphic* CreateVertexList(PrimitiveType t, int np) {
     return new VertexList<T>(t, np);
 }


The constructor creates an APIVertexList to interface with the Direct3D API. 

 template <class T>
 VertexList<T>::VertexList(PrimitiveType t, int np) {
     apiVertexList = CreateAPIVertexList<T>(t, np);
 }

The assignment operator deletes the APIVertexList object and clones the source APIVertexList object:

 template <class T>
 VertexList<T>& VertexList<T>::operator=(const VertexList<T>& src) {
     if (this != &src) {
         if (apiVertexList)
             apiVertexList->Delete();
         apiVertexList = src.apiVertexList->clone();
     }
     return *this;
 }
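The delete-then-clone idiom that this operator relies on can be illustrated with a small self-contained sketch; the Res and Handle names below are invented for illustration, with Res standing in for the API-level object:

```cpp
#include <cassert>

// Res stands in for the API-level object that owns a resource
struct Res {
    int value;
    Res* clone() const { return new Res{ value }; }
};

// Handle stands in for the wrapper class (VertexList in the text)
struct Handle {
    Res* res = nullptr;
    Handle() = default;
    Handle(int v) : res(new Res{ v }) {}
    ~Handle() { delete res; }
    Handle(const Handle& src) { *this = src; }
    Handle& operator=(const Handle& src) {
        if (this != &src) {          // guard against self-assignment
            delete res;              // release the current resource
            res = src.res ? src.res->clone() : nullptr; // deep copy
        }
        return *this;
    }
};
```

Each copy owns its own cloned resource, so destroying one handle never invalidates another.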


The add() method adds a single vertex to the list:

 template <class T>
 unsigned VertexList<T>::add(const T& v) {
     return apiVertexList ? apiVertexList->add(v) : 0;
 }


The position() method reports the position of the specified vertex in local coordinates:

 template <class T>
 Vector VertexList<T>::position(unsigned i) const {
     return apiVertexList ? apiVertexList->position(i) : Vector();
 }

VertexList Functions

Create Box

The CreateBox function creates a set of graphics primitives that model a brick-like box.  This function creates an empty triangle list primitive set with coloured vertices, populates the set with pairs of triangles that represent the six sides of the box, and returns the address of the Graphic object: 

 iGraphic* CreateBox(float minx, float miny, float minz, float maxx,
  float maxy, float maxz, const Colour& colour) {
     VertexList<LitVertex>* vertexList =
      (VertexList<LitVertex>*)CreateVertexList<LitVertex>(TRIANGLE_LIST, 12);
     float x = (minx + maxx) / 2;
     float y = (miny + maxy) / 2;
     float z = (minz + maxz) / 2;

     // locate centroid at origin
     minx -= x;
     miny -= y;
     minz -= z;
     maxx -= x;
     maxy -= y;
     maxz -= z;
     Vector p1 = Vector(minx, miny, minz),
            p2 = Vector(minx, maxy, minz),
            p3 = Vector(maxx, maxy, minz),
            p4 = Vector(maxx, miny, minz),
            p5 = Vector(minx, miny, maxz),
            p6 = Vector(minx, maxy, maxz),
            p7 = Vector(maxx, maxy, maxz),
            p8 = Vector(maxx, miny, maxz);
     add(vertexList, p1, p2, p3, p4, colour); // front
     add(vertexList, p4, p3, p7, p8, colour); // right
     add(vertexList, p8, p7, p6, p5, colour); // back
     add(vertexList, p6, p2, p1, p5, colour); // left
     add(vertexList, p1, p4, p8, p5, colour); // bottom
     add(vertexList, p2, p6, p7, p3, colour); // top
     return vertexList;
 }
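The centering arithmetic at the top of CreateBox() can be checked in isolation. In this sketch, the Extent type is an invented helper: shifting both extents of an interval by their midpoint leaves an interval that is symmetric about zero, which is what places the box's centroid at the local origin:

```cpp
#include <cassert>

// an interval along one axis of the box (invented helper type)
struct Extent { float min, max; };

// shift the interval by its midpoint so that it straddles zero,
// mirroring the minx -= x; maxx -= x; steps in CreateBox()
Extent centre(Extent e) {
    float mid = (e.min + e.max) / 2;
    return { e.min - mid, e.max - mid };
}
```

Applying this to each of the three axes centres the whole box on its centroid.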

Create Grid

The CreateGrid function creates a set of graphics primitives that model a square grid of equally spaced lines.  This function creates an empty line list primitive set with coloured vertices, populates the set with pairs of vertices that define the grid lines in both directions, and returns the address of the Graphic object:

 iGraphic* CreateGrid(float min, float max, int n, const Colour& colour) {
     VertexList<LitVertex>* vertexList =
      (VertexList<LitVertex>*)CreateVertexList<LitVertex>(LINE_LIST, 2*n+2);
     float x = (min + max) / 2;

     min -= x;
     max -= x;
     float cur = min, inc = (max - min) / float(n - 1);
     for (int i = 0; i < n; i++, cur += inc) {
         vertexList->add(LitVertex(Vector(min, 0, cur), colour));
         vertexList->add(LitVertex(Vector(max, 0, cur), colour));
         vertexList->add(LitVertex(Vector(cur, 0, min), colour));
         vertexList->add(LitVertex(Vector(cur, 0, max), colour));
     }
     return vertexList;
 }

Triangle List

The TriangleList function creates a set of graphics primitives from a list of vertices stored in a file.  This function reads the vertex coordinates from the file, creates a triangle list primitive set with coloured vertices, populates the set with the vertices translated so that their centroid lies at the local origin, and returns the address of the Graphic object:

 iGraphic* TriangleList(const wchar_t* file, const Colour& colour) {
     iGraphic* graphic = nullptr;
     int len = wcslen(file) + wcslen(ASSET_DIRECTORY) + 1;
     wchar_t* absFile = new wchar_t[len + 1];
     ::nameWithDir(absFile, ASSET_DIRECTORY, file, len);
     std::wifstream in(absFile, std::ios::in);
     delete [] absFile;
     float x, y, z, xc = 0, yc = 0, zc = 0;
     unsigned no = 0;

     // count the number of records and accumulate the centroid
     while (in) {
         in >> x >> y >> z;
         if (in.good()) {
             xc += x;
             yc += y;
             zc += z;
             no++;
         }
     }
     if (no) {
         float max = 0;
         VertexList<LitVertex>* vertexList = (VertexList<LitVertex>*)
          CreateVertexList<LitVertex>(TRIANGLE_LIST, no / 3);
         xc /= no;
         yc /= no;
         zc /= no;
         // rewind the file and read the vertices a second time
         in.clear();
         in.seekg(0);
         for (unsigned i = 0; i < no; i++) {
             in >> x >> y >> z;
             vertexList->add(LitVertex(Vector(x - xc, y - yc, z - zc),
              colour));
             if (x - xc > max) max = x - xc;
             if (y - yc > max) max = y - yc;
             if (z - zc > max) max = z - zc;
         }
         graphic = vertexList;
     }
     return graphic;
 }

Add Function

The add() function adds to the vertex list a pair of triangles that represent the quadrilateral defined by the four received points.  The two triangles share the diagonal from p1 to p3:

 void add(VertexList<LitVertex>* vertexList, const Vector& p1, const Vector& p2,
  const Vector& p3, const Vector& p4, const Colour& colour) {
     vertexList->add(LitVertex(p1, colour));
     vertexList->add(LitVertex(p2, colour));
     vertexList->add(LitVertex(p3, colour));
     vertexList->add(LitVertex(p1, colour));
     vertexList->add(LitVertex(p3, colour));
     vertexList->add(LitVertex(p4, colour));
 }
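The split of a quadrilateral into two triangles reduces to an index pattern over its corners. This sketch (function names invented for illustration) records the order in which the four corners are emitted and confirms that the diagonal corners appear in both triangles:

```cpp
#include <cassert>
#include <vector>

// corners 0..3 of the quadrilateral are emitted as two triangles
// that share the diagonal from corner 0 to corner 2 (p1 and p3
// in the add() function above)
std::vector<int> quadIndices() {
    return { 0, 1, 2,   0, 2, 3 };
}

// count how often a corner appears in the emitted sequence
int occurrences(const std::vector<int>& v, int corner) {
    int n = 0;
    for (int i : v) if (i == corner) n++;
    return n;
}
```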

APIVertexList Template

The APIVertexList template connects VertexList classes to the graphics primitive support provided by the APIs. 

The APIVertexList template derives from the APIGraphic class.  Its instance variables include:

  • type - the type of Direct3D primitive
  • vb - the interface to the Direct3D vertex buffer COM object
  • vDecl - the interface to the Direct3D vertex declaration COM object
  • nPrimitives - the number of graphics primitives in the set
  • maxNo - the maximum number of vertices in the vertex array
  • vertex - the address of the vertex list
  • nVertices - the number of vertices in the vertex list

 template <class T>
 class APIVertexList : public APIGraphic {
     D3DPRIMITIVETYPE        type;        // primitive type
     IDirect3DVertexBuffer9* vb;          // points to the vertex buffer
     IDirect3DVertexDeclaration9* vDecl;  // vertex declaration
     unsigned                nPrimitives; // number of primitives
     unsigned                maxNo;       // maximum number of vertices
     T*                      vertex;      // points to the array of vertices
     unsigned                nVertices;   // number of vertices
     virtual ~APIVertexList()                { release(); delete [] vertex; }
     void     setup();
     unsigned attach();
     APIVertexList() : vertex(nullptr), vb(nullptr) {}
     APIVertexList(PrimitiveType, unsigned);
     APIVertexList& operator=(const APIVertexList&);
     APIVertexList(const APIVertexList& src) { vertex = nullptr; vb = nullptr;
                                                vDecl = nullptr; *this = src; }
     APIVertexList* clone() const            { return new APIVertexList(*this); }
     virtual unsigned add(const T& v);
     Vector  position(unsigned i) const      { return vertex[i].position(); }
     void    draw();
     void    suspend();
     void    release()                       { suspend(); }
     void    Delete() const                  { delete this; }
 };


The constructor initializes the number of primitives to the value received, initializes the address of the vertex buffer, converts the received primitive type into its Direct3D equivalent, allocates memory for the vertex array and sets the index of the next vertex to be added to the array:

 template <class T>
 APIVertexList<T>::APIVertexList(PrimitiveType t, unsigned np) :
  nPrimitives(np), nVertices(0), vb(nullptr) {

     if (np <= 0) {
         maxNo  = 0;
         vertex = nullptr;
     }
     else {
         // determine the number of vertices for the primitive type
         switch (t) {
             case POINT_LIST    : maxNo = np;
              type = D3DPT_POINTLIST;     break;
             case LINE_LIST     : maxNo = 2 * np;
              type = D3DPT_LINELIST;      break;
             case LINE_STRIP    : maxNo = np + 1;
              type = D3DPT_LINESTRIP;     break;
             case TRIANGLE_LIST : maxNo = 3 * np;
              type = D3DPT_TRIANGLELIST;  break;
             case TRIANGLE_STRIP: maxNo = np + 2;
              type = D3DPT_TRIANGLESTRIP; break;
             case TRIANGLE_FAN  : maxNo = np + 1;
              type = D3DPT_TRIANGLEFAN;   break;
             default            : maxNo = np;
              type = D3DPT_POINTLIST;
         }
         vertex = new T[maxNo];
     }
     vDecl = nullptr;
 }

Direct3D supports the following primitive types:

  • POINT_LIST - np points require np vertices
  • LINE_LIST - np lines require 2 * np vertices
  • LINE_STRIP - np connected lines require np + 1 vertices
  • TRIANGLE_LIST - np triangles require 3 * np vertices
  • TRIANGLE_STRIP - np connected triangles require np + 2 vertices
  • TRIANGLE_FAN - np triangles about a shared vertex require np + 1 vertices

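The vertex-count rules in the constructor's switch statement can be isolated into a small function. This sketch (the enum and function names are invented) reproduces the arithmetic so that it can be checked for each primitive type:

```cpp
#include <cassert>

// the primitive types handled by the constructor (invented enum)
enum PrimType { POINT_LIST, LINE_LIST, LINE_STRIP,
                TRIANGLE_LIST, TRIANGLE_STRIP, TRIANGLE_FAN };

// the number of vertices needed to store np primitives of each type
unsigned vertexCount(PrimType t, unsigned np) {
    switch (t) {
        case LINE_LIST:      return 2 * np;  // 2 vertices per line
        case LINE_STRIP:     return np + 1;  // each line adds one vertex
        case TRIANGLE_LIST:  return 3 * np;  // 3 vertices per triangle
        case TRIANGLE_STRIP: return np + 2;  // each triangle adds one
        case TRIANGLE_FAN:   return np + 1;  // hub plus rim vertices
        default:             return np;      // point list: one each
    }
}
```

For example, the 12-triangle box built by CreateBox() needs a triangle list of 36 vertices.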
Assignment Operator

The assignment operator copies over the instance variables from the source object and releases the interfaces to the vertex declaration and the vertex buffer COM objects: 

 template <class T>
 APIVertexList<T>& APIVertexList<T>::operator=(const APIVertexList<T>& src) {
     if (this != &src) {
         maxNo       = src.maxNo;
         nPrimitives = src.nPrimitives;
         nVertices   = src.nVertices;
         type        = src.type;
         (APIGraphic&)(*this) = src;
         if (vertex) {
             delete [] vertex;
             vertex = nullptr;
         }
         vertex = new T[maxNo];
         for (unsigned i = 0; i < nVertices; i++)
             vertex[i] = src.vertex[i];
         if (vb) {
             vb->Release();
             vb = nullptr;
         }
         if (vDecl) {
             vDecl->Release();
             vDecl = nullptr;
         }
     }
     return *this;
 }


The add() method adds the received vertex to the array of vertices and returns the index of the added vertex:

 template <class T>
 unsigned APIVertexList<T>::add(const T& v) {
     unsigned n = nVertices;
     if (nVertices < maxNo)
         vertex[nVertices++] = v;
     return n;
 }
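The bounded behaviour of add() can be demonstrated without Direct3D. In this sketch (the Sketch type and its members are invented), a full list ignores further vertices but still returns the would-be index, just as the method above returns nVertices without storing anything once capacity is reached:

```cpp
#include <cassert>

// minimal stand-in for the vertex array and its bounded add()
struct Sketch {
    int data[3];
    unsigned n = 0, maxNo = 3;
    unsigned add(int v) {
        unsigned i = n;                 // index of the incoming vertex
        if (n < maxNo) data[n++] = v;   // store only while space remains
        return i;
    }
};
```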


The setup() method creates the vertex declaration, creates the vertex buffer, and populates the vertex buffer with the data in the array of vertices:

 template <class T>
 void APIVertexList<T>::setup() {
     unsigned vBufSize = APIVertexDeclaration<T>::size() * nVertices;

     if (FAILED(d3dd->CreateVertexDeclaration(APIVertexDeclaration<T>::format(),
      &vDecl)))
         error(L"APIVertexList::07 Unable to create vertex declaration");
     else if (!nVertices) {
         error(L"APIVertexList::09 No vertices have been stored");
         vb = nullptr;
     }
     else if (FAILED(d3dd->CreateVertexBuffer(vBufSize, 0, 0, D3DPOOL_DEFAULT,
      &vb, nullptr))) {
         error(L"APIVertexList::11 Couldn\'t create the vertex buffer");
         vb = nullptr;
     }
     else {
         void* pv;
         if (SUCCEEDED(vb->Lock(0, vBufSize, &pv, 0)))
             for (unsigned i = 0; i < nVertices; i++)
                 vertex[i].populate(&pv);
         vb->Unlock();
     }
 }

The CreateVertexDeclaration() method on the display device creates a vertex declaration for the specified vertex type with the specified format. 

The CreateVertexBuffer() method on the display device creates the Direct3D vertex buffer on the default memory pool (D3DPOOL_DEFAULT). 

The Lock() method on the vertex buffer locks the buffer and returns its address.  The call to the populate() method on the Vertex object copies the data from the vertex element into the vertex buffer.  The Unlock() method on the vertex buffer unlocks the buffer.  A locked buffer must be unlocked before its data can be accessed in any drawing process. 


The attach() method sets up the vertex buffer if necessary, sets the vertex declaration on the display device and attaches the buffer to the display device:

 template <class T>
 unsigned APIVertexList<T>::attach() {
     if (!vb) setup();
     if (vb) {
         d3dd->SetVertexDeclaration(vDecl);
         d3dd->SetStreamSource(0, vb, 0, APIVertexDeclaration<T>::size());
     }
     return nVertices;
 }


The draw() method attaches the vertex buffer to the display device and draws the set of primitives to the backbuffer:

 template <class T>
 void APIVertexList<T>::draw() {
     attach();
     if (vb) d3dd->DrawPrimitive(type, 0, nPrimitives);
 }

We assume that the appropriate world, view, and projection transformations have been set on the display device and that the BeginScene() method on the display device has executed. 


The suspend() method releases the interfaces to the vertex buffer and vertex declaration COM objects:

 template <class T>
 void APIVertexList<T>::suspend() {
     if (vb) {
         vb->Release();
         vb = nullptr;
     }
     if (vDecl) {
         vDecl->Release();
         vDecl = nullptr;
     }
 }


The LitVertex class defines the structure of a LitVertex object, which holds the information for a lit vertex. 

The instance variables of the LitVertex class include the coordinates of the bound vector that defines the vertex's position in local coordinates and the vertex's colour:

 class LitVertex {
     float  x; // x coordinate in the local frame
     float  y; // y coordinate in the local frame
     float  z; // z coordinate in the local frame
     Colour c; // colour
     LitVertex();
     LitVertex(const Vector&, const Colour&);
     void   populate(void**) const;
     Vector position() const;
 };

Vertex Declaration

The vertex declaration defines the vertex format and its size: 

 template <>
 D3DVERTEXELEMENT9 APIVertexDeclaration<LitVertex>::fmt[MAXD3DDECLLENGTH + 1]
  = {
     {0,  0, D3DDECLTYPE_FLOAT3,   D3DDECLMETHOD_DEFAULT,
      D3DDECLUSAGE_POSITION, 0},
     {0, 12, D3DDECLTYPE_D3DCOLOR, D3DDECLMETHOD_DEFAULT,
      D3DDECLUSAGE_COLOR,    0},
     D3DDECL_END()
 };

 template <>
 unsigned APIVertexDeclaration<LitVertex>::vertexSize = 16;


The constructors initialize the coordinates of the bound vector and the colour of the vertex, either to zero or to the received values:

 LitVertex::LitVertex() : x(0), y(0), z(0), c(0) {}

 LitVertex::LitVertex(const Vector& p, const Colour& colour) :
  x(p.x), y(p.y), z(p.z), c(colour) {}


The populate() method populates the received address with the vertex data according to vertex type:

 void  LitVertex::populate(void** pv) const {
     float* p = *(float**)pv;
     *p++ = x;
     *p++ = y;
     *p++ = z;
     *((unsigned*)p++) = COLOUR_TO_ARGB(c);
     *pv  = p;
 }

The COLOUR_TO_ARGB() macro converts the four floating-point components of a Colour - red, green, blue, alpha - into a single packed unsigned value in ARGB order.
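A plausible implementation of this conversion, assuming the components are floats in the range [0, 1], is sketched below; the framework's actual macro may differ in clamping or rounding, and the function name here is invented:

```cpp
#include <cassert>

// pack four colour components in [0, 1] into one ARGB unsigned
// (illustrative stand-in for the framework's COLOUR_TO_ARGB macro)
unsigned packARGB(float r, float g, float b, float a) {
    unsigned ia = (unsigned)(a * 255),
             ir = (unsigned)(r * 255),
             ig = (unsigned)(g * 255),
             ib = (unsigned)(b * 255);
    return (ia << 24) | (ir << 16) | (ig << 8) | ib;
}
```

Opaque red packs to 0xFFFF0000: alpha in the top byte, then red, green, and blue.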


The position() method on the LitVertex object returns the bound vector for the vertex:

 Vector LitVertex::position() const {
     return Vector(x, y, z);
 }


The Display object passes the world, view, and projection matrices to the Direct3D API.  Since the calculation of the projection matrix depends upon the aspect ratio of the display device and resizing of the application window changes this aspect ratio, resizing requires a recalculation of the projection matrix.  For this sample the lighting is turned off and no shading is applied.


The iDisplay interface exposes two new methods:

  • beginDrawObject() - sets the world transformation
  • resize() - recalculates the projection matrix

 // iDisplay.h

 class iAPIDisplay {
     virtual void configure(int, int, int)                             = 0;
     virtual void setProjection(void*)                                 = 0;
     virtual bool setup()                                              = 0;
     virtual void beginDrawFrame(const void*)                          = 0;
     virtual void set(RenderState, bool)                               = 0;
     virtual void endDrawFrame()                                       = 0;
     virtual bool restore()                                            = 0;
     virtual void release()                                            = 0;
     virtual void Delete()                                             = 0;
     friend class Coordinator;
     friend class APIUserInput;
 };

 iAPIDisplay* CreateAPIDisplay();

Class Definition

The APIDisplay class holds the selected display configuration along with the projection and view transformations, adds two private methods that set up the projection transformation and the alpha-blending parameters, and implements the methods exposed by the interface:

 class APIDisplay : public iAPIDisplay, public APIBase {

    // selected configuration
    int      displayId;         // APIDisplay adapter identifier
    int      mode;              // resolution mode identifier
    int      pixel;             // pixel format identifier
    Matrix   projection;        // transformation from camera to clip space
    Matrix   view;              // transformation from world to camera space

    D3DPRESENT_PARAMETERS d3dpp; // parameters for creating/restoring D3D
                                 // APIDisplay device

    void setupProjection();      // sets up the projection matrix
    void setupBlending();        // sets up alpha-blending

    APIDisplay(const APIDisplay& d);            // prevents copying
    APIDisplay& operator=(const APIDisplay& d); // prevents assignments
    virtual ~APIDisplay();

    void configure(int, int, int);
    void setProjection(void*);
    bool setup();
    void beginDrawFrame(const void*);
    void set(RenderState, bool);
    void endDrawFrame();
    bool restore();
    void release();
    void Delete() { delete this; }
 };



The setup() method on the Display object retrieves the interface to the display device for either hardware vertex processing or software vertex processing, depending on the capabilities available and sets up the projection and lighting parameters: 

 bool APIDisplay::setup(void* hwnd) {
     bool rc = false;
     // ...
     D3DCAPS9 caps;
     d3d->GetDeviceCaps(adapter, D3DDEVTYPE_HAL, &caps);

     // hardware or software vertex processing?
     DWORD behaviorFlags;
     if ((caps.DevCaps & D3DDEVCAPS_HWTRANSFORMANDLIGHT) == 0)
         behaviorFlags = D3DCREATE_SOFTWARE_VERTEXPROCESSING;
     else
         behaviorFlags = D3DCREATE_HARDWARE_VERTEXPROCESSING;

     // retrieve the interface to the D3D APIDisplay device
     if (d3dd)
         error(L"APIDisplay::11 Pointer to Direct3D interface is not nullptr");
     else if (FAILED(d3d->CreateDevice(adapter, D3DDEVTYPE_HAL, (HWND)hwnd,
      behaviorFlags, &d3dpp, &d3dd)))
         error(L"APIDisplay::12 Failed to create Direct3D device");
     else {
         // setup successful
         rc = true;
     }

     return rc;
 }

The GetDeviceCaps() method on the Direct3D object populates an instance of a D3DCAPS9 struct with the capabilities of the selected device.  We check these capabilities to determine if hardware transform and lighting is available.  If so, we specify hardware vertex processing (D3DCREATE_HARDWARE_VERTEXPROCESSING).  If not, we select software vertex processing (D3DCREATE_SOFTWARE_VERTEXPROCESSING).
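This decision reduces to a flag-selection function. In the sketch below, the capability bit and the two creation flags are stood in by local constants rather than the real Direct3D values, so only the selection logic is illustrated:

```cpp
#include <cassert>

// illustrative stand-ins, NOT the real Direct3D constant values
const unsigned CAP_HW_TNL = 0x1;   // hardware transform-and-light bit
const unsigned FLAG_HW_VP = 0x40;  // hardware vertex processing flag
const unsigned FLAG_SW_VP = 0x20;  // software vertex processing flag

// choose the behaviour flag from the device capability bits
unsigned selectVertexProcessing(unsigned devCaps) {
    return (devCaps & CAP_HW_TNL) ? FLAG_HW_VP : FLAG_SW_VP;
}
```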

Setup Projection

The setProjection() method on the Display object receives the address of a projection matrix, built using the global projection function from the math library, and sets the projection transformation in the Direct3D API:

 void APIDisplay::setProjection(void* projection) {
     if (d3dd)
         d3dd->SetTransform(D3DTS_PROJECTION, (D3DXMATRIX*)projection);
 }

Setup Blending

The setupBlending() method defines the relative contributions from the source and the destination during alpha-blending:

 void APIDisplay::setupBlending() {

     if (d3dd) {
         // how alpha-blending is done (when drawing transparent things)
         d3dd->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
         d3dd->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
     }
 }

The SetRenderState() method on the display device sets a state of the device; the complete list of settable states is in the DirectX documentation.  The first argument in a call to this method is the enumeration constant that identifies the state to be set; the second argument is the new setting.  Elsewhere in the setup, the D3DRS_LIGHTING enumeration constant turns off all lighting and the D3DRS_SHADEMODE enumeration constant sets the shading state to a single solid colour across an entire graphics primitive, as defined by the colour of the first vertex of each primitive.


The resize() method on the Display object recalculates the aspect ratio for the current window dimensions and resets the projection transformation accordingly: 

 void Display::resize() {
     if (runInWindow) {
         int width  = context->get(GF_WN_WDTH);
         int height = context->get(GF_WN_HGHT);

         // reset the aspect ratio
         aspect = float(width) / height;
         context->set(GF_FR_ASP, aspect);
         // reset the projection transformation
         // ...
     }
 }
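The role of the aspect ratio in the projection can be checked numerically. For a vertical field of view fov, a D3DX-style perspective matrix scales y by cot(fov/2) and x by that same factor divided by the aspect ratio; the sketch below (with invented names) reproduces that arithmetic, which is why a resize must rebuild the projection:

```cpp
#include <cassert>
#include <cmath>

// the x and y scale factors of a perspective projection (invented type)
struct ProjScale { float sx, sy; };

// compute the frustum scale factors for a vertical field of view
// (in radians) and a width-to-height aspect ratio
ProjScale frustumScale(float fovRadians, float aspect) {
    float cot = 1.0f / std::tan(fovRadians / 2);
    return { cot / aspect, cot };
}
```

Doubling the window's width at a fixed height halves the x scale, keeping rendered objects undistorted.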

Begin Draw Frame

The beginDrawFrame() method on the Display object receives the address of the view transformation matrix and sets it in the Direct3D API:

 void APIDisplay::beginDrawFrame(const void* view) {

     // set the view transformation
     if (d3dd && view) d3dd->SetTransform(D3DTS_VIEW, (D3DXMATRIX*)view);

     // ...
 }


The set() method sets the render state: 

 void APIDisplay::set(RenderState state, bool b) {

     if (d3dd) {
         switch (state) {
             case WIRE_FRAME:
                 d3dd->SetRenderState(D3DRS_FILLMODE,
                  b ? D3DFILL_WIREFRAME : D3DFILL_SOLID);
                 break;
             case ALPHA_BLEND:
                 d3dd->SetRenderState(D3DRS_ALPHABLENDENABLE, b);
                 break;
             case LIGHTING:
                 d3dd->SetRenderState(D3DRS_LIGHTING, b);
                 break;
         }
     }
 }


The restore() method on the Display object resets the projection transformation and the lighting parameters: 

 bool Display::restore() {
     // ...
     // complete the restoration
     if (rc) {
         // ...
     }
     return rc;
 }


  • Check the D3DFVF section in the DirectX documentation
  • Read about primitives, FVF and vertex buffers at Toymaker
  • Introduce a movable box, define a set of user actions to translate it, and associate default keys with those actions
  • Create a vertex list that represents a pyramid and construct a pyramid Object using that list.  Use the CreateGrid() function as an example.  Create a pyramid object using this vertex list in your Design object


  Designed by Chris Szalwinski