TigerHeart II: First impressions of OpenGL

I am currently implementing some classes for the new “TigerHeart” graphics engine using the OpenGL pipeline. That is the right way to start, because I am a beginner in OpenGL, which differs from Direct3D a little more than I expected. Since I program professionally with Direct3D, I can judge how to design interfaces, classes and their interactions so that they can be used with both APIs.
A good example is shader programming: in Direct3D, vertex and pixel shaders can be set largely independently of each other. In OpenGL, however, you have to create a program object to which multiple shaders are attached. Afterwards this “program” must be linked and bound to make use of the shaders.
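A minimal sketch of that workflow, assuming OpenGL 2.0 entry points and already-compiled shader objects (the helper name is made up for the example):

```cpp
#include <GL/gl.h>  // plus an extension loader such as GLEW for OpenGL 2.0 entry points

// Hypothetical helper: links already-compiled shader objects into one program
// object and binds it. In Direct3D 9, by contrast, SetVertexShader() and
// SetPixelShader() can be applied independently of each other.
GLuint LinkAndUseProgram(GLuint vertexShader, GLuint fragmentShader)
{
    GLuint program = glCreateProgram();      // create the container object
    glAttachShader(program, vertexShader);   // attach the vertex stage
    glAttachShader(program, fragmentShader); // attach the pixel ("fragment") stage
    glLinkProgram(program);                  // link both stages together

    GLint linked = 0;
    glGetProgramiv(program, GL_LINK_STATUS, &linked);
    if (!linked)                             // linking can fail, e.g. on mismatched varyings
    {
        glDeleteProgram(program);
        return 0;
    }
    glUseProgram(program);                   // bind the whole program for rendering
    return program;
}
```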

[Images: per-vertex lighting (left), per-pixel lighting (right)]

The two images above show my current progress. The model is a cube with shared, rounded vertex normals, which is not a realistic assignment but a good test. The first image shows the common per-vertex diffuse and specular lighting. The quality is poor because both lighting colors have to be linearly interpolated between the eight vertices.
The second image shows the same calculations moved to the pixel shader. A normal map is not needed unless you want to add detail to the object without adding vertices. The quality seems to be nearly perfect.

The mesh is rendered using vertex buffer objects (VBOs), which are the fastest way to draw complex models in OpenGL, and the shaders are written in GLSL. This language is comparable to Microsoft’s HLSL but has some design differences. For example, the compiler is integrated into the graphics driver, and there is no way to specify shader model targets.
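As an illustration, here is a rough sketch of how a mesh might be uploaded into VBOs and drawn; the interleaved vertex layout and the function name are made up for the example, and a real engine would of course create the buffers once and reuse them instead of per draw call:

```cpp
#include <GL/gl.h>   // OpenGL 1.5+ entry points (via an extension loader such as GLEW)
#include <cstddef>   // offsetof

// Illustrative interleaved vertex layout: position followed by normal.
struct Vertex { float pos[3]; float normal[3]; };

// Hypothetical sketch: upload vertex and index data into GPU buffers and draw from them.
void UploadAndDrawMesh(const Vertex* vertices, GLsizei vertexCount,
                       const GLushort* indices, GLsizei indexCount)
{
    GLuint buffers[2];
    glGenBuffers(2, buffers);

    // Upload vertices into a vertex buffer object.
    glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
    glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(Vertex), vertices, GL_STATIC_DRAW);

    // Upload indices into an element (index) buffer object.
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexCount * sizeof(GLushort), indices, GL_STATIC_DRAW);

    // Point the attribute arrays into the bound VBO (offsets instead of client pointers).
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (const void*)offsetof(Vertex, pos));
    glNormalPointer(GL_FLOAT, sizeof(Vertex), (const void*)offsetof(Vertex, normal));

    // Draw the whole mesh from GPU-resident buffers.
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, (const void*)0);

    glDeleteBuffers(2, buffers);  // only for this self-contained sketch
}
```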

TigerHeart II: OpenGL vs. Direct3D

I wrote that the second version of “TigerHeart” should be able to use both OpenGL and Direct3D for rendering. But why do we still need to support both APIs today? It is a lot of work that has to be done twice.

Currently some hardware vendors do not provide good OpenGL drivers, so you can run into problems on certain graphics cards. As a game developer, however, you want the widest possible range of hardware to run your software without any difficulty. For this reason Direct3D is a good choice, because its behavior is far more consistent across vendors. Moreover, you get better support for older hardware, since many features can be emulated on the CPU when the GPU does not support them, and HLSL shaders can be compiled against Shader Model 1 targets, which is not possible with GLSL.
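To illustrate the last point, here is a rough sketch using the D3DX utility library, where the target profile is passed explicitly; GLSL’s glCompileShader() has no equivalent parameter, since the driver compiler decides what it can handle. The shader source and entry-point name are placeholders:

```cpp
#include <windows.h>
#include <d3dx9.h>   // link against d3dx9.lib

// Sketch: D3DX lets you pick an explicit shader-model profile such as "vs_1_1",
// so the same HLSL source can be targeted at very old hardware.
bool CompileForShaderModel1(const char* hlslSource, UINT sourceLength)
{
    LPD3DXBUFFER bytecode = NULL;
    LPD3DXBUFFER errors   = NULL;

    HRESULT hr = D3DXCompileShader(
        hlslSource, sourceLength,
        NULL, NULL,          // no macros, no include handler
        "main",              // entry-point name (illustrative)
        "vs_1_1",            // Shader Model 1.1 vertex shader profile
        0,                   // no compile flags
        &bytecode, &errors, NULL);

    if (errors)   errors->Release();
    if (bytecode) bytecode->Release();
    return SUCCEEDED(hr);
}
```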
Media Seasons does not only develop games but also graphics software for television. One requirement there is to output the rendered graphics on the SDI channels of an “NVIDIA Quadro FX” card, which we accomplish with the SDI SDK. Unfortunately that software development kit only supports OpenGL, which matches the Quadro drivers: they are optimized for OpenGL because it is still the standard API for professional products. So we are forced to implement a rendering path for it. On the other hand, we do not have to make our TV software compatible with additional graphics cards, because in this market segment the developer can specify the required hardware exactly.

TigerHeart II: Objectives for the graphics engine

The 3D engine is the largest and most discussed extension to the first “TigerHeart” version. Because it can be used not only for three-dimensional presentation but also for hardware-accelerated, flat 2D drawing, the name will be changed to the more fitting term ‘graphics engine’.
The following characteristics should be achieved:

  1. API base: It must be able to utilize Direct3D and OpenGL for hardware-accelerated display.
  2. Standardization: Objects should be accessible and modifiable independently of the currently active graphics API. A scene created with DirectX can therefore be rendered with OpenGL and vice versa, and tools and helper functions only have to be written once. Nevertheless, the engine provides specialized interfaces, methods and attributes for particular API features and performance concerns; a rough sketch of the idea follows this list.
  3. Object types: There are two major kinds of objects. Displayable objects (such as graph nodes and polygon meshes) depend on each other in one direction only, are transformable, can be rendered and carry attributes like bounding volumes and a position vector. For rendering they use states (such as shaders and textures), which form the second kind of object and are all on the same level. In addition, octrees will help with culling and collision detection, fonts are used by text objects to display characters, animations transform nodes via presets, cameras handle view manipulation and light sources provide illumination.
  4. Multi-pass: Displayable objects should have the ability to be rendered multiple times using different sets of states. This is important for generating shadows, effects and complex lighting.
  5. Streaming: Rendering operations are recorded into buffers, which can be sorted to reduce state changes and to meet other hardware-dependent conditions that increase performance. These stream buffers can be processed in a different thread than the one that generated them. That way the scene can already be modified for the next frame while the previous one is still being drawn.
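As announced in the standardization item above, here is a rough sketch of the idea; all interface names are illustrative and not the engine’s actual design:

```cpp
// Forward declarations for the illustrative resource interfaces.
class IShader;
class ITexture;
class IMesh;

// Displayable objects talk to an abstract device interface; only the
// implementations behind it know whether Direct3D or OpenGL is active.
class IRenderDevice
{
public:
    virtual ~IRenderDevice() {}
    virtual void SetShader(IShader* shader) = 0;
    virtual void SetTexture(unsigned int stage, ITexture* texture) = 0;
    virtual void DrawMesh(const IMesh* mesh) = 0;
};

// One implementation per API; the rest of the engine never sees them directly.
class Direct3DDevice : public IRenderDevice { /* wraps IDirect3DDevice9 calls */ };
class OpenGLDevice   : public IRenderDevice { /* wraps gl* calls */ };

// A scene node is written once against the abstract interface, so the same
// scene can be rendered with either API.
void RenderNode(IRenderDevice& device, IMesh* mesh, IShader* shader)
{
    device.SetShader(shader);
    device.DrawMesh(mesh);
}
```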

TigerHeart II: Objectives for the core

The first important step in the planning phase of a new software project is to set the objectives. Here are the ones that should be achieved for the second version of the “TigerHeart” engine core:

  1. Classes and interfaces: C++ classes are accessed externally via interfaces similar to COM. Furthermore, methods can be called and attributes retrieved and modified through generic functions (e.g. for use from scripts); a rough sketch follows this list.
  2. Derivation: Existing classes can be used as bases for new ones, but those extensions should be accessed through non-derived interfaces.
  3. Object and library management: TigerHeart manages the creation of objects from classes provided by static and dynamic libraries.
  4. Data access and storage: Files in directories and archives are loaded and saved through wrappers. Data can be stored to memory and to files via a standard interface, with compression.
  5. Meta data: Every class is able to retain user-defined attributes.
  6. Design: Functions, classes and interfaces are defined in XML files, which are transformed into header, source code and script files and whatever else is needed. The generated files are then extended with hand-written code.
    Perhaps this workflow will later be simplified by an editor.
  7. Threads: There is native support for processing on multiple cores.
  8. x86 / x64: Code must be compilable into 32 and 64 bit executables for Microsoft Windows XP and Vista. Compatibility with other platforms is optional.
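As mentioned in the first item, here is a rough sketch of the COM-like access scheme; all names are illustrative:

```cpp
#include <string>

// Hypothetical sketch: every engine object is reachable through a
// reference-counted interface, and generic entry points let scripts call
// methods and read or write attributes by name without knowing the
// concrete C++ class.
class IBase
{
public:
    virtual unsigned long AddRef()  = 0;   // manual lifetime management,
    virtual unsigned long Release() = 0;   // as with COM interfaces
    virtual bool QueryInterface(const char* interfaceName, void** outInterface) = 0;

    // Generic access for scripts and tools: members are addressed by name.
    virtual bool CallMethod(const std::string& name, void* arguments, void* result) = 0;
    virtual bool SetAttribute(const std::string& name, const void* value) = 0;
    virtual bool GetAttribute(const std::string& name, void* value) const = 0;

protected:
    virtual ~IBase() {}                    // objects are destroyed only via Release()
};
```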