Voxels for Unity: Release on the Asset Store

In fall, the tool had been rejected by an administrator of the Unity Asset Store because they perceived it as a complete conversion tool, which could not compete with existing packages: generated objects with high color variation caused too many draw calls, and the complete conversion process took too much time. I explained that my extension focuses on the scanning step, that this step is done in a unique way by using the GPU and would therefore add a matchless feature to the store, and that the generator scripts are only examples showing how to use it in existing or custom frameworks. The administrator then advised me to adapt the description, to change the title and maybe to include more examples. I did revise the description and added a subtitle to underline the main function of the extension. But before that, I put a lot of work into improving some of the criticized points.

A mesh and a tree from the Asset Store converted to voxel objects; half of a combined object voxelized and displayed using a sphere for every cell.

First I tried to implement multi-threading, which indeed accelerated the rasterizing by about 100% for more complex tasks. Unfortunately, the resulting objects sometimes contain errors, and I have not found the cause yet. So I removed the option from the inspector, but it is still available through an undocumented API command.
Next I included an iterator that visits only filled cells and changed the mesh processor script to utilize it, which improved the build time for voxel meshes. Moreover, I added a new processor class that stores coloration data in a 2D texture and implemented options in the mesh script to make use of it; such data can now also be stored per vertex. Both approaches allow objects with high color variation to be merged much more effectively, massively reducing the draw calls. Thus I optimized the creation as well as the presentation.
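To illustrate the texture approach, here is a minimal sketch of how per-voxel colors can be baked into a small 2D texture so that merged meshes with many colors still share a single material. VoxelColorBaker and its methods are hypothetical names chosen for this illustration, not the extension's actual API; only standard Unity calls are used.

    using UnityEngine;

    public static class VoxelColorBaker
    {
        // Pack one color per voxel into a texture; merged voxel meshes can
        // then sample it through their UVs and share a single material.
        public static Texture2D BakeColors(Color[] voxelColors)
        {
            int size = Mathf.CeilToInt(Mathf.Sqrt(voxelColors.Length));
            var texture = new Texture2D(size, size, TextureFormat.RGBA32, false);
            texture.filterMode = FilterMode.Point; // one texel per voxel, no blending
            var pixels = new Color[size * size];
            voxelColors.CopyTo(pixels, 0);
            texture.SetPixels(pixels);
            texture.Apply();
            return texture;
        }

        // UV coordinate addressing the texel of voxel index i.
        public static Vector2 VoxelUV(int i, int size)
        {
            return new Vector2((i % size + 0.5f) / size, (i / size + 0.5f) / size);
        }
    }

With point filtering, each voxel maps to exactly one texel, so no colors bleed between neighboring cells.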

As you can see in the manual, I renamed the class Voxels.Converter to Voxels.Rasterizer, because the former name rather described the whole execution than the scanning step. And I removed the Voxel prefix from scripts like Voxels.VoxelMesh, so the duplicated labels vanished.

I resubmitted the package on Thursday. In fall it took about two weeks to get the rejection notification, but this time “Voxels for Unity: Rasterizer” went live within a day and can now be purchased on the Asset Store.

Category: Content Creation, Technology, Voxels / Author: Lightrocker / Date: 2016-02-21 - Sunday / Comment(s): none

Voxels for Unity: Tutorial

I uploaded a tutorial video to YouTube. It shows how to add the extension to your objects so they get converted to voxel meshes. Moreover, you can see how to set up materials for the best outcome and how important properties of the plug-in affect the result.



Please turn on the subtitles if you want more information!

Category: Content Creation, Technology, Voxels / Author: Lightrocker / Date: 2015-10-09 - Friday / Comment(s): none

Voxels for Unity 1.0: The manual

I completed the manual for the first version of the tool, which is still awaiting approval for publication in the Unity Asset Store.

You can download the PDF here or read it on SlideShare.

Category: Content Creation, Technology, Voxels / Author: Lightrocker / Date: 2015-09-24 - Thursday / Comment(s): none

Voxels for Unity: The release version



“Voxels” is being published as an extension to the Unity runtime and editor via the Asset Store. Its purpose is to scan existing graphics for their colors or materials and store those in a three-dimensional data structure. The scanned cells are called “volume pixels”, or voxels for short, and are handed over to processor classes. Two types of processors are included in the release package.
The first one constructs a hierarchy of game objects from the incoming data and attaches a specified mesh, a renderer and a material to each of its objects. The material is instantiated from a standard shader or a given template and modulated by the sampled color data of the individual cell. An optimization can be enabled that merges multiple voxels sharing the same material into single meshes.
The second processor stores the voxel data in a particle system, which is likewise instantiated from a given template.
Because the programming interface is very simple, it is easy to write your own scripts that build other game objects, pass the voxels to volume textures, convert the data for existing scripts or plug-ins, or even export it for use outside of Unity. You can use the two included processor classes as blueprints for your own efforts.
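As a rough idea of what such a script can look like, here is a minimal sketch of a custom processor that builds one colored cube per cell. The class name and the ProcessVoxel entry point are hypothetical stand-ins for the real interface; everything else is standard Unity API.

    using UnityEngine;

    // Hypothetical custom processor: ProcessVoxel and its signature are
    // illustrative stand-ins for the extension's real interface.
    public class VoxelCubeBuilder : MonoBehaviour
    {
        public float cellSize = 0.1f;

        // Build one cube per filled cell; a production processor would merge
        // cells sharing a material to keep the draw calls low.
        public void ProcessVoxel(Vector3 position, Color color)
        {
            var cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
            cube.transform.SetParent(transform, false);
            cube.transform.localPosition = position;
            cube.transform.localScale = Vector3.one * cellSize;
            cube.GetComponent<Renderer>().material.color = color;
        }
    }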

But you can also stay away from programming and only use the user interface of the components to create appealing voxel graphics. The conversion can be done in edit mode as well as while the game or application is running.

Converter inspector in edit mode; converter inspector at runtime while processing

It is also possible to mix inspector adjustments and scripting. For example, you can set up converter and processor properties in the editor, but start the operation only when an event in the game occurs, such as loading a level.
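A minimal sketch of that pattern, assuming the converter exposes a method that starts the operation: all settings stay in the inspector, and only the trigger comes from code. The component below uses only standard Unity API; wiring it to the converter is done in the inspector.

    using UnityEngine;
    using UnityEngine.Events;

    // Sketch: all conversion settings stay in the inspector; this component
    // merely fires an event once the level has finished loading.
    public class VoxelizeOnLevelLoad : MonoBehaviour
    {
        // Wire the converter component's start method to this event
        // in the inspector.
        public UnityEvent onLevelLoaded = new UnityEvent();

        void Start()
        {
            // Start() runs once the scene is loaded, so the pre-configured
            // conversion is only kicked off at that point.
            onLevelLoaded.Invoke();
        }
    }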

The scanning process works by rendering the desired items in slices to off-screen buffers, which are then copied to main memory. Whenever a pixel is set, its color is added to a cell in an octree. Afterwards the colors are combined per cell, either using simple mathematical operations or, for the most-frequent-color method, a more complex algorithm. That way source materials, lights and shaders remain intact, and with the right settings you get a result comparable to the original. This solution is the unique feature of the tool, because other existing extensions use the ray-casting methods provided by Unity, and those methods do not perceive vertex transformations done by the shader or material computations per pixel.
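The slicing idea can be sketched in a few lines, again only as an illustration of the principle and not as the tool's actual code. The sketch assumes an orthographic camera looking along the volume; it narrows the camera's clipping planes to one slice at a time and reads each slice back to main memory.

    using UnityEngine;

    // Illustrative sketch of slice scanning: render the scene slice by slice
    // by narrowing the camera's clipping planes, then read each slice back.
    public static class SliceScanner
    {
        public static Color[][] Scan(Camera camera, float depth, int slices, int size)
        {
            var target = new RenderTexture(size, size, 24);
            var readback = new Texture2D(size, size, TextureFormat.RGBA32, false);
            var result = new Color[slices][];
            float step = depth / slices;

            camera.targetTexture = target;
            for (int i = 0; i < slices; i++)
            {
                // Clamp the clipping planes to a single slice of the volume
                // (valid for an orthographic camera).
                camera.nearClipPlane = i * step;
                camera.farClipPlane = (i + 1) * step;
                camera.Render();

                // Copy the off-screen buffer to main memory.
                RenderTexture.active = target;
                readback.ReadPixels(new Rect(0, 0, size, size), 0, 0);
                result[i] = readback.GetPixels();
            }
            camera.targetTexture = null;
            RenderTexture.active = null;
            return result;
        }
    }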

Voxels logo

Furthermore, you can create results that only resemble the shape of the source but contain totally different surfaces and appear as holograms, dust clouds, artistic effects and so on. Another option is the ability to animate voxels, e.g. for exploding 3D models or transforming between different ones.

Converter inspector at runtime while processing

Unfortunately, not everything works the way it was planned. Shadow maps seem to lose their effect, and particle systems are culled during the scanning process because of the near depth clipping planes. Up to now I have found no options to make Unity use other clipping values for shadow map creation or to keep specific objects from being hidden while rendering, but I am still looking for answers. Moreover, I want to include new features in upcoming versions: some will make the conversion more powerful, others may process the results in various directions.

Category: Programming, Technology, Voxels / Author: Lightrocker / Date: 2015-09-16 - Wednesday / Comment(s): none

Voxels: A personal introduction

In my free time over the past year I developed a tool for Unity, which arose out of Project: Evolution. For the planned main effect I need pixels in three-dimensional form to advance graphically from level to level. Those cells are commonly called voxels, a combination of “volume” and “pixels”, and they have to be generated from 3D models with generic meshes, like I did for the 2D presentation in the prototype. So I started to program a solution for the conversion, and shortly afterwards a decision grew inside me: because there was no plug-in comparable to mine, I wanted to release it as a product that other people can use.

The actual functionality for meshes was finished relatively quickly, but at that state it was a quick-and-dirty implementation, usable in-house but not ready to be published. I began to adapt the user interface and polish the functionality. That lasted a long time, and there are still sections to improve and features to be added. But I am satisfied by now and intend to release “Voxels” in the next days.

Category: Programming, Technology, Voxels / Author: Lightrocker / Date: 2015-09-14 - Monday / Comment(s): none

Let’s get started!

The development of “Project: Evolution” has just begun.

Watch the stream on Twitch: www.twitch.tv/Lightrocker
Follow my tweets: twitter.com/LightrockerRon
See my updates on Facebook: www.facebook.com/LightrockerRon

I hope you will enjoy it!

Category: Content Creation, Game Development, Programming, Project: Evolution / Author: Lightrocker / Date: 2014-03-26 - Wednesday / Comment(s): none

COLLADA – A powerful file format for digital content creation

COLLADA is an XML file schema that allows users to exchange 3D assets between various DCC applications like “Autodesk® 3ds Max®“, “Autodesk® Maya®” or “SOFTIMAGE®|XSI™” and other interactive 3D applications. It was originally initiated by Sony Computer Entertainment® America (SCEA) as a development format for “PlayStation® 3” and “PlayStation® Portable” projects and became a standard of The Khronos Group, which also maintains the OpenGL standard.

There are a lot of problems in transferring data from one 3D application to another, because every program uses proprietary file formats that are optimally suited to the features the application supports. Most programs also support exchange formats like “Wavefront Object“, which however can only store a small subset of those features. That way it is often only possible to convert basic model data, like polygon meshes with a single texture coordinate set, while more complex data like multiple texture sets or animations gets lost. Even commercial converters are not able to translate every asset completely.
So in real-time 3D and game development, programmers were forced to abstain from features that were not supported by the file formats their team was using, even though those features became more and more important with increasing game quality. Alternatively, you had to write an exporter or importer for every 3D application the artists were using, and even a single importer or exporter can be a lot of work.

A solution would be a standard file format that every application can read from and write to. The first attempt was “FBX®“, which is now owned by Autodesk®. It became free to use, but for a long time it did not solve important issues like exchanging multiple texture sets between programs, because the existing plug-ins were not able to do that even though the format itself could handle it. And there is no source code to extend the plug-ins yourself, so you still had to write your own for every application.
Today “FBX®” is the most common format for asset exchange between DCC applications, but it is of little importance for game programming.

In October 2006 the COLLADA format was published. It was and is open source, easily readable XML and extendable. But there were no plug-ins and no programming kits. Because “FBX®” has a C++ SDK, it was easier at that time to write importers and exporters for that format than for COLLADA. Fortunately, some months later an SDK was released and the first plug-ins for programs appeared. Today many 3D applications support the format, but not all major ones; “LightWave 3D®“, which we are using, does not. However, native support has been announced unofficially for the current main version.

We have already implemented a COLLADA file loader in a character animation plug-in for “Viz|Artist 3.0™” called “Action Model“. And currently I am using COLLADA as the standard development format for “TigerHeart II” projects.
Files are read using the SDK and converted to “TigerHeart” objects. These can then be modified using C++ and will also be changeable in an editor later. Afterwards, the object data can be stored back without destroying original COLLADA data that was not converted or modified. In doing so, multiple applications can access a COLLADA file and modify its data without losing anything that is useless for one particular program. A sound editor may use some 3D data for placing effects, but it cannot utilize textures. Still, it does not destroy those texture objects by overwriting the original file at export time, because only new and modified data is written back to the COLLADA file database. That is not standard behavior of the COLLADA runtime, but it can easily be integrated using the SDK.
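Since COLLADA is plain XML, the non-destructive round trip can be illustrated with generic XML handling. This is only a sketch of the principle, not our SDK-based implementation: the hypothetical helper below renames one geometry element and writes the document back, leaving everything it does not understand untouched.

    using System.Xml.Linq;

    // Sketch of a non-destructive edit on a COLLADA file: only elements the
    // tool understands are touched, all other data survives the save.
    public static class ColladaRoundTrip
    {
        // Namespace of the COLLADA 1.4 schema.
        static readonly XNamespace ns = "http://www.collada.org/2005/11/COLLADASchema";

        public static void RenameGeometry(string path, string oldId, string newId)
        {
            var document = XDocument.Load(path);
            foreach (var geometry in document.Descendants(ns + "geometry"))
            {
                if ((string)geometry.Attribute("id") == oldId)
                    geometry.SetAttributeValue("id", newId);
            }
            // Unrecognized elements (e.g. another tool's data) are preserved,
            // because the document is written back as a whole.
            document.Save(path);
        }
    }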

COLLADA also has disadvantages. Because the format is very flexible, different programs can store their data differently. The programmer has to adapt his software for every utilized application, and even then an update of an application or its plug-in can force him to change his code. But this is still done many times faster than writing an importer/exporter for every proprietary file format.
Many common file formats share a second downside: they are very slow to load and save, because much data has to be interpreted multiple times. That does not really matter at editing time, but loading time is essential for interactive programs and games. Programmers can accelerate reading texture file content by using a format of their own, and they can do the same with all other data. Generally speaking, it is important to convert as little and as seldom as possible and to minimize bandwidth.
COLLADA files and other development assets could be converted before they are regularly used by the project, or when they are finalized; that depends on how much time you save during development. In any case they should be converted before the product is delivered to the customer.

I believe that COLLADA will become the standard game development format one day, if no competing format emerges. It is already utilized by some big names like Sony®, Google™, “3ds Max®”, “Maya®”, “Unreal® Engine” or “XSI™”.

Category: Content Creation, Programming / Author: Lightrocker / Date: 2007-07-27 - Friday / Comment(s): none

TigerHeart II: First impressions on OpenGL

I am currently implementing some classes for the new “TigerHeart” graphics engine using the OpenGL pipeline. And it is the right way to proceed, because I am a beginner in OpenGL, which differs from Direct3D a little more than I thought. Since I am experienced in programming with Direct3D, I can judge how to design interfaces, classes and their interaction so that they can be used with both APIs.
A good example is shader programming: in Direct3D, vertex and pixel shaders can be applied roughly independently of one another. But in OpenGL you have to create a program object, to which multiple shaders are attached. Afterwards this “program” must be linked and applied to utilize the shaders.
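The difference boils down to a fixed call sequence, sketched below with the C# OpenTK bindings for consistency with the other examples on this blog (the engine itself is C++, but the GL calls are identical): shaders are compiled, attached to a program object, and the program is linked before use.

    using OpenTK.Graphics.OpenGL;

    // Sketch of the OpenGL shader workflow: unlike Direct3D, vertex and
    // fragment shaders are bound together in one linked program object.
    public static class ShaderProgram
    {
        public static int Create(string vertexSource, string fragmentSource)
        {
            int vs = Compile(ShaderType.VertexShader, vertexSource);
            int fs = Compile(ShaderType.FragmentShader, fragmentSource);

            int program = GL.CreateProgram();
            GL.AttachShader(program, vs);
            GL.AttachShader(program, fs);
            GL.LinkProgram(program);   // both stages are linked together here
            GL.UseProgram(program);    // apply the linked program for rendering
            return program;
        }

        static int Compile(ShaderType type, string source)
        {
            int shader = GL.CreateShader(type);
            GL.ShaderSource(shader, source);
            GL.CompileShader(shader);
            return shader;
        }
    }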

Per-vertex lighting; per-pixel lighting

You can see my current state of progress in the two images above. It is a cube model with shared, rounded vertex normals, which is not a reasonable assignment but a good test. The first one shows the common per-vertex diffuse and specular lighting. Its quality is poor because both lighting colors have to be linearly interpolated between the eight vertices.
In the second image you can see the same calculations relocated to the pixel shader. There is no need for a normal map unless you want to add details to the object without appending vertices. The quality seems to be nearly perfect.

The mesh is rendered using vertex buffer objects (VBOs), which are the fastest way to draw complex models in OpenGL, and the shaders are written in GLSL. This language is comparable to Microsoft’s HLSL but has some design differences. For example, the compiler is integrated into the graphics driver, and there is no way to specify shader model targets.
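For completeness, a small sketch of the VBO upload, again using the OpenTK bindings as a stand-in for the engine's C++ code: the vertex data is copied into GPU memory once and reused for every draw call.

    using System;
    using OpenTK.Graphics.OpenGL;

    // Sketch of the VBO setup: upload vertex data to GPU memory once.
    public static class MeshUpload
    {
        public static int CreateVertexBuffer(float[] vertices)
        {
            GL.GenBuffers(1, out int vbo);
            GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
            // StaticDraw hints that the data is written once and drawn often.
            GL.BufferData(BufferTarget.ArrayBuffer,
                          (IntPtr)(vertices.Length * sizeof(float)),
                          vertices, BufferUsageHint.StaticDraw);
            return vbo;
        }
    }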

Category: TigerHeart II / Author: Lightrocker / Date: 2007-07-04 - Wednesday / Comment(s): none

TigerHeart II: OpenGL vs. Direct3D

I wrote that the second version of “TigerHeart” should be able to use OpenGL and Direct3D for rendering. But why do we need to support both APIs today? It is a lot of work that has to be done twice.

Currently, some hardware vendors do not have good OpenGL drivers, so you can run into problems on some graphics cards. But as a game developer you need the widest possible range of hardware that can execute your software without any difficulty. For this reason it is a good decision to choose Direct3D, because its behavior is noticeably more consistent. Moreover, you get better support for older hardware, since a lot of features can be simulated by the CPU when the GPU does not support them, and HLSL shaders can be compiled to Shader Model 1, which is not possible with GLSL.
Media Seasons is not only developing games but also graphics software for television. One requirement there is to output the rendered graphics on the SDI channels of an “NVIDIA Quadro FX” card; we use the SDI SDK to accomplish this. Unfortunately, that software development kit only supports OpenGL, which matches the Quadro drivers, as they are optimized for OpenGL because it is still the standard API for professional products. So we are forced to implement a rendering path for it. But we do not have to make our TV software compatible with further graphics cards, because in this market segment the developer can specify the required hardware exactly.

Category: TigerHeart II / Author: Lightrocker / Date: 2007-06-06 - Wednesday / Comment(s): none

TigerHeart II: Objectives for the graphics engine

The 3D engine is the largest and most used extension of the first “TigerHeart” version. Because it can be employed not only for three-dimensional presentation but also for hardware-accelerated flat 2D drawing, its designation will be changed to the more indicative term ‘graphics engine’.
The following characteristics should be achieved:

  1. API base: It must be able to utilize Direct3D and OpenGL for hardware-accelerated display.
  2. Standardization: The objects should be accessible and modifiable independently of the graphics API that is currently active. So a scene created with DirectX can be rendered with OpenGL and vice versa, and tools and helper functions only have to be created once. Nevertheless, the engine will gain specialized interfaces, methods and attributes for particular API functionalities and performance concerns.
  3. Object types: There are two major kinds of objects. Displayable ones (like graph nodes and polygon meshes) are unilaterally interdependent and transformable, can be rendered and carry attributes like bounding volumes and a position vector. For rendering they use states (like shaders and textures), which form the second kind and are all on a par. In addition, octrees will help with culling and collision detection, fonts are used by text objects to display letters, animations transform nodes by presets, cameras serve for view manipulation and light sources for illumination.
  4. Multi-pass: Displayable objects should have the ability to be rendered multiple times using different sets of states. This is important for generating shadows, effects and complex lighting.
  5. Streaming: Rendering operations are stored in buffers, which can be sorted to reduce state changes and to meet other, hardware-dependent conditions for performance gains. These stream buffers can be processed in another thread than the one generating them. That way the scenery can be modified for the next frame while the previous one is still being drawn (see the sketch below this list).
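To make the streaming objective concrete, here is a minimal sketch of the idea, written in C# for consistency with the other examples on this blog (TigerHeart itself is C++, and the class and field names are illustrative): commands are recorded into a buffer and sorted by a state key before submission, so commands sharing shaders and textures run back to back.

    using System.Collections.Generic;

    // Illustrative stream buffer: record render commands, then sort them by
    // state to minimize expensive state changes on the graphics device.
    public struct RenderCommand
    {
        public int StateKey;   // e.g. shader and texture ids packed together
        public int MeshId;
    }

    public class StreamBuffer
    {
        readonly List<RenderCommand> commands = new List<RenderCommand>();

        public void Record(RenderCommand command) => commands.Add(command);

        // Grouping by state key lets the render thread submit all commands
        // that share a shader/texture combination consecutively.
        public IReadOnlyList<RenderCommand> SortedForSubmission()
        {
            commands.Sort((a, b) => a.StateKey.CompareTo(b.StateKey));
            return commands;
        }
    }

A second buffer of this kind can be filled by the game thread while the render thread consumes the previous one, which is exactly the double-buffered frame overlap described in the objective.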

Category: TigerHeart II / Author: Lightrocker / Date: 2007-05-29 - Tuesday / Comment(s): none
