Why?.. where I come from..
I have been creating game/graphics engines for years now, and while doing that I need to import 2D/3D/mesh/scene/text/sound assets efficiently, without hassle.
With 3D objects/meshes, I first end up just generating simple primitives (balls, cubes, planes) on the fly, and for complicated things I use glTF or Assimp.
Assimp is full of features that you do not need and misses features that you do need (while carrying a massive maintenance footprint); frankly, it is ill-suited for real-time applications.
glTF is a modern, evolving, and quite universal format, except that when creating something new, one has to extend the format with "extras" data or create custom extensions, which leads to having to maintain custom exporters. I am also assuming that performance is often poor because a loader has to support every possible permutation a glTF file can come in (not all data is required by the glTF spec, whereas my engine can require that, say, two sets of texture coordinates are always present, which is an arbitrary but acceptable requirement).
A custom 3D format gives a lot of benefits:
- Defining structures that the graphics engine needs and nothing else.
- Most of the error handling can be offloaded to the tooling side; assets are configured on the tooling side to match the engine.
- No dependencies on 3rd parties; for example, if one day I want to define meshes with Bézier curves, I am free to do so. Or if the cubic-spline interpolation is mathematically imperfect, I can change the specification.
- Optimization potential on the tooling side (glTF, OBJ, etc. do not support texture compression right off the bat; often textures are just left uncompressed).
Journey
In 2019 I was creating a Vulkan-based graphics engine and started designing a 3D mesh/scene format (kokkeli-mop). At the time I just sketched together a bare-bones presentation of how 3D mesh structures could be represented with FlatBuffers. I was trying to learn Vulkan RTX ray tracing, and the 3D file format was just a side thought of a side project.
In 2020 I abandoned trying to build a graphics engine from scratch, or rather from the lowest level, and opted to build the graphics algorithms and engine side on top of another engine. During development I considered using glTF in the engine, but I reconsidered and revived my MOP file format for this project. After a couple of months of development, a friend recommended writing a blog about the journey of developing MOP, hopefully documenting the reasoning behind the design choices..
MOP So far..
As this is an ongoing process, everything is in constant change and iteration.. once everything has been proven to work, I can say that MOP has reached version 1.0.
[Image: First iteration of drawing the MOP graph.. I can say this is purely academic work at this point.]
[Image: Somewhere along the line, scene graph sketches, but still very academic work.]
At the beginning I had defined all the attributes and material/descriptor set bindings as uint32 IDs, but later on I decided that everything should actually be bound by semantic strings: the mesh provides data with semantics that the graphics pipelines might use, in attributes, in uniforms, or somewhere else. I reasoned that between parsing a string with logic that dynamically interprets the data and having hardcoded indexes, the dynamic way wins.
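As a minimal sketch of that idea (the names and layout below are my own placeholders, not the actual MOP schema), the engine-side lookup could work roughly like this: the mesh carries attribute blobs keyed by semantic strings, and each pipeline declares which semantics it needs instead of assuming a fixed attribute index order.

```python
# Minimal sketch of binding by semantic strings (hypothetical names, not the real MOP schema).
# The mesh stores attribute blobs keyed by a semantic string; the pipeline lists the
# semantics it needs, and the loader resolves them when the mesh is loaded.

MESH_ATTRIBUTES = {
    "POSITION":   b"\x00" * 36,   # raw vertex data, e.g. float3 per vertex
    "NORMAL":     b"\x00" * 36,
    "TEXCOORD_0": b"\x00" * 24,
    "TEXCOORD_1": b"\x00" * 24,
}

PIPELINE_SEMANTICS = ["POSITION", "NORMAL", "TEXCOORD_0", "TEXCOORD_1"]

def bind_attributes(mesh_attributes, pipeline_semantics):
    """Match each semantic the pipeline asks for to the mesh blob that provides it.

    With hardcoded uint32 IDs both sides would have to agree on index order;
    with semantic strings the tooling can add or reorder attributes freely.
    """
    bindings = []
    for location, semantic in enumerate(pipeline_semantics):
        blob = mesh_attributes.get(semantic)
        if blob is None:
            raise ValueError(f"mesh is missing required semantic '{semantic}'")
        bindings.append((location, semantic, blob))
    return bindings

for location, semantic, blob in bind_attributes(MESH_ATTRIBUTES, PIPELINE_SEMANTICS):
    print(location, semantic, len(blob), "bytes")
```

The string matching only needs to happen once at load time, so the flexibility costs essentially nothing per frame.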
[Image: Latest version as of Jan 27, 2021.. some improvements on naming, and red arrows determining where blobs of data separate into different files.]
At the moment I have created fbs files that define the FlatBuffers structures, and Python scripts that convert glTF files into MOP structures. Of the structures, I have tested that Mesh works, and I am currently working on the Material blobs.
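To make the tooling step a bit more concrete, here is a rough sketch of what such a conversion script could look like. This is not the actual MOP converter: the file names, the `write_mop` placeholder, and the simplifications (tightly packed buffers only, no interleaving, no sparse accessors) are my own assumptions; the real script packs the blobs into the FlatBuffers structures generated from the fbs schema.

```python
"""Rough sketch of a glTF -> MOP-style conversion step (not the real MOP tooling).

Reads a .gltf + .bin pair with the standard library only, collects the vertex
attribute blobs keyed by their semantic string, and hands them to a placeholder
writer. Interleaved bufferViews and sparse accessors are ignored for brevity.
"""
import json

# Byte sizes of glTF componentType values and element counts of accessor types.
COMPONENT_SIZE = {5120: 1, 5121: 1, 5122: 2, 5123: 2, 5125: 4, 5126: 4}
TYPE_COUNT = {"SCALAR": 1, "VEC2": 2, "VEC3": 3, "VEC4": 4, "MAT4": 16}

def extract_attributes(gltf_path, bin_path, mesh_index=0, primitive_index=0):
    """Return {semantic string: raw bytes} for one glTF primitive."""
    with open(gltf_path, "r") as f:
        gltf = json.load(f)
    with open(bin_path, "rb") as f:
        blob = f.read()

    primitive = gltf["meshes"][mesh_index]["primitives"][primitive_index]
    attributes = {}
    for semantic, accessor_index in primitive["attributes"].items():
        accessor = gltf["accessors"][accessor_index]
        view = gltf["bufferViews"][accessor["bufferView"]]
        element_size = COMPONENT_SIZE[accessor["componentType"]] * TYPE_COUNT[accessor["type"]]
        start = view.get("byteOffset", 0) + accessor.get("byteOffset", 0)
        end = start + accessor["count"] * element_size
        attributes[semantic] = blob[start:end]
    return attributes

def write_mop(attributes, out_path):
    """Placeholder: the real tooling packs these blobs into the FlatBuffers
    structures generated from the MOP .fbs schema and writes them out."""
    for semantic, data in attributes.items():
        print(f"{semantic}: {len(data)} bytes -> {out_path}")

if __name__ == "__main__":
    write_mop(extract_attributes("model.gltf", "model.bin"), "model.mop")
```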
The next post will hopefully be an explanation of the Mesh structure, and, if I have completed the Material structure, about that as well.
links: