I’m seeking advice, suggestions, and ideas on how to handle the updating of large amounts of data in OpenGL and C++.
My partner and I have gone through two methods.
The first is vertex by vertex rendering.
Right away, this was taken off the table; it’s super slow!
When using it, the simulation ran at 3 FPS.
All it was rendering was some untextured cubes composing a flat landscape.
Imagine adding some animals to that, or even a full blown world!
This method is definitely not an option.
The second method is VBOs.
The problem with VBOs is that they don’t like having vertices added or removed (though removal can be accomplished with the hack of overwriting the data with null values).
We could just create a new VBO every time something changes in this way, but the 3D data is likely to change often, if not every frame. That would also be very inefficient, so VBOs are not an option either: we can’t add vertices/faces, and we can only dirty-hack the removal of existing ones.
What, then, would be the best method for us to use?
Would we be able to use a shader or something to create a custom datatype on the graphics card, upload the 3D data to it initially, and afterwards send only the transformation details (such as a transformation matrix or what have you) and which vertices/faces have been added or removed?
Even if the above method is a possibility, how else could I go about doing this, and what are the pros and cons of each approach? Perhaps one of those would better serve my needs.
Edit: Here is what I mean by “large amounts of data”.
It shall be known that the project that requires this fast updating of “large amounts” of OpenGL data is a simulator that attempts to simulate its own world. “World” here means an entire existence, which may include multiple universes, dimensions, galaxies, solar systems, planets, etc., and is not limited to a single planet, which is the common meaning of the term. Thus, this “large amount of data” is the portion of the world that is visible to the player.
As this is a world simulator, all the data of the world is procedurally generated. This means 3D models are NOT being loaded. The “3d models”, if you wish to call them that, are generated from the 3D data of the objects in the world, such as a house, tree, car, flower, cat, or cup.
In short, the world simulation aims to simulate the world in as much detail as possible (the limit being set by the resources of the computer running it). For the purpose of this question, let us assume the simulation is running on a computer that allows every object to have a very high polygon count, say over 100k for a cat.
That in itself is not a problem; just place the data in a VBO initially. However, what happens as the cat grows? New vertices and faces would be added and removed. The same applies to the cat’s actions, such as walking around the world. (Animations are not explicitly defined in the code; they are a byproduct of the simulation, in which the 3D data itself is updated, creating the effect of an animation.)
That is where the problem lies. Creating a new VBO for every frame of the ‘animation’ (keep in mind the note above on animations), or every time the cat grows, is not an option due to performance concerns. It would achieve more or less the same results as vertex-by-vertex rendering, but with more overhead; the only performance gain would come when no vertices or faces are added or removed.
In conclusion, large amounts of data, such as 100k polygons per model, will be seen. The specific edits made to this data result from the growth or shrinkage of objects in the world and from the simulation of those objects (see the earlier note on animations).
To clarify confusion, this is how the data is changed:
1. The position of each vertex is updated relative to the camera, e.g., a dog walking away from the camera.
2. Vertices are added to and removed from the model; thus faces are also changed, removed, or created.
Changes #1 and #2 are expected to occur at least once per second for every object in the world, excluding terrain, rocks, and other inanimate objects.
I do not see how patterns of changes can be predicted, as the data can change in unimaginable ways.
Examples of changes that will occur:
1. The simplest case: an apple falling from the sky. This is easily done by applying a transformation matrix. The cases that follow are more complex.
2. A sword lopping off the arm of a soldier. In this example, vertices are removed from one model and put into a new model. VBOs do not explicitly support this, but a dirty hack exists to work around it.
3. An engineer welding two pieces of metal together. In this example, vertices are removed from one model and added to another. VBOs do not support this. In this specific case, the problem can be worked around by retaining the individual VBOs and performing the merge only in the world-simulation code.
4. A chemical reaction. Examples of this would be an acid eating away at a material, or mixing liquid detergent and hydrogen peroxide (reference video).
5. A physical reaction, such as ice melting into water.
6. Liquids. The most complex case, as their surface dynamically changes in countless ways.
A wave of inspiration swept over me, and I’ve come up with two ideas on how to solve this problem; I’m now working with my partner towards implementing both. In short, one method creates a custom data structure similar to a VBO but with built-in support for adding and removing vertices, while the other is a hack using VBOs: destroying the old one whenever vertices are added or removed and recreating it, but doing so on the GPU so that data is not transferred all the time, which is the source of the nightmarish performance of vertex-by-vertex rendering. More details to come.