I am not a game developer, but I have been doing iOS software engineering for many years. I have a particular interest in graphics and animation, but the finer details are still a little foreign to me.
Here is the scenario I am working with:
I have an extremely memory-constrained iOS environment (i.e., no more than 16-20MB of active memory usage) where I am trying to display many high-resolution sprite frames in rapid succession to show an animation. Each frame might be up to 1210×1210 pixels, and I can have up to 200 frames in a single animation.
Obviously, these constraints mean I have to be ruthless about memory usage. There’s no way I can even consider having more than one frame of an animation decompressed in my application’s memory at any given time. Yet I need images to load extremely fast.
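To make the constraint concrete, here’s the back-of-the-envelope arithmetic for one decompressed frame versus the whole animation (assuming 32bpp BGRA, which is what Core Animation typically wants):

```c
#include <stdio.h>

/* Rough memory math for the constraints above: a single 1210x1210
 * frame decompressed at 32bpp, and the whole 200-frame animation
 * if it were somehow all resident at once. */
long decompressed_frame_bytes(long width, long height) {
    return width * height * 4; /* 4 bytes per pixel at 32bpp */
}

void print_budget(void) {
    long frame = decompressed_frame_bytes(1210, 1210); /* 5,856,400 bytes, ~5.9 MB */
    long whole = frame * 200;                          /* ~1.17 GB */
    printf("one frame:  %.1f MB\n", frame / 1e6);
    printf("200 frames: %.2f GB\n", whole / 1e9);
}
```

One frame alone eats roughly a third of the 16-20MB budget, which is why at most one frame can ever be inflated at a time.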
I had considered trying to pack a bunch of frames together into a single PVRTC image file, at either 2 or 4 bits per pixel. The disk-space usage of PVRTC files is excellent for the quality they deliver, considering they use only 2 or 4bpp.
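The size math for PVRTC is simple: width × height × bpp / 8, plus a small header. One caveat (as I understand it) is that PVRTC on iOS requires square, power-of-two textures, so a 1210×1210 frame has to be padded, or packed together with other frames, into something like a 2048×2048 texture:

```c
/* PVRTC payload size in bytes: width * height * bpp / 8.
 * Assumes square, power-of-two dimensions, which (as I understand it)
 * iOS requires for PVRTC textures. */
long pvrtc_bytes(long width, long height, long bpp) {
    return width * height * bpp / 8;
}

/* pvrtc_bytes(2048, 2048, 4) -> 2,097,152 bytes (2 MB)
 * pvrtc_bytes(2048, 2048, 2) -> 1,048,576 bytes (1 MB) */
```

So even a whole padded 2048×2048 atlas texture is only 1-2 MB, versus ~5.9 MB for a single uncompressed frame.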
I attempted to use SpriteKit to read the texture atlas I created and display the image frames. This “worked” in that it did what I wanted, but the memory usage was insane: around 600-700MB. Apparently SpriteKit always makes its own copies of texture data, which is unfortunate.
My question is whether it’s possible to avoid my application copying image data at all before it is sent to the GPU for rendering. My understanding was that the whole point of the PVR format is to store texture data on disk exactly as the GPU consumes it, so it can be uploaded directly without being converted or copied into another format first. (As far as I know, PVR is the format the GPU needs to render the texture.)
Indirectly, I was able to mostly achieve what I wanted using a framework I helped develop for iOS called Fast Image Cache. It creates image tables on disk that are fully uncompressed and page-aligned in advance, so that Core Animation can use the image data directly without a single memcpy. Fast Image Cache uses memory-mapped files to avoid copies entirely: data goes straight from disk to being rendered on the display. (At least, no copies are made within the context of your application, which is what counts against your memory usage limits.)
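For anyone unfamiliar with the technique, here is a minimal sketch of the memory-mapping approach FIC relies on: map the file read-only and hand the resulting pointer straight to the consumer. Because the pages are file-backed and clean, the kernel can fault them in on demand and evict them under pressure, rather than counting them as dirty memory against the app. (Names here are my own, not FIC’s API.)

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map a texture file read-only and return a pointer to its bytes.
 * The pages are file-backed and clean, so they can be purged and
 * re-faulted by the kernel instead of counting as dirty app memory. */
const void *map_texture_file(const char *path, size_t *out_len) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return NULL;
    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); return NULL; }
    void *base = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd); /* the mapping keeps the file contents accessible */
    if (base == MAP_FAILED) return NULL;
    *out_len = (size_t)st.st_size;
    return base;
}
```

The returned pointer can then be passed directly to whatever consumes the texture data, with no intermediate buffer allocated by the app.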
The problem with FIC is that its image tables are very naive: it doesn’t do any sort of texture packing. For my testing, I loaded each frame of the animation as a separate image inside FIC, which incurred a ton of disk I/O overhead. In addition, FIC only supports 32bpp (with or without alpha) and 16bpp bitmap formats, which makes the image tables huge.
Is there a way for me to achieve something similar to FIC using OpenGL and PVRTC images? If I need to build out my own simple texture atlas support to map regions of PVRTC images to individual animation frames, I’ll do that, though it’d be nice if something else besides SpriteKit could do this for me.
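If I do end up rolling my own atlas support, my understanding (hedged, since I’m new to this corner of OpenGL) is that PVRTC data can’t be cropped on the CPU or partially uploaded the way linear bitmaps can, so the practical approach is to upload the whole packed texture once and select each frame at draw time purely with texture coordinates. A hypothetical sketch of that per-frame lookup, assuming frames packed left-to-right, top-to-bottom into a square atlas:

```c
/* Hypothetical atlas math: frames packed left-to-right, top-to-bottom
 * into one square texture. Each frame is selected at draw time by
 * normalized texture coordinates into the already-uploaded atlas,
 * so the compressed data itself is never sliced or copied. */
typedef struct { float u0, v0, u1, v1; } UVRect;

UVRect frame_uv_rect(int frame_index, int columns,
                     int frame_w, int frame_h, int atlas_size) {
    int col = frame_index % columns;
    int row = frame_index / columns;
    UVRect r;
    r.u0 = (float)(col * frame_w) / atlas_size;
    r.v0 = (float)(row * frame_h) / atlas_size;
    r.u1 = (float)((col + 1) * frame_w) / atlas_size;
    r.v1 = (float)((row + 1) * frame_h) / atlas_size;
    return r;
}
```

For example, frame 3 in a 2-column layout of 1024px frames inside a 2048px atlas maps to the bottom-right quadrant, (0.5, 0.5)-(1.0, 1.0). The numbers here are illustrative, not my actual frame dimensions.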
The crucial requirement is being able to load fairly large image data quickly for rendering without impacting my app’s memory footprint.