Difference between multiple AudioUnits and one AUGraph with a MultiChannelMixer

August 6, 2015

I’m porting my game to iOS using MonoTouch, and I’m having some trouble getting sound output to work. SystemSound/AVAudioPlayer is way too simple for my needs (I need looping and multiple simultaneous sounds), and OpenTK is not trivial to get working, in addition to not supporting audio decoding by itself.

So I’ve settled on CoreAudio, which seems to do the trick. I’ve downloaded some samples and gotten it to play and loop some sounds. Now I want it to play multiple sounds at the same time.

I noticed that it was trivial to get multiple simultaneous sounds by creating multiple AudioUnits, each one separately connected to RemoteIO, but this doesn’t seem very efficient. You can also mix multiple sounds by building a graph and mixing everything through a MultiChannelMixer connected to RemoteIO, which seems to be the right way to do it.
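For reference, this is the kind of setup I mean by “a graph with a MultiChannelMixer”: a minimal sketch in the underlying C Audio Toolbox API (the MonoTouch AudioUnit/AudioToolbox bindings wrap these same calls). The `RenderSound` callback and the `busCount` parameter are just placeholders for however each sound gets fed, and all error checking is omitted:

```c
#include <AudioToolbox/AudioToolbox.h>

// Placeholder render callback: fills ioData with samples for the sound
// assigned to the given mixer input bus.
static OSStatus RenderSound(void *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp *inTimeStamp,
                            UInt32 inBusNumber,
                            UInt32 inNumberFrames,
                            AudioBufferList *ioData)
{
    return noErr;
}

static AUGraph BuildGraph(UInt32 busCount)
{
    AUGraph graph;
    NewAUGraph(&graph);

    // One RemoteIO node (hardware output) and one MultiChannelMixer node.
    AudioComponentDescription ioDesc = {
        .componentType = kAudioUnitType_Output,
        .componentSubType = kAudioUnitSubType_RemoteIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };
    AudioComponentDescription mixerDesc = {
        .componentType = kAudioUnitType_Mixer,
        .componentSubType = kAudioUnitSubType_MultiChannelMixer,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };

    AUNode ioNode, mixerNode;
    AUGraphAddNode(graph, &ioDesc, &ioNode);
    AUGraphAddNode(graph, &mixerDesc, &mixerNode);
    AUGraphOpen(graph);

    // Give the mixer one input bus per simultaneous sound.
    AudioUnit mixerUnit;
    AUGraphNodeInfo(graph, mixerNode, NULL, &mixerUnit);
    AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_ElementCount,
                         kAudioUnitScope_Input, 0, &busCount, sizeof(busCount));

    // Attach a render callback to each mixer input bus.
    for (UInt32 bus = 0; bus < busCount; bus++) {
        AURenderCallbackStruct cb = { .inputProc = RenderSound, .inputProcRefCon = NULL };
        AUGraphSetNodeInputCallback(graph, mixerNode, bus, &cb);
    }

    // The mixer's single output feeds RemoteIO, and the graph is started.
    AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0);
    AUGraphInitialize(graph);
    AUGraphStart(graph);
    return graph;
}
```

The appeal of this layout is that only one chain of units renders to the hardware, and each sound just becomes another input bus on the mixer.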

But I’m basing my decisions on pure assumptions (and in my short iOS experience, I’ve found my assumptions are usually wrong when it comes to iOS, like some kind of inverse Principle of Least Astonishment), so I would like to ask the CoreAudio gurus over here whether there is a real difference between these two approaches, or whether there is a “more correct” approach to do what I want.
