Friday, August 9, 2013

Audio with OpenAL

I took part in a game jam ( Assembly 2013 GameJam, Finnish only, sorry ). In preparation I started improving my current game engine/framework (called "craft"). While inspecting things, I realized I lacked audio playback capabilities (and also text rendering support... maybe I'll write a post about a signed distance field text rendering engine later). I tried to get these done before Assembly 2013, but failed, because OpenAL was not as trivial as I thought and all the OpenAL tutorials I found were too trivial.

OpenAL has, in my opinion, five abstract things: Device, Context, Source, Buffer and Listener. The device is where it all begins: a device can create contexts, and a context has one listener and may have sources and buffers.

How to initialize OpenAL (sorry for using pictures, but Blogger can't handle pure code):

At line 153 we open the preferred device; the specification also gives us a way to query for the available devices, if we want:

Now that we have the device, we can create a context (at line 168). The parameters for the function are the device and a NULL-terminated attribute list. With the list you can specify how many mono or stereo sources you want (ALC_MONO_SOURCES, ALC_STEREO_SOURCES). By setting ALC_SYNC to AL_TRUE you can specify whether you want your own mixer thread or not (but then you have to call alcProcessContext(context) yourself). ALC_FREQUENCY apparently means the output frequency of the context, to which everything will be resampled (apparently on Windows it is hardcoded to 44,100 Hz). ALC_REFRESH is the refresh rate of the mixer.
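Since the original screenshots are missing, here is a minimal sketch of the initialization steps described above. The function name and attribute values are mine, not the ones from "craft", and error handling is reduced to early returns:

```cpp
#include <AL/al.h>
#include <AL/alc.h>

// Sketch of device + context setup; attribute values are example values.
bool initAudio(ALCdevice** outDevice, ALCcontext** outContext)
{
    // NULL asks for the preferred (default) device; alternatively the
    // available device names can be enumerated with
    // alcGetString(NULL, ALC_DEVICE_SPECIFIER).
    ALCdevice* device = alcOpenDevice(NULL);
    if (!device)
        return false;

    // NULL-terminated attribute list: how many mono/stereo sources we want,
    // the context frequency and the mixer refresh rate.
    const ALCint attributes[] = {
        ALC_MONO_SOURCES,   32,
        ALC_STEREO_SOURCES, 4,
        ALC_FREQUENCY,      44100,
        ALC_REFRESH,        60,
        0 // terminator
    };
    ALCcontext* context = alcCreateContext(device, attributes);
    if (!context) {
        alcCloseDevice(device);
        return false;
    }

    // All al* calls operate on the current context.
    alcMakeContextCurrent(context);

    *outDevice  = device;
    *outContext = context;
    return true;
}
```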

Once we have the context, we make it current; I imagine all calls to OpenAL need a context that is current/bound. At line 188 I've opted to use a "generateSources" method; the class "Context" has a pool of pregenerated source IDs. The reason for having the sources managed by Context is that devices have a limited number of channels: if our design assigned a source ID per player abstraction (like my original code did), we would run out of source IDs after 32 players (or whatever the maximum on the device is).

It is a pretty simple function: whenever called, it tries to add more source IDs to the pool.

And retainSource and releaseSource are how the players then get their source IDs. The idea is that a player only retains a source ID while it is playing something back; once the playback is finished, it should release the source ID.
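The pool logic itself is independent of OpenAL, so it can be sketched in a self-contained way. The class below is my reconstruction, not the real "craft" code: a counter stands in for alGenSources so the sketch compiles on its own, and the cap stands in for the device running out of channels.

```cpp
#include <vector>

// Sketch of a source ID pool. In real code generateSources() would call
// alGenSources() and stop when the device refuses to create more sources.
class Context {
public:
    explicit Context(unsigned maxSources) : maxSources_(maxSources) {}

    // Try to add more source IDs to the pool; called whenever we run low.
    void generateSources(unsigned count) {
        while (count-- > 0 && generated_ < maxSources_)
            free_.push_back(++generated_); // fake ID; really from alGenSources
    }

    // A player retains a source only for the duration of one playback...
    bool retainSource(unsigned& outId) {
        if (free_.empty())
            generateSources(4);
        if (free_.empty())
            return false; // device out of channels
        outId = free_.back();
        free_.pop_back();
        return true;
    }

    // ...and gives it back once the playback has finished.
    void releaseSource(unsigned id) {
        free_.push_back(id);
    }

private:
    unsigned maxSources_ = 0;
    unsigned generated_  = 0;    // how many IDs exist so far
    std::vector<unsigned> free_; // IDs not currently playing anything
};
```

With a cap of 2, two retains succeed, a third fails, and releasing one source makes its ID available again instead of leaking a channel.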

So now we have some code that handles context initialization and the reuse of sources. Let's move on to playback a bit.

OpenAL seems unable to play back stereo sounds as "3D" sources, so all 3D sounds need to be in a mono format (MONO8 or MONO16). Sources also seem to get fixed to one style of use: for this reason I am using only alSourceQueueBuffers to queue buffers to a source. If I attached a buffer directly (alSourcei with AL_BUFFER), I would not be able to use that source in the queue-buffer style (and vice versa). At the time of writing, I am not sure how well sources respond to changing between input buffer formats (while playing, it should not be possible to mix MONO8 and STEREO8 buffers... but if I stop the source in between, maybe).


My first implementation of decoding was "hardcode Ogg decoding everywhere". That wasn't very smart, so I came up with a Decoder interface class that provides all the needed information about the file and, when requested, decodes the file either fully or in chunks into a provided ByteArray.
After the Decoder interface was ready, I implemented an OggDecoder that supports both ways of decoding a packed ByteArray.
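As a sketch of the idea (the real interface in "craft" surely differs in names and details), a Decoder exposes the format information OpenAL needs and fills a caller-provided buffer. A trivial in-memory decoder stands in for OggDecoder so the example is self-contained:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

using ByteArray = std::vector<uint8_t>;

// Sketch of a decoder interface: format info plus full or chunked decoding
// into a provided ByteArray.
class Decoder {
public:
    virtual ~Decoder() = default;
    virtual int channels() const = 0;      // 1 => mono, usable as a 3D source
    virtual int sampleRate() const = 0;    // e.g. 44100
    virtual size_t totalBytes() const = 0; // decoded size: fits in one buffer?
    // Decode up to maxBytes into out; returns bytes produced, 0 at end.
    virtual size_t decodeChunk(ByteArray& out, size_t maxBytes) = 0;
};

// Stand-in "decoder" that copies raw PCM from memory in chunks, the way an
// OggDecoder would produce decoded PCM from a packed ByteArray.
class MemoryDecoder : public Decoder {
public:
    MemoryDecoder(ByteArray pcm, int ch, int rate)
        : pcm_(std::move(pcm)), channels_(ch), rate_(rate) {}
    int channels() const override { return channels_; }
    int sampleRate() const override { return rate_; }
    size_t totalBytes() const override { return pcm_.size(); }
    size_t decodeChunk(ByteArray& out, size_t maxBytes) override {
        size_t n = std::min(maxBytes, pcm_.size() - pos_);
        out.assign(pcm_.begin() + pos_, pcm_.begin() + pos_ + n);
        pos_ += n;
        return n;
    }
private:
    ByteArray pcm_;
    size_t pos_ = 0;
    int channels_, rate_;
};
```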


Playing sounds with OpenAL is pretty simple. With audio that fits fully into one buffer, just queue the buffer into the source and call the play function. For streaming it gets a tad more complicated: you have to have n buffers that you juggle in the stream (I prefer 3 buffers); when one buffer has been consumed, unqueue it from the source, fill it up, and queue it again.
The source has a couple of interesting settings: AL_GAIN, which in my books equals the volume, and AL_PITCH, which I would rather translate as speed.
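The buffer juggling can be sketched without OpenAL: below, a queue of integers simulates the source's buffer queue, and the comments name the OpenAL call each step maps to in real code. The function itself is my illustration, not code from "craft".

```cpp
#include <deque>
#include <string>
#include <vector>

// Simulated streaming update: each buffer the source has finished playing is
// unqueued from the front, refilled, and queued at the back. In real code
// popping maps to alSourceUnqueueBuffers, refilling is Decoder::decodeChunk
// plus alBufferData, and pushing maps to alSourceQueueBuffers.
std::vector<std::string> streamUpdate(std::deque<int>& queue, int processed)
{
    std::vector<std::string> log;
    while (processed-- > 0 && !queue.empty()) {
        int buffer = queue.front();  // alSourceUnqueueBuffers
        queue.pop_front();
        log.push_back("refill " + std::to_string(buffer)); // decode + alBufferData
        queue.push_back(buffer);     // alSourceQueueBuffers
    }
    return log;
}
```

With three buffers {1, 2, 3} and one processed buffer, buffer 1 is refilled and requeued, leaving {2, 3, 1}; the ring keeps rotating as long as the decoder produces data.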

3D positional audio

Positional audio seems to work only with mono audio; otherwise it works pretty much the same as other audio sources, though it has some extra settings that give it the ability to fade as the source gets farther away from the listener.
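A mono-buffer source is made positional just by giving it a position and the fade-related settings. The helper below is a sketch of mine with example values, not code from "craft":

```cpp
#include <AL/al.h>

// Sketch: configure a (mono-buffer) source as a positional one. The three
// fade settings feed the selected distance model; values are example values.
void makePositional(ALuint source, float x, float y, float z)
{
    alSource3f(source, AL_POSITION, x, y, z);
    alSourcef(source, AL_REFERENCE_DISTANCE, 1.0f);  // gain is 1.0 here
    alSourcef(source, AL_ROLLOFF_FACTOR,     1.0f);  // how quickly it fades
    alSourcef(source, AL_MAX_DISTANCE,       50.0f); // used by clamped models
}
```

The listener side is analogous, e.g. alListener3f(AL_POSITION, x, y, z).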


These settings govern what happens to the source's audio depending on the selected distance model ( alDistanceModel( alEnum ) ). I am not going to say anything more about it for now; there are nice graphs about it in the OpenAL Programmer's Guide PDF, page 87 and onward. I think the distance model is context specific.
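For reference, the default model, AL_INVERSE_DISTANCE_CLAMPED, combines those settings roughly like this (the formula is from the OpenAL 1.1 specification; the function itself is my sketch):

```cpp
#include <algorithm>

// Gain attenuation of the AL_INVERSE_DISTANCE_CLAMPED model: the distance is
// clamped to [reference, max] and the gain falls off as
// reference / (reference + rolloff * (distance - reference)).
float inverseDistanceClamped(float distance,
                             float reference, // AL_REFERENCE_DISTANCE
                             float rolloff,   // AL_ROLLOFF_FACTOR
                             float maxDist)   // AL_MAX_DISTANCE
{
    distance = std::max(distance, reference);
    distance = std::min(distance, maxDist);
    return reference / (reference + rolloff * (distance - reference));
}
```

At the reference distance the gain is 1.0, and with a rolloff factor of 1 doubling the distance halves it; closer than the reference distance the gain stays clamped at 1.0.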


Well, having "wav" and "flac" support behind the Decoder interface would be nice. I've also been reading Game Programming Gems 2 about audio design patterns; I think my current approach is a bit too low level, and it would benefit from having the music/stereo system separated from the 3D positional audio.

