- reorganize the loop which checks for the right wave-format
- use the return value of UpdateAudioStream
- ensure SetError is called in SDL_NewAudioStream
- use SDL_bool if possible
- assume NULL/SDL_FALSE filled impl
- skip zero-fill of current_audio at the beginning of SDL_AudioInit (done before the init() calls)
WASAPI_WaitDevice is used for both audio playback and capture, but needs to
behave slightly differently for each.
For playback, `GetCurrentPadding` returns the amount of data that is already
queued, so WaitDevice should return when the queued amount falls below the
buffer threshold (`maxpadding`).
For capture, `GetCurrentPadding` returns the amount of data available to be
read, so WaitDevice can return as soon as any data is available.
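A minimal sketch of that intended behavior (not the actual patch; the field names this->hidden->client, this->hidden->event and this->iscapture follow the backend's conventions, and the maxpadding value and error handling here are assumptions):

static void
WASAPI_WaitDevice(_THIS)
{
    /* assumption: the callback size, in sample frames, is the threshold */
    const UINT32 maxpadding = this->spec.samples;

    while (WaitForSingleObjectEx(this->hidden->event, INFINITE, FALSE) == WAIT_OBJECT_0) {
        UINT32 padding = 0;
        if (FAILED(IAudioClient_GetCurrentPadding(this->hidden->client, &padding))) {
            break;  /* device error or device lost; let the caller handle recovery */
        }
        if (this->iscapture) {
            /* capture: padding is the number of readable frames,
               so any data at all is enough to return */
            if (padding > 0) {
                break;
            }
        } else {
            /* playback: padding is the number of frames still queued,
               so return once it falls below the threshold */
            if (padding < maxpadding) {
                break;
            }
        }
    }
}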
In the old implementation WaitDevice could suddenly hang. On many capture
devices the buffer (`padding`) usually isn't filled fast enough to surpass
`maxpadding`, so the shared playback-style check appeared to work. But if,
due to unlucky timing, more than `maxpadding` frames were available at some
point, WaitDevice would never return, because captured data is only drained
after WaitDevice returns, so the padding could never drop back below the
threshold.
Issue #3234 is probably related to this.
Anthony Pesch's notes on his patch:
"Currently, the WASAPI backend creates a stream in shared mode and sets the
device's callback size to be half of the shared stream's total buffer size.
This works, but doesn't coordinate well with the actual hardware. The hardware
will raise an interrupt after every period which in turn will signal the
object being waited on inside of WaitDevice. From my empirical testing, the
callback size was often larger than the period size and not a multiple of it,
which resulted in poor latency when trying to time an application based on the
audio callback. The reason for this looked something like:
* The device's callback would be called and the audio buffer would be filled.
* WaitDevice would be called.
* The hardware would raise an interrupt after one period.
* WaitDevice would resume, see that a full callback had not been played and
then wait again.
* The hardware would raise an interrupt after another period.
* WaitDevice would resume, see that a full callback + some extra amount had
been played and then it would again call our callback and this process would
repeat.
The effect of this is that the pacing between subsequent callbacks is poor -
sometimes it's called very quickly, sometimes it's called very late.
By matching the callback's size to the stream's period size, the pacing of
calls to the user callback is improved substantially. I didn't write an actual
test for this, but my use case for this was my Dreamcast emulator
(https://redream.io) which uses the audio callback to help drive the emulation
speed. Without this change and with the default shared stream buffer (which
has a period of ~10ms) I would get frame times that were between ~3-30
milliseconds; after this change I get frame times of ~11-22 milliseconds.
Note: this patch also has a change that removes passing a duration to the
Initialize call. The default duration used when 0 is passed does typically
seem to match the duration returned by GetDevicePeriod; however, the
Initialize docs say:
> To set the buffer to the minimum size required by the engine thread, the
> client should call Initialize with the hnsBufferDuration parameter set to 0.
> Following the Initialize call, the client can get the size of the resulting
> buffer by calling IAudioClient::GetBufferSize.
This change isn't strictly required, but I made it to hopefully rule out
another source of unexpected latency."
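For context, the two WASAPI calls mentioned in the notes fit together roughly as sketched below. This is only an illustration of the described pattern, not the actual patch; client and waveformat are placeholders for the backend's IAudioClient and WAVEFORMATEX pointers.

REFERENCE_TIME default_period = 0;  /* reported in 100-nanosecond units */
UINT32 period_frames = 0;
UINT32 bufframes = 0;

/* ask the engine for its period and size the callback to match it */
if (SUCCEEDED(IAudioClient_GetDevicePeriod(client, &default_period, NULL))) {
    period_frames = (UINT32) ((default_period * (LONGLONG) waveformat->nSamplesPerSec) / 10000000LL);
}

/* pass 0 for hnsBufferDuration so the engine picks its minimum buffer size... */
if (SUCCEEDED(IAudioClient_Initialize(client, AUDCLNT_SHAREMODE_SHARED,
                                      AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
                                      0 /* hnsBufferDuration */, 0 /* hnsPeriodicity */,
                                      waveformat, NULL))) {
    /* ...then ask how big the buffer actually ended up being */
    IAudioClient_GetBufferSize(client, &bufframes);
}

period_frames would then be used as the callback size instead of half the total buffer, and bufframes reflects whatever buffer the engine actually allocated.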
Fixes Bugzilla #4592.
Matt Brocklehurst
We've noticed that if you are playing audio on Windows via the WASAPI interface and you unplug and reconnect the device a few times, the program hangs.
We've debugged the problem down to
static void
WASAPI_WaitDevice(_THIS)
{
    /* ... snip ... */
    if (WaitForSingleObjectEx(this->hidden->event, INFINITE, FALSE) == WAIT_OBJECT_0) {
        /* ... snip ... */
    }
This WaitForSingleObjectEx call has no timeout defined, so it hangs there forever.
Our suggested fix is to pass a timeout of, say, 200 ms.
We have done quite a bit of testing with this fix in place on various hardware configurations and it seems to have resolved the issue.
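A sketch of that suggestion (the 200 ms value is the one proposed above; the WAIT_TIMEOUT handling shown here is illustrative):

/* wait at most 200 ms so a disconnected device cannot hang us forever */
const DWORD rc = WaitForSingleObjectEx(this->hidden->event, 200, FALSE);
if (rc == WAIT_OBJECT_0) {
    /* ... snip: proceed as before ... */
} else if (rc == WAIT_TIMEOUT) {
    /* the device has probably gone away; return so higher-level code can
       detect the lost device instead of blocking here forever */
}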
Cameron Gutman
I was trying to use SDL_GetQueuedAudioSize() to ensure my audio latency didn't get too high while streaming data in from the network. If I get more than N frames of audio queued, I know that the network is giving me more data than I can play and I need to drop some to keep latency low.
This doesn't work well on WASAPI out of the box, due to the addition of GetPendingBytes() to the amount of queued data. As a terrible hack, I loop 100 times calling SDL_Delay(10) and SDL_GetQueuedAudioSize() before I ever call SDL_QueueAudio() to get a "baseline" amount that I then subtract from SDL_GetQueuedAudioSize() later. However, because this value isn't actually a constant, this hack can cause SDL_GetQueuedAudioSize() - baselineSize to be < 0. This means I have no accurate way of determining how much data is actually queued in SDL's audio buffer queue.
The SDL_GetQueuedAudioSize() documentation says: "This is the number of bytes that have been queued for playback with SDL_QueueAudio(), but have not yet been sent to the hardware." Yet, SDL_GetQueuedAudioSize() returns > 0 value when SDL_QueueAudio() has never been called.
Based on that documentation, I believe the current behavior contradicts the documented behavior of this function and should be changed in line with Boris's patch.
I understand that exposing the IAudioClient::GetCurrentPadding() value is useful, but a solution there needs to take into account how much of that data is silence inserted by SDL and how much is actual data queued by the user with SDL_QueueAudio(). Until that happens, I think the best approach is to remove the GetPendingBytes() call until SDL is able to track queued data well enough to make sense of it. This would make SDL_GetQueuedAudioSize() possible to use accurately with WASAPI.
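For reference, the kind of application-side latency cap described above looks roughly like this; the 100 ms limit, the helper name and the 16-bit format are assumptions for illustration:

#include "SDL.h"

/* Queue audio arriving from the network, but never let more than ~100 ms
   of data build up in SDL's queue. Assumes a 16-bit (AUDIO_S16) format. */
static void queue_network_audio(SDL_AudioDeviceID dev, const SDL_AudioSpec *spec,
                                const void *buf, Uint32 len)
{
    const Uint32 max_queued = (Uint32) (spec->freq / 10) * spec->channels * (Uint32) sizeof(Sint16);

    if (SDL_GetQueuedAudioSize(dev) + len <= max_queued) {
        SDL_QueueAudio(dev, buf, len);
    }
    /* else: drop this packet; the network is delivering faster than we can play */
}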
SDL now builds with gcc 7.2 with the following command line options:
-Wall -pedantic-errors -Wno-deprecated-declarations -Wno-overlength-strings --std=c99