Nia Alarie's notes on the NetBSD audio patch:
The NetBSD kernel's audio resampling code is much simpler and lower quality than libsamplerate.
Presumably, if SDL always performs I/O on the audio device at its native
frequency, we can avoid resampling audio in the kernel and let SDL do it
with libsamplerate instead.
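As a rough illustration from the application side (not the backend change
itself): opening a device with allowed_changes set to 0 makes SDL perform
any needed frequency conversion in userspace, using libsamplerate when it
was built with it.

    #include "SDL.h"

    static void fill_audio(void *userdata, Uint8 *stream, int len)
    {
        SDL_memset(stream, 0, len);  /* a real app would mix audio here */
    }

    int main(int argc, char **argv)
    {
        SDL_AudioSpec desired, obtained;
        SDL_AudioDeviceID dev;

        SDL_Init(SDL_INIT_AUDIO);
        SDL_zero(desired);
        desired.freq = 44100;
        desired.format = AUDIO_S16SYS;
        desired.channels = 2;
        desired.samples = 2048;
        desired.callback = fill_audio;

        /* allowed_changes == 0: if the hardware runs at another rate,
           SDL (not the kernel) resamples between 44100 Hz and the
           device's native frequency. */
        dev = SDL_OpenAudioDevice(NULL, 0, &desired, &obtained, 0);
        if (dev) {
            SDL_PauseAudioDevice(dev, 0);
            SDL_Delay(1000);
            SDL_CloseAudioDevice(dev);
        }
        SDL_Quit();
        return 0;
    }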
If we fail to connect to the PulseAudio server, we are left with an assigned
context and mainloop that isn't connected. So when PULSEAUDIO_pa_context_disconnect
is called, PulseAudio asserts and crashes the application:
Assertion 'pa_atomic_load(&(c)->_ref) >= 1' failed at pulse/context.c:1055, function pa_context_disconnect(). Aborting.
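A minimal sketch of the kind of guard that avoids this (illustrative names,
not the exact SDL diff): only disconnect a context that actually got past
the unconnected state before unreffing it.

    #include <pulse/pulseaudio.h>

    /* Illustrative cleanup: pa_context_disconnect() must not be called
       on a context whose connection attempt never succeeded. */
    static void cleanup_pulse(pa_mainloop *mainloop, pa_context *context)
    {
        if (context) {
            const pa_context_state_t state = pa_context_get_state(context);
            if (state != PA_CONTEXT_UNCONNECTED &&
                state != PA_CONTEXT_FAILED &&
                state != PA_CONTEXT_TERMINATED) {
                pa_context_disconnect(context);
            }
            pa_context_unref(context);
        }
        if (mainloop) {
            pa_mainloop_free(mainloop);
        }
    }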
When converting audio from signed to unsigned values or vice versa,
the silence value chosen by SDL was that of the device, not of the
stream that the data was being put into. After conversion this would
lead to a very high or low value, making the speaker jump to an
extreme position and producing an audible noise whenever creating,
destroying, or playing silence on a device that required such
conversion.
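For reference, a sketch of the rule the fix has to follow: the silence byte
must be derived from the format of the buffer being written, since unsigned
8-bit audio is silent at the midpoint 0x80 while signed formats are silent
at 0.

    #include "SDL.h"

    /* The silence value must come from the stream being filled, not the
       device: 0x80 is the midpoint for unsigned 8-bit, 0 for the rest. */
    static Uint8 silence_for_format(SDL_AudioFormat format)
    {
        return (format == AUDIO_U8) ? 0x80 : 0x00;
    }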
The original code assigned the MCIMixSetup.ulSamplesPerSec value to
spec->samples, but that is just the frequency... We now change spec->samples
only if it is 0 or if we changed the frequency, picking a default of ~46 ms
at the desired frequency (code taken from SDL_audio.c:prepare_audiospec()).
With this, the crashes I have been experiencing are gone.
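The default in question looks roughly like this (a sketch of the
prepare_audiospec() calculation): take 46 ms worth of sample frames at the
requested rate and round up to the next power of two.

    #include "SDL.h"

    /* e.g. 44100 Hz -> 44 * 46 = 2024 frames -> rounds up to 2048 */
    static Uint16 default_samples(int freq)
    {
        const int samples = (freq / 1000) * 46;
        int power2 = 1;
        while (power2 < samples) {
            power2 *= 2;
        }
        return (Uint16) power2;
    }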
I _think_ this is the right thing to do; it fixes a .wav file I have here
that has blockalign==2 when channels==2 and bitspersample==16, which would
otherwise fail.
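A hedged sketch of the shape of such a fix (not the actual diff): when the
header's blockalign disagrees with what channels and bitspersample imply,
trust the computed value instead of rejecting the file.

    /* blockalign "should" be channels * bytes-per-sample; some encoders
       write something else (here, 2 instead of 4), so fall back to the
       computed value rather than failing. */
    static int effective_blockalign(int channels, int bitspersample,
                                    int blockalign)
    {
        const int expected = channels * (bitspersample / 8);
        return (blockalign == expected) ? blockalign : expected;
    }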
This is only supported on PulseAudio. You can set a description when opening
your audio device that will show up in pavucontrol, which lets you set
per-stream volume levels.
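For example, assuming the hint involved is SDL_HINT_AUDIO_DEVICE_STREAM_NAME,
it just needs to be set before the device is opened:

    #include "SDL.h"

    static SDL_AudioDeviceID open_named_stream(SDL_AudioSpec *desired,
                                               SDL_AudioSpec *obtained)
    {
        /* Set before SDL_OpenAudioDevice(); the string becomes the
           stream description shown in pavucontrol. Harmless no-op on
           other backends. */
        SDL_SetHint(SDL_HINT_AUDIO_DEVICE_STREAM_NAME, "Background Music");
        return SDL_OpenAudioDevice(NULL, 0, desired, obtained, 0);
    }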
Fixes Bugzilla #4801.
Anthony Pesch's notes on his patch:
"Currently, the WASAPI backend creates a stream in shared mode and sets the
device's callback size to be half of the shared stream's total buffer size.
This works, but doesn't coordinate well with the actual hardware. The hardware
will raise an interrupt after every period which in turn will signal the
object being waited on inside of WaitDevice. From my empirical testing, the
callback size was often larger than the period size and not a multiple of it,
which resulted in poor latency when trying to time an application based on the
audio callback. The reason for this looked something like:
* The device's callback would be called and the audio buffer was filled.
* WaitDevice would be called.
* The hardware would raise an interrupt after one period.
* WaitDevice would resume, see that a full callback had not been played and
then wait again.
* The hardware would raise an interrupt after another period.
* WaitDevice would resume, see that a full callback + some extra amount had
been played and then it would again call our callback and this process would
repeat.
The effect of this is that the pacing between subsequent callbacks is poor -
sometimes it's called very quickly, sometimes it's called very late.
By matching the callback's size to the stream's period size, the pacing of
calls to the user callback is improved substantially. I didn't write an actual
test for this, but my use case for this was my Dreamcast emulator
(https://redream.io) which uses the audio callback to help drive the emulation
speed. Without this change and with the default shared stream buffer (which
has a period of ~10ms) I would get frame times that were between ~3-30
milliseconds; after this change I get frame times of ~11-22 milliseconds.
Note, this patch also has a change that removes passing a duration to the
Initialize call. It seems that the default duration used (when 0 is passed)
does typically match up with the duration returned by GetDevicePeriod, however
the Initialize docs say:
> To set the buffer to the minimum size required by the engine thread, the
> client should call Initialize with the hnsBufferDuration parameter set to 0.
> Following the Initialize call, the client can get the size of the resulting
> buffer by calling IAudioClient::GetBufferSize.
This change isn't strictly required, but I made it to hopefully rule out
another source of unexpected latency."
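A simplified sketch of what the patch describes (assumed function shape,
error handling trimmed): query the device period, pass 0 for
hnsBufferDuration, read the engine's buffer size back, and size the callback
from the period rather than from half the shared buffer.

    #define COBJMACROS
    #include <windows.h>
    #include <audioclient.h>

    static HRESULT setup_client(IAudioClient *client, WAVEFORMATEX *fmt,
                                UINT32 *callback_frames)
    {
        REFERENCE_TIME default_period = 0, min_period = 0;
        UINT32 buffer_frames = 0;
        HRESULT ret;

        ret = IAudioClient_GetDevicePeriod(client, &default_period,
                                           &min_period);
        if (FAILED(ret)) { return ret; }

        /* hnsBufferDuration == 0: let the engine pick its minimum
           buffer, per the Initialize docs quoted above. */
        ret = IAudioClient_Initialize(client, AUDCLNT_SHAREMODE_SHARED,
                                      AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
                                      0, 0, fmt, NULL);
        if (FAILED(ret)) { return ret; }

        /* Read back the size the engine actually chose. */
        ret = IAudioClient_GetBufferSize(client, &buffer_frames);
        if (FAILED(ret)) { return ret; }
        (void) buffer_frames;

        /* The period is in 100-nanosecond units; convert it to sample
           frames so the callback size matches one hardware period. */
        *callback_frames = (UINT32)((default_period * fmt->nSamplesPerSec)
                                    / 10000000);
        return S_OK;
    }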
Fixes Bugzilla #4592.
So if you go into System Preferences on a MacBook and toggle between a pair
of connected Bluetooth headphones and the built-in internal speakers, SDL
will switch the device it is playing sound through on the fly to match this
setting.
Likewise, if the default output device is a USB device and it is unplugged:
as the default device changes at the system level, SDL will pick this up and
carry on with the new default. This is different from our unplug detection
for specific devices; in those cases we want to send the app a disconnect
notification, instead of migrating transparently as we now do for default
devices.
Note that this should also work for capture devices: if the default device
changes, SDL will start recording from the new default.
Fixes Bugzilla #4851.
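For context, watching for this at the CoreAudio level looks roughly like the
following (a sketch, not SDL's exact code); capture would listen on
kAudioHardwarePropertyDefaultInputDevice instead.

    #include <CoreAudio/CoreAudio.h>

    static OSStatus default_device_changed(AudioObjectID objid,
                                           UInt32 num_addresses,
                                           const AudioObjectPropertyAddress *addresses,
                                           void *userdata)
    {
        /* Here the backend would reopen the new default device and keep
           the application's stream running, with no disconnect event. */
        return noErr;
    }

    static void watch_default_output(void)
    {
        const AudioObjectPropertyAddress addr = {
            kAudioHardwarePropertyDefaultOutputDevice,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMaster
        };
        AudioObjectAddPropertyListener(kAudioObjectSystemObject, &addr,
                                       default_device_changed, NULL);
    }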