Don't rely on checking __clang_major__ since it is not comparable
between different vendors. Don't use "#pragma clang attribute" since it
is only available in relatively recent versions, there's no obvious way
to check if it's supported, and just using __attribute__ directly (for
gcc as well) results in simpler code anyway.
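A minimal sketch of the direct-attribute approach (macro and function names here are illustrative, not SDL's actual ones):

```c
/* Illustrative sketch: apply the target attribute directly, which both
   GCC and Clang understand, instead of a pragma only newer Clangs support. */
#if defined(__GNUC__) || defined(__clang__)
#define TARGETING(x) __attribute__((target(x)))
#else
#define TARGETING(x)
#endif

static void TARGETING("sse4.1")
ConvertSint32ToFloat_SSE4_1(float *dst, const Sint32 *src, int n)
{
    /* SSE4.1 instructions may be emitted here even if the translation
       unit's baseline target is lower */
    for (int i = 0; i < n; i++) {
        dst[i] = (float)src[i];
    }
}
```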
Without this change, driver names don't get matched correctly;
for example "a" can get matched with "alsa" since it only checks
whether the string matches up to the length of the requested
driver name.
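A sketch of the tightened comparison (variable names invented):

```c
/* Hypothetical sketch: an exact match must also agree on length;
   a bare prefix check lets the request "a" match the driver "alsa". */
const size_t len = SDL_strlen(requested);
if ((SDL_strlen(driver->name) == len) &&
    (SDL_strncasecmp(driver->name, requested, len) == 0)) {
    /* this is the requested driver */
}
```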
This is needed to support CHERI, and thus Arm's experimental Morello
prototype, where pointers are implemented using unforgeable capabilities
that include bounds and permissions metadata to provide fine-grained
spatial and referential memory safety, as well as revocation by sweeping
memory to provide heap temporal memory safety.
On most systems (anything with a flat memory hierarchy rather than using
segment-based addressing), size_t and uintptr_t are the same type.
However, on CHERI, size_t is just an integer offset, whereas uintptr_t
is still a capability as described above. Casting a pointer to size_t
will strip the metadata and validity tag, and casting from size_t to a
pointer will result in a null-derived capability whose validity tag is
not set, and thus cannot be dereferenced without faulting.
The audio and cursor casts were harmless as they intend to stuff an
integer into a pointer, but using uintptr_t is the idiomatic way to do
that and silences our compiler warnings (which our build tool makes
fatal by default as they often indicate real problems). The iconv and
egl casts were true positives as SDL_iconv_t and iconv_t are pointer
types, as is NativeDisplayType on most OSes, so this would have trapped
at run time when using the round-tripped pointers. The gles2 casts were
also harmless; the OpenGL API defines this argument to be a pointer type
(and uses the argument name "pointer"), but it in fact represents an
integer offset, so like audio and cursor the additional idiomatic cast
is needed to silence the warning.
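For illustration, the round trips side by side (variable names invented):

```c
/* Keeps capability metadata intact on CHERI; works everywhere: */
uintptr_t token = (uintptr_t)real_pointer;

/* Integer stuffed into a pointer; null-derived on CHERI, which is fine
   as long as it is never dereferenced (the audio/cursor/gles2 cases): */
void *opaque = (void *)(uintptr_t)some_offset;

/* The bug pattern: strips the validity tag, so the round-tripped
   pointer traps on dereference (the iconv/egl cases): */
void *bad = (void *)(size_t)real_pointer;
```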
WASAPI_WaitDevice is used for audio playback and capture, but needs to
behave slightly differently.
For playback `GetCurrentPadding` returns the padding which is already
queued, so WaitDevice should return when buffer length falls below the
buffer threshold (`maxpadding`).
For capture `GetCurrentPadding` returns the available data which can be
read, so WaitDevice can return as soon as any data is available.
In the old implementation WaitDevice could suddenly hang. This is
because on many capture devices the buffer (`padding`) wasn't filled
fast enough to surpass `maxpadding`. But if at one point (due to unlucky
timing) more than maxpadding frames were available, WaitDevice would not
return anymore.
Issue #3234 is probably related to this.
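A simplified sketch of the split condition (member names follow the prose above, not necessarily the exact backend code):

```c
/* Wait until the device is ready, with opposite conditions for the
   playback and capture directions. */
UINT32 padding = 0;
for (;;) {
    WaitForSingleObjectEx(this->hidden->event, 200, FALSE);
    if (FAILED(IAudioClient_GetCurrentPadding(this->hidden->client, &padding))) {
        break;  /* device lost; let the caller handle it */
    }
    if (this->iscapture) {
        if (padding > 0) break;           /* any data can be read */
    } else {
        if (padding <= maxpadding) break; /* buffer below the threshold: room to queue */
    }
}
```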
On modern CPUs, there's no penalty for using the unaligned instruction on
aligned memory, but now it can vectorize unaligned data too, which even if
it's not optimal, is still going to be faster than the scalar fallback.
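With SSE this amounts to preferring the unaligned intrinsics, e.g.:

```c
#include <xmmintrin.h>

/* _mm_loadu_ps/_mm_storeu_ps tolerate any alignment and cost the same as
   the aligned forms on modern CPUs when the pointer happens to be aligned. */
static void ScaleFloats(float *dst, const float *src, float gain)
{
    const __m128 scale = _mm_set1_ps(gain);
    _mm_storeu_ps(dst, _mm_mul_ps(_mm_loadu_ps(src), scale));
}
```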
Fixes #4532.
While we should normally expect _something_ from the stream based on the
AudioStreamAvailable check, it's possible for a device change to flush the
stream at an inconvenient time, causing this function to return 0.
Thing is, this is harmless. Either data will be NULL and the result won't matter
anyway, or the data buffer will be zeroed out and the output will just be
silence for the brief moment that the device change is occurring. Both scenarios
work themselves out, and testing on Windows shows that this behavior is safe.
Some of the SDL_AudioDevice struct members aren't initialized until after returning from the OpenDevice function. Since Pipewire uses its own processing threads, the callbacks can be entered before all members of SDL_AudioDevice are initialized, such as work_buffer, callbackspec and the processing stream, which creates a race condition. Don't use these members when in the paused state, to avoid potentially using uninitialized values and memory.
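A sketch of the guard (`dst` and `buffer_size` stand in for the already-dequeued Pipewire buffer details):

```c
static void fill_or_silence(SDL_AudioDevice *this, Uint8 *dst, int buffer_size)
{
    if (SDL_AtomicGet(&this->paused)) {
        /* work_buffer, callbackspec and the processing stream may not be
           initialized yet; emit silence and touch nothing else */
        SDL_memset(dst, this->spec.silence, buffer_size);
        return;
    }
    /* normal path: safe to use work_buffer, callbackspec and the stream */
}
```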
This prevents the dsp target from stealing the audio subsystem but not
being able to produce sound, so other audio targets further down the list
can make an attempt instead.
Thanks to Frank Praznik who did a lot of the research on this problem!
A user reported that the mpv video player hangs after attempting to
set an unsupported number of channels with the SDL audio output,
because it thinks it's successfully opened the device. This makes
the failure graceful.
Removes the node nickname from sink/source nodes as it doesn't provide any useful information and names now match those used in Pulseaudio, so any stored configuration data will be compatible between the two audio backends.
Constify the min/max period variables, use a #define for the base clock rate used in the calculations and note that changing the upper limit can have dire side effects as it's a hard limit in Pipewire.
Replace "magic numbers" with #defines, explain the requirements when using the userdata pointer in the node_object struct and a few other minor code and comment cleanups.
Use the 'R' (rear) prefixed designations for the rear audio channels instead of 'S' (surround). Surround designated channels are only used in the 8 channel configuration.
Further refactor the device enumeration code to retrieve the default sink/source node IDs from the metadata node. Use the retrieved IDs to sort the device list so that the default devices are at the beginning and thus are the first reported to SDL.
The latency of source nodes can change depending on the overall latency of the processing graph. Incoming audio must therefore always be buffered to ensure uninterrupted delivery.
The SDL_AudioStream path was removed in the input callback as the only thing it was used for was buffering audio outside of Pipewire's min/max period sizes, and that case is now handled by the omnipresent buffer.
Extend device enumeration to retrieve the channel count and default sample rate for sink and source nodes. This required a fairly significant rework of the enumeration procedure as multiple callbacks are involved now. Sink/source nodes are tracked in a separate list during the enumeration process so they can be cleaned up if a device is removed before completion. These changes also simplify any future efforts that may be needed to retrieve additional configuration information from the nodes.
This uses the mechanism added in emscripten-core/emscripten#10843
which was applied to SDL1 and OpenAL. This adds the same for SDL2.
This also reverts commit 865eaddffed50dbd13e6564c3f73902472cf74e8
which did something similar, but the new mechanism is more effective.
The DJGPP compiler emits many warnings for conflicts between print
format specifiers and argument types. To fix the warnings, I added
`SDL_PRIx32` macros for use with `Sint32` and `Uint32` types. The macros
alias those found in <inttypes.h> or fallback to a reasonable default.
As an alternative, print arguments could be cast to plain old integers.
I leaned slightly toward the current solution as it felt more technically correct,
despite making the format strings more verbose.
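The fallback scheme looks roughly like this (the real definitions live in SDL_stdinc.h):

```c
#ifdef PRId32
#define SDL_PRIs32 PRId32   /* alias the <inttypes.h> macro when available */
#else
#define SDL_PRIs32 "d"      /* reasonable default otherwise */
#endif

/* usage: */
SDL_Log("decoded %" SDL_PRIs32 " frames", (Sint32)frames);
```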
Nia Alarie
The NetBSD kernel's audio resampling code is much simpler and lower quality than libsamplerate.
Presumably, if SDL always performs I/O on the audio device in its native frequency, we can avoid resampling audio in the kernel and let SDL do it with libsamplerate instead.
If we fail to connect to the pa server, we have an assigned context
and mainloop that isn't connected. So, when PULSEAUDIO_pa_context_disconnect
is called, pa asserts and crashes the application.
Assertion 'pa_atomic_load(&(c)->_ref) >= 1' failed at pulse/context.c:1055, function pa_context_disconnect(). Aborting.
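One way to guard against this, sketched (the actual fix may differ):

```c
/* Only disconnect a context that actually reached the ready state;
   unref is safe either way. */
if (context != NULL) {
    if (PULSEAUDIO_pa_context_get_state(context) == PA_CONTEXT_READY) {
        PULSEAUDIO_pa_context_disconnect(context);
    }
    PULSEAUDIO_pa_context_unref(context);
    context = NULL;
}
```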
When converting audio from signed to unsigned values or vice versa,
the silence value chosen by SDL was the value of the device, not
of the stream that the data was being put into. After conversion
this would lead to a very high or low value, making the speaker
jump to an extreme position, leading to an audible noise whenever
creating, destroying or playing silence on a device that required
such conversion.
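A worked example of why the wrong value is audible:

```c
/* Device format AUDIO_U8 (silence byte 0x80), stream format AUDIO_S16.
   Filling the S16 stream with the device's silence byte is wrong: */
Sint16 wrong = (Sint16)0x8080;  /* memset(buf, 0x80, len) pattern: -32640 */
Sint16 right = 0;               /* S16 silence is 0; only U8 uses 0x80 */
```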
Original code assigned MCIMixSetup.ulSamplesPerSec value to it, but it
is just the freq... We now change spec->samples only either if it is 0
or we changed the frequency, by picking a default of ~46 ms at desired
frequency (code taken from SDL_audio.c:prepare_audiospec()).
With this, the crashes I have been experiencing are gone.
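The default being picked, sketched after prepare_audiospec():

```c
/* ~46 ms of frames at the desired frequency, rounded up to a power of two */
int samples = (spec->freq / 1000) * 46;
int power2 = 1;
while (power2 < samples) {
    power2 *= 2;
}
spec->samples = (Uint16)power2;
```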
I _think_ this is the right thing to do; it fixes a .wav file I have here that
has blockalign==2 when channels==2 and bitspersample==16, which otherwise
would fail.
This is only supported on PulseAudio. You can set a description when opening
your audio device that will show up in pavucontrol, which lets you set
per-stream volume levels.
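Usage looks roughly like this (hint name to the best of my recollection):

```c
SDL_SetHint(SDL_HINT_AUDIO_DEVICE_STREAM_NAME, "Background Music");
SDL_AudioDeviceID dev = SDL_OpenAudioDevice(NULL, 0, &want, &have, 0);
/* the stream now appears under that description in pavucontrol */
```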
Fixes Bugzilla #4801.
Anthony Pesch's notes on his patch:
"Currently, the WASAPI backend creates a stream in shared mode and sets the
device's callback size to be half of the shared stream's total buffer size.
This works, but doesn't coordinate well with the actual hardware. The hardware
will raise an interrupt after every period which in turn will signal the
object being waited on inside of WaitDevice. From my empirical testing, the
callback size was often larger than the period size and not a multiple of it,
which resulted in poor latency when trying to time an application based on the
audio callback. The reason for this looked something like:
* The device's callback would be called and the audio buffer would be filled.
* WaitDevice would be called.
* The hardware would raise an interrupt after one period.
* WaitDevice would resume, see that a full callback had not been played and
then wait again.
* The hardware would raise an interrupt after another period.
* WaitDevice would resume, see that a full callback + some extra amount had
been played and then it would again call our callback and this process would
repeat.
The effect of this is that the pacing between subsequent callbacks is poor -
sometimes it's called very quickly, sometimes it's called very late.
By matching the callback's size to the stream's period size, the pacing of
calls to the user callback is improved substantially. I didn't write an actual
test for this, but my use case for this was my Dreamcast emulator
(https://redream.io) which uses the audio callback to help drive the emulation
speed. Without this change and with the default shared stream buffer (which
has a period of ~10ms) I would get frame times that were between ~3-30
milliseconds; after this change I get frame times of ~11-22 milliseconds.
Note, this patch also has a change that removes passing a duration to the
Initialize call. It seems that the default duration used (when 0 is passed)
does typically match up with the duration returned by GetDevicePeriod, however
the Initialize docs say:
> To set the buffer to the minimum size required by the engine thread, the
> client should call Initialize with the hnsBufferDuration parameter set to 0.
> Following the Initialize call, the client can get the size of the resulting
> buffer by calling IAudioClient::GetBufferSize.
This change isn't strictly required, but I made it to hopefully rule out
another source of unexpected latency."
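The relevant call, sketched (`client` and `waveformat` assumed already obtained):

```c
HRESULT hr = IAudioClient_Initialize(client, AUDCLNT_SHAREMODE_SHARED,
                                     AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
                                     0 /* hnsBufferDuration: engine minimum */,
                                     0, waveformat, NULL);
if (SUCCEEDED(hr)) {
    UINT32 bufframes = 0;
    IAudioClient_GetBufferSize(client, &bufframes);  /* resulting buffer size */
}
```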
Fixes Bugzilla #4592.
So if you go into System Preferences on a MacBook and toggle between a pair of
connected bluetooth headphones and built-in internal speakers, SDL will
switch the device it is playing sound through, to match this setting, on the
fly.
Likewise if the default output device is a USB thing and is unplugged; as the
default device changes at the system level, SDL will pick this up and carry
on with the new default. This is different from our unplug detection for
specific devices, as in those cases we want to send the app a disconnect
notification, instead of migrating transparently as we now do for default
devices.
Note that this should also work for capture devices; if the device changes,
SDL will start recording from the new default.
Fixes Bugzilla #4851.
Anthony Pesch
The previous code first configured the period size using
snd_pcm_hw_params_set_period_size_near. Then, it further narrowed the configuration
space by calling snd_pcm_hw_params_set_buffer_size_near using a buffer
size of 2 times the _requested_ period size in order to try and get a
configuration with only 2 periods. If the configured period size was
larger than the requested size, the second call could inadvertently
narrow the configuration space to contain only a single period.
Rather than fixing the call to snd_pcm_hw_params_set_buffer_size_near
to use a size of 2 times the configured period size, the code has been
changed to use snd_pcm_hw_params_set_periods_min in order to more
clearly explain the intent.
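The new intent, sketched (`pcm_handle` and `hwparams` assumed set up):

```c
/* Ask for at least two periods, then pin to the smallest allowed count,
   instead of inferring the period count from a buffer-size request. */
unsigned int periods = 2;
snd_pcm_hw_params_set_periods_min(pcm_handle, hwparams, &periods, NULL);
snd_pcm_hw_params_set_periods_first(pcm_handle, hwparams, &periods, NULL);
```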
(technically, this function never returns an error at this point, but since
it _does_ have an "uhoh, is this corrupt data?" comment that it ignores, we
should probably make sure we handle error cases in the future. :) )
LinGao
We build SDL with the Visual Studio 2017 compiler on Windows Server 2016, but it fails to build due to error LNK2019: unresolved external symbol _memset referenced in function _IMA_ADPCM_Decode on the latest default branch. We found that it can first be reproduced at the ca7283111ad0 changeset. Could you please take a look at this issue? Thanks in advance!
Simon Hug
Attached is a minor cleanup patch. It changes the option name of one hint to something better, puts one or two more checks in, and adds explicit casting where warnings could appear otherwise.
I hope the naming of the hints and their options is acceptable. It would be kind of awkward to change them after they get released with an official SDL version.
Simon Hug
I had a look at this and made some additions to SDL_wave.c.
The attached patch adds many checks and error messages. For some reason I also added A-law and µ-law decoders. Forgot exactly why... but hey, they're small.
The WAVE format is seriously underspecified (at least by the documents that are publicly available on the internet) and it's a shame Microsoft never put something better out there. The language used in them is so loose at times, it's not surprising the encoders and decoders behave very differently. The Windows Media Player doesn't even support MS ADPCM correctly.
The patch also adds some hints to make the decoder more strict at the cost of compatibility with weird WAVE files.
I still think it needs a bit of cleaning up (Not happy with the MultiplySize function. Don't like the name and other SDL code may want to use something like this too.) and some duplicated code may be folded together. It does work in this state and I have thrown all kinds of WAVE files at it. The AFL files also pass with it and some even play (obviously just noise). Crafty little fuzzer.
Any critique would be welcome. I have a fork of SDL with an audio-loadwav branch over here if someone wants to use the commenting feature of Bitbucket:
https://bitbucket.org/ChliHug/SDL
I also cobbled some Lua scripts together to create WAVE test files:
https://bitbucket.org/ChliHug/gendat
janisozaur
There are many cases that SDL's audio conversion routines cannot handle, including a rate that is too low (negative) or too high (impossible to allocate).
This patch aims to report such issues early and handle others in a graceful manner. The "INT32_MAX / RESAMPLER_SAMPLES_PER_ZERO_CROSSING" value is the conservative approach in terms of what can _technically_ be supported, but its value is 4'194'303, or just shy of 4.2MHz. I highly doubt any sane person would use such rates, especially in SDL2, so I would like to drive this limit further down, but would need some assistance to do that, as doing so would have to introduce an arbitrary value. Are you OK with such an approach? What would a good value be? Wikipedia (https://en.wikipedia.org/wiki/High-resolution_audio) lists 96kHz as the highest sampling rate in use, and even if I quadruple it for good measure, to 384kHz, it's still an order of magnitude lower than 4MHz.
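The arithmetic, assuming the constant's value at the time (512, if I recall correctly):

```c
#define RESAMPLER_SAMPLES_PER_ZERO_CROSSING 512  /* value as I recall it */
const int max_rate = INT32_MAX / RESAMPLER_SAMPLES_PER_ZERO_CROSSING;
/* 2147483647 / 512 == 4194303, just shy of 4.2 MHz */
```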
If IMA ADPCM format chunk was too short, InitIMA_ADPCM() parsing it
could read past the end of chunk data. This patch fixes it.
CVE-2019-7578
https://bugzilla.libsdl.org/show_bug.cgi?id=4494
Signed-off-by: Petr Písař <ppisar@redhat.com>
Matt Brocklehurst
We've noticed that if you are playing audio on Windows via the WASAPI interface and you unplug and reconnect the device a few times, the program hangs.
We've debugged the problem down to
static void
WASAPI_WaitDevice(_THIS)
{
... snip ...
if (WaitForSingleObjectEx(this->hidden->event, INFINITE, FALSE) == WAIT_OBJECT_0) {
... snip ...
}
This WaitForSingleObjectEx does not have a timeout defined, so it hangs there forever.
Our suggested fix was to include a timeout of, say, 200 ms.
We have done quite a bit of testing with this fix in place on various hardware configurations and it seems to have resolved the issue.
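The suggested shape of the fix:

```c
/* Bound the wait so a vanished device can't hang us forever. */
if (WaitForSingleObjectEx(this->hidden->event, 200, FALSE) == WAIT_OBJECT_0) {
    /* buffer ready: proceed as before */
}
/* on WAIT_TIMEOUT, loop back around and re-check the device state */
```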
Nia Alarie
The NetBSD audio driver has a few problems. Lots of obsolete code, and extremely bad performance and stuttering.
I have a patch in NetBSD's package system to improve it. This is my attempt to upstream it.
The changes include:
* Removing references to defines which are never used.
* Using the correct structures for playback and recording, previously they were the wrong way around.
* Using the correct types ('struct audio_prinfo' in contrast to 'audio_prinfo')
* Removing the use of non-blocking I/O, as suggested in #3177.
* Removing workarounds for driver bugs on systems that don't exist or use this driver any more.
* Removing all usage of SDL_Delay(1)
* Removing pointless use of AUDIO_INITINFO and tests that expect AUDIO_SETINFO to fail when it can't.
These changes bring its performance in line with the DSP audio driver.
Cameron Gutman
I was trying to use SDL_GetQueuedAudioSize() to ensure my audio latency didn't get too high while streaming data in from the network. If I get more than N frames of audio queued, I know that the network is giving me more data than I can play and I need to drop some to keep latency low.
This doesn't work well on WASAPI out of the box, due to the addition of GetPendingBytes() to the amount of queued data. As a terrible hack, I loop 100 times calling SDL_Delay(10) and SDL_GetQueuedAudioSize() before I ever call SDL_QueueAudio() to get a "baseline" amount that I then subtract from SDL_GetQueuedAudioSize() later. However, because this value isn't actually a constant, this hack can cause SDL_GetQueuedAudioSize() - baselineSize to be < 0. This means I have no accurate way of determining how much data is actually queued in SDL's audio buffer queue.
The SDL_GetQueuedAudioSize() documentation says: "This is the number of bytes that have been queued for playback with SDL_QueueAudio(), but have not yet been sent to the hardware." Yet, SDL_GetQueuedAudioSize() returns > 0 value when SDL_QueueAudio() has never been called.
Based on that documentation, I believe the current behavior contradicts the documented behavior of this function and should be changed in line with Boris's patch.
I understand that exposing the IAudioClient::GetCurrentPadding() value is useful, but a solution there needs to take into account what of that data is silence inserted by SDL and what is actual data queued by the user with SDL_QueueAudio(). Until that happens, I think the best approach is to remove the GetPendingBytes() call until SDL is able to keep track of queued data to make sense of it. This would make SDL_GetQueuedAudioSize() possible to use accurately with WASAPI.
This was meant to migrate CoreAudio onto the same SDL_RunAudio() path that
most other audio drivers are on, but it introduced a bug because it doesn't
deal with dropped audio buffers...and fixing that properly just introduces
latency.
I might revisit this later, perhaps by reworking SDL_RunAudio to allow for
this sort of API better, or redesigning the whole subsystem or something, I
don't know. I'm not super-thrilled that this has to exist outside of the usual
codepaths, though.
Fixes Bugzilla #4481.
Anthony Pesch
Fix snd_device_name_hint return value check
According to the ALSA documentation, snd_device_name_hint returns 0 on
success, otherwise a negative error code. The code previously only
considered -1 to be an error, which let other error codes through
resulting in a segfault when hints (which was NULL) was dereferenced.
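The corrected check, sketched:

```c
/* Any negative return is an error, per the ALSA docs; hints is not
   valid in that case and must not be dereferenced. */
void **hints = NULL;
if (snd_device_name_hint(-1, "pcm", &hints) < 0) {
    return;
}
```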
SDLActivity thread priority is unchanged, by default -10 (THREAD_PRIORITY_VIDEO).
SDLAudio thread priority was -4 (SDL_SetThreadPriority was ignored) and is now -16 (THREAD_PRIORITY_AUDIO).
SDLThread thread priority was 0 (THREAD_PRIORITY_DEFAULT) and is now -4 (THREAD_PRIORITY_DISPLAY).
Only __ARM_NEON is defined with Android NDK and arm64-v8a
Tested on ndk-r18, ndk-r13 and also Xcode.
(Visual Studio needs a different fix).
Fixes Bugzilla #4409.
Include guards were missing in most changed files; I added them, keeping
the same style as other SDL files. In some cases I moved the include
guards around to be the first thing in the header, to take advantage of
any include-guard optimizations the compiler may have.
This means that if you have two devices named "Soundblaster Pro" in your
machine, one will be reported as "Soundblaster Pro" and the other as
"Soundblaster Pro (2)".
This makes it so you can't get into a position where one of your devices can't
be opened because another is sitting on the same name.
Author: Anthony Pesch <inolen@gmail.com>
Date: Fri May 4 20:21:21 2018 -0400
Added the SDL_AUDIO_ALLOW_SAMPLES_CHANGE flag, enabling users of SDL_OpenAudioDevice to get
the sample size of the actual hardware buffer instead of having a stream created to handle
the delta.
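Usage looks like this (callback name hypothetical):

```c
SDL_AudioSpec want, have;
SDL_zero(want);
want.freq = 48000;
want.format = AUDIO_F32SYS;
want.channels = 2;
want.samples = 4096;
want.callback = MyAudioCallback;  /* hypothetical callback */
SDL_AudioDeviceID dev = SDL_OpenAudioDevice(NULL, 0, &want, &have,
                                            SDL_AUDIO_ALLOW_SAMPLES_CHANGE);
/* have.samples now reports the hardware buffer size instead of 4096 */
```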
This would cause problems in various ways, but specifically triggers an
assert when you close a WASAPI capture device in an app running over RDP.
Related to (but not the actual bug in) Bugzilla #3924.
At the HG state abdd17144682, the 64-bit assemblies use the SSE2-based resampler, which produces junk sound when converting through the S32 -> Float32 -> S16 chain. The `NEED_SCALAR_CONVERTER_FALLBACKS` path works perfectly.
If I find the cause of this mistake, I'll send a patch myself.
The previous code attempted to use set_buffer_size / set_period_size
discretely, favoring the parameters which generated a buffer size that was
exactly 2x the requested buffer size. This solution ultimately prioritizes
only the buffer size, which comes at a large performance cost on some machines
where this results in an excessive number of periods. In my case, for a 4096
sample buffer, this configured the device to use 37 periods with a period size
of 221 samples and a buffer size of 8192 samples. With 37 periods, the SDL
Audio thread was consuming 25% of the CPU.
This code has been refactored to use set_period_size and set_buffer_size
together. set_period_size is called first to attempt to set the period to
exactly match the requested buffer size, and set_buffer_size is called second
to further refine the parameters to attempt to use only 2 periods. The
fundamental change here is that the period size / count won't go to extreme
values if the buffer size can't be exactly matched, the buffer size should
instead just increase to the next closest multiple of the target period size
that is supported. After changing this, for a 4096 sample buffer, the device
is configured to use 3 periods with a period size of 4096 samples and a buffer
size of 12288 samples. With only 3 periods, the SDL Audio thread doesn't even
show up when profiling.
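Sketched, for a 4096-sample request (`pcm_handle` and `hwparams` assumed set up):

```c
/* Set the period to the requested size first... */
snd_pcm_uframes_t period = 4096;
snd_pcm_hw_params_set_period_size_near(pcm_handle, hwparams, &period, NULL);

/* ...then refine toward 2 periods; ALSA rounds the buffer up to the
   closest supported multiple of the configured period instead of
   shrinking the period to extreme values. */
snd_pcm_uframes_t bufsize = period * 2;
snd_pcm_hw_params_set_buffer_size_near(pcm_handle, hwparams, &bufsize);
```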
Fixes Bugzilla #4156.
Martin Širokov
Launching an SDL application with SDL_AUDIODRIVER=jack, and then calling SDL_OpenAudioDevice() with whatever parameters fails with an error like this one:
SDL_OpenAudioDevice: Couldn't connect JACK ports: SDL:sdl_jack_output_0 => system:midi_playback_1
This happens because JACK_OpenDevice in src/audio/jack/SDL_jackaudio.c blindly tries to connect to all input ports without checking whether they are for audio or midi.
The fix is to check port types and ignore all non-audio ports. I also removed the devports field from struct SDL_PrivateAudioData, because it's never really used and removing unused ports from it would be a pain.
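The check, sketched (`port` assumed obtained from the port list):

```c
/* Skip anything that isn't an audio port, e.g. MIDI playback ports. */
const char *type = jack_port_type(port);
if (SDL_strcmp(type, JACK_DEFAULT_AUDIO_TYPE) != 0) {
    continue;
}
```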
Jona
The following explains why this bug was happening:
This crash was caused because the audio session was being set as active [session setActive:YES error:&err] when the audio device was actually being CLOSED. In certain cases, setting the audio session to active would fail and the method would return right away. Because of the way the error was handled, we never removed the SDLInterruptionListener, thus leaking it. Later, when an interruption was received, the THIS_ object would contain a pointer to an already released device, causing the crash.
The fix:
When only one device remained open and it was being closed, we needed to set the audio session as NOT active and completely ignore the returned error to successfully release the SDLInterruptionListener. I think the user assumed that open_playback_devices and open_capture_devices would equal 0 when all of them were closed, but the truth is that the open devices count is only decremented at the end of the closing process.
(It gets upset at the -2147483648, thinking this should be an unsigned value
because 2147483648 is too large for an int32, so the negative sign upsets the
compiler.)
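The usual workaround (macro name invented):

```c
/* Build INT32_MIN without ever writing the literal 2147483648. */
#define MY_MIN_SINT32 (-2147483647 - 1)
```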