The function we currently use, IOHIDDeviceRegisterRemovalCallback(), often
fails on Catalina with a "__CFRunLoopModeFindSourceForMachPort returned NULL"
error message. Once a removal callback is missed, we will eventually crash when
the joystick is closed and the now-invalid IOHIDDeviceRef is used.
https://forums.developer.apple.com/thread/124444
Konrad
This was rather trivial to add, but it has been asked for several times before (I did google about it as well).
It should be possible to dynamically change the scaling mode of a texture. It is actually a trivial task, but until now it was only possible with a hint set before creating the texture.
I needed it for my game as well, so I took the liberty of writing it myself.
This patch adds the following functions:
SDL_SetTextureScaleMode(SDL_Texture * texture, SDL_ScaleMode scaleMode);
SDL_GetTextureScaleMode(SDL_Texture * texture, SDL_ScaleMode *scaleMode);
That way you can change texture scaling on the fly.
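A minimal usage sketch (assuming the functions return 0 on success like other SDL setters, and that the SDL_ScaleMode enum exposes a value such as SDL_ScaleModeLinear; the exact enumerator names come from the patch):

#include "SDL.h"

static void set_linear_filtering(SDL_Texture *texture)
{
    SDL_ScaleMode mode;
    if (SDL_SetTextureScaleMode(texture, SDL_ScaleModeLinear) == 0 &&
        SDL_GetTextureScaleMode(texture, &mode) == 0) {
        SDL_Log("texture now uses scale mode %d", (int)mode);
    }
}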
Using a Wii U GameCube USB adapter with multiple controllers attached and
restarting SDL input in a game results in extra joysticks with a NULL name.
HIDAPI_CleanupDeviceDriver() shut down joysticks by iterating through
device->num_joysticks, but each HIDAPI_JoystickDisconnected() decreases
device->num_joysticks and shifts the joysticks array down, resulting in only
half of the controllers being shut down. It worked with only 1 controller
attached, though.
Disconnect HIDAPI device joystick 0 until there are none left.
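A sketch of the resulting shutdown pattern (names mirror the HIDAPI driver, but this is not the literal patch):

/* Each disconnect decrements num_joysticks and shifts the array down,
 * so always detach index 0 instead of walking the array by index. */
while (device->num_joysticks > 0) {
    HIDAPI_JoystickDisconnected(device, device->joysticks[0]);
}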
Message in the log when going to the background:
"call to OpenGL ES API with no current context (logged once per thread)"
This happens because SDL_WINDOWEVENT_MINIMIZED is sent from the Java Activity thread.
It calls SDL_RendererEventWatch(), _WindowEvent() and glFinish() without a current context.
The solution is to move the sending of SDL_WINDOWEVENT_MINIMIZED to the SDL thread.
Added the functions SDL_JoystickFromPlayerIndex(), SDL_JoystickSetPlayerIndex(), SDL_GameControllerFromPlayerIndex(), and SDL_GameControllerSetPlayerIndex()
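A minimal usage sketch of the controller-side pair (the joystick variants work the same way):

#include "SDL.h"

static void assign_player_three(void)
{
    SDL_GameController *pad = SDL_GameControllerOpen(0);
    if (pad) {
        SDL_GameControllerSetPlayerIndex(pad, 2);   /* player index 2 == "player 3" */
        /* later, look the controller up again by its player index */
        SDL_GameController *same = SDL_GameControllerFromPlayerIndex(2);
        SDL_assert(same == pad);
    }
}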
For some obscure reason, the order in which the libdrm/libgbm libraries
are loaded matters.
Without this fix, the first call to check_modesetting() will work and
load then unload all symbols properly, but the second call to this
function will lock up as soon as dlopen() is called on libdrm.
Swapping the order in which the libdrm and libgbm libraries are loaded
is enough to fix (or work around?) this issue.
Fixes Bugzilla #4891:
https://bugzilla.libsdl.org/show_bug.cgi?id=4891
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Aaron Barany
I realized I made a minor mistake in my patch: I changed the constructor prototype for SDL_DisplayData, but didn't update the declaration in the .h file. The compiler and linker don't complain, but it would probably be best to fix it in case a later change runs into a problem from the mismatch. I have attached a patch to fix this.
meyraud705
On a DualShock 4 controller using the hidapi driver, calling SDL_JoystickRumble with a duration that is too long (SDL_HAPTIC_INFINITY, for example) causes the rumble to stop immediately.
This happens because of an integer overflow on line 301 of SDL_hidapi_ps4.c
(https://hg.libsdl.org/SDL/file/a3077169ad23/src/joystick/hidapi/SDL_hidapi_ps4.c#l301), which sets the expiration time in the past.
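A sketch of one overflow-safe way to set the expiration (the field and local names mirror the driver; this is not the literal fix):

Uint32 duration = duration_ms;
if (duration > (Uint32)SDL_MAX_SINT32) {
    duration = (Uint32)SDL_MAX_SINT32;   /* ~24.8 days, effectively infinite */
}
ctx->rumble_expiration = SDL_GetTicks() + duration;
/* ...and compare with SDL_TICKS_PASSED() so tick wraparound stays safe. */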
When we initialize the controller it has an internal rumble sequence number, and if our rumble sequence number doesn't match that, rumble won't happen. To fix that we cycle through the range of sequence numbers, and at some point we'll match up with the controller's sequence number and it'll roll forward until it matches our next rumble sequence number.
Aaron Barany
There appears to be no way to directly access the display DPI on iOS, so as an approximation the DPI for the iPhone 1 is used as a base value and is multiplied by the screen's scale. This should at least give a ballpark number for the various screen scales. (based on https://stackoverflow.com/questions/25756087/detecting-iphone-6-6-screen-sizes-in-point-values it appears that both 2x and 3x are used)
I have updated the patch to use a table of current devices and use a computation as a fallback. I have also updated the fallback computation to be more accurate.
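The fallback boils down to simple arithmetic; a sketch (screen_scale stands in for the value queried from UIKit, with 163 ppi being the original iPhone's density):

static float EstimateDisplayDPI(float screen_scale)
{
    const float iphone1_dpi = 163.0f;   /* base density of the original iPhone */
    return iphone1_dpi * screen_scale;  /* e.g. 326 for @2x, 489 for @3x devices */
}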
Aaron Barany
Add the SDL_HINT_VIDEO_EXTERNAL_CONTEXT hint to notify SDL that the graphics context is external. This disables the automatic context save/restore behavior on Android and avoids using OpenGL by default when SDL_WINDOW_VULKAN isn't set.
When the application wishes to manage the OpenGL contexts on Android, this avoids cases where SDL unbinds the context and creates new contexts, which can interfere with the application's operation.
When using Vulkan and Metal renderer implementations, this avoids SDL forcing OpenGL to be enabled on certain platforms. While the SDL_WINDOW_VULKAN flag can be used to achieve the same thing, it also causes Vulkan to be loaded. If the application uses Vulkan directly, this is not necessary, and it fails window creation when using Metal due to Vulkan not being present (assuming MoltenVK isn't installed).
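A usage sketch (the hint name comes from the patch; using "1" to enable it is an assumption):

SDL_SetHint(SDL_HINT_VIDEO_EXTERNAL_CONTEXT, "1");
SDL_Window *window = SDL_CreateWindow("app", SDL_WINDOWPOS_UNDEFINED,
                                      SDL_WINDOWPOS_UNDEFINED, 1280, 720, 0);
/* SDL now neither saves/restores a GL context on Android suspend nor forces an
 * OpenGL-capable window on platforms where the app renders with Vulkan or Metal. */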
Aaron Barany
Since OpenGL is deprecated on iOS, it is advantageous to be able to remove all OpenGL related code when building SDL for iOS. This patch adds the necessary #if checks to compile in this case.
SDL_SendWindowEvent will only send a RESTORED event if the window has
the minimized or maximized flag set. However, for a SHOWN event, it
will clear the minimized flag. Since the SHOWN event was being sent
first for a MapNotify event, the RESTORED event would never be sent.
Swapping the SendWindowEvent calls around fixes this.
https://bugzilla.libsdl.org/show_bug.cgi?id=4821
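A sketch of the reordered MapNotify handling (data->window follows the naming used in SDL_x11events.c; this is a sketch, not the exact diff):

/* RESTORED must go out while the minimized flag is still set; SHOWN clears it. */
SDL_SendWindowEvent(data->window, SDL_WINDOWEVENT_RESTORED, 0, 0);
SDL_SendWindowEvent(data->window, SDL_WINDOWEVENT_SHOWN, 0, 0);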
Calling open() on input devices can generate device I/O which blocks
the main thread and causes dropped frames. Using stat() we can avoid
opening anything unless /dev/input has changed since we last polled.
We could have used something fancy like inotify, but it didn't seem
worth the added complexity for this uncommon non-udev case.
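A sketch of the stat()-based check described above (the helper name and cached mtime are illustrative, not the exact SDL code):

#include <sys/stat.h>

static time_t last_input_mtime;

static int InputDirectoryChanged(void)
{
    struct stat sb;
    if (stat("/dev/input", &sb) != 0) {
        return 1;                        /* can't tell, fall back to scanning */
    }
    if (sb.st_mtime != last_input_mtime) {
        last_input_mtime = sb.st_mtime;  /* a device node was added or removed */
        return 1;
    }
    return 0;                            /* nothing changed, skip the open() calls */
}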
Eric Shepherd
Currently, SDL on Cocoa macOS creates a rudimentary menu bar programmatically if none is already present when the app is registered during setup.
SDL could be much more easily and flexibly used on macOS if, upon finding that no menus are currently in place, it first looked for the application's main menu nib or xib file and, if found, loaded that instead of programmatically building the menus.
This would then let developers simply drop in a nib file with a menu bar defined in it and it would be installed and used automatically.
Attached is a patch that does just this. It changes the SDL_cocoaevents.m file to:
* In Cocoa_RegisterApp(), before calling CreateApplicationMenus(), it calls a new function, LoadMainMenuNibIfAvailable(), which attempts to load and install the main menu nib file, using the nib name fetched from the Info.plist file. If that succeeds, LoadMainMenuNibIfAvailable() returns true; otherwise false.
* If LoadMainMenuNibIfAvailable() returns false, CreateApplicationMenus() is called to programmatically build the menus as before.
* Otherwise, we're done, and using the menus from the nib/xib file!
I made these changes to support a project I'm working on, and felt they were useful enough to be worth offering them for uplift. They should have zero impact on existing projects' behavior, but make Cocoa SDL development miles easier.
(note from PulkoMandy on Bugzilla #4442 about why this is a desirable patch:
"The event mask: note that the window and GL view run in their own thread
which I don't expect to be too much CPU bound, and will quickly pop these
messages and forward them to the main thread in our SDL code. Therefore the
B_NO_POINTER_HISTORY should be no problem, and is the default on Haiku
anyway (it was not in BeOS, but we changed that and added a
B_FULL_POINTER_HISTORY flag to request the old behavior explicitly). So, this
seems fine.")
Partially fixes Bugzilla #4442.
Michael Roe
The mappings for keyboard scancodes on Linux do not include keypad left and right parentheses (used on some Microsoft keyboards), keypad plus/minus, LANG1 and LANG2 (used on Korean keyboards), XF86MenuKB, and F20 (remapped to Audio Mic Mute in the usual X11 config).
Solra Bizna
I have written a program that, in the event that the user requests more MSAA samples than their hardware supports, attempts to gracefully fall back to the best MSAA available. This code works with my conventional OpenGL renderer, but if I change nothing about the code except to make it request an OpenGL ES profile instead, Xlib kills the program with an error that looks like:
X Error of failed request: BadWindow (invalid Window parameter)
Major opcode of failed request: 4 (X_DestroyWindow)
Resource id in failed request: 0x5c00008
Serial number of failed request: 188
Current serial number in output stream: 193
To trigger the bug, attempt to create a window with the SDL_WINDOW_OPENGL flag, with SDL_GL_CONTEXT_PROFILE_MASK set to SDL_GL_CONTEXT_PROFILE_ES, and with SDL_GL_MULTISAMPLESAMPLES set to any unsupported value. SDL_CreateWindow properly returns NULL, but at this point the program is already doomed. Xlib will shortly terminate the program with an error. Calling SDL_CreateWindow again will immediately trigger this termination.
I have attached a skeletal program that reproduces this bug for me. Replacing SDL_GL_CONTEXT_PROFILE_ES with SDL_GL_CONTEXT_PROFILE_COMPATIBILITY avoids the bug (but, obviously, doesn't create an OpenGL ES context).
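A minimal reproduction along the lines of the attached skeleton (assumes an X11 system whose GL ES setup rejects 16x MSAA):

#include "SDL.h"

int main(int argc, char *argv[])
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_ES);
    SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
    SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 16);  /* deliberately unsupported */
    SDL_Window *win = SDL_CreateWindow("msaa", 0, 0, 640, 480, SDL_WINDOW_OPENGL);
    if (!win) {
        SDL_Log("first create failed as expected: %s", SDL_GetError());
        /* before the fix, simply creating another window is enough for Xlib
         * to terminate the process with the BadWindow error */
        SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 0);
        win = SDL_CreateWindow("msaa", 0, 0, 640, 480, SDL_WINDOW_OPENGL);
    }
    SDL_Quit();
    return 0;
}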
As I suspected, the problem was with XDestroyWindow being called twice on the same window. The X11_CreateWindow function in src/video/x11/SDL_x11window.c calls SetupWindowData. If initialization fails after that point, XDestroyWindow gets called on the window by a subsequent call to X11_DestroyWindow. But later in the same function, if and only if a GLES context is requested and initializing it fails, X11_XDestroyWindow (which wraps XDestroyWindow) is manually called. Shortly after, the intended call to X11_DestroyWindow occurs, which attempts to destroy the same window again. Boom.
(The above confusing summary involves three separate, similarly-named functions: XDestroyWindow, X11_DestroyWindow, X11_XDestroyWindow)
I have attached a simple patch that removes the redundant X11_XDestroyWindow calls. I've tested that XDestroyWindow still gets called for the windows in question, and that it only gets called once.
- _num_clips was not set in the constructor, so a NULL _clips could be
mistakenly dereferenced.
- As _clips is accessible outside the class, it is not a good idea to
free/reallocate it. Try to limit this by reallocating only when it needs to
grow.
Partially fixes Bugzilla #4442.
warning: either cast from 'int' to 'size_t' (aka 'unsigned long') is ineffective, or there is loss of precision before the conversion [bugprone-misplaced-widening-cast]
This can happen if a window is still grabbed when we try to move it, or if
the X11 ecosystem is just in a bad mood, I guess.
This makes sure that SDL will report the correct position for a window;
otherwise, SDL_GetWindowPosition will just report whatever the last
SDL_SetWindowPosition call requested, even if the window didn't actually move.
Fixes Bugzilla #4646.
Much of the heavy lifting of this optimization comes from the Pixman
project, which is distributed under an MIT-style license. As far as possible,
these elements have been relicensed to the zlib license.
Fixes an issue in macOS 10.15 where the displayed content would move up after entering, exiting and re-entering exclusive fullscreen when certain display modes were used (bug #4822).
Bug #3949 is also related to this change.
Use eglGetProcAddress for everything on EGL >= 1.5. Try SDL_LoadFunction first
for EGL <= 1.4 in case it's a core symbol, and as a fallback if
eglGetProcAddress fails. Finally, for EGL <= 1.4, fall back to
eglGetProcAddress to catch extensions not exported from the shared library.
(Maybe) Fixes Bugzilla #4794.
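A sketch of that lookup order (the EGL version arguments and library handle stand in for SDL's internal EGL state):

#include <EGL/egl.h>
#include "SDL.h"

static void *GetEGLProc(void *egl_dll_handle, int egl_major, int egl_minor,
                        const char *proc)
{
    void *fn = NULL;
    if (egl_major > 1 || (egl_major == 1 && egl_minor >= 5)) {
        return (void *)eglGetProcAddress(proc);   /* EGL 1.5+: valid for everything */
    }
    fn = SDL_LoadFunction(egl_dll_handle, proc);  /* EGL <= 1.4: core symbols first */
    if (!fn) {
        fn = (void *)eglGetProcAddress(proc);     /* then extensions */
    }
    return fn;
}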
Sylvain
Seems to be a regression in this commit: https://hg.libsdl.org/SDL/rev/7fdbffd47c0e
SDL_CalculatePitch() was using format->BytesPerPixel; now it uses SDL_BYTESPERPIXEL().
The underlying issue is that "surface->format->BytesPerPixel" is *not* always the same as SDL_BYTESPERPIXEL(format):
BytesPerPixel is defined as format->BytesPerPixel = (bpp + 7) / 8;
vs
#define SDL_BYTESPERPIXEL(format) ... (format & 0xff)
Because of the SDL_pixels.h format definitions, one gives a BytesPerPixel of 1, the other 0.
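A worked example for a 1 bit-per-pixel indexed format:

#include "SDL.h"

static void show_mismatch(void)
{
    int from_struct = (1 + 7) / 8;                                  /* == 1 */
    int from_macro  = SDL_BYTESPERPIXEL(SDL_PIXELFORMAT_INDEX1LSB); /* == 0 */
    SDL_Log("BytesPerPixel %d vs SDL_BYTESPERPIXEL %d", from_struct, from_macro);
}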
"This patch does the following:
* Instead of SDL_FillRects calling SDL_FillRect in a loop the opposite
happens -- SDL_FillRect (a specific case) calls SDL_FillRects (a general case)
with a count of 1
* The switch/case block is moved out of the loop -- it modifies the color
once and stores the fill routine in a pointer which is then used throughout
the loop"
Fixes Bugzilla #4674.
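A sketch of the inversion (simplified; the real code also handles clipping and error checks):

int SDL_FillRect(SDL_Surface *dst, const SDL_Rect *rect, Uint32 color)
{
    SDL_Rect full;
    if (!rect) {                 /* NULL means "fill the whole surface" */
        full.x = full.y = 0;
        full.w = dst->w;
        full.h = dst->h;
        rect = &full;
    }
    return SDL_FillRects(dst, rect, 1, color);   /* general case, count of 1 */
}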
The 10 ms delay effectively caps input polling at 100 Hz and rendering
at 100 FPS if applications use these functions in their event loop. The
delay may also lead to dropped frames even at 60 FPS if applications are
unlucky enough to hit the delay and rendering takes longer than 6 ms.
fix building with Mesa 19.2
With Mesa 19.2 building fails with:
/include/GLES/gl.h:63:25: error: conflicting types for 'GLsizeiptr'
The same type is defined in include/SDL_opengl.h for OpenGL and the two
headers should not be included at the same time.
This was just never noticed because the same header guard '__gl_h_' was
used. This was changed in Mesa, and the result is this error.
Fix this the same way GLES2 already handles this: Don't include the GLES
header when the OpenGL header was already included.
(https://hg.libsdl.org/SDL/rev/6a3670d6108d)
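A sketch of the guard (assuming SDL_opengl_h_ is SDL_opengl.h's include guard; the real patch follows whatever mechanism GLES2 already uses):

/* In the header that pulls in the Khronos GLES 1 declarations: */
#ifndef SDL_opengl_h_          /* desktop SDL_opengl.h not already included */
#include <GLES/gl.h>           /* would otherwise redefine GLsizeiptr on Mesa 19.2 */
#include <GLES/glext.h>
#endif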
The X11 target sets mouse->last_x and last_y in EnterNotify and then calls
SDL_SendMouseMotion(), which throws away the new position because it matches
the mouse->last_x and last_y we just set, meaning that if the pointer is
in the window when it is created, SDL_GetMouseState() will report a position of
0,0 until a MotionNotify event (the pointer moves) arrives and corrects the
mouse state.
Mostly fixes Bugzilla #1612.
The SDL_USE_LIBDBUS define is set inside SDL_dbus.h, therefore the
circular dependency made it impossible for this feature to be enabled.
Instead, guard SDL_dbus.h based on the autoconf variable HAVE_DBUS_DBUS_H.
Additionally, fix one of the rtkit comments. CAP_SYS_NICE isn't required
to achieve high priority, but there is some scheduler configuration that rtkit
needs the app to set up.
The Offscreen video driver is intended for headless rendering and also
allows multiple GPUs to be used for headless rendering.
Currently it only supports EGL (OpenGL / ES) or framebuffers.
Adds a hint to specify which EGL device to use: SDL_HINT_EGL_DEVICE
Adds testoffscreen.c, which can be used to test the backend.
Disabled by default for now.
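A usage sketch (assuming the driver registers under the name "offscreen" and that the hint takes a device index; both are assumptions beyond what the text above states):

SDL_SetHint(SDL_HINT_EGL_DEVICE, "0");            /* pick the first EGL device */
SDL_setenv("SDL_VIDEODRIVER", "offscreen", 1);    /* driver is disabled by default */
if (SDL_Init(SDL_INIT_VIDEO) == 0) {
    /* create a window and GL context as usual; nothing appears on screen */
}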
Background:
Chengdu Haiguang IC Design Co., Ltd. (Hygon) is a joint venture
between AMD and Haiguang Information Technology Co., Ltd., which aims at
providing high-performance x86 processors for the Chinese server market.
Its first-generation processor, codenamed Dhyana, originates from AMD
technology and shares most of the architecture with AMD's family 17h,
but has a different CPU vendor ID ("HygonGenuine") and family series
number (Family 18h).
The related Hygon kernel patch can be found at:
http://lkml.kernel.org/r/5ce86123a7b9dad925ac583d88d2f921040e859b.1538583282.git.puwen@hygon.cn
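For reference, a sketch of how such a vendor string can be recognized (GCC/Clang CPUID intrinsics; not the actual SDL change):

#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {0};
    if (__get_cpuid(0, &eax, &ebx, &ecx, &edx)) {
        memcpy(vendor + 0, &ebx, 4);   /* vendor string is returned in EBX,EDX,ECX */
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);
    }
    printf("vendor: %s, Hygon: %d\n", vendor, strcmp(vendor, "HygonGenuine") == 0);
    return 0;
}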
The LE transformation for vec_perm has an implicit assumption that the
permutation is being used to reorder vector elements (in this case 4-byte
integer word elements), not to reorder bytes within those elements. Although
this is legal behavior, it is not anticipated by the transformation performed
by the compilers.
This causes a pygame-1.9.1 test failure on PPC64LE because blitted pixmaps are
corrupted there, due to how SDL uses vec_perm().
From RedHat / Fedora: https://bugzilla.redhat.com/show_bug.cgi?id=1392465
Original patch was provided by: Menanteau Guy <menantea@linux.vnet.ibm.com>
If KMSDRM_drmModeSetCrtc is called when the swap interval is
set to 0, the driver behaves as though vertical sync is engaged by
limiting framerate to the refresh rate, but performance is much worse
than with vertical sync enabled.
Resolve this issue by ensuring that the CRTC is only set up once,
and KMSDRM_drmModePageFlip is called, albeit without any follow-up
queueing or waiting for flips.
Daniel Drake
A long time ago, it was possible to play neverball on Linux using the accelerometer found in HP laptops.
The kernel exposes the accelerometer as a joystick (/dev/input/jsX) as well as an evdev device (/dev/input/eventX). I guess it worked fine when SDL was using the js interface, but then stopped working here: http://hg.libsdl.org/SDL/rev/fdaeea9e7567
Looking at current code which uses udev to discover joysticks, it looks for the udev tag ID_INPUT_JOYSTICK.
However udev's internal input_id logic specifically tags accelerometers as ID_INPUT_ACCELEROMETER and nothing else.
This looks like a good fit for SDL_HINT_ACCELEROMETER_AS_JOYSTICK.
Ozkan Sezer
As for the issue: this bmp reports bpp=0, therefore SDL_CalculatePitch()
returns pitch==0, which is then fed to SDL_malloc() (which is malloc()),
and malloc(0) returns _something_ which is not NULL but not something
that we expect. Then testsprite.c:LoadSprite() accesses the pixels
as *(Uint8*)pixels, which valgrind reports as:
==15533== Invalid read of size 1
==15533== at 0x8048C08: LoadSprite (testsprite.c:45)
==15533== by 0x80492FC: main (testsprite.c:224)
==15533== Address 0x449e588 is 0 bytes after a block of size 0 alloc'd
==15533== at 0x40072B2: malloc (vg_replace_malloc.c:270)
==15533== by 0x4045719: SDL_CreateRGBSurface (SDL_surface.c:126)
==15533== by 0x40403C1: SDL_LoadBMP_RW (SDL_bmp.c:237)
==15533== by 0x8048BB2: LoadSprite (testsprite.c:36)
==15533== by 0x80492FC: main (testsprite.c:224)
Besides, valgrind also reports this:
==15533== Conditional jump or move depends on uninitialised value(s)
==15533== at 0x40403F3: SDL_LoadBMP_RW (SDL_bmp.c:247)
==15533== by 0x8048BB2: LoadSprite (testsprite.c:36)
==15533== by 0x80492FC: main (testsprite.c:224)
An easy/quick solution would be to early-reject a bmp with 0 bpp in SDL_bmp.c:SDL_LoadBMP_RW().
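A sketch of that early reject (biBitCount is the field SDL_LoadBMP_RW() reads from the BITMAPINFOHEADER; the error-handling pattern mirrors SDL_bmp.c but this is not the literal patch):

if (biBitCount == 0) {
    SDL_SetError("BMP file with bogus bit count (0 bpp)");
    was_error = SDL_TRUE;
    goto done;
}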