Manuel Alfayate Corchete
I noticed pt2-clone had problems with its optional hardware mouse on the KMSDRM backend: the cursor had a transparent block around it.
So I investigated, and it seems that a GBM cursor needs its pixels to be alpha-premultiplied instead of straight-alpha.
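For illustration, a minimal sketch of that conversion (the helper name is hypothetical, not SDL's actual code): each color channel is scaled by the alpha before upload.

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical helper: convert straight-alpha ARGB8888 pixels to
     * premultiplied alpha before uploading them to the cursor buffer. */
    static void PremultiplyARGB(uint32_t *pixels, size_t count)
    {
        size_t i;
        for (i = 0; i < count; i++) {
            uint32_t p = pixels[i];
            uint32_t a = (p >> 24) & 0xFF;
            uint32_t r = ((p >> 16) & 0xFF) * a / 255;
            uint32_t g = ((p >> 8) & 0xFF) * a / 255;
            uint32_t b = (p & 0xFF) * a / 255;
            pixels[i] = (a << 24) | (r << 16) | (g << 8) | b;
        }
    }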
Also, I was previously relying on "manual testing" for the cursor size, but it's far better to use whatever size the DRM driver recommends via drmGetCap(): any working driver should make a size recommendation there, so that's what we use now. I made this decision after finding that the AMDGPU driver reported cursor sizes as working that appeared garbled on screen; only the recommended cursor size works.
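A sketch of the query, assuming a valid DRM file descriptor (the helper name is hypothetical; drmGetCap() and the DRM_CAP_CURSOR_* capabilities come from xf86drm.h):

    #include <xf86drm.h>
    #include <stdint.h>

    /* Hypothetical helper: ask the driver for its recommended cursor
     * size; fall back to the traditional 64x64 if it reports nothing. */
    static void GetRecommendedCursorSize(int drm_fd, uint64_t *w, uint64_t *h)
    {
        if (drmGetCap(drm_fd, DRM_CAP_CURSOR_WIDTH, w) != 0) {
            *w = 64;
        }
        if (drmGetCap(drm_fd, DRM_CAP_CURSOR_HEIGHT, h) != 0) {
            *h = 64;
        }
    }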
This fixes a case where you render to the backbuffer, then render to a render
target, set the current target back to the backbuffer, and then present
without drawing anything else; in this circumstance, the Present command
would never happen.
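A minimal repro of that sequence, with renderer and window setup omitted:

    /* Assumes 'renderer' was created earlier. */
    SDL_Texture *target = SDL_CreateTexture(renderer,
                                            SDL_PIXELFORMAT_RGBA8888,
                                            SDL_TEXTUREACCESS_TARGET,
                                            256, 256);
    SDL_RenderClear(renderer);              /* draw to the backbuffer */
    SDL_SetRenderTarget(renderer, target);  /* switch to the render target */
    SDL_RenderClear(renderer);              /* draw to the target */
    SDL_SetRenderTarget(renderer, NULL);    /* back to the backbuffer... */
    SDL_RenderPresent(renderer);            /* ...present with no further
                                               draw calls */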
Fixes Bugzilla #5011.
This is handled in the higher-level SDL_GL_LoadLibrary().
All uses of SDL_EGL_LoadLibrary (which calls the "Only" version,
SDL_EGL_LoadLibraryOnly) are just target-specific wrappers for their own
GL_LoadLibrary hook, with two exceptions which now handle driver_loaded
correctly (although it's questionable whether these
init-if-no-one-did-it-correctly-already code blocks should exist at all,
fwiw).
Fixes Bugzilla #5190.
jackmacwindowslinux
I'm testing my application that uses SDL2 on the new Apple Silicon Macs. I downloaded the SDL 2.0.12 source code from the website and tried to build it. The first issue I ran into was that it always built OpenGL ES, even when --disable-video-opengles was passed to configure. The OpenGL ES headers do not seem to be present in the Apple Silicon macOS SDK; only the iOS SDK has them. Then I had problems with the joystick driver, where some classes used on iOS were not available on macOS.
After looking through the configure.ac script a bit, I found that iOS targets are selected when the build host matches "arm*-apple-darwin*". Clang on macOS 11.0 on arm64 reports the host as "arm64-apple-darwin20.0.0", which matches the iOS target. This means that ARM Mac compilation will always be detected as iOS. Unfortunately, there doesn't seem to be an easy way to detect Mac vs. iOS targets, since they now both use the same triplet & compiler for building.
I'm not sure what the best way to fix this is, but maybe there could be an additional target flag to specify whether to build for macOS or iOS? This might break compatibility, though: with this approach, either all old scripts that used configure to build for iOS fail, or all new builds on macOS without a flag fail (silently?).
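For reference, one way to tell the two platforms apart at compile time (a source-level check, not a fix for configure itself) is the SDK's TargetConditionals.h macros, which still distinguish them even though the triplet no longer does:

    /* The triplet is the same, but the SDK still knows the platform. */
    #include <TargetConditionals.h>

    #if TARGET_OS_OSX
        /* building for macOS, including arm64 Macs */
    #elif TARGET_OS_IOS
        /* building for iOS */
    #endif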
Instead of creating an X11 connection to test that X11 is available,
closing it, and then reconnecting for real, use a single connection
for both purposes.
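A minimal sketch of the new flow (the function name is hypothetical):

    #include <X11/Xlib.h>

    /* One connection serves both the availability check and real use;
     * no close-and-reconnect. */
    static Display *ConnectToX11(void)
    {
        Display *display = XOpenDisplay(NULL);
        if (display == NULL) {
            return NULL;  /* X11 is not available */
        }
        /* Previously: XCloseDisplay(display) here, followed by a second
         * XOpenDisplay() "for real". Now this connection is kept. */
        return display;
    }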
The X11 connection retry delay mechanism in the case where X11 is
dynamically loaded has been removed. It was only necessary to avoid
authentication token reuse from the XOpenDisplay call that used to
exist in X11_Available. Now that this call is only made once, it
is no longer needed.
Also drop unused and inapplicable code from a comment.
***
The two are only ever called together, and combining them makes it possible
to eliminate redundant symbol loading and redundant attempts to connect
to a display server.
For systems without strlcpy and strlcat, just declare them as if they exist;
the analyzer possibly still knows the details of these functions and can
utilize that in its analysis.
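A sketch of the idea, guarded by the analyzer's predefined macro so real builds are unaffected:

    /* Only visible to the Clang static analyzer; real builds keep using
     * whatever the platform (or SDL's own replacements) provides. */
    #if defined(__clang_analyzer__)
    #include <stddef.h>
    size_t strlcpy(char *dst, const char *src, size_t size);
    size_t strlcat(char *dst, const char *src, size_t size);
    #endif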
Most of this patch was from meyraud705 at gmail and Martin Gerhardy. Thanks!
Fixes Bugzilla #5163.
Apparently the "-x objective-c" made it down to the linker, who then treats
the .o file as Objective-C source code. Apparently the -ObjC argument does
the same thing but gets ignored by the linker.
Fixes Bugzilla #4988.
Manuel Alfayate Corchete
This small patch fixes the KMSDRM_CreateSurfaces() call in KMSDRM_CreateWindow(), which was segfaulting deeper in SDL internals because the windata->viddata pointer wasn't set before the call. So this patch simply sets that pointer first.
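In simplified form, the ordering the fix enforces (names follow the SDL KMSDRM backend; surrounding code omitted):

    /* Simplified sketch. The assignment must come first: */
    windata->viddata = viddata;
    if (KMSDRM_CreateSurfaces(_this, window) < 0) {
        return -1;  /* previously this crashed inside SDL instead */
    }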
Now, LÖVE2D works perfectly well on the Raspberry Pi 3, instead of just segfaulting.
Ellie
I just tripped over this: stb_image, when asked for 3 channels at 8 bits each, actually returns them as 3 bytes per pixel with no alignment, so 4 pixels are 12 bytes with no padding (0...2, 3...5, 6...8, and 9...11). I would have naively expected this to be called RGB888 or BGR888, since there is no "dead" unused byte as I would expect for something called e.g. RGBX8888.
However, SDL2's SDL_PIXELFORMAT_BGR888 uses 4 bytes, same as SDL_PIXELFORMAT_BGRX8888, even though the latter appears to be a longer storage format - which it isn't, internally. It's just swapped, in byte order X, B, G, R (instead of BGRX). So why isn't the macro name also swapped, as "XBGR888" instead of just "BGR888"?
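A quick way to see this from the API itself, using SDL_PixelFormatEnumToMasks() (the values in the comment are what I'd expect on a typical build):

    #include "SDL.h"
    #include <stdio.h>

    int main(void)
    {
        int bpp;
        Uint32 rmask, gmask, bmask, amask;
        SDL_PixelFormatEnumToMasks(SDL_PIXELFORMAT_BGR888, &bpp,
                                   &rmask, &gmask, &bmask, &amask);
        /* Expected: 4 bytes per pixel with Rmask=0x000000FF,
         * Gmask=0x0000FF00, Bmask=0x00FF0000, Amask=0 -- i.e. an
         * unused high byte, an XBGR-style layout. */
        printf("bytes=%u R=%08X G=%08X B=%08X A=%08X\n",
               (unsigned)SDL_BYTESPERPIXEL(SDL_PIXELFORMAT_BGR888),
               rmask, gmask, bmask, amask);
        return 0;
    }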
I find the formats therefore named inconsistently, and unless there is a reason for this I suggest these changes:
1. deprecate SDL_PIXELFORMAT_BGR888 in favor of a new SDL_PIXELFORMAT_XBGR8888
and
2. deprecate SDL_PIXELFORMAT_RGB888 in favor of a new SDL_PIXELFORMAT_XRGB8888
JackBoosY
In src/video/winrt/SDL_winrtgamebar.cpp line 55:
virtual HRESULT STDMETHODCALLTYPE add_VisibilityChanged(
__FIEventHandler_1_IInspectable *handler,
Windows::Foundation::EventRegistrationToken *token) = 0;
The macro __FIEventHandler_1_IInspectable is defined in windows.foundation.h (Windows 10 SDK 10.0.17763.0) line 3576:
#define __FIVector_1_Windows__CFoundation__CPoint ABI::Windows::Foundation::Collections::__FIVector_1_Windows__CFoundation__CPoint_t
but no longer exists in Windows 10 SDK 10.0.19041.0.
After searching for this macro in the SDK include path, I found that it was defined in many header files, but it should be replaced with the definition in windows.system.h.
On some systems, GetClipCursor() impacts performance when called frequently, so only call it every once in a while to make sure we haven't lost our capture.
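A sketch of the throttling idea; the interval and names here are illustrative, not SDL's actual values:

    #include <windows.h>

    static DWORD last_clip_check;  /* tick count of the last query */

    static void CheckClipCursor(const RECT *expected)
    {
        RECT rect;
        DWORD now = GetTickCount();

        if (now - last_clip_check < 3000) {
            return;  /* checked recently; skip the expensive query */
        }
        last_clip_check = now;

        if (GetClipCursor(&rect) && !EqualRect(&rect, expected)) {
            ClipCursor(expected);  /* capture was lost; restore it */
        }
    }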