We might want to use ssize_t as @Guldoman suggested, but that's a larger internal API change, and still requires casting of the SDL_utf8strnlen() result.
Fixes https://github.com/libsdl-org/SDL/pull/5821
This allows us to handle controllers that use the Xbox protocol but look like Nintendo Switch or PlayStation controllers, like the Qanba Dragon Arcade Stick in PC mode.
This is done so that we can disable LTO for these two functions when
building with MSVC.
This works around a limitation of Link Time Code Generation (LTCG):
code generation might emit a new reference to memset after linking
has started, so LTCG must make assumptions about where memset is
defined, which is normally the C runtime.
Some compositors will provide 'nicer' / 'human readable' output descriptions via the xdg-output protocol. Use these description strings, when available, instead of the model name provided by wl_output. On compositors such as GNOME where this is provided, the display names provided to applications by SDL will now match those in the desktop display settings panel. On compositors where this data isn't provided, the old behavior of using the model string provided by wl_output remains unchanged.
Additionally, per the protocol spec, output data provided by xdg-output should supersede wl_output data, so this is the recommended behavior in general.
If the width is sufficiently ludicrous, then calculating the pitch or
the image size could conceivably overflow a signed integer, which
is undefined behaviour. Calculate in the unsigned size_t domain, with
overflow checks.
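A minimal sketch of the idea, with hypothetical helper and parameter names rather than SDL's exact code:

    #include <limits.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Do the arithmetic in size_t and reject anything that would not fit
     * into the signed pitch/size fields. */
    static int calc_pitch_checked(size_t width, size_t bytes_per_pixel, size_t *out_pitch)
    {
        size_t pitch;

        if (bytes_per_pixel != 0 && width > SIZE_MAX / bytes_per_pixel) {
            return -1;                      /* width * bpp would wrap */
        }
        pitch = width * bytes_per_pixel;
        if (pitch > SIZE_MAX - 3) {
            return -1;                      /* alignment padding would wrap */
        }
        pitch = (pitch + 3) & ~(size_t)3;   /* 4-byte alignment, as an example */
        if (pitch > (size_t)INT_MAX) {
            return -1;                      /* must still fit the signed pitch */
        }
        *out_pitch = pitch;
        return 0;
    }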
Signed-off-by: Simon McVittie <smcv@collabora.com>
Adds hint "SDL_WINDOWS_DPI_SCALING" which can be set to "1" to
change the SDL coordinate system units to be DPI-scaled points, rather
than pixels everywhere.
This means windows will be appropriately sized, even when created on
high-DPI displays with scaling.
e.g. requesting a 640x480 window from SDL, on a display with 125%
scaling in Windows display settings, will create a window with an
800x600 client area (in pixels).
Setting this to "1" implicitly requests process DPI awareness
(setting SDL_WINDOWS_DPI_AWARENESS is unnecessary),
and forces SDL_WINDOW_ALLOW_HIGHDPI on all windows.
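A minimal usage sketch (the raw hint string is taken from the text above; a matching SDL_HINT_* constant may also exist):

    #include <SDL.h>

    int main(int argc, char *argv[])
    {
        /* Opt in before initializing the video subsystem. */
        SDL_SetHint("SDL_WINDOWS_DPI_SCALING", "1");
        SDL_Init(SDL_INIT_VIDEO);

        /* 640x480 here is in DPI-scaled points; on a 125% display the
         * client area ends up being 800x600 pixels, as described above.
         * SDL_WINDOW_ALLOW_HIGHDPI is forced on, so no extra flag is needed. */
        SDL_Window *window = SDL_CreateWindow("demo",
                                              SDL_WINDOWPOS_CENTERED,
                                              SDL_WINDOWPOS_CENTERED,
                                              640, 480, 0);
        SDL_DestroyWindow(window);
        SDL_Quit();
        return 0;
    }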
If the move results in a DPI change, we need to allow the window to resize (e.g. AdjustWindowRectExForDpi frame sizes are different).
- WM_DPICHANGED: Don't assume WM_GETDPISCALEDSIZE is always called for PMv2 awareness - it's only called during interactive dragging.
- WIN_AdjustWindowRectWithStyle: always calculate final window size including frame based on the destination rect,
not based on the current window DPI.
- Update wmmsg.h to include WM_GETDPISCALEDSIZE (for WMMSG_DEBUG)
- WIN_AdjustWindowRectWithStyle: add optional logging
- WM_GETMINMAXINFO: add optional HIGHDPI_DEBUG logging
- WM_DPICHANGED: fix potentially clobbering data->expected_resize
Together these changes fix the following scenario:
- launch testwm2 with the SDL_WINDOWS_DPI_AWARENESS=permonitorv2 environment variable
- Windows 10 21H2 (OS Build 19044.1706)
- Left (primary) monitor: 3840x2160, 125% scaling
- Right (secondary) monitor: 2560x1440, 100% scaling
- Alt+Enter, Alt+Enter (to enter + leave desktop fullscreen), Alt+Right (to move window to right monitor). Ensure the window client area stays 640x480. Drag the window back to the 125% monitor, ensure client area stays 640x480.
The hint allows setting a specific DPI awareness ("unaware", "system", "permonitor", "permonitorv2").
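A minimal sketch of selecting an awareness level, using the raw hint string (a matching SDL_HINT_* constant may exist as well):

    /* Must be set before SDL_Init(SDL_INIT_VIDEO). */
    SDL_SetHint("SDL_WINDOWS_DPI_AWARENESS", "permonitorv2");

    /* Equivalent from outside the program, as in the test scenario above:
     *   SDL_WINDOWS_DPI_AWARENESS=permonitorv2 testwm2 */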
This is the first part of High-DPI support on Windows ( https://github.com/libsdl-org/SDL/issues/2119 ).
It doesn't implement a virtualized SDL coordinate system, which will be
addressed in a later commit. (This hint could be useful for SDL apps
that want 1 SDL unit = 1 pixel, though.)
Detecting and behaving correctly under per-monitor V2
(calling AdjustWindowRectExForDpi where needed) should fix the
following issues:
https://github.com/libsdl-org/SDL/issues/3286
https://github.com/libsdl-org/SDL/issues/4712
I was looking at how errors are handled by SDL and came across the #define SDL_ERRBUFIZE, which looks like a typo for SDL_ERRBUFSIZE. Either way, it isn't used anywhere anymore: it gets reported whenever I compile SDL with -Wunused-macros, and the last time it was actually referenced in the code was in commit 09ca66b.
XDG-toplevel min/max size values are double-buffered data and must be committed before entering the fullscreen state, or a max window size value smaller than the display dimensions may cause the compositor to incorrectly configure the fullscreen window size. This fixes windowed->fullscreen transitions on GNOME, where, previously, certain combinations of window flags and min/max size values could cause entering fullscreen mode to fail with odd window sizes and/or offsets due to the new max size values not being committed before entering fullscreen, causing the compositor to clamp to the old values.
In the case of libdecor, it has its own layer of buffering on top of the xdg-toplevel surface for the min/max window dimensions, so both a frame commit and surface commit are required to set the state properly.
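A minimal sketch of the required ordering on the xdg-shell path (using the wayland-scanner-generated bindings; variable names are illustrative, not SDL's actual code):

    #include <wayland-client.h>
    #include "xdg-shell-client-protocol.h"   /* generated by wayland-scanner */

    static void enter_fullscreen(struct wl_surface *surface,
                                 struct xdg_toplevel *toplevel,
                                 struct wl_output *output,
                                 int max_w, int max_h)
    {
        /* min/max sizes are double-buffered: commit them first... */
        xdg_toplevel_set_max_size(toplevel, max_w, max_h);
        wl_surface_commit(surface);

        /* ...then request the fullscreen state. */
        xdg_toplevel_set_fullscreen(toplevel, output);
        wl_surface_commit(surface);
    }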
Previously, the surface damage region was being set in the same callback used for preventing render hangs in the GL backend when the surface was not visible. This was not ideal, as the callback was never fired in the case of using a different render backend or having a swap interval of 0. Use a separate frame callback for setting the surface damage region to ensure that it fires reliably, regardless of the backend being used or swap interval.
Some compositors (GNOME for example) don't set the transform flag when dealing with portrait mode displays, so the video modes won't have the width/height swapped in all cases where they should be. Check for both the 90/270 degree transform flag and if the display is taller than it is wide when determining whether to swap the width and height of the emulated video modes, and adjust the comparison logic when size testing against the native mode to account for this.
Add the hint "SDL_VIDEO_WAYLAND_MODE_EMULATION", which can be used to disable mode emulation under Wayland. When disabled, only the desktop and/or native display resolution is exposed.
Previously, scale values used by the displays and surfaces were always integers, with fractional scale values only being calculated when the backbuffer and viewport sizes were being determined. Now, if xdg-output is available, the fractional scale of output displays is calculated when the displays are enumerated and the true scale values of the output devices are used whenever possible.
This unifies the integer and fractional scaling systems, allows for the use of more accurate scale values that minimize overdraw when windows straddle multiple outputs, and lays the groundwork for the pending Wayland scaling protocols that will report the preferred scale values per-surface instead of per-output.
Compartmentalize the fullscreen mode emulation code blocks, unify the windowed/fullscreen viewport logic, consolidate all window geometry code into a central function to eliminate blocks of duplicate code, and rename related variables and functions to reflect their purpose more explicitly.
Because we were sending multiple chunks of preedit strings,
`SDL_SendEditingText` was using the old `SDL_TEXTEDITING` event only.
Now if `SDL_HINT_IME_SUPPORT_EXTENDED_TEXT` is enabled, we send the full
string and correctly set the cursor position and selection size.
- SDL_JoystickGUID -> SDL_GUID (though we retain a type alias)
- GUID <-> string conversion operations are now in
src/SDL_guid.c and include/SDL_guid.h (see the sketch below)
- The corresponding Joystick operations delegate to SDL_guid.c
- Added test/testguid.c
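A minimal sketch of the new generic helpers; the prototypes are assumed to mirror the old joystick-GUID string functions, so check include/SDL_guid.h for the exact signatures:

    #include <SDL.h>

    static void print_joystick_guid(SDL_Joystick *joystick)
    {
        char buf[33];  /* 32 hex digits + terminating NUL */

        /* SDL_JoystickGUID is now just an alias for SDL_GUID, so the value
         * can be passed straight to the generic helpers. */
        SDL_GUID guid = SDL_JoystickGetGUID(joystick);
        SDL_GUIDToString(guid, buf, (int)sizeof(buf));
        SDL_Log("GUID: %s", buf);

        /* ...and converted back again. */
        SDL_GUID parsed = SDL_GUIDFromString(buf);
        (void)parsed;
    }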
As of #5703, we call libdecor_dispatch() in Wayland_WaitEventTimeout(),
but this will crash if we don't load libdecor, as
SDL_VideoData::shell.libdecor will be NULL.
Since we don't load libdecor if we don't intend to use it (i.e., if
should_use_libdecor returns false), this results in a crash under KDE in
almost all circumstances.
The MinGW-w64 header defines the parameters as ABI::Windows::Foundation::IReference<INT32 > **, but the Windows header defines the parameters as __FIReference_1_int**
Since #5602, SDL is intended to have the same ABI across the whole
major-version 2 cycle, so we should not check that the minor version
matches the one that was used to compile an application.
There are two checks that could make sense here.
The first check is that the major version matches the expected major
version. This is usually unnecessary and is not usually done (if we're
calling into the wrong library we'll likely crash anyway), but since we
have the information, we might as well continue to use it.
The second check is whether the version provided by the caller is
equal to or greater than a threshold version at which additional fields
were added to the struct. If it is, we should populate those fields;
if it is not, then we cannot. This is only useful on platforms where
additional fields have genuinely been added during the lifetime of
SDL 2, like Windows and DirectFB (but not X11).
This commit changes the first check to be consistent about only looking
at the major version, while leaving the second check using SDL_VERSIONNUM
(which will be removed or widened in SDL 3, but it's fine for now).
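A minimal sketch of the two checks described above (names are illustrative; the real code lives in functions along the lines of SDL_GetWindowWMInfo):

    #include <SDL.h>
    #include <SDL_syswm.h>

    static SDL_bool check_wm_info_version(const SDL_SysWMinfo *info)
    {
        /* First check: only the major version has to match; the minor version
         * is deliberately ignored so e.g. 2.0.22 apps keep working with 2.24+. */
        if (info->version.major != SDL_MAJOR_VERSION) {
            return SDL_FALSE;
        }

        /* Second check: fields added later are only populated when the caller's
         * struct is new enough to contain them (the threshold below is hypothetical). */
        if (SDL_VERSIONNUM(info->version.major, info->version.minor, info->version.patch)
            >= SDL_VERSIONNUM(2, 0, 6)) {
            /* ...fill in the additional platform-specific fields... */
        }
        return SDL_TRUE;
    }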
Resolves: https://github.com/libsdl-org/SDL/issues/5711
Fixes: cd7c2f1 "Switch versioning scheme to be the same as GLib and Flatpak"
Signed-off-by: Simon McVittie <smcv@collabora.com>
Otherwise we ignore the Configure/etc events when they come in because
the window is already in an identical state as far as SDL is concerned.
Fixes #5593.
May also fix:
Issue #5572.
Issue #5595.
This reverts commit 3fcc2cb500.
Button4 and Button5 are for the scrollwheel, not the extended buttons.
I don't know of a way to query the state of the extended buttons using X11.
These try to pull from the .pdf files that are installed with
macOS, which fit our needs better, and fall back to the most
reasonable defaults available from NSCursor if we can't load
them.
Since these are installed under /System, they should be sandbox
accessible, and if this totally fails, it should still go on,
albeit with a less good cursor.
Reference Issue #2123.
On Gnome (and hopefully others!), this produces something that actually
matches SDL_SYSTEM_CURSOR_SIZENWSE/SDL_SYSTEM_CURSOR_SIZENESW. On
other desktop environments, it probably fits the spirit better than
XC_fleur in any case.
Reference Issue #2123.
Added the ability to specify a name and the product VID/PID for a virtual controller.
Also added a test case to testgamecontroller; pass --virtual as a parameter to use it.
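A minimal sketch of how an application can use this, assuming it is exposed through the extended SDL_JoystickAttachVirtualEx()/SDL_VirtualJoystickDesc API (the VID/PID values below are placeholders):

    #include <SDL.h>

    static int attach_named_virtual_controller(void)
    {
        SDL_VirtualJoystickDesc desc;

        SDL_zero(desc);
        desc.version    = SDL_VIRTUAL_JOYSTICK_DESC_VERSION;
        desc.type       = SDL_JOYSTICK_TYPE_GAMECONTROLLER;
        desc.naxes      = SDL_CONTROLLER_AXIS_MAX;
        desc.nbuttons   = SDL_CONTROLLER_BUTTON_MAX;
        desc.vendor_id  = 0x1234;                  /* placeholder VID */
        desc.product_id = 0x5678;                  /* placeholder PID */
        desc.name       = "My Virtual Controller";

        return SDL_JoystickAttachVirtualEx(&desc); /* device index, or -1 */
    }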
Don't be fooled by the diff size - this ended up being a big refactor of the
shell surface management, masked only by some helper macros I wrote for the
popup support.
This change makes it so when xdg_decoration is supported, but CSD is requested,
the system bails on xdg support entirely and resets all the windows to use
libdecor instead. This transition isn't pretty, but once it's done it will be
smooth if decorations are an OS toggle since libdecor will take things from
there.
In hindsight, we really should have designed libdecor to be passed a toplevel;
having it manage that for us keeps causing major refactors for _every_ change.
Move rendering of the assert message into a separate
function so we can remove the ugly loop construction.
Changes the logic so that allocation failure no longer
immediately returns SDL_ASSERTION_ABORT; instead we
fall back to the truncated message.
If SDL_snprintf indicates an error, then we do
abort with SDL_ASSERTION_ABORT.
* Add changes from code review by @ccawley2011, #5597, overall cleanup
* Update N-Gage README, minor cleanup and rephrasing
* Call SDL_SetMainReady() before calling SDL_main, return SDL_main instead of main
Change Cocoa SDL_VideoData and SDL_WindowData implementations from C structs to Objective-C objects, since bridging between C and ObjC is easier that way.
If the size to be allocated is very large and untrusted, then adding
the padding etc. might be enough to cause unsigned overflow, after
which a very small amount of memory will be allocated.
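A minimal sketch of the kind of check involved (hypothetical helper, not the exact SDL code):

    #include <stdint.h>
    #include <SDL.h>

    static void *alloc_with_padding(size_t len, size_t padding)
    {
        if (len > SIZE_MAX - padding) {
            return NULL;              /* len + padding would wrap to a tiny value */
        }
        return SDL_malloc(len + padding);
    }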
Signed-off-by: Simon McVittie <smcv@collabora.com>
If we're strict about applying something resembling semantic versioning
to the "marketing" version number, then we can mechanically generate
the ABI version from it.
This limits the range of valid micro versions (patchlevels) to 0-99.
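As a rough illustration of why the 0-99 limit exists (this is not the literal build-system logic): packing minor and micro into a single number only stays unambiguous if micro occupies at most two decimal digits.

    /* hypothetical packing: 2.24.1 -> 2401, 2.26.0 -> 2600 */
    #define PACKED_ABI_REVISION(minor, micro)  ((minor) * 100 + (micro))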
Signed-off-by: Simon McVittie <smcv@collabora.com>
For stable releases, this gives us the ability to make bugfix-only point
releases such as 2.24.1 if we want to, and distinguish between them
programmatically. For example, this ability could have been useful after
2.0.16 to fix Xwayland regressions, and after 2.0.18 to fix event loop
regressions.
For development releases, this gives us the ability to make multiple
prereleases during the same feature cycle, and distinguish between them
programmatically. For example, this would have been useful during 2.0.22
development, which went through three prereleases before reaching the
final release.
Signed-off-by: Simon McVittie <smcv@collabora.com>
* Add initial support for the Nokia N-Gage
* N-Gage: disable clipping for the time being, issue needs to be resolved later
* Move va_copy definition to SDL_internal.h
* Move stdlib.h include to SDL_config_ngage.h, much cleaner this way
* Remove redundant include, add HAVE_STDLIB_H
* Revert "N-Gage: disable clipping for the time being, issue needs to be resolved later"
This reverts commit 4f5f0fc36cc7f34fad05e45671dfa7b8dc32fd51.
* N-Gage: fix clipping issue by providing proper math functions
Enabling GCController.shouldMonitorBackgroundEvents to read background events
for MFi controllers before receiving the first GCControllerDidConnectNotification
is apparently a no-go on macOS (12.3.1 for me), and would crash on attempt.
Apple's documentation is... not great, and doesn't point this out.
This waits for IOS_AddMFIJoystickDevice() to get called down the chain from GCControllerDidConnectNotification, and enables GCController.shouldMonitorBackgroundEvents
if it hadn't been already.
On iOS and tvOS, GCController.shouldMonitorBackgroundEvents is ignored, so
there's no need to check their versions.
Ensure that we're not trying to call SDL_small_alloc()
with a count of zero.
Transforming the code like this fixes a
-Wmaybe-uninitialized warning from GCC 12.0.1
For short messages, use a stack buffer that is
significantly smaller than SDL_MAX_LOG_MESSAGE.
The rationale for this is that we don't want to risk
blowing the stack, while at the same time we would
like to not put pressure on the memory allocator unless
absolutely necessary.
This is the one that splits the "left wing" into two for loops to
bubble out the conditional that decides if it should read from the
left padding or the input buffer.
I still believe the optimization is good, but the basic logic of it
was incorrect, and needs to be reexamined and fixed before going
back into revision control.
We can get _some_ of the info we need out of standard Xlib and report a
single display (which might actually be multiple physical displays mushed
into a single desktop). This is better than nothing, but you should really
just build with XRandR support and get a better X server. :)
- Calculate `j * RESAMPLER_SAMPLES_PER_ZERO_CROSSING` once per loop
iteration since we use it multiple times.
- Do the left-wing loop in two sections: while `srcframe < 0` and then
the remaining calculations when `srcframe >= 0`. This bubbles a conditional
out of every iteration of a tight loop, giving us a boost. We could
_probably_ do this to the right-wing loop too, but it's less straightforward
there.
- The real win: Use floats instead of doubles. This almost doubles the speed
of the entire function on Intel CPUs, and for embedded things without
hardware-level support for doubles, the speedup is enormous. This in
theory might reduce audio quality, though, and I had to put a check in
place to avoid a division-by-zero that we avoided at higher precision, but
this is likely to be worth keeping for at least the Sony PSP and other
smaller platforms, if not everyone.
Instead of waiting until the entire buffer from the SDL callback is ready
to be accepted by PulseAudio, we use pa_stream_set_write_callback and
feed some portion of the buffer as callbacks come in asking for more.
This lets us remove the halving of the buffer size during device open,
and also (hopefully) solves several strange hangs that happen in unusual
circumstances.
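A minimal sketch of the pull-style approach (illustrative state struct and names, not SDL's actual implementation):

    #include <pulse/pulseaudio.h>

    struct stream_state {
        const unsigned char *buf;   /* buffer produced by the SDL callback */
        size_t len, pos;
    };

    /* Called by PulseAudio whenever it can accept more data. */
    static void stream_write_cb(pa_stream *s, size_t nbytes, void *userdata)
    {
        struct stream_state *st = (struct stream_state *) userdata;
        size_t avail = st->len - st->pos;
        size_t chunk = (nbytes < avail) ? nbytes : avail;

        if (chunk > 0) {
            pa_stream_write(s, st->buf + st->pos, chunk, NULL, 0, PA_SEEK_RELATIVE);
            st->pos += chunk;
        }
    }

    /* Registered once after creating the stream:
     *   pa_stream_set_write_callback(stream, stream_write_cb, &state); */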
Fixes #4387
Fixes #2262
* Read IMU scale data from Switch controllers. Up until now, SDL has used hard-coded scaling which isn't correct with some supported controllers.
* Moved declarations to beginning of code blocks to better fit with SDL style requirements
Every other backend does this, so this should match, now.
It's possible this was harmless, but we can avoid the system call
and the (likely?) debug message when it fails, though!
When mode switching is disabled in a video backend, fullscreen windows are basically just fullscreen desktop windows with different internal scaling. As no mode switching occurs, there's no need to minimize them on focus loss by default. This can still be overridden by explicitly setting the internal hint for minimizing on focus loss.
This has the side effect of fixing a bug on GNOME where, after a fullscreen Wayland window loses and regains focus via alt+tab and is then switched back to windowed mode, the top portion of the window would end up obstructed by GNOME's top bar.
This reverts commit 8ceba27d62.
SDL Wayland support is stable, but there are a number of issues with third-party software (NVIDIA drivers, libwayland event overflow, libdecor not handling plugin load failures, Steam overlay not working with Wayland, etc.) that make it better to default to X11 at this time.
Games which would like to prefer wayland when available can use the following code before SDL_Init():
SDL_SetHint(SDL_HINT_VIDEODRIVER, "wayland,x11");
Fixes https://github.com/libsdl-org/SDL/issues/5527
This hint allows libdecor to be used even when xdg-decoration is
available. It's mostly useful for debugging libdecor, but could in
theory be used by applications which want to (for example) bundle their
own libdecor plugins.
When using emulated display modes, the output size is often larger than the drawable buffer. As the surface damage region is automatically calculated from the smaller drawable buffer size, the damage region needs to be manually set to cover the entire viewport region or visual repaint artifacts can result.
I kind of thought it'd be nice to have it in the center, but this is an issue
for applications that still assume global mouse and window positions are
accessible. For example, this fixes cursor offset issues in UE5.
It's possible that an external component (probably a GL/VK context) committed, so we need to cover our bases and detach in both HideWindow and ShowWindow.
Fixes a crash in UE5 editor's pop-ups.
Partially fixes the mouse cursor in UE5 editor. Imperfect because UE5 uses window position and global mouse state to get position, but of course we don't have global mouse and this is just to get the right display index so this still fails overall. We really need to make global mouse support a feature query...
So if Gnome/KDE/etc have a keyboard shortcut or titlebar decoration to
make any window go fullscreen (with the _NET_WM_FULLSCREEN flag on the
_NET_WM_STATE property), we update the SDL window flag.
Fixes #5390.
This makes sure the window doesn't have outdated values if you try to access
them (or call something that does, like SDL_SetWindowMinimumSize).
Fixes #5233.
On Wine, when a window is programmatically minimized in response
to losing focus, we receive a WM_ACTIVATE for the deactivation,
but GetForegroundWindow still indicates that our window is focused.
This causes an incorrect SDL_WINDOWEVENT_FOCUS_GAINED.
This is probably a Wine bug, but it may take a while to fix and
then for the fix to make its way to users.
libGL.so may register callbacks that can be invoked upon XCloseDisplay().
If XCloseDisplay() is called after libGL.so is unloaded, the callback pointer
will point at freed memory and invoking it will crash.
The texture framebuffer check optimized out in f37e4a9 was causing libGL.so to
never be unloaded as a side-effect. Skipping it exposed this bug by allowing
libGL.so to actually unload.
We were returning the report size from HIDAPI_DriverPS5_RumbleJoystick() rather
than 0 upon success, causing SDL_JoystickRumble() (and callers) to think that
rumbling failed.
This didn't cause major problems until 1868c5b, when it started preventing
rumble state from being persisted in the joystick core, even though it was
successfully sent to the hardware.
This led to all sorts of strangeness, including broken rumble duration and
attempts to stop rumble being discarded.
The functions can go south if other operations are in progress, like
X11_SetWindowBordered, which might be doing something traumatic behind the
scenes of the window manager.
We can't make these tasks totally synchronous, which would fix the problem,
because not only can the window manager block however long it wants, it might
also decide to deny our requests without any notification, so we'd be waiting
forever for a window change that isn't coming. :(
Fixes #5274.
Use viewports for non-fullscreen windows when the desktop uses fractional scaling and the window is flagged as DPI-aware to provide a backbuffer mapped as close to 1:1 output as possible. In the cases of odd window sizes the backbuffer may be a pixel off of scaling perfectly into the window size due to its scaled size being rounded off, but a minute amount of scaling during output is likely preferable to the large amounts of overdraw needed with integer scaled buffers.
Expose as many emulated display modes as possible. They will currently display stretched to the display's native desktop aspect, but if an application requires a hardcoded resolution, it will work at minimum.
Aside from the change in the emulated display mode list, the Wayland event handling code had to be updated to support separate scaling for the x and y axes, as square pixels are no longer guaranteed.
Wayland doesn't support mode switching, however Wayland compositors can support the wp_viewporter protocol, which allows for the mapping of arbitrarily sized buffer regions to output surfaces. Use this functionality, when available, for fullscreen output when using non-native display modes and/or when dealing with scaled desktops, which can incur significant overdraw without this extension.
This also allows for the exposure of arbitrarily sized, emulated display modes, which can be useful for legacy compatibility.
1. Mod index values are (mostly) constant, so can be done with xkb_state_new
2. Mods can change without the group changing, avoid remap events if possible
Lastly, as a bonus, I added braces to the locale check, because I was nearby.
When using shared linking (linking in the normal way with
-lwayland-client) rather than loading Wayland libraries dynamically at
runtime, listing symbols that don't exist in the current version results
in a build failure. We don't actually call wl_proxy_marshal_flags() or
wl_proxy_marshal_array_flags() directly; the reason we need them is
that they're called by the code generated by wayland-scanner >= 1.20.
If we're building against an older Wayland library, then we'll have its
corresponding version of wayland-scanner (mismatched versions are not
supported), so we won't need those two symbols, and can avoid generating
a dependency on them.
Conversely, if we're building against a newer Wayland library, the
generated code will call them unconditionally, so we cannot treat them as
optional and gracefully fall back: that would result in a crash. Instead,
treat them as a mandatory part of the Wayland library, so that if they
are not found at runtime, we can fall back to X11 without crashing.
libwayland 1.18 is in several LTS distributions (Ubuntu 20.04,
Debian 11, RHEL 8) so avoiding a hard dependency on 1.20 is quite
useful.
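A minimal sketch of the dynamic-loading side of this (hypothetical helper; SDL's real loader is structured differently):

    #include <dlfcn.h>

    /* When SDL was built with wayland-scanner >= 1.20, the generated code calls
     * wl_proxy_marshal_flags() unconditionally, so treat it as mandatory: if it
     * is missing at runtime, fail Wayland init and let X11 take over. */
    static int load_wayland_client(void)
    {
        void *lib = dlopen("libwayland-client.so.0", RTLD_NOW | RTLD_LOCAL);
        if (!lib) {
            return -1;
        }
        if (!dlsym(lib, "wl_proxy_marshal_flags") ||
            !dlsym(lib, "wl_proxy_marshal_array_flags")) {
            dlclose(lib);
            return -1;   /* libwayland-client < 1.20 */
        }
        return 0;
    }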
Signed-off-by: Simon McVittie <smcv@collabora.com>
Resolves: https://github.com/libsdl-org/SDL/issues/5376
Using wl_output to get the desktop display dimensions and dividing by the integer scale factor will not return the correct result when using a desktop with fractional scaling (e.g. a 3840x2160 display at 150% will incorrectly report the scaled desktop area as 1920x1080 instead of 2560x1440). Use the xdg-output protocol, if available, to retrieve the correct desktop dimensions and offset.
Versions 1 through 3 of the protocol are supported.
- which also enable/disable the orientation lock status.
This is only provided when the window is not SDL_WINDOW_FULLSCREEN (see SDL_video.c).
Final orientation also depends on SDL_HINT_ORIENTATIONS.
The Java code needs the native functions to be implemented even if
they're not surfaced via the C API; therefore, stub versions of the
functions were added for the sole purpose of filling the gaps when
SDL_HIDAPI_DISABLED is set to 1.
If SDL_EnumUnixAudioDevices() fails to find any devices,
set an error message on the exit path. Without this,
SDL_Init() could fail without any message available
in SDL_GetError().
This isn't obvious, but makes sense when thinking about how games actually use it. This is also in line with how Windows mouse relative mode is implemented.
Fixes https://github.com/libsdl-org/SDL/issues/5340
Here's an IRC dump that hopefully explains the issue this fixes:
> I'm debugging something odd where, for a libre game,
unvanquished.net (a FPS), relative mouse input in fullscreen is
buggy
> it's like, working mostly ok, but it has a weird
performance/cleanup bug
> after some time in relative mouse input mode, some time as low
as 15s, usually more, the SDL sends A LOT of relative mouse
input per frame
> almost all of which have xrel==0 && yrel==0
> by A LOT, I mean that after ~1min, it's usually in the
thousands per frame
> each frame, a while ( SDL_PollEvent( &e)) loop reads the
inputs, but it seems SDL is not clearing the list.
> one way to clear the list is to open the in-game console or
menu, which switches the input mode to absolute, then close it
which gets a working relative input mode (for some time at least)
> I've shown the issue to be present with SDL2.0.20 but not with
2.0.14 on my system
> some other players on Arch Linux (SDL2.0.20) report a possibly
related issue, where some keys seem to be pressed at random
> I've did some bisection on SDL master, and I've found that
there are actually two commits involved, one breaking it
totally (no input at all), and one fixing it partially (with
the problem described above)
First related commit that breaks it totally:
commit 82793ac279
Author: Sam Lantinga <slouken@libsdl.org>
Date: Thu Oct 14 14:26:21 2021 -0700
Fixed mouse warping while in relative mode
We should get a mouse event with an absolute position and no relative motion and shouldn't change the OS cursor position at all
Second related commit, that halfway fixes it:
commit 31f8c3ef44
Author: Sam Lantinga <slouken@libsdl.org>
Date: Thu Jan 6 11:27:44 2022 -0800
Fixed event pump starvation if the application frequently pushes its own events
Reverting the first commit did fix the issue for me, but would
probably reintroduce the bug it was fixing(?). This patch should
fix it for everyone hopefully.
https://github.com/DaemonEngine/Daemon/issues/600 is the upstream
bug, and contains some early investigation.
According to MSDN, we can also get SIZE_MAXHIDE and SIZE_MAXSHOW,
based on state changes to other windows. It's not clear under
what circumstances this will happen (I saw some docs indicating
it may require multiple application windows), but it doesn't seem
right to treat them as RESTORED.
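A minimal sketch of the distinction (not SDL's exact message handler):

    #include <windows.h>

    static void handle_wm_size(WPARAM wParam)
    {
        switch (wParam) {
        case SIZE_MAXIMIZED:   /* report the window as maximized */   break;
        case SIZE_MINIMIZED:   /* report the window as minimized */   break;
        case SIZE_RESTORED:    /* report the window as restored  */   break;
        case SIZE_MAXHIDE:
        case SIZE_MAXSHOW:
            /* Triggered by state changes of *other* windows;
             * deliberately not treated as RESTORED. */
            break;
        }
    }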
- no need to keep the error in a static variable
- always print the error code
- reduce the required stack-size
- reduce the number of snprintf calls (and code size)
* Fixes for IME Composition Truncation + Addition of SDL_ClearComposition, SDL_IsTextInputShown
* Fixed: Documentation and code style issues raised during code review.
The name that the Raw Input joystick driver pulls from the HID stack comes
from USB string descriptors contained on the device. For official wireless
receivers, this always contains "Xbox 360 Wireless Receiver for Windows"
which matches the friendly name that WGI provides.
3rd party Xbox 360 wireless receivers may have different strings in their
USB string descriptors (one uses "XBOX 360 For Windows" instead). This
fails to match WGI's name and causes Raw Input and WGI to both report the
same gamepad.
Since wireless Xbox 360 controllers seem to have a consistent VID/PID
regardless of the adapter enumerating them, we can also match on that to
catch these.
The duplicate case reported to me was:
Controller (XBOX 360 For Windows) - 030000005e040000a102000000007200
Xbox 360 Wireless Receiver for Windows - 030000005e0400000000000000007701
This adds support for all three of the GameCube controller's rumble modes (see the enum sketch below):
- Rumble: 1
- Stop: 0
- StopHard: 2
This is useful for applications that need the full range of support.
This also adds a hint to control rumble behavior, defaulting to 0 to maintain compatibility.
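For reference, a sketch of the three modes as an enum (the names are hypothetical; the numeric values are the ones listed above):

    typedef enum {
        GAMECUBE_RUMBLE_STOP      = 0,
        GAMECUBE_RUMBLE_RUMBLE    = 1,
        GAMECUBE_RUMBLE_STOP_HARD = 2
    } GameCubeRumbleMode;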
If the app requested a specific renderer, even if it's not the optimal path,
let them have it, because they might want to render with a specific GPU API
on top of the framebuffer pixels.
This fixes DosBox-X crashing on startup, which forces the hint to "opengl".
Now we check whether we can create an SDL_Renderer and whether that renderer
reports itself as "accelerated," and we added some initial heuristics to the OpenGL
renderer to make better decisions about what qualifies as "accelerated."
This adds some FIXMEs that might be merely hypothetical, and removes the
old OpenGL checks from the video subsystem that probably weren't meaningful
in modern times. This will definitely need to improve the existing list
in the GL renderer, to catch things like llvmpipe, etc.
Reference issue #4624.
The io_list_check_add() and io_list_remove() functions are only ever called from within the Pipewire thread loop, so the locks are redundant. io_list_sort() is called from within a lock in the device detection function, so those additional locks are redundant as well.
Remove the hard upper limit of 8192 samples and instead use the buffer sizes provided by Pipewire to determine the size of the intermediate input buffer and whether double buffering is required for output streams. This allows for higher latency streams to potentially avoid double-buffering in the output case, and we can guarantee that the intermediate input buffer will always be large enough to handle whatever Pipewire may deliver.
As the buffer size calculations occur in a callback in the Pipewire processing thread itself, the stream readiness check has been modified to wait on two distinct flags set when the buffers have been configured and when the stream is ready and running.
- use the result of SDL_ConvertPixels to propagate error
- get rid of the verbose error message of D3D11_RenderReadPixels in case SDL_ConvertPixels failed
- always set error message in SDL_EGL_ChooseConfig / SDL_EGL_CreateContext
- assume SDL_EGL_DeleteContext does not alter the error message
- sync generic error message of SDL_EGL_MakeCurrent with SDL_EGL_Get/SetSwapInterval
- do not overwrite error message of SDL_EGL_ChooseConfig in WINRT_CreateWindow
- add missing error-message in SDL_LoadBMP_RW
- check return value of SDL_RWtell in SDL_LoadBMP_RW
- use standard SDL_EFREAD error instead of custom strings
+ adjust return type of readRlePixels
- allocate ime_candidates on demand
- allow write to the whole allocated memory of ime_candidates
- ensure ime_candcount is set to zero in case the candidates can not be queried for any reason
- move IME_ShowCandidateList, ImmGetContext and ImmReleaseContext to this function
- set ime_candpgsize to MAX_CANDLIST if dwPageSize is zero
- comment out deselection of ime_candsel in case of Korean language for the moment (LANG_CHT does not work anyway)
The context and stream creation functions will destroy the passed properties object on failure, so no need to do it manually.
The pw_properties_free() function pointer is no longer needed, so it can be removed.
This reverts commit ca36cdb185f2f26241598068927821896f36b904.
The older Windows SDK's headers are wrong, and this change would crash if
you hotplug a device.
- drop unnecessary hascapture check
- call SDL_InvalidParamError and return -1 in case the index is out of range
- do not zfill SDL_AudioSpec
- adjust documentation to reflect the behavior
- reorganize the loop which checks for the right wave-format
- use the return value of UpdateAudioStream
- ensure SetError is called in SDL_NewAudioStream
- use assert instead of a check (it is a static function with constant parameter)
- assume it is called with 0 first (simplifies the logic)
- reuse dwLang value instead of a new 'call' to LANG()
- use SDL_bool if possible
- assume NULL/SDL_FALSE filled impl
- skip zfill of current_audio at the beginning of SDL_AudioInit (done before the init() calls)
If we get a SDL_SetWindowSize() call right after SDL_SetWindowFullscreen() but
before we've gotten a new configure event from the compositor, the attempt to
set our window size will silently fail (when libdecor is enabled).
Fix this by remembering that we need to commit a new size, so we can do that
in decoration_frame_configure().
This prevents SDL from making an OpenGL context and maybe throwing it away
immediately by default. It will now only do it when trying to request a
window framebuffer directly, or creating an SDL_Renderer with the "software"
backend, which makes that request itself.
The way SDL decides if it should use a "texture framebuffer" needs dramatic
updating, but this solves the immediate problem.
Reference Issue #4624.
The first window is created and triggers an 'EnterNotify' event,
which calls SDL_SetMouseFocus() and X11_ShowCursor() while the second
window hasn't finished being created (e.g. window->driverdata isn't set yet).
Just check for a valid 'driverdata'.
The issue is that MS Windows synthesizes a mouse-move event in response
to touch-move events, and those mouse-move events are NOT labeled as
coming from a touch (e.g. GetMouseMessageSource() will not return
SDL_MOUSE_EVENT_SOURCE_TOUCH for those synthesized mouse-move events).
In addition, there seems to be no way to prevent this from happening;
https://gist.github.com/vbfox/1339671 claims to demonstrate a technique
to prevent it, but in my experience, it doesn't work.
Because of this, the "fallthrough" case can't test that the synthesized
mouse-move came from a touch-move, and starts erroneously pressing down
the mouse-button, leading to massive confusion in the client
application.
If a touch-down event is received for an existing touch-ID, that
probably means the operating system lost it, and that the missing
touch-up should be synthesized, to keep the client state coherent.
This caused some weird stuff to happen in the libdecor path, probably because
the window hasn't actually been mapped yet. It ends up calling stuff that
should not yet apply, and so fullscreen in particular would have a really
messed up titlebar.
The good news is, libdecor is good about tracking fullscreen state, so we can
let the callback do this for us. Keep this for xdg_shell because we actually
map the window ourselves, so we know this call is valid for that path.
Pipewire, as of 0.3.22, uses client config files to load modules instead of explicitly specifying them (PW_KEY_CONTEXT_PROFILE_MODULES is deprecated). Use the new method to load the realtime module to boost the audio thread priority.
From e6cc4e7f4b8189be55dd3b0e13e54e59f73d7672 Mon Sep 17 00:00:00 2001
From: X512 <danger_mail@list.ru>
Date: Thu, 30 Jan 2020 04:01:58 +0900
Subject: libsdl2: Remove BDirectWindow, fix OpenGL handling.
* BDirectWindow changed to BWindow.
* Implemented fullscreen.
* Introduced view for non-OpenGL drawing.
* Drawing thread removed, window thread is used instead.
* Use BGLView as the OpenGL context. Implement proper context switching and OpenGL
locking. Only one context per window is supported. BGLView should not be
deleted when the window is closed; it is deleted when the context is deleted.
Previous to this commit, key repeat events were typically generated when
pumping events, based on the time at which the events are pumped. However,
if an application doesn't call `SDL_PumpEvents` for some seconds, this time
can be several seconds later than the actual key up event time,
which generates key repeats even if a key was pressed only for an instant.
In practice, this can happen when the user presses a key which causes the
application to do something without pumping events (e.g. load a level).
In Crispy Doom & PrBoom+, when the user presses the key bound to "Restart
level/demo", the game doesn't pump events during the "screen melt" effect,
and the level is restarted multiple times due to spurious repeats.
To fix this, if the key up event is among the events to be pumped, we generate
the key repeats there, since in the Wayland callback we receive the time when
the key up event happened. Otherwise, we know no key up event happened and we
can generate as many repeats as necessary after pumping.
Signed-off-by: Joan Bruguera <joanbrugueram@gmail.com>
Refactorization with no functional changes.
Instead of `next_repeat_ms` containing a timestamp based on SDL ticks, we make
it zero-based relative to the key press time, and we store the key press time in
SDL ticks in a new field.
This refactorization is groundwork for future commits which need to use the
key press and release timestamps provided by the Wayland API, which are also
expressed in milliseconds, but whose base does not match the one for SDL ticks.
Signed-off-by: Joan Bruguera <joanbrugueram@gmail.com>
If `repeat_info->next_repeat_ms` overflows, many key presses will be generated.
In the worst case, `now = 0xFFFFFFFFU` and the loop will never terminate.
Rearrange the comparison in order to gracefully handle the overflow case.
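For illustration, the standard wrap-around-safe form of such a deadline test (names are illustrative; this is not necessarily the exact expression used in the fix):

    #include <SDL.h>

    /* In the spirit of SDL_TICKS_PASSED(): subtracting first and comparing the
     * signed result survives Uint32 wrap-around, unlike `next_repeat_ms <= now`. */
    static SDL_bool repeat_is_due(Uint32 now, Uint32 next_repeat_ms)
    {
        return ((Sint32)(now - next_repeat_ms) >= 0) ? SDL_TRUE : SDL_FALSE;
    }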
Signed-off-by: Joan Bruguera <joanbrugueram@gmail.com>