If we're strict about applying something resembling semantic versioning
to the "marketing" version number, then we can mechanically generate
the ABI version from it.
This limits the range of valid micro versions (patchlevels) to 0-99.
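As a sketch of the mechanics (the macro name and exact packing here are
illustrative, not SDL's actual build code), restricting micro to 0-99 is
what lets minor and micro pack into a single monotonic ABI number:

    /* Hypothetical packing: only collision-free while micro stays in 0-99. */
    #define ABI_VERSION(minor, micro)  ((minor) * 100 + (micro))
    /* 2.0.18 -> 18, 2.24.0 -> 2400, 2.24.1 -> 2401 */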
Signed-off-by: Simon McVittie <smcv@collabora.com>
For stable releases, this gives us the ability to make bugfix-only point
releases such as 2.24.1 if we want to, and distinguish between them
programmatically. For example, this ability could have been useful after
2.0.16 to fix Xwayland regressions, and after 2.0.18 to fix event loop
regressions.
For development releases, this gives us the ability to make multiple
prereleases during the same feature cycle, and distinguish between them
programmatically. For example, this would have been useful during 2.0.22
development, which went through three prereleases before reaching the
final release.
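As a minimal sketch of what "distinguish between them programmatically"
buys us, using SDL's existing SDL_VERSION_ATLEAST macro, application code
could then detect such a point release at compile time:

    #if SDL_VERSION_ATLEAST(2, 24, 1)
    /* safe to rely on fixes that first shipped in the 2.24.1 point release */
    #endif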
Signed-off-by: Simon McVittie <smcv@collabora.com>
Now that we've said this will be removed from SDL 3, we're free to use
any encoding that is compatible with existing SDL versions and will still
compare correctly for all SDL 2 version numbers. This allows the SDL 2
minor version to go beyond 1 digit, limited only by the size of
SDL_version.minor (which is 8 bits), making the largest possible version
number 2.255.99.
The patchlevel (micro version) is still limited to 2 digits.
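Concretely, the arithmetic can stay as it is, and the minor version is
simply allowed to overflow into the thousands digit (a sketch; the header
remains the authoritative definition):

    #define SDL_VERSIONNUM(X, Y, Z)  ((X) * 1000 + (Y) * 100 + (Z))
    /* 2.0.22   -> 2022
     * 2.24.1   -> 4401  (overflows, but still monotonic within SDL 2)
     * 2.255.99 -> 27599 (the largest representable SDL 2 version) */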
Signed-off-by: Simon McVittie <smcv@collabora.com>
The encoding used in SDL_VERSIONNUM (e.g. 2.0.22 -> 2022) cannot
represent 2-digit minor versions without overflowing from the hundreds
digit into the thousands digit, which produces confusing version
numbers that will compare incorrectly when the major version is
increased to 3: for example, 2.23.0 would encode as 4300, which
compares greater than 3.0.0's 3000.
However, we can sidestep this problem by declaring that SDL_VERSIONNUM
will no longer be present in SDL 3, which means it only needs to be able
to represent SDL 2 version numbers losslessly.
Signed-off-by: Simon McVittie <smcv@collabora.com>
This comparison normally happens at compile-time, not at runtime, so
it doesn't matter if it isn't optimal. It avoids the incorrect comparison
that occurs when the minor version encoded in SDL_COMPILEDVERSION and
SDL_VERSIONNUM has more than one digit and overflows from the hundreds
place into the thousands place.
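A sketch of a comparison with this shape (illustrative, not necessarily
the exact macro text) compares the components lexicographically instead
of going through SDL_VERSIONNUM:

    #define VERSION_ATLEAST(X, Y, Z) \
        ((SDL_MAJOR_VERSION > (X)) || \
         (SDL_MAJOR_VERSION == (X) && SDL_MINOR_VERSION > (Y)) || \
         (SDL_MAJOR_VERSION == (X) && SDL_MINOR_VERSION == (Y) && \
          SDL_PATCHLEVEL >= (Z)))

Because no component is ever multiplied by 100, a 2-digit minor version
has no place to overflow into.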
Signed-off-by: Simon McVittie <smcv@collabora.com>
This is usually desirable for batch processing: it lets us see exactly
what is happening in the logs.
Signed-off-by: Simon McVittie <smcv@collabora.com>
It looks as though something in the test subproject "leaks" into the
main build system, causing us to try to install ${builddir}/test/sdl2.pc
instead of the correct ${builddir}/sdl2.pc. Moving the test subproject
further down avoids this.
Resolves: https://github.com/libsdl-org/SDL/issues/5604
Signed-off-by: Simon McVittie <smcv@collabora.com>
* Add initial support for the Nokia N-Gage
* N-Gage: disable clipping for the time being, issue needs to be resolved later
* Move va_copy definition to SDL_internal.h
* Move stdlib.h include to SDL_config_ngage.h, much cleaner this way
* Remove redundant include, add HAVE_STDLIB_H
* Revert "N-Gage: disable clipping for the time being, issue needs to be resolved later"
This reverts commit 4f5f0fc36cc7f34fad05e45671dfa7b8dc32fd51.
* N-Gage: fix clipping issue by providing proper math functions
Enabling GCController.shouldMonitorBackgroundEvents to read background events
for MFi controllers before receiving the first GCControllerDidConnectNotification
is apparently a no-go on macOS (12.3.1 for me), and crashes when attempted.
Apple's documentation is... not great, and doesn't point this out.
Instead, this waits for IOS_AddMFIJoystickDevice() to get called down the chain
from GCControllerDidConnectNotification, and enables
GCController.shouldMonitorBackgroundEvents if it hasn't been already.
On iOS and tvOS, GCController.shouldMonitorBackgroundEvents is ignored, so
there's no need to check their versions.
Ensure that we're not trying to call SDL_small_alloc()
with a count of zero.
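A minimal sketch of the intended guard, assuming SDL's internal
SDL_small_alloc()/SDL_small_free() helpers; the surrounding function and
variable names are hypothetical:

    static void process_rects(const SDL_Rect *src, int count)
    {
        SDL_bool isstack;
        SDL_Rect *copy;

        if (count <= 0) {
            return; /* never reach SDL_small_alloc() with a zero count */
        }
        copy = SDL_small_alloc(SDL_Rect, count, &isstack);
        if (copy) {
            SDL_memcpy(copy, src, sizeof(SDL_Rect) * count);
            /* ... work on the copy ... */
            SDL_small_free(copy, isstack);
        }
    }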
Transforming the code like this fixes a
-Wmaybe-uninitialized warning from GCC 12.0.1
For short messages, use a stack buffer that is
significantly smaller than SDL_MAX_LOG_MESSAGE.
The rationale is that we don't want to risk blowing the stack, but at
the same time we'd like to avoid putting pressure on the memory
allocator unless absolutely necessary.
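A self-contained sketch of that approach, using plain libc names rather
than SDL's internal ones:

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void log_message(const char *fmt, ...)
    {
        char stack_buf[512]; /* much smaller than SDL_MAX_LOG_MESSAGE */
        char *msg = stack_buf;
        va_list ap, ap_copy;
        int len;

        va_start(ap, fmt);
        va_copy(ap_copy, ap);
        len = vsnprintf(stack_buf, sizeof(stack_buf), fmt, ap);
        va_end(ap);

        if (len >= (int)sizeof(stack_buf)) {
            /* Rare long message: fall back to a heap buffer of the exact size. */
            msg = malloc((size_t)len + 1);
            if (msg) {
                vsnprintf(msg, (size_t)len + 1, fmt, ap_copy);
            }
        }
        va_end(ap_copy);

        if (msg) {
            fputs(msg, stderr);
            if (msg != stack_buf) {
                free(msg);
            }
        }
    }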
This is the one that splits the "left wing" into two for loops to
bubble out the conditional that decides whether it should read from the
left padding or the input buffer.
I still believe the optimization is good, but the basic logic of it
was incorrect, and needs to be reexamined and fixed before going
back into revision control.