This allows using a much smaller (1.5 KB) lookup table, in exchange for a small amount of extra work per frame.
The extra work (a few extra loads/mul/adds) is negligible, and can execute in parallel.
The reduction in cache misses almost certainly outweighs any added cost.
The table is generated at runtime, and takes less than 0.02ms on my computer.
The specific cases here were SDL_size_mul_overflow_builtin and
SDL_size_add_overflow_builtin, which are forced-inline symbols in
SDL_stdinc.h that have to exist, but aren't really part of the public API,
and thus shouldn't be exported as documentation.
(If there's a newline at the end of the message, Perl won't print the line in the
script where the failure happened, so fix this in places where the message is used
more like an assert than error reporting.)
SDL_strcasecmp (even when calling into a C runtime) does not work with
Unicode chars, and depending on the user's locale, might not even work with
basic ASCII strings.
This implements the function from scratch using "case-folding,"
which is a more robust method that deals with various languages. It
involves a hashtable of the few hundred codepoints that are "uppercase,"
along with how to map each of them to its lowercase equivalent (possibly
increasing the size of the string in the process). The vast majority of human
languages (and of Unicode) do not have letters with different cases, but even
so, this static table takes about 10 kilobytes on a 64-bit machine.
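To make that concrete, here is a rough sketch of what one entry in such a fold table can look like; this is illustrative only, not SDL's actual internal layout:
```c
#include <SDL3/SDL_stdinc.h>  /* for Uint32 */

/* Hypothetical shape of one case-folding entry (not SDL's real structures).
   Most codepoints fold 1:1, but some expand; Unicode's CaseFolding.txt maps
   U+00DF ("ß") to the two codepoints "ss", which is how a folded string can
   end up longer than the original. */
typedef struct CaseFoldMapping {
    Uint32 from;     /* the codepoint to fold */
    Uint32 to[3];    /* up to three folded codepoints, zero-terminated */
} CaseFoldMapping;

static const CaseFoldMapping example_entries[] = {
    { 0x0041, { 0x0061, 0, 0 } },       /* 'A' -> 'a' */
    { 0x0391, { 0x03B1, 0, 0 } },       /* Greek capital Alpha -> alpha */
    { 0x00DF, { 0x0073, 0x0073, 0 } },  /* 'ß' -> "ss" */
};
```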
Even this will fail in one known case: the Turkish 'i' folds differently
if you're writing in Turkish vs other languages. Generally this is seen as
unfortunate collateral damage in cases where you can't specify the language
in use.
In addition to case-folding the codepoints, the new functions also know how
to decode the various formats to turn them into codepoints in the first
place, instead of blindly stepping by one byte (or one wchar_t) per
character.
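For clarity, here is a minimal sketch of the overall comparison loop. The helpers decode_codepoint() and case_fold() are stand-ins invented for this example (the real decoder handles multi-byte UTF-8 sequences and a wchar_t variant, and the real fold step consults a table like the one sketched above):
```c
typedef unsigned int Codepoint;

/* Decode one codepoint and advance the string; ASCII-only stand-in here. */
static Codepoint decode_codepoint(const char **str)
{
    const Codepoint cp = (Codepoint) (unsigned char) **str;
    if (cp != 0) {
        (*str)++;  /* a real UTF-8 decoder steps by 1-4 bytes per codepoint. */
    }
    return cp;
}

/* Write the folded form of cp into out[3]; return how many codepoints it
   produced. A real implementation looks cp up in the fold table instead. */
static int case_fold(Codepoint cp, Codepoint *out)
{
    out[0] = (cp >= 'A' && cp <= 'Z') ? (cp + 32) : cp;
    return 1;
}

int CaseFoldedCompare(const char *a, const char *b)
{
    Codepoint fa[3], fb[3];
    int na = 0, nb = 0, ia = 0, ib = 0;

    while (1) {
        /* Refill each side's folded buffer from its next codepoint. */
        if (ia == na) { na = case_fold(decode_codepoint(&a), fa); ia = 0; }
        if (ib == nb) { nb = case_fold(decode_codepoint(&b), fb); ib = 0; }

        if (fa[ia] != fb[ib]) {
            return (fa[ia] < fb[ib]) ? -1 : 1;
        } else if (fa[ia] == 0) {
            return 0;  /* both strings ended at the same point. */
        }
        ia++; ib++;
    }
}
```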
Also included are casefolding.txt from the Unicode Consortium and a Perl
script to generate the hashtable from that text file, so we can trivially
update this if new languages are added in the future.
A simple test using the new function:
```c
#include <SDL3/SDL.h>
#include <strings.h>  /* for the C runtime strcasecmp() */

int main(void)
{
    const char *a = "α ε η";
    const char *b = "Α Ε Η";
    SDL_Log(" strcasecmp(\"%s\", \"%s\") == %d\n", a, b, strcasecmp(a, b));
    SDL_Log("SDL_strcasecmp(\"%s\", \"%s\") == %d\n", a, b, SDL_strcasecmp(a, b));
    return 0;
}
```
Produces:
```
INFO: strcasecmp("α ε η", "Α Ε Η") == 32
INFO: SDL_strcasecmp("α ε η", "Α Ε Η") == 0
```
glibc strcasecmp() fails to compare a Greek lowercase string to its uppercase
equivalent, even with a UTF-8 locale, but SDL_strcasecmp() works.
Other SDL_stdinc.h functions have been changed to be more consistent, which is
to say they now ignore any C runtime and, in many cases, are defined to work
only with English-based low-ASCII.
Fixes Issue #9313.
- Always use the internal qsort and bsearch implementations.
- Add "_r" reentrant versions.
The reasons for always using the internal versions are that the C runtime
versions' callbacks are not marked SDLCALL, so we would have to add bridge
functions for them anyhow; that the C runtime qsort_r/qsort_s take their
arguments in different orders on different platforms; and, most importantly,
that qsort() isn't a stable sort and isn't guaranteed to give the same
ordering for two objects the callback marks as equal, so Visual Studio and
glibc can give different sort results for the same data set. In that sense,
having one piece of code shared across all platforms makes sense here, for
reliability.
bsearch does not have a standard _r version at all, and suffers from the
same SDLCALL concern. Since the code is simple and we would have to work
around the C runtime, it's easier to just go with the built-in function
and remove all the CMake C runtime tests.
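As a usage illustration only (the exact typedefs and argument order are defined in SDL_stdinc.h; the assumption here is that the reentrant callback receives the userdata pointer as its first argument and SDL_qsort_r takes it as its last):
```c
#include <SDL3/SDL.h>

typedef struct { int key; int original_index; } Item;

/* Sort ascending or descending depending on the userdata flag, so one
   callback can serve several sorts without globals. */
static int SDLCALL compare_items(void *userdata, const void *a, const void *b)
{
    const int descending = *(const int *) userdata;
    const int ka = ((const Item *) a)->key;
    const int kb = ((const Item *) b)->key;
    const int result = (ka < kb) ? -1 : (ka > kb) ? 1 : 0;
    return descending ? -result : result;
}

int main(void)
{
    Item items[] = { { 3, 0 }, { 1, 1 }, { 2, 2 }, { 1, 3 } };
    int descending = 0;

    SDL_qsort_r(items, SDL_arraysize(items), sizeof(Item), compare_items, &descending);

    /* With one shared implementation, the two items with key == 1 land in the
       same relative order on every platform; the C runtime qsort makes no
       such promise. */
    for (size_t i = 0; i < SDL_arraysize(items); i++) {
        SDL_Log("key=%d original_index=%d", items[i].key, items[i].original_index);
    }
    return 0;
}
```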
Fixes #9159.