We either need to explicitly test using the sRGB colorspace or update the tests for HDR10 color conversion. We'll just disable them for now, as these formats aren't commonly used in games.
The renderer will always use the sRGB colorspace for drawing, and will default to the sRGB output colorspace. If you want blending in linear space and HDR support, you can select the scRGB output colorspace, which is supported by the direct3d11 and direct3d12 renderers.
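As a rough sketch of how an application might opt in (using property and enum names from current SDL3 headers, which may not exactly match the revision this was written against; the window variable is assumed to already exist):

    SDL_PropertiesID props = SDL_CreateProperties();
    SDL_SetPointerProperty(props, SDL_PROP_RENDERER_CREATE_WINDOW_POINTER, window);
    SDL_SetNumberProperty(props, SDL_PROP_RENDERER_CREATE_OUTPUT_COLORSPACE_NUMBER,
                          SDL_COLORSPACE_SRGB_LINEAR);  /* scRGB: linear light, HDR-capable */
    SDL_Renderer *renderer = SDL_CreateRendererWithProperties(props);
    SDL_DestroyProperties(props);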
These originally checked for expected ± EPSILON as logged, but since
commit 880c6939 they check for expected ± max_err, where max_err may
need to be greater than EPSILON for very large expected results like
the ones in exp_regularCases().
Also, EPSILON is so small that the default precision of the %f format
(6 decimal places) would never actually have shown its effect, so log
it in scientific notation instead.
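For illustration only (not the exact test code, and the variable names are made up), the difference is roughly:

    /* With %f, a tolerance around 1e-10 prints as 0.000000; %e keeps it visible. */
    SDL_Log("Expected %f +/- %e, got %f", expected, max_err, result);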
Fixes: 880c6939 "testautomation_math: do relative comparison + more precise correct trigonometric values"
Signed-off-by: Simon McVittie <smcv@collabora.com>
While looking at the other tests in this file, I noticed that instead
of checking for a result in the range of expected ± FLT_EPSILON as I
would have expected, these tests would accept any result strictly less
than expected + FLT_EPSILON, for example a wrong result that is very
large and negative. This is presumably not what was intended, so add
the SDL_fabs() that I assume was meant to be here.
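The pattern is roughly the following (a sketch assuming the usual testautomation includes, not the literal test code):

    /* Before: passes for any result below the upper bound, even a hugely
     * negative wrong answer */
    SDLTest_AssertCheck(result - expected <= FLT_EPSILON, "Check result");

    /* After: compare the magnitude of the difference */
    SDLTest_AssertCheck(SDL_fabs(result - expected) <= FLT_EPSILON, "Check result");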
Fixes: 474c8d00 "testautomation: don't do float equality tests"
Signed-off-by: Simon McVittie <smcv@collabora.com>
If the magnitude of the expected result is small, then we can safely
assume that the actual calculated result matches it to 10 decimal
places.
However, if the magnitude is very large, as it is for some of our exp()
tests, then 10 decimal places represents an unrealistically high level
of precision, for example 24 decimal digits for the test that is
expected to return approximately 6.6e14. IEEE 754 floating point only
has a precision of about 16 decimal digits, causing test failure on
x86 compilers that use an i387 80-bit extended-precision register for
the result and therefore get a slightly different answer.
To avoid this, scale the required precision with the magnitude of the
expected result, so that we accept a maximum error of either 10 decimal
places or 1 part in 1e10, whichever is greater.
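In other words, something along these lines (illustrative names, not the literal code from commit 880c6939):

    double max_err = 1e-10;                        /* 10 decimal places */
    double relative = SDL_fabs(expected) * 1e-10;  /* 1 part in 1e10 */
    if (relative > max_err) {
        max_err = relative;                        /* whichever is greater */
    }
    SDLTest_AssertCheck(SDL_fabs(result - expected) <= max_err,
                        "Expected %f +/- %e, got %f", expected, max_err, result);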
[smcv: Added longer commit message explaining why we need this]
(cherry picked from commit 880c69392ae10c726fc97f17b6e5e2173f70b62f)
In the Steam Runtime 1 'scout' environment, when compiling for i386
using the default gcc-4.6, Exp(34.125) matches the desired value to the
precision shown in the log (6 decimal places), but is not an exact match.
Signed-off-by: Simon McVittie <smcv@collabora.com>
You can't do blending directly in PQ space, which means you have to create a scene render target in linear space and use shaders to convert PQ texture data to linear, etc. All of this is out of scope for the SDL 2D renderer at the moment.
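For reference, the PQ (SMPTE ST 2084) EOTF that such a shader would have to apply is standard math; a minimal C sketch, not the renderer's shader code, looks like this:

    #include <math.h>

    /* Maps a normalized PQ-encoded value in [0,1] to linear light,
     * relative to a 10000-nit maximum. */
    static double PQ_to_linear(double n)
    {
        const double m1 = 2610.0 / 16384.0;
        const double m2 = 2523.0 / 4096.0 * 128.0;
        const double c1 = 3424.0 / 4096.0;
        const double c2 = 2413.0 / 4096.0 * 32.0;
        const double c3 = 2392.0 / 4096.0 * 32.0;
        const double p = pow(n, 1.0 / m2);
        return pow(fmax(p - c1, 0.0) / (c2 - c3 * p), 1.0 / m1);
    }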
This allows color operations to happen in linear space between sRGB input and sRGB output. This is currently supported on the direct3d11, direct3d12 and opengl renderers.
This is a good resource on blending in linear space vs sRGB space:
https://blog.johnnovak.net/2016/09/21/what-every-coder-should-know-about-gamma/
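The conversions involved are the standard sRGB transfer functions; as a sketch (not the renderers' actual shader code), blending happens on the linear values and the result is converted back for sRGB output:

    #include <math.h>

    static float srgb_to_linear(float c)
    {
        return (c <= 0.04045f) ? (c / 12.92f)
                               : powf((c + 0.055f) / 1.055f, 2.4f);
    }

    static float linear_to_srgb(float c)
    {
        return (c <= 0.0031308f) ? (c * 12.92f)
                                 : (1.055f * powf(c, 1.0f / 2.4f) - 0.055f);
    }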
Also added testcolorspace to verify colorspace changes.
The usable fullscreen bounds need to be queried after window creation, as Wayland can send different usable bounds depending on the focused window's scaling mode.
The drawing uses the origin of the viewport as the coordinate origin, so we only need to clip against the size of the viewport.
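Conceptually (illustrative names, not SDL's internal code):

    /* Render coordinates are relative to the viewport origin, so a point is
     * inside if it falls within the viewport's width and height; the absolute
     * position of the viewport doesn't matter. */
    static int point_within_viewport(int x, int y, int viewport_w, int viewport_h)
    {
        return x >= 0 && y >= 0 && x < viewport_w && y < viewport_h;
    }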
Also added a unit test to catch this case in the future.
These functions historically didn't set the error indicator on overflow.
Before commit 447b508a "error: SDL's allocators now call SDL_OutOfMemory
on error", their callers would call SDL_OutOfMemory() instead, which was
assumed to be close enough in meaning: "that's a silly amount of memory
that would overflow size_t" is similar to "that's more memory than
is available". Now that responsibility for calling SDL_OutOfMemory()
has been pushed down into SDL_calloc() and friends, the functions that
check for overflows might as well set more specific errors.
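For example, a check of this shape (a hypothetical sketch, not the literal SDL code; SDL_SIZE_MAX and SDL_SetError come from SDL's own headers) can now report what actually went wrong:

    static size_t calculate_pitch(size_t width, size_t bytes_per_pixel)
    {
        if (width > 0 && bytes_per_pixel > SDL_SIZE_MAX / width) {
            SDL_SetError("Surface pitch would overflow size_t");
            return 0;
        }
        return width * bytes_per_pixel;
    }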
Signed-off-by: Simon McVittie <smcv@collabora.com>
A surface of width (0x7fff'ffff) / 2 = 0x3fff'ffff is not quite large
enough to make the pitch overflow in the way we wanted to test here:
with a 32-bit format, that makes each row 0xffff'fffc bytes, which
(just) fits in a 32-bit unsigned size_t. Increasing it to 0x4000'0000
pixels per row is enough to trigger the overflow we intended to test.
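Spelled out in 64-bit arithmetic so the intermediate values are visible (illustration only, not the test code):

    #include <stdint.h>

    /* 32-bit format, i.e. 4 bytes per pixel: */
    uint64_t fits      = (uint64_t)0x3fffffff * 4;  /* 0xfffffffc: still fits in a 32-bit size_t */
    uint64_t overflows = (uint64_t)0x40000000 * 4;  /* 0x100000000: no longer fits */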
In SDL 2, this test bug was hidden by the fact that allocating
0xffff'fffc bytes on a 32-bit platform is very likely to fail, and SDL 2
reported both "malloc() failed" and "this amount of memory is too large
for a size_t" with the same error code.
Signed-off-by: Simon McVittie <smcv@collabora.com>
Adding 3 bytes of alignment to 0x7fff'ffff is not enough to make it
overflow a 4-byte unsigned size_t, so this test was not exercising
the intended failure mode. We cannot actually make this overflow
with a signed 32-bit width and an 8-bit format: the maximum width is
not enough to achieve that. However, if we switch to a 24-bit format,
we can make the calculation overflow.
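Again spelled out in 64-bit arithmetic for illustration:

    #include <stdint.h>

    uint64_t with_alignment = (uint64_t)0x7fffffff + 3;  /* 0x80000002: still fits in 32 bits */
    uint64_t pitch_24bpp    = (uint64_t)0x7fffffff * 3;  /* 0x17ffffffd: overflows a 32-bit size_t */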
In SDL 2, this test bug was hidden by the fact that allocating
0x7fff'ffff bytes on a 32-bit platform will usually fail, and SDL 2
reported both "malloc() failed" and "this amount of memory is too large
for a size_t" with the same error code.
Signed-off-by: Simon McVittie <smcv@collabora.com>