- Always use internal qsort and bsearch implementations.
- Add "_r" reentrant versions.
The reasons for always using the internal versions are that the C runtime
versions' callbacks are not marked SDLCALL, so we would have to add bridge
functions for them anyhow; the C runtime qsort_r/qsort_s have different
orders of arguments on different platforms; and most importantly, qsort()
isn't a stable sort and isn't guaranteed to give the same ordering for
two objects marked as equal by the callback. As such, Visual Studio and
glibc can give different sort results for the same data set, so having one
piece of code shared on all platforms makes sense here, for reliability.
bsearch does not have a standard _r version at all, and suffers from the
same SDLCALL concern. Since the code is simple and we would have to work
around the C runtime, it's easier to just go with the built-in function
and remove all the CMake C runtime tests.
Fixes #9159.
This pull request adds an implementation of a Vulkan render backend to SDL. I have so far tested this primarily on Windows, but have also smoke tested on Linux and macOS (MoltenVK). I have not tried it yet on Android, but it should be usable there as well (sans any bugs I missed). This began as a port of the SDL Direct3D12 renderer, which was the closest thing to Vulkan that existed in the SDL codebase. The shaders are more or less identical (with the only differences being in descriptor bindings vs root descriptors). The shaders are built using the HLSL frontend of glslang.
Everything in the code is pure Vulkan 1.0 (no extensions), with the exception of HDR support which requires the Vulkan instance extension `VK_EXT_swapchain_colorspace`. The code could have been simplified considerably if I used dynamic rendering, push descriptors, extended dynamic state, and other modern Vulkan-isms, but I felt it was more important to make the code as vanilla Vulkan as possible so that it would run on any Vulkan implementation.
The main differences with the Direct3D12 renderer are:
* Having to manage renderpasses for performing clears. There is likely still some optimization remaining for more efficient use of TBDR hardware, where there might be some unnecessary loads/stores, but it does attempt to do clears using renderpasses.
* Constant buffer data couldn't be directly updated in the command buffer since I didn't want to rely on push descriptors, so there is a persistently mapped buffer with increasing offset per swapchain image where CB data gets written.
* Many more resources are dependent on swapchain resizing, due to e.g. Vulkan requiring the VkFramebuffer to reference the VkImageView of the swapchain, so there is a bit more code around handling that than was necessary in D3D12.
* For NV12/NV21 textures, rather than there being plane data in the texture itself, the UV data is placed in a separate `VkImage`/`VkImageView`.
I've verified that `testcolorspace` works with both sRGB and HDR linear. I've verified that `testoverlay` works with the various YUV/NV12/NV21 formats. I've tested `testsprite`. I've checked that window resizing and swapchain out-of-date handling when minimizing are working. I've run through `testautomation` with the render tests. I've also run several of the tests with Vulkan validation and synchronization validation. Surely I will have missed some things, but I think it's in a good state to be merged and built on from here.
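For anyone who wants to try the backend themselves, it can be requested by name at renderer creation; a minimal sketch, assuming the driver registers itself as "vulkan" (matching the naming of the other drivers) and using the current SDL3 SDL_CreateRenderer signature:

```c
#include <SDL3/SDL.h>

/* Sketch: create a window and explicitly request the Vulkan render driver by
 * name ("vulkan" assumed). Returns NULL on failure. */
static SDL_Renderer *create_vulkan_renderer(SDL_Window **out_window)
{
    SDL_Window *window = SDL_CreateWindow("vulkan renderer test", 640, 480, 0);
    if (!window) {
        return NULL;
    }
    SDL_Renderer *renderer = SDL_CreateRenderer(window, "vulkan");
    if (!renderer) {
        SDL_DestroyWindow(window);
        return NULL;
    }
    *out_window = window;
    return renderer;
}
```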
This better reflects how HDR content is actually used, e.g. most content is in the SDR range, with specular highlights and bright details beyond the SDR range, in the HDR headroom.
This more closely matches how HDR is handled on Apple platforms, as EDR.
This also greatly simplifies application code which no longer has to think about color scaling. SDR content is rendered at the appropriate brightness automatically, and HDR content is scaled to the correct range for the display HDR headroom.
This does something a little weird, in that it doesn't care what
`__ANDROID_API__` is set to, but will attempt to dlopen the system
libraries, like we do for many other platform-specific pieces of SDL.
This allows us to a) avoid bumping the minimum required Android version, which is
extremely ancient but otherwise still works, doing the right thing on old
and new hardware in the field, and b) avoid requiring the app to link against
more libraries than it did before the feature was available.
The downside is that it's a little messy, but it's okay for now, I think.
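The pattern is the usual dlopen/dlsym dance; a generic sketch for illustration only (the library and symbol names here are placeholders, not the actual Android libraries SDL loads):

```c
#include <dlfcn.h>
#include <stddef.h>

/* Hypothetical function pointer type for a symbol we want from a system
 * library that may or may not exist on the device. */
typedef int (*example_fn_t)(int value);

static void *example_lib = NULL;
static example_fn_t example_fn = NULL;

/* Returns 1 if the optional feature is usable, 0 otherwise. The app never
 * links against the library, so older devices simply report "unavailable". */
static int load_optional_feature(void)
{
    example_lib = dlopen("libexample.so", RTLD_NOW | RTLD_LOCAL);
    if (!example_lib) {
        return 0;
    }
    example_fn = (example_fn_t)dlsym(example_lib, "example_function");
    if (!example_fn) {
        dlclose(example_lib);
        example_lib = NULL;
        return 0;
    }
    return 1;
}
```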
- Simplified public API, simplified backend interface.
- Camera device hotplug events.
- Thread code is split up so that backends that provide their own threads can use it.
- Added "dummy" backend.
Note that CoreMedia (Apple) and Android backends need to be updated, as does
the testcamera app (testcameraminimal works).
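Hotplug notifications arrive through the normal event queue; a minimal sketch of consuming them, assuming event names along the lines of SDL_EVENT_CAMERA_DEVICE_ADDED/REMOVED (the final spelling may still change):

```c
#include <SDL3/SDL.h>

/* Sketch: react to camera hotplug events in the app's event loop. */
static void handle_camera_events(void)
{
    SDL_Event event;
    while (SDL_PollEvent(&event)) {
        switch (event.type) {
        case SDL_EVENT_CAMERA_DEVICE_ADDED:   /* assumed event name */
            SDL_Log("camera device added");
            break;
        case SDL_EVENT_CAMERA_DEVICE_REMOVED: /* assumed event name */
            SDL_Log("camera device removed");
            break;
        default:
            break;
        }
    }
}
```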
It is becoming necessary to enable additional features as libdecor continues to evolve, and checking against a single base version will no longer be adequate. Libdecor doesn't provide versioning defines in its headers, so split the version string into parts to allow for discrete version detection and feature enablement at build time.
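With the version split into discrete parts, build-time feature checks can be expressed as ordinary comparisons; a sketch with hypothetical define names (the actual names generated by the build may differ):

```c
/* Hypothetical defines produced by the build from the pkg-config version
 * string, e.g. "0.2.1" -> 0 / 2 / 1. Names are illustrative only. */
#if defined(SDL_LIBDECOR_VERSION_MAJOR) && defined(SDL_LIBDECOR_VERSION_MINOR)
#if (SDL_LIBDECOR_VERSION_MAJOR > 0) || (SDL_LIBDECOR_VERSION_MINOR >= 2)
#define HAVE_NEWER_LIBDECOR_FEATURE 1  /* illustrative feature gate */
#endif
#endif
```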
Allow the Visual Studio project to define HAVE_LIBC=0 to enable building without a C runtime on Windows entirely through Visual Studio project changes.
Renamed the following property define names to have a type suffix to
match other property names.
SDL_PROP_TEXTURE_OPENGL_TEXTURE_TARGET (number)
SDL_PROP_TEXTURE_OPENGLES2_TEXTURE_TARGET (number)
SDL_PROP_WINDOW_CREATE_WAYLAND_SCALE_TO_DISPLAY (boolean)
SDL_PROP_WINDOW_RENDERER (pointer)
SDL_PROP_WINDOW_TEXTUREDATA (pointer)
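After the rename, reads go through the usual typed property getters; a sketch using the first of the renamed properties, assuming the post-rename define carries the _NUMBER suffix:

```c
#include <SDL3/SDL.h>

/* Sketch: read the GL texture target from a texture's properties. The
 * renamed define (with its _NUMBER suffix) is assumed here; 0 is the
 * fallback value when the property is missing. */
static Sint64 get_gl_texture_target(SDL_Texture *texture)
{
    SDL_PropertiesID props = SDL_GetTextureProperties(texture);
    return SDL_GetNumberProperty(props, SDL_PROP_TEXTURE_OPENGL_TEXTURE_TARGET_NUMBER, 0);
}
```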
Eventually we can re-add a fast path for that data down to the individual renderers. Setting color scale would still require converting to float, and most hardware accelerated renderers prefer to consume colors as float, so this requires some thought and performance testing.
Fixes https://github.com/libsdl-org/SDL/issues/9009
The renderer will always use the sRGB colorspace for drawing, and will default to the sRGB output colorspace. If you want blending in linear space and HDR support, you can select the scRGB output colorspace, which is supported by the direct3d11 and direct3d12 renderers.
This allows color operations to happen in linear space between sRGB input and sRGB output. This is currently supported on the direct3d11, direct3d12 and opengl renderers.
This is a good resource on blending in linear space vs sRGB space:
https://blog.johnnovak.net/2016/09/21/what-every-coder-should-know-about-gamma/
Also added testcolorspace to verify colorspace changes
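Selecting the linear output colorspace happens at renderer creation through the properties API; a sketch assuming the current SDL3 property and constant spellings (the scRGB colorspace constant is written as SDL_COLORSPACE_SRGB_LINEAR here, which may differ from the names used in this commit):

```c
#include <SDL3/SDL.h>

/* Sketch: create a renderer with a linear (scRGB) output colorspace via the
 * properties API. Property/constant names are assumed, not copied from this
 * change. */
static SDL_Renderer *create_linear_renderer(SDL_Window *window)
{
    SDL_PropertiesID props = SDL_CreateProperties();
    SDL_SetPointerProperty(props, SDL_PROP_RENDERER_CREATE_WINDOW_POINTER, window);
    SDL_SetNumberProperty(props, SDL_PROP_RENDERER_CREATE_OUTPUT_COLORSPACE_NUMBER,
                          SDL_COLORSPACE_SRGB_LINEAR);
    SDL_Renderer *renderer = SDL_CreateRendererWithProperties(props);
    SDL_DestroyProperties(props);
    return renderer;
}
```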
Add a mode that forces Wayland windows to output with scaling that forces 1:1 pixel mapping.
This is intended to allow legacy applications to be displayed without desktop scaling being applied. It may have issues with some display configurations, as it forces the window to behave in a way that Wayland desktops were not designed to accommodate: rounding errors can result from certain combinations of window/scale values, the window may be unusably small, may jump in size at times, or may appear larger than the desktop space, and cursor precision may be reduced.
Windows flagged as DPI-aware are not affected by this.
The automated video test suite passes with the hint turned on.
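Per-window opt-in goes through the window-create property listed in the rename commit above; a sketch, assuming the post-rename define name with its _BOOLEAN suffix:

```c
#include <SDL3/SDL.h>

/* Sketch: create a window that requests 1:1 pixel mapping on Wayland via the
 * create property. The define name (with its _BOOLEAN suffix) is assumed from
 * the property rename above. */
static SDL_Window *create_unscaled_window(void)
{
    SDL_PropertiesID props = SDL_CreateProperties();
    SDL_SetStringProperty(props, SDL_PROP_WINDOW_CREATE_TITLE_STRING, "legacy app");
    SDL_SetNumberProperty(props, SDL_PROP_WINDOW_CREATE_WIDTH_NUMBER, 640);
    SDL_SetNumberProperty(props, SDL_PROP_WINDOW_CREATE_HEIGHT_NUMBER, 480);
    SDL_SetBooleanProperty(props, SDL_PROP_WINDOW_CREATE_WAYLAND_SCALE_TO_DISPLAY_BOOLEAN, SDL_TRUE);
    SDL_Window *window = SDL_CreateWindowWithProperties(props);
    SDL_DestroyProperties(props);
    return window;
}
```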