The removal of a wl_output may not be accompanied by leave events for the surfaces present on it. Ensure that no window continues to hold a reference to a removed output.
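For context, a minimal sketch of the idea in plain C (the window list and per-window output array are hypothetical, not SDL's actual data structures): when the registry reports that a wl_output is gone, scrub every window's references before destroying the proxy, rather than waiting for a wl_surface.leave event that may never arrive.

```
#include <wayland-client.h>

/* Hypothetical bookkeeping types, for illustration only. */
struct my_window {
    struct my_window *next;
    struct wl_output *outputs[8]; /* outputs this window's surface is on */
    int num_outputs;
};

struct my_state {
    struct my_window *windows;
};

/* Called when a registry global_remove event names a known wl_output:
 * scrub every window first, then destroy the proxy. */
static void remove_output(struct my_state *state, struct wl_output *output)
{
    for (struct my_window *w = state->windows; w; w = w->next) {
        for (int i = 0; i < w->num_outputs; i++) {
            if (w->outputs[i] == output) {
                /* Compact the array so no dangling reference remains. */
                w->outputs[i] = w->outputs[--w->num_outputs];
                break;
            }
        }
    }
    wl_output_destroy(output);
}
```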
The VULKAN_InvalidateCachedState() function seems to be meant to
invalidate any _cached_ state, i.e. global state of the API which may
have been modified outside the renderer.
However, at the moment, the Vulkan renderer also resets a number of
internal variables which track buffers, offsets, etc., in use. As a
result, the renderer can get into an inconsistent state and/or lose
data.
For example, if VULKAN_InvalidateCachedState() is called in between two
calls to VULKAN_UpdateVertexBuffer(), the data from the first call will
be overwritten by that from the second, as the number of the next vertex
buffer to use will be reset to 0. This can result in rendering errors,
as the same vertex data is used incorrectly for several calls.
By no longer resetting this 'internal' state here, those glitches
disappear. However, I haven't tested this with any applications which
mix the Vulkan renderer with their own Vulkan code (do any such
applications exist?), so this may be insufficient in case a full flush
of the renderer state -- and possibly a wait on the appropriate fence
-- could be required.
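As a rough illustration of the distinction (the field names here are
hypothetical, not the renderer's actual structure), invalidation should
forget what we believe is currently bound, but leave the renderer's own
bookkeeping alone:

```
#include <vulkan/vulkan.h>

typedef struct VulkanRendererSketch {
    /* Cached state: mirrors of bindings that code outside the renderer
     * may have changed, so they must be re-bound on the next draw. */
    VkPipeline currentPipeline;
    VkDescriptorSet currentDescriptorSet;
    /* Internal state: owned by the renderer, never touched from outside. */
    int nextVertexBuffer;      /* which staging vertex buffer to use next */
    VkDeviceSize vertexOffset; /* write cursor within that buffer */
} VulkanRendererSketch;

static void InvalidateCachedState(VulkanRendererSketch *r)
{
    r->currentPipeline = VK_NULL_HANDLE;
    r->currentDescriptorSet = VK_NULL_HANDLE;
    /* Deliberately do NOT reset nextVertexBuffer / vertexOffset here:
     * resetting them between two UpdateVertexBuffer() calls would make
     * the second upload overwrite the first, causing the glitches
     * described above. */
}
```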
Signed-off-by: David Gow <david@ingeniumdigital.com>
When selecting a memory type, there are some property flags we need
(e.g., VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT, without which we cannot
vkMapMemory), and others we'd simply prefer (e.g.,
VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT, which may have a performance
impact, but otherwise shouldn't be required).
By specifying these separately, we can fall back to a memory type which
doesn't have everything we want, but which should still work, rather
than giving up.
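A sketch of the two-tier lookup this describes (the signature is
illustrative, not SDL's exact one):

```
#include <vulkan/vulkan.h>

static int FindMemoryTypeIndex(VkPhysicalDevice physicalDevice,
                               uint32_t typeBits,
                               VkMemoryPropertyFlags required,
                               VkMemoryPropertyFlags desired)
{
    VkPhysicalDeviceMemoryProperties props;
    vkGetPhysicalDeviceMemoryProperties(physicalDevice, &props);

    /* First pass: a type with both the required and the desired flags. */
    for (uint32_t i = 0; i < props.memoryTypeCount; i++) {
        VkMemoryPropertyFlags flags = props.memoryTypes[i].propertyFlags;
        if ((typeBits & (1u << i)) &&
            (flags & (required | desired)) == (required | desired)) {
            return (int)i;
        }
    }
    /* Fallback: required flags only; possibly slower, but still correct. */
    for (uint32_t i = 0; i < props.memoryTypeCount; i++) {
        VkMemoryPropertyFlags flags = props.memoryTypes[i].propertyFlags;
        if ((typeBits & (1u << i)) && (flags & required) == required) {
            return (int)i;
        }
    }
    return -1; /* no suitable memory type; the caller must handle failure */
}
```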
Signed-off-by: David Gow <david@ingeniumdigital.com>
VULKAN_FindMemoryTypeIndex() tries first to get a perfectly matching
memory type, then falls back to selecting any memory type which overlaps
with the requested flags.
However, some of the flags requested are actually required, so if -- for
example -- we request a memory type with
VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT, but get one without it, all future
calls to vkMapMemory() will fail with:
```
vkMapMemory(): Mapping memory without VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT set. Memory has type 0 which has properties VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT. The Vulkan spec states:
memory must have been created with a memory type that reports VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT.
```
(This occurs, for instance, on the totally non-conformant hasvk driver
for Intel Haswell integrated GPUs, which otherwise works fine.)
Instead, make sure that any memory type found has a superset of the
requested flags, so it'll always be appropriate.
Of course, this makes it _less_ likely for a memory type to be found, so
it does make #9130 worse in some ways. See the next patch for details.
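The crux of the fix in isolation, with illustrative helper names:

```
#include <stdbool.h>
#include <vulkan/vulkan.h>

/* Buggy fallback: passes if the type shares *any* flag with the request. */
static bool type_overlaps(VkMemoryPropertyFlags have, VkMemoryPropertyFlags requested)
{
    return (have & requested) != 0;
}

/* Fixed fallback: passes only if *all* requested flags are present. */
static bool type_is_superset(VkMemoryPropertyFlags have, VkMemoryPropertyFlags requested)
{
    return (have & requested) == requested;
}

/* E.g. requested = HOST_VISIBLE | HOST_COHERENT | DEVICE_LOCAL against a
 * type with only DEVICE_LOCAL (as in the hasvk log above):
 *   type_overlaps()    -> true:  selected, and vkMapMemory() later fails
 *   type_is_superset() -> false: skipped, and the search continues      */
```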
Signed-off-by: David Gow <david@ingeniumdigital.com>
When enabling the Vulkan validation layers, the 'validationLayerName'
variable technically went out of scope before vkCreateInstance() was
called. While most compilers won't clean up stack variables after random
'if' statements, some will, particularly when optimisation or memory
sanitizers are enabled.
This can lead to vkCreateInstance() segfaulting when
SDL_HINT_RENDER_VULKAN_DEBUG is enabled.
Instead, make the validationLayerName visible throughout the entire
VULKAN_CreateDeviceResources() function.
While we're at it, extract the validation layer name out into a
preprocessor #define, so that we are definitely using the same name in
VULKAN_ValidationLayersFound().
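A sketch of the lifetime bug and the fix, with hypothetical surrounding
code (only the standard Khronos layer name and the Vulkan calls are real;
the #define name and function are illustrative):

```
#include <vulkan/vulkan.h>

#define SDL_VULKAN_VALIDATION_LAYER_NAME "VK_LAYER_KHRONOS_validation"

static VkResult create_instance(VkBool32 enableValidation, VkInstance *outInstance)
{
    /* Fix: give the array the same lifetime as the create-info struct. */
    static const char *const validationLayerName[] = {
        SDL_VULKAN_VALIDATION_LAYER_NAME
    };

    VkInstanceCreateInfo instanceCreateInfo = { VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    if (enableValidation) {
        /* Before the fix, validationLayerName was declared *here*, dying at
         * the closing brace while instanceCreateInfo still pointed at it. */
        instanceCreateInfo.enabledLayerCount = 1;
        instanceCreateInfo.ppEnabledLayerNames = validationLayerName;
    }
    return vkCreateInstance(&instanceCreateInfo, NULL, outInstance);
}
```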
Signed-off-by: David Gow <david@ingeniumdigital.com>
This pull request adds an implementation of a Vulkan Render backend to SDL. I have so far tested this primarily on Windows, but also smoke tested on Linux and macOS (MoltenVK). I have not tried it yet on Android, but it should be usable there as well (sans any bugs I missed). This began as a port of the SDL Direct3D12 Renderer, which is the closest thing to Vulkan that existed in the SDL codebase. The shaders are more or less identical (with the only differences being in descriptor bindings vs root descriptors). The shaders are built using the HLSL frontend of glslang.
Everything in the code is pure Vulkan 1.0 (no extensions), with the exception of HDR support which requires the Vulkan instance extension `VK_EXT_swapchain_colorspace`. The code could have been simplified considerably if I used dynamic rendering, push descriptors, extended dynamic state, and other modern Vulkan-isms, but I felt it was more important to make the code as vanilla Vulkan as possible so that it would run on any Vulkan implementation.
The main differences with the Direct3D12 renderer are:
* Having to manage renderpasses for performing clears. There is likely still room for optimization on TBDR hardware, where some unnecessary loads/stores may remain, but it does attempt to do clears using renderpasses.
* Constant buffer data couldn't be directly updated in the command buffer since I didn't want to rely on push descriptors, so there is a persistently mapped buffer with an increasing offset per swapchain image where CB data gets written (see the sketch after this list).
* Many more resources are dependent on the swapchain resizing because, e.g., Vulkan requires the VkFramebuffer to reference the VkImageView of the swapchain, so there is a bit more code around handling that than was necessary in D3D12.
* For NV12/NV21 textures, rather than there being plane data in the texture itself, the UV data is placed in a separate `VkImage`/`VkImageView`.
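Here is a sketch of the constant-buffer scheme from the second bullet (the names are illustrative, not the renderer's actual ones): one persistently mapped buffer per swapchain image, with a cursor that advances by an aligned amount for each draw's constants.

```
#include <string.h>
#include <vulkan/vulkan.h>

typedef struct ConstantBufferArena {
    VkBuffer buffer;           /* host-visible buffer, one per swapchain image */
    uint8_t *mapped;           /* persistently mapped pointer (vkMapMemory once) */
    VkDeviceSize offset;       /* next free byte, reset once the frame's fence signals */
    VkDeviceSize size;
    VkDeviceSize minAlignment; /* minUniformBufferOffsetAlignment from device limits */
} ConstantBufferArena;

/* Copies `bytes` of constant data into the arena and returns the offset to
 * bind with, or (VkDeviceSize)-1 if the arena is full (real code would grow
 * it or allocate another buffer). minAlignment is a power of two per the
 * Vulkan spec, so the round-up mask below is valid. */
static VkDeviceSize PushConstantData(ConstantBufferArena *cb, const void *data, VkDeviceSize bytes)
{
    VkDeviceSize aligned = (cb->offset + cb->minAlignment - 1) & ~(cb->minAlignment - 1);
    if (aligned + bytes > cb->size) {
        return (VkDeviceSize)-1;
    }
    memcpy(cb->mapped + aligned, data, bytes);
    cb->offset = aligned + bytes;
    return aligned; /* use as the descriptor/dynamic offset for this draw */
}
```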
I've verified that `testcolorspace` works with both sRGB and HDR linear. I've tested `testoverlay` works with the various YUV/NV12/NV21 formats. I've tested `testsprite`. I've checked that window resizing and swapchain out-of-date handling when minimizing are working. I've run through `testautomation` with the render tests. I also have run several of the tests with Vulkan validation and synchronization validation. Surely I will have missed some things, but I think it's in a good state to be merged and build out from here.