This better reflects how HDR content is actually used: most content falls within the SDR range, with specular highlights and bright details extending beyond it into the HDR headroom.
This more closely matches how HDR is handled on Apple platforms, where it is exposed as EDR (Extended Dynamic Range).
This also greatly simplifies application code, which no longer has to think about color scaling. SDR content is rendered at the appropriate brightness automatically, and HDR content is scaled to the correct range for the display's HDR headroom.
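As a rough illustration of that model (a minimal sketch, assuming SDL3's SDL_SetRenderDrawColorFloat and SDL_RenderFillRect; not code from this change): values in the 0.0-1.0 range are SDR, and values above 1.0 reach into the display's HDR headroom.

```c
#include <SDL3/SDL.h>

/* Sketch only: the application draws with plain float colors and does no
 * manual color scaling. 0.0-1.0 covers the SDR range; values above 1.0
 * land in the HDR headroom when the output colorspace and display allow it. */
static void draw_scene(SDL_Renderer *renderer)
{
    SDL_FRect sdr_rect = { 10, 10, 100, 100 };
    SDL_FRect hdr_rect = { 120, 10, 100, 100 };

    /* Ordinary SDR white */
    SDL_SetRenderDrawColorFloat(renderer, 1.0f, 1.0f, 1.0f, 1.0f);
    SDL_RenderFillRect(renderer, &sdr_rect);

    /* A highlight brighter than SDR white; the renderer maps it into the
     * display's HDR headroom rather than the application scaling it. */
    SDL_SetRenderDrawColorFloat(renderer, 3.0f, 3.0f, 3.0f, 1.0f);
    SDL_RenderFillRect(renderer, &hdr_rect);
}
```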
Eventually we can re-add a fast path for that data down to the individual renderers. Setting a color scale would still require converting to float, and most hardware-accelerated renderers prefer to consume colors as float, so this requires some thought and performance testing.
Fixes https://github.com/libsdl-org/SDL/issues/9009
The renderer will always use the sRGB colorspace for drawing, and will default to the sRGB output colorspace. If you want blending in linear space and HDR support, you can select the scRGB output colorspace, which is supported by the direct3d11 and direct3d12 renderers.
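For illustration, selecting the scRGB output colorspace at renderer creation might look like this (a sketch assuming the properties-based creation path and the SDL_COLORSPACE_SCRGB constant; the exact property and constant names may differ in this revision):

```c
/* Sketch only: request scRGB output for HDR + linear-space blending.
 * Assumes SDL_CreateRendererWithProperties and the property/constant names
 * below; check SDL_render.h and SDL_pixels.h for the exact spellings. */
SDL_PropertiesID props = SDL_CreateProperties();
SDL_SetPointerProperty(props, SDL_PROP_RENDERER_CREATE_WINDOW_POINTER, window);
SDL_SetNumberProperty(props, SDL_PROP_RENDERER_CREATE_OUTPUT_COLORSPACE_NUMBER,
                      SDL_COLORSPACE_SCRGB);
SDL_Renderer *renderer = SDL_CreateRendererWithProperties(props);
SDL_DestroyProperties(props);
if (!renderer) {
    /* scRGB output is only available on some backends (direct3d11/12 here) */
    SDL_Log("Couldn't create an scRGB renderer: %s", SDL_GetError());
}
```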
You can't do blending directly in PQ space, which means you have to create a scene render target in linear space and use shaders to convert PQ texture data to linear, etc. All of this is out of scope for the SDL 2D renderer at the moment.
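For reference, the decode step such a shader would have to perform is the ST 2084 (PQ) EOTF; here is a CPU-side sketch of the standard formula (not code from this change):

```c
#include <math.h>

/* ST 2084 (PQ) EOTF: decode a PQ-encoded value in [0, 1] to linear light,
 * where 1.0 corresponds to 10000 cd/m^2. Constants are from the standard.
 * Sketch only; the renderer would do this per-texel in a shader. */
static float PQ_to_linear(float N)
{
    const float m1 = 0.1593017578125f;   /* 2610 / 16384 */
    const float m2 = 78.84375f;          /* 2523 / 4096 * 128 */
    const float c1 = 0.8359375f;         /* 3424 / 4096 */
    const float c2 = 18.8515625f;        /* 2413 / 4096 * 32 */
    const float c3 = 18.6875f;           /* 2392 / 4096 * 32 */

    const float Np = powf(N, 1.0f / m2);
    const float num = fmaxf(Np - c1, 0.0f);
    const float den = c2 - c3 * Np;
    return powf(num / den, 1.0f / m1);
}
```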
This allows color operations to happen in linear space between sRGB input and sRGB output. This is currently supported on the direct3d11, direct3d12 and opengl renderers.
This is a good resource on blending in linear space vs sRGB space:
https://blog.johnnovak.net/2016/09/21/what-every-coder-should-know-about-gamma/
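In short, blending in linear space means decoding sRGB values to linear light, blending, and re-encoding. The standard sRGB transfer functions, sketched here for reference (not code from this change):

```c
#include <math.h>

/* Standard sRGB transfer functions used when blending in linear space:
 * decode sRGB-encoded values to linear light, blend, then re-encode. */
static float srgb_to_linear(float s)
{
    return (s <= 0.04045f) ? s / 12.92f
                           : powf((s + 0.055f) / 1.055f, 2.4f);
}

static float linear_to_srgb(float l)
{
    return (l <= 0.0031308f) ? l * 12.92f
                             : 1.055f * powf(l, 1.0f / 2.4f) - 0.055f;
}
```

Blending the encoded values directly is what produces the darkened, hue-shifted results the article above describes.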
Also added the testcolorspace test program to verify the colorspace changes.