Blending algorithms - what is the goal?
Traditionally, PC software has blended sRGB-encoded numbers together directly, because it is cheap. When alpha-blending came to window systems, it was done in display servers. Application developers tuned the color and alpha values in their applications and toolkits to produce the results they liked.
As we move towards explicit color management and HDR, that old "sRGB blending" method starts to break down. It already had some problems before.
Hence, the next candidate for a blending method is to take the physical interpretation of alpha as coverage (not translucency) and fix the blending equation to match it. This gives us blending in optical (linear-light) space. Naturally the blending results will differ, and old applications that expected sRGB blending may render in ways their authors did not intend.
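To make the difference concrete, here is a minimal single-channel sketch of the two blending methods. The function names are mine, not from any window system code; the conversion formulas are the standard sRGB transfer functions:

```python
def srgb_to_linear(v):
    # sRGB electro-optical transfer function (IEC 61966-2-1), per channel in [0, 1]
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    # Inverse of the above
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

def blend_srgb(fg, alpha, bg):
    # Traditional "sRGB blending": lerp the encoded values directly.
    return fg * alpha + bg * (1.0 - alpha)

def blend_optical(fg, alpha, bg):
    # Optical (linear-light) blending: decode, lerp, re-encode.
    f, b = srgb_to_linear(fg), srgb_to_linear(bg)
    return linear_to_srgb(f * alpha + b * (1.0 - alpha))
```

With white at 50% alpha over black, `blend_srgb` gives encoded 0.5 (a mid gray), while `blend_optical` gives encoded ≈ 0.735: half the light of white, which looks noticeably brighter. This is exactly why existing content tuned for sRGB blending changes appearance under optical blending.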
As a side note, alpha as coverage is about window shape, not translucency. As explained in the above link, the physical interpretation of using alpha-coverage for window transparency is perhaps surprising. Yet it is widely used, because nothing else has been available. It might be best to treat coverage and semi-transparency as separate things; we cannot use the same alpha channel in an image for both purposes. The optical blending method is the correct one for window shaping.
- How to handle window semi-transparency then?
- How should semi-transparency be defined?
- What is its goal?
- Should it have a physical interpretation, or should it be something not physically based?
These questions came to my mind after reading wlroots/wlroots!4634, where @jlindgren90 proposes a method of approximating the results of sRGB blending inside an optical blending setup, so that semi-transparent surfaces do not lose readability when window systems move on from the implicit sRGB-only world.
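I am not certain this matches the MR's actual method, but one way such an approximation could be phrased, as a hypothetical sketch, is: re-encode the linear inputs to sRGB, lerp there, and decode the result back to linear light for the rest of the pipeline. That reproduces the appearance of classic sRGB blending inside a linear compositor:

```python
def srgb_to_linear(v):
    # sRGB electro-optical transfer function (IEC 61966-2-1)
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

def emulate_srgb_blend_in_linear(fg_lin, alpha, bg_lin):
    # Hypothetical sketch (not necessarily the MR's approach): blend in the
    # encoded domain, then return to the compositor's linear working space.
    fg = linear_to_srgb(fg_lin)
    bg = linear_to_srgb(bg_lin)
    return srgb_to_linear(fg * alpha + bg * (1.0 - alpha))
```

For white at 50% alpha over black this yields linear ≈ 0.214, the light level of encoded 0.5, i.e. the familiar darker result that sRGB-blended UIs were designed around.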
Since Wayland does not (yet) define how blending should work, this is our opportunity to think about what goals window system blending should have and how to reach them.
Maintaining some degree of backward compatibility with existing applications is another question. There are examples where things "break", but also opinions that such problems are not common, or at least that people do not tend to complain about them. Possibly a Wayland extension for blending methods is needed, so that applications know what to expect. Such an extension should probably not define the blending results with mathematical precision, only set rough expectations. Or maybe we need a Wayland extension for window transparency masking, separate from today's alpha channel. Maybe combined with effects like background blur?