Architecture: split backends into input, output, and local parts
We have the screen-share plugin which uses some libweston internal APIs because it needs to deliver input events to the libweston core.
We have the remoting plugin and the pipewire plugin using the DRM-backend virtual output API and libweston internal API. The virtual output API exists only because libweston (the backends) cannot handle multiple kinds of outputs at the same time.
A proposition. Split the backend architecture into three pieces:
Input backends deliver input to libweston core.
Output backends implement outputs and call into renderers.
Local backends (for lack of a better name) handle the local setup, e.g. VT setup.
The point here is that one could have more than one input backend or output backend active at the same time. Local backends can only be active one at a time, since they manage singular things like the active VT.
E.g. the DRM-backend would be all three.
Then the screen-share plugin could act as an additional input backend, the remoting and pipewire plugins could act as additional output backends, and the need for the DRM-backend virtual output API would disappear. This means these plugins could be used with any existing backend.
Furthermore, e.g. the RDP-backend could be turned into a combined input/output backend that could be loaded in addition to e.g. the DRM-backend. That would make the screen-share plugin obsolete.
Things like weston_seat, weston_head and weston_output would need to be dynamically typed, so that backends only touch their own bits.
Just a rough idea, not really thought through. I'm not sure if this goes too far, or what the split would actually look like in code: it would probably happen at the API level rather than by splitting modules into more modules, but there would be some provision for loading combinations of modules that are not possible today.
This would probably take a good few months to design. The primary aim would be to allow additional inputs and outputs.
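A minimal sketch of what the split could look like at the API level, assuming three vtable-style structs; all type and member names below are made up for illustration and do not exist in libweston:

```c
/* Hypothetical sketch only: none of these types exist in libweston today.
 * It only illustrates the proposed input/output/local split. */

struct weston_input_backend {
	/* Delivers input events into the libweston core, e.g. through
	 * weston_seat; more than one can be active at a time. */
	int (*start)(struct weston_input_backend *b,
		     struct weston_compositor *ec);
	void (*destroy)(struct weston_input_backend *b);
};

struct weston_output_backend {
	/* Implements heads/outputs and calls into the renderer;
	 * more than one can be active at a time. */
	struct weston_output *(*create_output)(struct weston_output_backend *b,
					       struct weston_head *head);
	void (*destroy)(struct weston_output_backend *b);
};

struct weston_local_backend {
	/* Owns singular local resources such as the active VT or the logind
	 * session, so only one can be active per compositor. */
	int (*activate)(struct weston_local_backend *b);
	void (*destroy)(struct weston_local_backend *b);
};

/* The DRM-backend would provide all three; remoting and pipewire would
 * provide only an output backend; screen-share only an input backend. */
```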
The point here is that one could have more than one input backend or output backend active at the same time
In my experience (wlroots), this is complicated. Multiple output backends imply multiple renderers, multiple EGL contexts. Correctly transferring buffers coming from clients to the right EGL context isn't straightforward.
I think we have to rule out multiple renderers. So a backend does not imply a renderer anymore: you first pick a renderer and then the backends that work with it. Therefore we should be able to deal with just one EGLDisplay, and probably just one EGLContext as well.
However, the renderer needs some backend-specific data (platform name plus a GBM device, Wayland display, etc.) to be initialized. For instance, for the DRM backend's renderer a GBM device needs to be created from the backend's DRM FD. So we have a chicken-and-egg problem here.
In wlroots it's the backend's job to initialize the renderer. Who would be responsible for this in Weston?
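To make the chicken-and-egg problem concrete, here is a rough sketch (not Weston code, error handling omitted) of bringing up a GL renderer for the DRM case: the GBM device can only come from the backend's DRM FD, yet the resulting EGLDisplay is exactly the thing every output backend would have to share:

```c
#include <gbm.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>

/* Rough illustration only: the GBM device needed for EGL initialization can
 * only be created from the DRM backend's FD, but the EGLDisplay/EGLContext
 * it yields must then be shared by all output backends. */
static EGLDisplay
init_gl_renderer_for_drm(int drm_fd)
{
	struct gbm_device *gbm = gbm_create_device(drm_fd);

	EGLDisplay dpy = eglGetPlatformDisplay(EGL_PLATFORM_GBM_KHR, gbm, NULL);
	eglInitialize(dpy, NULL, NULL);

	return dpy;
}
```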
Good points. We will probably end up with something like primary and secondary backends instead. The primary backend would pick the renderer and the rendering device.
Primary backends:
DRM
headless
wayland
X11
fbdev
Secondary backends:
RDP
screen-share (if still exists)
pipewire
remoting
What the RDP-backend offers right now would be achieved with headless+RDP in the new architecture.
A Weston instance can have only one primary backend, but optionally multiple secondary backends. This means you could combine e.g. DRM+RDP. The main point is that a weston_compositor instance could have weston_output objects from different backends simultaneously. The secondary backends might not even be "backends", they could be called additional input and/or output plugins.
X11 and wayland-backends might be secondary backends as well. They could run with headless for the normal use-case, or with DRM for... some really funny use cases I don't know yet. Hence I put X11 and wayland as primary backends for now.
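As a rough sketch of the intended composition; the load_primary_backend()/load_secondary_backend() helpers and the module names passed to them are hypothetical, only the shape matters:

```c
/* Hypothetical loading sequence: exactly one primary backend picks the
 * renderer and rendering device, any number of secondary backends then add
 * their own heads and outputs to the same weston_compositor. */
static int
load_backends(struct weston_compositor *ec)
{
	/* Primary: DRM, headless, wayland, X11 or fbdev. */
	if (load_primary_backend(ec, "drm-backend.so") < 0)
		return -1;

	/* Secondaries: e.g. DRM+RDP replaces today's separate RDP-backend
	 * instance (headless+RDP in the new architecture). */
	if (load_secondary_backend(ec, "rdp-backend.so") < 0)
		return -1;
	if (load_secondary_backend(ec, "pipewire-backend.so") < 0)
		return -1;

	return 0;
}
```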
I came to the same conclusion (have a primary backend with optional secondary backends, most of the remote/virtual ones should be secondary) the last time I was thinking about remoting and multiple backends FWIW. I don't really know how DRM + X11/Wayland would work though, given they have totally different input paths. Which is one of the main differences between something like PipeWire (output-only streaming, no input, no resizing, etc) and the nested backends.
Hmm, I'm not sure if such a strict split into primary and secondary backends is a good idea. Combining for example DRM and wayland probably makes no sense. But I do see value in using pipewire as the only backend, and I am already using headless as a secondary backend.
Unfortunately the strict separation seems technically necessary, otherwise the design becomes considerably more complicated, and I do not see what we would lose with the strict separation.
If you want pipewire as the only outputs, then what's the harm in combining that with the headless-backend?
How is it even possible to use headless-backend as a secondary backend? What do you gain from that?
How is it even possible to use headless-backend as a secondary backend? What do you gain from that?
wlroots supports this for screen-sharing a headless output while running with the DRM backend. Users need this feature to extend their desktop with a remote VNC output displayed by a Raspberry Pi, for instance.
How is it even possible to use headless-backend as a secondary backend? What do you gain from that?
There are obviously patches involved as the multi-backend stuff is still very much WIP. And I have two use-cases:
I need applications to make progress even if no monitor is attached to HDMI. A secondary view on a headless output takes over when the DRM output is removed. I've used "force-on=true" in the past and that causes all kinds of issues.
I need to do some tricks with input: I have a fullscreen app that gets the input of some devices but needs to be invisible to others. So I'm using a headless output with mostly the same layout, except for the one surface and route the relevant input there.
And by the way, the current WIP implementation in !578 (merged) does allow most backends to be primary or secondary backends.
wlroots supports this for screen-sharing a headless output while running with the DRM backend. Users need this feature to extend their desktop with a remote VNC output displayed by a Raspberry Pi, for instance.
In the current Weston plans both primary and secondary backends can create outputs of their own, so they don't need the headless-backend for creating outputs. This way whatever backend creates the output is also in full control of the output, e.g. timings and pixel format.
OTOH, the story for sharing a real output is more complicated than it should be right now: it spawns another Weston instance with e.g. RDP-backend and copies framebuffers there.
I need applications to make progress even if no monitor is attached to HDMI. A secondary view on a headless output takes over when the DRM output is removed. I've used "force-on=true" in the past and that causes all kinds of issues.
When we are considering upstream Weston design, I have to say that that is an application bug. Sorry.
No amount of real or virtual outputs is going to fix that properly, as something else could always occlude the application window anyway, which ideally leads to the same end result.
I need to do some tricks with input: I have a fullscreen app that gets the input of some devices but needs to be invisible to others. So I'm using a headless output with mostly the same layout, except for the one surface and route the relevant input there.
An app needs to be invisible to input devices? Do you mean that the app needs to get all input from, and input only from, a very specific subset of input devices? And that no other app must get input from those devices? Or something else?
Input routing is the window manager's job. Again, playing with outputs here is not what I'd do, but it was probably the quickest hack you could come up with. I have no sympathy towards such hacks when we are discussing fundamental design of Weston in upstream.
Well, I live in the real world and some things are outside of my control. Using headless outputs is my way of not messing with the fundamental design of how input or the client throttling works in Weston...
And it's not quite clear to me why you actually need the strict separation of primary and secondary backends. What's the benefit here?
Yes, of course you do what you need to do to deliver. We just don't drive upstream architecture design by the workarounds needed previously but instead try to make the workarounds unnecessary in the long run.
The strict separation results from clearly defining the architecture and the responsibilities and capabilities of each module. Without clear definitions, backends could be stepping on each other when used in a "wrong" combination, and there would be no sign of which combinations do not work quite right.
For example, taking over a VT or a logind session is something only one component can do at a time; the same goes for the input and DRM KMS devices of a seat. Likewise, supporting multiple renderers would be really difficult for no apparent benefit. We also need to consider Wayland clients, to whom the compositor looks like a single entity that needs to act consistently: for example, a compositor must expose at most one presentation-time global in the registry, or applications cannot know which one to bind to, since the protocol design never prepared for multiple globals.
To allow heterogeneous outputs to exist, libweston and backends need to start recognizing the type of weston_head and weston_output instead of assuming they always belong to the one backend.
Type recognition can be implemented by checking whether a particular vfunc, weston_output::destroy, belongs to the specific backend. weston_head does not have vfuncs, so it needs a const void * member to act as a similar opaque identification field. (Or maybe it needs to gain a vfunc like create_output.)
An alternative would be adding enum weston_output_backend to both weston_output and weston_head, but then the list of possible output backends (which might be plugins) would need to be hardcoded. Currently the list of backends is hardcoded and the backend API is not public, so choosing this approach would be ok until we decide to allow foreign output plugins.
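A sketch of both identification schemes described above; drm_output_destroy stands in for the DRM backend's weston_output::destroy implementation, and the backend_id member and enum values are hypothetical:

```c
#include <stdbool.h>
/* libweston headers implied; this is a fragment, not buildable code. */

/* vfunc-based check: the destroy vfunc identifies which backend owns the
 * output, because each backend installs its own implementation. */
static bool
output_is_drm(struct weston_output *output)
{
	return output->destroy == drm_output_destroy;
}

/* weston_head has no vfuncs, so it would need an opaque identity member
 * (name hypothetical), or gain a vfunc such as create_output. */
struct weston_head {
	/* ... existing members ... */
	const void *backend_id;
};

/* The alternative: a hardcoded enum on both weston_output and weston_head.
 * Fine while the backend list is hardcoded and the backend API is private. */
enum weston_output_backend {
	WESTON_OUTPUT_BACKEND_DRM,
	WESTON_OUTPUT_BACKEND_HEADLESS,
	WESTON_OUTPUT_BACKEND_RDP,
	/* ... */
};
```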
It may not be necessary (at first) to expose the type in public API, because head names will likely imply enough.
All backend functions that take weston_output or weston_head as arguments, or iterate over lists of them, will need to insert type checks and ignore outputs and heads that are not their own.
The head argument of weston_compositor_create_output_with_head() determines the type of the output created. weston_compositor_create_output() must be removed, because it cannot know which output backend to call to create an output.
Adding a head of a different type to an output must fail. It is possible that in the future we might want to support heterogeneous heads in a single output so that all output backends could access the same renderer rendered buffer (presumably needs to disable assign_planes), but that requires creating an internal head API that supports backend-specifics, is likely complex, and is not needed for making heterogeneous heads cloned (#333 is the solution instead).
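For instance, the existing weston_output_attach_head() entry point would gain a check along these lines; the head_matches_output() helper is hypothetical:

```c
/* Attaching a head of a different type to an output must fail. */
int
weston_output_attach_head(struct weston_output *output,
			  struct weston_head *head)
{
	if (!head_matches_output(head, output))
		return -1;	/* heterogeneous head: reject */

	/* ... existing attach logic ... */
	return 0;
}
```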
output_repaint_timer_handler() will somehow need to start supporting heterogeneous outputs with the repaint_begin, repaint_flush, repaint_cancel sequence.
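One possible shape, heavily simplified: run the repaint_begin/repaint_flush/repaint_cancel sequence once per output backend, repainting only that backend's outputs in between. The per-compositor backend list and the output_belongs_to() helper are hypothetical, and the exact repaint vfunc signatures may differ from current libweston:

```c
#include <stdbool.h>
/* libweston headers implied; this is a sketch of the idea, not the real
 * output_repaint_timer_handler(). */
static void
repaint_all_backends(struct weston_compositor *ec)
{
	struct weston_backend *b;
	struct weston_output *output;

	wl_list_for_each(b, &ec->backend_list, link) {
		void *repaint_data = b->repaint_begin(ec);
		bool failed = false;

		wl_list_for_each(output, &ec->output_list, link) {
			if (!output_belongs_to(output, b))
				continue;
			if (weston_output_repaint(output, repaint_data) < 0)
				failed = true;
		}

		if (failed)
			b->repaint_cancel(ec, repaint_data);
		else
			b->repaint_flush(ec, repaint_data);
	}
}
```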