To make life easier for people shipping embedded devices, and reduce the proliferation of desktop-shell hacks, it would be great to have a single-purpose shell which would pretty much just run a single client.
This shell would:
offer no UI, chrome, decor, or furniture, of any kind
start a single client named in the config file, either restarting it or shutting down when the client went away
make all top-level surfaces fullscreen
attach dialogs to their parents
raise and focus any new top-level/dialog surface created
support click⁰-to-move and click-to-raise
I don't think it needs anything more complex than that.
This would also serve as a useful example of how to drive Weston's window management.
⁰: 'Click' means both pointer buttons as well as touch.
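To make the idea concrete, here is a sketch of what the configuration for such a shell might look like. This is purely illustrative; the module name, section, and keys are hypothetical at this stage, not a finalized interface.

```ini
# Hypothetical weston.ini fragment for a single-client kiosk shell.
[core]
shell=kiosk-shell.so

[shell]
# The single client to launch; the shell would restart it (or shut
# down) when it exits, per the behavior described above.
client=/usr/bin/example-kiosk-app
```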
A possible starting point would be forking tests/weston-test-desktop-shell.c. I'd assume building on that is much easier than stripping desktop-shell down.
Thanks, I didn't know about this. I ended up basically recreating something like this (i.e., a mostly empty libweston-desktop implementation), to use as my base for the kiosk-shell.
There are a few things missing (notably can't move dialogs), and some WM behavior (e.g. focus) is incomplete. Some questions:
Do we care about multiple output support? Most kiosk-like scenarios have a single output, although I could easily imagine cases with more than one screen. The current WIP branch has partial support for multiple outputs.
It seems that the standard way of dealing with dialogs in the Wayland world is to never give keyboard/pointer focus away to parents. The current WIP branch actually allows parents to get focus while the dialog is on top, which I think is not expected in the new way of things. I wonder, though, whether Wayland has a solution for always-on-top dialogs (meaning on top of their parents, not the whole desktop) that can give away focus. The scenario I am thinking of is some kind of drawing app where the palette is a child dialog window which is always on top.
Multiple outputs: if you can figure out what behavior is usually appropriate, then sure. If not, put just a wallpaper on all extra outputs so they don't show garbage.
If something for dialogs exists, you should be able to find it in xdg-shell.xml in wayland-protocols. I'm not sure what "attach dialogs to their parents" means here.
Actually, I can kind of see it tbf, especially for multi-display setups. Perhaps you could add a key to the [output] section, kiosk-app-match=xdg.shell.app.id, which would route a specific app ID to a specific display?
/cc @marius.vlad0 who has been looking at something related for AGL
Concerning (2), after some investigation and experimentation it seems that, in fact, many Wayland shells (e.g., GNOME, Sway) implement child xdg_toplevels ("dialogs" from now on for brevity) as modeless by default. There seems to be a way to make such dialogs modal, but this seems to involve toolkit mechanisms external to xdg_shell (AFAICT).
BTW, xdg-shell doesn't mention anything about input focus for dialogs, only about visual stacking ordering. Do we know if a modal hint for xdg_shell/xdg_toplevel was ever discussed?
Interestingly, desktop-shell makes dialogs modal in terms of keyboard input (you can't type into the parent), but non-modal in terms of pointer input (you can click buttons in the parent). Also, see #231 (closed) for the issue that led to the current behavior.
For kiosk-shell I am inclined to keep the current approach in the WIP, which is basically the same as gnome/sway (dialogs are always on top of parent, but modeless). Perhaps that would be a more sensible approach for desktop-shell, too?
Regarding modal vs. modeless, I thought about adding a user-tweakable knob to switch between allowing input to reach the parent surface (while the dialog is stacked on top) and not allowing it. I guess it largely depends on the use case, but from what I've seen the majority assumes modal. Maybe it is worth adding such a knob here as well, defaulting to the behavior you described? Then again, some might disagree and it may not be worth doing.
I haven't yet tried migrating views to another output, but it shouldn't be hard once you have the app_id (which you can retrieve from the desktop_surface). I would try doing that before mapping the surface, that is, in the commit callback, and set the view's output to the one found in the ini file (this assumes the output has already been changed to reflect that, probably when the surface is created). The same would be needed for both normal and dialog surface types.
FWIW, I have personally been considering modal dialogs a fundamental mis-design. So many times on X11 I have had to cancel a modal dialog just to re-check what I want to type into it. Sometimes I cannot even move it to reveal the parent window underneath, where I need to see something.
From what I understand, sometimes you make a dialog modal so that no other application/client can steal focus from it, e.g. ensuring the password you type cannot accidentally go to a different client. However, focus stealing is something that should not exist to begin with. On X11, window managers implement various focus-stealing prevention mechanisms because focus is free-for-all, but a Wayland window manager simply shouldn't be switching focus on client requests at all, unless they come with a valid and current input event serial indicating that the focus change really originates from the end user and nothing else has happened since.
If you want to make a dialog modal to prevent input to your own app's main window or such, that you can enforce in your app. At least you can do that for the input part, not sure about the window "active" state. It is the responsibility of the client to reject or re-route input on the parent window if a child window is supposed to be modal.
Hence I believe there has been no need to make a standard protocol for modal dialogs for the desktop. IMO global modals shouldn't exist.
Note that in my mind, "modal" means "forced to receive all input and stay active", which could be global (the mis-design) or per-client (ok). A window that is merely always-on-top globally or of its parent is not "modal" IMO.
If there is no explicit protocol defined, there is no way for the compositor to know whether a child window should be modal or not. It cannot be a compositor setting; apps can rely on either behavior (Gimp toolbox windows vs. an "overwrite file? yes/no" dialog). Since the client can implement modality itself (can it?), there is no need for the compositor to know or enforce modality.
Note that in my mind, "modal" means "forced to receive all input and stay active", which could be global (the mis-design) or per-client (ok). A window that is merely always-on-top globally or of its parent is not "modal" IMO.
I agree with this definition. Note that I have been using "modal" in this discussion to refer strictly to the per-client variant (or rather a per-parent-xdg_toplevel variant).
If you want to make a dialog modal to prevent input to your own app's main window or such, that you can enforce in your app. At least you can do that for the input part, not sure about the window "active" state. It is the responsibility of the client to reject or re-route input on the parent window if a child window is supposed to be modal.
From what I have observed, both GTK and Qt use this approach to implement some level of modality on Wayland even without any special Wayland server support. For example, when running gedit on Sway (or even the WIP kiosk-shell), after opening the "About" dialog (which is modal) I cannot interact with the contents of the main window anymore. When opening the "Preferences" dialog (modeless) I can still interact normally with the main window while the dialog remains on top.
Note that even for modal dialogs we can still "activate" the parent window, since the compositor doesn't know anything about modality (on sway and others). GNOME prohibits this "activation" with its own special protocol as mentioned by @emersion.
Since the client can implement modality itself (can it?), there is no need for the compositor to know or enforce modality.
It's not strictly needed, but there are benefits, like the ability to have special/clearer compositor-side modal dialog behavior (e.g., GNOME's attach-modal-dialogs). If people are interested, I can draft an update/extension to xdg-shell with the modal hint, and we can continue the discussion about the (non-)necessity and pros/cons of such a hint there (unless there have already been discussions about this in the past where this was rejected?).
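For discussion purposes, a modal hint could take a shape loosely in the style of xdg-shell.xml. The request below is entirely hypothetical; nothing like it existed in wayland-protocols at the time of this thread.

```xml
<!-- Hypothetical extension request, sketched for discussion only. -->
<request name="set_modal">
  <description summary="make this toplevel modal to its parent">
    Hint that, while mapped, this toplevel should receive all input
    directed at its parent, and that the parent should not be
    activated. Only meaningful if a parent has been set.
  </description>
</request>
```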
In terms of desktop-shell, I am now even more inclined to think that it should follow what other compositors are doing and make dialogs modeless (instead of the current partial keyboard modality), leaving modality to the toolkits (given that we have no relevant protocol hint).