The way we handled this in the equivalent Mir API is that we would always send the `created`¹ event:

- If the client's request contains no session ID, the `created` event contains a new session ID.
- If the client's request contains `$ID` and `$ID` is not currently in use, the `created` event contains `$ID`.
- If the client's request contains `$ID` and `$ID` is currently in use, the `created` event contains a new session ID.

This also means you don't need a `failed` event; if the client asked to restore `$ID` but the session has a different ID then the client knows there's a conflicting session.

¹: Or, rather, the equivalent event.
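The three cases above can be written out as a small decision function. This is an illustrative model of the Mir-style behaviour, not real protocol code; all names here are hypothetical.

```python
def created_event_id(requested_id, ids_in_use, fresh_id):
    """Return the session ID the 'created' event should carry.

    requested_id: ID the client asked to restore, or None.
    ids_in_use:   set of session IDs currently held by live sessions.
    fresh_id:     a newly minted ID, used when the request can't be honoured.
    """
    if requested_id is None:
        # No ID requested: mint a fresh one.
        return fresh_id
    if requested_id in ids_in_use:
        # Requested ID conflicts with a live session: mint a fresh one.
        # The client detects the conflict by seeing a different ID come back,
        # so no separate 'failed' event is needed.
        return fresh_id
    # Requested ID is free: restore it.
    return requested_id
```

Note how the conflict case and the no-ID case are indistinguishable to the compositor's reply path; only the client's comparison of requested vs. returned ID distinguishes them.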
An alternative for layout-restore is !18, the explicit “please save and restore layout state” MR.
Simply adding a z-coordinate to this extension isn't going to resolve the problem, right? Because the compositor is highly likely to restack whenever focus changes, changing your z-order? Unless the intent is that z-order is not stacking, but is stacking layers, with potentially multiple surfaces in each layer? If that is the intent, it's not clear from the protocol as-written.
(Additionally, this behaviour is basically the utility archetype in our in-progress Mir WM protocol playground. It's nice to see that there are applications which want this sort of behaviour! [Relatedly, we really should get around to actually dumping the descriptions of what those archetypes mean from the internal document into that PR.])
This is useful and looks sensible. This might even help with work on our immediate roadmap. Mir team ACK.
Presumably a client would want to call this before mapping a newly-created xdg_toplevel, right? So it doesn't appear somewhere and then blink to the cursor?
How should this be specified to interact with the initial `xdg_surface` configure (if at all)? Is it expected that clients which want to create a window and attach it to the drag will call `xdg_toplevel_drag_v1.attach` during the initial buffer-less configure dance? If it's expected to be done before the initial commit, then that should be stated explicitly.
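The ordering being asked about can be sketched explicitly. The step names below are illustrative labels for the requests/events involved, not literal wire messages; the key property is that the attach happens before the first buffer is committed.

```python
# Hypothetical ordering: attach the toplevel to the drag during the
# initial buffer-less configure dance, before the surface is mapped.
INITIAL_DANCE = [
    "xdg_surface.get_toplevel",
    "xdg_toplevel_drag_v1.attach",   # attach before the surface is mapped
    "wl_surface.commit",             # initial commit, no buffer attached
    "xdg_surface.configure",         # compositor replies with initial state
    "xdg_surface.ack_configure",
    "wl_surface.attach + commit",    # first buffer: surface maps at the cursor
]

def attach_before_map(steps):
    """True iff the drag attach happens before the first buffer commit."""
    return (steps.index("xdg_toplevel_drag_v1.attach")
            < steps.index("wl_surface.attach + commit"))
```

With this ordering the window never appears at an arbitrary position and then "blinks" to the cursor, which is the concern raised above.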
Maybe I'm not doing a good job at showing that, but while the API for absolute window positioning looks simpler, it is so much more complicated in practice - for both applications and compositors.
I think this is a key point - while “add global-coördinate preferred window placement” is only a few lines of text to the protocol, the actual effect is much more complicated and complex.
“But X11 allows clients to globally position their windows and it works fine” is (approximately) true, but also X11 clients and window manager have a whole bunch of extra complexity in order to make that work right - the ICCCM and EWMH specs, among other things - so that clients can position their windows in vaguely sensible positions and with vaguely sensible sizes.
I think this is why “support this thing that X11 clients do, but adapted for Wayland” protocols tend to be contentious and unsatisfying - there's a whole bunch of architectural context that X11 clients have that rarely translates, and so the “simple” approach ends up being asymptotically “implement the X11 window management architecture in Wayland”.
So, in Mir we've long had opinions on window management for complex apps, and we've just now managed to get “do something about that” to the top of our queue. There's an experiment happening here.
It's absolutely true that there's no good support for these complex applications in existing Wayland protocols (partially driven, I think, by some toolkit devs finding these sorts of complex applications mildly distasteful, so there's less toolkit pressure to make them work). Throwing a “just trust me” request onto `xdg_shell` is a simple way of getting some sort of support for this, but for complex multi-window apps there's a lot that a good window manager with some semantic knowledge of the window relationships could do. I think it's worth aiming at making it possible to write good window managers.
Mir team ACK. We currently are not planning to prioritise implementing this, but the idea is sensible, which I think is sufficient to ACK.
This seems like a broadly sensible idea to me.
I don't have any strong concerns, but a few small queries (as well as the spelling mistake already identified above).
I'm not sure if this is something we'll want to implement in Mir in the near-term - particularly since it seems likely to suffer the same sort of problems as MPX did wrt client support - but it's the sort of feature that is appealing. I'll bring it up with the rest of the team.
I know why it's this way around, but isn't this exactly the opposite order that a client would want?
If it could be wrangled around the other way then we wouldn't need the `global_name` parameter.
It's not clear to me how compositors could implement the other order (at least, without libwayland changes), but I'll raise this just in case I'm missing something obvious.
Is there a particular reason why this event doesn't send the `wl_seat` directly? As far as I can tell it would work fine technically, and my initial instinct is to send objects rather than handles-to-objects. You'd lose the ability to select the version used to bind to the `wl_seat`, I guess (but could get that back by adding the version-to-bind to `ext_transient_seat_manager_v1.create`).
I don't have a particularly strong opinion here, so feel free to ignore this.
Should there be the ability to set `wl_seat.name` here?
This is exactly opposite, right? The lifetime of the seat is equal to the lifetime of the transient seat handle (ie: the `wl_seat` goes away as soon as this does), as there's no other way(?) for a `wl_seat` to be removed by the client.
Christopher James Halse Rogers (f88888dc) at 04 May 13:24
util/xmlconfig: Allow adding paths to check for driconf from the en...
... and 4484 more commits
This adds each directory from the colon-delimited list in the MESA_ADD_DRICONF_PATH environment variable to the list of directories searched for driconf files, in addition to the default path of $DATADIR/drirc.d.

Signed-off-by: Christopher James Halse Rogers <christopher.halse.rogers@canonical.com>
I think the documentation could be improved, and maybe also the semantics?
First, it's not just for optimisation purposes - clients will get incorrect rendering if they don't submit correct damage (cf. `wl_surface.damage`, where clients still get correct rendering even if they don't submit damage). Maybe the default should be specified to be full-damage, rather than no damage?
It's also not stated anywhere that the compositor must render to at least the union of the (input) `damage_buffer` and the (output) `damage` event. It wasn't clear to me what this was intended to be used for - I guessed (correctly, but still a guess) that it was for partial updates, but my normal understanding of "damage" is "this is the region I changed", so it wasn't clear to me how a screencopy client (which I assume would be using the buffers purely read-only) could generate damage.
Both of those things are sensible; neither is explicit in the protocol.
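The two semantics requested above can be captured in a small model: the region the compositor must render is at least the union of the client's submitted damage and the compositor's own `damage` event, with absent client damage defaulting to the full buffer. Regions are modelled as sets of pixel coordinates purely for illustration; this is not real compositor code.

```python
def region_to_render(input_damage, output_damage, full_buffer):
    """Region the compositor must render into the client's buffer.

    input_damage:  region the client submitted via damage_buffer, or None.
    output_damage: region the compositor reports via the damage event.
    full_buffer:   the whole buffer region.
    """
    if input_damage is None:
        # Proposed default: no submitted damage means full damage, so
        # clients that ignore damage still get correct rendering.
        input_damage = full_buffer
    # Must render at least the union of input and output damage.
    return input_damage | output_damage
```

The default-to-full-damage choice mirrors `wl_surface.damage` semantics, where under-reporting damage is a correctness bug rather than a missed optimisation.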
There could be a theoretical case where a modifier is supported for sampling but not for rendering/blitting.
Ah! So this is expected to piggyback on the set of format/modifiers sent from `zwp_linux_dmabuf`; this event is meant to select the subset of the format/modifier pairs published by the base extension. That wasn't initially clear to me.
Might still be worth publishing the modifiers here, in the case of a multi-GPU setup where different GPUs have different modifiers available?
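The subset relationship can be sketched as a simple intersection: pairs named by this extension's event are only meaningful if the base `zwp_linux_dmabuf` advertisement also contains them. The format and modifier values below are made-up placeholders.

```python
# (format, modifier) pairs the base zwp_linux_dmabuf extension advertised.
# Placeholder values, for illustration only.
DMABUF_PAIRS = {
    ("XR24", "LINEAR"),
    ("XR24", "X_TILED"),
    ("AR24", "LINEAR"),
}

def usable_pairs(subset_from_event):
    """Pairs a client may actually use for this extension.

    A pair the base extension never advertised is meaningless here,
    so the event's subset is intersected with the base advertisement.
    """
    return subset_from_event & DMABUF_PAIRS
```

In a multi-GPU setup, publishing the modifiers directly in this event (rather than relying on the intersection) would let the per-output set differ from the base advertisement, which is the question raised above.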
Just for clarity: is it expected that before the first `ready` event is sent, a `damage` event covering the whole buffer will be sent (because there has not been a previous `ready` event)?
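The expectation being asked about can be modelled directly: the damage reported before a `ready` event covers everything changed since the previous `ready`, and with no previous `ready` that degenerates to the whole buffer. Illustrative model only; regions are sets of pixel coordinates.

```python
def damage_before_ready(changed_since_last_ready, full_buffer, had_previous_ready):
    """Region the damage event should cover ahead of a ready event."""
    if not had_previous_ready:
        # No previous ready: everything is new, so damage the whole buffer.
        return full_buffer
    return changed_since_last_ready
```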