WIP: GLX Delay (accelerated GLX for Xwayland with the NVIDIA driver)

Adam Jackson requested to merge ajax/mesa:glx-delay into master

(below text from docs/delay.rst)

GLX Delay

"Delay" is a hack to enable direct GLX contexts under Xwayland when using the NVIDIA binary driver. It works by creating an EGL context on the client side, running GL rendering through that, and translating GLX commands to either EGL or X protocol as necessary. The library that performs this translation is a GLVND vendor library, which Xwayland configures as the vendor responsible for the screen.

That sounds unpleasant.

It sure is!

How does it work?

Xwayland's glamor code is extended to also enable DRI3 when using the EGLStreams (i.e. NVIDIA) backend. We use DRI3 for this because it's an already-existing path for associating a file descriptor with an X resource. We leverage EGL_EXT_device_drm to pass the file descriptor associated with the device at connection setup, and EGL_KHR_stream_cross_process_fd to associate surfaces with file descriptors.
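
Concretely, the server-side half of that fd hand-off can be sketched like this (a minimal illustration, not the actual Xwayland code): enumerate the EGL devices, ask EGL_EXT_device_drm for the DRM node backing the device, and open() it to get a file descriptor of the kind DRI3Open would return.

    /* Sketch only: find a DRM fd for an EGL device via EGL_EXT_device_drm. */
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <fcntl.h>

    static int
    drm_fd_for_egl_device(void)
    {
        PFNEGLQUERYDEVICESEXTPROC queryDevices =
            (void *) eglGetProcAddress("eglQueryDevicesEXT");
        PFNEGLQUERYDEVICESTRINGEXTPROC queryDeviceString =
            (void *) eglGetProcAddress("eglQueryDeviceStringEXT");
        EGLDeviceEXT devices[8];
        EGLint num_devices = 0;

        if (!queryDevices || !queryDeviceString ||
            !queryDevices(8, devices, &num_devices))
            return -1;

        for (EGLint i = 0; i < num_devices; i++) {
            /* EGL_EXT_device_drm exposes the DRM node backing the device */
            const char *node =
                queryDeviceString(devices[i], EGL_DRM_DEVICE_FILE_EXT);
            if (node)
                return open(node, O_RDWR | O_CLOEXEC);
        }
        return -1;
    }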

Having done this, Xwayland sets the GLX vendor name for the screen to 'delay'. libglvnd uses this via the GLX_EXT_libglvnd extension to select a new GLX provider, namely libGLX_delay.so.0.
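
For reference, the vendor-selection step amounts to roughly the following (an illustrative sketch of what libglvnd does with GLX_EXT_libglvnd, not its actual code; the fallback vendor name is just a placeholder):

    /* Sketch: pick the per-screen GLX vendor library the GLX_EXT_libglvnd way. */
    #include <GL/glx.h>
    #include <GL/glxext.h>
    #include <dlfcn.h>
    #include <stdio.h>

    static void *
    load_glx_vendor_for_screen(Display *dpy, int screen)
    {
        const char *vendor =
            glXQueryServerString(dpy, screen, GLX_VENDOR_NAMES_EXT);
        char soname[128];

        if (!vendor)
            vendor = "mesa";   /* placeholder fallback for this sketch */

        /* Xwayland with the Delay patches reports "delay" here, so this
         * resolves to libGLX_delay.so.0. */
        snprintf(soname, sizeof(soname), "libGLX_%s.so.0", vendor);
        return dlopen(soname, RTLD_LAZY | RTLD_LOCAL);
    }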

The Delay library is a variant build of the Mesa libGLX code with only a single direct-rendering backend, so we can assume once it's loaded that the server on the other end is Delay-capable. Mesa's GLX code has separate phases for display and screen setup, which are based on the X terminology. "Display" setup for Delay involves dlopen()-ing libEGL, querying the library for some mandatory extensions, verifying the server's DRI3 support, and filling in a table of function pointers. In "screen" setup we acquire an fd for the device from DRI3, create an EGLDisplay from that device, and build the list of visuals, fbconfigs, and GLX extensions from the available EGL features of the display.
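
A sketch of the screen-setup step, assuming the DRI3 fd is matched to an EGLDeviceEXT by comparing device nodes (the real code may match differently, and error handling is pared down):

    /* Sketch: turn a DRI3-provided fd into an EGLDisplay on the same device. */
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <sys/stat.h>

    static EGLDisplay
    display_for_dri3_fd(int dri3_fd)
    {
        PFNEGLQUERYDEVICESEXTPROC queryDevices =
            (void *) eglGetProcAddress("eglQueryDevicesEXT");
        PFNEGLQUERYDEVICESTRINGEXTPROC queryDeviceString =
            (void *) eglGetProcAddress("eglQueryDeviceStringEXT");
        PFNEGLGETPLATFORMDISPLAYEXTPROC getPlatformDisplay =
            (void *) eglGetProcAddress("eglGetPlatformDisplayEXT");
        EGLDeviceEXT devices[8];
        EGLint num_devices = 0;
        struct stat dri3_st;

        if (!queryDevices || !queryDeviceString || !getPlatformDisplay ||
            fstat(dri3_fd, &dri3_st) != 0 ||
            !queryDevices(8, devices, &num_devices))
            return EGL_NO_DISPLAY;

        for (EGLint i = 0; i < num_devices; i++) {
            const char *node =
                queryDeviceString(devices[i], EGL_DRM_DEVICE_FILE_EXT);
            struct stat st;

            /* Naive node comparison; real code would also have to cope with
             * primary vs. render nodes. */
            if (node && stat(node, &st) == 0 && st.st_rdev == dri3_st.st_rdev)
                return getPlatformDisplay(EGL_PLATFORM_DEVICE_EXT,
                                          devices[i], NULL);
        }
        return EGL_NO_DISPLAY;
    }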

A Delay GLX context is backed by an EGL context, created on the screen's EGLDisplay. Context attributes (robustness, forward-compatibility, etc.) are translated from GLX to EGL as appropriate. The EGL_KHR_no_config_context extension is required, mostly for the convenience of the implementor (me).
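
The attribute translation amounts to something like the following sketch; only a couple of attributes are shown, and the helper name is made up:

    /* Sketch: translate GLX context attributes to EGL and create a
     * no-config context. */
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GL/glx.h>
    #include <GL/glxext.h>

    static EGLContext
    create_delay_context(EGLDisplay dpy, EGLContext share, const int *glx_attribs)
    {
        EGLint egl_attribs[16];
        int n = 0;

        for (int i = 0; glx_attribs && glx_attribs[i] != None; i += 2) {
            switch (glx_attribs[i]) {
            case GLX_CONTEXT_MAJOR_VERSION_ARB:
                egl_attribs[n++] = EGL_CONTEXT_MAJOR_VERSION;
                egl_attribs[n++] = glx_attribs[i + 1];
                break;
            case GLX_CONTEXT_MINOR_VERSION_ARB:
                egl_attribs[n++] = EGL_CONTEXT_MINOR_VERSION;
                egl_attribs[n++] = glx_attribs[i + 1];
                break;
            case GLX_CONTEXT_FLAGS_ARB:
                if (glx_attribs[i + 1] & GLX_CONTEXT_DEBUG_BIT_ARB) {
                    egl_attribs[n++] = EGL_CONTEXT_OPENGL_DEBUG;
                    egl_attribs[n++] = EGL_TRUE;
                }
                break;
            /* robustness, profile mask, etc. would be handled similarly */
            }
        }
        egl_attribs[n] = EGL_NONE;

        eglBindAPI(EGL_OPENGL_API);
        /* EGL_KHR_no_config_context lets us skip choosing an EGLConfig here */
        return eglCreateContext(dpy, EGL_NO_CONFIG_KHR, share, egl_attribs);
    }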

GLX drawables are backed by EGL objects, specifically by an EGLStream whose EGLConfig is compatible with the GLXFBConfig associated with the drawable. We use EGL_KHR_stream_producer_eglsurface on the client side and EGL_KHR_stream_consumer_gltexture on the server side. glXSwapBuffers thus corresponds to a simple eglSwapBuffers on the EGLStream, which is then handed off to the X server to complete. Xwayland consumes this stream as a texture and blits it into the EGLStream associated with the Wayland surface for the window.
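
On the client side, drawable setup then looks roughly like this sketch (names and structure are illustrative; how the fd actually reaches the server is covered under "What's ugly?" below):

    /* Sketch: back a GLX drawable with an EGLStream producer surface and
     * extract the cross-process fd for the server-side consumer. */
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    struct delay_drawable {
        EGLStreamKHR stream;
        EGLSurface surface;   /* producer end; what the GL context draws to */
        int stream_fd;        /* sent to the server, which becomes the consumer */
    };

    static int
    setup_drawable(EGLDisplay dpy, EGLConfig config, int width, int height,
                   struct delay_drawable *draw)
    {
        PFNEGLCREATESTREAMKHRPROC createStream =
            (void *) eglGetProcAddress("eglCreateStreamKHR");
        PFNEGLCREATESTREAMPRODUCERSURFACEKHRPROC createProducerSurface =
            (void *) eglGetProcAddress("eglCreateStreamProducerSurfaceKHR");
        PFNEGLGETSTREAMFILEDESCRIPTORKHRPROC getStreamFd =
            (void *) eglGetProcAddress("eglGetStreamFileDescriptorKHR");
        const EGLint surface_attribs[] = {
            EGL_WIDTH, width,
            EGL_HEIGHT, height,
            EGL_NONE,
        };

        draw->stream = createStream(dpy, NULL);
        draw->surface = createProducerSurface(dpy, config, draw->stream,
                                              surface_attribs);
        /* EGL_KHR_stream_cross_process_fd: this fd is what crosses the wire */
        draw->stream_fd = getStreamFd(dpy, draw->stream);

        return draw->surface != EGL_NO_SURFACE && draw->stream_fd >= 0;
    }

With a drawable set up this way, glXSwapBuffers reduces to eglSwapBuffers on the producer surface plus the present request described later.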

Note that this means a GLXWindow corresponds to two EGLStreams: one from the X client to Xwayland, and one from Xwayland to the Wayland server. Any excessive blitting generated by this indirection is your EGL vendor's bug, because this is the API we have to work with.

glXWaitX is done by posting an actual GLXWaitX request to Xwayland. Xwayland handles this by flushing its (glamor's) rendering for the window. The client follows this by requesting a new fd for the surface so it can reset its view of the buffer from the X rendering that just completed.

glXWaitGL is simply eglWaitClient on the client side to complete GL rendering, followed by a GLXWaitGL request to the server to indicate that the wait has completed. Xwayland interprets this as a cue to consume the EGLStream's frame and queue it for display to the Wayland server.
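
Put together, the two waits are meant to behave like this sketch (the xcb-glx request names are real; the context-tag plumbing and the fd re-fetch after WaitX are elided):

    /* Sketch: the two GLX wait primitives as described above. */
    #include <EGL/egl.h>
    #include <xcb/glx.h>

    static void
    delay_wait_gl(xcb_connection_t *conn, uint32_t context_tag)
    {
        /* Finish our own GL rendering first... */
        eglWaitClient();
        /* ...then tell Xwayland, which consumes the stream frame and queues
         * it for display. */
        xcb_glx_wait_gl(conn, context_tag);
        xcb_flush(conn);
    }

    static void
    delay_wait_x(xcb_connection_t *conn, uint32_t context_tag)
    {
        /* Xwayland flushes glamor's rendering for the window when it sees
         * this; afterwards the client re-acquires an fd for the surface to
         * pick up the new bits. */
        xcb_glx_wait_x(conn, context_tag);
        xcb_flush(conn);
    }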

How do I build it?

Enable glvnd and delay in meson:

% meson configure build -Dglvnd=true -Ddelay=true

Install carefully; you may not want to overwrite your system libGL with this. There are still a few "make it build" hacks in the code, and I haven't verified that the resulting traditional Mesa libraries work.

What other components do I need?

You need a version of Xwayland that implements the server side of all this. You can find a branch here:

https://gitlab.freedesktop.org/ajax/xserver/-/tree/glx-delay

You also need a version of libglvnd that can handle the GLX implementation being backed with EGL, because libglvnd is the thing that manages current context state and Delay is doing a two-context shell game. You can find the proposed merge request here:

glvnd/libglvnd!234 (closed)

Also, obviously, you need NVIDIA's binary driver. This was developed against NVIDIA driver 440.100 and a GTX 1650. YMMV.

What's good?

glxgears works, glxinfo works, what else is there?

More seriously, this is currently very much a work in progress. GLX demo-style or fullscreen apps may work okay. Due to the design of this approach, the actual GL rendering should be about as fast as it is against Xorg, or against EGL on bare metal, so in principle this can eventually be just as performant as the Xorg path.

What's bad?

Quite a bit, though most of this is just work that hasn't been done yet.

  • Resizing the window is unimplemented.
  • GLXPixmaps and GLXPbuffers are unimplemented.
  • WaitX and WaitGL are unimplemented.
  • None of the SwapBuffers extras (like buffer age, or vsync) are hooked up.
  • We're definitely leaking resources along error and normal-cleanup paths.
  • We're blitting way more than I'd like to get the actual bits on the screen.
  • Probably more. Let me know what you find.

There are also some things that simply aren't present in EGL and so can't be reflected into GLX in this model. If you need accumulation buffers, I'm very sorry.

There are also some things that, even if they existed in EGL, have no corresponding Wayland protocol. Stereoscopic rendering is probably the most obvious example.

What's ugly?

I'd have liked not to need to modify libglvnd, and my first cut at this tried to use dlmopen to load the libEGL half of the solution into its own link map. That wouldn't have been portable beyond glibc and Solaris even if it worked, but also, it doesn't work with glibc due to dynamic linker bugs. Interested parties can read more here:

https://sourceware.org/bugzilla/show_bug.cgi?id=15271

The convention we're using to associate the stream with the GLX drawable is icky. You can't do DRI3FdFromPixmap on things that aren't actually pixmaps, which includes GLXDrawables of all types. So the client does GLXCreateWindow, then CreatePixmap, then DRI3FdFromPixmap on that pixmap. The client handles the lifetime of this "shadow" pixmap, and internally converts SwapBuffers into an eglSwapBuffers (to post the drawing to the stream) followed by a PresentPixmap request to the server. It's somewhat gross to need a shadow pixmap for this; it might be better to do this as a new GLX extension, assuming you're willing to teach the GLX protocol code about fd passing. Alternatively, it might be nice to teach DRI3 to handle things besides just pixmaps, perhaps going as far as naming specific framebuffer attachments for a GLXDrawable.
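
To make the request sequence concrete, here is a sketch of the shadow-pixmap dance in raw xcb (purely illustrative; the real client code handles errors, depth selection, and pixmap lifetime):

    /* Sketch: the shadow-pixmap convention, as a sequence of X requests. */
    #include <xcb/xcb.h>
    #include <xcb/glx.h>
    #include <xcb/dri3.h>
    #include <stdlib.h>

    static int
    shadow_pixmap_fd(xcb_connection_t *conn, xcb_window_t window,
                     uint32_t screen, uint32_t fbconfig,
                     uint16_t width, uint16_t height, uint8_t depth)
    {
        xcb_glx_window_t glxwin = xcb_generate_id(conn);
        xcb_pixmap_t shadow = xcb_generate_id(conn);
        xcb_dri3_buffer_from_pixmap_reply_t *reply;
        int fd = -1;

        /* 1: the GLXWindow the app asked for */
        xcb_glx_create_window(conn, screen, fbconfig, window, glxwin, 0, NULL);
        /* 2: a plain pixmap, created only so DRI3 has something to chew on */
        xcb_create_pixmap(conn, depth, shadow, window, width, height);
        /* 3: DRI3FdFromPixmap (BufferFromPixmap in xcb) gives us the fd */
        reply = xcb_dri3_buffer_from_pixmap_reply(conn,
                    xcb_dri3_buffer_from_pixmap(conn, shadow), NULL);
        if (reply) {
            fd = xcb_dri3_buffer_from_pixmap_reply_fds(conn, reply)[0];
            free(reply);
        }
        /* SwapBuffers later becomes eglSwapBuffers on the stream plus a
         * PresentPixmap naming this shadow pixmap. */
        return fd;
    }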

The way we're using streams assumes, fairly aggressively, that you're using GLX as a write-mostly channel. That's usually correct, but there's nothing really preventing an app from making the drawable current and using glReadPixels to screen-scrape the GL drawing from a sibling context. It's difficult to see how to reconcile that with the streams producer/consumer model.

Why is this in Mesa?

Mesa's GLX code already implements most of what you need for this:

  • GLX protocol marshalling
  • A vtable for handling direct contexts specially
  • A lot of boring glue code to interface with glvnd

This code did start out in a separate project, but I got maybe 100 lines in before I got tired of copying code out of Mesa.

Is this a good idea?

I think so? I want the xfree86 code out of my life, and this approach seems like it'll eliminate a large class of reasons why you might need to use Xorg and NVIDIA's driver. Certainly it's better than what you currently get for GLX clients in that scenario, which is llvmpipe.

On the other hand, I can see the argument that this entrenches the position of NVIDIA's libEGL, since we've only made it more usable. But I think, on balance, that this reduces the binary driver footprint, and I think that's a good direction to go.

What's with the name?

Another thing with the acronym "GLX" is the Green Line Extension project, which, when completed, will extend the Boston MBTA's Green Line from Lechmere in East Cambridge through Somerville and out to Tufts University in Medford. This happens to be near my part of town, and the project has been repeatedly delayed. In 2006 the court-mandated completion date was 2014; it's now 2020, and while construction has happened, no new service is in operation yet.

Getting accelerated direct GLX working under Xwayland with NVIDIA has also been repeatedly delayed. One of these GLXes I can fix, but neither delay do I like.

Adam Jackson (ajax@nwnk.net / ajax@redhat.com), August 2020.
