va: Scrambled image due to not passing DRM modifiers downstream
Hi, I recently got my hands on an Intel based CPU (with an iGPU, using the iHD driver) after using mainly AMD for years. Unfortunately, when trying to play video with HW acceleration provided by vah264dec
(not the old VAAPI plugin!), all I see is this:
So I decided to believe in myself and looked around for the cause. Using GST_DEBUG=*vamem*:7
I found:
0:00:02.072273440 41903 0x1aa9300 LOG vamemory gstvaallocator.c:620:gst_va_dmabuf_allocator_setup_buffer_full:<vadmabufallocator0> buffer 0x7f7f0416d5a0: new dmabuf 23 / surface 0x4000000 [1920x1088] size 3110400 drm mod 0x100000000000002
A quick Google search shows that DRM modifier 0x100000000000002 is the I915_FORMAT_MOD_Y_TILED
format.
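For reference, a DRM modifier packs the vendor ID in the top byte and a vendor-specific code in the remaining 56 bits, so the value from the log above can be checked by hand. A minimal standalone sketch (my own, not part of GStreamer), using libdrm's drm_fourcc.h:

/* sketch: decode the modifier reported in the vamemory log by hand;
 * build with the cflags from `pkg-config --cflags libdrm` */
#include <stdio.h>
#include <inttypes.h>
#include <drm_fourcc.h>

int
main (void)
{
  uint64_t mod = UINT64_C (0x100000000000002);

  /* top byte = vendor, low 56 bits = vendor-specific code */
  printf ("vendor 0x%02" PRIx64 ", code %" PRIu64 "\n",
      mod >> 56, mod & UINT64_C (0xffffffffffffff));
  printf ("I915_FORMAT_MOD_Y_TILED? %s\n",
      mod == I915_FORMAT_MOD_Y_TILED ? "yes" : "no");
  printf ("DRM_FORMAT_MOD_LINEAR?   %s\n",
      mod == DRM_FORMAT_MOD_LINEAR ? "yes" : "no");
  return 0;
}

This prints vendor 0x01 (Intel) and code 2, i.e. I915_FORMAT_MOD_Y_TILED rather than DRM_FORMAT_MOD_LINEAR.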
So I went to check the DMABuf import code, only to see that it blindly assumes the LINEAR modifier. For testing purposes I made a quick and dirty change from the LINEAR modifier to the one that is actually used:
diff --git a/subprojects/gst-plugins-base/gst-libs/gst/gl/egl/gsteglimage.c b/subprojects/gst-plugins-base/gst-libs/gst/gl/egl/gsteglimage.c
index 906f56e381..cf1a78e3c4 100644
--- a/subprojects/gst-plugins-base/gst-libs/gst/gl/egl/gsteglimage.c
+++ b/subprojects/gst-plugins-base/gst-libs/gst/gl/egl/gsteglimage.c
@@ -906,9 +906,9 @@ gst_egl_image_from_dmabuf_direct_target (GstGLContext * context,
attribs[atti++] = in_info->stride[0];
if (with_modifiers) {
attribs[atti++] = EGL_DMA_BUF_PLANE0_MODIFIER_LO_EXT;
- attribs[atti++] = DRM_FORMAT_MOD_LINEAR & 0xffffffff;
+ attribs[atti++] = I915_FORMAT_MOD_Y_TILED & 0xffffffff;
attribs[atti++] = EGL_DMA_BUF_PLANE0_MODIFIER_HI_EXT;
- attribs[atti++] = (DRM_FORMAT_MOD_LINEAR >> 32) & 0xffffffff;
+ attribs[atti++] = (I915_FORMAT_MOD_Y_TILED >> 32) & 0xffffffff;
}
}
@@ -922,9 +922,9 @@ gst_egl_image_from_dmabuf_direct_target (GstGLContext * context,
attribs[atti++] = in_info->stride[1];
if (with_modifiers) {
attribs[atti++] = EGL_DMA_BUF_PLANE1_MODIFIER_LO_EXT;
- attribs[atti++] = DRM_FORMAT_MOD_LINEAR & 0xffffffff;
+ attribs[atti++] = I915_FORMAT_MOD_Y_TILED & 0xffffffff;
attribs[atti++] = EGL_DMA_BUF_PLANE1_MODIFIER_HI_EXT;
- attribs[atti++] = (DRM_FORMAT_MOD_LINEAR >> 32) & 0xffffffff;
+ attribs[atti++] = (I915_FORMAT_MOD_Y_TILED >> 32) & 0xffffffff;
}
}
@@ -938,9 +938,9 @@ gst_egl_image_from_dmabuf_direct_target (GstGLContext * context,
attribs[atti++] = in_info->stride[2];
if (with_modifiers) {
attribs[atti++] = EGL_DMA_BUF_PLANE2_MODIFIER_LO_EXT;
- attribs[atti++] = DRM_FORMAT_MOD_LINEAR & 0xffffffff;
+ attribs[atti++] = I915_FORMAT_MOD_Y_TILED & 0xffffffff;
attribs[atti++] = EGL_DMA_BUF_PLANE2_MODIFIER_HI_EXT;
- attribs[atti++] = (DRM_FORMAT_MOD_LINEAR >> 32) & 0xffffffff;
+ attribs[atti++] = (I915_FORMAT_MOD_Y_TILED >> 32) & 0xffffffff;
}
}
I recompiled and then tried to play the video again. Here is the result:
To be honest, until passing modifiers downstream is implemented, it would be much better if gst_egl_image_from_dmabuf_direct_target
did NOT describe modifiers at all. Then auto-detection would be used, with a higher chance of being right than blindly assuming LINEAR (I tried it and it also produces a correct picture). If you agree, I can submit an MR against the 1.20 branch, where (I assume) modifier support will not be added, so using auto/default would probably be better (and fixes my issue).
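To make it concrete, dropping the modifiers would mean an import attribute list like the following. This is only a rough standalone sketch of a single-plane import (assuming EGL 1.5 plus EGL_EXT_image_dma_buf_import), not the actual gsteglimage.c code, and all the parameter names are placeholders:

/* sketch: import a dmabuf plane WITHOUT claiming any modifier, instead of
 * hardcoding DRM_FORMAT_MOD_LINEAR as the current code does */
#include <EGL/egl.h>
#include <EGL/eglext.h>

static EGLImage
import_dmabuf_no_modifier (EGLDisplay dpy, int dmabuf_fd, int fourcc,
    int width, int height, int offset, int stride)
{
  EGLAttrib attribs[] = {
    EGL_WIDTH, width,
    EGL_HEIGHT, height,
    EGL_LINUX_DRM_FOURCC_EXT, fourcc,
    EGL_DMA_BUF_PLANE0_FD_EXT, dmabuf_fd,
    EGL_DMA_BUF_PLANE0_OFFSET_EXT, offset,
    EGL_DMA_BUF_PLANE0_PITCH_EXT, stride,
    /* no EGL_DMA_BUF_PLANE0_MODIFIER_LO/HI_EXT entries: the layout is
     * left to the driver instead of being declared LINEAR */
    EGL_NONE
  };

  return eglCreateImage (dpy, EGL_NO_CONTEXT, EGL_LINUX_DMA_BUF_EXT,
      NULL, attribs);
}

With no EGL_DMA_BUF_PLANEn_MODIFIER_*_EXT attributes present, the layout is left to the driver, which is exactly the auto-detection behaviour that produced a correct picture in my test.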