# mesa issues
https://gitlab.freedesktop.org/mesa/mesa/-/issues (migrated from Bugzilla; reporter of record: Bugzilla Migration User)

# [gallium/draw: segment fault when running polystip test of ogl conformance, patch is attached](https://gitlab.freedesktop.org/mesa/mesa/-/issues/960) (2019-09-18)
## Submitted by Fei Jiang
Assigned to **mes..@..op.org**
**[Link to original bug (#23401)](https://bugs.freedesktop.org/show_bug.cgi?id=23401)**
## Description
Created attachment 28771
patch_for_polygon_stipple, removing the pipe->flush() call in pstip_update_texture
I encountered a segmentation fault when running the polystip test of the OGL conformance suite.
In pstip_update_texture, I think pipe->flush() should not be called: pstip_update_texture is called from st_validate_state during state validation, while the scene has not yet been created.
**Patch 28771**, "patch_for_polygon_stipple, removing the pipe->flush() call in pstip_update_texture":
[patch_for_polygon_stipple](/uploads/1afe591e6818d66a86a356930486a448/patch_for_polygon_stipple)

# [Feature request: Configurable pixel origin](https://gitlab.freedesktop.org/mesa/mesa/-/issues/959) (2019-09-18)
## Submitted by Stefan Dösinger
Assigned to **mes..@..op.org**
**[Link to original bug (#21867)](https://bugs.freedesktop.org/show_bug.cgi?id=21867)**
## Description
In OpenGL and Direct3D 10 a pixel coordinate specifies the center of a pixel on the screen. In Direct3D9 and earlier however, the origin is the top left corner of a pixel.
This causes problems for Wine, because we have to move the geometry by half a pixel to correct for this difference. This is relatively painless with fixed-function vertex processing, but with vertex shaders we have to insert extra instructions and occupy a shader constant / uniform to load these private constants. This causes problems with apps that use all 256 constants on dx9 cards and/or hit the instruction limit.
I have been told that most 3D hardware has a switch to toggle between D3D and OpenGL mode. An OpenGL extension that allows Wine to switch between these modes would be very helpful. This switch could work as a glEnable / glDisable flag, or, if the hardware needs it, a pixel format flag or a flag at context creation would work too.
Version: git

# [Executing display lists containing smooth multi-color shading with translucency causes assertion failure if color is vertex delivered before glBegin block](https://gitlab.freedesktop.org/mesa/mesa/-/issues/958) (2019-09-18)
## Submitted by Michael Saunders
Assigned to **mes..@..op.org**
**[Link to original bug (#18599)](https://bugs.freedesktop.org/show_bug.cgi?id=18599)**
## Description
Created attachment 20420
Pseudo code from our OpenGL log showing code that fails and code that doesn't fail.
Overview:
---------
Our product uses OpenGL but buffers glBegin statements until we are sure that a normal/vertex will be delivered. We do this to prevent creating empty begin/end blocks: some of our polygons are collapsed, but this isn't known at the time the block is first opened. We cache the glBegin request, and when the first normal/vertex is to be delivered we open the block and deliver that first normal/vertex. This introduced an interesting problem with multi-color translucent smooth shading: because we buffer the glBegin, the color we set on the first vertex occurs outside the block instead of inside. In other words, we effectively send instructions as follows:
```
glBegin
glColor4fv
glNormal3d
glVertex3d
glColor4fv
glNormal3d
glVertex3d
...
glEnd
```
but because of our buffering of glBegin we actually deliver the following to OpenGL:
```
glColor4fv
glBegin
glNormal3d
glVertex3d
glColor4fv
glNormal3d
glVertex3d
...
glEnd
```
Note the swapped glBegin and glColor4fv at the beginning of the delivered instructions. This shouldn't matter, because OpenGL is a state machine, and indeed it produces the correct output if display lists are not used. However, if we build display lists and then execute them, Mesa asserts on line 391 of tnl/t_vertex.c: assert(a[j].inputstride == vptr->stride); I'm not sure whether the assertion is correct or not. Unfortunately (or fortunately, depending on how you look at it), the default compilation for Mesa's release build has assertions enabled, so GLX servers using Mesa instead of native OpenGL drivers will assert with our product unless we supply our own build of the Mesa libraries with assertions disabled. We can work around the problem by making sure color gets delivered before beginning the block, but I thought I should point out the problem if it is indeed a problem.
Steps to reproduce:
-------------------
Attached is a file that contains a log of "pseudo" code (from a dump of our OpenGL instruction log) that is the smallest example I could build. The example had to have one four-node polygon with all colors having the same value and two other four-node polygons having gradient colors. The direction of the gradient seemed to matter as well (the gradient in my case varies laterally across the cell width instead of longitudinally across the shared cell edge). The first half of the attachment is our application's OpenGL log from the pseudo code that asserts, and the second half is the output from the pseudo code that does not assert. The only difference between the two sets is the order of the glColor4fv and the opening glBegin.
Sorry that the code probably doesn't compile; it's just an approximation of the code we send to OpenGL. Look for the first glColor4fv call. Note that in the first "BAD" block the glColor4fv call comes before the glBegin(GL_POLYGON), whereas in the second example it comes after. The first case causes Mesa to assert and the second does not.
Actual Results:
---------------
Assertion failure, tnl/t_vertex.c:391, assert(a[j].inputstride == vptr->stride);
Expected Results:
-----------------
Since OpenGL is a state engine the delivery of the glColor4fv before the glBegin shouldn't have made any difference and indeed when display lists are not used it doesn't seem to make any visual difference. Also when display lists are used but assertions disabled it doesn't seem to make any visual difference.
Build Date & Platform:
----------------------
The bug report version label above did not give me an option to enter 7.2 so I put CVS as the version number. This occurs at least on Linux 64 and 32 bit platforms running Mesa 7.2 and Mesa 5.0.
**Attachment 20420**, "Pseudo code from our OpenGL log showing code that fails and code that doesn't fail.":
[OpenGLDebug.log](/uploads/7d1d81b1272e5efda3f2e8d1020d9feb/OpenGLDebug.log)
Version: git

# [Invocation of the interface glGetString between glBegin and glEnd does not generate GL_INVALID_OPERATION error](https://gitlab.freedesktop.org/mesa/mesa/-/issues/957) (2019-09-18)
## Submitted by Artak Petrosyan
Assigned to **mes..@..op.org**
**[Link to original bug (#17407)](https://bugs.freedesktop.org/show_bug.cgi?id=17407)**
## Description
The standard states:
"GL_INVALID_OPERATION is generated if glGetString is executed between the execution of glBegin and the corresponding execution of glEnd.", but invocation of the interface glGetString(GL_VERSION) between glBegin and glEnd does not generate GL_INVALID_OPERATION error.
Version: 6.4

# [Incorrect line clipping with very large coordinates](https://gitlab.freedesktop.org/mesa/mesa/-/issues/956) (2019-09-18)
## Submitted by Brian Paul
Assigned to **mes..@..op.org**
**[Link to original bug (#8701)](https://bugs.freedesktop.org/show_bug.cgi?id=8701)**
## Description
Mesa's line clipping code fails when a vertex coordinate is of very large magnitude.
Version: 6.5

# [GLX_USE_TLS breaks -fPIC build](https://gitlab.freedesktop.org/mesa/mesa/-/issues/955) (2019-09-18)
## Submitted by Lukáš Turek
Assigned to **mes..@..op.org**
**[Link to original bug (#7459)](https://bugs.freedesktop.org/show_bug.cgi?id=7459)**
## Description
When Mesa is compiled with -DGLX_USE_TLS and -fPIC, libGL.so.1.2 is not a valid PIC library. Prelink says:

```
prelink: /usr/bin/glxgears: Cannot prelink against non-PIC shared library /usr/lib/opengl/xorg-x11/lib/libGL.so.1
```
The bug is present in 6.5 and current CVS (7.7.2006), not in 6.4.2.
See this report in Gentoo Bugzilla:
http://bugs.gentoo.org/show_bug.cgi?id=136115
I'm using x86 with GCC 3.4.6, but others report the same problem with GCC 4.1.1.
Version: 6.5

# [Wrong normals with GL_AUTO_NORMAL](https://gitlab.freedesktop.org/mesa/mesa/-/issues/954) (2019-09-18)
## Submitted by Thomas Zimmermann `@tzimmermann`
Assigned to **mes..@..op.org**
**[Link to original bug (#6098)](https://bugs.freedesktop.org/show_bug.cgi?id=6098)**
## Description
Hi,
I use 2D evaluators to draw a grid. The grid points are arranged so that the parameter u walks along +x and v walks along -z. Calling glEvalMesh() renders a grid with front faces pointing upwards. (I tested this by explicitly culling back faces.)
The grid patch's normals were generated with GL_AUTO_NORMAL. I would guess the normals should also point upwards, but the lighting in the scene behaves as if the normals were pointing downwards.
Calling glFrontFace(GL_CW) reverses the normals as expected, but they still point in the wrong direction.
The system is Linux 2.6.13, Mesa 6.4.2 SW-Renderer on Xorg 6.7
Regards,
Thomas Zimmermann
tdz@users.sourceforge.net
Version: 6.4

# [indirect rendering of glDrawArrays() to an NVidia machine is broke.](https://gitlab.freedesktop.org/mesa/mesa/-/issues/953) (2019-09-18)
## Submitted by tom
Assigned to **mes..@..op.org**
**[Link to original bug (#5002)](https://bugs.freedesktop.org/show_bug.cgi?id=5002)**
## Description
Running an app that uses glDrawArrays() from a machine without graphics to a machine with an NVidia card running NVidia's driver produces a blank window.
If I set LIBGL_NO_DRAWARRAYS=1 before running the app, everything is drawn OK.
Given that the NVidia driver exports EXT_vertex_arrays and supports VBOs, I guess DrawArrays_old is sending the old opcode, and the NVidia driver isn't supporting it and is instead expecting the new opcode (if I understand all that correctly).

# [g++: error: unrecognized command line option ‘-std=c++14’](https://gitlab.freedesktop.org/mesa/mesa/-/issues/948) (2019-09-18)
## Submitted by Vinson Lee
Assigned to **mes..@..op.org**
**[Link to original bug (#111458)](https://bugs.freedesktop.org/show_bug.cgi?id=111458)**
## Description
Build error with GCC 4.8 and older LLVM.
```
Compiling src/gallium/auxiliary/gallivm/lp_bld_debug.cpp ...
g++: error: unrecognized command line option ‘-std=c++14’
```

```
commit 1abe87383e1529a14498d70a0cf445728b9c338d
Author: Kai Wasserbäch <kai@dev.carbon-project.org>
Date:   Sat Aug 17 10:59:43 2019 +0200

    build: Bump C++ standard requirement to C++14 to fix FTBFS with LLVM 10

    When building Mesa against a recent LLVM 10 with C++11, the build fails
    if the AMD common code is built as well due to "std::index_sequence"
    being undeclared.

    LLVM requires a minimum of C++14.

    Signed-off-by: Kai Wasserbäch <kai@dev.carbon-project.org>
    Acked-by: Eric Engestrom <eric@engestrom.ch>
```
Version: 19.2

# [Vulkan overlay layer - async compute not supported, making overlay disappear in Doom](https://gitlab.freedesktop.org/mesa/mesa/-/issues/946) (2020-02-07)
## Submitted by tem..@..il.com
Assigned to **mes..@..op.org**
**[Link to original bug (#111401)](https://bugs.freedesktop.org/show_bug.cgi?id=111401)**
## Description
The Vulkan overlay layer doesn't work with async compute on GCN/RDNA, and probably not on Nvidia Turing GPUs either; it simply disappears.
To reproduce, start Doom on any Radeon (the driver shouldn't matter) in Steam Play and set it to Vulkan with the Ultra preset and 8x TSSAA: once in the actual game, the Mesa overlay will simply disappear. When changing anti-aliasing to FXAA, async compute is turned off and the Mesa overlay becomes visible again.
On Windows, overlays like Steam's (the overlay of the Linux version for some reason shares the traits of the Mesa overlay) or RTSS disable presenting frames from a compute queue. This makes them work, but degrades performance substantially.
The windows open source tool OCAT supports an overlay for Vulkan that is compatible with async compute and doesn't degrade performance:
https://ocat.readthedocs.io/en/latest/index.html
Tested with mesa-git some weeks ago.
Version: git

# [Imported GBM BO released with DESTROY_DUMB](https://gitlab.freedesktop.org/mesa/mesa/-/issues/945) (2019-09-19)
## Submitted by Pekka Paalanen `@pq`
Assigned to **mes..@..op.org**
**[Link to original bug (#111316)](https://bugs.freedesktop.org/show_bug.cgi?id=111316)**
## Description
Because grepping for GEM_CLOSE in Mesa GBM did not yield the results I would have expected, I wrote a small test program: https://gitlab.freedesktop.org/pq/gbm-test/blob/master/gbm-test.c
I was hypothesizing that a display-only kernel driver (with no driver at all in Mesa) doing dmabuf imports from GBM might be leaking GEM handles in Mesa. The program shows that it is not leaking, but there is another issue: the ioctl to close the handle does not seem right.
The program uses two DRM devices: one device to allocate a GBM BO with gbm_bo_create_with_modifiers() and export the buffer as dmabuf, and another display-only device to import the dmabuf and get the GEM handle. (This is a similar pattern to what a display server supporting display-only secondary DRM devices would do for zero-copy, except it would use a gbm_surface with EGL instead of gbm_bo_create.)
Doing an 'strace -e ioctl' of the test program, one allocate-export-import cycle looks like this:
```
ioctl(3, DRM_IOCTL_I915_GEM_CREATE, 0x7ffe128116b0) = 0
ioctl(3, DRM_IOCTL_I915_GEM_SET_TILING, 0x7ffe12811600) = 0
ioctl(3, DRM_IOCTL_I915_GEM_SET_DOMAIN, 0x7ffe128116a4) = 0
ioctl(3, DRM_IOCTL_PRIME_HANDLE_TO_FD, 0x7ffe1281190c) = 0
ioctl(5, DRM_IOCTL_PRIME_FD_TO_HANDLE, 0x7ffe1281163c) = 0
GEM handle of imported buffer: 1
ioctl(5, DRM_IOCTL_MODE_DESTROY_DUMB, 0x7ffe12811904) = 0
ioctl(3, DRM_IOCTL_GEM_CLOSE, 0x7ffe128118a0) = 0
```
Is it ok to use `DESTROY_DUMB` here?
This is all purely by inspection, I have not hit actual problems so far.
```
< ickle> pq: cut-and-paste drivers/gpu/drm/drm_dumb_buffers.c drm_mode_destroy_dumb() into the report

int drm_mode_destroy_dumb(struct drm_device *dev, u32 handle,
                          struct drm_file *file_priv)
{
	if (!dev->driver->dumb_create)
		return -ENOSYS;

	if (dev->driver->dumb_destroy)
		return dev->driver->dumb_destroy(file_priv, dev, handle);
	else
		return drm_gem_dumb_destroy(file_priv, dev, handle);
}
```
Version: 18.3

# [\[regression\]\[bisected\] Android build test fails to include libmesa_winsys_virgl_common](https://gitlab.freedesktop.org/mesa/mesa/-/issues/941) (2019-09-18)
## Submitted by clayton craft `@craftyguy`
Assigned to **Alexandros Frantzis**
**[Link to original bug (#110922)](https://bugs.freedesktop.org/show_bug.cgi?id=110922)**
## Description
Output from Android build:

```
[985/985] including vendor/intel/utils/Android.mk ...
vendor/intel/external/project-celadon/mesa/src/gallium/winsys/virgl/drm/Android.mk: error: libmesa_winsys_virgl (STATIC_LIBRARIES android-x86_64) missing libmesa_winsys_virgl_common (STATIC_LIBRARIES android-x86_64)
You can set ALLOW_MISSING_DEPENDENCIES=true in your environment if this is intentional, but that may defer real problems until later in the build.
vendor/intel/external/project-celadon/mesa/src/gallium/winsys/virgl/drm/Android.mk: error: libmesa_winsys_virgl (STATIC_LIBRARIES android-x86) missing libmesa_winsys_virgl_common (STATIC_LIBRARIES android-x86)
You can set ALLOW_MISSING_DEPENDENCIES=true in your environment if this is intentional, but that may defer real problems until later in the build.
vendor/intel/external/project-celadon/mesa/src/gallium/winsys/virgl/vtest/Android.mk: error: libmesa_winsys_virgl_vtest (STATIC_LIBRARIES android-x86_64) missing libmesa_winsys_virgl_common (STATIC_LIBRARIES android-x86_64)
You can set ALLOW_MISSING_DEPENDENCIES=true in your environment if this is intentional, but that may defer real problems until later in the build.
vendor/intel/external/project-celadon/mesa/src/gallium/winsys/virgl/vtest/Android.mk: error: libmesa_winsys_virgl_vtest (STATIC_LIBRARIES android-x86) missing libmesa_winsys_virgl_common (STATIC_LIBRARIES android-x86)
You can set ALLOW_MISSING_DEPENDENCIES=true in your environment if this is intentional, but that may defer real problems until later in the build.
build/make/core/main.mk:833: error: exiting from previous errors.
```
This has been bisected to:
```
commit 801753d4b34f41625487c24a5c6ddaa912ef607a
Author: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Date:   Tue Jun 11 17:58:08 2019 +0300

    virgl: Use virgl_resource_cache in the vtest winsys
```

and:

```
commit 13f70d3668e6392bb08805f8d6f3162905ad35f0
Author: Alexandros Frantzis <alexandros.frantzis@collabora.com>
Date:   Wed Jun 12 10:30:26 2019 +0300

    virgl: Use virgl_resource_cache in the drm winsys
```
You can view the full build log here: https://mesa-ci.01.org/mesa_master/builds/16747/group/63a9f0ea7bb98050796b649e85481845/93085/artifacts/154
Version: git

# [VA-API st doesn't expose NOISE_REDUCTION/SHARPNESS, while they exposed via VDPAU st](https://gitlab.freedesktop.org/mesa/mesa/-/issues/940) (2019-09-18)
## Submitted by Andrew Randrianasulu
Assigned to **mes..@..op.org**
**[Link to original bug (#110846)](https://bugs.freedesktop.org/show_bug.cgi?id=110846)**
## Description
Well, I was playing with VA-API some more, and found a few strange things...
Here is the vdpauinfo output for my system (mesa/nv50):

```
vdpauinfo
display: :0   screen: 0
API version: 1
Information string: G3DVL VDPAU Driver Shared Library version 1.0

Video surface:
name   width height types
-------------------------------------------
420     8192  8192  NV12 YV12
422     8192  8192  UYVY YUYV
444     8192  8192  Y8U8V8A8 V8U8Y8A8

Decoder capabilities:
name                      level macbs width height
----------------------------------------------------
MPEG1                         0 16384  2048  2048
MPEG2_SIMPLE                  3 16384  2048  2048
MPEG2_MAIN                    3 16384  2048  2048
H264_BASELINE             --- not supported ---
H264_MAIN                 --- not supported ---
H264_HIGH                 --- not supported ---
VC1_SIMPLE                --- not supported ---
VC1_MAIN                  --- not supported ---
VC1_ADVANCED              --- not supported ---
MPEG4_PART2_SP            --- not supported ---
MPEG4_PART2_ASP           --- not supported ---
DIVX4_QMOBILE             --- not supported ---
DIVX4_MOBILE              --- not supported ---
DIVX4_HOME_THEATER        --- not supported ---
DIVX4_HD_1080P            --- not supported ---
DIVX5_QMOBILE             --- not supported ---
DIVX5_MOBILE              --- not supported ---
DIVX5_HOME_THEATER        --- not supported ---
DIVX5_HD_1080P            --- not supported ---
H264_CONSTRAINED_BASELINE --- not supported ---
H264_EXTENDED             --- not supported ---
H264_PROGRESSIVE_HIGH     --- not supported ---
H264_CONSTRAINED_HIGH     --- not supported ---
H264_HIGH_444_PREDICTIVE  --- not supported ---
HEVC_MAIN                 --- not supported ---
HEVC_MAIN_10              --- not supported ---
HEVC_MAIN_STILL           --- not supported ---
HEVC_MAIN_12              --- not supported ---
HEVC_MAIN_444             --- not supported ---

Output surface:
name        width height nat types
----------------------------------------------------
B8G8R8A8     8192  8192  y   NV12 YV12 UYVY YUYV Y8U8V8A8 V8U8Y8A8 A4I4 I4A4 A8I8 I8A8
R8G8B8A8     8192  8192  y   NV12 YV12 UYVY YUYV Y8U8V8A8 V8U8Y8A8 A4I4 I4A4 A8I8 I8A8
R10G10B10A2  8192  8192  y   NV12 YV12 UYVY YUYV Y8U8V8A8 V8U8Y8A8 A4I4 I4A4 A8I8 I8A8
B10G10R10A2  8192  8192  y   NV12 YV12 UYVY YUYV Y8U8V8A8 V8U8Y8A8 A4I4 I4A4 A8I8 I8A8

Bitmap surface:
name        width height
------------------------------
B8G8R8A8     8192  8192
R8G8B8A8     8192  8192
R10G10B10A2  8192  8192
B10G10R10A2  8192  8192
A8           8192  8192

Video mixer:
feature name                  sup
------------------------------------
DEINTERLACE_TEMPORAL          y
DEINTERLACE_TEMPORAL_SPATIAL  -
INVERSE_TELECINE              -
NOISE_REDUCTION               y
SHARPNESS                     y
LUMA_KEY                      y
HIGH QUALITY SCALING - L1     y
HIGH QUALITY SCALING - L2     -
HIGH QUALITY SCALING - L3     -
HIGH QUALITY SCALING - L4     -
HIGH QUALITY SCALING - L5     -
HIGH QUALITY SCALING - L6     -
HIGH QUALITY SCALING - L7     -
HIGH QUALITY SCALING - L8     -
HIGH QUALITY SCALING - L9     -

parameter name         sup  min  max
-----------------------------------------------------
VIDEO_SURFACE_WIDTH    y     48  2048
VIDEO_SURFACE_HEIGHT   y     48  2048
CHROMA_TYPE            y
LAYERS                 y      0     4

attribute name         sup   min   max
-----------------------------------------------------
BACKGROUND_COLOR       y
CSC_MATRIX             y
NOISE_REDUCTION_LEVEL  y    0.00  1.00
SHARPNESS_LEVEL        y   -1.00  1.00
LUMA_KEY_MIN_LUMA      y
LUMA_KEY_MAX_LUMA      y
```
As you can see, in theory noise reduction/sharpness (from the vl layer in gallium) are exposed here. Still, if I look at https://cgit.freedesktop.org/mesa/mesa/tree/src/gallium/state_trackers/va/surface.c, I can see in vlVaQueryVideoProcFilterCaps():

```c
case VAProcFilterNoiseReduction:
case VAProcFilterSharpening:
case VAProcFilterColorBalance:
case VAProcFilterSkinToneEnhancement:
   return VA_STATUS_ERROR_UNIMPLEMENTED;
```

So the VA implementation is a bit incomplete.
I also think it ignores any quality-scaling flags, and even detailed colorspace settings (hardcoded to BT601).
Version: git

# [Qt bug regression in latest mesa-git unofficial Arch Repo](https://gitlab.freedesktop.org/mesa/mesa/-/issues/938) (2019-09-18)
## Submitted by Svyatoslav Timofeev
Assigned to **mes..@..op.org**
**[Link to original bug (#110705)](https://bugs.freedesktop.org/show_bug.cgi?id=110705)**
## Description
I'm using kwin-lowlatency, a custom kernel (5.1 Tk-Glitch PKGBUILD), and my X11 config here:

```
Section "Device"
    Identifier "RX560"
    Driver "amdgpu"
    Option "AccelMethod" "glamor"
    Option "DRI" "3"
    Option "TearFree" "on"
    Option "ColorTiling" "on"
    Option "ColorTiling2D" "on"
EndSection
```
With the latest mesa-git, Qt/Plasma is going mad: fonts are distorted on Latte Dock, and the sidebar in the KDE control center glitches. Some other Qt5 apps are affected too, while non-Qt apps function normally.
Please check it!
Version: git

# [Shader-based MJPEG decoding](https://gitlab.freedesktop.org/mesa/mesa/-/issues/937) (2019-09-18)
## Submitted by Andrew Randrianasulu
Assigned to **mes..@..op.org**
**[Link to original bug (#110699)](https://bugs.freedesktop.org/show_bug.cgi?id=110699)**
## Description
Hello.
While this bug may not see any implementation in literally years due to a shortage of manpower, it will be around for searches, at least.
Yesterday I made an informal request on the #nouveau channel, asking if https://cgit.freedesktop.org/mesa/mesa/tree/src/gallium/auxiliary/vl/vl_idct.c could be reused for shader-based mjpeg decoding.
Ilia Mirkin answered:

```
01:53 imirkin: AndrewR: i suppose so? mpeg is basically a bunch of 8x8 JPEG's ... kinda
01:54 imirkin: why do you care about mjpeg out of curiousity?
07:31 AndrewR: imirkin, sorry, was sleeping. Recently Cinelerra-GG (NLE) gained support for vaapi/vdpau decoding and vaapi encoding ..so, having few streams played at the same time (tracks, monitors) not as uncommon as it was with just players.
07:32 AndrewR: imirkin, https://lists.cinelerra-gg.org/pipermail/cin/2019-May/thread.html (not very big list archive)
07:34 AndrewR: imirkin, as far as I understand mesa and ffmpeg can't be mixed freely (mit vs gpl?), but then having something simple for (regression) testing will not hurt?
08:02 AndrewR: imirkin, https://github.com/CESNET/GPUJPEG (CUDA, but in theory it can be implemented at least on same hw with different programming interface ...). Well, even just IDCT stage....
08:31 AndrewR: https://github.com/negge/jpeg_gpu/commits/master - I think I tested this on my openGL 3.3 card and it worked ....
08:51 AndrewR: imirkin, just retested this jpeg_gpu program - it decodes 2048x1536 jpeg photo at 11 fps for cpufreq 1.4 Ghz, and at 27 fps if I let cpu freq rise up to 3.4-3.8 Ghz
```

src: https://people.freedesktop.org/~cbrill/dri-log/index.php?channel=nouveau&date=2019-05-16
So, while I can't code my own feature request, I hope it will generate at least some discussion. Even if shader-based mjpeg decoding is not very fast, it should be simpler than mpeg2 or h264 (!), and it can serve as a base for regression-testing the va state tracker. (There is no vaapi/vdpau component in this bugzilla.)
Please note that https://github.com/CESNET/GPUJPEG probably utilizes NV-specific hardware, so it may not be very portable (at the algorithm level) to AMD/other hardware behind OpenCL. But still, they quote performance figures:
Quoting:

```
OVERVIEW:
 -It uses NVIDIA CUDA platform.
 -Not optimized yet (it is only the first test implementation).
 -Encoder and decoder use Huffman coder for entropy encoding/decoding.
 -Encoder produces by default baseline JPEG codestream which consists of proper codestream
  headers and one scan for each color component without subsampling and it uses
  restart flags that allows fast parallel encoding. The quality of encoded
  images can be specified by value 0-100.
 -Optionally encoder can produce interleaved stream (all components in one scan) or/and
  subsampled stream.
 -Decoder can decompress only JPEG codestreams that can be generated by encoder. If scan
  contains restart flags, decoder can use parallelism for fast decoding.
 -Encoding/Decoding of JPEG codestream is divided into following phases:

     Encoding:                     Decoding:
     1) Input data loading         1) Input data loading
     2) Preprocessing              2) Parsing codestream
     3) Forward DCT                3) Huffman decoder
     4) Huffman encoder            4) Inverse DCT
     5) Formatting codestream      5) Postprocessing

  and they are implemented on CPU or/and GPU as follows:
  -CPU:
     -Input data loading
     -Parsing codestream
     -Huffman encoder/decoder (when restart flags are disabled)
     -Output data formatting
  -GPU:
     -Preprocessing/Postprocessing (color component parsing,
      color transformation RGB <-> YCbCr)
     -Forward/Inverse DCT (discrete cosine transform)
     -Huffman encoder/decoder (when restart flags are enabled)

PERFORMANCE:
 Following tables summarizes encoding/decoding performance using NVIDIA
 GTX 580 for non-interleaved and non-subsampled stream with different quality
 settings

 [...]

 Decoding:
         |     4k (4096x2160)               |     HD (1920x1080)
 --------+----------------------------------+---------------------------------
 quality | duration | psnr     | size       | duration | psnr     | size
 --------+----------+----------+------------+----------+----------+-----------
      10 | 10.28 ms | 29.33 dB |  539.30 kB |  3.13 ms | 27.41 dB |  145.90 kB
      20 | 11.31 ms | 32.70 dB |  697.20 kB |  3.59 ms | 30.32 dB |  198.30 kB
      30 | 12.36 ms | 34.63 dB |  850.60 kB |  3.97 ms | 31.92 dB |  243.60 kB
      40 | 12.90 ms | 35.97 dB |  958.90 kB |  4.28 ms | 32.99 dB |  282.20 kB
      50 | 13.45 ms | 36.94 dB | 1073.30 kB |  4.56 ms | 33.82 dB |  319.10 kB
      60 | 14.71 ms | 37.96 dB | 1217.10 kB |  4.81 ms | 34.65 dB |  360.00 kB
      70 | 15.03 ms | 39.22 dB | 1399.20 kB |  5.24 ms | 35.71 dB |  422.10 kB
      80 | 16.64 ms | 40.67 dB | 1710.00 kB |  5.89 ms | 37.15 dB |  526.70 kB
      90 | 19.99 ms | 42.83 dB | 2441.40 kB |  7.48 ms | 39.84 dB |  768.40 kB
     100 | 46.45 ms | 47.09 dB | 7798.70 kB | 16.42 ms | 47.21 dB | 2499.60 kB
```
Version: git

# [Xorg segfault when a web browser is opened](https://gitlab.freedesktop.org/mesa/mesa/-/issues/935) (2019-09-18)
## Submitted by kei..@..il.com
Assigned to **mes..@..op.org**
**[Link to original bug (#109927)](https://bugs.freedesktop.org/show_bug.cgi?id=109927)**
## Description
Created attachment 143574
Output of journalctl -b
I use the Padoka PPA. Approximately 45 hours ago, there was an update for Bionic that moved the Mesa version from 'master git branch up to commit 6fa923a65daf1ee73c5cc763ade91abc82da7085' to commit 43f40dc7cb234e007fe612b67cc765288ddf0533, and xserver-xorg-video-amdgpu from 'master git branch up to commit 9045fb310f88780e250e60b80431ca153330e61b' to commit a2b32e72fdaff3007a79b84929997d8176c2d512. No other changes have taken place in my system.
Since then, within 1.5s of opening a web browser (Firefox, Chromium), I get a crash to the graphical login prompt. I can open the Nautilus file browser, even a terminal session with transparency enabled, without issue for as long as I like. Below are the error message and my system specs, and attached are the relevant logfiles from a Chromium crash. A Firefox crash produces no specific error messages prior to the segfault, but I can upload those logs too if needed. Apologies if this bug is incorrectly filed; I wasn't sure whether the issue lies with mesa or xserver-xorg-video-amdgpu.
Mar 07 15:35:07 host chromium-browser.desktop[3259]: [3293:3293:0307/153507.139737:ERROR:sandbox_linux.cc(364)] InitializeSandbox() called with multiple threads in process gpu-process.
Mar 07 15:35:07 host gnome-keyring-daemon[1895]: couldn't allocate secure memory to keep passwords and or keys from being written to the disk
Mar 07 15:35:07 host gnome-keyring-daemon[1895]: asked to register item /org/freedesktop/secrets/collection/Default_5fkeyring/1, but it's already registered
Mar 07 15:35:07 host gnome-keyring-daemon[1895]: asked to register item /org/freedesktop/secrets/collection/Default_5fkeyring/1, but it's already registered
Mar 07 15:35:10 host /usr/lib/gdm3/gdm-x-session[1899]: (EE)
Mar 07 15:35:10 host /usr/lib/gdm3/gdm-x-session[1899]: (EE) Backtrace:
Mar 07 15:35:10 host /usr/lib/gdm3/gdm-x-session[1899]: (EE) 0: /usr/lib/xorg/Xorg (xorg_backtrace+0x4d) [0x5558457998cd]
Mar 07 15:35:10 host /usr/lib/gdm3/gdm-x-session[1899]: (EE) 1: /usr/lib/xorg/Xorg (0x5558455e1000+0x1bc669) [0x55584579d669]
Mar 07 15:35:10 host /usr/lib/gdm3/gdm-x-session[1899]: (EE) 2: /lib/x86_64-linux-gnu/libpthread.so.0 (0x7ffb48abe000+0x12890) [0x7ffb48ad0890]
Mar 07 15:35:10 host /usr/lib/gdm3/gdm-x-session[1899]: (EE)
Mar 07 15:35:10 host /usr/lib/gdm3/gdm-x-session[1899]: (EE) Segmentation fault at address 0x0
Mar 07 15:35:10 host /usr/lib/gdm3/gdm-x-session[1899]: (EE)
Mar 07 15:35:10 host /usr/lib/gdm3/gdm-x-session[1899]: Fatal server error:
Mar 07 15:35:10 host /usr/lib/gdm3/gdm-x-session[1899]: (EE) Caught signal 11 (Segmentation fault). Server aborting
Mar 07 15:35:10 host /usr/lib/gdm3/gdm-x-session[1899]: (EE)
Mar 07 15:35:10 host /usr/lib/gdm3/gdm-x-session[1899]: Please consult the The X.Org Foundation support
Mar 07 15:35:10 host /usr/lib/gdm3/gdm-x-session[1899]: at http://wiki.x.org
Mar 07 15:35:10 host /usr/lib/gdm3/gdm-x-session[1899]: for help.
Mar 07 15:35:10 host /usr/lib/gdm3/gdm-x-session[1899]: (EE) Please also check the log file at "/home/usernaym/.local/share/xorg/Xorg.0.log" for additional information.
Mar 07 15:35:10 host /usr/lib/gdm3/gdm-x-session[1899]: (EE)
System: Host: host Kernel: 5.0.0-050000-generic x86_64 bits: 64 Desktop: Gnome 3.28.3
Distro: Ubuntu 18.04.2 LTS
Machine: Device: desktop Mobo: ASUSTeK model: M4A87TD EVO v: Rev 1.xx serial: N/A
BIOS: American Megatrends v: 2001 date: 03/08/2011
CPU: Quad core AMD Phenom II X4 970
Graphics: Card: Advanced Micro Devices [AMD/ATI] Hawaii PRO [Radeon R9 290/390]
Display Server: x11 (X.Org 1.19.6 ) drivers: ati,vesa (unloaded: modesetting,fbdev,radeon)
Resolution: 2560x1440@59.95hz
OpenGL: renderer: AMD Radeon R9 390 Series (HAWAII, DRM 3.27.0, 5.0.0-050000-generic, LLVM 9.0.0)
version: 4.5 Mesa 19.1.0-devel - padoka PPA
**Attachment 143574**, "Output of journalctl -b":
[journald.log](/uploads/6df48b4cb9f1a62c3838c875594c1225/journald.log)
Version: git

# Incorrect assert in gallium/state_trackers/va/picture_mjpeg.c
https://gitlab.freedesktop.org/mesa/mesa/-/issues/934 (last updated 2019-09-18T20:19:09Z; migrated by Bugzilla Migration User)

## Submitted by Andres
Assigned to **mes..@..op.org**
**[Link to original bug (#109765)](https://bugs.freedesktop.org/show_bug.cgi?id=109765)**
## Description
The assert in vlVaHandleHuffmanTableBufferType() [1] seems incorrect. Based on the pattern of the other functions in the file, it should be:
assert(buf->size >= sizeof(VAHuffmanTableBufferJPEGBaseline) && buf->num_elements == 1);
instead of
assert(buf->size >= sizeof(VASliceParameterBufferJPEGBaseline) && buf->num_elements == 1);
[1] https://gitlab.freedesktop.org/mesa/mesa/blob/master/src/gallium/state_trackers/va/picture_mjpeg.c#L74

# Regression: [bisected] dEQP-GLES31.functional.tessellation.invariance.* start failing on r600
https://gitlab.freedesktop.org/mesa/mesa/-/issues/931 (last updated 2019-09-18T20:19:03Z; migrated by Bugzilla Migration User)

## Submitted by Gert Wollny `@gerddie`
Assigned to **mes..@..op.org**
**[Link to original bug (#108734)](https://bugs.freedesktop.org/show_bug.cgi?id=108734)**
## Description
The patch
5d517a599b1eabd1d5696bf31e26f16568d35770
st/mesa: Don't record garbage streamout information in the non-SSO case.
breaks dEQP-GLES31.functional.tessellation.invariance.* on r600. All the tests pass without this patch, but with the patch applied,
glGetQueryObjectuiv(queryObject, GL_QUERY_RESULT, &result);
returns zero in `result` for all the tests in this set, which is not correct.
Version: git

# Request: Control Center for AMD GPU
https://gitlab.freedesktop.org/mesa/mesa/-/issues/929 (last updated 2019-09-18T20:18:58Z; migrated by Bugzilla Migration User)

## Submitted by Ahmed Elsayed
Assigned to **mes..@..op.org**
**[Link to original bug (#108353)](https://bugs.freedesktop.org/show_bug.cgi?id=108353)**
## Description
Could you please add a control center for AMD GPUs, so we can control settings like in the AMD proprietary driver, or anything like NVIDIA PRIME that can switch automatically between Intel and AMD GPUs?
I filed my request here because I always use Mesa: AMD hasn't released any driver for my card since 2015, and Mesa works fine for me.
I use HD 8750M.

# Compilation failure due to missing xcb_randr_lease_t
https://gitlab.freedesktop.org/mesa/mesa/-/issues/926 (last updated 2019-09-18T20:18:32Z; migrated by Bugzilla Migration User)

## Submitted by Danylo Piliaiev `@Danil`
Assigned to **mes..@..op.org**
**[Link to original bug (#106976)](https://bugs.freedesktop.org/show_bug.cgi?id=106976)**
## Description
Recent commit https://cgit.freedesktop.org/mesa/mesa/commit/?id=7ab1fffcd2a504024b16e408de329f7a94553ecc broke Mesa compilation when the installed xcb-randr version is older than 1.13.
Version: git