Use default precision on struct/interface members when none is specified
Mesa already keeps track of the GLES precision for variables and stores it in the ir_variable. When no precision is explicitly specified, it takes the default precision for the corresponding type. However, when the variable is a struct or interface, the precision of each individual member is attached to the glsl_type instead, and the code to fall back to the default precision was missing for those members. This branch adds it.
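The fallback described above can be sketched as follows. This is a minimal, hypothetical model, not Mesa's actual code: the `glsl_precision` enum values mirror the names in Mesa's glsl_types.h, but the `member` struct and `effective_precision` helper are invented for illustration.

```c
#include <assert.h>

/* Mirrors the precision enum naming used in Mesa's glsl_types.h. */
enum glsl_precision {
   GLSL_PRECISION_NONE,   /* no precision qualifier was written */
   GLSL_PRECISION_HIGH,
   GLSL_PRECISION_MEDIUM,
   GLSL_PRECISION_LOW,
};

/* Hypothetical stand-in for a struct/interface member as parsed. */
struct member {
   const char *name;
   enum glsl_precision precision; /* as written in the shader source */
};

/* If the member has no explicit precision, substitute the default
 * precision in effect for its base type in the current stage. */
static enum glsl_precision
effective_precision(const struct member *m,
                    enum glsl_precision stage_default)
{
   return m->precision == GLSL_PRECISION_NONE ? stage_default
                                              : m->precision;
}
```

Note that the stage default matters here: in GLSL ES, `int` defaults to `highp` in the vertex stage but `mediump` in the fragment stage, so an unqualified struct member can end up with different effective precisions in the two stages.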
Only the last patch actually makes this change; the rest of the patches fix regressions in Piglit and the CTS. The underlying problem is that Mesa considered types with different precisions to be different when matching interstage interfaces (varyings and UBOs), whereas according to the spec precision should be ignored there. Presumably this problem already existed when mismatched precisions were explicitly specified, but we didn't have any tests covering that. Storing the default precision makes some tests fail because the default precision for ints differs between the vertex and fragment stages, so it's easy to accidentally write a test case that hits this.
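The matching rule the fixes implement can be sketched like this. Again a hypothetical model rather than Mesa's actual linker code: the `member_type` struct and `members_match` helper are invented, and only the idea is taken from the branch, namely that interstage interface matching compares base types while ignoring precision.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

enum glsl_precision {
   GLSL_PRECISION_NONE,
   GLSL_PRECISION_HIGH,
   GLSL_PRECISION_MEDIUM,
   GLSL_PRECISION_LOW,
};

/* Hypothetical stand-in for a member of a varying struct or UBO. */
struct member_type {
   const char *base_type;          /* e.g. "int", "float" */
   enum glsl_precision precision;
};

/* Interstage matching (vertex output vs. fragment input): the base
 * types must agree, but precision is deliberately NOT compared, per
 * the spec's interface-matching rules. */
static bool
members_match(const struct member_type *out, const struct member_type *in)
{
   return strcmp(out->base_type, in->base_type) == 0;
}
```

With this rule, a `highp int` vertex output matches a `mediump int` fragment input, which is exactly the case the stored defaults now produce for unqualified ints.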
The tests that regressed are:
- dEQP-GLES31.functional.shaders.opaque_type_indexing.* (12 tests)
- piglit.spec.ext_transform_feedback.structs_gles3 basic-struct run
- piglit.spec.ext_transform_feedback.structs_gles3 basic-struct get