    Add tolerance to auto-generated built-in function tests. · b4274e75
    Paul Berry authored
    Previously, the auto-generated tests converted their outputs to pixel
    values and tested them against the expected values using
    shader_runner's "probe rgba" command--as a result, the built-in
    functions were only tested to a tolerance of 1 part in 255.
    
    This patch changes the auto-generated tests so that the expected value
    is checked inside the shader itself, to an explicit tolerance.
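
    For a sense of scale, here is a back-of-the-envelope Python
    comparison (not code from this patch):

    # Best tolerance achievable by probing 8-bit pixels:
    probe_tolerance = 1.0 / 255.0   # about 3.9e-3
    # Typical new relative tolerance (see the rules below):
    shader_tolerance = 1e-5
    # For outputs near 1.0, the in-shader check is therefore
    # roughly 390 times tighter:
    print(probe_tolerance / shader_tolerance)   # ~392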
    
    Unfortunately, the GLSL and OpenGL specs are somewhat ambiguous as to
    how accurate the built-in functions need to be.  Section 2.1.1 of the
    OpenGL 2.1 spec, for instance, says that "individual results of
    floating point operations are accurate to about 1 part in 10^5".
    However, it's not clear whether a built-in function is intended to
    constitute a single "operation" in this context.  And in experimenting
    with the systems available to me (Mesa on Intel i965, both IronLake
    and SandyBridge, and an nVidia system running nVidia's proprietary
    Linux driver), I've found that trig functions in particular fail to
    meet this strict requirement.  Considering how trig functions are
    typically used in shaders (e.g. calculating lighting angles), it seems
    like 1 part in 10^5 is an unreasonably tight limit.
    
    So I've settled for the time being on the following compromise (a
    sketch of the tolerance computation follows the list):
    
    - Trig functions are tested to a tolerance of 1 part in 10^3 relative
      to the output of the built-in function, or an absolute tolerance of
      10^-4, whichever is larger.
    
    - The cross product is tested to a tolerance of 1 part in 10^5,
      relative to the product of the magnitudes of the input vectors.
      This avoids an unreasonably tight tolerance in cases where the terms
      of the cross product cancel out, yielding a small result.
    
    - All other functions are tested to a tolerance of 1 part in 10^5
      relative to the output of the built-in function.
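
    To make these rules concrete, here is a minimal Python sketch of
    the tolerance computation (the names compute_tolerance and
    TRIG_FUNCTIONS are made up for illustration, and numpy is assumed;
    this is not the generator's actual code):

    import numpy as np

    TRIG_FUNCTIONS = {'sin', 'cos', 'tan', 'asin', 'acos', 'atan'}

    def compute_tolerance(name, args, result):
        # name: built-in function name, e.g. 'exp'
        # args: tuple of float32 scalars/arrays passed to the built-in
        # result: the expected float32 output
        if name in TRIG_FUNCTIONS:
            # 1 part in 10^3 relative to the output, or 10^-4
            # absolute, whichever is larger.
            return max(1e-3 * np.linalg.norm(result), 1e-4)
        elif name == 'cross':
            # 1 part in 10^5 relative to the product of the input
            # magnitudes, so that cancellation in the result does not
            # produce an unreasonably tight tolerance.
            return 1e-5 * np.linalg.norm(args[0]) * np.linalg.norm(args[1])
        else:
            # All other functions: 1 part in 10^5 relative to the
            # output.
            return 1e-5 * np.linalg.norm(result)

    For exp(2.0), for example, this yields 1e-5 * 7.3890562, i.e.
    roughly 7.389e-05, matching the last tolerance in the generated
    test below.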
    
    To avoid additional sources of error due to floating-point
    conversions, all test vectors are generated as 32-bit floating-point
    values.
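
    For instance, the test vector 2/3 is rounded to the nearest 32-bit
    float and then printed in decimal; a hypothetical numpy one-liner
    (the actual generator's formatting may differ):

    import numpy as np

    x = np.float32(2.0) / np.float32(3.0)   # nearest float32 to 2/3
    print('%.8g' % x)                       # 0.66666669, as in the
                                            # arg0 values below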
    
    As an aid in review, here is the generated test for the exp()
    built-in (each tolerance below is 10^-5 times the expected value,
    per the general rule above):
    
    [require]
    GLSL >= 1.10
    
    [vertex shader]
    varying vec4 color;
    uniform float arg0;
    uniform float tolerance;
    uniform float expected;
    
    void main()
    {
      gl_Position = gl_Vertex;
      float result = exp(arg0);
      color = distance(result, expected) <= tolerance ? vec4(0.0, 1.0, 0.0, 1.0) : vec4(1.0, 0.0, 0.0, 1.0);
    }
    
    [fragment shader]
    varying vec4 color;
    
    void main()
    {
      gl_FragColor = color;
    }
    
    [test]
    uniform float arg0 -2.0
    uniform float expected 0.13533528
    uniform float tolerance 1.3533528e-06
    draw rect -1 -1 2 2
    probe rgba 0 0 0.0 1.0 0.0 1.0
    uniform float arg0 -0.66666669
    uniform float expected 0.51341712
    uniform float tolerance 5.1341713e-06
    draw rect -1 -1 2 2
    probe rgba 1 0 0.0 1.0 0.0 1.0
    uniform float arg0 0.66666669
    uniform float expected 1.9477341
    uniform float tolerance 1.9477342e-05
    draw rect -1 -1 2 2
    probe rgba 2 0 0.0 1.0 0.0 1.0
    uniform float arg0 2.0
    uniform float expected 7.3890562
    uniform float tolerance 7.3890566e-05
    draw rect -1 -1 2 2
    probe rgba 3 0 0.0 1.0 0.0 1.0