Commit 22fbbb5e authored by Nicolai Hähnle

Initial commit

project (piglit)
include(${CMAKE_ROOT}/Modules/FindOpenGL.cmake)
include(${CMAKE_ROOT}/Modules/FindTIFF.cmake)
include(${CMAKE_ROOT}/Modules/FindGLUT.cmake)
add_subdirectory (tests)
Initial design decisions
-------------------------
Before I started working on Piglit, I asked around for OpenGL testing methods.
There were basically two kinds of tests:
1. Glean, which is fully automatic and intends to test the letter of the
OpenGL specification (at least partially).
2. Manual tests using Mesa samples or third-party software.
The weakness of Glean is that it is not flexible and not pragmatic enough for
driver development. For example, it tests for precision requirements in
blending, where a driver just cannot reasonably improve on what the hardware
delivers. As a driver developer, one wants to consider a test successful
when it reaches the optimal results that the hardware can give, even when
these results may be non-compliant.
Manual tests are not easily repeatable. They require a considerable amount of
work on the part of the developer, so most of the time they aren't done at all.
On the other hand, those manual tests have sometimes been created to test for
a particular weakness in implementations, so they may be very suitable to
detect future, similar weaknesses.
Due to these weaknesses, the test coverage of open source OpenGL drivers
is suboptimal at best. My goal for Piglit is to create a useful test system
that helps driver developers in improving driver quality.
With that in mind, my sub-goals are:
1. Combine the strengths of the two kinds of tests (Glean, manual tests)
into a single framework.
2. Provide concise, human readable summaries of the test results, with the
option to increase the detail of the report when desired.
3. Allow easy visualization of regressions.
4. Allow easy detection of performance regressions.
I briefly considered extending Glean, but then decided to create an
entirely new project. The most important reasons are:
1. I do not want to pollute the very clean philosophy behind Glean.
2. The model behind Glean is that one process runs all the tests.
During driver development, one often gets bugs that cause tests to crash.
This means that one failed test can disrupt the entire test batch.
I want to use a more robust model, where each test runs in its own process.
This model does not easily fit onto Glean.
3. The amount of code duplication and forking overhead is minimal because
a) I can use Glean as a "subroutine", so no code is actually duplicated,
there's just a tiny amount of additional Python glue code.
b) It's unlikely that this project makes significant changes to Glean
that need to be merged upstream.
4. While it remains reasonable to use C++ for the actual OpenGL tests,
it is easier to use a higher level language like Python for the framework
(summary processing, etc.).
Ugly Things (or: Coding style)
-------------------------------
As a rule of thumb, coding style should be preserved in test code taken from
other projects, as long as that code is self-contained.
Apart from that, the following rules are cast in stone:
1. Use tabulators for indentation
2. Use spaces for alignment
3. No whitespace at the end of a line
See http://electroly.com/mt/archives/000002.html for a well-written rationale.
Use whatever tabulator size you want:
If you adhere to the rules above, the tab size does not matter. Tab size 4
is recommended because it keeps the line lengths reasonable, but in the end,
that's purely a matter of personal taste.
Piglit
------
1. About
2. Setup
3. How to run tests
4. How to write tests
5. Todo
1. About
--------
Piglit is a collection of automated tests for OpenGL implementations.
The goal of Piglit is to help improve the quality of open source
OpenGL drivers by providing developers with a simple means to
perform regression tests.
The original tests have been taken from
- Glean ( http://glean.sf.net/ ) and
- Mesa ( http://www.mesa3d.org/ )
2. Setup
--------
First of all, you need to make sure that the following are installed:
- Python 2.4 or greater
- cmake (http://www.cmake.org)
- GL, GLU and GLUT libraries and development packages (i.e. headers)
- X11 libraries and development packages (i.e. headers)
- libtiff
Now configure the build system:
$ ccmake .
This will start cmake's configuration tool; just follow the onscreen
instructions. The default settings should be fine, but I recommend that you:
- Press 'c' once (this will also check for dependencies) and then
- Set "CMAKE_BUILD_TYPE" to "Debug"
Now you can press 'c' again and then 'g' to generate the build system.
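Alternatively, a non-interactive configuration with plain cmake should work
just as well:
$ cmake -DCMAKE_BUILD_TYPE=Debug .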
Now build everything:
$ make
3. How to run tests
-------------------
Make sure that everything is set up correctly:
$ ./piglit-run.py tests/sanity.tests results/sanity.results
This will run some minimal tests. Use
$ ./piglit-run.py
to learn more about the command's syntax. Have a look into the tests/
directory to see what test profiles are available:
$ cd tests
$ ls *.tests
...
$ cd ..
To create nicely formatted test summaries, run
$ ./piglit-summary-html.py summary/sanity results/sanity.results
Hint: You can combine multiple test results into a single summary.
During development, you can use this to watch for regressions:
$ ./piglit-summary-html.py summary/compare results/baseline.results results/current.results
You can combine as many testruns as you want this way (in theory;
the HTML layout becomes awkward as the number of testruns increases).
Have a look at the results with a browser:
$ xdg-open summary/sanity/index.html
The summary shows the 'status' of a test:
 pass   The test completed successfully.
 warn   The test completed successfully, but something unexpected happened.
        Look at the details for more information.
 fail   The test failed.
 skip   The test was skipped.
[Note: Once performance tests are implemented, 'fail' will mean that the test
rendered incorrectly or didn't complete, while 'warn' will indicate a
performance regression]
[Note: For performance tests, result and status will be different concepts.
While status is always restricted to one of the four values above,
the result can contain a performance number like frames per second]
4. How to write tests
---------------------
Every test is run as a separate process. This minimizes the impact that
severe bugs like memory corruption have on the testing process.
Therefore, tests can be implemented in any language, as long as each test
runs as a standalone executable.
I recommend C, C++ and Python, as these are the languages that are already
used in Piglit.
All new tests must be added to the all.tests profile. The test profiles
are simply Python scripts. There are currently two supported test types:
PlainExecTest
This test starts a new process and watches the process output (stdout and
stderr). Lines that start with "PIGLIT:" are collected and interpreted as
a Python dictionary that contains the test result details (a minimal
example is sketched below).
GleanTest
This test type is only used to integrate Glean tests.
Additional test types (e.g. for automatic image comparison) would have to
be added to core.py.
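As an illustration of the PlainExecTest protocol, a minimal test executable
could look like the following sketch (the file name and messages are made up;
the only hard requirement is a "PIGLIT:" line whose dictionary contains a
'result' entry, since the framework marks the test as failed otherwise):
  #!/usr/bin/env python
  # minimal-pass.py -- hypothetical example of the PlainExecTest protocol.
  # Everything without the "PIGLIT:" prefix is kept as plain output; the
  # dictionary after "PIGLIT:" becomes the test result.
  print "some ordinary log output"
  print "PIGLIT: {'result': 'pass', 'note': 'nothing unexpected happened'}"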
Rules of thumb:
Test processes that exit with a nonzero return code are considered to have
failed.
Output on stderr causes a warning.
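To register a test, add it to a profile. A profile is an ordinary Python
script that builds a Group called 'tests'; a hypothetical entry (the group
and test names below are made up) could look like this:
  # Hypothetical excerpt from a profile such as all.tests. The loader in
  # framework/core.py executes this file with Group, PlainExecTest and
  # GleanTest already defined and reads the resulting 'tests' Group.
  tests = Group()
  tests['sanity'] = Group()
  tests['sanity']['minimal-pass'] = PlainExecTest(['./tests/minimal-pass.py'])
  tests['glean'] = Group()
  tests['glean']['basic'] = GleanTest('basic')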
5. Todo
-------
Get automated tests into widespread use ;)
Automate and integrate tests and demos from Mesa
Add code that automatically tests whether the test has rendered correctly
Performance regression tests
Ideally, this should be done by summarizing / comparing a history of
test results
#!/usr/bin/python
#
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation
# files (the "Software"), to deal in the Software without
# restriction, including without limitation the rights to use,
# copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following
# conditions:
#
# This permission notice shall be included in all copies or
# substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL ALLEN AKIN BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
# AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF
# OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#!/usr/bin/python
#
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation
# files (the "Software"), to deal in the Software without
# restriction, including without limitation the rights to use,
# copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following
# conditions:
#
# This permission notice shall be included in all copies or
# substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL ALLEN AKIN BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
# AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF
# OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
# Piglit core
import errno
import os
import re
import subprocess
import sys
import traceback
__all__ = [
'Environment',
'loadTestProfile',
'testPathToResultName',
'GroupResult',
'TestResult'
]
# Raised by the loaders at the bottom of this file when a profile or
# results file cannot be read.
class FatalError(Exception):
	pass
#############################################################################
##### Helper functions
#############################################################################
# Ensure that the given directory exists, creating it if necessary;
# optionally fail if it already exists
def checkDir(dirname, failifexists):
exists = True
try:
os.stat(dirname)
except OSError, e:
if e.errno == errno.ENOENT or e.errno == errno.ENOTDIR:
exists = False
if exists and failifexists:
print >>sys.stderr, "%(dirname)s exists already.\nUse --overwrite if you want to overwrite it.\n" % locals()
exit(1)
try:
os.makedirs(dirname)
except OSError, e:
if e.errno != errno.EEXIST:
raise
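# Map a test path like 'group/subgroup/testname' to the expression used in
# the results file, e.g. "testrun.results['group']['subgroup']['testname']".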
def testPathToResultName(path):
elems = filter(lambda s: len(s) > 0, path.split('/'))
pyname = 'testrun.results' + "".join(map(lambda s: "['"+s+"']", elems))
return pyname
#############################################################################
##### Result classes
#############################################################################
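# TestResult and GroupResult are dict subclasses whose __repr__ emits a
# constructor call of the form Class({attributes}, {dict contents}). Result
# files are plain Python that loadTestResults() re-reads with execfile(),
# so results round-trip through their own repr.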
class TestResult(dict):
def __init__(self, *args):
dict.__init__(self)
assert(len(args) == 0 or len(args) == 2)
if len(args) == 2:
for k in args[0]:
self.__setattr__(k, args[0][k])
self.update(args[1])
def __repr__(self):
attrnames = set(dir(self)) - set(dir(self.__class__()))
return '%(class)s(%(dir)s,%(dict)s)' % {
'class': self.__class__.__name__,
'dir': dict([(k, self.__getattribute__(k)) for k in attrnames]),
'dict': dict.__repr__(self)
}
class GroupResult(dict):
def __init__(self, *args):
dict.__init__(self)
assert(len(args) == 0 or len(args) == 2)
if len(args) == 2:
for k in args[0]:
self.__setattr__(k, args[0][k])
self.update(args[1])
def __repr__(self):
attrnames = set(dir(self)) - set(dir(self.__class__()))
return '%(class)s(%(dir)s,%(dict)s)' % {
'class': self.__class__.__name__,
'dir': dict([(k, self.__getattribute__(k)) for k in attrnames]),
'dict': dict.__repr__(self)
}
class TestrunResult:
def __init__(self, *args):
self.name = ''
self.results = GroupResult()
#############################################################################
##### Generic Test classes
#############################################################################
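# Environment collects the run-time settings shared by all tests:
#   file    -- stream that per-test result assignments are written to
#   execute -- when False, only print which tests would be run (dry run)
#   filter  -- list of compiled regular expressions; if non-empty, only
#              test paths matching at least one of them are run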
class Environment:
def __init__(self):
self.file = sys.stdout
self.execute = True
self.filter = []
class Test:
ignoreErrors = []
def doRun(self, env, path):
# Filter
if len(env.filter) > 0:
if not True in map(lambda f: f.search(path) != None, env.filter):
return None
# Run the test
if env.execute:
try:
print "Test: %(path)s" % locals()
result = self.run()
if 'result' not in result:
result['result'] = 'fail'
if not isinstance(result, TestResult):
result = TestResult({}, result)
result['result'] = 'warn'
result['note'] = 'Result not returned as an instance of TestResult'
except:
result = TestResult()
result['result'] = 'fail'
result['exception'] = str(sys.exc_info()[0]) + str(sys.exc_info()[1])
result['traceback'] = '@@@' + "".join(traceback.format_tb(sys.exc_info()[2]))
print " result: %(result)s" % { 'result': result['result'] }
varname = testPathToResultName(path)
print >>env.file, "%(varname)s = %(result)s" % locals()
else:
print "Dry-run: %(path)s" % locals()
# Default handling for stderr messages
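	# Lines matching an entry of Test.ignoreErrors (a plain substring or a
	# compiled regex) are moved to 'errors_ignored'; any remaining stderr
	# output is stored in 'errors' and downgrades a 'pass' to 'warn'.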
def handleErr(self, results, err):
errors = filter(lambda s: len(s) > 0, map(lambda s: s.strip(), err.split('\n')))
ignored = []
for s in errors:
ignore = False
for pattern in Test.ignoreErrors:
if type(pattern) == str:
if s.find(pattern) >= 0:
ignore = True
break
else:
if pattern.search(s):
ignore = True
break
if ignore:
ignored.append(s)
errors = [s for s in errors if s not in ignored]
if len(errors) > 0:
results['errors'] = errors
if results['result'] == 'pass':
results['result'] = 'warn'
if len(ignored) > 0:
results['errors_ignored'] = ignored
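# A Group is a dictionary of sub-groups and sub-tests. Running it first
# emits a GroupResult assignment for its own path and then runs every child
# with 'parentpath/childname' as the child's path.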
class Group(dict):
def doRun(self, env, path):
print >>env.file, "%s = GroupResult()" % (testPathToResultName(path))
for sub in self:
spath = sub
if len(path) > 0:
spath = path + '/' + spath
self[sub].doRun(env, spath)
#############################################################################
##### PlainExecTest: Simply run an executable
##### Expect one line prefixed PIGLIT: in the output, which contains a
##### result dictionary. The plain output is appended to this dictionary
#############################################################################
class PlainExecTest(Test):
def __init__(self, command):
self.command = command
def run(self):
proc = subprocess.Popen(
self.command,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
)
out,err = proc.communicate()
outlines = out.split('\n')
outpiglit = map(lambda s: s[7:], filter(lambda s: s.startswith('PIGLIT:'), outlines))
results = TestResult()
if len(outpiglit) > 0:
try:
results.update(eval(''.join(outpiglit), {}))
out = '\n'.join(filter(lambda s: not s.startswith('PIGLIT:'), outlines))
except:
results['result'] = 'fail'
results['note'] = 'Failed to parse result string'
if 'result' not in results:
results['result'] = 'fail'
if proc.returncode != 0:
results['result'] = 'fail'
results['note'] = 'Returncode was %d' % (proc.returncode)
self.handleErr(results, err)
results['info'] = "@@@Returncode: %d\n\nErrors:\n%s\n\nOutput:\n%s" % (proc.returncode, err, out)
results['returncode'] = proc.returncode
return results
#############################################################################
##### GleanTest: Execute a sub-test of Glean
#############################################################################
def gleanExecutable():
return "./tests/glean/glean"
def gleanResultDir():
return "./results/glean/"
class GleanTest(Test):
globalParams = []
def __init__(self, name):
self.name = name
self.env = {}
def run(self):
results = TestResult()
fullenv = os.environ.copy()
for e in self.env:
fullenv[e] = str(self.env[e])
checkDir(gleanResultDir()+self.name, False)
glean = subprocess.Popen(
[gleanExecutable(), "-o", "-r", gleanResultDir()+self.name,
"--ignore-prereqs",
"-v", "-v", "-v",
"-t", "+"+self.name] + GleanTest.globalParams,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
env=fullenv
)
out, err = glean.communicate()
results['result'] = 'pass'
if glean.returncode != 0 or out.find('FAIL') >= 0:
results['result'] = 'fail'
results['returncode'] = glean.returncode
self.handleErr(results, err)
results['info'] = "@@@Returncode: %d\n\nErrors:\n%s\n\nOutput:\n%s" % (glean.returncode, err, out)
return results
#############################################################################
##### Loaders
#############################################################################
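# A test profile is executed as Python code with Test, Group, GleanTest,
# gleanExecutable and PlainExecTest pre-defined; it must leave a Group
# named 'tests' in its global namespace.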
def loadTestProfile(filename):
try:
ns = {
'__file__': filename,
'__dir__': os.path.dirname(filename),
'Test': Test,
'Group': Group,
'GleanTest': GleanTest,
'gleanExecutable': gleanExecutable,
'PlainExecTest': PlainExecTest
}
execfile(filename, ns)
return ns['tests']
except:
traceback.print_exc()
raise FatalError('Could not read tests profile')
def loadTestResults(filename):
try:
ns = {
'__file__': filename,
'GroupResult': GroupResult,
'TestResult': TestResult,
'TestrunResult': TestrunResult
}
execfile(filename, ns)
# BACKWARDS COMPATIBILITY
if 'testrun' not in ns:
testrun = TestrunResult()
testrun.results.update(ns['results'])
if 'name' in ns:
testrun.name = ns['name']
ns['testrun'] = testrun
# END BACKWARDS COMPATIBILITY
testrun = ns['testrun']
if len(testrun.name) == 0:
testrun.name = filename
return testrun
except:
traceback.print_exc()
raise FatalError('Could not read tests results')
#!/usr/bin/python
#
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation
# files (the "Software"), to deal in the Software without
# restriction, including without limitation the rights to use,
# copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following
# conditions:
#
# This permission notice shall be included in all copies or
# substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL ALLEN AKIN BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
# AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF
# OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
from getopt import getopt, GetoptError
import os
import os.path
import re
import sys
import framework.core as core
#############################################################################
##### Main program
#############################################################################
def usage():
USAGE = """\
Usage: %(progName)s [options] [profile.tests] [profile.results]