  1. Mar 10, 2025
    • Kamil Konieczny's avatar
      runner: Parse results harder · a7465d37
      Kamil Konieczny authored
      
      Sometimes an error in the kernel or in a test leaves output
      files in a corrupted or incorrect state. While the runner or
      resume will just move on to executing the next test, result
      generation could then end up with no results.json.
      
        Try processing outputs a little more persistently and use any
      output file left behind, even if it is only dmesg.txt. Also, when
      no useful output files are present, add a notrun result instead
      of breaking out.
      
        Inform about processing results for each test so a problem
      can be spotted more easily.
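The fallback described above can be sketched as a small decision helper: use whatever output files survived, and record notrun only when nothing useful is left. This is a minimal illustration, not the runner's actual code; the function and the status strings are hypothetical.

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical sketch: the three flags mirror whether out.txt, err.txt
 * and dmesg.txt survived. Even a lone dmesg.txt is worth parsing rather
 * than dropping the test; only a total absence of output yields notrun. */
static const char *pick_result(bool have_out, bool have_err, bool have_dmesg)
{
	if (!have_out && !have_err && !have_dmesg)
		return "notrun";	/* nothing useful left: record notrun, move on */

	/* at least one output file exists: parse it instead of breaking out */
	return "incomplete";
}
```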
      
      v2: removed ')' from 'notrun\n)' (Kamil)
       using bool var, added more prints about errors (Ryszard)
      v3: reused open_for_reading, removed bool var (Krzysztof)
       closing only positive fds[] in close_outputs(), checking
       file sizes also if all opens succeeded (Kamil)
      v4: reverting to v2 and addressing review comments (Krzysztof)
       closing only already opened file, drop early return when empty
       output files as this changes run status (Kamil)
      v5: reverting to returning false after any out/err/dmesg output
       is missing, simplified later checks and printed logs, fixed
       closing outputs (Kamil)
      
      Cc: Ewelina Musial <ewelina.musial@intel.com>
      Cc: Lucas De Marchi <lucas.demarchi@intel.com>
      Cc: Krzysztof Karas <krzysztof.karas@intel.com>
      Cc: Ryszard Knop <ryszard.knop@intel.com>
      Cc: Petri Latvala <adrinael@adrinael.net>
      Signed-off-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>
      Reviewed-by: Sebastian Brzezinka <sebastian.brzezinka@intel.com>
      Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
      a7465d37
  2. Jan 11, 2023
    • Petri Latvala's avatar
      runner: Correctly handle abort before first test · 79eb8984
      Petri Latvala authored
      
      Don't leave the execution in a "please resume me" state if bootup
      causes an abort condition. In particular, handle abort on bootup
      correctly when resuming, so that the runner doesn't attempt to run
      a test on a tainted kernel if it has been explicitly configured
      not to execute when there's a taint.
      
      v2: Fudge the results directory instead to get the desired results:
          runner exits with nonzero, and resuming exits with "all done" instead
          of executing anything.
      
      v3: Use faccessat instead of open+close, use less magic strings,
          remember to close fds (Chris)
      
      v4: Use GRACEFUL_EXITCODE in monitor_output, remove the 'resuming'
          field (why was it a double?!). (Ryszard)
          Stop trying to execute if all tests are already run, to avoid a
          crash in environment validation.
      
      v5: Remember to git add so the 'resuming' field really gets
          removed. (Kamil)
          Use 0.000 in the printf format directly instead of formatting 0.0
          to %.3f. (Kamil)
      
      Signed-off-by: Petri Latvala <petri.latvala@intel.com>
      Cc: Arkadiusz Hiler <arek@hiler.eu>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Kamil Konieczny <kamil.konieczny@linux.intel.com>
      Cc: Ryszard Knop <ryszard.knop@intel.com>
      Reviewed-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>
      79eb8984
  3. Oct 31, 2022
  4. Jan 22, 2019
    • Petri Latvala's avatar
      runner: Implement --dry-run · 96f3a1b8
      Petri Latvala authored
      
      Actually implement --dry-run to not execute tests. With dry-run
      active, attempting to execute will figure out the list of things to
      execute, serialize them along with settings, and stop. This will be
      useful for CI that wants to post-mortem on failed test rounds to
      generate a list of tests that should have been executed and produce
      json result files (full of 'notrun') for proper statistics.
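The control flow described above can be sketched as follows. Everything here is illustrative (the runner's real settings struct and serialization are far richer): the job list gets saved either way, and with dry-run set, execution stops right after.

```c
#include <stdbool.h>

/* Stand-in counter so the effect of dry-run is observable. */
static int execute_count;

/* Hypothetical sketch of the --dry-run flow: serialize the job list and
 * settings, then either stop (dry-run) or go on to execute the tests. */
static bool process_jobs(bool dry_run, int njobs)
{
	/* serialization of settings and the job list would happen here */
	if (dry_run)
		return true;		/* job list saved; do not run anything */

	execute_count += njobs;		/* stand-in for actually running tests */
	return true;
}
```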
      
      Signed-off-by: Petri Latvala <petri.latvala@intel.com>
      Cc: Andi Shyti <andi.shyti@intel.com>
      Cc: Martin Peres <martin.peres@linux.intel.com>
      Cc: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
      Cc: Tomi Sarvela <tomi.p.sarvela@intel.com>
      Reviewed-by: Martin Peres <martin.peres@linux.intel.com>
      96f3a1b8
  5. Nov 15, 2018
    • Petri Latvala's avatar
      runner: Implement --abort-on-monitored-error · 111593c4
      Petri Latvala authored
      
      Deviating a bit from the piglit command line flag, igt_runner takes an
      optional comma-separated list as an argument to
      --abort-on-monitored-error for the list of conditions to abort
      on. Without a list all possible conditions will be checked.
      
      Two conditions implemented:
       - "taint" checks the kernel taint level for TAINT_PAGE, TAINT_DIE and
       TAINT_OOPS
       - "lockdep" checks the kernel lockdep status
      
      Checking is done after every test binary execution, and if an abort
      condition is met, the reason is printed to stderr (unless log level is
      quiet) and the runner doesn't execute any further tests. Aborting
      between subtests (when running in --multiple-mode) is not done.
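The taint condition can be sketched against /proc/sys/kernel/tainted, which exposes the kernel taint flags as a bitmask. The mask below uses the standard kernel bit positions (5 = bad page, 7 = die, 9 = warn) as an assumption; it is not necessarily the exact set igt_runner checks under its TAINT_PAGE/TAINT_DIE/TAINT_OOPS names.

```c
#include <stdio.h>

/* Illustrative mask of taint bits that should trigger an abort;
 * bit numbers follow the mainline kernel's taint flag layout. */
static unsigned long bad_taint_mask(void)
{
	return (1UL << 5) | (1UL << 7) | (1UL << 9);
}

static int taint_is_bad(unsigned long taints)
{
	return (taints & bad_taint_mask()) != 0;
}

/* Read the current taint bitmask; 0 on any failure. Taints are
 * unsigned long, matching the v2 review note above. */
static unsigned long read_kernel_taints(void)
{
	unsigned long taints = 0;
	FILE *f = fopen("/proc/sys/kernel/tainted", "r");

	if (!f)
		return 0;
	if (fscanf(f, "%lu", &taints) != 1)
		taints = 0;
	fclose(f);
	return taints;
}
```

After each test binary finishes, a check like `taint_is_bad(read_kernel_taints())` would decide whether to abort the run.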
      
      v2:
       - Remember to fclose
       - Taints are unsigned long (Chris)
       - Use getline instead of fgets (Chris)
      v3:
       - Fix brainfart with lockdep
      v4:
       - Rebase
       - Refactor the abort condition checking to pass down strings
       - Present the abort result in results.json as a pseudo test result
       - Unit tests for the pseudo result
      v5:
       - Refactors (Chris)
       - Don't claim lockdep was triggered if debug_locks is not on
         anymore. Just say it's not active.
       - Dump lockdep_stats when aborting due to lockdep (Chris)
       - Use igt@runner@aborted instead for the pseudo result (Martin)
      v6:
       - If aborting after a test, generate results.json. Like was already
         done for aborting at startup.
       - Print the test that would be executed next as well when aborting,
         as requested by Tomi.
      v7:
       - Remove the resolved TODO item from commit message
      
      Signed-off-by: Petri Latvala <petri.latvala@intel.com>
      Cc: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
      Cc: Tomi Sarvela <tomi.p.sarvela@intel.com>
      Cc: Martin Peres <martin.peres@linux.intel.com>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
      111593c4
  6. Oct 19, 2018
  7. Aug 09, 2018
    • Petri Latvala's avatar
      runner: New test runner · 18c1e752
      Petri Latvala authored
      
      This is a new test runner to replace piglit. Piglit has been very
      useful as a test runner, but certain improvements have been very
      difficult, if possible at all, in a generic test-running framework.
      
      Important improvements over piglit:
      
      - Faster to launch. Being able to make assumptions about what we're
        executing makes it possible to save significant amounts of time. For
        example, a testlist file's line "igt@somebinary@somesubtest" already
        has all the information we need to construct the correct command
        line to execute that particular subtest, instead of listing all
        subtests of all test binaries and mapping them to command
        lines. The same goes for the regexp filter command line flags -t
        and -x; if we use -x somebinaryname, we don't need to list subtests
        from somebinaryname, since we already know none of them will be
        executed.
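The testlist shortcut above can be sketched as a tiny parser: the line itself carries binary and subtest, so no test binary ever has to be queried. The helper name and buffer handling are illustrative, not the runner's actual code.

```c
#include <stdio.h>
#include <string.h>

/* Split "igt@somebinary@somesubtest" into binary and subtest.
 * A line with no second '@' selects the whole binary (empty subtest).
 * Returns 0 on success, -1 if the line lacks the "igt@" prefix. */
static int parse_testlist_line(const char *line,
			       char *binary, size_t blen,
			       char *subtest, size_t slen)
{
	const char *at;

	if (strncmp(line, "igt@", 4) != 0)
		return -1;
	line += 4;

	at = strchr(line, '@');
	if (at) {
		snprintf(binary, blen, "%.*s", (int)(at - line), line);
		snprintf(subtest, slen, "%s", at + 1);
	} else {
		snprintf(binary, blen, "%s", line);
		subtest[0] = '\0';
	}
	return 0;
}
```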
      
      - Logs of incomplete tests. Piglit collects test output in memory and
        dumps it to a file when the test is complete. The new runner
        writes all output to disk immediately.
      
      - Ability to execute multiple subtests in one binary execution. This
        was possible with piglit, but its semantics made it very hard to
        implement in practice. For example, having a testlist file not only
        selected a subset of tests to run, but also mandated that they be
        executed in the same order.
      
      - Flexible timeout support. Instead of mandating a time tests cannot
        exceed, the new runner has a timeout on inactivity. Activity is
        any output on the test's stdout or stderr, or kernel activity via
        /dev/kmsg.
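The inactivity timeout from the list above maps naturally onto poll(): poll() returns 0 only when none of the watched fds (the test's stdout/stderr and /dev/kmsg in the runner's case) produced anything within the window, so the clock effectively resets on every sign of life. This is a sketch of the idea, not the runner's actual loop; the helper name and timeout are illustrative.

```c
#include <poll.h>
#include <unistd.h>

/* Wait up to timeout_ms for output on any of the watched fds.
 * Returns -1 on inactivity timeout (time to kill the test),
 * otherwise poll()'s count of fds with pending activity. */
static int wait_for_activity(struct pollfd *fds, int nfds, int timeout_ms)
{
	int ret = poll(fds, nfds, timeout_ms);

	if (ret == 0)
		return -1;	/* no output anywhere within the window */
	return ret;		/* activity seen: the test is still alive */
}
```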
      
      The runner is fairly piglit compatible. The command line is very
      similar, with a few additions. The IGT_TEST_ROOT environment variable
      is still supported, but the test root can also be given on the command
      line (in place of igt.py in a piglit command line).
      
      The results are a set of log files, processed into a piglit-compatible
      results.json file (BZ2 compression TODO). There are some new fields in
      the json for extra information:
      
      - "igt-version" contains the IGT version line. In
        multiple-subtests-mode the version information is only printed once,
        so it needs to be duplicated to all subtest results this way.
      - "dmesg-warnings" contains the dmesg lines that triggered a
        dmesg-warn/dmesg-fail state.
      - Runtime information will be different. Piglit takes a timestamp at
        the beginning and at the end of execution for runtime. The new
        runner uses the subtest output text. The binary execution time will
        also be included; the key "igt@somebinary" will have the runtime of
        the binary "somebinary", whereas "igt@somebinary@a" etc. will have
        the runtimes of the subtests. Subtracting the subtest runtimes from
        the binary runtime yields the total time spent doing setup in
        igt_fixture blocks.
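The runtime bookkeeping described above amounts to a simple subtraction; a sketch with an illustrative helper (the function and the sample values are made up, not from the runner):

```c
/* Fixture/setup time = whole-binary runtime minus the sum of its
 * subtests' runtimes, per the accounting described above. */
static double fixture_time(double binary_runtime,
			   const double *subtest_runtimes, int n)
{
	double sum = 0.0;

	for (int i = 0; i < n; i++)
		sum += subtest_runtimes[i];
	return binary_runtime - sum;
}
```

For example, a binary that ran 5.0 s with subtests of 1.0 s and 2.5 s spent 1.5 s in igt_fixture blocks.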
      
      v2:
       - use clock handling from igt_core instead of copypaste
       - install results binary
       - less magic numbers
       - scanf doesn't give empty strings after all
       - use designated array initialization with _F_JOURNAL and pals
       - add more comments to dump_dmesg
       - use signal in kill_child instead of bool
       - use more 'usual' return values for execute_entry
       - use signal number instead of magic integers
       - use IGT_EXIT_INVALID instead of magic 79
       - properly remove files in clear_test_result_directory()
       - remove magic numbers
       - warn if results directory contains extra files
       - fix naming in matches_any
       - construct command line in a cleaner way in add_subtests()
       - clarify error in filtered_job_list
       - replace single string fprintfs with fputs
       - use getline() more sanely
       - refactor string constants to a shared header
       - explain non-nul-terminated string handling in resultgen
       - saner line parsing
       - rename gen_igt_name to generate_piglit_name
       - clean up parse_result_string
       - explain what we're parsing in resultgen
       - explain the runtime accumulation in add_runtime
       - refactor result overriding
       - stop passing needle sizes to find_line functions
       - refactor stdout/stderr parsing
       - fix regex whitelist compiling
       - add TODO for suppressions.txt
       - refactor dmesg parsing
       - fill_from_journal returns void
       - explain missing result fields with TODO comments
       - log_level parsing with typeof
       - pass stdout/stderr to usage() instead of a bool
       - fix absolute_path overflow
       - refactor settings serialization
       - remove maybe_strdup function
       - refactor job list serialization
       - refactor resuming, add new resume binary
       - catch mmap failure correctly
      
      v3:
       - rename runner to igt_runner, etc
       - add meson option for building the runner
       - use UPPER_CASE names for string constants
       - add TODO comments for future refactoring
       - add a missing close()
       - const correctness where applicable
       - also build with autotools
      
      Signed-off-by: Petri Latvala <petri.latvala@intel.com>
      Reviewed-by: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
      18c1e752