1. 30 Nov, 2018 1 commit
    • tracing/fgraph: Fix set_graph_function from showing interrupts · 5cf99a0f
      Steven Rostedt (VMware) authored
      The tracefs file set_graph_function is used to restrict the function
      graph tracer to the functions listed in that file (or all functions if
      the file is empty). The way this is implemented is that the function
      graph tracer looks at every function, and if the current depth is zero
      and the function matches something in the file, then it will trace
      that function. When other functions are called from it, the depth will
      be greater than zero (because the original function is at depth zero),
      and all functions are traced while the depth is greater than zero.
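
      In pseudo-C, the check amounts to something like the following (a
      minimal sketch; ftrace_graph_addr() is the helper that consults the
      set_graph_function list, the surrounding logic is paraphrased, not
      the exact kernel code):

        /* Sketch of the depth-based filtering described above. */
        static int graph_entry(struct ftrace_graph_ent *trace)
        {
                if (trace->depth == 0 && !ftrace_graph_addr(trace->func))
                        return 0;  /* top-level function not listed: skip */
                return 1;  /* depth > 0: assume a listed function is above */
        }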
      
      The issue is that when a function is first entered, and the handler
      that checks this logic is called, the depth is set to zero. If an
      interrupt comes in and a function in the interrupt handler is traced,
      its depth will be greater than zero and it will automatically be
      traced, even if the original function was not. Because the logic only
      looks at depth, it may trace interrupts when it should not.
      
      The recent design change of the function graph tracer to fix other
      bugs causes the depth to be zero for a longer time while the function
      graph callback handler is being called, widening the window for this
      race. The bug has actually been there much longer, but because the
      race window was so small it seldom happened. The Fixes tag below is
      for the commit that widened the race window, because that commit
      belongs to a series that will also help fix the original bug.
      
      Cc: stable@kernel.org
      Fixes: 39eb456d ("function_graph: Use new curr_ret_depth to manage depth instead of curr_ret_stack")
      Reported-by: Joe Lawrence <joe.lawrence@redhat.com>
      Tested-by: Joe Lawrence <joe.lawrence@redhat.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  2. 16 Aug, 2018 1 commit
  3. 10 Aug, 2018 2 commits
    • tracing: More reverting of "tracing: Centralize preemptirq tracepoints and unify their usage" · 3f1756dc
      Steven Rostedt (VMware) authored
      Joel Fernandes created a nice patch that cleaned up the duplicate
      hooks used by lockdep and the irqsoff latency tracer. It made both use
      tracepoints. But the latency tracer triggers warnings when tracepoints
      are used to call into its routines, mainly because they can be called
      from NMI context. If that happens, SRCU may not work properly, because
      on some architectures SRCU is not safe to be called in both NMI and
      non-NMI context.
      
      This is a partial revert of the cleanup patch c3bc8fd6 ("tracing:
      Centralize preemptirq tracepoints and unify their usage") that adds
      back the direct calls into the latency tracer. It also calls the trace
      events only when not in NMI.
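
      The restored shape is roughly the following (a sketch assuming the
      guard wraps the trace event call; tracer_hardirqs_off() is the direct
      latency-tracer hook and trace_irq_disable_rcuidle() the trace event):

        /* The direct call into the latency tracer comes back; the
         * SRCU-protected trace event is skipped in NMI context. */
        tracer_hardirqs_off(CALLER_ADDR0, CALLER_ADDR1);
        if (!in_nmi())
                trace_irq_disable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);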
      
      Link: http://lkml.kernel.org/r/20180809210654.622445925@goodmis.org
      Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
      Fixes: c3bc8fd6 ("tracing: Centralize preemptirq tracepoints and unify their usage")
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    • tracing/irqsoff: Handle preempt_count for different configs · f27107fa
      Steven Rostedt (VMware) authored
      I was hitting the following warning:
      
      WARNING: CPU: 0 PID: 1 at kernel/trace/trace_irqsoff.c:631 tracer_hardirqs_off+0x15/0x2a
      
      Modules linked in:
      CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.18.0-rc6-test+ #13
      Hardware name: MSI MS-7823/CSM-H87M-G43 (MS-7823), BIOS V1.6 02/22/2014
      EIP: tracer_hardirqs_off+0x15/0x2a
      Code: ff 85 c0 74 0e 8b 45 00 8b 50 04 8b 45 04 e8 35 ff ff ff 5d c3 55 64 a1 cc 37 51 c1 a9 ff ff ff 7f 89 e5 53 89 d3 89 ca 75 02 <0f> 0b e8 90 fc ff ff 85 c0 74 07 89 d8 e8 0c ff ff ff 5b 5d c3 55
      EAX: 80000000 EBX: c04337f0 ECX: c04338e3 EDX: c04338e3
      ESI: c04337f0 EDI: c04338e3 EBP: f2aa1d68 ESP: f2aa1d64
      DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068 EFLAGS: 00210046
      CR0: 80050033 CR2: 00000000 CR3: 01668000 CR4: 001406f0
      Call Trace:
       trace_irq_disable_rcuidle+0x63/0x6c
       trace_hardirqs_off+0x26/0x30
       default_send_IPI_mask_allbutself_logical+0x31/0x93
       default_send_IPI_allbutself+0x37/0x48
       native_send_call_func_ipi+0x4d/0x6a
       smp_call_function_many+0x165/0x19d
       ? add_nops+0x34/0x34
       ? trace_hardirqs_on_caller+0x2d/0x2d
       ? add_nops+0x34/0x34
       smp_call_function+0x1f/0x23
       on_each_cpu+0x15/0x43
       ? trace_hardirqs_on_caller+0x2d/0x2d
       ? trace_hardirqs_on_caller+0x2d/0x2d
       ? trace_irq_disable_rcuidle+0x1/0x6c
       text_poke_bp+0xa0/0xc2
       ? trace_hardirqs_on_caller+0x2d/0x2d
       arch_jump_label_transform+0xa7/0xcb
       ? trace_irq_disable_rcuidle+0x5/0x6c
       __jump_label_update+0x3e/0x6d
       jump_label_update+0x7d/0x81
       static_key_slow_inc_cpuslocked+0x58/0x6d
       static_key_slow_inc+0x19/0x20
       tracepoint_probe_register_prio+0x19e/0x1d1
       ? start_critical_timings+0x1c/0x1c
       tracepoint_probe_register+0xf/0x11
       irqsoff_tracer_init+0x21/0xf2
       tracer_init+0x16/0x1a
       trace_selftest_startup_irqsoff+0x25/0xc4
       run_tracer_selftest+0xca/0x131
       register_tracer+0xd5/0x172
       ? trace_event_define_fields_preemptirq_template+0x45/0x45
       init_irqsoff_tracer+0xd/0x11
       do_one_initcall+0xab/0x1e8
       ? rcu_read_lock_sched_held+0x3d/0x44
       ? trace_initcall_level+0x52/0x86
       kernel_init_freeable+0x195/0x21a
       ? rest_init+0xb4/0xb4
       kernel_init+0xd/0xe4
       ret_from_fork+0x2e/0x38
      
      It is due to running a CONFIG_PREEMPT_VOLUNTARY kernel, which would trigger
      this warning every time:
      
      	WARN_ON_ONCE(preempt_count());
      
      Because on CONFIG_PREEMPT_VOLUNTARY, preempt_count() is always zero.
      
      This warning is there to make sure preempt_count is set, because
      tracer_hardirqs_on() does a preempt_enable_notrace() to make
      preempt_trace() work properly: when called from a trace event, the
      trace event code has disabled preemption, and the tracer wants to know
      what the preemption state was before it was called.
      
      Instead of enabling preemption like this, just record the
      preempt_count, subtract PREEMPT_DISABLE_OFFSET from it (which is zero
      with !CONFIG_PREEMPT set), and pass that value to the functions that
      need it, which should use the passed-in parameter instead of calling
      preempt_count() directly.
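
      A sketch of the idea (PREEMPT_DISABLE_OFFSET is the real macro; the
      handler shape and the callee name are illustrative only):

        /* Read the count once, cancel out the trace event's own
         * preempt_disable (the offset is 0 on !CONFIG_PREEMPT), and pass
         * the result down instead of re-reading preempt_count(). */
        static void tracer_hardirqs_off(void *none, unsigned long a0,
                                        unsigned long a1)
        {
                unsigned int pc = preempt_count() - PREEMPT_DISABLE_OFFSET;

                irqsoff_tracer_func(a0, a1, pc);  /* hypothetical callee */
        }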
      
      Fixes: da5b3ebb ("tracing: irqsoff: Account for additional preempt_disable")
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  4. 07 Aug, 2018 1 commit
  5. 31 Jul, 2018 1 commit
    • tracing: Centralize preemptirq tracepoints and unify their usage · c3bc8fd6
      Joel Fernandes (Google) authored
      This patch detaches the preemptirq tracepoints from the tracers and
      keeps them separate.
      
      Advantages:
      * Lockdep and the irqsoff tracer can now run in parallel, since they
        no longer have their own calls.

      * This unifies the use case of adding hooks to the irqsoff/irqson and
        preemptoff/preempton events.
        3 users of the events exist:
        - Lockdep
        - irqsoff and preemptoff tracers
        - irqs and preempt trace events
      
      The unification cleans up several ifdefs and makes the code in the
      preempt and irqsoff tracers simpler. It gets rid of all the horrific
      ifdeffery around PROVE_LOCKING and makes configuration of the
      different users of the tracepoints easier and more understandable. It
      also gets rid of the time_* function calls from the lockdep hooks that
      were used to call into the preemptirq tracer, which are no longer
      needed. The negative delta in lines of code in this patch is quite
      large too.
      
      The patch introduces a new CONFIG option, PREEMPTIRQ_TRACEPOINTS, as a
      single point for registering probes onto the tracepoints. With this,
      the web of config options for the preempt/irq toggle tracepoints and
      their users becomes:
      
       PREEMPT_TRACER   PREEMPTIRQ_EVENTS  IRQSOFF_TRACER PROVE_LOCKING
             |                 |     \         |           |
             \    (selects)    /      \        \ (selects) /
            TRACE_PREEMPT_TOGGLE       ----> TRACE_IRQFLAGS
                            \                  /
                             \ (depends on)   /
                           PREEMPTIRQ_TRACEPOINTS
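
      With PREEMPTIRQ_TRACEPOINTS as the single registration point, a user
      hooks in roughly like this (a sketch; my_probe is hypothetical and its
      signature mirrors the preemptirq tracepoint prototype):

        /* Hypothetical probe on the unified irq_disable tracepoint. */
        static void my_probe(void *data, unsigned long ip,
                             unsigned long parent_ip)
        {
                /* react to interrupts being disabled at ip */
        }

        register_trace_irq_disable(my_probe, NULL);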
      
      Other than the performance tests mentioned in the previous patch, I
      also ran the locking API test suite and verified that all test cases
      pass.

      I also injected issues by not registering the lockdep probes onto the
      tracepoints, and saw failures, confirming that the probes are indeed
      working.
      
      This series + lockdep probes not registered (just to inject errors):
      [    0.000000]      hard-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
      [    0.000000]      soft-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
      [    0.000000]        sirq-safe-A => hirqs-on/12:FAILED|FAILED|  ok  |
      [    0.000000]        sirq-safe-A => hirqs-on/21:FAILED|FAILED|  ok  |
      [    0.000000]          hard-safe-A + irqs-on/12:FAILED|FAILED|  ok  |
      [    0.000000]          soft-safe-A + irqs-on/12:FAILED|FAILED|  ok  |
      [    0.000000]          hard-safe-A + irqs-on/21:FAILED|FAILED|  ok  |
      [    0.000000]          soft-safe-A + irqs-on/21:FAILED|FAILED|  ok  |
      [    0.000000]     hard-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
      [    0.000000]     soft-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
      
      With this series + lockdep probes registered, all locking tests pass:
      
      [    0.000000]      hard-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
      [    0.000000]      soft-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
      [    0.000000]        sirq-safe-A => hirqs-on/12:  ok  |  ok  |  ok  |
      [    0.000000]        sirq-safe-A => hirqs-on/21:  ok  |  ok  |  ok  |
      [    0.000000]          hard-safe-A + irqs-on/12:  ok  |  ok  |  ok  |
      [    0.000000]          soft-safe-A + irqs-on/12:  ok  |  ok  |  ok  |
      [    0.000000]          hard-safe-A + irqs-on/21:  ok  |  ok  |  ok  |
      [    0.000000]          soft-safe-A + irqs-on/21:  ok  |  ok  |  ok  |
      [    0.000000]     hard-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
      [    0.000000]     soft-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
      
      Link: http://lkml.kernel.org/r/20180730222423.196630-4-joel@joelfernandes.org
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Namhyung Kim <namhyung@kernel.org>
      Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  6. 26 Jul, 2018 1 commit
  7. 10 Oct, 2017 1 commit
  8. 06 Oct, 2017 1 commit
  9. 25 Dec, 2016 1 commit
  10. 09 Dec, 2016 1 commit
    • tracing/fgraph: Have wakeup and irqsoff tracers ignore graph functions too · 1a414428
      Steven Rostedt (Red Hat) authored
      Currently both the wakeup and irqsoff tracers do not handle
      set_graph_notrace well. The ftrace infrastructure will ignore the
      return paths of all functions, leaving them hanging without an end:
      
        # echo '*spin*' > set_graph_notrace
        # cat trace
        [...]
                _raw_spin_lock() {
                  preempt_count_add() {
                  do_raw_spin_lock() {
                update_rq_clock();
      
      Where the '*spin*' functions should have looked like this:
      
                _raw_spin_lock() {
                  preempt_count_add();
                  do_raw_spin_lock();
                }
                update_rq_clock();
      
      Instead, have the wakeup and irqsoff tracers ignore the functions that
      are set by set_graph_notrace, like the function_graph tracer does.
      Move the logic from the function_graph tracer into a header to allow
      the wakeup and irqsoff tracers to use it as well.
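
      The shared check is essentially the following (a sketch;
      ftrace_graph_notrace_addr() is the existing helper that consults
      set_graph_notrace):

        /* In the graph-entry callback, skip functions the user listed in
         * set_graph_notrace, as the function_graph tracer already does. */
        if (ftrace_graph_notrace_addr(trace->func))
                return 0;  /* suppress the entry (and thus the exit) event */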
      
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  11. 18 Mar, 2016 2 commits
    • tracing: Remove redundant reset per-CPU buff in irqsoff tracer · 741f3a69
      Dmitry Safonov authored
      There is no reason to do it twice: since commit b6f11df2
      ("trace: Call tracing_reset_online_cpus before tracer->init()"), the
      per-CPU buffers are reset before tracer->init() is called.

      tracer->init() calls {irqs,preempt,preemptirqs}off_tracer_init(),
      which calls __irqsoff_tracer_init(), which resets the per-CPU ring
      buffer a second time. It's a slow path, but there is still no reason
      to do the work twice.
      
      Link: http://lkml.kernel.org/r/1445278226-16187-1-git-send-email-0x7f454c46@gmail.com
      Signed-off-by: Dmitry Safonov <0x7f454c46@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Have preempt(irqs)off trace preempt disabled functions · cb86e053
      Steven Rostedt (Red Hat) authored
      Joel Fernandes reported that function tracing of preempt-disabled
      sections was not being reported when running either the preemptirqsoff
      or preemptoff tracers. This was due to the fact that the function
      tracer callback for those tracers checked whether irqs were disabled
      before tracing. But this fails when we want to trace preempt-off
      locations as well.

      Joel explained that he wanted to see functions where interrupts are
      enabled but preemption is disabled. The expected output he wanted:
      
         <...>-2265    1d.h1 3419us : preempt_count_sub <-irq_exit
         <...>-2265    1d..1 3419us : __do_softirq <-irq_exit
         <...>-2265    1d..1 3419us : msecs_to_jiffies <-__do_softirq
         <...>-2265    1d..1 3420us : irqtime_account_irq <-__do_softirq
         <...>-2265    1d..1 3420us : __local_bh_disable_ip <-__do_softirq
         <...>-2265    1..s1 3421us : run_timer_softirq <-__do_softirq
         <...>-2265    1..s1 3421us : hrtimer_run_pending <-run_timer_softirq
         <...>-2265    1..s1 3421us : _raw_spin_lock_irq <-run_timer_softirq
         <...>-2265    1d.s1 3422us : preempt_count_add <-_raw_spin_lock_irq
         <...>-2265    1d.s2 3422us : _raw_spin_unlock_irq <-run_timer_softirq
         <...>-2265    1..s2 3422us : preempt_count_sub <-_raw_spin_unlock_irq
         <...>-2265    1..s1 3423us : rcu_bh_qs <-__do_softirq
         <...>-2265    1d.s1 3423us : irqtime_account_irq <-__do_softirq
         <...>-2265    1d.s1 3423us : __local_bh_enable <-__do_softirq
      
      There's a comment saying that the irq-disabled check exists because of
      a possible race where tracing_cpu may be set while the function is
      executed. But I don't remember that race. For now, I added a check for
      preemption being enabled too, so the function is not recorded in that
      case, as there would be no race then. I need to re-investigate this,
      as I'm now thinking that tracing_cpu will always be correct. But
      there's no harm in keeping the check for now, except for the slight
      performance hit.
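
      The adjusted prologue check is essentially the following (a sketch
      close to the fix described above; irqs_disabled_flags() is the real
      helper):

        /* Bail out only when BOTH irqs and preemption are enabled;
         * previously only the irq state was consulted. */
        if (!irqs_disabled_flags(*flags) && !preempt_count())
                return 0;  /* nothing is disabled, nothing to trace */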
      
      Link: http://lkml.kernel.org/r/1457770386-88717-1-git-send-email-agnel.joel@gmail.com
      
      Fixes: 5e6d2b9c "tracing: Use one prologue for the preempt irqs off tracer function tracers"
      Cc: stable@vger.kernel.org # 2.6.37+
      Reported-by: Joel Fernandes <agnel.joel@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  12. 02 Nov, 2015 1 commit
  13. 30 Sep, 2015 1 commit
    • tracing: Move trace_flags from global to a trace_array field · 983f938a
      Steven Rostedt (Red Hat) authored
      In preparation for making trace options per instance, trace_flags
      needs to be moved from being a global variable to a field within the
      per-instance trace_array structure.
      
      There's still more work to do, as there are some functions that use
      trace_flags without passing in a way to get to the current
      trace_array. For those, the global_trace is used directly (from
      trace.c). This includes setting and clearing the trace_flags, which
      means that when a new instance is created, it just gets the
      trace_flags of the global_trace and will not be able to modify them.
      Depending on which functions have access to the trace_array, the flags
      of an instance may not affect parts of its trace where the
      global_trace is used. These will be fixed in future changes.
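
      After the move, a flag check goes through the instance, roughly like
      this (a sketch; TRACE_ITER_PRINTK is one real example flag):

        /* Flag tests consult the trace_array instance instead of the old
         * global trace_flags variable. */
        static bool instance_printk_enabled(struct trace_array *tr)
        {
                return tr->trace_flags & TRACE_ITER_PRINTK;
        }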
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  14. 29 Sep, 2015 3 commits
  15. 22 Jan, 2015 1 commit
  16. 21 Apr, 2014 3 commits
  17. 20 Feb, 2014 2 commits
  18. 14 Feb, 2014 1 commit
  19. 02 Jul, 2013 1 commit
    • tracing: Use flag buffer_disabled for irqsoff tracer · 10246fa3
      Steven Rostedt (Red Hat) authored
      If the ring buffer is disabled and the irqsoff tracer records a trace,
      it will clear out its buffer and lose the data it had previously
      recorded.

      Currently there's a callback when writing to the tracing_on file, but
      if tracing is disabled via the function tracer trigger, it will not
      inform the irqsoff tracer to stop recording.
      
      By using the "mirror" flag (buffer_disabled) in the trace_array, that keeps
      track of the status of the trace_array's buffer, it gives the irqsoff
      tracer a fast way to know if it should record a new trace or not.
      The flag may be a little behind the real state of the buffer, but it
      should not affect the trace too much. It's more important for the irqsoff
      tracer to be fast.
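
      The fast path then looks roughly like this (a sketch; buffer_disabled
      is the flag named above):

        /* Cheap check before recording, using the mirror flag rather than
         * querying the ring buffer itself. */
        if (unlikely(tr->buffer_disabled))
                return;  /* buffer is (probably) off: skip this trace */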
      Reported-by: Dave Jones <davej@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  20. 15 Mar, 2013 5 commits
    • tracing: Add function-trace option to disable function tracing of latency tracers · 328df475
      Steven Rostedt (Red Hat) authored
      Currently, the only way to stop the latency tracers from doing function
      tracing is to fully disable the function tracer from the proc file
      system:
      
        echo 0 > /proc/sys/kernel/ftrace_enabled
      
      This is a big-hammer approach, as it disables function tracing for all
      users, including kprobes, perf, the stack tracer, etc.
      
      Instead, create a function-trace option that the latency tracers can
      check to determine whether to enable function tracing. This option can
      be set or cleared even while a tracer is active, and the tracer will
      disable or enable function tracing as the option changes.
      
      Instead of using the proc file, disable latency function tracing with
      
        echo 0 > /debug/tracing/options/function-trace
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Clark Williams <williams@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Consolidate max_tr into main trace_array structure · 12883efb
      Steven Rostedt (Red Hat) authored
      Currently, the way the latency tracers and snapshot feature works
      is to have a separate trace_array called "max_tr" that holds the
      snapshot buffer. For latency tracers, this snapshot buffer is used
      to swap the running buffer with this buffer to save the current max
      latency.
      
      The only items needed for the max_tr are really just a copy of the
      buffer itself, the per_cpu data pointers, the time_start timestamp
      that states when the max latency was triggered, and the cpu that the
      max latency was triggered on. All other fields in trace_array are
      unused by the max_tr, making the max_tr mostly bloat.
      
      This change removes the max_tr completely, and adds a new structure
      called trace_buffer, that holds the buffer pointer, the per_cpu data
      pointers, the time_start timestamp, and the cpu where the latency occurred.
      
      The trace_array now has two trace_buffers, one for the normal trace
      and one for the max trace or snapshot. By doing this, not only do we
      remove the bloat from the max_tr, but trace instances can now use
      their own snapshot feature, rather than only the top-level
      global_trace having the snapshot feature and latency tracers to
      itself.
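
      The described split looks roughly like this (field names are
      approximated from the description above, not copied from the source):

        /* The snapshot state shrinks to just what max_tr actually used. */
        struct trace_buffer {
                struct ring_buffer              *buffer;
                struct trace_array_cpu __percpu *data;
                u64                             time_start; /* when the max hit */
                int                             cpu;        /* where it hit */
        };

        struct trace_array {
                struct trace_buffer trace_buffer; /* the normal trace */
                struct trace_buffer max_buffer;   /* max/snapshot trace */
                /* ... the rest of trace_array ... */
        };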
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Replace the static global per_cpu arrays with allocated per_cpu · a7603ff4
      Steven Rostedt authored
      The global and max-tr currently use static per_cpu arrays for the CPU
      data descriptors. But in order to support newly allocated
      trace_arrays, these need to be allocated per_cpu arrays. Instead of
      using the static arrays, switch the global and max-tr to use allocated
      data.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Encapsulate global_trace and remove dependencies on global vars · 2b6080f2
      Steven Rostedt authored
      The global_trace variable in kernel/trace/trace.c has been kept 'static' and
      local to that file so that it would not be used too much outside of that
      file. This has paid off, even though there were lots of changes to make
      the trace_array structure more generic (not depending on global_trace).
      
      Removing a lot of the direct usages of global_trace is needed to be
      able to create more trace_arrays so that we can add multiple buffers.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Prevent buffer overwrite disabled for latency tracers · 613f04a0
      Steven Rostedt (Red Hat) authored
      The latency tracers require the buffers to be in overwrite mode,
      otherwise they get screwed up. Force the buffers to stay in overwrite
      mode when latency tracers are enabled.
      
      Added a flag_changed() method to the tracer structure to allow the
      tracers to see what flags are being changed, and also to be able to
      prevent the change from happening.
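
      A hypothetical flag_changed() implementation for a latency tracer (the
      signature and return value are assumed from the description above):

        /* Veto any attempt to clear the overwrite flag while active. */
        static int lat_flag_changed(struct tracer *trace, u32 mask, int set)
        {
                if (mask == TRACE_ITER_OVERWRITE && !set)
                        return -EBUSY;  /* overwrite mode must stay on */
                return 0;
        }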
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  21. 06 Dec, 2012 1 commit
  22. 31 Oct, 2012 2 commits
  23. 31 Jul, 2012 1 commit
    • ftrace: Add default recursion protection for function tracing · 4740974a
      Steven Rostedt authored
      As more users of the function tracer utility are added, they do not
      always add the necessary recursion protection. To protect against
      function recursion due to tracing, if the callback ftrace_ops does not
      explicitly state that it protects against recursion (by setting the
      FTRACE_OPS_FL_RECURSION_SAFE flag), the list operation will be called
      by the mcount trampoline, which adds recursion protection.

      If the flag is set, then the function will be called directly with no
      extra protection.
      
      Note, the list operation is called if more than one function callback
      is registered, or if the arch does not support all of the function
      tracer features.
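
      A callback opting out of the wrapper would look roughly like this (a
      sketch; my_tracer_func is hypothetical, the flag is the one named
      above):

        /* This ftrace_ops declares that it handles its own recursion, so
         * it is called directly instead of through the list operation. */
        static struct ftrace_ops my_ops = {
                .func   = my_tracer_func,
                .flags  = FTRACE_OPS_FL_RECURSION_SAFE,
        };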
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  24. 19 Jul, 2012 2 commits
  25. 07 Nov, 2011 1 commit
    • tracing/latency: Fix header output for latency tracers · 7e9a49ef
      Jiri Olsa authored
      In case neither the graph tracer (CONFIG_FUNCTION_GRAPH_TRACER) nor
      the function tracer (CONFIG_FUNCTION_TRACER) is set, the latency
      tracers do not display a proper latency header.
      
      The involved/fixed latency tracers are:
              wakeup_rt
              wakeup
              preemptirqsoff
              preemptoff
              irqsoff
      
      The patch adds proper handling of the tracer configuration options for
      latency tracers, displaying the correct header info accordingly.
      
      * The current output (for wakeup tracer) with both graph and function
        tracers disabled is:
      
        # tracer: wakeup
        #
          <idle>-0       0d.h5    1us+:      0:120:R   + [000]     7:  0:R watchdog/0
          <idle>-0       0d.h5    3us+: ttwu_do_activate.clone.1 <-try_to_wake_up
          ...
      
      * The fixed output is:
      
        # tracer: wakeup
        #
        # wakeup latency trace v1.1.5 on 3.1.0-tip+
        # --------------------------------------------------------------------
        # latency: 55 us, #4/4, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
        #    -----------------
        #    | task: migration/0-6 (uid:0 nice:0 policy:1 rt_prio:99)
        #    -----------------
        #
        #                  _------=> CPU#
        #                 / _-----=> irqs-off
        #                | / _----=> need-resched
        #                || / _---=> hardirq/softirq
        #                ||| / _--=> preempt-depth
        #                |||| /     delay
        #  cmd     pid   ||||| time  |   caller
        #     \   /      |||||  \    |   /
             cat-1129    0d..4    1us :   1129:120:R   + [000]     6:  0:R migration/0
             cat-1129    0d..4    2us+: ttwu_do_activate.clone.1 <-try_to_wake_up
      
      * The current output (for wakeup tracer) with only function
        tracer enabled is:
      
        # tracer: wakeup
        #
             cat-1140    0d..4    1us+:   1140:120:R   + [000]     6:  0:R migration/0
             cat-1140    0d..4    2us : ttwu_do_activate.clone.1 <-try_to_wake_up
      
      * The fixed output is:

        # tracer: wakeup
        #
        # wakeup latency trace v1.1.5 on 3.1.0-tip+
        # --------------------------------------------------------------------
        # latency: 207 us, #109/109, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
        #    -----------------
        #    | task: watchdog/1-12 (uid:0 nice:0 policy:1 rt_prio:99)
        #    -----------------
        #
        #                  _------=> CPU#
        #                 / _-----=> irqs-off
        #                | / _----=> need-resched
        #                || / _---=> hardirq/softirq
        #                ||| / _--=> preempt-depth
        #                |||| /     delay
        #  cmd     pid   ||||| time  |   caller
        #     \   /      |||||  \    |   /
          <idle>-0       1d.h5    1us+:      0:120:R   + [001]    12:  0:R watchdog/1
          <idle>-0       1d.h5    3us : ttwu_do_activate.clone.1 <-try_to_wake_up
      
      Link: http://lkml.kernel.org/r/20111107150849.GE1807@m.brq.redhat.com
      
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  26. 22 Sep, 2011 1 commit
  27. 13 Sep, 2011 1 commit