1. 22 Nov, 2018 3 commits
    • perf/x86/intel: Disallow precise_ip on BTS events · 472de49f
      Jiri Olsa authored
      Vince reported a crash in the BTS flush code when touching the callchain
      data, which was supposed to be initialized as an 'early' callchain,
      but intel_pmu_drain_bts_buffer() does not do that:
      
        BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
        ...
        Call Trace:
         <IRQ>
         intel_pmu_drain_bts_buffer+0x151/0x220
         ? intel_get_event_constraints+0x219/0x360
         ? perf_assign_events+0xe2/0x2a0
         ? select_idle_sibling+0x22/0x3a0
         ? __update_load_avg_se+0x1ec/0x270
         ? enqueue_task_fair+0x377/0xdd0
         ? cpumask_next_and+0x19/0x20
         ? load_balance+0x134/0x950
         ? check_preempt_curr+0x7a/0x90
         ? ttwu_do_wakeup+0x19/0x140
         x86_pmu_stop+0x3b/0x90
         x86_pmu_del+0x57/0x160
         event_sched_out.isra.106+0x81/0x170
         group_sched_out.part.108+0x51/0xc0
         __perf_event_disable+0x7f/0x160
         event_function+0x8c/0xd0
         remote_function+0x3c/0x50
         flush_smp_call_function_queue+0x35/0xe0
         smp_call_function_single_interrupt+0x3a/0xd0
         call_function_single_interrupt+0xf/0x20
         </IRQ>
      
      It was triggered by a fuzzer but can be easily reproduced by:
      
        # perf record -e cpu/branch-instructions/pu -g -c 1
      
      Peter suggested not to allow branch tracing for precise events:
      
       > Now arguably, this is really stupid behaviour. Who in his right mind
       > wants callchain output on BTS entries. And even if they do, BTS +
       > precise_ip is nonsensical.
       >
       > So in my mind disallowing precise_ip on BTS would be the simplest fix.
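      The fix is a simple guard at event init time. A minimal sketch,
      assuming the intel_pmu_bts_config() helper introduced by the
      commits below (not the verbatim patch):

        static int intel_pmu_bts_config(struct perf_event *event)
        {
                struct perf_event_attr *attr = &event->attr;

                if (unlikely(intel_pmu_has_bts(event))) {
                        /* BTS is not supported by this architecture. */
                        if (!x86_pmu.bts_active)
                                return -EOPNOTSUPP;

                        /* BTS is currently only allowed for user-mode. */
                        if (!attr->exclude_kernel)
                                return -EOPNOTSUPP;

                        /* BTS is not allowed for precise events. */
                        if (attr->precise_ip)
                                return -EOPNOTSUPP;
                }

                return 0;
        }

      With such a check the reproducer above fails at perf_event_open()
      time with -EOPNOTSUPP instead of crashing in the BTS flush path.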
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Reported-by: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: <stable@vger.kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: 6cbc304f ("perf/x86/intel: Fix unwind errors from PEBS entries (mk-II)")
      Link: http://lkml.kernel.org/r/20181121101612.16272-3-jolsa@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel: Add generic branch tracing check to intel_pmu_has_bts() · 67266c10
      Jiri Olsa authored
      Currently we check for branch tracing only by looking for the
      PERF_COUNT_HW_BRANCH_INSTRUCTIONS event of PERF_TYPE_HARDWARE
      type. But the same event can also be defined with the
      PERF_TYPE_RAW type.

      Change the intel_pmu_has_bts() code to check the event's final
      hw config value, so both HW types are covered, as sketched below.

      Add unlikely() to the intel_pmu_has_bts() condition calls, because
      that is what the original code in intel_bts_constraints() used.
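      A sketch of the reworked check, assuming the upstream helper
      names; the key point is comparing hwc->config, which is final for
      both PERF_TYPE_HARDWARE and PERF_TYPE_RAW events:

        static inline bool intel_pmu_has_bts_period(struct perf_event *event,
                                                    u64 period)
        {
                struct hw_perf_event *hwc = &event->hw;
                unsigned int hw_event, bts_event;

                if (event->attr.freq)
                        return false;

                /* Compare the final hw config, not attr.type/attr.config. */
                hw_event = hwc->config & INTEL_ARCH_EVENT_MASK;
                bts_event = x86_pmu.event_map(PERF_COUNT_HW_BRANCH_INSTRUCTIONS);

                return unlikely(hw_event == bts_event && period == 1);
        }

        static inline bool intel_pmu_has_bts(struct perf_event *event)
        {
                return intel_pmu_has_bts_period(event, event->hw.sample_period);
        }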
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: <stable@vger.kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Link: http://lkml.kernel.org/r/20181121101612.16272-2-jolsa@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel: Move branch tracing setup to the Intel-specific source file · ed6101bb
      Jiri Olsa authored
      Move the branch tracing setup into a separate intel_pmu_bts_config()
      function in the Intel core object, because it is Intel-specific.
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: <stable@vger.kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Link: http://lkml.kernel.org/r/20181121101612.16272-1-jolsa@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  2. 16 Oct, 2018 1 commit
    • perf/x86/intel: Export mem events only if there's PEBS support · d4ae5529
      Jiri Olsa authored
      Memory events depend on PEBS support and access to the LDLAT MSR,
      but we display them in /sys/devices/cpu/events even if the CPU does
      not provide those, as for KVM guests.

      That brings the false assumption that those events should be
      available, while they actually fail to open.

      Separate the mem-* event attributes and merge them with cpu_events
      only if PEBS support is detected, as sketched below.

      We could also check if the LDLAT MSR is available, but the PEBS
      check seems to cover the need for now.
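      A minimal sketch of the idea, with assumed attribute array names
      (merge_attr() existed in the x86 events core at the time):

        static struct attribute *mem_events_attrs[] = {
                EVENT_PTR(mem_ld_hsw),          /* mem-loads */
                EVENT_PTR(mem_st_hsw),          /* mem-stores */
                NULL,
        };

        /* ...in intel_pmu_init(), once PEBS has been detected: */
        if (x86_pmu.pebs)
                x86_pmu.cpu_events = merge_attr(x86_pmu.cpu_events,
                                                mem_events_attrs);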
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michael Petlan <mpetlan@redhat.com>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Link: http://lkml.kernel.org/r/20180906135748.GC9577@krava
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  3. 02 Oct, 2018 7 commits
    • perf/x86/intel: Add quirk for Goldmont Plus · 7c5314b8
      Kan Liang authored
      A microcode patch is needed for Goldmont Plus while the counter
      freezing feature is enabled. Otherwise there will be issues, e.g.
      a PMI flood with some events.

      Add a quirk to check the microcode version. If the system starts
      with the wrong microcode, leave the counter-freezing feature
      permanently disabled, as sketched below.
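      A sketch of such a quirk, with an assumed minimum-revision
      constant (MIN_UCODE_REV is hypothetical):

        static __init void intel_glp_counter_freezing_quirk(void)
        {
                /* Boot microcode older than the known-good revision? */
                if (boot_cpu_data.microcode < MIN_UCODE_REV) {
                        pr_cont("disabling counter freezing due to old microcode, ");
                        x86_pmu.counter_freezing = false;
                }
        }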
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Link: http://lkml.kernel.org/r/1533712328-2834-3-git-send-email-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/cpu: Sanitize FAM6_ATOM naming · f2c4db1b
      Peter Zijlstra authored
      Going primarily by:
      
        https://en.wikipedia.org/wiki/List_of_Intel_Atom_microprocessors
      
      with additional information gleaned from other related pages; notably:
      
       - the Bonnell shrink was called Saltwell
       - Moorefield is the Merrifield refresh, which makes it Airmont
      
      The general naming scheme is: FAM6_ATOM_UARCH_SOCTYPE
      
        for i in `git grep -l FAM6_ATOM` ; do
      	sed -i  -e 's/ATOM_PINEVIEW/ATOM_BONNELL/g'		\
      		-e 's/ATOM_LINCROFT/ATOM_BONNELL_MID/'		\
      		-e 's/ATOM_PENWELL/ATOM_SALTWELL_MID/g'		\
      		-e 's/ATOM_CLOVERVIEW/ATOM_SALTWELL_TABLET/g'	\
      		-e 's/ATOM_CEDARVIEW/ATOM_SALTWELL/g'		\
      		-e 's/ATOM_SILVERMONT1/ATOM_SILVERMONT/g'	\
      		-e 's/ATOM_SILVERMONT2/ATOM_SILVERMONT_X/g'	\
      		-e 's/ATOM_MERRIFIELD/ATOM_SILVERMONT_MID/g'	\
      		-e 's/ATOM_MOOREFIELD/ATOM_AIRMONT_MID/g'	\
      		-e 's/ATOM_DENVERTON/ATOM_GOLDMONT_X/g'		\
      		-e 's/ATOM_GEMINI_LAKE/ATOM_GOLDMONT_PLUS/g' ${i}
        done
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: dave.hansen@linux.intel.com
      Cc: len.brown@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel: Add a separate Arch Perfmon v4 PMI handler · af3bdb99
      Andi Kleen authored
      Implement counter freezing for Arch Perfmon v4 (Skylake and
      newer). This allows speeding up the PMI handler by avoiding
      unnecessary MSR writes, and makes it more accurate.

      The Arch Perfmon v4 PMI handler is substantially different from
      the older PMI handler.
      
      Differences to the old handler:
      
      - It relies on counter freezing, which eliminates several MSR
        writes from the PMI handler and lowers the overhead significantly.

        It makes the PMI handler more accurate, as all counters get
        frozen atomically as soon as any counter overflows. So there is
        much less counting of the PMI handler itself.

        With the freezing we don't need to disable or enable counters or
        PEBS. Only BTS, which does not support auto-freezing, still needs
        to be explicitly managed.

      - The PMU acking is done at the end, not the beginning.
        This makes it possible to avoid manual enabling/disabling
        of the PMU; instead we just rely on the freezing/acking.

      - The APIC is acked before re-enabling the PMU, which avoids
        problems with LBRs occasionally not getting unfrozen on Skylake.

      - Looping is only needed to work around a corner case in which
        several PMIs arrive very close to each other. In the common case
        the counters stay frozen during the PMI handler, so no re-check
        is needed.
      
      This patch:
      
      - Adds code to enable v4 counter freezing (see the sketch after
        this list).
      - Forks the <=v3 and >=v4 PMI handlers into separate functions.
      - Adds a kernel parameter to disable counter freezing. It took some
        time to debug counter freezing, so in case there are new problems
        we added an option to turn it off. We would not expect this to be
        used until there are new bugs.
      - Covers only the big core. The patch for the small core will be
        posted later separately.
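      A minimal sketch of arming the freeze, assuming the DEBUGCTL bit
      name for bit 12 (freeze perfmon on PMI); with this set, the PMU
      freezes all counters as soon as a PMI is raised:

        static void intel_pmu_enable_counter_freeze(void)
        {
                u64 debugctl;

                rdmsrl(MSR_IA32_DEBUGCTLMSR, debugctl);
                wrmsrl(MSR_IA32_DEBUGCTLMSR,
                       debugctl | DEBUGCTLMSR_FREEZE_PERFMON_ON_PMI);
        }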
      
      Performance:
      
      When profiling a kernel build on Kabylake with different perf options,
      measuring the length of all NMI handlers using the nmi handler
      trace point:
      
      V3 is without counter freezing.
      V4 is with counter freezing.
      The value is the average cost of the PMI handler.
      (lower is better)
      
      perf options                V3(ns) V4(ns)  delta
      -c 100000                   1088   894     -18%
      -g -c 100000                1862   1646    -12%
      --call-graph lbr -c 100000  3649   3367    -8%
      --c.g. dwarf -c 100000      2248   1982    -12%
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Link: http://lkml.kernel.org/r/1533712328-2834-2-git-send-email-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel: Factor out common code of PMI handler · ba12d20e
      Kan Liang authored
      The Arch Perfmon v4 PMI handler is substantially different from
      the older PMI handler. Instead of adding more and more 'if's,
      cleanly fork the new handler into a new function, with the main
      common code factored out into a shared function.

      Fix a complaint from checkpatch.pl by removing the redundant
      "= false" initializer from "static bool warned".
      
      No functional change.
      
      Based-on-code-from: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Link: http://lkml.kernel.org/r/1533712328-2834-1-git-send-email-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/amd/uncore: Set ThreadMask and SliceMask for L3 Cache perf events · d7cbbe49
      Natarajan, Janakarajan authored
      In Family 17h, some L3 Cache Performance events require the ThreadMask
      and SliceMask to be set. For other events, these fields do not affect
      the count either way.
      
      Set ThreadMask and SliceMask to 0xFF and 0xF respectively, as
      sketched below.
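      A sketch with assumed mask definitions matching those values (the
      exact bit positions in the Family 17h L3 PMC configuration are
      assumptions here, as is the l3_pmu placeholder condition):

        #define AMD64_L3_SLICE_MASK     0x000F000000000000ULL  /* bits 51:48 */
        #define AMD64_L3_THREAD_MASK    0xFF00000000000000ULL  /* bits 63:56 */

        /* In the uncore event init path: count all slices and threads.
         * l3_pmu stands in for "this event targets the L3 PMU". */
        if (l3_pmu)
                hwc->config |= AMD64_L3_SLICE_MASK | AMD64_L3_THREAD_MASK;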
      Signed-off-by: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: H . Peter Anvin <hpa@zytor.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Suravee <Suravee.Suthikulpanit@amd.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Link: http://lkml.kernel.org/r/Message-ID:
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel/uncore: Fix PCI BDF address of M3UPI on SKX · 9d92cfea
      Kan Liang authored
      The counters on M3UPI Link 0 and Link 3 don't count properly, and
      writing 0 to these counters may cause a system crash on some
      machines.
      
      The PCI BDF addresses of the M3UPI in the current code are incorrect.
      
      The correct addresses should be:
      
        D18:F1	0x204D
        D18:F2	0x204E
        D18:F5	0x204D
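      In the uncore PCI match table this corresponds to entries along
      the following lines (a sketch; macro usage as in the SKX uncore
      code):

        { /* M3UPI0 Link 0 */
                PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x204D),
                .driver_data = UNCORE_PCI_DEV_FULL_DATA(18, 1, SKX_PCI_UNCORE_M3UPI, 0),
        },
        { /* M3UPI0 Link 1 */
                PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x204E),
                .driver_data = UNCORE_PCI_DEV_FULL_DATA(18, 2, SKX_PCI_UNCORE_M3UPI, 1),
        },
        { /* M3UPI1 Link 2 */
                PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x204D),
                .driver_data = UNCORE_PCI_DEV_FULL_DATA(18, 5, SKX_PCI_UNCORE_M3UPI, 2),
        },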
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Fixes: cd34cd97 ("perf/x86/intel/uncore: Add Skylake server uncore support")
      Link: http://lkml.kernel.org/r/1537538826-55489-1-git-send-email-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel/uncore: Use boot_cpu_data.phys_proc_id instead of hardcoded physical package ID 0 · 6265adb9
      Masayoshi Mizuma authored
      Physical package id 0 doesn't always exist; use
      boot_cpu_data.phys_proc_id here instead.
      Signed-off-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Masayoshi Mizuma <msys.mizuma@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Link: http://lkml.kernel.org/r/20180910144750.6782-1-msys.mizuma@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  4. 10 Sep, 2018 2 commits
    • perf/x86: Add __ro_after_init annotations · 2766d2ee
      Zubin Mithra authored
      The x86_pmu_{format,events,attr,caps}_group structures are written
      to in init_hw_perf_events() and not modified afterwards. This makes
      them suitable candidates for annotating as __ro_after_init, as
      sketched below.
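      For example (a sketch; the attribute contents are elided):

        /* Written once in init_hw_perf_events(), read-only afterwards. */
        static struct attribute_group x86_pmu_format_group __ro_after_init = {
                .name  = "format",
                .attrs = NULL,  /* filled in during init */
        };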
      Signed-off-by: Zubin Mithra <zsm@chromium.org>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@kernel.org
      Cc: alexander.shishkin@linux.intel.com
      Cc: groeck@chromium.org
      Link: http://lkml.kernel.org/r/20180810154314.96710-1-zsm@chromium.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel: Add support/quirk for the MISPREDICT bit on Knights Landing CPUs · 16160c19
      Jacek Tomaka authored
      Problem: perf did not show the branch predicted/mispredicted bit
      in brstack.

      Output of perf -F brstack for a collected profile:
      
      Before:
      
       0x4fdbcd/0x4fdc03/-/-/-/0
       0x45f4c1/0x4fdba0/-/-/-/0
       0x45f544/0x45f4bb/-/-/-/0
       0x45f555/0x45f53c/-/-/-/0
       0x7f66901cc24b/0x45f555/-/-/-/0
       0x7f66901cc22e/0x7f66901cc23d/-/-/-/0
       0x7f66901cc1ff/0x7f66901cc20f/-/-/-/0
       0x7f66901cc1e8/0x7f66901cc1fc/-/-/-/0
      
      After:
      
       0x4fdbcd/0x4fdc03/P/-/-/0
       0x45f4c1/0x4fdba0/P/-/-/0
       0x45f544/0x45f4bb/P/-/-/0
       0x45f555/0x45f53c/P/-/-/0
       0x7f66901cc24b/0x45f555/P/-/-/0
       0x7f66901cc22e/0x7f66901cc23d/P/-/-/0
       0x7f66901cc1ff/0x7f66901cc20f/P/-/-/0
       0x7f66901cc1e8/0x7f66901cc1fc/P/-/-/0
      
      Cause:
      
      As mentioned in the Software Developer's Manual vol 3, 17.4.8.1,
      IA32_PERF_CAPABILITIES[5:0] indicates the format of the address
      that is stored in the LBR stack. Knights Landing reports 1
      (LBR_FORMAT_LIP) as its format. Despite that, the registers
      containing the FROM address of the branch do have a MISPREDICT
      bit, but because of the format indicated in
      IA32_PERF_CAPABILITIES[5:0], the LBR code did not read it.
      
      Solution:
      
      Teach the LBR code about the above Knights Landing quirk and make
      it read the MISPREDICT bit, as sketched below.
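      A sketch of the quirk, assuming the LBR format constants and the
      KNL init-function shape: report the EIP+flags format on KNL so
      the generic code extracts the MISPREDICT bit from the FROM
      registers:

        void intel_pmu_lbr_init_knl(void)
        {
                x86_pmu.lbr_nr       = 8;
                x86_pmu.lbr_tos      = MSR_LBR_TOS;
                x86_pmu.lbr_from     = MSR_LBR_NHM_FROM;
                x86_pmu.lbr_to       = MSR_LBR_NHM_TO;

                x86_pmu.lbr_sel_mask = LBR_SEL_MASK;
                x86_pmu.lbr_sel_map  = snb_lbr_sel_map;

                /* Knights Landing does have the MISPREDICT bit. */
                if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_LIP)
                        x86_pmu.intel_cap.lbr_format = LBR_FORMAT_EIP_FLAGS;
        }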
      Signed-off-by: Jacek Tomaka <jacek.tomaka@poczta.fm>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20180802013830.10600-1-jacekt@dugeo.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  5. 31 Jul, 2018 1 commit
    • perf/x86/intel/uncore: Fix hardcoded index of Broadwell extra PCI devices · 156c8b58
      Kan Liang authored
      Masayoshi Mizuma reported that a warning message is shown while a CPU is
      hot-removed on Broadwell servers:
      
        WARNING: CPU: 126 PID: 6 at arch/x86/events/intel/uncore.c:988
        uncore_pci_remove+0x10b/0x150
        Call Trace:
         pci_device_remove+0x42/0xd0
         device_release_driver_internal+0x148/0x220
         pci_stop_bus_device+0x76/0xa0
         pci_stop_root_bus+0x44/0x60
         acpi_pci_root_remove+0x1f/0x80
         acpi_bus_trim+0x57/0x90
         acpi_bus_trim+0x2e/0x90
         acpi_device_hotplug+0x2bc/0x4b0
         acpi_hotplug_work_fn+0x1a/0x30
         process_one_work+0x174/0x3a0
         worker_thread+0x4c/0x3d0
         kthread+0xf8/0x130
      
      This bug was introduced by:
      
        commit 15a3e845 ("perf/x86/intel/uncore: Fix SBOX support for Broadwell CPUs")
      
      The index of the "QPI Port 2 filter" was hardcoded to 2, but this
      conflicts with the index of "PCU.3", "HSWEP_PCI_PCU_3", which
      equals 2 as well.
      
      To fix the conflict, the hardcoded index needs to be cleaned up:
      
       - introduce a new enumerator "BDX_PCI_QPI_PORT2_FILTER" for "QPI
         Port 2 filter" on Broadwell,
       - increase UNCORE_EXTRA_PCI_DEV_MAX by one,
       - clean up the hardcoded index (see the sketch after this list).
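      A sketch of the resulting index layout (enumerator names per the
      changelog; the surrounding entries are assumed):

        enum {
                SNBEP_PCI_QPI_PORT0_FILTER,
                SNBEP_PCI_QPI_PORT1_FILTER,
                BDX_PCI_QPI_PORT2_FILTER,       /* new, no longer aliases PCU.3 */
                HSWEP_PCI_PCU_3,
        };

        #define UNCORE_EXTRA_PCI_DEV_MAX        4       /* was 3 */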
      Debugged-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
      Suggested-by: Ingo Molnar <mingo@kernel.org>
      Reported-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
      Tested-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: msys.mizuma@gmail.com
      Cc: stable@vger.kernel.org
      Fixes: 15a3e845 ("perf/x86/intel/uncore: Fix SBOX support for Broadwell CPUs")
      Link: http://lkml.kernel.org/r/1532953688-15008-1-git-send-email-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  6. 25 Jul, 2018 5 commits
    • perf/x86/intel: Support Extended PEBS for Goldmont Plus · a38b0ba1
      Kan Liang authored
      Enable extended PEBS for Goldmont Plus.

      There are no Goldmont Plus-specific PEBS constraints, so remove
      its pebs_constraints table.
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Link: http://lkml.kernel.org/r/20180309021542.11374-4-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel/ds: Handle PEBS overflow for fixed counters · ec71a398
      Kan Liang authored
      pebs_drain() needs to support fixed counters. The DS Save Area now
      includes "counter reset value" fields for each fixed counter.

      Extend the related variables (e.g. mask, counters, error) to
      support fixed counters, as sketched below. There is no extended
      PEBS in the PEBS v2 and earlier formats, so only the code for the
      PEBS v3 and later formats needs to change.

      Extend the pebs_event_reset[] logic to support the new "counter
      reset value" fields, and increase the reserved space for fixed
      counters.
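      A sketch of the mask extension in the drain path (field names
      assumed from the x86_pmu structure):

        u64 mask;
        int size;

        /* General-purpose PEBS-capable counters... */
        mask = (1ULL << x86_pmu.max_pebs_events) - 1;
        size = x86_pmu.max_pebs_events;

        /* ...plus, with extended PEBS, the fixed counters. */
        if (x86_pmu.flags & PMU_FL_PEBS_ALL) {
                mask |= ((1ULL << x86_pmu.num_counters_fixed) - 1)
                                << INTEL_PMC_IDX_FIXED;
                size = INTEL_PMC_IDX_FIXED + x86_pmu.num_counters_fixed;
        }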
      
      Based-on-code-from: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Link: http://lkml.kernel.org/r/20180309021542.11374-3-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel: Support PEBS on fixed counters · 4f08b625
      Kan Liang authored
      The Extended PEBS feature supports PEBS on the fixed-function
      performance counters as well as on all four general-purpose
      counters.

      The order of PEBS and fixed counter enabling has to change to make
      sure PEBS is enabled for the fixed counters. This reordering
      doesn't impact the behavior of the current code on platforms which
      don't support extended PEBS, because there is no dependency among
      those enable/disable functions.

      Don't enable IRQ generation (0x8) in MSR_ARCH_PERFMON_FIXED_CTR_CTRL
      for PEBS events; the PEBS microcode handles the interrupt
      generation itself, as sketched below.
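      A sketch of the control-bit computation for a fixed counter (bit
      layout per the SDM: bit 0 counts ring 0, bit 1 counts ring 3,
      bit 3 enables PMI generation):

        u64 bits = 0;

        if (!event->attr.precise_ip)
                bits |= 0x8;    /* IRQ generation, but not for PEBS */
        if (!event->attr.exclude_kernel)
                bits |= 0x1;    /* count ring 0 */
        if (!event->attr.exclude_user)
                bits |= 0x2;    /* count ring 3 */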
      
      Based-on-code-from: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Link: http://lkml.kernel.org/r/20180309021542.11374-2-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel: Introduce PMU flag for Extended PEBS · 31962340
      Kan Liang authored
      The Extended PEBS feature, introduced in the Goldmont Plus
      microarchitecture, supports all events as "Extended PEBS".

      Introduce the flag PMU_FL_PEBS_ALL to indicate the platforms which
      support extended PEBS.

      To support all events, all constraints need to apply to PEBS as
      well. To avoid duplicating them all in the PEBS table, make the
      PEBS code search the normal constraints too, as sketched below.
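      A sketch of the constraint lookup with the new flag (returning
      NULL lets the caller fall back to the normal constraint table;
      helper names assumed):

        struct event_constraint *intel_pebs_constraints(struct perf_event *event)
        {
                struct event_constraint *c;

                if (!event->attr.precise_ip)
                        return NULL;

                if (x86_pmu.pebs_constraints) {
                        for_each_event_constraint(c, x86_pmu.pebs_constraints) {
                                if ((event->hw.config & c->cmask) == c->code) {
                                        event->hw.flags |= c->flags;
                                        return c;
                                }
                        }
                }

                /* Extended PEBS: every event is PEBS-capable, so let
                 * the caller search the normal constraints instead. */
                if (x86_pmu.flags & PMU_FL_PEBS_ALL)
                        return NULL;

                return &emptyconstraint;
        }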
      
      Based-on-code-from: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Link: http://lkml.kernel.org/r/20180309021542.11374-1-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel: Fix unwind errors from PEBS entries (mk-II) · 6cbc304f
      Peter Zijlstra authored
      Vince reported the perf_fuzzer giving various unwinder warnings and
      Josh reported:
      
      > Deja vu.  Most of these are related to perf PEBS, similar to the
      > following issue:
      >
      >   b8000586 ("perf/x86/intel: Cure bogus unwind from PEBS entries")
      >
      > This is basically the ORC version of that.  setup_pebs_sample_data() is
      > assembling a franken-pt_regs which ORC isn't happy about.  RIP is
      > inconsistent with some of the other registers (like RSP and RBP).
      
      And where the previous unwinder only needed BP and SP, ORC also
      requires IP. But we cannot spoof IP, because then the sample will
      get displaced, entirely negating the point of PEBS.
      
      So cure the whole thing differently by doing the unwind early;
      this does however require a means to communicate that we did the
      unwind early. We (ab)use an unused sample_type bit for this, which
      we set on events that fill out data->callchain before the normal
      perf_prepare_sample(), as sketched below.
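      A sketch of the mechanism, assuming the internal bit name: the
      PEBS setup fills the callchain from the trap-time registers and
      the generic code skips its own unwind when the mark is set:

        /* Internal-only sample_type bit (never exposed to user space). */
        #define __PERF_SAMPLE_CALLCHAIN_EARLY   (1ULL << 63)

        /* PEBS side: unwind early, from the interrupt-time registers. */
        if (sample_type & PERF_SAMPLE_CALLCHAIN)
                data->callchain = perf_callchain(event, iregs);

        /* Generic perf_prepare_sample(): skip if already done early. */
        if (sample_type & PERF_SAMPLE_CALLCHAIN) {
                if (!(sample_type & __PERF_SAMPLE_CALLCHAIN_EARLY))
                        data->callchain = perf_callchain(event, regs);
                /* ...size accounting unchanged... */
        }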
      Debugged-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Reported-by: Vince Weaver <vincent.weaver@maine.edu>
      Tested-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Tested-by: Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  7. 24 Jul, 2018 1 commit
    • perf/x86/amd/ibs: Don't access non-started event · d2753e6b
      Thomas Gleixner authored
      Paul Menzel reported the following bug:
      
      > Enabling the undefined behavior sanitizer and building GNU/Linux 4.18-rc5+
      > (with some unrelated commits) with GCC 8.1.0 from Debian Sid/unstable, the
      > warning below is shown.
      >
      > > [    2.111913]
      > > ================================================================================
      > > [    2.111917] UBSAN: Undefined behaviour in arch/x86/events/amd/ibs.c:582:24
      > > [    2.111919] member access within null pointer of type 'struct perf_event'
      > > [    2.111926] CPU: 0 PID: 144 Comm: udevadm Not tainted 4.18.0-rc5-00316-g4864b68cedf2 #104
      > > [    2.111928] Hardware name: ASROCK E350M1/E350M1, BIOS TIMELESS 01/01/1970
      > > [    2.111930] Call Trace:
      > > [    2.111943]  dump_stack+0x55/0x89
      > > [    2.111949]  ubsan_epilogue+0xb/0x33
      > > [    2.111953]  handle_null_ptr_deref+0x7f/0x90
      > > [    2.111958]  __ubsan_handle_type_mismatch_v1+0x55/0x60
      > > [    2.111964]  perf_ibs_handle_irq+0x596/0x620
      
      The code dereferences 'event' before checking the STARTED bit.
      The patch sketched below should cure the issue.

      The warning should not trigger, if I analyzed the thing correctly.
      (And Paul's testing confirms this.)
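      The shape of the fix (a sketch; the IBS handler internals are
      assumed):

        /* Check the STARTED bit before touching pcpu->event, so a
         * spurious NMI with no started event never dereferences it. */
        if (!test_bit(IBS_STARTED, pcpu->state)) {
                /* Handle a late NMI for a stopped event and bail out. */
                return 0;
        }

        event = pcpu->event;
        hwc = &event->hw;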
      Reported-by: Paul Menzel <pmenzel@molgen.mpg.de>
      Tested-by: Paul Menzel <pmenzel@molgen.mpg.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul Menzel <pmenzel+linux-x86@molgen.mpg.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Link: http://lkml.kernel.org/r/alpine.DEB.2.21.1807200958390.1580@nanos.tec.linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  8. 15 Jul, 2018 1 commit
    • x86/events/intel/ds: Fix bts_interrupt_threshold alignment · 2c991e40
      Hugh Dickins authored
      Markus reported that BTS is sporadically missing the tail of the trace
      in the perf_event data buffer: [decode error (1): instruction overflow]
      shown in GDB; and bisected it to the conversion of debug_store to PTI.
      
      A little "optimization" crept into alloc_bts_buffer(), which mistakenly
      placed bts_interrupt_threshold away from the 24-byte record boundary.
      Intel SDM Vol 3B 17.4.9 says "This address must point to an offset from
      the BTS buffer base that is a multiple of the BTS record size."
      
      Revert "max" from a byte count to a record count, to calculate the
      bts_interrupt_threshold correctly: which turns out to fix problem seen.
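      A sketch of the corrected calculation in alloc_bts_buffer()
      (variable names assumed):

        /* "max" is a record count again, so the threshold stays a
         * multiple of BTS_RECORD_SIZE from the buffer base. */
        max    = size / BTS_RECORD_SIZE;
        thresh = max / 16;

        ds->bts_absolute_maximum    = ds->bts_buffer_base +
                                      max * BTS_RECORD_SIZE;
        ds->bts_interrupt_threshold = ds->bts_absolute_maximum -
                                      thresh * BTS_RECORD_SIZE;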
      
      Fixes: c1961a46 ("x86/events/intel/ds: Map debug buffers in cpu_entry_area")
      Reported-and-tested-by: Markus T Metzger <markus.t.metzger@intel.com>
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@intel.com>
      Cc: Andi Kleen <andi.kleen@intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: stable@vger.kernel.org # v4.14+
      Link: https://lkml.kernel.org/r/alpine.LSU.2.11.1807141248290.1614@eggly.anvils
  9. 21 Jun, 2018 2 commits
    • perf/x86/intel/lbr: Optimize context switches for the LBR call stack · 8b077e4a
      Kan Liang authored
      Context switches with perf LBR call stack context are fairly
      expensive because they do a lot of MSR writes. Currently we
      unconditionally do the expensive operation when LBR call stack is
      enabled. It's not necessary for some common cases, e.g. task ->
      other kernel thread -> same task: the LBR registers are not
      changed, hence they don't need to be rewritten/restored.
      
      Introduce per-CPU variables to track the last LBR call stack context.
      If the same context is scheduled in, the rewrite/restore is not
      required, with the following two exceptions:
      
       - The LBR registers may be modified by a normal LBR event, i.e., adding
         a new LBR event or scheduling an existing LBR event. In both cases,
         the LBR registers are reset first. The last LBR call stack information
         is cleared in intel_pmu_lbr_reset(). Restoring the LBR registers is
         required.
      
       - The LBR registers are initialized to zero in C6, so if the LBR
         register which the TOS points to is cleared, C6 must have been
         entered while the task was swapped out. Restoring the LBR
         registers is required as well.

      These exceptions are not common. A sketch of the reuse check
      follows.
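      A sketch of the check on the restore path (per-CPU fields per the
      changelog; helper names assumed):

        /*
         * Skip the restore when:
         *  - the incoming task context is the one saved last, and
         *  - its generation (log_id) still matches, and
         *  - the entry at TOS is non-zero, i.e. C6 did not wipe the LBRs.
         */
        if (task_ctx == cpuc->last_task_ctx &&
            task_ctx->log_id == cpuc->last_log_id &&
            rdlbr_from(tos)) {
                task_ctx->lbr_stack_state = LBR_NONE;
                return;
        }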
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: acme@kernel.org
      Link: https://lore.kernel.org/lkml/1528213126-4312-2-git-send-email-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel/lbr: Fix incomplete LBR call stack · 0592e57b
      Kan Liang authored
      LBR has a limited stack size. If a task has a deeper call stack
      than the LBR stack size, only the overflowed part is reported. A
      complete call stack may not be reconstructable by the perf tool.

      The current code doesn't access all LBR registers: it only reads
      the ones below the TOS, and the registers above the TOS are
      discarded unconditionally.

      When a CALL is captured, the TOS is incremented by 1, modulo the
      max LBR stack size. The LBR HW only records the call stack
      information in the register which the TOS points to; it does not
      touch the other LBR registers. So the registers above the TOS
      probably still store valid call stack information for an
      overflowed call stack, which needs to be reported.

      To retrieve the complete call stack information, we need to start
      from the TOS and read all LBR registers until an invalid entry is
      detected. Zeros can be used to detect an invalid entry, because:
      
       - When a RET is captured, the HW zeroes the LBR register which
         the TOS points to, then decreases the TOS.
       - The LBR registers are reset to 0 when adding a new LBR event or
         scheduling an existing LBR event.
       - A taken branch at IP 0 is not expected.

      The context switch code is also modified to save/restore all valid
      LBR registers, as sketched below. Furthermore, the LBR registers
      which don't hold valid call stack information need to be reset on
      restore, because they may have been polluted while swapped out.
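      A sketch of the read loop (helper names assumed):

        /* Start at the TOS and walk over all slots; in call stack mode
         * a zero FROM address marks the first invalid entry. */
        for (i = 0; i < x86_pmu.lbr_nr; i++) {
                unsigned long lbr_idx = (tos - i) & mask;
                u64 from = rdlbr_from(lbr_idx);

                if (call_stack && !from)
                        break;

                /* ...decode and store the entry... */
        }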
      
      Here is a small test program, tchain_deep.
      Its call stack is deeper than 32.
      
       noinline void f33(void)
       {
              int i;
      
              for (i = 0; i < 10000000;) {
                      if (i%2)
                              i++;
                      else
                              i++;
              }
       }
      
       noinline void f32(void)
       {
              f33();
       }
      
       noinline void f31(void)
       {
              f32();
       }
      
       ... ...
      
       noinline void f1(void)
       {
              f2();
       }
      
       int main()
       {
              f1();
       }
      
      Here is the test result on SKX. The max stack size of SKX is 32.
      
      Without the patch:
      
       $ perf record -e cycles --call-graph lbr -- ./tchain_deep
       $ perf report --stdio
       #
       # Children      Self  Command      Shared Object     Symbol
       # ........  ........  ...........  ................  .................
       #
         100.00%    99.99%  tchain_deep    tchain_deep       [.] f33
                  |
                   --99.99%--f30
                             f31
                             f32
                             f33
      
      With the patch:
      
       $ perf record -e cycles --call-graph lbr -- ./tchain_deep
       $ perf report --stdio
       # Children      Self  Command      Shared Object     Symbol
       # ........  ........  ...........  ................  ..................
       #
          99.99%     0.00%  tchain_deep    tchain_deep       [.] f1
                  |
                  ---f1
                     f2
                     f3
                     f4
                     f5
                     f6
                     f7
                     f8
                     f9
                     f10
                     f11
                     f12
                     f13
                     f14
                     f15
                     f16
                     f17
                     f18
                     f19
                     f20
                     f21
                     f22
                     f23
                     f24
                     f25
                     f26
                     f27
                     f28
                     f29
                     f30
                     f31
                     f32
                     f33
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@kernel.org
      Link: https://lore.kernel.org/lkml/1528213126-4312-1-git-send-email-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  10. 12 Jun, 2018 3 commits
    • treewide: kzalloc() -> kcalloc() · 6396bb22
      Kees Cook authored
      The kzalloc() function has a 2-factor argument form, kcalloc(). This
      patch replaces cases of:
      
              kzalloc(a * b, gfp)
      
      with:
              kcalloc(a, b, gfp)
      
      as well as handling cases of:
      
              kzalloc(a * b * c, gfp)
      
      with:
      
              kzalloc(array3_size(a, b, c), gfp)
      
      as it's slightly less ugly than:
      
              kzalloc_array(array_size(a, b), c, gfp)
      
      This does, however, attempt to ignore constant size factors like:
      
              kzalloc(4 * 1024, gfp)
      
      though any constants defined via macros get caught up in the conversion.
      
      Any factors with a sizeof() of "unsigned char", "char", and "u8" were
      dropped, since they're redundant.
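      A representative instance of the conversion (a hypothetical call
      site):

        /* Before: the open-coded multiplication can overflow. */
        buf = kzalloc(nr * sizeof(*buf), GFP_KERNEL);

        /* After: kcalloc() checks nr * size for overflow and returns
         * NULL instead of allocating a too-small buffer. */
        buf = kcalloc(nr, sizeof(*buf), GFP_KERNEL);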
      
      The Coccinelle script used for this was:
      
      // Fix redundant parens around sizeof().
      @@
      type TYPE;
      expression THING, E;
      @@
      
      (
        kzalloc(
      -	(sizeof(TYPE)) * E
      +	sizeof(TYPE) * E
        , ...)
      |
        kzalloc(
      -	(sizeof(THING)) * E
      +	sizeof(THING) * E
        , ...)
      )
      
      // Drop single-byte sizes and redundant parens.
      @@
      expression COUNT;
      typedef u8;
      typedef __u8;
      @@
      
      (
        kzalloc(
      -	sizeof(u8) * (COUNT)
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(__u8) * (COUNT)
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(char) * (COUNT)
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(unsigned char) * (COUNT)
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(u8) * COUNT
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(__u8) * COUNT
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(char) * COUNT
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(unsigned char) * COUNT
      +	COUNT
        , ...)
      )
      
      // 2-factor product with sizeof(type/expression) and identifier or constant.
      @@
      type TYPE;
      expression THING;
      identifier COUNT_ID;
      constant COUNT_CONST;
      @@
      
      (
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * (COUNT_ID)
      +	COUNT_ID, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * COUNT_ID
      +	COUNT_ID, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * (COUNT_CONST)
      +	COUNT_CONST, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * COUNT_CONST
      +	COUNT_CONST, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * (COUNT_ID)
      +	COUNT_ID, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * COUNT_ID
      +	COUNT_ID, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * (COUNT_CONST)
      +	COUNT_CONST, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * COUNT_CONST
      +	COUNT_CONST, sizeof(THING)
        , ...)
      )
      
      // 2-factor product, only identifiers.
      @@
      identifier SIZE, COUNT;
      @@
      
      - kzalloc
      + kcalloc
        (
      -	SIZE * COUNT
      +	COUNT, SIZE
        , ...)
      
      // 3-factor product with 1 sizeof(type) or sizeof(expression), with
      // redundant parens removed.
      @@
      expression THING;
      identifier STRIDE, COUNT;
      type TYPE;
      @@
      
      (
        kzalloc(
      -	sizeof(TYPE) * (COUNT) * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE) * (COUNT) * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE) * COUNT * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE) * COUNT * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kzalloc(
      -	sizeof(THING) * (COUNT) * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      |
        kzalloc(
      -	sizeof(THING) * (COUNT) * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      |
        kzalloc(
      -	sizeof(THING) * COUNT * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      |
        kzalloc(
      -	sizeof(THING) * COUNT * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      )
      
      // 3-factor product with 2 sizeof(variable), with redundant parens removed.
      @@
      expression THING1, THING2;
      identifier COUNT;
      type TYPE1, TYPE2;
      @@
      
      (
        kzalloc(
      -	sizeof(TYPE1) * sizeof(TYPE2) * COUNT
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE1) * sizeof(THING2) * (COUNT)
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
        , ...)
      |
        kzalloc(
      -	sizeof(THING1) * sizeof(THING2) * COUNT
      +	array3_size(COUNT, sizeof(THING1), sizeof(THING2))
        , ...)
      |
        kzalloc(
      -	sizeof(THING1) * sizeof(THING2) * (COUNT)
      +	array3_size(COUNT, sizeof(THING1), sizeof(THING2))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE1) * sizeof(THING2) * COUNT
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE1) * sizeof(THING2) * (COUNT)
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
        , ...)
      )
      
      // 3-factor product, only identifiers, with redundant parens removed.
      @@
      identifier STRIDE, SIZE, COUNT;
      @@
      
      (
        kzalloc(
      -	(COUNT) * STRIDE * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	COUNT * (STRIDE) * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	COUNT * STRIDE * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	(COUNT) * (STRIDE) * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	COUNT * (STRIDE) * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	(COUNT) * STRIDE * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	(COUNT) * (STRIDE) * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	COUNT * STRIDE * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      )
      
      // Any remaining multi-factor products, first at least 3-factor products,
      // when they're not all constants...
      @@
      expression E1, E2, E3;
      constant C1, C2, C3;
      @@
      
      (
        kzalloc(C1 * C2 * C3, ...)
      |
        kzalloc(
      -	(E1) * E2 * E3
      +	array3_size(E1, E2, E3)
        , ...)
      |
        kzalloc(
      -	(E1) * (E2) * E3
      +	array3_size(E1, E2, E3)
        , ...)
      |
        kzalloc(
      -	(E1) * (E2) * (E3)
      +	array3_size(E1, E2, E3)
        , ...)
      |
        kzalloc(
      -	E1 * E2 * E3
      +	array3_size(E1, E2, E3)
        , ...)
      )
      
      // And then all remaining 2 factors products when they're not all constants,
      // keeping sizeof() as the second factor argument.
      @@
      expression THING, E1, E2;
      type TYPE;
      constant C1, C2, C3;
      @@
      
      (
        kzalloc(sizeof(THING) * C2, ...)
      |
        kzalloc(sizeof(TYPE) * C2, ...)
      |
        kzalloc(C1 * C2 * C3, ...)
      |
        kzalloc(C1 * C2, ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * (E2)
      +	E2, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * E2
      +	E2, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * (E2)
      +	E2, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * E2
      +	E2, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	(E1) * E2
      +	E1, E2
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	(E1) * (E2)
      +	E1, E2
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	E1 * E2
      +	E1, E2
        , ...)
      )
      Signed-off-by: Kees Cook <keescook@chromium.org>
    • treewide: kmalloc() -> kmalloc_array() · 6da2ec56
      Kees Cook authored
      The kmalloc() function has a 2-factor argument form, kmalloc_array(). This
      patch replaces cases of:
      
              kmalloc(a * b, gfp)
      
      with:
              kmalloc_array(a, b, gfp)
      
      as well as handling cases of:
      
              kmalloc(a * b * c, gfp)
      
      with:
      
              kmalloc(array3_size(a, b, c), gfp)
      
      as it's slightly less ugly than:
      
              kmalloc_array(array_size(a, b), c, gfp)
      
      This does, however, attempt to ignore constant size factors like:
      
              kmalloc(4 * 1024, gfp)
      
      though any constants defined via macros get caught up in the conversion.
      
      Any factors with a sizeof() of "unsigned char", "char", and "u8" were
      dropped, since they're redundant.
      
      The tools/ directory was manually excluded, since it has its own
      implementation of kmalloc().
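      A representative instance of the conversion (a hypothetical call
      site):

        /* Before: unchecked multiplication. */
        entries = kmalloc(count * sizeof(*entries), GFP_KERNEL);

        /* After: kmalloc_array() returns NULL if count * size would
         * overflow, rather than allocating a truncated buffer. */
        entries = kmalloc_array(count, sizeof(*entries), GFP_KERNEL);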
      
      The Coccinelle script used for this was:
      
      // Fix redundant parens around sizeof().
      @@
      type TYPE;
      expression THING, E;
      @@
      
      (
        kmalloc(
      -	(sizeof(TYPE)) * E
      +	sizeof(TYPE) * E
        , ...)
      |
        kmalloc(
      -	(sizeof(THING)) * E
      +	sizeof(THING) * E
        , ...)
      )
      
      // Drop single-byte sizes and redundant parens.
      @@
      expression COUNT;
      typedef u8;
      typedef __u8;
      @@
      
      (
        kmalloc(
      -	sizeof(u8) * (COUNT)
      +	COUNT
        , ...)
      |
        kmalloc(
      -	sizeof(__u8) * (COUNT)
      +	COUNT
        , ...)
      |
        kmalloc(
      -	sizeof(char) * (COUNT)
      +	COUNT
        , ...)
      |
        kmalloc(
      -	sizeof(unsigned char) * (COUNT)
      +	COUNT
        , ...)
      |
        kmalloc(
      -	sizeof(u8) * COUNT
      +	COUNT
        , ...)
      |
        kmalloc(
      -	sizeof(__u8) * COUNT
      +	COUNT
        , ...)
      |
        kmalloc(
      -	sizeof(char) * COUNT
      +	COUNT
        , ...)
      |
        kmalloc(
      -	sizeof(unsigned char) * COUNT
      +	COUNT
        , ...)
      )
      
      // 2-factor product with sizeof(type/expression) and identifier or constant.
      @@
      type TYPE;
      expression THING;
      identifier COUNT_ID;
      constant COUNT_CONST;
      @@
      
      (
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(TYPE) * (COUNT_ID)
      +	COUNT_ID, sizeof(TYPE)
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(TYPE) * COUNT_ID
      +	COUNT_ID, sizeof(TYPE)
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(TYPE) * (COUNT_CONST)
      +	COUNT_CONST, sizeof(TYPE)
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(TYPE) * COUNT_CONST
      +	COUNT_CONST, sizeof(TYPE)
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(THING) * (COUNT_ID)
      +	COUNT_ID, sizeof(THING)
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(THING) * COUNT_ID
      +	COUNT_ID, sizeof(THING)
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(THING) * (COUNT_CONST)
      +	COUNT_CONST, sizeof(THING)
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(THING) * COUNT_CONST
      +	COUNT_CONST, sizeof(THING)
        , ...)
      )
      
      // 2-factor product, only identifiers.
      @@
      identifier SIZE, COUNT;
      @@
      
      - kmalloc
      + kmalloc_array
        (
      -	SIZE * COUNT
      +	COUNT, SIZE
        , ...)
      
      // 3-factor product with 1 sizeof(type) or sizeof(expression), with
      // redundant parens removed.
      @@
      expression THING;
      identifier STRIDE, COUNT;
      type TYPE;
      @@
      
      (
        kmalloc(
      -	sizeof(TYPE) * (COUNT) * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kmalloc(
      -	sizeof(TYPE) * (COUNT) * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kmalloc(
      -	sizeof(TYPE) * COUNT * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kmalloc(
      -	sizeof(TYPE) * COUNT * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kmalloc(
      -	sizeof(THING) * (COUNT) * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      |
        kmalloc(
      -	sizeof(THING) * (COUNT) * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      |
        kmalloc(
      -	sizeof(THING) * COUNT * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      |
        kmalloc(
      -	sizeof(THING) * COUNT * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      )
      
      // 3-factor product with 2 sizeof(variable), with redundant parens removed.
      @@
      expression THING1, THING2;
      identifier COUNT;
      type TYPE1, TYPE2;
      @@
      
      (
        kmalloc(
      -	sizeof(TYPE1) * sizeof(TYPE2) * COUNT
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
        , ...)
      |
        kmalloc(
      -	sizeof(TYPE1) * sizeof(TYPE2) * (COUNT)
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
        , ...)
      |
        kmalloc(
      -	sizeof(THING1) * sizeof(THING2) * COUNT
      +	array3_size(COUNT, sizeof(THING1), sizeof(THING2))
        , ...)
      |
        kmalloc(
      -	sizeof(THING1) * sizeof(THING2) * (COUNT)
      +	array3_size(COUNT, sizeof(THING1), sizeof(THING2))
        , ...)
      |
        kmalloc(
      -	sizeof(TYPE1) * sizeof(THING2) * COUNT
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
        , ...)
      |
        kmalloc(
      -	sizeof(TYPE1) * sizeof(THING2) * (COUNT)
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
        , ...)
      )
      
      // 3-factor product, only identifiers, with redundant parens removed.
      @@
      identifier STRIDE, SIZE, COUNT;
      @@
      
      (
        kmalloc(
      -	(COUNT) * STRIDE * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kmalloc(
      -	COUNT * (STRIDE) * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kmalloc(
      -	COUNT * STRIDE * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kmalloc(
      -	(COUNT) * (STRIDE) * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kmalloc(
      -	COUNT * (STRIDE) * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kmalloc(
      -	(COUNT) * STRIDE * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kmalloc(
      -	(COUNT) * (STRIDE) * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kmalloc(
      -	COUNT * STRIDE * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      )
      
      // Any remaining multi-factor products, first at least 3-factor products,
      // when they're not all constants...
      @@
      expression E1, E2, E3;
      constant C1, C2, C3;
      @@
      
      (
        kmalloc(C1 * C2 * C3, ...)
      |
        kmalloc(
      -	(E1) * E2 * E3
      +	array3_size(E1, E2, E3)
        , ...)
      |
        kmalloc(
      -	(E1) * (E2) * E3
      +	array3_size(E1, E2, E3)
        , ...)
      |
        kmalloc(
      -	(E1) * (E2) * (E3)
      +	array3_size(E1, E2, E3)
        , ...)
      |
        kmalloc(
      -	E1 * E2 * E3
      +	array3_size(E1, E2, E3)
        , ...)
      )
      
      // And then all remaining 2-factor products when they're not all constants,
      // keeping sizeof() as the second factor argument.
      @@
      expression THING, E1, E2;
      type TYPE;
      constant C1, C2, C3;
      @@
      
      (
        kmalloc(sizeof(THING) * C2, ...)
      |
        kmalloc(sizeof(TYPE) * C2, ...)
      |
        kmalloc(C1 * C2 * C3, ...)
      |
        kmalloc(C1 * C2, ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(TYPE) * (E2)
      +	E2, sizeof(TYPE)
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(TYPE) * E2
      +	E2, sizeof(TYPE)
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(THING) * (E2)
      +	E2, sizeof(THING)
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(THING) * E2
      +	E2, sizeof(THING)
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	(E1) * E2
      +	E1, E2
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	(E1) * (E2)
      +	E1, E2
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	E1 * E2
      +	E1, E2
        , ...)
      )
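
      For illustration, this is the kind of rewrite the script produces; a
      sketch with made-up names, not a hunk from the actual treewide patch:

        /* Before: open-coded size products, which can overflow. */
        buf  = kmalloc(sizeof(struct item) * count, GFP_KERNEL);
        grid = kmalloc(rows * cols * sizeof(u32), GFP_KERNEL);

        /*
         * After: the multiplications are overflow-checked, so an
         * overflowing request fails instead of allocating a short buffer.
         */
        buf  = kmalloc_array(count, sizeof(struct item), GFP_KERNEL);
        grid = kmalloc(array3_size(rows, cols, sizeof(u32)), GFP_KERNEL);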
      Signed-off-by: Kees Cook <keescook@chromium.org>
      6da2ec56
    • Convert intel uncore to struct_size · 6566f907
      Matthew Wilcox authored
      Need to do a bit of rearranging to make this work.
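
      struct_size() computes the size of a structure with a trailing
      flexible array member, with overflow checking. A minimal sketch of the
      pattern (hypothetical names, not the exact uncore code):

        struct box_list {
        	int num;
        	struct box boxes[];	/* flexible array member */
        };

        /* sizeof(*list) + n * sizeof(list->boxes[0]), overflow-checked: */
        list = kzalloc(struct_size(list, boxes, n), GFP_KERNEL);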
      Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
      Signed-off-by: Kees Cook <keescook@chromium.org>
      6566f907
  18. 31 May, 2018 6 commits
    • perf/x86/intel/uncore: Clean up client IMC uncore · 9aae1780
      Kan Liang authored
      The counters in the client IMC uncore are free running counters, not
      fixed counters, so this should be corrected by applying the new free
      running counter infrastructure.
      
      Introducing a new type SNB_PCI_UNCORE_IMC_DATA for client IMC free
      running counters.
      
      Keeping the customized event_init() function to stay compatible with
      the old event encoding.
      
      Clean up other customized event_*() functions.
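
      The new type slots into the free running counter tables; roughly along
      these lines (a sketch, the exact definition lives in the snb uncore
      code):

        enum perf_snb_uncore_imc_freerunning_types {
        	SNB_PCI_UNCORE_IMC_DATA = 0,
        	SNB_PCI_UNCORE_IMC_FREERUNNING_TYPE_MAX,
        };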
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: acme@kernel.org
      Cc: eranian@google.com
      Link: http://lkml.kernel.org/r/1525371913-10597-8-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      9aae1780
    • perf/x86/intel/uncore: Expose uncore_pmu_event*() functions · 5a6c9d94
      Kan Liang authored
      Some uncores have a customized PMU, but a customized PMU does not need
      to customize everything. For example, the client IMC uncore only needs
      a customized init() function; other functions like
      add()/del()/start()/stop()/read() can use the generic code.
      
      Expose the uncore_pmu_event_add/del/start/stop() functions.
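
      A customized PMU can then be wired up along these lines (a sketch: the
      client IMC keeps its own event_init(), everything else is generic):

        static struct pmu snb_uncore_imc_pmu = {
        	.task_ctx_nr	= perf_invalid_context,
        	.event_init	= snb_uncore_imc_event_init,	/* customized */
        	.add		= uncore_pmu_event_add,		/* generic */
        	.del		= uncore_pmu_event_del,		/* generic */
        	.start		= uncore_pmu_event_start,	/* generic */
        	.stop		= uncore_pmu_event_stop,	/* generic */
        };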
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: acme@kernel.org
      Cc: eranian@google.com
      Link: http://lkml.kernel.org/r/1525371913-10597-7-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5a6c9d94
    • perf/x86/intel/uncore: Support IIO free-running counters on SKX · 0f519f03
      Kan Liang authored
      As of Skylake Server, there are a number of free running counters in
      each IIO Box that collect counts of per-box IO clocks and per-port
      Input/Output x BW/Utilization.
      
      The free running counters cannot be part of the existing IIO BOX,
      because, quoting from Peter Zijlstra:
      
        "This will result in some (probably) unexpected scheduling artifacts.
         Probably the only way to really cure that is to have the free running
         counters in their own PMU and not share with the GP counters of this
         box."
      
      So let's add a new PMU for the free running counters, as suggested.
      
      The free-running counter is read-only and always active. Counting will
      be suspended only when the IIO Box is powered down.
      
      There are three types of IIO free-running counters on Skylake Server:
      the IO CLOCKS counter, the BANDWIDTH counters and the UTILIZATION
      counters. The IO CLOCKS counter is a clock of the IIO box. The
      BANDWIDTH counters count inbound (PCIe->CPU) and outbound (CPU->PCIe)
      bandwidth. The UTILIZATION counters count input/output utilization.
      
      The bit width of the free-running counters is 36 bits.
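
      Since the counters are only 36 bits wide, wraparound has to be handled
      when computing deltas; a minimal sketch of the usual shift trick
      (variable names illustrative):

        u64 shift = 64 - 36;	/* 64 minus the counter bit width */
        u64 delta;

        delta = (new_raw << shift) - (prev_raw << shift);
        delta >>= shift;	/* bits above the counter width cancel out */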
      Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@kernel.org
      Cc: eranian@google.com
      Link: http://lkml.kernel.org/r/1525371913-10597-6-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      0f519f03
    • perf/x86/intel/uncore: Add infrastructure for free running counters · 0e0162df
      Kan Liang authored
      There are a number of free running counters introduced for uncore, which
      provide highly valuable information to a wide array of customers.
      However, the generic uncore code doesn't support them yet.
      
      The free running counters will be specially handled based on their
      unique attributes:
      
       - They are read-only. They cannot be enabled or disabled.

       - The event and the counter are always 1:1 mapped. The counter does
         not need to be assigned or tracked in the event list.

       - They are always active, so there is no need to check their
         availability.

       - They have differing bit widths.

      Also introducing inline helpers to replace the open-coded checks for
      fixed and free running counters, as sketched below.
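
      A minimal sketch of such helpers, matching the semantics described
      above:

        static inline bool uncore_pmc_fixed(int idx)
        {
        	return idx == UNCORE_PMC_IDX_FIXED;
        }

        static inline bool uncore_pmc_freerunning(int idx)
        {
        	return idx == UNCORE_PMC_IDX_FREERUNNING;
        }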
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: acme@kernel.org
      Cc: eranian@google.com
      Link: http://lkml.kernel.org/r/1525371913-10597-5-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      0e0162df
    • perf/x86/intel/uncore: Add new data structures for free running counters · 927b2deb
      Kan Liang authored
      There are a number of free running counters introduced for uncore, which
      provide highly valuable information to a wide array of customers.
      For example, Skylake Server has IIO free running counters to collect
      Input/Output x BW/Utilization.
      
      There is NO event available on the general purpose counters that
      counts exactly the same thing as the free running counters, so the
      generic uncore code needs to be enhanced to support the new counters.
      
      In the uncore document, there is no event-code assigned to free running
      counters. Some events need to be defined to indicate the free running
      counters. The events are encoded as event-code + umask-code.
      
      The event-code for all free running counters is 0xff, which is the same
      as the fixed counters:
      
      - It has not been decided what code will be used for common events on
        future platforms. 0xff is the only one which will definitely not be
        used as any common event-code.
      - Current events on the general purpose counters cannot be re-used,
        because there is NO event available that counts exactly the same
        thing as the free running counters.
      - Even in the existing code, the fixed counters for the core, which
        share the same event-code, may count different things. Hence, it
        should not surprise users if free running counters that share the
        same event-code also count different things. The umask is used to
        distinguish the counters.
      
      The umask-code is used to distinguish a fixed counter and a free running
      counter, and different types of free running counters.
      
      For fixed counters, the umask-code is 0x0X, where X indicates the index
      of the fixed counter, which starts from 0.
      
       - Compatible with the old event encoding.
      
       - Currently, there is only one fixed counter. There are still 15
         reserved spaces for extension.
      
      For free running counters, the umask-code uses the rest of the space.
      It would follow the format of 0xXY:
      
       - X stands for the type of free running counters, which starts from 1.
      
       - Y stands for the index of free running counters of same type, which
         starts from 0.
      
      - The free running counters do different things and can be categorized
        into several types according to the MSR location, bit width and
        definition. E.g. there are three types of IIO free running counters
        on Skylake Server, monitoring IO CLOCKS, BANDWIDTH and UTILIZATION on
        different ports. This makes it easy to locate the free running
        counter of a specific type (see the decoding sketch after this list).

      - So far, there are at most 8 counters of each type. There are still 8
        reserved spaces for extension.
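
      Decoding the 0xXY umask-code is then simple nibble arithmetic; a
      sketch (not necessarily the exact kernel helpers):

        unsigned int umask = (config >> 8) & 0xff;
        unsigned int type  = umask >> 4;	/* 0: fixed, >= 1: free running type */
        unsigned int idx   = umask & 0xf;	/* counter index within the type */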
      
      Introducing a new index to indicate the free running counters. A
      single index is enough for all free running counters: because they are
      always active, and the event and free running counter are always 1:1
      mapped, no extra index is needed to indicate the assigned counter.
      
      Introducing a new data structure to store the free running counter
      information for each type: the number of counters, the bit width, the
      base address, the offset between counters and the offset between boxes.
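
      A structure along these lines captures that per-type information (a
      sketch matching the description above):

        struct freerunning_counters {
        	unsigned int counter_base;	/* base MSR address */
        	unsigned int counter_offset;	/* stride between counters */
        	unsigned int box_offset;	/* stride between boxes */
        	unsigned int num_counters;
        	unsigned int bits;		/* counter bit width */
        };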
      
      Introducing several inline helpers to check the index of a fixed or
      free running counter, validate a free running counter event, and
      retrieve the free running counter information for a given box and
      event.
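
      For example, retrieving a free running counter's MSR address reduces
      to a base plus offsets; a rough sketch built on the structure above
      (helper and field names illustrative, details may differ from the
      final code):

        static inline unsigned int
        uncore_freerunning_counter(struct intel_uncore_box *box,
        			   struct perf_event *event)
        {
        	unsigned int type = uncore_freerunning_type(event->hw.config);
        	unsigned int idx  = uncore_freerunning_idx(event->hw.config);
        	struct intel_uncore_pmu *pmu = box->pmu;

        	return pmu->type->freerunning[type].counter_base +
        	       pmu->type->freerunning[type].counter_offset * idx +
        	       pmu->type->freerunning[type].box_offset * pmu->pmu_idx;
        }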
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: acme@kernel.org
      Cc: eranian@google.com
      Link: http://lkml.kernel.org/r/1525371913-10597-4-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      927b2deb
    • perf/x86/intel/uncore: Correct fixed counter index check in generic code · 4749f819
      Kan Liang authored
      There is no index bigger than UNCORE_PMC_IDX_FIXED; the only exception
      is the client IMC uncore, which is handled specially. In generic code
      it is therefore not correct to use >= when checking for the fixed
      counter. This code quality issue would cause problems as soon as a new
      counter index is introduced.
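
      Illustratively (read_fixed_counter() is a hypothetical stand-in):

        /* Before: also matches any future index above UNCORE_PMC_IDX_FIXED. */
        if (hwc->idx >= UNCORE_PMC_IDX_FIXED)
        	count = read_fixed_counter(box);

        /* After: matches exactly the fixed counter. */
        if (hwc->idx == UNCORE_PMC_IDX_FIXED)
        	count = read_fixed_counter(box);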
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: acme@kernel.org
      Cc: eranian@google.com
      Link: http://lkml.kernel.org/r/1525371913-10597-3-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      4749f819