1. 05 Apr, 2018 1 commit
    • syscalls/core: Prepare CONFIG_ARCH_HAS_SYSCALL_WRAPPER=y for compat syscalls · 7303e30e
      Dominik Brodowski authored
      It may be useful for an architecture to override the definitions of the
      COMPAT_SYSCALL_DEFINE0() and __COMPAT_SYSCALL_DEFINEx() macros in
      <linux/compat.h>, in particular to use a different calling convention
      for syscalls. This patch provides a mechanism to do so, based on the
      previously introduced CONFIG_ARCH_HAS_SYSCALL_WRAPPER. If it is enabled,
      <asm/syscall_wrapper.h> is included in <linux/compat.h> and may be used
      to define the macros mentioned above. Moreover, as the syscall calling
      convention may be different if CONFIG_ARCH_HAS_SYSCALL_WRAPPER is set,
      the compat syscall function prototypes in <linux/compat.h> are #ifndef'd
      out in that case.
      
      As some of the syscalls and/or compat syscalls may not be present,
      the COND_SYSCALL() and COND_SYSCALL_COMPAT() macros in kernel/sys_ni.c
      as well as the SYS_NI() and COMPAT_SYS_NI() macros in
      kernel/time/posix-stubs.c can be re-defined in <asm/syscall_wrapper.h> iff
      CONFIG_ARCH_HAS_SYSCALL_WRAPPER is enabled.
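      
      As a hedged illustration (not the exact upstream macros), an
      architecture selecting CONFIG_ARCH_HAS_SYSCALL_WRAPPER could provide
      something along these lines in its <asm/syscall_wrapper.h>, switching
      the compat entry points to a struct pt_regs based calling convention:
      
        /* Sketch only: the pt_regs-based convention and the wrapper shape
         * are assumptions modeled on what this series prepares for. */
        #define COMPAT_SYSCALL_DEFINE0(sname)				\
        	asmlinkage long compat_sys_##sname(const struct pt_regs *regs)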
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20180405095307.3730-5-linux@dominikbrodowski.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  2. 02 Apr, 2018 17 commits
  3. 31 Mar, 2018 1 commit
  4. 29 Mar, 2018 2 commits
  5. 28 Mar, 2018 2 commits
  6. 27 Mar, 2018 1 commit
  7. 23 Mar, 2018 2 commits
  8. 22 Mar, 2018 1 commit
  9. 20 Mar, 2018 13 commits
    • bpf: skip unnecessary capability check · 0fa4fe85
      Chenbo Feng authored
      The current check in the BPF syscall performs a capability check for
      CAP_SYS_ADMIN before checking sysctl_unprivileged_bpf_disabled. This
      code path triggers unnecessary security hooks for the capability check
      and causes false alarms about unprivileged processes trying to obtain
      CAP_SYS_ADMIN access. Resolve this by simply switching the order of the
      two checks; CAP_SYS_ADMIN is not required anyway if the unprivileged
      bpf syscall is allowed.
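      
      A hedged sketch of the reordered check in the bpf(2) entry point
      (simplified; the surrounding code in kernel/bpf/syscall.c is omitted):
      
        /* Consult the sysctl first, so capable() - and the security
         * hooks it triggers - only runs when unprivileged bpf is
         * actually disabled. */
        if (sysctl_unprivileged_bpf_disabled && !capable(CAP_SYS_ADMIN))
        	return -EPERM;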
      Signed-off-by: Chenbo Feng <fengc@google.com>
      Acked-by: Lorenzo Colitti <lorenzo@google.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • trace/bpf: remove helper bpf_perf_prog_read_value from tracepoint type programs · f005afed
      Yonghong Song authored
      Commit 4bebdc7a ("bpf: add helper bpf_perf_prog_read_value")
      added helper bpf_perf_prog_read_value so that perf_event type program
      can read event counter and enabled/running time.
      That commit, however, introduced a bug which also allows this helper
      for tracepoint type programs. This is incorrect, as bpf_perf_prog_read_value
      needs to access the perf_event through its bpf_perf_event_data_kern type
      context, which is not available to tracepoint type programs.
      
      This patch fixes the issue by separating the bpf_func_proto between
      tracepoint and perf_event type programs and removing bpf_perf_prog_read_value
      from the tracepoint function prototypes.
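      
      A hedged sketch of the resulting split (function and symbol names are
      assumptions based on kernel/trace/bpf_trace.c conventions):
      
        /* Tracepoint programs: no bpf_perf_prog_read_value here. */
        static const struct bpf_func_proto *
        tp_prog_func_proto(enum bpf_func_id func_id)
        {
        	switch (func_id) {
        	/* note: no BPF_FUNC_perf_prog_read_value case */
        	default:
        		return tracing_func_proto(func_id);
        	}
        }
      
        /* perf_event programs keep access to the helper. */
        static const struct bpf_func_proto *
        pe_prog_func_proto(enum bpf_func_id func_id)
        {
        	switch (func_id) {
        	case BPF_FUNC_perf_prog_read_value:
        		return &bpf_perf_prog_read_value_proto;
        	default:
        		return tracing_func_proto(func_id);
        	}
        }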
      
      Fixes: 4bebdc7a ("bpf: add helper bpf_perf_prog_read_value")
      Reported-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • sched/debug: Adjust newlines for better alignment · e9ca2670
      Joe Lawrence authored
      Scheduler debug stats include newlines that display out of alignment
      when prefixed by timestamps.  For example, the dmesg utility:
      
        % echo t > /proc/sysrq-trigger
        % dmesg
        ...
        [   83.124251]
        runnable tasks:
         S           task   PID         tree-key  switches  prio     wait-time
        sum-exec        sum-sleep
        -----------------------------------------------------------------------------------------------------------
      
      At the same time, some syslog utilities (like rsyslog by default) don't
      like the additional newline control characters, saving lines like this
      to /var/log/messages:
      
        Mar 16 16:02:29 localhost kernel: #012runnable tasks:#012 S           task   PID         tree-key ...
                                          ^^^^               ^^^^
      Clean these up by moving the newline characters to their own SEQ_printf
      invocation.  This leaves the /proc/sched_debug output unchanged, but
      brings the entire output into alignment when prefixed:
      
        % echo t > /proc/sysrq-trigger
        % dmesg
        ...
        [   62.410368] runnable tasks:
        [   62.410368]  S           task   PID         tree-key  switches  prio     wait-time             sum-exec        sum-sleep
        [   62.410369] -----------------------------------------------------------------------------------------------------------
        [   62.410369]  I  kworker/u12:0     5      1932.215593       332   120         0.000000         3.621252         0.000000 0 0 /
      
      and no escaped control characters from rsyslog in /var/log/messages:
      
        Mar 16 16:15:06 localhost kernel: runnable tasks:
        Mar 16 16:15:06 localhost kernel: S           task   PID         tree-key  ...
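      
      The pattern of the change, as a hedged sketch: instead of embedding
      the leading newline in the header string, emit it from its own
      SEQ_printf() call so every console line carries its own timestamp:
      
        SEQ_printf(m, "\n");
        SEQ_printf(m, "runnable tasks:\n");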
      Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1521484555-8620-3-git-send-email-joe.lawrence@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/debug: Fix per-task line continuation for console output · a8c024cd
      Joe Lawrence authored
      When the SEQ_printf() macro prints to the console, it runs a simple
      printk() without KERN_CONT "continued" line printing.  The result of
      this is oddly wrapped task info, for example:
      
        % echo t > /proc/sysrq-trigger
        % dmesg
        ...
        runnable tasks:
        ...
        [   29.608611]  I
        [   29.608613]       rcu_sched     8      3252.013846      4087   120
        [   29.608614]         0.000000        29.090111         0.000000
        [   29.608615]  0 0
        [   29.608616]  /
      
      Modify SEQ_printf to use pr_cont() for expected one-line results:
      
        % echo t > /proc/sysrq-trigger
        % dmesg
        ...
        runnable tasks:
        ...
        [  106.716329]  S        cpuhp/5    37      2006.315026        14   120         0.000000         0.496893         0.000000 0 0 /
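      
      A hedged sketch of the adjusted macro (close to, but not verbatim,
      kernel/sched/debug.c):
      
        #define SEQ_printf(m, x...)			\
        do {						\
        	if (m)					\
        		seq_printf(m, x);		\
        	else					\
        		pr_cont(x);			\
        } while (0)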
      Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1521484555-8620-2-git-send-email-joe.lawrence@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/cgroup: Fix child event counting bug · c917e0f2
      Song Liu authored
      When a perf_event is attached to parent cgroup, it should count events
      for all children cgroups:
      
         parent_group   <---- perf_event
           \
            - child_group  <---- process(es)
      
      However, in our tests we found that such a perf_event cannot report
      reliable results. Here is an example case:
      
        # create cgroups
        mkdir -p /sys/fs/cgroup/p/c
        # start perf for parent group
        perf stat -e instructions -G "p"
      
        # on another console, run test process in child cgroup:
        stressapptest -s 2 -M 1000 & echo $! > /sys/fs/cgroup/p/c/cgroup.procs
      
        # after the test process is done, stopping perf in the first console shows
      
             <not counted>      instructions              p
      
      The instructions event should not be "<not counted>", as the process
      runs in the child cgroup.
      
      We found that this happens because perf_event->cgrp and cpuctx->cgrp
      are not identical, and thus perf_event->cgrp is not updated properly.
      
      This patch fixes this by updating perf_cgroup properly for ancestor
      cgroup(s).
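      
      A hedged conceptual sketch of the ancestry rule involved (the helper
      name is illustrative; the real fix adjusts the cgroup bookkeeping in
      the event scheduling paths):
      
        /* An event on a parent cgroup should count a task running in
         * any descendant cgroup, not only on exact pointer equality. */
        static bool event_counts_for_task(struct cgroup *task_cgrp,
        				  struct cgroup *event_cgrp)
        {
        	return cgroup_is_descendant(task_cgrp, event_cgrp);
        }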
      Reported-by: Ephraim Park <ephiepark@fb.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: <jolsa@redhat.com>
      Cc: <kernel-team@fb.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Link: http://lkml.kernel.org/r/20180312165943.1057894-1-songliubraving@fb.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • jump_label: Disable jump labels in __exit code · 578ae447
      Josh Poimboeuf authored
      With the following commit:
      
        33352244 ("jump_label: Explicitly disable jump labels in __init code")
      
      ... we explicitly disabled jump labels in __init code, so they could be
      detected and not warned about in the following commit:
      
        dc1dd184 ("jump_label: Warn on failed jump_label patching attempt")
      
      In-kernel __exit code has the same issue.  It's never used, so it's
      freed along with the rest of initmem.  But jump label entries in __exit
      code aren't explicitly disabled, so we get the following warning when
      enabling pr_debug() in __exit code:
      
        can't patch jump_label at dmi_sysfs_exit+0x0/0x2d
        WARNING: CPU: 0 PID: 22572 at kernel/jump_label.c:376 __jump_label_update+0x9d/0xb0
      
      Fix the warning by disabling all jump labels in initmem (which includes
      both __init and __exit code).
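      
      A hedged sketch of the resulting invalidation (helper and symbol
      names are assumed; see kernel/jump_label.c for the real thing):
      
        /* Zap jump entries whose code lives anywhere in initmem, which
         * covers both __init and __exit sections. */
        static void jump_label_invalidate_initmem(void)
        {
        	struct jump_entry *iter;
      
        	for (iter = __start___jump_table;
        	     iter < __stop___jump_table; iter++) {
        		if (init_section_contains((void *)(unsigned long)iter->code, 1))
        			iter->code = 0;
        	}
        }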
      Reported-and-tested-by: Li Wang <liwang@redhat.com>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Jason Baron <jbaron@akamai.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: dc1dd184 ("jump_label: Warn on failed jump_label patching attempt")
      Link: http://lkml.kernel.org/r/7121e6e595374f06616c505b6e690e275c0054d1.1521483452.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/wait: Improve __var_waitqueue() code generation · b3fc5c9b
      Peter Zijlstra authored
      Since we fixed hash_64() to not suck, there is no need to play games
      to attempt to improve the hash value on 64-bit.
      
      Also, since we don't use the bit value for the variables, use hash_ptr()
      directly.
      
      No change in functionality.
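      
      A hedged sketch of the simplified helper:
      
        /* Hash the variable's address straight into the shared
         * bit_wait_table; no extra mixing games on 64-bit. */
        static wait_queue_head_t *__var_waitqueue(void *p)
        {
        	return bit_wait_table + hash_ptr(p, WAIT_TABLE_BITS);
        }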
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: George Spelvin <linux@sciencehorizons.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/wait: Remove the wait_on_atomic_t() API · 9b8cce52
      Peter Zijlstra authored
      There are no users left (everyone got converted to wait_var_event()), so remove it.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/wait: Introduce wait_var_event() · 6b2bb726
      Peter Zijlstra authored
      As a replacement for the wait_on_atomic_t() API, provide the
      wait_var_event() API.
      
      The wait_var_event() API is based on the very same hashed-waitqueue
      idea, but doesn't care about the type (atomic_t) or the specific
      condition (atomic_read() == 0). IOW, it's much more widely
      applicable/flexible.
      
      It shares all the benefits/disadvantages of a hashed-waitqueue
      approach with the existing wait_on_atomic_t/wait_on_bit() APIs.
      
      The API is modeled after the existing wait_event() API, but instead of
      taking a wait_queue_head, it takes an address. This address is hashed
      to obtain a wait_queue_head from the bit_wait_table.
      
      Similar to the wait_event() API, it takes a condition expression as
      its second argument and will wait until this expression becomes true.
      
      The following are (mostly) identical replacements:
      
      	wait_on_atomic_t(&my_atomic, atomic_t_wait, TASK_UNINTERRUPTIBLE);
      	wake_up_atomic_t(&my_atomic);
      
      	wait_var_event(&my_atomic, !atomic_read(&my_atomic));
      	wake_up_var(&my_atomic);
      
      The only difference is that wake_up_var() is an unconditional wakeup
      and doesn't check the previously hard-coded (atomic_read() == 0)
      condition here. This is of little consequence, since most callers are
      already conditional on atomic_dec_and_test(), and the ones that are
      not are trivial to make so.
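      
      A hedged sketch of the wrapper macro (the __wait_var_event() slow
      path hashes the address into bit_wait_table, as described above):
      
        #define wait_var_event(var, condition)			\
        do {							\
        	might_sleep();					\
        	if (condition)					\
        		break;					\
        	__wait_var_event(var, condition);		\
        } while (0)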
      Tested-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/fair: Update util_est only on util_avg updates · d519329f
      Patrick Bellasi authored
      The estimated utilization of a task is currently updated every time the
      task is dequeued. However, to keep overheads under control, PELT signals
      are effectively updated at maximum once every 1ms.
      
      Thus, for really short-running tasks, it can happen that their util_avg
      value has not been updated since their last enqueue.  If such tasks are
      also frequently running (e.g. the kind of workload generated by
      hackbench), it can also happen that their util_avg is updated only every
      few activations.
      
      This means that updating util_est at every dequeue potentially introduces
      unnecessary overhead, and it's also conceptually wrong if the util_avg
      signal has never been updated during a task activation.
      
      Let's introduce a throttling mechanism on a task's util_est updates
      to sync them with util_avg updates. To make the solution memory
      efficient, both in terms of space and load/store operations, we encode a
      synchronization flag into the LSB of util_est.enqueued.
      This makes util_est an even-values-only metric, which is still
      considered good enough for its purpose.
      The synchronization bit is (re)set by __update_load_avg_se() once the
      PELT signal of a task has been updated during its last activation.
      
      Such a throttling mechanism keeps util_est overheads in the wakeup hot
      path under control, making it suitable for enabling even on systems
      running high-intensity workloads.
      Thus, the estimated-utilization scheduler feature is now switched on
      by default.
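      
      A hedged sketch of the flag handling (the flag name is an assumption;
      the LSB marks a util_est value whose util_avg has not been updated
      since the value was set, and the PELT update path clears it):
      
        #define UTIL_AVG_UNCHANGED 0x1
      
        static inline void cfs_se_util_change(struct sched_avg *avg)
        {
        	unsigned int enqueued = avg->util_est.enqueued;
      
        	if (!(enqueued & UTIL_AVG_UNCHANGED))
        		return;
      
        	/* util_avg has just been updated: mark util_est in sync */
        	enqueued &= ~UTIL_AVG_UNCHANGED;
        	WRITE_ONCE(avg->util_est.enqueued, enqueued);
        }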
      Suggested-by: Chris Redpath <chris.redpath@arm.com>
      Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Cc: Joel Fernandes <joelaf@google.com>
      Cc: Juri Lelli <juri.lelli@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Morten Rasmussen <morten.rasmussen@arm.com>
      Cc: Paul Turner <pjt@google.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J . Wysocki <rafael.j.wysocki@intel.com>
      Cc: Steve Muckle <smuckle@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Todd Kjos <tkjos@android.com>
      Cc: Vincent Guittot <vincent.guittot@linaro.org>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Link: http://lkml.kernel.org/r/20180309095245.11071-5-patrick.bellasi@arm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/cpufreq/schedutil: Use util_est for OPP selection · a07630b8
      Patrick Bellasi authored
      When schedutil looks at the CPU utilization, the current PELT value for
      that CPU is returned straight away. In certain scenarios this can have
      undesired side effects and cause delays in frequency selection.
      
      For example, since the task utilization is decayed at wakeup time, a
      newly enqueued big task that slept for a long time does not immediately
      add a significant contribution to the target CPU. This introduces some
      latency before schedutil is able to detect the best frequency required
      by that task.
      
      Moreover, the PELT signal build-up time is a function of the current
      frequency, because of the scale-invariant load tracking support. Thus,
      starting from a lower frequency, the utilization build-up time will
      increase even more and further delay the selection of the frequency
      which best serves the task's requirements.
      
      In order to reduce this kind of latency, we integrate the use of the
      CPU's estimated utilization into the sugov_get_util() function.
      
      This allows schedutil to properly consider the expected utilization of a
      CPU which, for example, has just got a big task running after a long
      sleep period. Ultimately this allows selecting the best frequency to
      run a task right after its wake-up.
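      
      A hedged sketch of the utilization read that feeds OPP selection (the
      helper name and shape are assumptions; the real change touches
      sugov_get_util()):
      
        static unsigned long sugov_cpu_util(struct rq *rq, unsigned long max_cap)
        {
        	unsigned long util = READ_ONCE(rq->cfs.avg.util_avg);
      
        	/* consider the estimated utilization as well */
        	util = max_t(unsigned long, util,
        		     READ_ONCE(rq->cfs.avg.util_est.enqueued));
      
        	return min(util, max_cap);
        }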
      Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
      Cc: Joel Fernandes <joelaf@google.com>
      Cc: Juri Lelli <juri.lelli@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Morten Rasmussen <morten.rasmussen@arm.com>
      Cc: Paul Turner <pjt@google.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steve Muckle <smuckle@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Todd Kjos <tkjos@android.com>
      Cc: Vincent Guittot <vincent.guittot@linaro.org>
      Link: http://lkml.kernel.org/r/20180309095245.11071-4-patrick.bellasi@arm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/fair: Use util_est in LB and WU paths · f9be3e59
      Patrick Bellasi authored
      When the scheduler looks at the CPU utilization, the current PELT value
      for a CPU is returned straight away. In certain scenarios this can have
      undesired side effects on task placement.
      
      For example, since the task utilization is decayed at wakeup time, when
      a long sleeping big task is enqueued it does not immediately add a
      significant contribution to the target CPU.
      As a result we get a race condition where other tasks can be placed
      on the same CPU while it is still considered relatively empty.
      
      In order to reduce these kinds of race conditions, this patch introduces
      the required support to integrate the use of the CPU's estimated
      utilization in the wakeup path, via cpu_util_wake(), as well as in the
      load-balance path, via cpu_util(), which is used by update_sg_lb_stats().
      
      The estimated utilization of a CPU is defined to be the maximum of
      its PELT utilization and the sum of the estimated utilization (at
      previous dequeue time) of all the tasks currently RUNNABLE on that CPU.
      This allows properly representing the spare capacity of a CPU which, for
      example, has just got a big task running after a long sleep period.
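      
      A hedged sketch of cpu_util() with util_est folded in (simplified;
      the real code gates this behind the UTIL_EST scheduler feature):
      
        static unsigned long cpu_util(int cpu)
        {
        	struct cfs_rq *cfs_rq = &cpu_rq(cpu)->cfs;
        	unsigned long util = READ_ONCE(cfs_rq->avg.util_avg);
      
        	util = max_t(unsigned long, util,
        		     READ_ONCE(cfs_rq->avg.util_est.enqueued));
      
        	return min_t(unsigned long, util, capacity_orig_of(cpu));
        }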
      Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Cc: Joel Fernandes <joelaf@google.com>
      Cc: Juri Lelli <juri.lelli@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Morten Rasmussen <morten.rasmussen@arm.com>
      Cc: Paul Turner <pjt@google.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J . Wysocki <rafael.j.wysocki@intel.com>
      Cc: Steve Muckle <smuckle@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Todd Kjos <tkjos@android.com>
      Cc: Vincent Guittot <vincent.guittot@linaro.org>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Link: http://lkml.kernel.org/r/20180309095245.11071-3-patrick.bellasi@arm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/fair: Add util_est on top of PELT · 7f65ea42
      Patrick Bellasi authored
      The util_avg signal computed by PELT is too variable for some use-cases.
      For example, a big task waking up after a long sleep period will have its
      utilization almost completely decayed. This introduces some latency before
      schedutil will be able to pick the best frequency to run a task.
      
      The same issue can affect task placement. Indeed, since the task
      utilization is already decayed at wakeup, when the task is enqueued on a
      CPU, the CPU running that big task can be temporarily represented as
      almost empty. This leads to a race condition where other tasks can
      potentially be allocated on a CPU which just started to run a big task
      that slept for a relatively long period.
      
      Moreover, the PELT utilization of a task can be updated every
      millisecond, making it a continuously changing value for certain
      longer-running tasks. This means that the instantaneous PELT utilization
      of a RUNNING task is not really meaningful for properly supporting
      scheduler decisions.
      
      For all these reasons, a more stable signal can do a better job of
      representing the expected/estimated utilization of a task/cfs_rq.
      Such a signal can be easily created on top of PELT by still using it as
      an estimator which produces values to be aggregated on meaningful
      events.
      
      This patch adds a simple implementation of util_est, a new signal built on
      top of PELT's util_avg where:
      
          util_est(task) = max(task::util_avg, f(task::util_avg@dequeue))
      
      This allows remembering how big a task was reported to be by PELT in its
      previous activations, via f(task::util_avg@dequeue), which is the new
      _task_util_est(struct task_struct*) function added by this patch.
      
      If a task changes its behavior and runs longer in a new
      activation, after a certain time its util_est will just track the
      original PELT signal (i.e. task::util_avg).
      
      The estimated utilization of a cfs_rq is defined only for root ones.
      That's because the only sensible consumers of this signal are the
      scheduler and schedutil, when looking at the overall CPU utilization
      due to FAIR tasks.
      
      For this reason, the estimated utilization of a root cfs_rq is simply
      defined as:
      
          util_est(cfs_rq) = max(cfs_rq::util_avg, cfs_rq::util_est::enqueued)
      
      where:
      
          cfs_rq::util_est::enqueued = sum(_task_util_est(task))
                                       for each RUNNABLE task on that root cfs_rq
      
      It's worth noting that the estimated utilization is tracked only for
      objects of interest, specifically:
      
       - Tasks: to better support task placement decisions
       - root cfs_rqs: to better support both task placement decisions as
                       well as frequency selection (see the sketch below)
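      
      A hedged sketch of the per-entity state this introduces (field names
      as used above; the exact layout and packing may differ):
      
        struct util_est {
        	unsigned int enqueued;	/* estimate snapshotted at dequeue */
        	unsigned int ewma;	/* low-pass filter of dequeue values */
        };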
      Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Cc: Joel Fernandes <joelaf@google.com>
      Cc: Juri Lelli <juri.lelli@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Morten Rasmussen <morten.rasmussen@arm.com>
      Cc: Paul Turner <pjt@google.com>
      Cc: Rafael J . Wysocki <rafael.j.wysocki@intel.com>
      Cc: Steve Muckle <smuckle@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Todd Kjos <tkjos@android.com>
      Cc: Vincent Guittot <vincent.guittot@linaro.org>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Link: http://lkml.kernel.org/r/20180309095245.11071-2-patrick.bellasi@arm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>