1. 27 Sep, 2017: 2 commits
  2. 16 Aug, 2017: 2 commits
  3. 14 Aug, 2017: 1 commit
  4. 02 Aug, 2017: 19 commits
  5. 01 Jul, 2017: 1 commit
  6. 14 Apr, 2017: 4 commits
  7. 11 Apr, 2017: 1 commit
    • x86/intel_rdt: Add cpus_list rdtgroup file · 4ffa3c97
      Authored by Jiri Olsa
      The resource control filesystem provides only a bitmask based cpus file for
      assigning CPUs to a resource group. That's cumbersome with large cpumasks
      and non-intuitive when modifying the file from the command line.
      
      Range based cpu lists are commonly used along with bitmask based cpu files
      in various subsystems throughout the kernel.
      
      Add a 'cpus_list' file which is CPU range based; a sketch of how one show
      handler can emit both formats follows this entry.
      
        # cd /sys/fs/resctrl/
        # echo 1-10 > krava/cpus_list
        # cat krava/cpus_list
        1-10
        # cat krava/cpus
        0007fe
        # cat cpus
        fffff9
        # cat cpus_list
        0,3-23
      
      [ tglx: Massaged changelog and replaced "bitmask lists" by "CPU ranges" ]
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Shaohua Li <shli@fb.com>
      Link: http://lkml.kernel.org/r/20170410145232.GF25354@krava
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
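      The patch itself is not reproduced here, but the idea can be sketched: a single
      seq_file show handler can emit either representation by switching between the
      kernel's "%*pb" (hex bitmask) and "%*pbl" (range list) bitmap format specifiers,
      which is what produces the 0007fe / 1-10 pair shown above. In the sketch below
      the struct layout, the is_cpu_list() helper and the use of kn->priv are
      illustrative assumptions; seq_printf(), cpumask_pr_args() and the two format
      specifiers are real kernel APIs.

        /*
         * Sketch only: one show handler serving both "cpus" (hex bitmask)
         * and "cpus_list" (CPU ranges). Helper names and struct layout are
         * illustrative, not the exact resctrl code.
         */
        #include <linux/seq_file.h>
        #include <linux/cpumask.h>
        #include <linux/kernfs.h>
        #include <linux/string.h>

        struct rdtgroup_sketch {
            struct cpumask cpu_mask;        /* CPUs owned by this resource group */
        };

        /* Illustrative: pick the output format from the opened file's name. */
        static bool is_cpu_list(struct kernfs_open_file *of)
        {
            return !strcmp(of->kn->name, "cpus_list");
        }

        static int rdtgroup_cpus_show(struct kernfs_open_file *of,
                                      struct seq_file *s, void *v)
        {
            struct rdtgroup_sketch *rdtgrp = of->kn->priv;

            /*
             * "%*pbl" prints a range list ("1-10"), "%*pb" prints the hex
             * bitmask ("0007fe"); both consume the (nr_cpu_ids, bits) pair
             * produced by cpumask_pr_args().
             */
            seq_printf(s, is_cpu_list(of) ? "%*pbl\n" : "%*pb\n",
                       cpumask_pr_args(&rdtgrp->cpu_mask));
            return 0;
        }

      The write side would mirror this, parsing the range form with cpulist_parse()
      and the bitmask form with cpumask_parse().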
  8. 15 Mar, 2017: 1 commit
    • x86/intel_rdt: Put group node in rdtgroup_kn_unlock · 49ec8f5b
      Authored by Jiri Olsa
      rdtgroup_kn_unlock() waits for the last user to release and put its node,
      but it calls kernfs_put() on the node through which rdtgroup_kn_unlock()
      was entered. That node might not be the group's directory node, but a file
      node, such as the group's schemata file.
      
      This race can easily be reproduced by running two instances
      of the following script:
      
        mount -t resctrl resctrl /sys/fs/resctrl/
        pushd /sys/fs/resctrl/
        mkdir krava
        echo "krava" > krava/schemata
        rmdir krava
        popd
        umount  /sys/fs/resctrl
      
      It triggers a SLUB debug error when the kernel is booted with the following
      command line option: slub_debug=,kernfs_node_cache.
      
      Fix it by calling kernfs_put() on the group's own directory node (see the
      sketch after this entry).
      
      Fixes: 60cf5e10 ("x86/intel_rdt: Add mkdir to resctrl file system")
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Shaohua Li <shli@fb.com>
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/1489501253-20248-1-git-send-email-jolsa@kernel.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
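      For reference, a hedged sketch of the shape of the fix: the group keeps a
      pointer to its own directory node, and the deferred release drops that
      reference instead of whatever node the caller entered through. In the real
      code the group is derived from the node rather than passed in, and all field
      and flag names below are illustrative, so treat the details as a sketch only.

        /*
         * Sketch of the unlock path (not the verbatim patch). @kn is the node
         * the caller entered through, possibly a file node such as "schemata";
         * rdtgrp->kn is the group's directory node, the one referenced when
         * the group was created.
         */
        #include <linux/kernfs.h>
        #include <linux/mutex.h>
        #include <linux/slab.h>
        #include <linux/atomic.h>

        #define RDT_DELETED 1                   /* illustrative flag value */

        struct rdtgroup_sketch {
            struct kernfs_node *kn;             /* group directory node */
            atomic_t waitcount;                 /* users still holding the group */
            int flags;
        };

        static DEFINE_MUTEX(rdtgroup_mutex);

        static void rdtgroup_kn_unlock(struct kernfs_node *kn,
                                       struct rdtgroup_sketch *rdtgrp)
        {
            mutex_unlock(&rdtgroup_mutex);

            if (atomic_dec_and_test(&rdtgrp->waitcount) &&
                (rdtgrp->flags & RDT_DELETED)) {
                kernfs_unbreak_active_protection(kn);
                /*
                 * The bug was kernfs_put(kn) here: when kn is a file node,
                 * the directory reference taken at mkdir time leaks and the
                 * file node is dropped once too often. Put the group's own
                 * directory node instead.
                 */
                kernfs_put(rdtgrp->kn);
                kfree(rdtgrp);
            } else {
                kernfs_unbreak_active_protection(kn);
            }
        }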
  9. 02 Mar, 2017: 2 commits
  10. 01 Mar, 2017: 1 commit
  11. 09 Dec, 2016: 1 commit
  12. 02 Dec, 2016: 1 commit
  13. 28 Nov, 2016: 2 commits
  14. 16 Nov, 2016: 2 commits
    • x86/intel_rdt: Update percpu closid immediately on CPUs affected by change · f4107702
      Authored by Fenghua Yu
      If CPUs are moved to or removed from an rdtgroup, the percpu closid storage
      is updated. If tasks running on an affected CPU use the percpu closid, the
      PQR_ASSOC MSR is only updated when the task goes through a context switch.
      Until then the CPU operates with the wrong closid, and this state can
      persist for an unbounded time.
          
      Make the change effective immediately by invoking an SMP function call on
      the affected CPUs. The call stores the new closid in the percpu storage and
      invokes rdt_sched_in(), which updates the MSR if the current task uses the
      percpu closid (see the sketch after this entry).
      
      [ tglx: Made it work and massaged changelog once more ]
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Cc: "Ravi V Shankar" <ravi.v.shankar@intel.com>
      Cc: "Tony Luck" <tony.luck@intel.com>
      Cc: "Sai Prakhya" <sai.praneeth.prakhya@intel.com>
      Cc: "Vikas Shivappa" <vikas.shivappa@linux.intel.com>
      Cc: "Ingo Molnar" <mingo@elte.hu>
      Cc: "H. Peter Anvin" <h.peter.anvin@intel.com>
      Link: http://lkml.kernel.org/r/1478912558-55514-3-git-send-email-fenghua.yu@intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
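      A sketch of this mechanism, built on the generic on_each_cpu_mask() SMP call
      API, is below. The percpu variable cpu_closid and the helper names are
      illustrative; rdt_sched_in() is the function named in the changelog and is
      assumed to rewrite PQR_ASSOC from the percpu state for the current task.

        /*
         * Sketch: push a new closid to every CPU in @cpu_mask and make it
         * effective immediately instead of waiting for the next context
         * switch on those CPUs.
         */
        #include <linux/smp.h>
        #include <linux/cpumask.h>
        #include <linux/percpu.h>

        static DEFINE_PER_CPU(int, cpu_closid);  /* illustrative percpu storage */

        void rdt_sched_in(void);        /* per changelog: updates the PQR_ASSOC MSR */

        /* Runs on each affected CPU via IPI (and locally if included). */
        static void update_cpu_closid(void *info)
        {
            this_cpu_write(cpu_closid, *(int *)info);
            /*
             * Re-evaluate right away: if the current task uses the percpu
             * closid, PQR_ASSOC is rewritten with the new value.
             */
            rdt_sched_in();
        }

        /* Install @closid on all CPUs in @cpu_mask before returning. */
        static void update_closid_on_cpus(const struct cpumask *cpu_mask, int closid)
        {
            on_each_cpu_mask(cpu_mask, update_cpu_closid, &closid, 1);
        }

      Passing 1 for the wait argument ensures that no affected CPU is still running
      with the old closid when the caller proceeds.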
    • x86/intel_rdt: Reset per cpu closids on unmount · c7cc0cc1
      Authored by Fenghua Yu
      All CPUs in an rdtgroup are given back to the default rdtgroup before the
      rdtgroup is removed during umount. After umount, the default rdtgroup
      contains all online CPUs, but the percpu closids are not cleared. As a
      result, the stale closid values would be used immediately after the next
      mount.
      
      Move all CPUs to the default group and update the percpu closid storage
      (see the sketch after this entry).
      
      [ tglx: Massaged changelog ]
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Cc: "Ravi V Shankar" <ravi.v.shankar@intel.com>
      Cc: "Tony Luck" <tony.luck@intel.com>
      Cc: "Sai Prakhya" <sai.praneeth.prakhya@intel.com>
      Cc: "Vikas Shivappa" <vikas.shivappa@linux.intel.com>
      Cc: "Ingo Molnar" <mingo@elte.hu>
      Cc: "H. Peter Anvin" <h.peter.anvin@intel.com>
      Link: http://lkml.kernel.org/r/1478912558-55514-2-git-send-email-fenghua.yu@intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
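      The unmount cleanup can reuse the same IPI pattern; a compact, self-contained
      sketch with illustrative names (default_group_cpus standing in for the default
      resource group's mask) is below.

        /*
         * Sketch: on unmount, hand every online CPU back to the default group
         * and clear the percpu closid so the next mount starts from closid 0.
         */
        #include <linux/smp.h>
        #include <linux/cpumask.h>
        #include <linux/percpu.h>

        static DEFINE_PER_CPU(int, cpu_closid);   /* illustrative percpu storage */

        static struct cpumask default_group_cpus; /* stand-in for the default rdtgroup */

        static void clear_closid(void *unused)
        {
            this_cpu_write(cpu_closid, 0);        /* 0 is the default closid */
        }

        static void rdt_reset_cpus_on_umount(void)
        {
            /* The default group owns every online CPU again ... */
            cpumask_copy(&default_group_cpus, cpu_online_mask);
            /* ... and none of them keeps a stale closid across the unmount. */
            on_each_cpu(clear_closid, NULL, 1);
        }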