1. 23 Feb 2018, 1 commit
    • x86/intel_rdt: Fix incorrect returned value when creating rdtgroup... · 36e74d35
      Authored by Wang Hui
      x86/intel_rdt: Fix incorrect returned value when creating rdtgroup sub-directory in resctrl file system
      
      If no monitoring feature is detected, either because all monitoring features
      were disabled at boot time or because the hardware has none, creating an
      rdtgroup sub-directory with the "mkdir" command reports an error:
      
        mkdir: cannot create directory ‘/sys/fs/resctrl/p1’: No such file or directory
      
      But the sub-directory is actually created and its content is correct:
      
        cpus  cpus_list  schemata  tasks
      
      The error occurs because rdtgroup_mkdir_ctrl_mon() returns a non-zero value
      after the sub-directory has been created, and that value is reported to the
      user as an error.
      
      Clear the return value so that the user is told the sub-directory was
      actually created successfully.
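
      For illustration, here is a minimal standalone sketch of the bug pattern.
      The helper names and the userspace main() are hypothetical, not the actual
      rdtgroup_mkdir_ctrl_mon() code: a stale error code is left in 'ret' when
      the optional monitoring step is skipped, and the fix is to clear it
      explicitly on the success path.

        #include <stdbool.h>
        #include <stdio.h>

        /* Hypothetical stubs standing in for the real directory helpers. */
        static int create_ctrl_dir(void)   { return 0; }  /* pretend this succeeds */
        static int create_mon_subdir(void) { return 0; }  /* pretend this succeeds */

        static int mkdir_ctrl_mon(bool mon_capable)
        {
            int ret;

            ret = create_ctrl_dir();
            if (ret)
                return ret;

            ret = -2;                  /* stale error left over from an earlier step */
            if (mon_capable) {
                ret = create_mon_subdir();
                if (ret)
                    return ret;        /* a real failure still propagates */
            }

            ret = 0;                   /* the fix: report success explicitly */
            return ret;
        }

        int main(void)
        {
            /* Without the "ret = 0" line this prints -2 even though the
             * directory was created successfully. */
            printf("mkdir_ctrl_mon() = %d\n", mkdir_ctrl_mon(false));
            return 0;
        }
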
      Signed-off-by: Wang Hui <john.wanghui@huawei.com>
      Signed-off-by: Zhang Yanfei <yanfei.zhang@huawei.com>
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ravi V Shankar <ravi.v.shankar@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vikas <vikas.shivappa@intel.com>
      Cc: Xiaochen Shen <xiaochen.shen@intel.com>
      Link: http://lkml.kernel.org/r/1519356363-133085-1-git-send-email-fenghua.yu@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  2. 18 Jan 2018, 1 commit
  3. 21 Oct 2017, 2 commits
    • x86/intel_rdt: Fix potential deadlock during resctrl mount · 87943db7
      Authored by Reinette Chatre
      Sai reported a warning during some MBA tests:
      
      [  236.755559] ======================================================
      [  236.762443] WARNING: possible circular locking dependency detected
      [  236.769328] 4.14.0-rc4-yocto-standard #8 Not tainted
      [  236.774857] ------------------------------------------------------
      [  236.781738] mount/10091 is trying to acquire lock:
      [  236.787071]  (cpu_hotplug_lock.rw_sem){++++}, at: [<ffffffff8117f892>] static_key_enable+0x12/0x30
      [  236.797058]
                     but task is already holding lock:
      [  236.803552]  (&type->s_umount_key#37/1){+.+.}, at: [<ffffffff81208b2f>] sget_userns+0x32f/0x520
      [  236.813247]
                     which lock already depends on the new lock.
      
      [  236.822353]
                     the existing dependency chain (in reverse order) is:
      [  236.830686]
                     -> #4 (&type->s_umount_key#37/1){+.+.}:
      [  236.837756]        __lock_acquire+0x1100/0x11a0
      [  236.842799]        lock_acquire+0xdf/0x1d0
      [  236.847363]        down_write_nested+0x46/0x80
      [  236.852310]        sget_userns+0x32f/0x520
      [  236.856873]        kernfs_mount_ns+0x7e/0x1f0
      [  236.861728]        rdt_mount+0x30c/0x440
      [  236.866096]        mount_fs+0x38/0x150
      [  236.870262]        vfs_kern_mount+0x67/0x150
      [  236.875015]        do_mount+0x1df/0xd50
      [  236.879286]        SyS_mount+0x95/0xe0
      [  236.883464]        entry_SYSCALL_64_fastpath+0x18/0xad
      [  236.889183]
                     -> #3 (rdtgroup_mutex){+.+.}:
      [  236.895292]        __lock_acquire+0x1100/0x11a0
      [  236.900337]        lock_acquire+0xdf/0x1d0
      [  236.904899]        __mutex_lock+0x80/0x8f0
      [  236.909459]        mutex_lock_nested+0x1b/0x20
      [  236.914407]        intel_rdt_online_cpu+0x3b/0x4a0
      [  236.919745]        cpuhp_invoke_callback+0xce/0xb80
      [  236.925177]        cpuhp_thread_fun+0x1c5/0x230
      [  236.930222]        smpboot_thread_fn+0x11a/0x1e0
      [  236.935362]        kthread+0x152/0x190
      [  236.939536]        ret_from_fork+0x27/0x40
      [  236.944097]
                     -> #2 (cpuhp_state-up){+.+.}:
      [  236.950199]        __lock_acquire+0x1100/0x11a0
      [  236.955241]        lock_acquire+0xdf/0x1d0
      [  236.959800]        cpuhp_issue_call+0x12e/0x1c0
      [  236.964845]        __cpuhp_setup_state_cpuslocked+0x13b/0x2f0
      [  236.971242]        __cpuhp_setup_state+0xa7/0x120
      [  236.976483]        page_writeback_init+0x43/0x67
      [  236.981623]        pagecache_init+0x38/0x3b
      [  236.986281]        start_kernel+0x3c6/0x41a
      [  236.990931]        x86_64_start_reservations+0x2a/0x2c
      [  236.996650]        x86_64_start_kernel+0x72/0x75
      [  237.001793]        verify_cpu+0x0/0xfb
      [  237.005966]
                     -> #1 (cpuhp_state_mutex){+.+.}:
      [  237.012364]        __lock_acquire+0x1100/0x11a0
      [  237.017408]        lock_acquire+0xdf/0x1d0
      [  237.021969]        __mutex_lock+0x80/0x8f0
      [  237.026527]        mutex_lock_nested+0x1b/0x20
      [  237.031475]        __cpuhp_setup_state_cpuslocked+0x54/0x2f0
      [  237.037777]        __cpuhp_setup_state+0xa7/0x120
      [  237.043013]        page_alloc_init+0x28/0x30
      [  237.047769]        start_kernel+0x148/0x41a
      [  237.052425]        x86_64_start_reservations+0x2a/0x2c
      [  237.058145]        x86_64_start_kernel+0x72/0x75
      [  237.063284]        verify_cpu+0x0/0xfb
      [  237.067456]
                     -> #0 (cpu_hotplug_lock.rw_sem){++++}:
      [  237.074436]        check_prev_add+0x401/0x800
      [  237.079286]        __lock_acquire+0x1100/0x11a0
      [  237.084330]        lock_acquire+0xdf/0x1d0
      [  237.088890]        cpus_read_lock+0x42/0x90
      [  237.093548]        static_key_enable+0x12/0x30
      [  237.098496]        rdt_mount+0x406/0x440
      [  237.102862]        mount_fs+0x38/0x150
      [  237.107035]        vfs_kern_mount+0x67/0x150
      [  237.111787]        do_mount+0x1df/0xd50
      [  237.116058]        SyS_mount+0x95/0xe0
      [  237.120233]        entry_SYSCALL_64_fastpath+0x18/0xad
      [  237.125952]
                     other info that might help us debug this:
      
      [  237.134867] Chain exists of:
                       cpu_hotplug_lock.rw_sem --> rdtgroup_mutex --> &type->s_umount_key#37/1
      
      [  237.148425]  Possible unsafe locking scenario:
      
      [  237.155015]        CPU0                    CPU1
      [  237.160057]        ----                    ----
      [  237.165100]   lock(&type->s_umount_key#37/1);
      [  237.169952]                                lock(rdtgroup_mutex);
       [  237.176641]                                lock(&type->s_umount_key#37/1);
      [  237.184287]   lock(cpu_hotplug_lock.rw_sem);
      [  237.189041]
                      *** DEADLOCK ***
      
      When the resctrl filesystem is mounted, the locks must be acquired in the
      same order as when the CPUs came online:

           cpu_hotplug_lock before rdtgroup_mutex.

      This also requires switching the static_branch_enable() calls to the
      _cpuslocked variant, because the cpu hotplug lock is already held there.
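
      A rough sketch of the corrected ordering on the mount path (simplified,
      not the exact rdt_mount() code; do_the_actual_mount() is a hypothetical
      placeholder, the remaining calls are existing kernel APIs):

        /* Sketch only: kernel context assumed, error handling abridged. */
        static struct dentry *rdt_mount_sketch(void)
        {
                struct dentry *dentry;

                cpus_read_lock();                    /* cpu_hotplug_lock first ... */
                mutex_lock(&rdtgroup_mutex);         /* ... then rdtgroup_mutex    */

                dentry = do_the_actual_mount();      /* hypothetical placeholder   */

                /* The hotplug lock is already held, so the _cpuslocked variant
                 * of static_branch_enable() must be used here. */
                if (!IS_ERR(dentry))
                        static_branch_enable_cpuslocked(&rdt_enable_key);

                mutex_unlock(&rdtgroup_mutex);
                cpus_read_unlock();

                return dentry;
        }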
      
      [ tglx: Switched to cpus_read_[un]lock ]
      Reported-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
      Tested-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Acked-by: Vikas Shivappa <vikas.shivappa@linux.intel.com>
      Cc: fenghua.yu@intel.com
      Cc: tony.luck@intel.com
      Link: https://lkml.kernel.org/r/9c41b91bc2f47d9e95b62b213ecdb45623c47a9f.1508490116.git.reinette.chatre@intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • x86/intel_rdt: Fix potential deadlock during resctrl unmount · 36b6f9fc
      Authored by Reinette Chatre
      Lockdep warns about a potential deadlock:
      
      [   66.782842] ======================================================
      [   66.782888] WARNING: possible circular locking dependency detected
      [   66.782937] 4.14.0-rc2-test-test+ #48 Not tainted
      [   66.782983] ------------------------------------------------------
      [   66.783052] umount/336 is trying to acquire lock:
      [   66.783117]  (cpu_hotplug_lock.rw_sem){++++}, at: [<ffffffff81032395>] rdt_kill_sb+0x215/0x390
      [   66.783193]
                     but task is already holding lock:
      [   66.783244]  (rdtgroup_mutex){+.+.}, at: [<ffffffff810321b6>] rdt_kill_sb+0x36/0x390
      [   66.783305]
                     which lock already depends on the new lock.
      
      [   66.783364]
                     the existing dependency chain (in reverse order) is:
      [   66.783419]
                     -> #3 (rdtgroup_mutex){+.+.}:
      [   66.783467]        __lock_acquire+0x1293/0x13f0
      [   66.783509]        lock_acquire+0xaf/0x220
      [   66.783543]        __mutex_lock+0x71/0x9b0
      [   66.783575]        mutex_lock_nested+0x1b/0x20
      [   66.783610]        intel_rdt_online_cpu+0x3b/0x430
      [   66.783649]        cpuhp_invoke_callback+0xab/0x8e0
      [   66.783687]        cpuhp_thread_fun+0x7a/0x150
      [   66.783722]        smpboot_thread_fn+0x1cc/0x270
      [   66.783764]        kthread+0x16e/0x190
      [   66.783794]        ret_from_fork+0x27/0x40
      [   66.783825]
                     -> #2 (cpuhp_state){+.+.}:
      [   66.783870]        __lock_acquire+0x1293/0x13f0
      [   66.783906]        lock_acquire+0xaf/0x220
      [   66.783938]        cpuhp_issue_call+0x102/0x170
      [   66.783974]        __cpuhp_setup_state_cpuslocked+0x154/0x2a0
      [   66.784023]        __cpuhp_setup_state+0xc7/0x170
      [   66.784061]        page_writeback_init+0x43/0x67
      [   66.784097]        pagecache_init+0x43/0x4a
      [   66.784131]        start_kernel+0x3ad/0x3f7
      [   66.784165]        x86_64_start_reservations+0x2a/0x2c
      [   66.784204]        x86_64_start_kernel+0x72/0x75
      [   66.784241]        verify_cpu+0x0/0xfb
      [   66.784270]
                     -> #1 (cpuhp_state_mutex){+.+.}:
      [   66.784319]        __lock_acquire+0x1293/0x13f0
      [   66.784355]        lock_acquire+0xaf/0x220
      [   66.784387]        __mutex_lock+0x71/0x9b0
      [   66.784419]        mutex_lock_nested+0x1b/0x20
      [   66.784454]        __cpuhp_setup_state_cpuslocked+0x52/0x2a0
      [   66.784497]        __cpuhp_setup_state+0xc7/0x170
      [   66.784535]        page_alloc_init+0x28/0x30
      [   66.784569]        start_kernel+0x148/0x3f7
      [   66.784602]        x86_64_start_reservations+0x2a/0x2c
      [   66.784642]        x86_64_start_kernel+0x72/0x75
      [   66.784678]        verify_cpu+0x0/0xfb
      [   66.784707]
                     -> #0 (cpu_hotplug_lock.rw_sem){++++}:
      [   66.784759]        check_prev_add+0x32f/0x6e0
      [   66.784794]        __lock_acquire+0x1293/0x13f0
      [   66.784830]        lock_acquire+0xaf/0x220
      [   66.784863]        cpus_read_lock+0x3d/0xb0
      [   66.784896]        rdt_kill_sb+0x215/0x390
      [   66.784930]        deactivate_locked_super+0x3e/0x70
      [   66.784968]        deactivate_super+0x40/0x60
      [   66.785003]        cleanup_mnt+0x3f/0x80
      [   66.785034]        __cleanup_mnt+0x12/0x20
      [   66.785070]        task_work_run+0x8b/0xc0
      [   66.785103]        exit_to_usermode_loop+0x94/0xa0
      [   66.786804]        syscall_return_slowpath+0xe8/0x150
      [   66.788502]        entry_SYSCALL_64_fastpath+0xab/0xad
      [   66.790194]
                     other info that might help us debug this:
      
      [   66.795139] Chain exists of:
                       cpu_hotplug_lock.rw_sem --> cpuhp_state --> rdtgroup_mutex
      
      [   66.800035]  Possible unsafe locking scenario:
      
      [   66.803267]        CPU0                    CPU1
      [   66.804867]        ----                    ----
      [   66.806443]   lock(rdtgroup_mutex);
      [   66.808002]                                lock(cpuhp_state);
      [   66.809565]                                lock(rdtgroup_mutex);
      [   66.811110]   lock(cpu_hotplug_lock.rw_sem);
      [   66.812608]
                      *** DEADLOCK ***
      
      [   66.816983] 2 locks held by umount/336:
      [   66.818418]  #0:  (&type->s_umount_key#35){+.+.}, at: [<ffffffff81229738>] deactivate_super+0x38/0x60
      [   66.819922]  #1:  (rdtgroup_mutex){+.+.}, at: [<ffffffff810321b6>] rdt_kill_sb+0x36/0x390
      
      When the resctrl filesystem is unmounted, the locks must be acquired in the
      same order as when the CPUs came online:

            cpu_hotplug_lock before rdtgroup_mutex.

      This also requires switching the static_branch_disable() calls to the
      _cpuslocked variant, because the cpu hotplug lock is already held there.
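
      A rough sketch of the corrected ordering on the unmount path (simplified,
      not the exact rdt_kill_sb() code; tear_down_resctrl_state() is a
      hypothetical placeholder, the remaining calls are existing kernel APIs):

        /* Sketch only: kernel context assumed, error handling abridged. */
        static void rdt_kill_sb_sketch(struct super_block *sb)
        {
                cpus_read_lock();                    /* cpu_hotplug_lock first ... */
                mutex_lock(&rdtgroup_mutex);         /* ... then rdtgroup_mutex    */

                tear_down_resctrl_state();           /* hypothetical placeholder   */

                /* Hotplug lock already held: use the _cpuslocked variant. */
                static_branch_disable_cpuslocked(&rdt_enable_key);

                mutex_unlock(&rdtgroup_mutex);
                cpus_read_unlock();

                kernfs_kill_sb(sb);
        }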
      
      [ tglx: Switched to cpus_read_[un]lock ]
      Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Acked-by: Vikas Shivappa <vikas.shivappa@linux.intel.com>
      Acked-by: Fenghua Yu <fenghua.yu@intel.com>
      Acked-by: Tony Luck <tony.luck@intel.com>
      Link: https://lkml.kernel.org/r/cc292e76be073f7260604651711c47b09fd0dc81.1508490116.git.reinette.chatre@intel.com
  4. 05 Oct 2017, 1 commit
  5. 27 Sep 2017, 4 commits
  6. 16 Aug 2017, 2 commits
  7. 14 Aug 2017, 1 commit
  8. 02 Aug 2017, 19 commits
  9. 01 Jul 2017, 1 commit
  10. 14 Apr 2017, 4 commits
  11. 11 Apr 2017, 1 commit
    • x86/intel_rdt: Add cpus_list rdtgroup file · 4ffa3c97
      Authored by Jiri Olsa
      The resource control filesystem provides only a bitmask-based cpus file for
      assigning CPUs to a resource group. That is cumbersome with large cpumasks
      and non-intuitive when modifying the file from the command line.

      Range-based CPU lists are commonly used alongside bitmask-based cpu files
      in various subsystems throughout the kernel.

      Add a 'cpus_list' file which is CPU range based.
      
        # cd /sys/fs/resctrl/
        # echo 1-10 > krava/cpus_list
        # cat krava/cpus_list
        1-10
        # cat krava/cpus
        0007fe
        # cat cpus
        fffff9
        # cat cpus_list
        0,3-23
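
      Printing and parsing for such a pair of files can be handled by the
      kernel's existing cpumask helpers; a simplified sketch (the function names
      here are illustrative, not the actual rdtgroup.c code):

        /* Show a cpumask either as a hex bitmask or as a range list. */
        static int cpus_show_sketch(struct seq_file *s, struct cpumask *mask,
                                    bool as_list)
        {
                seq_printf(s, as_list ? "%*pbl\n" : "%*pb\n",
                           cpumask_pr_args(mask));
                return 0;
        }

        /* Parse user input: "1-10" style for cpus_list, hex mask for cpus. */
        static int cpus_parse_sketch(const char __user *buf, size_t count,
                                     struct cpumask *mask, bool as_list)
        {
                return as_list ? cpumask_parselist_user(buf, count, mask)
                               : cpumask_parse_user(buf, count, mask);
        }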
      
      [ tglx: Massaged changelog and replaced "bitmask lists" by "CPU ranges" ]
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Shaohua Li <shli@fb.com>
      Link: http://lkml.kernel.org/r/20170410145232.GF25354@krava
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  12. 15 Mar 2017, 1 commit
    • x86/intel_rdt: Put group node in rdtgroup_kn_unlock · 49ec8f5b
      Authored by Jiri Olsa
      rdtgroup_kn_unlock() waits for the last user to release and put its node.
      But it calls kernfs_put() on the node through which rdtgroup_kn_unlock()
      was invoked, which might not be the group's directory node but another
      group's file node.
      
      This race can easily be reproduced by running two instances of the
      following script:
      
        mount -t resctrl resctrl /sys/fs/resctrl/
        pushd /sys/fs/resctrl/
        mkdir krava
        echo "krava" > krava/schemata
        rmdir krava
        popd
        umount  /sys/fs/resctrl
      
      It triggers the SLUB debug error message when booting with the following
      command-line option: slub_debug=,kernfs_node_cache.

      Fix it by calling kernfs_put() on the group's own directory node.
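
      An abridged sketch of the fixed unlock path (simplified; the NULL check
      and other details of the real rdtgroup_kn_unlock() are omitted):

        void rdtgroup_kn_unlock_sketch(struct kernfs_node *kn)
        {
                struct rdtgroup *rdtgrp = kernfs_to_rdtgroup(kn);

                mutex_unlock(&rdtgroup_mutex);

                if (atomic_dec_and_test(&rdtgrp->waitcount) &&
                    (rdtgrp->flags & RDT_DELETED)) {
                        kernfs_unbreak_active_protection(kn);
                        /*
                         * Put the group's own directory node, not 'kn', which
                         * may be a file node belonging to a different group.
                         */
                        kernfs_put(rdtgrp->kn);
                        kfree(rdtgrp);
                }
        }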
      
      Fixes: 60cf5e10 ("x86/intel_rdt: Add mkdir to resctrl file system")
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Shaohua Li <shli@fb.com>
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/1489501253-20248-1-git-send-email-jolsa@kernel.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  13. 02 Mar 2017, 2 commits