1. 20 April 2015, 4 commits
  2. 09 April 2015, 1 commit
    • Defer processing of REQ_PREEMPT requests for blocked devices · bba0bdd7
      Bart Van Assche authored
      SCSI transport drivers and SCSI LLDs block a SCSI device if the
      transport layer is not operational. In that state no requests should
      be processed, even if the REQ_PREEMPT flag has been set. This patch
      prevents a rescan shortly after a cable pull from sporadically
      triggering the following kernel oops:
      
      BUG: unable to handle kernel paging request at ffffc9001a6bc084
      IP: [<ffffffffa04e08f2>] mlx4_ib_post_send+0xd2/0xb30 [mlx4_ib]
      Process rescan-scsi-bus (pid: 9241, threadinfo ffff88053484a000, task ffff880534aae100)
      Call Trace:
       [<ffffffffa0718135>] srp_post_send+0x65/0x70 [ib_srp]
       [<ffffffffa071b9df>] srp_queuecommand+0x1cf/0x3e0 [ib_srp]
       [<ffffffffa0001ff1>] scsi_dispatch_cmd+0x101/0x280 [scsi_mod]
       [<ffffffffa0009ad1>] scsi_request_fn+0x411/0x4d0 [scsi_mod]
       [<ffffffff81223b37>] __blk_run_queue+0x27/0x30
       [<ffffffff8122a8d2>] blk_execute_rq_nowait+0x82/0x110
       [<ffffffff8122a9c2>] blk_execute_rq+0x62/0xf0
       [<ffffffffa000b0e8>] scsi_execute+0xe8/0x190 [scsi_mod]
       [<ffffffffa000b2f3>] scsi_execute_req+0xa3/0x130 [scsi_mod]
       [<ffffffffa000c1aa>] scsi_probe_lun+0x17a/0x450 [scsi_mod]
       [<ffffffffa000ce86>] scsi_probe_and_add_lun+0x156/0x480 [scsi_mod]
       [<ffffffffa000dc2f>] __scsi_scan_target+0xdf/0x1f0 [scsi_mod]
       [<ffffffffa000dfa3>] scsi_scan_host_selected+0x183/0x1c0 [scsi_mod]
       [<ffffffffa000edfb>] scsi_scan+0xdb/0xe0 [scsi_mod]
       [<ffffffffa000ee13>] store_scan+0x13/0x20 [scsi_mod]
       [<ffffffff811c8d9b>] sysfs_write_file+0xcb/0x160
       [<ffffffff811589de>] vfs_write+0xce/0x140
       [<ffffffff81158b53>] sys_write+0x53/0xa0
       [<ffffffff81464592>] system_call_fastpath+0x16/0x1b
       [<00007f611c9d9300>] 0x7f611c9d92ff
      Reported-by: Max Gurtuvoy <maxg@mellanox.com>
      Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
      Reviewed-by: Mike Christie <michaelc@cs.wisc.edu>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: James Bottomley <JBottomley@Odin.com>
      bba0bdd7
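      For context, a minimal sketch of the kind of state-check change this implies is
      shown below. It is an approximation written from the description above, not the
      verbatim patch; the function name and case labels merely follow the usual
      drivers/scsi/scsi_lib.c conventions.

        /* Approximation only: a blocked device now defers every request,
         * including REQ_PREEMPT ones, instead of letting them through. */
        static int scsi_prep_state_check_sketch(struct scsi_device *sdev,
                                                struct request *req)
        {
            switch (sdev->sdev_state) {
            case SDEV_QUIESCE:
                /* A quiesced device still accepts REQ_PREEMPT requests. */
                if (!(req->cmd_flags & REQ_PREEMPT))
                    return BLKPREP_DEFER;
                break;
            case SDEV_BLOCK:
            case SDEV_CREATED_BLOCK:
                /* Transport not operational: defer even REQ_PREEMPT. */
                return BLKPREP_DEFER;
            default:
                break;
            }
            return BLKPREP_OK;
        }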
  3. 08 April 2015, 2 commits
    • include/linux/dmapool.h: declare struct device · ce66b032
      Mark Brown authored
      dmapool uses struct device in its function arguments but relies on an
      implicit inclusion to declare struct device, which causes warnings in
      some configurations:
      
        include/linux/dmapool.h:31:7: warning: 'struct device' declared inside parameter list
      
      Fix this by adding a struct device declaration to the file.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ce66b032
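      The fix itself is a one-line forward declaration. A sketch of the pattern, with
      the header contents abbreviated and the guard name made up for illustration:

        /* A forward declaration is enough because the prototypes only ever
         * use pointers to struct device; no full definition is required. */
        #ifndef EXAMPLE_DMAPOOL_H
        #define EXAMPLE_DMAPOOL_H

        #include <linux/types.h>

        struct device;          /* the declaration added by the patch */
        struct dma_pool;        /* opaque to users of the header */

        struct dma_pool *dma_pool_create(const char *name, struct device *dev,
                                         size_t size, size_t align, size_t boundary);
        void dma_pool_destroy(struct dma_pool *pool);

        #endif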
    • mm: move zone lock to a different cache line than order-0 free page lists · a368ab67
      Mel Gorman authored
      Huang Ying reported the following problem, caused by commit 3484b2de ("mm:
      rearrange zone fields into read-only, page alloc, statistics and page
      reclaim lines"), in the Intel performance tests:
      
          24b7e581          3484b2de
          ----------------  --------------------------
                   %stddev     %change         %stddev
                       \          |                \
              152288 ±  0%     -46.2%      81911 ±  0%  aim7.jobs-per-min
                 237 ±  0%     +85.6%        440 ±  0%  aim7.time.elapsed_time
                 237 ±  0%     +85.6%        440 ±  0%  aim7.time.elapsed_time.max
               25026 ±  0%     +70.7%      42712 ±  0%  aim7.time.system_time
             2186645 ±  5%     +32.0%    2885949 ±  4%  aim7.time.voluntary_context_switches
             4576561 ±  1%     +24.9%    5715773 ±  0%  aim7.time.involuntary_context_switches
      
      The problem is specific to very large machines under stress.  It was not
      reproducible with the machines I had used to justify the original patch
      because large numbers of CPUs are required.  When pressure is high enough,
      the cache line is bouncing between CPUs trying to acquire the lock and the
      holder of the lock adjusting free lists.  The intention was that the
      acquirer of the lock would automatically have the cache line holding the
      free lists but according to Huang, this is not a universal win.
      
      One possibility is to move the zone lock to its own cache line, but that
      increases the size of the zone.  This patch instead moves the lock to the
      other end of the free lists, where it does not contend with them under
      high pressure.  It does mean the page allocator paths now require more
      cache lines, but Huang reports that it restores performance to previous
      levels on large machines:
      
                   %stddev     %change         %stddev
                       \          |                \
               84568 ±  1%     +94.3%     164280 ±  1%  aim7.jobs-per-min
             2881944 ±  2%     -35.1%    1870386 ±  8%  aim7.time.voluntary_context_switches
                 681 ±  1%      -3.4%        658 ±  0%  aim7.time.user_time
             5538139 ±  0%     -12.1%    4867884 ±  0%  aim7.time.involuntary_context_switches
               44174 ±  1%     -46.0%      23848 ±  1%  aim7.time.system_time
                 426 ±  1%     -48.4%        219 ±  1%  aim7.time.elapsed_time
                 426 ±  1%     -48.4%        219 ±  1%  aim7.time.elapsed_time.max
                 468 ±  1%     -43.1%        266 ±  2%  uptime.boot
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reported-by: Huang Ying <ying.huang@intel.com>
      Tested-by: Huang Ying <ying.huang@intel.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a368ab67
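      An abbreviated, approximate view of the resulting struct zone layout (the real
      structure in include/linux/mmzone.h has many more fields; the placement here is
      reconstructed from the description above):

        struct zone {
            /* ... read-mostly fields ... */

            /* free areas of different sizes */
            struct free_area    free_area[MAX_ORDER];

            /* zone flags, see below */
            unsigned long       flags;

            /* Write-intensive fields used from the page allocator: the lock
             * now sits on the far side of the free lists rather than on the
             * same cache line as the order-0 free list. */
            spinlock_t          lock;

            ZONE_PADDING(_pad2_)

            /* ... fields used by compaction, vmstat and reclaim ... */
        };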
  4. 07 April 2015, 2 commits
    • fix mremap() vs. ioctx_kill() race · b2edffdd
      Al Viro authored
      Teach the ->mremap() method to return an error and have it fail for
      aio mappings that are in the process of being killed.
      
      Note that in case of ->mremap() failure we need to undo the
      move_page_tables() we have already done; we could call ->mremap() first,
      but then a failure of move_page_tables() would require undoing whatever
      the _successful_ ->mremap() had done, which would be a lot more of a
      headache in general.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      b2edffdd
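      As an illustration of the reworked hook, a hypothetical driver using the new
      int-returning ->mremap() might look like the sketch below; every example_* name
      is made up, only the vm_operations_struct hook itself comes from the patch.

        struct example_ctx {                    /* hypothetical driver state */
            bool            dead;               /* set while being torn down */
            unsigned long   user_addr;          /* current userspace mapping */
        };

        static int example_mremap(struct vm_area_struct *vma)
        {
            struct example_ctx *ctx = vma->vm_file->private_data;

            if (ctx->dead)
                return -EINVAL;                 /* caller undoes move_page_tables() */

            ctx->user_addr = vma->vm_start;     /* record the new location */
            return 0;
        }

        static const struct vm_operations_struct example_vm_ops = {
            .mremap = example_mremap,
        };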
    • ipv6: protect skb->sk accesses from recursive dereference inside the stack · f60e5990
      hannes@stressinduktion.org authored
      We should not consult skb->sk for output decisions in xmit recursion
      levels > 0 in the stack. Otherwise local socket settings could influence
      the result of e.g. tunnel encapsulation process.
      
      ipv6 does not conform with this in three places:
      
      1) ip6_fragment: we do consult ipv6_pinfo for frag_size
      
      2) sk_mc_loop in ipv6 uses skb->sk and checks if we should
         loop the packet back to the local socket
      
      3) ip6_skb_dst_mtu could query the settings from the user socket and
         force a wrong MTU
      
      Furthermore:
      In sk_mc_loop we could potentially land in WARN_ON(1) if we use a
      PF_PACKET socket on top of an IPv6-backed vxlan device.
      
      Reuse xmit_recursion as we are currently only interested in protecting
      tunnel devices.
      
      Cc: Jiri Pirko <jiri@resnulli.us>
      Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f60e5990
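      A sketch of the guard this adds around one of the skb->sk users, assuming a
      dev_recursion_level() helper that reads the per-CPU xmit_recursion counter
      (treat the exact helper and its placement as assumptions drawn from the
      description above):

        /* Do not trust socket-derived settings once we are inside a nested
         * transmit, e.g. while a tunnel device encapsulates the packet. */
        bool sk_mc_loop_sketch(struct sock *sk)
        {
            if (dev_recursion_level())  /* assumed wrapper around xmit_recursion */
                return false;           /* never loop back from a nested xmit */
            if (!sk)
                return true;
            /* ... per-family mc_loop checks as before ... */
            return true;
        }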
  5. 03 April 2015, 1 commit
  6. 01 April 2015, 1 commit
    • sunrpc: make debugfs file creation failure non-fatal · f9c72d10
      Jeff Layton authored
      We currently have a problem that SELinux policy is being enforced when
      creating debugfs files. If a debugfs file is created as a side effect of
      doing some syscall, then that creation can fail if the SELinux policy
      for that process prevents it.
      
      This seems wrong. We don't do that for files under /proc, for instance,
      so Bruce has proposed a patch to fix that.
      
      While discussing that patch however, Greg K.H. stated:
      
          "No kernel code should care / fail if a debugfs function fails, so
           please fix up the sunrpc code first."
      
      This patch converts all of the sunrpc debugfs setup code to void-returning
      functions and changes the callers so that they no longer look for errors
      from those functions.
      
      This should allow rpc_clnt and rpc_xprt creation to work, even if the
      kernel fails to create debugfs files for some reason.
      
      Symptoms were failing krb5 mounts on systems using gss-proxy and
      SELinux.
      
      Fixes: 388f0c77 "sunrpc: add a debugfs rpc_xprt directory..."
      Cc: stable@vger.kernel.org
      Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
      Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
      f9c72d10
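      The shape of the conversion, sketched on a hypothetical setup helper (the real
      functions live in net/sunrpc/debugfs.c; the name and body below are illustrative
      only):

        /* Previously returned an int and made rpc_clnt creation fail; now a
         * missing debugfs directory is simply ignored. */
        static void example_rpc_debugfs_register(struct rpc_clnt *clnt,
                                                 struct dentry *parent)
        {
            clnt->cl_debugfs = debugfs_create_dir("example", parent);
            if (!clnt->cl_debugfs)
                return;                 /* non-fatal: carry on without debugfs */

            /* ... create the individual files under clnt->cl_debugfs ... */
        }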
  7. 31 March 2015, 1 commit
    • block: fix blk_stack_limits() regression due to lcm() change · e9637415
      Mike Snitzer authored
      Linux 3.19 commit 69c953c8 ("lib/lcm.c: lcm(n,0)=lcm(0,n) is 0, not n")
      caused blk_stack_limits() to not properly stack queue_limits for stacked
      devices (e.g. DM).
      
      Fix this regression by establishing lcm_not_zero() and switching
      blk_stack_limits() over to using it.
      
      DM uses blk_set_stacking_limits() to establish the initial top-level
      queue_limits that are then built up based on underlying devices' limits
      using blk_stack_limits().  In the case of optimal_io_size (io_opt)
      blk_set_stacking_limits() establishes a default value of 0.  With commit
      69c953c8, lcm(0, n) is no longer n, which compromises proper stacking of
      the underlying devices' io_opt.
      
      Test:
      $ modprobe scsi_debug dev_size_mb=10 num_tgts=1 opt_blks=1536
      $ cat /sys/block/sde/queue/optimal_io_size
      786432
      $ dmsetup create node --table "0 100 linear /dev/sde 0"
      
      Before this fix:
      $ cat /sys/block/dm-5/queue/optimal_io_size
      0
      
      After this fix:
      $ cat /sys/block/dm-5/queue/optimal_io_size
      786432
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org # 3.19+
      Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      e9637415
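      The helper itself is tiny; a sketch consistent with the description (an
      approximation of lib/lcm.c, not a verbatim copy):

        /* lcm() now returns 0 when either argument is 0; lcm_not_zero() falls
         * back to the non-zero argument, so stacking against a default of 0
         * preserves the underlying device's io_opt instead of zeroing it. */
        unsigned long lcm_not_zero(unsigned long a, unsigned long b)
        {
            unsigned long l = lcm(a, b);

            if (l)
                return l;
            return a ? a : b;   /* at most one of them is non-zero here */
        }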
  8. 30 March 2015, 4 commits
  9. 26 March 2015, 1 commit
    • mm: numa: slow PTE scan rate if migration failures occur · 074c2381
      Mel Gorman authored
      Dave Chinner reported the following on https://lkml.org/lkml/2015/3/1/226
      
        Across the board the 4.0-rc1 numbers are much slower, and the degradation
        is far worse when using the large memory footprint configs. Perf points
        straight at the cause - this is from 4.0-rc1 on the "-o bhash=101073" config:
      
         -   56.07%    56.07%  [kernel]            [k] default_send_IPI_mask_sequence_phys
            - default_send_IPI_mask_sequence_phys
               - 99.99% physflat_send_IPI_mask
                  - 99.37% native_send_call_func_ipi
                       smp_call_function_many
                     - native_flush_tlb_others
                        - 99.85% flush_tlb_page
                             ptep_clear_flush
                             try_to_unmap_one
                             rmap_walk
                             try_to_unmap
                             migrate_pages
                             migrate_misplaced_page
                           - handle_mm_fault
                              - 99.73% __do_page_fault
                                   trace_do_page_fault
                                   do_async_page_fault
                                 + async_page_fault
                    0.63% native_send_call_func_single_ipi
                       generic_exec_single
                       smp_call_function_single
      
      This is showing excessive migration activity even though excessive
      migrations are meant to get throttled.  Normally, the scan rate is tuned
      on a per-task basis depending on the locality of faults.  However, if
      migrations fail for any reason then the PTE scanner may scan faster if
      the faults continue to be remote.  This means there is higher system CPU
      overhead and fault trapping at exactly the time we know that migrations
      cannot happen.  This patch tracks when migration failures occur and
      slows the PTE scanner.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reported-by: Dave Chinner <david@fromorbit.com>
      Tested-by: Dave Chinner <david@fromorbit.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      074c2381
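      Conceptually the throttle just widens the scan period when migrations keep
      failing. A purely illustrative sketch (the task_struct fields exist, but the
      helper name and the exact scaling are made up):

        /* Back off the NUMA PTE scanner when recent migrations failed, since
         * scanning faster cannot help while pages cannot actually move. */
        static void numa_scan_backoff_sketch(struct task_struct *p,
                                             unsigned long nr_failed)
        {
            if (!nr_failed)
                return;

            p->numa_scan_period = min_t(unsigned int,
                                        p->numa_scan_period_max,
                                        p->numa_scan_period << 1);
        }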
  10. 20 March 2015, 1 commit
    • ata: Add a new flag to distinguish sas controller · 5067c046
      Shaohua Li authored
      A SAS controller has its own tag allocation, which doesn't map directly to
      ATA tags, so SAS and SATA take different code paths for ata tags. Originally
      we used port->scsi_host (98bd4be1) to distinguish SAS controllers, but libsas
      sets ->scsi_host too, so it can no longer be used for that distinction; add a
      new flag for this purpose.
      
      Without this patch, the following oops can happen because scsi-mq uses
      a host-wide tag map shared among all devices with some integer tag
      values >= ATA_MAX_QUEUE.  These unexpectedly high tag values cause
      __ata_qc_from_tag() to return NULL, which is then dereferenced in
      ata_qc_new_init().
      
        BUG: unable to handle kernel NULL pointer dereference at 0000000000000058
        IP: [<ffffffff804fd46e>] ata_qc_new_init+0x3e/0x120
        PGD 32adf0067 PUD 32adf1067 PMD 0
        Oops: 0002 [#1] SMP DEBUG_PAGEALLOC
        Modules linked in: iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi igb
        i2c_algo_bit ptp pps_core pm80xx libsas scsi_transport_sas sg coretemp
        eeprom w83795 i2c_i801
        CPU: 4 PID: 1450 Comm: cydiskbench Not tainted 4.0.0-rc3 #1
        Hardware name: Supermicro X8DTH-i/6/iF/6F/X8DTH, BIOS 2.1b       05/04/12
        task: ffff8800ba86d500 ti: ffff88032a064000 task.ti: ffff88032a064000
        RIP: 0010:[<ffffffff804fd46e>]  [<ffffffff804fd46e>] ata_qc_new_init+0x3e/0x120
        RSP: 0018:ffff88032a067858  EFLAGS: 00010046
        RAX: 0000000000000000 RBX: ffff8800ba0d2230 RCX: 000000000000002a
        RDX: ffffffff80505ae0 RSI: 0000000000000020 RDI: ffff8800ba0d2230
        RBP: ffff88032a067868 R08: 0000000000000201 R09: 0000000000000001
        R10: 0000000000000000 R11: 0000000000000000 R12: ffff8800ba0d0000
        R13: ffff8800ba0d2230 R14: ffffffff80505ae0 R15: ffff8800ba0d0000
        FS:  0000000041223950(0063) GS:ffff88033e480000(0000) knlGS:0000000000000000
        CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
        CR2: 0000000000000058 CR3: 000000032a0a3000 CR4: 00000000000006e0
        Stack:
         ffff880329eee758 ffff880329eee758 ffff88032a0678a8 ffffffff80502dad
         ffff8800ba167978 ffff880329eee758 ffff88032bf9c520 ffff8800ba167978
         ffff88032bf9c520 ffff88032bf9a290 ffff88032a0678b8 ffffffff80506909
        Call Trace:
         [<ffffffff80502dad>] ata_scsi_translate+0x3d/0x1b0
         [<ffffffff80506909>] ata_sas_queuecmd+0x149/0x2a0
         [<ffffffffa0046650>] sas_queuecommand+0xa0/0x1f0 [libsas]
         [<ffffffff804ea544>] scsi_dispatch_cmd+0xd4/0x1a0
         [<ffffffff804eb50f>] scsi_queue_rq+0x66f/0x7f0
         [<ffffffff803e5098>] __blk_mq_run_hw_queue+0x208/0x3f0
         [<ffffffff803e54b8>] blk_mq_run_hw_queue+0x88/0xc0
         [<ffffffff803e5c74>] blk_mq_insert_request+0xc4/0x130
         [<ffffffff803e0b63>] blk_execute_rq_nowait+0x73/0x160
         [<ffffffffa0023fca>] sg_common_write+0x3da/0x720 [sg]
         [<ffffffffa0025100>] sg_new_write+0x250/0x360 [sg]
         [<ffffffffa0025feb>] sg_write+0x13b/0x450 [sg]
         [<ffffffff8032ec91>] vfs_write+0xd1/0x1b0
         [<ffffffff8032ee54>] SyS_write+0x54/0xc0
         [<ffffffff80689932>] system_call_fastpath+0x12/0x17
      
      tj: updated description.
      
      Fixes: 12cb5ce1 ("libata: use blk taging")
      Reported-and-tested-by: Tony Battersby <tonyb@cybernetics.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      5067c046
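      A sketch of how the new port flag steers tag allocation in ata_qc_new_init();
      the flag and helper names follow the description above, but the exact hunk is an
      approximation rather than the real patch:

        /* A SAS host always takes a tag from libata's private allocator, so
         * the (possibly >= ATA_MAX_QUEUE) block layer tag is never used. */
        struct ata_queued_cmd *ata_qc_new_init_sketch(struct ata_device *dev, int tag)
        {
            struct ata_port *ap = dev->link->ap;
            struct ata_queued_cmd *qc;

            if (unlikely(ap->pflags & ATA_PFLAG_FROZEN))
                return NULL;

            if (ap->flags & ATA_FLAG_SAS_HOST) {
                tag = ata_sas_allocate_tag(ap);   /* private map, always < ATA_MAX_QUEUE */
                if (tag < 0)
                    return NULL;
            }

            qc = __ata_qc_from_tag(ap, tag);
            qc->tag = tag;

            return qc;
        }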
  11. 18 March 2015, 4 commits
  12. 17 March 2015, 2 commits
    • regulator: palmas: Correct TPS659038 register definition for REGEN2 · e03826d5
      Keerthy authored
      The register offset for REGEN2_CTRL is different on the TPS659038 chip
      compared with other Palmas family PMICs. In the case of TPS659038 the wrong
      offset pointed to PLLEN_CTRL and was causing a hang. Correct it.
      Signed-off-by: Keerthy <j-keerthy@ti.com>
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Cc: stable@vger.kernel.org
      e03826d5
    • livepatch: Fix subtle race with coming and going modules · 8cb2c2dc
      Petr Mladek authored
      There is a notifier that handles live patches for coming and going modules.
      It takes the klp_mutex lock to avoid races with coming and going patches but
      it does not hold the lock the whole time. Therefore the following races are
      possible:
      
        1. The notifier is called sometime in STATE_MODULE_COMING. The module
           is visible to find_module() in this state all the time. This means
           that a new patch can be registered and enabled even before the
           notifier is called. It might create the wrong order of stacked
           patches; see below for an example.
      
        2. A new patch could still see the module in the GOING state even after
           the notifier has been called. It will try to initialize the related
           object structures, but the module could disappear at any time,
           leaving the structures in an inconsistent state. It might even cause
           an invalid memory access.
      
      This patch solves the problem by adding a boolean variable to struct module.
      The value is true after the coming handler and before the going handler is
      called. New patches need to be applied while the value is true and need to
      ignore the module while the value is false.
      
      Note that we need to know the state of all modules on the system. The races
      are related to new patches, so we do not know in advance which modules will
      get patched.
      
      Also note that we could not simply ignore going modules. The code from the
      module could be called even in the GOING state until mod->exit() finishes.
      If we start supporting patches with semantic changes between function
      calls, we need to apply new patches to any still usable code.
      See below for an example.
      
      Finally, note that this patch solves only the situation when a new patch is
      registered. There are no such problems when a patch is being removed. It
      does not matter who disables the patch first, the normal disable_patch() or
      the module notifier; there is nothing to do once the patch is disabled.
      
      Alternative solutions:
      ======================
      
      + reject new patches when a patched module is coming or going; this is ugly
      
      + wait with adding new patch until the module leaves the COMING and GOING
        states; this might be dangerous and complicated; we would need to release
        kgr_lock in the middle of the patch registration to avoid a deadlock
        with the coming and going handlers; also we might need a waitqueue for
        each module which seems to be even bigger overhead than the boolean
      
      + stop modules from entering COMING and GOING states; wait until modules
        leave these states when they are already there; looks complicated; we would
        need to ignore the module that asked to stop the others to avoid a deadlock;
        also it is unclear what to do when two modules asked to stop others and
        both are in COMING state (situation when two new patches are applied)
      
      + always register/enable new patches and fix up the potential mess (registered
        patches order) in klp_module_init(); this is nasty and prone to regressions
        in the future development
      
      + add another MODULE_STATE where the kallsyms are visible but the module is not
        used yet; this looks too complex; the module states are checked on "many"
        locations
      
      Example of patch stacking breakage:
      ===================================
      
      The notifier could _not_ _simply_ ignore already initialized module objects.
      For example, let's have three patches (P1, P2, P3) for functions a() and b()
      where a() is from vmcore and b() is from a module M. Something like:
      
      	a()	b()
      P1	a1()	b1()
      P2	a2()	b2()
      P3	a3()	b3()
      
      If the module M is loaded after all patches are registered and enabled,
      the ftrace ops for functions a() and b() list the functions in this
      order:
      
      	ops_a->func_stack -> list(a3,a2,a1)
      	ops_b->func_stack -> list(b3,b2,b1)
      
      , so the pointer to b3() is the first and will be used.
      
      Then you might have the following scenario. Start from the state where
      patches P1 and P2 are registered and enabled but the module M is not loaded,
      so the ftrace ops for b() does not exist yet. Then we get into the following
      race:
      
      CPU0					CPU1
      
      load_module(M)
      
        complete_formation()
      
        mod->state = MODULE_STATE_COMING;
        mutex_unlock(&module_mutex);
      
      					klp_register_patch(P3);
      					klp_enable_patch(P3);
      
      					# STATE 1
      
        klp_module_notify(M)
          klp_module_notify_coming(P1);
          klp_module_notify_coming(P2);
          klp_module_notify_coming(P3);
      
      					# STATE 2
      
      The ftrace ops for a() and b() then looks:
      
        STATE1:
      
      	ops_a->func_stack -> list(a3,a2,a1);
      	ops_b->func_stack -> list(b3);
      
        STATE2:
      	ops_a->func_stack -> list(a3,a2,a1);
      	ops_b->func_stack -> list(b2,b1,b3);
      
      therefore, b2() is used for the module but a3() is used for vmcore
      because they were the last added.
      
      Example of the race with going modules:
      =======================================
      
      CPU0					CPU1
      
      delete_module()  #SYSCALL
      
         try_stop_module()
           mod->state = MODULE_STATE_GOING;
      
         mutex_unlock(&module_mutex);
      
      					klp_register_patch()
      					klp_enable_patch()
      
      					#save place to switch universe
      
      					b()     # from module that is going
      					  a()   # from core (patched)
      
         mod->exit();
      
      Note that the function b() can be called until we call mod->exit().
      
      If we do not apply a patch to b() because its module is in MODULE_STATE_GOING,
      it will call the patched a() with modified semantics and things might go wrong.
      
      [jpoimboe@redhat.com: use one boolean instead of two]
      Signed-off-by: Petr Mladek <pmladek@suse.cz>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      8cb2c2dc
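      A sketch of the idea (the flag name and the check are reconstructed from the
      description above, so treat them as an approximation of the real patch):

        /* struct module gains a flag that is true only between the coming and
         * going notifications, i.e. while it is safe for a patch to attach. */
        struct module_sketch {
            /* ... existing fields ... */
            bool klp_alive;     /* set by the coming handler, cleared by going */
        };

        /* A new patch binds to a module object only while the flag is set. */
        static bool klp_module_patchable_sketch(struct module_sketch *mod)
        {
            return mod && mod->klp_alive;
        }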
  13. 13 March 2015, 3 commits
  14. 12 March 2015, 2 commits
    • xps: must clear sender_cpu before forwarding · c29390c6
      Eric Dumazet authored
      John reported that my previous commit added a regression
      on his router.
      
      This is because sender_cpu & napi_id share a common location,
      so get_xps_queue() can see garbage and perform an out of bound access.
      
      We need to make sure sender_cpu is cleared before doing the transmit,
      otherwise any NIC busy poll enabled (skb_mark_napi_id()) can trigger
      this bug.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: John <jw@nuclearfallout.net>
      Bisected-by: John <jw@nuclearfallout.net>
      Fixes: 2bd82484 ("xps: fix xps for stacked devices")
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c29390c6
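      A sketch of the helper this implies; sender_cpu aliases napi_id inside the skb,
      so the forwarding paths clear it before the packet is transmitted again (treat
      the helper name as an assumption):

        static inline void skb_sender_cpu_clear_sketch(struct sk_buff *skb)
        {
        #ifdef CONFIG_XPS
            skb->sender_cpu = 0;    /* 0 = "unset"; refilled on the xmit side */
        #endif
        }

      The patch calls a helper like this from the forwarding paths before handing the
      packet back to the transmit path, so get_xps_queue() never interprets a stale
      napi_id as a CPU number.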
    • clk: introduce clk_is_match · 3d3801ef
      Michael Turquette authored
      Some drivers compare struct clk pointers as a means of knowing
      if the two pointers reference the same clock hardware. This behavior is
      dubious (drivers must not dereference struct clk), but did not cause any
      regressions until the per-user struct clk patch was merged. Now the test
      for matching clk's will always fail with per-user struct clk's.
      
      clk_is_match is introduced to fix the regression and prevent drivers
      from comparing the pointers manually.
      
      Fixes: 035a61c3 ("clk: Make clk API return per-user struct clk instances")
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Shawn Guo <shawn.guo@linaro.org>
      Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
      Signed-off-by: Michael Turquette <mturquette@linaro.org>
      [arnd@arndb.de: Fix COMMON_CLK=N && HAS_CLK=Y config]
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      [sboyd@codeaurora.org: const arguments to clk_is_match() and
      remove unnecessary ternary operation]
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      3d3801ef
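      A sketch of the comparison the new helper performs; per-user struct clk
      instances wrap a shared clk_core, so matching is done on the core pointer (an
      approximation of drivers/clk/clk.c, not a verbatim copy):

        bool clk_is_match_sketch(const struct clk *p, const struct clk *q)
        {
            /* trivial case: identical pointers, including both NULL */
            if (p == q)
                return true;

            /* avoid dereferencing ERR_PTR/NULL cookies from the stubs */
            if (!IS_ERR_OR_NULL(p) && !IS_ERR_OR_NULL(q))
                if (p->core == q->core)
                    return true;

            return false;
        }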
  15. 08 March 2015, 2 commits
  16. 07 March 2015, 2 commits
  17. 06 March 2015, 1 commit
  18. 05 March 2015, 3 commits
    • workqueue: fix hang involving racing cancel[_delayed]_work_sync()'s for PREEMPT_NONE · 8603e1b3
      Tejun Heo authored
      cancel[_delayed]_work_sync() are implemented using
      __cancel_work_timer() which grabs the PENDING bit using
      try_to_grab_pending() and then flushes the work item with PENDING set
      to prevent the on-going execution of the work item from requeueing
      itself.
      
      try_to_grab_pending() can always grab the PENDING bit without blocking
      except when someone else is doing the above flushing during cancelation,
      in which case it returns -ENOENT.  __cancel_work_timer() currently
      responds by invoking flush_work().  The assumption is that the completion
      of the work item is what the other canceling task would be waiting for
      too, so waiting for the same condition and retrying should allow forward
      progress without excessive busy looping.
      
      Unfortunately, this doesn't work if preemption is disabled or the
      latter task has real time priority.  Let's say task A just got woken
      up from flush_work() by the completion of the target work item.  If,
      before task A starts executing, task B gets scheduled and invokes
      __cancel_work_timer() on the same work item, its try_to_grab_pending()
      will return -ENOENT as the work item is still being canceled by task A
      and flush_work() will also immediately return false as the work item
      is no longer executing.  This puts task B in a busy loop possibly
      preventing task A from executing and clearing the canceling state on
      the work item leading to a hang.
      
      task A			task B			worker
      
      						executing work
      __cancel_work_timer()
        try_to_grab_pending()
        set work CANCELING
        flush_work()
          block for work completion
      						completion, wakes up A
      			__cancel_work_timer()
      			while (forever) {
      			  try_to_grab_pending()
      			    -ENOENT as work is being canceled
      			  flush_work()
      			    false as work is no longer executing
      			}
      
      This patch removes the possible hang by updating __cancel_work_timer()
      to explicitly wait for clearing of CANCELING rather than invoking
      flush_work() after try_to_grab_pending() fails with -ENOENT.
      
      Link: http://lkml.kernel.org/g/20150206171156.GA8942@axis.com
      
      v3: bit_waitqueue() can't be used for work items defined in vmalloc
          area.  Switched to custom wake function which matches the target
          work item and exclusive wait and wakeup.
      
      v2: v1 used wake_up() on bit_waitqueue() which leads to NULL deref if
          the target bit waitqueue has wait_bit_queue's on it.  Use
          DEFINE_WAIT_BIT() and __wake_up_bit() instead.  Reported by Tomeu
          Vizoso.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Rabin Vincent <rabin.vincent@axis.com>
      Cc: Tomeu Vizoso <tomeu.vizoso@gmail.com>
      Cc: stable@vger.kernel.org
      Tested-by: Jesper Nilsson <jesper.nilsson@axis.com>
      Tested-by: Rabin Vincent <rabin.vincent@axis.com>
      8603e1b3
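      The shape of the change, heavily simplified; the real code builds a custom
      wait-queue entry keyed on the work item, and the wait helper named below is
      hypothetical:

        /* On -ENOENT (someone else is canceling), sleep until the CANCELING
         * state clears instead of busy-looping around flush_work(). */
        static bool cancel_work_sync_sketch(struct work_struct *work, bool is_dwork)
        {
            unsigned long flags;
            int ret;

            do {
                ret = try_to_grab_pending(work, is_dwork, &flags);
                if (ret == -ENOENT)
                    wait_for_canceling_to_clear(work);  /* hypothetical helper */
            } while (ret < 0);

            /* ... mark CANCELING, flush the work item, then clear CANCELING
             * and wake any other canceler waiting on this work item ... */
            return ret > 0;
        }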
    • Revert "pinctrl: consumer: use correct retval for placeholder functions" · 40eeb111
      Linus Walleij authored
      This reverts commit 5a7d2efd.
      
      As per discussion on the mailing list, this is not the right
      thing to do. NULL cookies are valid in the stubs.
      Reported-by: Wolfram Sang <wsa@the-dreams.de>
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      40eeb111
    • genirq / PM: Add flag for shared NO_SUSPEND interrupt lines · 17f48034
      Rafael J. Wysocki authored
      It currently is required that all users of NO_SUSPEND interrupt
      lines pass the IRQF_NO_SUSPEND flag when requesting the IRQ or the
      WARN_ON_ONCE() in irq_pm_install_action() will trigger.  That is
      done to warn about situations in which unprepared interrupt handlers
      may be run unnecessarily for suspended devices and may attempt to
      access those devices by mistake.  However, it may cause drivers
      that have no technical reasons for using IRQF_NO_SUSPEND to set
      that flag just because they happen to share the interrupt line
      with something like a timer.
      
      Moreover, the generic handling of wakeup interrupts introduced by
      commit 9ce7a258 (genirq: Simplify wakeup mechanism) only works
      for IRQs without any NO_SUSPEND users, so the drivers of wakeup
      devices needing to use shared NO_SUSPEND interrupt lines for
      signaling system wakeup generally have to detect wakeup in their
      interrupt handlers.  Thus if they happen to share an interrupt line
      with a NO_SUSPEND user, they also need to request that their
      interrupt handlers be run after suspend_device_irqs().
      
      In both cases the reason for using IRQF_NO_SUSPEND is not because
      the driver in question has a genuine need to run its interrupt
      handler after suspend_device_irqs(), but because it happens to
      share the line with some other NO_SUSPEND user.  Otherwise, the
      driver would do without IRQF_NO_SUSPEND just fine.
      
      To make it possible to specify that condition explicitly, introduce
      a new IRQ action handler flag for shared IRQs, IRQF_COND_SUSPEND,
      that, when set, will indicate to the IRQ core that the interrupt
      user is generally fine with suspending the IRQ, but it also can
      tolerate handler invocations after suspend_device_irqs() and, in
      particular, it is capable of detecting system wakeup and triggering
      it as appropriate from its interrupt handler.
      
      That will allow us to work around a problem with a shared timer
      interrupt line on at91 platforms.
      
      Link: http://marc.info/?l=linux-kernel&m=142252777602084&w=2
      Link: http://marc.info/?t=142252775300011&r=1&w=2
      Link: https://lkml.org/lkml/2014/12/15/552
      Reported-by: Boris Brezillon <boris.brezillon@free-electrons.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      17f48034
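      From a driver's point of view the new flag is just another request_irq() flag on
      the shared line. A hypothetical wakeup-capable driver might use it as sketched
      below (all example_* names are made up):

        struct example_dev {                    /* hypothetical driver context */
            struct device *dev;
        };

        static irqreturn_t example_irq_handler(int irq, void *dev_id)
        {
            struct example_dev *edev = dev_id;

            /* ... read the hardware status; if this interrupt is a wakeup
             * event while the system is suspending or suspended: ... */
            pm_wakeup_event(edev->dev, 0);      /* report the wakeup ourselves */

            return IRQ_HANDLED;
        }

        static int example_request(struct example_dev *edev, int irq)
        {
            /* the line is shared with a NO_SUSPEND user (e.g. a timer), and we
             * can tolerate being invoked after suspend_device_irqs() */
            return request_irq(irq, example_irq_handler,
                               IRQF_SHARED | IRQF_COND_SUSPEND,
                               "example", edev);
        }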
  19. 04 March 2015, 1 commit
    • NFS: Fix a regression in the read() syscall · 874f9463
      Trond Myklebust authored
      When invalidating the page cache for a regular file, we want to first
      sync all dirty data to disk and then call invalidate_inode_pages2().
      The latter relies on nfs_launder_page() and nfs_release_page() to deal
      with dirty pages and unstable written pages, respectively.
      
      When commit 95905446 ("NFS: avoid deadlocks with loop-back mounted
      NFS filesystems.") changed the behaviour of nfs_release_page(), it became
      possible for invalidate_inode_pages2() to fail with EBUSY.
      Unfortunately, that error is then propagated back to read().
      
      Let's therefore work around the problem for now by protecting the call
      to sync the data and invalidate_inode_pages2() so that they are atomic
      w.r.t. the addition of new writes.
      Later on, we can revisit whether or not we still need nfs_launder_page()
      and nfs_release_page().
      Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
      874f9463
  20. 03 March 2015, 2 commits