1. 13 Feb 2014, 5 commits
  2. 05 Feb 2014, 1 commit
  3. 03 Feb 2014, 3 commits
  4. 02 Feb 2014, 1 commit
    • Revert "PCI: Remove from bus_list and release resources in pci_release_dev()" · 04480094
      Committed by Rafael J. Wysocki
      Revert commit ef83b078 "PCI: Remove from bus_list and release
      resources in pci_release_dev()" that made some nasty race conditions
      become possible.  For example, if a Thunderbolt link is unplugged
      and then replugged immediately, the pci_release_dev() resulting from
      the hot-remove code path may be racing with the hot-add code path
      which after that commit causes various kinds of breakage to happen
      (up to and including a hard crash of the whole system).
      
      Moreover, the problem that commit ef83b078 attempted to address
      cannot happen any more after commit 8a4c5c32 "PCI: Check parent
      kobject in pci_destroy_dev()", because pci_destroy_dev() will now
      return immediately if it has already been executed for the given
      device.
      
      Note, however, that the invocation of msi_remove_pci_irq_vectors()
      removed by commit ef83b078 from pci_free_resources() along with
      the other changes made by it is not added back because of subsequent
      code changes depending on that modification.
      
      Fixes: ef83b078 (PCI: Remove from bus_list and release resources in pci_release_dev())
      Reported-by: Mika Westerberg <mika.westerberg@linux.intel.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      04480094
  5. 01 Feb 2014, 2 commits
  6. 31 Jan 2014, 16 commits
    • drivers: xen: deaggressive selfballoon driver · bc1b0df5
      Committed by Bob Liu
      The current xen-selfballoon driver is too aggressive, which may cause OOM
      to be triggered more often, e.g. this bug reported by James:
      https://lkml.org/lkml/2013/11/21/158

      There are two main reasons:
      1) The original goal_page didn't account for some pages used by kernel
      space, such as slab pages and pages used by device drivers.

      2) The balloon driver may not give memory back to the guest OS fast enough
      when the workload suddenly acquires a lot of physical memory.

      In both cases, the guest OS will suffer from memory pressure and OOM may
      be triggered.

      The fix makes the xen-selfballoon driver less aggressive by adding an
      extra 10% of total RAM pages to goal_page.
      It's more valuable to keep the guest system reliable and responsive than
      to balloon out these 10% of pages to Xen.
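
      A rough sketch of the adjustment (the exact expression and variable names
      in the driver may differ from this):

        /* selfballoon target: leave ~10% of total RAM as headroom for
         * kernel-space pages (slab, device-driver allocations) instead
         * of ballooning them out to Xen. */
        goal_pages += totalram_pages / 10;
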
      Signed-off-by: Bob Liu <bob.liu@oracle.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      bc1b0df5
    • xen/grant-table: Avoid m2p_override during mapping · 08ece5bb
      Committed by Zoltan Kiss
      The grant mapping API does m2p_override unnecessarily: only gntdev needs
      it; for blkback and the future netback patches it just causes lock
      contention, as those pages never go to userspace. Therefore this series
      does the following:
      - the original functions were renamed to __gnttab_[un]map_refs, with a new
        parameter m2p_override
      - based on m2p_override either they follow the original behaviour, or just set
        the private flag and call set_phys_to_machine
      - gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
        m2p_override false
      - a new function gnttab_[un]map_refs_userspace provides the old behaviour
      
      It also removes a stray space from page.h and changes ret to 0 if
      XENFEAT_auto_translated_physmap, as that is the only possible return value
      there.
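
      A simplified sketch of the wrapper layout described above (the real
      signatures also carry kmap ops and more context; only the m2p_override
      plumbing is shown, so treat this as illustrative):

        static int __gnttab_map_refs(struct gnttab_map_grant_ref *ops,
                                     struct page **pages, unsigned int count,
                                     bool m2p_override);

        /* Common case (blkback, future netback): pages never reach
         * userspace, so skip m2p_override and its lock contention. */
        int gnttab_map_refs(struct gnttab_map_grant_ref *ops,
                            struct page **pages, unsigned int count)
        {
                return __gnttab_map_refs(ops, pages, count, false);
        }

        /* gntdev only: keeps the old behaviour. */
        int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *ops,
                                      struct page **pages, unsigned int count)
        {
                return __gnttab_map_refs(ops, pages, count, true);
        }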
      
      v2:
      - move the storing of the old mfn in page->index to gnttab_map_refs
      - move the function header update to a separate patch
      
      v3:
      - a new approach to retain old behaviour where it needed
      - squash the patches into one
      
      v4:
      - move out the common bits from m2p* functions, and pass pfn/mfn as parameter
      - clear page->private before doing anything with the page, so m2p_find_override
        won't race with this
      
      v5:
      - change return value handling in __gnttab_[un]map_refs
      - remove a stray space in page.h
      - add detail why ret = 0 now at some places
      
      v6:
      - don't pass pfn to m2p* functions, just get it locally
      Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
      Suggested-by: David Vrabel <david.vrabel@citrix.com>
      Acked-by: David Vrabel <david.vrabel@citrix.com>
      Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      08ece5bb
    • zram: remove zram->lock in read path and change it with mutex · e46e3315
      Committed by Minchan Kim
      Finally, we have separated the zram->lock dependency from 32-bit
      stat/table handling, so there is no reason to use a rw_semaphore between
      the read and write paths.  This patch removes the lock from the read path
      entirely and replaces the rw_semaphore with a mutex.  So, we could do
      
      old:
      
        read-read: OK
        read-write: NO
        write-write: NO
      
      Now:
      
        read-read: OK
        read-write: OK
        write-write: NO
      
      The data below shows that the mixed workload performs about 11 times
      better, and there is also an improvement on the write-write path because
      the current rw_semaphore doesn't support SPIN_ON_OWNER.  It's a side
      effect, but a good one for us anyway.

      Write-related tests perform better (by 61% to 1058%) while the read path
      varies (from -2.22% to 1.45%), but those results are all marginal, within
      stddev.
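
      A minimal sketch of the resulting scheme (locking shown only; names are
      illustrative rather than copied from the patch):

        /* read path: no zram-wide lock; the table entry is protected
         * by the finer-grained tb_lock introduced earlier. */

        /* write path: writers serialize on a mutex instead of taking
         * the rw_semaphore for writing, so they no longer block readers. */
        mutex_lock(&zram->lock);
        /* compress the page, store it, update the table */
        mutex_unlock(&zram->lock);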
      
        CPU 12
        iozone -t -T -l 12 -u 12 -r 16K -s 60M -I +Z -V 0
      
        ==Initial write                ==Initial write
        records: 10                    records: 10
        avg:  516189.16                avg:  839907.96
        std:   22486.53 (4.36%)        std:   47902.17 (5.70%)
        max:  546970.60                max:  909910.35
        min:  481131.54                min:  751148.38
        ==Rewrite                      ==Rewrite
        records: 10                    records: 10
        avg:  509527.98                avg: 1050156.37
        std:   45799.94 (8.99%)        std:   40695.44 (3.88%)
        max:  611574.27                max: 1111929.26
        min:  443679.95                min:  980409.62
        ==Read                         ==Read
        records: 10                    records: 10
        avg: 4408624.17                avg: 4472546.76
        std:  281152.61 (6.38%)        std:  163662.78 (3.66%)
        max: 4867888.66                max: 4727351.03
        min: 4058347.69                min: 4126520.88
        ==Re-read                      ==Re-read
        records: 10                    records: 10
        avg: 4462147.53                avg: 4363257.75
        std:  283546.11 (6.35%)        std:  247292.63 (5.67%)
        max: 4912894.44                max: 4677241.75
        min: 4131386.50                min: 4035235.84
        ==Reverse Read                 ==Reverse Read
        records: 10                    records: 10
        avg: 4565865.97                avg: 4485818.08
        std:  313395.63 (6.86%)        std:  248470.10 (5.54%)
        max: 5232749.16                max: 4789749.94
        min: 4185809.62                min: 3963081.34
        ==Stride read                  ==Stride read
        records: 10                    records: 10
        avg: 4515981.80                avg: 4418806.01
        std:  211192.32 (4.68%)        std:  212837.97 (4.82%)
        max: 4889287.28                max: 4686967.22
        min: 4210362.00                min: 4083041.84
        ==Random read                  ==Random read
        records: 10                    records: 10
        avg: 4410525.23                avg: 4387093.18
        std:  236693.22 (5.37%)        std:  235285.23 (5.36%)
        max: 4713698.47                max: 4669760.62
        min: 4057163.62                min: 3952002.16
        ==Mixed workload               ==Mixed workload
        records: 10                    records: 10
        avg:  243234.25                avg: 2818677.27
        std:   28505.07 (11.72%)       std:  195569.70 (6.94%)
        max:  288905.23                max: 3126478.11
        min:  212473.16                min: 2484150.69
        ==Random write                 ==Random write
        records: 10                    records: 10
        avg:  555887.07                avg: 1053057.79
        std:   70841.98 (12.74%)       std:   35195.36 (3.34%)
        max:  683188.28                max: 1096125.73
        min:  437299.57                min:  992481.93
        ==Pwrite                       ==Pwrite
        records: 10                    records: 10
        avg:  501745.93                avg:  810363.09
        std:   16373.54 (3.26%)        std:   19245.01 (2.37%)
        max:  518724.52                max:  833359.70
        min:  464208.73                min:  765501.87
        ==Pread                        ==Pread
        records: 10                    records: 10
        avg: 4539894.60                avg: 4457680.58
        std:  197094.66 (4.34%)        std:  188965.60 (4.24%)
        max: 4877170.38                max: 4689905.53
        min: 4226326.03                min: 4095739.72
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Tested-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e46e3315
    • zram: remove workqueue for freeing removed pending slot · f614a9f4
      Committed by Minchan Kim
      Commit a0c516cb ("zram: don't grab mutex in zram_slot_free_noity")
      introduced free-request-pending code to avoid scheduling by mutex under a
      spinlock, and it was a mess which made the code lengthy and increased
      overhead.

      Now we no longer need zram->lock to free a slot, so this patch reverts
      it; tb_lock should protect it instead.
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Tested-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f614a9f4
    • zram: introduce zram->tb_lock · 92967471
      Committed by Minchan Kim
      Currently the zram table is protected by zram->lock, but that is a rather
      coarse-grained lock and it hurts scalability.

      Let's use its own rwlock instead of depending on zram->lock.  This patch
      adds new locking, so obviously it could slow things down, but it is just
      preparation for removing the coarse-grained rw_semaphore (i.e. zram->lock)
      which is a hurdle for zram scalability.

      The final patch in this series will remove the lock from the read path
      and replace the rw_semaphore with a mutex in the write path.  As a bonus,
      we can drop the pending-slot-free mess in the next patch.
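
      A short sketch of the new table locking (simplified; the real code also
      deals with the size field and zero-filled pages):

        /* read side: take the finer-grained rwlock instead of the
         * zram-wide rw_semaphore to look up a table entry. */
        read_lock(&meta->tb_lock);
        handle = meta->table[index].handle;
        read_unlock(&meta->tb_lock);
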
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Tested-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      92967471
    • zram: use atomic operation for stat · deb0bdeb
      Committed by Minchan Kim
      Some of the fields in zram->stats are protected by zram->lock, which is
      rather coarse-grained, so let's use atomic operations without explicit
      locking.

      This patch prepares for removing the zram->lock dependency from the read
      path, where it is a very coarse-grained rw_semaphore.  Of course, this
      patch adds new atomic operations so it might slow things down, but my
      12-CPU test couldn't spot any regression.  All gains/losses are marginal,
      within stddev.
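
      Illustrative only (stat field names vary across zram versions): an update
      that previously had to run under zram->lock becomes a lock-free atomic
      operation, e.g.:

        atomic64_inc(&zram->stats.notify_free);
        atomic64_add(clen, &zram->stats.compr_size);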
      
        iozone -t -T -l 12 -u 12 -r 16K -s 60M -I +Z -V 0
      
        ==Initial write                ==Initial write
        records: 50                    records: 50
        avg:  412875.17                avg:  415638.23
        std:   38543.12 (9.34%)        std:   36601.11 (8.81%)
        max:  521262.03                max:  502976.72
        min:  343263.13                min:  351389.12
        ==Rewrite                      ==Rewrite
        records: 50                    records: 50
        avg:  416640.34                avg:  397914.33
        std:   60798.92 (14.59%)       std:   46150.42 (11.60%)
        max:  543057.07                max:  522669.17
        min:  304071.67                min:  316588.77
        ==Read                         ==Read
        records: 50                    records: 50
        avg: 4147338.63                avg: 4070736.51
        std:  179333.25 (4.32%)        std:  223499.89 (5.49%)
        max: 4459295.28                max: 4539514.44
        min: 3753057.53                min: 3444686.31
        ==Re-read                      ==Re-read
        records: 50                    records: 50
        avg: 4096706.71                avg: 4117218.57
        std:  229735.04 (5.61%)        std:  171676.25 (4.17%)
        max: 4430012.09                max: 4459263.94
        min: 2987217.80                min: 3666904.28
        ==Reverse Read                 ==Reverse Read
        records: 50                    records: 50
        avg: 4062763.83                avg: 4078508.32
        std:  186208.46 (4.58%)        std:  172684.34 (4.23%)
        max: 4401358.78                max: 4424757.22
        min: 3381625.00                min: 3679359.94
        ==Stride read                  ==Stride read
        records: 50                    records: 50
        avg: 4094933.49                avg: 4082170.22
        std:  185710.52 (4.54%)        std:  196346.68 (4.81%)
        max: 4478241.25                max: 4460060.97
        min: 3732593.23                min: 3584125.78
        ==Random read                  ==Random read
        records: 50                    records: 50
        avg: 4031070.04                avg: 4074847.49
        std:  192065.51 (4.76%)        std:  206911.33 (5.08%)
        max: 4356931.16                max: 4399442.56
        min: 3481619.62                min: 3548372.44
        ==Mixed workload               ==Mixed workload
        records: 50                    records: 50
        avg:  149925.73                avg:  149675.54
        std:    7701.26 (5.14%)        std:    6902.09 (4.61%)
        max:  191301.56                max:  175162.05
        min:  133566.28                min:  137762.87
        ==Random write                 ==Random write
        records: 50                    records: 50
        avg:  404050.11                avg:  393021.47
        std:   58887.57 (14.57%)       std:   42813.70 (10.89%)
        max:  601798.09                max:  524533.43
        min:  325176.99                min:  313255.34
        ==Pwrite                       ==Pwrite
        records: 50                    records: 50
        avg:  411217.70                avg:  411237.96
        std:   43114.99 (10.48%)       std:   33136.29 (8.06%)
        max:  530766.79                max:  471899.76
        min:  320786.84                min:  317906.94
        ==Pread                        ==Pread
        records: 50                    records: 50
        avg: 4154908.65                avg: 4087121.92
        std:  151272.08 (3.64%)        std:  219505.04 (5.37%)
        max: 4459478.12                max: 4435857.38
        min: 3730512.41                min: 3101101.67
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Tested-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      deb0bdeb
    • zram: remove unnecessary free · 874e3cdd
      Committed by Minchan Kim
      Commit a0c516cb ("zram: don't grab mutex in zram_slot_free_noity")
      introduced pending zram slot freeing in zram's write path to cover a slot
      free that was missed due to memory allocation failure in
      zram_slot_free_notify, but it is not necessary because we have already
      freed the slot right before overwriting.
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Tested-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      874e3cdd
    • zram: delay pending free request in read path · 9b353db1
      Committed by Minchan Kim
      Sergey reported that we don't need to handle pending free requests on
      every I/O, so this patch removes that handling from the read path while
      keeping it in the write path.

      Let's consider the example below.

      The swap subsystem asks zram to free block "A" via swap_slot_free_notify,
      but zram leaves the request pending without actually freeing it.  The
      swap subsystem then allocates block "A" for new data, the long-pending
      request finally gets handled, and zram blindly frees the new data in
      block "A".  :(

      That's why we can't remove the pending-free-request handling right before
      a zram write.
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Tested-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9b353db1
    • zram: fix race between reset and flushing pending work · da4a0412
      Committed by Minchan Kim
      Dan and Sergey reported a race between reset and the flushing of pending
      work: it could oops by freeing zram->meta in reset while zram_slot_free
      can still access zram->meta if a new request arrives during the race
      window.

      This patch moves the flush to after taking init_lock, which prevents new
      requests and so closes the race.
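
      Roughly, the fixed ordering looks like this (assuming the pending work is
      the free_work item added by a0c516cb):

        down_write(&zram->init_lock);   /* block new requests first...     */
        flush_work(&zram->free_work);   /* ...then drain the pending frees */
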
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Tested-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      da4a0412
    • zram: add copyright · 7bfb3de8
      Committed by Minchan Kim
      Add my copyright to the zram source code which I maintain.
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7bfb3de8
    • zram: remove old private project comment · 49061236
      Committed by Minchan Kim
      Remove the old private compcache project address; upcoming patches should
      be sent to LKML, because the Linux kernel community will take care of it.
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      49061236
    • zram: promote zram from staging · cd67e10a
      Committed by Minchan Kim
      Zram has lived in staging for a LONG LONG time and has been
      fixed/improved by many contributors, so the code is clean and stable now.
      Of course, there are lots of products using zram in practice.

      Major TV companies have used zram as swap for the last two years, and
      recently our production team released an Android smartphone that uses
      zram as swap, too; Android KitKat has also started to use zram for
      small-memory smartphones.  There was a report that Google shipped
      Chrome OS with zram as well, and CyanogenMod has been using zram for a
      long time.  I also heard that some distros have used the zram block
      device for tmpfs.  In addition, I have seen reports from many other
      people; for example, Lubuntu has started to use it.
      
      The benefit of zram is very clear.  In my experience, one of the benefits
      was removing the jitter of a video application under background memory
      pressure.  Part of that is the effect of more efficient memory usage
      through compression, but the bigger issue is whether swap is present in
      the system at all.  Recent mobile platforms use Java, so there are many
      anonymous pages, but embedded systems are normally reluctant to use eMMC
      or an SD card as swap because of wear-leveling and latency issues; if we
      do not use swap, we can't reclaim anonymous pages and may eventually hit
      the OOM killer.  :(

      Even with real storage as swap there is a problem, because slow swap
      storage performance sometimes ends up making the system very
      unresponsive.
      
      Quote from Luigi at Google:
       "Since Chrome OS was mentioned: the main reason why we don't use swap
        to a disk (rotating or SSD) is because it doesn't degrade gracefully
        and leads to a bad interactive experience.  Generally we prefer to
        manage RAM at a higher level, by transparently killing and restarting
        processes.  But we noticed that zram is fast enough to be competitive
        with the latter, and it lets us make more efficient use of the
        available RAM."  And he announced it here:
      http://www.spinics.net/lists/linux-mm/msg57717.html
      
      Another use case is to use zram as a block device.  Zram is a block
      device, so anyone can format and mount it; some people on the internet
      have started using zram for /var/tmp.
      http://forums.gentoo.org/viewtopic-t-838198-start-0.html
      
      Let's promote zram and enhance/maintain it instead of removing it.
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Acked-by: Nitin Gupta <ngupta@vflare.org>
      Acked-by: Pekka Enberg <penberg@kernel.org>
      Cc: Bob Liu <bob.liu@oracle.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Seth Jennings <sjenning@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cd67e10a
    • zsmalloc: move it under mm · bcf1647d
      Committed by Minchan Kim
      This patch moves zsmalloc under the mm directory.

      Before that, this description explains why we have needed a custom
      allocator.

      Zsmalloc is a new slab-based memory allocator for storing compressed
      pages.  It is designed for low fragmentation and a high allocation
      success rate for large, but <= PAGE_SIZE, allocations.
      
      zsmalloc differs from the kernel slab allocator in two primary ways to
      achieve these design goals.
      
      zsmalloc never requires high order page allocations to back slabs, or
      "size classes" in zsmalloc terms.  Instead it allows multiple
      single-order pages to be stitched together into a "zspage" which backs
      the slab.  This allows for higher allocation success rate under memory
      pressure.
      
      Also, zsmalloc allows objects to span page boundaries within the zspage.
      This allows for lower fragmentation than could be had with the kernel
      slab allocator for objects between PAGE_SIZE/2 and PAGE_SIZE.  With the
      kernel slab allocator, if a page compresses to 60% of its original size,
      the memory savings gained through compression are lost in fragmentation
      because another object of the same size can't be stored in the leftover
      space.
      
      This ability to span pages results in zsmalloc allocations not being
      directly addressable by the user.  The user is given a
      non-dereferenceable handle in response to an allocation request.  That
      handle must be mapped, using zs_map_object(), which returns a pointer to
      the mapped region that can be used.  The mapping is necessary since the
      object data may reside in two different noncontiguous pages.

      zsmalloc fulfills the allocation needs of zram perfectly.
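
      A hedged usage sketch of the flow described above (the zs_* signatures
      have changed across kernel versions, so take this as pseudocode for the
      lifecycle rather than the exact API):

        struct zs_pool *pool = zs_create_pool(GFP_NOIO);
        unsigned long handle = zs_malloc(pool, compressed_len);

        void *dst = zs_map_object(pool, handle, ZS_MM_WO);
        memcpy(dst, compressed_buf, compressed_len);
        zs_unmap_object(pool, handle);

        /* ... later, when the object is no longer needed ... */
        zs_free(pool, handle);
        zs_destroy_pool(pool);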
      
      [sjenning@linux.vnet.ibm.com: borrow Seth's quote]
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Acked-by: Nitin Gupta <ngupta@vflare.org>
      Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Bob Liu <bob.liu@oracle.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Luigi Semenzato <semenzato@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Seth Jennings <sjenning@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bcf1647d
    • drivers/net/phy/mdio_bus.c: call put_device on device_register() failure · 0c692d07
      Committed by Levente Kurusa
      It is required to call put_device() if device_register() fails, so that
      we give up the last reference to the device.  Calling put_device() allows
      mdiobus_release() to be executed, which kfrees the bus.
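
      The resulting error path follows the usual driver-core pattern (a sketch,
      not the exact diff):

        err = device_register(&bus->dev);
        if (err) {
                pr_err("mii_bus %s failed to register\n", bus->id);
                put_device(&bus->dev);  /* drops the last reference, so
                                         * mdiobus_release() runs and
                                         * kfrees the bus */
                return -EINVAL;
        }
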
      Signed-off-by: Levente Kurusa <levex@linux.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Grant Likely <grant.likely@secretlab.ca>
      Cc: David Daney <david.daney@cavium.com>
      Cc: David S. Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0c692d07
    • drivers/video/backlight/lcd.c: call put_device if device_register fails · 54f5968d
      Committed by Levente Kurusa
      Currently we kfree the container of the device which failed to register.
      This is wrong, as the last reference is not given up with a put_device()
      call.  Also, now that put_device() is called, we no longer need the
      kfree, as the new_ld->dev.release function will take care of kfreeing the
      associated memory.
      Signed-off-by: Levente Kurusa <levex@linux.com>
      Acked-by: Jingoo Han <jg1.han@samsung.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      54f5968d
    • ipmi: Add missing rv in ipmi_parisc_probe() · dfa19426
      Committed by Geert Uytterhoeven
      Fix
      
        drivers/char/ipmi/ipmi_si_intf.c: In function 'ipmi_parisc_probe':
        drivers/char/ipmi/ipmi_si_intf.c:2752:2: error: 'rv' undeclared (first use in this function)
        drivers/char/ipmi/ipmi_si_intf.c:2752:2: note: each undeclared identifier is reported only once for each function it appears in
      
      Introduced by commit d02b3709 ("ipmi: Cleanup error return")
      Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Acked-by: Corey Minyard <cminyard@mvista.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dfa19426
  7. 30 Jan 2014, 12 commits
    • xen/gnttab: Use phys_addr_t to describe the grant frame base address · 47c54205
      Committed by Julien Grall
      On ARM, the address size can be 32 bits or 64 bits (if
      CONFIG_ARCH_PHYS_ADDR_T_64BIT is enabled).
      We can't assume that the grant frame base address will always fit in an
      unsigned long.  Use phys_addr_t instead of unsigned long as the argument
      type for gnttab_setup_auto_xlat_frames.
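
      The shape of the change, as a diff-style sketch:

        -int gnttab_setup_auto_xlat_frames(unsigned long addr);
        +int gnttab_setup_auto_xlat_frames(phys_addr_t addr);
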
      Signed-off-by: Julien Grall <julien.grall@linaro.org>
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Reviewed-by: David Vrabel <david.vrabel@citrix.com>
      47c54205
    • xen: swiotlb: handle sizeof(dma_addr_t) != sizeof(phys_addr_t) · e17b2f11
      Committed by Ian Campbell
      The use of phys_to_machine and machine_to_phys in the phys<=>bus
      conversions causes us to lose the top bits of the DMA address if the size
      of a DMA address is not the same as the size of the physical address.
      
      This can happen in practice on ARM where foreign pages can be above 4GB even
      though the local kernel does not have LPAE page tables enabled (which is
      totally reasonable if the guest does not itself have >4GB of RAM). In this
      case the kernel still maps the foreign pages at a phys addr below 4G (as it
      must) but the resulting DMA address (returned by the grant map operation) is
      much higher.
      
      This is analogous to a hardware device which has its view of RAM mapped up
      high for some reason.
      
      This patch makes I/O to foreign pages (specifically blkif) work on 32-bit ARM
      systems with more than 4GB of RAM.
      Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      e17b2f11
    • target: Fix percpu_ref_put race in transport_lun_remove_cmd · 5259a06e
      Committed by Nicholas Bellinger
      This patch fixes a percpu_ref_put race for se_lun->lun_ref in
      transport_lun_remove_cmd() where ->lun_ref could end up being
      put more than once per command via different target completion
      and fabric release contexts.
      
      It adds a cmpxchg() for se_cmd->lun_ref_active to ensure that
      percpu_ref_put() is only ever called once per se_cmd.
      
      This bug was manifesting itself as a LUN shutdown regression
      in >= v3.13 code, where percpu_ref_kill() would end up
      hanging indefinitely due to the incorrect percpu_ref count.
      
      (Change se_cmd->lun_ref_active from bool -> int to force at
       least a 4-byte cmpxchg with MIPS ll/sc ins. - Fengguang)
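
      Conceptually, the put path becomes (a sketch of the cmpxchg guard; the
      real code lives in transport_lun_remove_cmd()):

        if (lun && cmpxchg(&cmd->lun_ref_active, true, false))
                percpu_ref_put(&lun->lun_ref);
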
      Reported-by: Tommy Apel <tommyapeldk@gmail.com>
      Cc: Tommy Apel <tommyapeldk@gmail.com>
      Cc: <stable@vger.kernel.org> #3.13+
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
      5259a06e
    • target/iscsi: Fix network portal creation race · ee291e63
      Committed by Andy Grover
      When creating network portals rapidly, such as when restoring a
      configuration, LIO's code to reuse existing portals can return a false
      negative if the thread hasn't run yet and set np_thread_state to
      ISCSI_NP_THREAD_ACTIVE. This causes an error in the network stack
      when attempting to bind to the same address/port.
      
      This patch sets NP_THREAD_ACTIVE before the np is placed on g_np_list,
      so even if the thread hasn't run yet, iscsit_get_np will return the
      existing np.
      
      Also, convert np_lock -> np_mutex + hold across adding new net portal
      to g_np_list to prevent a race where two threads may attempt to create
      the same network portal, resulting in one of them failing.
      
      (nab: Add missing mutex_unlocks in iscsit_add_np failure paths)
      (DanC: Fix incorrect spin_unlock -> spin_unlock_bh)
      Signed-off-by: Andy Grover <agrover@redhat.com>
      Cc: <stable@vger.kernel.org> #3.1+
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
      ee291e63
    • ath10k: configure access category for arp · ab6258ed
      Committed by Marek Puzyniak
      ARP frame exchange does not work properly with a UAPSD-enabled AP.
      ARP requests that arrive with access category 0 are processed by the
      network stack and sent back with access category 0, but the FW changes
      the access category to 6.  This causes problems when a UAPSD-associated
      STA is sleeping after having sent an ARP request.  Configuring the ARP
      access category in the FW to best effort (0) solves this problem: ARP
      frames will be sent with access category 0.

      Simplify the ARP AC override functionality by removing a redundant entry
      in the pdev param mapping table.  There should be only one entry in the
      pdev param map, but the enum has a different name for different FW.
      
      kvalo: change the warning message
      Signed-off-by: Marek Puzyniak <marek.puzyniak@tieto.com>
      Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
      ab6258ed
    • ath10k: properly return err from start() · c60bdd83
      Committed by Michal Kazior
      If recovery failed, ath10k returned 0 (success) and
      mac80211 continued to call other driver callbacks.
      This caused a NULL pointer dereference.
      
      This is how the failure looked like:
      
      ath10k: ctl_resp never came in (-110)
      ath10k: failed to connect to HTC: -110
      ath10k: could not init core (-110)
      BUG: unable to handle kernel NULL pointer dereference at           (null)
      IP: [<ffffffffa0b355c1>] ath10k_ce_send+0x1d/0x15d [ath10k_pci]
      PGD 0
      Oops: 0000 [#1] PREEMPT SMP
      Modules linked in: ath10k_pci ath10k_core ath5k ath9k ath9k_common ath9k_hw ath mac80211 cfg80211 nf_nat_ipv4 ]
      CPU: 1 PID: 36 Comm: kworker/1:1 Tainted: G        WC   3.13.0-rc8-wl-ath+ #8
      Hardware name: To be filled by O.E.M. To be filled by O.E.M./HURONRIVER, BIOS 4.6.5 05/02/2012
      Workqueue: events ieee80211_restart_work [mac80211]
      task: ffff880215b521c0 ti: ffff880215e18000 task.ti: ffff880215e18000
      RIP: 0010:[<ffffffffa0b355c1>]  [<ffffffffa0b355c1>] ath10k_ce_send+0x1d/0x15d [ath10k_pci]
      RSP: 0018:ffff880215e19af8  EFLAGS: 00010292
      RAX: ffff880215e19b10 RBX: 0000000000000000 RCX: 0000000000000018
      RDX: 00000000d9ccf800 RSI: ffff8800c965ad00 RDI: 0000000000000000
      RBP: ffff880215e19b58 R08: 0000000000000002 R09: 0000000000000000
      R10: ffffffff812e1a23 R11: 0000000000000292 R12: 0000000000000018
      R13: 0000000000000000 R14: 0000000000000002 R15: ffff88021562d700
      FS:  0000000000000000(0000) GS:ffff88021fa80000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      CR2: 0000000000000000 CR3: 0000000001a0d000 CR4: 00000000000407e0
      Stack:
       d9ccf8000d47df40 ffffffffa0b367a0 ffff880215e19b10 0000000000000010
       ffff880215e19b68 ffff880215e19b28 0000000000000018 ffff8800c965ad00
       0000000000000018 0000000000000000 0000000000000002 ffff88021562d700
      Call Trace:
       [<ffffffffa0b3251d>] ath10k_pci_hif_send_head+0xa7/0xcb [ath10k_pci]
       [<ffffffffa0b16cbe>] ath10k_htc_send+0x23d/0x2d0 [ath10k_core]
       [<ffffffffa0b1a169>] ath10k_wmi_cmd_send_nowait+0x5d/0x85 [ath10k_core]
       [<ffffffffa0b1aaef>] ath10k_wmi_cmd_send+0x62/0x115 [ath10k_core]
       [<ffffffff814e8abd>] ? __netdev_alloc_skb+0x4b/0x9b
       [<ffffffffa0b1c438>] ath10k_wmi_vdev_set_param+0x91/0xa3 [ath10k_core]
       [<ffffffffa0b0e0d5>] ath10k_mac_set_rts+0x3e/0x40 [ath10k_core]
       [<ffffffffa0b0e1d0>] ath10k_set_frag_threshold+0x5e/0x9c [ath10k_core]
       [<ffffffffa09d60eb>] ieee80211_reconfig+0x12a/0x7b3 [mac80211]
       [<ffffffff815a8069>] ? mutex_unlock+0x9/0xb
       [<ffffffffa09b3a40>] ieee80211_restart_work+0x5e/0x68 [mac80211]
       [<ffffffff810c01d0>] process_one_work+0x1d7/0x2fc
       [<ffffffff810c0166>] ? process_one_work+0x16d/0x2fc
       [<ffffffff810c06c8>] worker_thread+0x12e/0x1fb
       [<ffffffff810c059a>] ? rescuer_thread+0x27b/0x27b
       [<ffffffff810c5aee>] kthread+0xb5/0xbd
       [<ffffffff815a9220>] ? _raw_spin_unlock_irq+0x28/0x42
       [<ffffffff810c5a39>] ? __kthread_parkme+0x5c/0x5c
       [<ffffffff815ae04c>] ret_from_fork+0x7c/0xb0
       [<ffffffff810c5a39>] ? __kthread_parkme+0x5c/0x5c
      Code: df ff d0 48 83 c4 18 5b 41 5c 41 5d 5d c3 55 48 89 e5 41 57 41 56 45 89 c6 41 55 41 54 41 89 cc 53 48 89
      RIP  [<ffffffffa0b355c1>] ath10k_ce_send+0x1d/0x15d [ath10k_pci]
       RSP <ffff880215e19af8>
      CR2: 0000000000000000
      Reported-By: Ben Greear <greearb@candelatech.com>
      Signed-off-by: Michal Kazior <michal.kazior@tieto.com>
      Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com>
      c60bdd83
    • drm/nouveau: resume display if any later suspend bits fail · f3980dc5
      Committed by Ilia Mirkin
      If either idling channels or suspending the fence were to fail, the
      display would never be resumed. Also if a client fails, resume the fence
      (not functionally important, but it would potentially leak memory).
      
      See https://bugs.freedesktop.org/show_bug.cgi?id=70213
      Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
      Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
      f3980dc5
    • drm/nouveau: fix lock unbalance in nouveau_crtc_page_flip · 09c3de13
      Committed by Maarten Lankhorst
      Fixes a regression introduced by d5c1e84b
      "drm/nouveau: hold mutex while syncing to kernel channel".
      
      Cc: stable@vger.kernel.org #3.13
      Reported-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
      Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
      09c3de13
    • drm/nv50: fill in crtc mode struct members from crtc_mode_fixup · eb2e9686
      Committed by Ben Skeggs
      The DRM uses the adjusted mode to calculate constants for vblank
      timestamping.  Our encoder mode_fixup (usually) replaces this data
      with our backend mode information, which doesn't have the needed
      data filled in already.
      
      Reported-by: Mario Kleiner <mario.kleiner.de@gmail.com>
      Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
      eb2e9686
    • drm/radeon/dce8: workaround for atom BlankCrtc table · 78fe9e54
      Committed by Alex Deucher
      Some DCE8 boards have a funky BlankCrtc table that results
      in a timeout when trying to blank the display.  The
      timeout is harmless (all operations needed from the table
      are complete), but wastes time and is confusing to users so
      work around it.
      
      bug:
      https://bugs.freedesktop.org/show_bug.cgi?id=73420
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      Cc: stable@vger.kernel.org
      78fe9e54