1. 06 Dec, 2014: 1 commit
  2. 27 Nov, 2014: 6 commits
  3. 26 Nov, 2014: 1 commit
    • kvm: fix kvm_is_mmio_pfn() and rename to kvm_is_reserved_pfn() · d3fccc7e
      Committed by Ard Biesheuvel
      This reverts commit 85c8555f ("KVM: check for !is_zero_pfn() in
      kvm_is_mmio_pfn()") and renames the function to kvm_is_reserved_pfn.
      
      The problem being addressed by the patch above was that some ARM code
      based the memory mapping attributes of a pfn on the return value of
      kvm_is_mmio_pfn(), whose name indeed suggests that such pfns should
      be mapped as device memory.
      
      However, kvm_is_mmio_pfn() doesn't do quite what it says on the tin,
      and the existing non-ARM users were already using it in a way which
      suggests that its name should have been 'kvm_is_reserved_pfn' from
      the beginning, e.g., to decide whether or not to call get_page/put_page
      on the pfn. This means that returning false for the zero page is a
      mistake and the patch above should be reverted.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
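      To illustrate the distinction the rename makes, here is a minimal, hedged sketch
      of a reserved-pfn check and of the kind of non-ARM usage the message refers to
      (refcounting only regular pages). It is a reconstruction based on 3.18-era KVM
      internals, not the exact upstream diff.

          /* Sketch: a pfn is "reserved" if it has no regular, refcountable page. */
          static bool kvm_is_reserved_pfn_sketch(unsigned long pfn)
          {
                  if (pfn_valid(pfn))
                          return PageReserved(pfn_to_page(pfn)); /* includes the zero page */
                  return true; /* no struct page at all, e.g. true MMIO */
          }

          /* Typical caller: only touch the refcount of non-reserved pfns. */
          static void release_pfn_sketch(unsigned long pfn)
          {
                  if (!kvm_is_reserved_pfn_sketch(pfn))
                          put_page(pfn_to_page(pfn));
          }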
  4. 25 Nov, 2014: 4 commits
  5. 24 Nov, 2014: 2 commits
  6. 21 Nov, 2014: 2 commits
  7. 20 Nov, 2014: 2 commits
    • drm: s/enum_blob_list/enum_list/ in drm_property · 3758b341
      Committed by Daniel Vetter
      I guess for hysterical raisins this was meant to be the way to read
      blob properties. But that's done with the two-stage approach which
      uses a separate blob KMS object and the special-purpose get_blob
      ioctl.
      
      Shipping userspace seems to have never relied on this, and the kernel
      also never put any blob thing onto that property. And nowadays it
      would blow up, e.g. in drm_property_destroy. Also it makes no sense to
      return values in an ioctl that only returns metadata about everything.
      
      So let's ditch all the internal code for the blob list, rename the
      list to be unambiguous and sprinkle comments all over the place to
      explain this peculiar piece of API.
      
      v2: Squash in fixup from Rob to remove now unused variables.
      
      Cc: Rob Clark <robdclark@gmail.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
      Reviewed-by: Rob Clark <robdclark@gmail.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
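      For reference, a hedged userspace sketch of the two-stage blob read the message
      describes, using the libdrm helpers drmModeGetProperty() and
      drmModeGetPropertyBlob(); the property id and value are assumed to come from a
      prior drmModeObjectGetProperties() call.

          #include <stdint.h>
          #include <stdio.h>
          #include <xf86drmMode.h>

          /* Stage 1: fetch the property metadata; stage 2: fetch the blob contents. */
          static void read_blob_property(int fd, uint32_t prop_id, uint64_t prop_value)
          {
                  drmModePropertyPtr prop = drmModeGetProperty(fd, prop_id);
                  if (!prop)
                          return;

                  if (prop->flags & DRM_MODE_PROP_BLOB) {
                          /* The property value is the id of a separate blob object. */
                          drmModePropertyBlobPtr blob =
                                  drmModeGetPropertyBlob(fd, (uint32_t)prop_value);
                          if (blob) {
                                  printf("%s: %u bytes of blob data\n",
                                         prop->name, blob->length);
                                  drmModeFreePropertyBlob(blob);
                          }
                  }
                  drmModeFreeProperty(prop);
          }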
    • drm/atomic: Don't overrun the connector array when hotplugging · f52b69f1
      Committed by Daniel Vetter
      Yet another fallout from not considering DP MST hotplug. With the
      previous patches we have stable indices, but it might still happen
      that a connector gets added between when we allocate the array and
      when we actually add a connector to the state. Especially when we
      back off due to ww mutex contention or similar issues.
      
      So store the sizes of the arrays in struct drm_atomic_state and
      double-check them. We don't really care about races except that we
      want to use a consistent value, so ACCESS_ONCE is all we need. And if
      we indeed notice that we'd overrun the array then just give up and
      restart the entire ioctl.
      Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
      Reviewed-by: Rob Clark <robdclark@gmail.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
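      A hedged sketch of the pattern described above; the function and index handling
      are illustrative and simplified, not the exact upstream diff.

          /*
           * At allocation time (inside the state alloc function, sketched):
           *         state->num_connector = ACCESS_ONCE(dev->mode_config.num_connector);
           * ACCESS_ONCE only guarantees we work with one consistent snapshot of a
           * size that can change underneath us.
           */
          static int hold_connector_state(struct drm_atomic_state *state, int index)
          {
                  /*
                   * A connector hotplugged after the state arrays were allocated gets
                   * an index past the end of the array; bail out with -EAGAIN so the
                   * caller can restart the entire ioctl with freshly sized arrays.
                   */
                  if (index >= state->num_connector)
                          return -EAGAIN;

                  /* ... allocate and record the connector state at 'index' ... */
                  return 0;
          }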
  8. 19 Nov, 2014: 1 commit
  9. 18 Nov, 2014: 3 commits
    • can: dev: add can_is_canfd_skb() API · 98e69016
      Committed by Dong Aisheng
      CAN device drivers can use can_is_canfd_skb() to check whether the frame
      to send is a CAN FD frame or a normal CAN frame.
      Acked-by: Oliver Hartkopp <socketcan@hartkopp.net>
      Signed-off-by: Dong Aisheng <b29396@freescale.com>
      Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
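      A hedged sketch of how a driver's transmit path might use the new helper; the
      controller-specific register writes are left as comments, and only
      can_is_canfd_skb() and the frame structures come from the CAN core.

          #include <linux/can/dev.h>

          static netdev_tx_t demo_can_start_xmit(struct sk_buff *skb,
                                                 struct net_device *dev)
          {
                  struct canfd_frame *cf = (struct canfd_frame *)skb->data;

                  if (can_is_canfd_skb(skb)) {
                          /* CAN FD frame: up to 64 data bytes, FD control bits to set. */
                          /* ... program the controller in FD mode from cf ... */
                  } else {
                          /* Classic CAN frame: at most 8 data bytes. */
                          /* ... program the controller in classic mode from cf ... */
                  }
                  return NETDEV_TX_OK;
          }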
    • clk-divider: Fix READ_ONLY when divider > 1 · e6d5e7d9
      Committed by James Hogan
      Commit 79c6ab50 (clk: divider: add CLK_DIVIDER_READ_ONLY flag) in
      v3.16 introduced the CLK_DIVIDER_READ_ONLY flag which caused the
      recalc_rate() and round_rate() clock callbacks to be omitted.
      
      However, using this flag has the unfortunate side effect that, when a
      clock rate change is attempted, the rate recalculation code always
      treats such a clock as a pass-through clock, i.e. with a fixed divide
      of 1, which may not be the case. Child clock rates are then
      recalculated using the wrong parent rate.
      
      Therefore instead of dropping the recalc_rate() and round_rate()
      callbacks, alter clk_divider_bestdiv() to always report the current
      divider as the best divider so that it is never altered.
      
      For me the read-only clock was the system clock, which divided the PLL
      rate by 2, and from which both the UART and the SPI clocks were divided.
      Initial setting of the UART rate set it correctly, but when the SPI
      clock was set, the other child clocks were miscalculated. The UART clock
      was recalculated using the PLL rate as the parent rate, resulting in a
      UART new_rate of double what it should be, and a UART which spewed forth
      garbage when the rate changes were propagated.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Thomas Abraham <thomas.ab@samsung.com>
      Cc: Tomasz Figa <t.figa@samsung.com>
      Cc: Max Schwarz <max.schwarz@online.de>
      Cc: <stable@vger.kernel.org> # v3.16+
      Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com>
      Signed-off-by: Michael Turquette <mturquette@linaro.org>
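      A hedged sketch of the fix described above; the helpers mirror the clk-divider
      driver (clk_readl(), div_mask(), _get_div()), but the function is simplified and
      is not the exact upstream diff.

          static int clk_divider_bestdiv_sketch(struct clk_divider *divider)
          {
                  unsigned int val;

                  if (divider->flags & CLK_DIVIDER_READ_ONLY) {
                          /*
                           * Read-only divider: report the value currently programmed
                           * in hardware as the "best" divider, so rate recalculation
                           * sees the real divide instead of assuming a divide of 1.
                           */
                          val = clk_readl(divider->reg) >> divider->shift;
                          val &= div_mask(divider);
                          return _get_div(divider, val);
                  }

                  /* ... otherwise search for the closest achievable divider ... */
                  return 1;
          }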
    • clk: qcom: Fix duplicate rbcpr clock name · 9a6cb70f
      Committed by Georgi Djakov
      There is a duplicate clock name on the apq8084 platform that causes
      the following warning: "RBCPR_CLK_SRC" redefined
      
      Resolve this by adding an MMSS_ prefix to this clock, making its name
      consistent with the msm8974 platform.
      
      Fixes: 2b46cd23 ("clk: qcom: Add APQ8084 Multimedia Clock Controller (MMCC) support")
      Signed-off-by: Georgi Djakov <gdjakov@mm-sol.com>
      Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
      Signed-off-by: Michael Turquette <mturquette@linaro.org>
  10. 16 Nov, 2014: 3 commits
  11. 15 Nov, 2014: 5 commits
  12. 14 Nov, 2014: 3 commits
    • drm/i915: Make the physical object coherent with GTT · 6a2c4232
      Committed by Chris Wilson
      Currently, objects for which the hardware needs a contiguous physical
      address are allocated a shadow backing storage to satisfy the constraint.
      This shadow buffer is not wired into the normal obj->pages and so the
      physical object is incoherent with accesses via the GPU, GTT and CPU. By
      setting up the appropriate scatter-gather table, we can allow userspace
      to access the physical object either via a GTT mmapping of it or by
      rendering into the GEM bo. However, keeping the CPU mmap of the shmemfs
      backing storage coherent with the contiguous shadow is not yet possible.
      Fortuitously, CPU mmaps of objects requiring physical addresses are not
      expected to be coherent anyway.
      
      This allows the physical constraint of the GEM object to be transparent
      to userspace and allows it to efficiently render into or update such
      objects via the GTT and GPU.
      
      v2: Fix leak of pci handle spotted by Ville
      v3: Remove the now duplicate call to detach_phys_object during free.
      v4: Wait for rendering before pwrite. As this patch makes it possible to
      render into the phys object, we should make it correct as well!
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
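      A hedged sketch of the central idea (wiring a contiguous physical allocation into
      a scatter-gather table so the object has normal obj->pages); the function name is
      illustrative and the handling is simplified, not the exact upstream diff.

          #include <linux/err.h>
          #include <linux/scatterlist.h>
          #include <linux/slab.h>

          static struct sg_table *build_contig_sg(struct page *first_page,
                                                  unsigned int size)
          {
                  struct sg_table *st;

                  st = kmalloc(sizeof(*st), GFP_KERNEL);
                  if (!st)
                          return ERR_PTR(-ENOMEM);

                  if (sg_alloc_table(st, 1, GFP_KERNEL)) {
                          kfree(st);
                          return ERR_PTR(-ENOMEM);
                  }

                  /* One physically contiguous chunk: a single entry covers it all,
                   * which lets the GTT/GPU paths treat it like any other object. */
                  sg_set_page(st->sgl, first_page, size, 0);
                  return st;
          }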
    • mem-hotplug: reset node managed pages when hot-adding a new pgdat · f784a3f1
      Committed by Tang Chen
      In free_area_init_core(), zone->managed_pages is set to an approximate
      value for lowmem, and will be adjusted when the bootmem allocator frees
      pages into the buddy system.
      
      But free_area_init_core() is also called by hotadd_new_pgdat() when
      hot-adding memory.  As a result, zone->managed_pages of the newly added
      node's pgdat is set to an approximate value in the very beginning.
      
      Even if the memory on that node has not been onlined,
      /sys/devices/system/node/nodeXXX/meminfo shows wrong values:
      
        hot-add node2 (memory not onlined)
        cat /sys/devices/system/node/node2/meminfo
        Node 2 MemTotal:       33554432 kB
        Node 2 MemFree:               0 kB
        Node 2 MemUsed:        33554432 kB
        Node 2 Active:                0 kB
      
      This patch fixes the problem by resetting node managed pages to 0 after
      hot-adding a new node.
      
      1. Move reset_managed_pages_done from reset_node_managed_pages() to
         reset_all_zones_managed_pages()
      2. Make reset_node_managed_pages() non-static
      3. Call reset_node_managed_pages() in hotadd_new_pgdat() after pgdat
         is initialized
      Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
      Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: <stable@vger.kernel.org>	[3.16+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
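      A hedged sketch of what steps 1-3 above amount to; it mirrors what
      reset_node_managed_pages() is expected to do and where it is called from, but it
      is a reconstruction, not the exact upstream diff.

          #include <linux/mmzone.h>

          /* Forget the approximate managed_pages values set by free_area_init_core();
           * nothing on a freshly hot-added, not-yet-onlined node is managed. */
          void reset_node_managed_pages(pg_data_t *pgdat)
          {
                  struct zone *z;

                  for (z = pgdat->node_zones; z < pgdat->node_zones + MAX_NR_ZONES; z++)
                          z->managed_pages = 0;
          }

          /* In hotadd_new_pgdat(), after the pgdat has been initialized (sketch):
           *         reset_node_managed_pages(pgdat);
           */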
    • mm/page_alloc: fix incorrect isolation behavior by rechecking migratetype · ad53f92e
      Committed by Joonsoo Kim
      Before describing the bugs themselves, I first explain the definition of
      a freepage.
      
       1. Pages on the buddy list are counted as freepages.
       2. Pages on the isolate migratetype buddy list are *not* counted as freepages.
       3. Pages on the CMA buddy list are also counted as CMA freepages.
      
      Now I describe the problems and the related patches.
      
      Patch 1: There are race conditions when getting the pageblock
      migratetype, which result in misplacement of freepages on the buddy
      list, an incorrect freepage count and unavailability of freepages.
      
      Patch 2: Freepages on the pcp list could carry stale cached information
      used to determine which migratetype buddy list they should go to. This
      causes misplacement of freepages on the buddy list and an incorrect
      freepage count.
      
      Patch 4: Merging between freepages from pageblocks of different
      migratetypes causes a freepage accounting problem. That patch fixes it.
      
      Without patchset [3], the above problem doesn't happen in my CMA
      allocation test, because CMA reserved pages aren't used at all, so there
      is no chance for the above race.
      
      With patchset [3], I did a simple CMA allocation test and got the result
      below:
      
       - Virtual machine, 4 CPUs, 1024 MB memory, 256 MB CMA reservation
       - kernel build (make -j16) running in the background
       - 30 CMA allocation attempts (8 MB * 30 = 240 MB) at 5 second intervals
       - Result: the freepage count ends up off by more than 5000
      
      With patchset [3] and this patchset, no freepage counts are missed, so I
      conclude that the problems are solved.
      
      These problems also occur in my simple memory offlining test.
      
      This patch (of 4):
      
      There are two paths to reach the core free function of the buddy
      allocator, __free_one_page(): one is free_one_page()->__free_one_page()
      and the other is
      free_hot_cold_page()->free_pcppages_bulk()->__free_one_page(). Each
      path has a race condition causing serious problems. This patch focuses
      on the first type of freepath; the following patch will solve the
      problem in the second type.
      
      In the first type of freepath, we get the migratetype of the page being
      freed without holding the zone lock, so it can be racy. There are two
      cases of this race.
      
       1. Pages are added to the isolate buddy list after the original
          migratetype has been restored
      
          CPU1                                   CPU2
      
          get migratetype => return MIGRATE_ISOLATE
          call free_one_page() with MIGRATE_ISOLATE
      
                                      grab the zone lock
                                      unisolate pageblock
                                      release the zone lock
      
          grab the zone lock
          call __free_one_page() with MIGRATE_ISOLATE
          freepage goes onto the isolate buddy list,
          although the pageblock is already unisolated
      
      This may cause two problems. One is that we can't use this page anymore
      until the next isolation attempt on this pageblock, because the freepage
      is on the isolate buddy list. The other is that the freepage accounting
      could be wrong due to merging between different buddy lists. Freepages
      on the isolate buddy list aren't counted as freepages, but ones on the
      normal buddy list are. If a merge happens, a buddy freepage on the
      normal buddy list is inevitably moved to the isolate buddy list without
      any adjustment of the freepage accounting, so the count becomes
      incorrect.
      
       2. Pages are added to the normal buddy list while the pageblock is
          isolated. This is similar to the above case.
      
      This may also cause two problems. One is that we can't keep these
      freepages from being allocated: although the pageblock is isolated, the
      freepage is added to the normal buddy list, so it can be allocated
      without any restriction. The other problem is the same as in case 1,
      that is, incorrect freepage accounting.
      
      This race condition can be prevented by checking the migratetype again
      while holding the zone lock. Because that is a somewhat heavy operation
      and it isn't needed in the common case, we want to avoid the recheck as
      much as possible. So this patch introduces a new variable,
      nr_isolate_pageblock, in struct zone to track whether the zone has any
      isolated pageblock. With this, we can avoid re-checking the migratetype
      in the common case and do it only if there is an isolated pageblock or
      the migratetype is MIGRATE_ISOLATE. This solves the above-mentioned
      problems (see the sketch after this entry).
      
      Changes from v3:
      Add one more check in free_one_page() for whether the migratetype is
      MIGRATE_ISOLATE or not. Without this, the above-mentioned case 1 could
      still happen.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: Minchan Kim <minchan@kernel.org>
      Acked-by: Michal Nazarewicz <mina86@mina86.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Heesub Shin <heesub.shin@samsung.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Ritesh Harjani <ritesh.list@gmail.com>
      Cc: Gioh Kim <gioh.kim@lge.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
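      As referenced above, a hedged sketch of the resulting recheck; it follows
      free_one_page() in mm/page_alloc.c of that era, with the uncommon-case re-read
      of the migratetype guarded by the new per-zone isolation counter, but it is
      simplified rather than the exact upstream diff.

          static void free_one_page(struct zone *zone, struct page *page,
                                    unsigned long pfn, unsigned int order,
                                    int migratetype)
          {
                  spin_lock(&zone->lock);

                  /*
                   * The caller looked up the migratetype without the zone lock.
                   * Only when this zone has an isolated pageblock (or the caller
                   * already saw MIGRATE_ISOLATE) can that value be stale, so only
                   * then pay for the re-read under the lock.
                   */
                  if (unlikely(has_isolate_pageblock(zone) ||
                               is_migrate_isolate(migratetype)))
                          migratetype = get_pfnblock_migratetype(page, pfn);

                  __free_one_page(page, pfn, zone, order, migratetype);
                  spin_unlock(&zone->lock);
          }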
  13. 13 Nov, 2014: 7 commits