1. 19 Apr 2017, 1 commit
  2. 25 Feb 2017, 3 commits
  3. 11 Jan 2017, 1 commit
  4. 12 Nov 2016, 1 commit
  5. 12 Oct 2016, 1 commit
  6. 28 May 2016, 1 commit
    • mm/cma: silence warnings due to max() usage · badbda53
      Committed by Stephen Rothwell
      pageblock_order can be (at least) an unsigned int or an unsigned long
      depending on the kernel config and architecture, so use max_t(unsigned
      long, ...) when comparing it.
      
      fixes these warnings:
      
      In file included from include/asm-generic/bug.h:13:0,
                       from arch/powerpc/include/asm/bug.h:127,
                       from include/linux/bug.h:4,
                       from include/linux/mmdebug.h:4,
                       from include/linux/mm.h:8,
                       from include/linux/memblock.h:18,
                       from mm/cma.c:28:
      mm/cma.c: In function 'cma_init_reserved_mem':
      include/linux/kernel.h:748:17: warning: comparison of distinct pointer types lacks a cast
        (void) (&_max1 == &_max2);
                       ^
      mm/cma.c:186:27: note: in expansion of macro 'max'
        alignment = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
                                 ^
      mm/cma.c: In function 'cma_declare_contiguous':
      include/linux/kernel.h:748:17: warning: comparison of distinct pointer types lacks a cast
        (void) (&_max1 == &_max2);
                       ^
      include/linux/kernel.h:747:9: note: in definition of macro 'max'
        typeof(y) _max2 = (y);
              ^
      mm/cma.c:270:29: note: in expansion of macro 'max'
         (phys_addr_t)PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
                                   ^
      include/linux/kernel.h:748:17: warning: comparison of distinct pointer types lacks a cast
        (void) (&_max1 == &_max2);
                       ^
      include/linux/kernel.h:747:21: note: in definition of macro 'max'
        typeof(y) _max2 = (y);
                          ^
      mm/cma.c:270:29: note: in expansion of macro 'max'
         (phys_addr_t)PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
                                   ^
      
      [akpm@linux-foundation.org: coding-style fixes]
      Link: http://lkml.kernel.org/r/20160526150748.5be38a4f@canb.auug.org.au
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      badbda53
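
      Annotation: the warning above comes from max()'s built-in type check, which max_t() sidesteps by casting both arguments first. A minimal standalone sketch, using simplified stand-ins for the kernel macros in include/linux/kernel.h (not the kernel code itself):

        /* Build with: gcc -Wall max_t_demo.c */
        #include <stdio.h>

        /* max() verifies both arguments have the same type; mixing e.g.
         * int and unsigned long makes &_max1 and &_max2 distinct pointer
         * types, which is exactly the warning quoted above. */
        #define max(x, y) ({                    \
                typeof(x) _max1 = (x);          \
                typeof(y) _max2 = (y);          \
                (void) (&_max1 == &_max2);      \
                _max1 > _max2 ? _max1 : _max2; })

        /* max_t() casts both sides to an explicit type first, so the
         * pointer comparison always sees identical types. */
        #define max_t(type, x, y) max((type)(x), (type)(y))

        int main(void)
        {
                int max_order_minus_1 = 10;        /* stand-in for MAX_ORDER - 1 */
                unsigned long pageblock_order = 9; /* int or long, per arch/config */

                /* max(max_order_minus_1, pageblock_order) would warn here */
                unsigned long order = max_t(unsigned long, max_order_minus_1,
                                            pageblock_order);
                printf("order = %lu\n", order);
                return 0;
        }

      Swapping max_t back to max in the line above reproduces the "comparison of distinct pointer types lacks a cast" warning under gcc -Wall.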
  7. 06 Nov 2015, 1 commit
    • mm/cma.c: suppress warning · 3acaea68
      Committed by Andrew Morton
      mm/cma.c: In function 'cma_alloc':
      mm/cma.c:366: warning: 'pfn' may be used uninitialized in this function
      
      The patch actually improves the tracing a bit: if alloc_contig_range()
      fails, tracing will display the offending pfn rather than -1.
      
      Cc: Stefan Strogin <stefan.strogin@gmail.com>
      Cc: Michal Nazarewicz <mpn@google.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
      Cc: Thierry Reding <treding@nvidia.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3acaea68
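
      Annotation: a minimal standalone sketch of the pattern behind this fix, assuming the change simply pre-initializes pfn so every exit path has a defined value to trace (names below are illustrative, not the mm/cma.c code):

        #include <stdio.h>

        static int try_alloc(unsigned long pfn)
        {
                return pfn % 3 == 0 ? 0 : -1;   /* pretend some pfns fail */
        }

        int main(void)
        {
                unsigned long pfn = -1UL;       /* defined even if loop never runs */
                int ret = -1;

                for (unsigned long start = 5; start < 8; start++) {
                        pfn = start;
                        ret = try_alloc(pfn);
                        if (ret == 0)
                                break;
                }
                /* tracing can always report the last attempted pfn,
                 * instead of a hard-coded -1 on failure */
                printf("result=%d last pfn=%lu\n", ret, pfn);
                return 0;
        }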
  8. 23 Oct 2015, 1 commit
  9. 25 Jun 2015, 2 commits
    • mm/memblock: add extra "flags" to memblock to allow selection of memory based on attribute · fc6daaf9
      Committed by Tony Luck
      Some high end Intel Xeon systems report uncorrectable memory errors as a
      recoverable machine check.  Linux has included code for some time to
      process these and just signal the affected processes (or even recover
      completely if the error was in a read only page that can be replaced by
      reading from disk).
      
      But we have no recovery path for errors encountered during kernel
      code execution; except for some very specific cases, we are unlikely
      to ever be able to recover.
      
      Enter memory mirroring.  Actually, the 3rd generation of memory mirroring.
      
      Gen1: All memory is mirrored
      	Pro: No s/w enabling - h/w just gets good data from other side of the
      	     mirror
      	Con: Halves effective memory capacity available to OS/applications
      
      Gen2: Partial memory mirror - just mirror memory behind some memory controllers
      	Pro: Keep more of the capacity
      	Con: Nightmare to enable. Have to choose between allocating from
      	     mirrored memory for safety vs. NUMA local memory for performance
      
      Gen3: Address range partial memory mirror - some mirror on each memory
            controller
      	Pro: Can tune the amount of mirror and keep NUMA performance
      	Con: I have to write memory management code to implement
      
      The current plan is just to use mirrored memory for kernel allocations.
      This has been broken into two phases:
      
      1) This patch series - find the mirrored memory, use it for boot time
         allocations
      
      2) Wade into mm/page_alloc.c and define a ZONE_MIRROR to pick up the
         unused mirrored memory from mm/memblock.c and only give it out to
         select kernel allocations (this is still being scoped because
         page_alloc.c is scary).
      
      This patch (of 3):
      
      Add extra "flags" to memblock to allow selection of memory based on
      attribute.  No functional changes.
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: Hanjun Guo <guohanjun@huawei.com>
      Cc: Xiexiuqi <xiexiuqi@huawei.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Naoya Horiguchi <nao.horiguchi@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fc6daaf9
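
      Annotation: a standalone sketch of the idea, not the memblock API itself: tag each region with attribute flags and let allocation filter on them. All names below are illustrative stand-ins; the real interface lives in mm/memblock.c:

        #include <stdio.h>

        enum { FLAG_NONE = 0x0, FLAG_MIRROR = 0x1 };    /* hypothetical flags */

        struct region { unsigned long base, size, flags; };

        static struct region regions[] = {
                { 0x00000000, 0x40000000, FLAG_NONE },
                { 0x40000000, 0x10000000, FLAG_MIRROR },  /* mirrored range */
        };

        /* Return the base of the first region that carries all requested
         * flags and is large enough, or 0 if none matches. */
        static unsigned long find_in_range(unsigned long size, unsigned long flags)
        {
                for (unsigned i = 0; i < sizeof(regions)/sizeof(regions[0]); i++)
                        if ((regions[i].flags & flags) == flags &&
                            regions[i].size >= size)
                                return regions[i].base;
                return 0;
        }

        int main(void)
        {
                /* boot-time allocations can now prefer mirrored memory */
                printf("mirrored alloc at %#lx\n",
                       find_in_range(0x100000, FLAG_MIRROR));
                return 0;
        }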
  10. 16 Apr 2015, 1 commit
  11. 15 Apr 2015, 3 commits
  12. 13 Mar 2015, 1 commit
    • mm: cma: fix CMA aligned offset calculation · 850fc430
      Committed by Danesh Petigara
      The CMA aligned offset calculation is incorrect for non-zero order_per_bit
      values.
      
      For example, if cma->order_per_bit = 1, cma->base_pfn = 0x2f800000 and
      align_order = 12, the function returns a value of 0x17c00 instead of 0x400.
      
      This patch fixes the CMA aligned offset calculation.
      
      The previous calculation was wrong and would return too-large values for
      the offset, so that when cma_alloc looks for free pages in the bitmap with
      the requested alignment > order_per_bit, it starts too far into the bitmap
      and so CMA allocations will fail despite there actually being plenty of
      free pages remaining.  It will also probably have the wrong alignment.
      With this change, we will get the correct offset into the bitmap.
      
      One affected user is powerpc KVM, which has kvm_cma->order_per_bit set to
      KVM_CMA_CHUNK_ORDER - PAGE_SHIFT, or 18 - 12 = 6.
      
      [gregory.0xf0@gmail.com: changelog additions]
      Signed-off-by: Danesh Petigara <dpetigara@broadcom.com>
      Reviewed-by: Gregory Fong <gregory.0xf0@gmail.com>
      Acked-by: Michal Nazarewicz <mina86@mina86.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      850fc430
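
      Annotation: a hedged reconstruction of the corrected helper (compare with cma_bitmap_aligned_offset() in mm/cma.c; ALIGN is stubbed here so the sketch stands alone). Reading the changelog's 0x2f800000 as the region base address, i.e. base_pfn 0x2f800 with 4K pages, it reproduces the changelog's numbers:

        #include <stdio.h>

        #define ALIGN(x, a) (((x) + (a) - 1) & ~((unsigned long)(a) - 1))

        struct cma { unsigned long base_pfn; unsigned order_per_bit; };

        /* Offset, in bitmap bits, from the start of the region to the
         * first PFN aligned to align_order. */
        static unsigned long aligned_offset(struct cma *cma, int align_order)
        {
                if (align_order <= cma->order_per_bit)
                        return 0;
                return (ALIGN(cma->base_pfn, 1UL << align_order)
                        - cma->base_pfn) >> cma->order_per_bit;
        }

        int main(void)
        {
                /* base address 0x2f800000 => base_pfn 0x2f800 (4K pages) */
                struct cma cma = { .base_pfn = 0x2f800, .order_per_bit = 1 };

                /* prints 0x400; the old formula gave 0x17c00 */
                printf("offset = %#lx\n", aligned_offset(&cma, 12));
                return 0;
        }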
  13. 12 Feb 2015, 1 commit
  14. 19 Dec 2014, 1 commit
    • mm: cma: split cma-reserved in dmesg log · e48322ab
      Committed by Pintu Kumar
      When the system boots up, we can see the memory statistics in the
      dmesg log along with the total reserved memory, as below:

        Memory: 458840k/458840k available, 65448k reserved, 0K highmem
      
      When CMA is enabled, the total reserved figure still remains the
      same, even though the CMA memory is not really consumed as reserved:
      when we look at /proc/meminfo, the CMA memory is counted as part of
      free memory.  This creates confusion.  This patch corrects the
      problem by properly subtracting the CMA reserved memory from the
      total reserved memory in the dmesg log.
      
      Below is the dmesg snapshot from an arm based device with 512MB RAM and
      12MB single CMA region.
      
      Before this change:
        Memory: 458840k/458840k available, 65448k reserved, 0K highmem
      
      After this change:
        Memory: 458840k/458840k available, 53160k reserved, 12288k cma-reserved, 0K highmem
      Signed-off-by: Pintu Kumar <pintu.k@samsung.com>
      Signed-off-by: Vishnu Pratap Singh <vishnu.ps@samsung.com>
      Acked-by: Michal Nazarewicz <mina86@mina86.com>
      Cc: Rafael Aquini <aquini@redhat.com>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e48322ab
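
      Annotation: the arithmetic of the new log line as a standalone sketch, with the values taken from the before/after snapshot above:

        #include <stdio.h>

        int main(void)
        {
                unsigned long reserved_k = 65448;  /* total reserved, from dmesg */
                unsigned long cma_k = 12288;       /* the 12MB CMA region */

                /* CMA pages are now reported separately instead of being
                 * silently counted inside "reserved" while /proc/meminfo
                 * treats them as free */
                printf("Memory: 458840k/458840k available, %luk reserved, "
                       "%luk cma-reserved, 0K highmem\n",
                       reserved_k - cma_k, cma_k);
                return 0;
        }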
  15. 14 Dec 2014, 2 commits
    • mm/cma: make kmemleak ignore CMA regions · 620951e2
      Committed by Thierry Reding
      kmemleak will add allocations as objects to a pool.  The memory allocated
      for each object in this pool is periodically searched for pointers to
      other allocated objects.  This only works for memory that is mapped into
      the kernel's virtual address space, which happens not to be the case for
      most CMA regions.
      
      Furthermore, CMA regions are typically used to store data transferred to
      or from a device and therefore don't contain pointers to other objects.
      
      Without this, the kernel crashes on the first execution of
      scan_gray_list() because it tries to access highmem.  Perhaps a more
      appropriate fix would be to reject any object that can't map to a
      kernel virtual address?
      
      [akpm@linux-foundation.org: add comment]
      [akpm@linux-foundation.org: fix comment, per Catalin]
      [sfr@canb.auug.org.au: include linux/io.h for phys_to_virt()]
      Signed-off-by: Thierry Reding <treding@nvidia.com>
      Cc: Michal Nazarewicz <mina86@mina86.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      620951e2
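
      Annotation: a hedged reconstruction of the gist of the change, as a fragment rather than standalone code: it belongs in the CMA reservation path in mm/cma.c, with addr the physical base of the reserved region, and the [sfr] note above is the linux/io.h include that phys_to_virt() needs:

        #include <linux/io.h>           /* phys_to_virt() */
        #include <linux/kmemleak.h>     /* kmemleak_ignore() */

                /*
                 * kmemleak scans tracked objects for pointers to other
                 * objects, which requires the memory to be mapped; most
                 * CMA regions are not, and they carry device data rather
                 * than kernel pointers, so tell kmemleak to skip them.
                 */
                kmemleak_ignore(phys_to_virt(addr));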
    • mm: cma: align to physical address, not CMA region position · b5be83e3
      Committed by Gregory Fong
      The alignment in cma_alloc() was done w.r.t. the bitmap.  This is a
      problem when, for example:
      
      - a device requires 16M (order 12) alignment
      - the CMA region is not 16 M aligned
      
      In such a case, the CMA region may start at, say, 0x2f800000, but any
      allocation you make from there will only be aligned relative to that
      base.  Requesting an allocation of 32M with 16M alignment will then
      result in an allocation from 0x2f800000 to 0x31800000, which doesn't
      work very well if your strange device really does require 16M
      alignment.
      
      Change to use bitmap_find_next_zero_area_off() to account for the
      difference in alignment at reserve-time and alloc-time.
      Signed-off-by: Gregory Fong <gregory.0xf0@gmail.com>
      Acked-by: Michal Nazarewicz <mina86@mina86.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Kukjin Kim <kgene.kim@samsung.com>
      Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b5be83e3
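
      Annotation: a standalone sketch of why bitmap-relative alignment is insufficient when the region base itself is misaligned; upstream threads this offset through bitmap_find_next_zero_area_off(). The numbers mirror the example above:

        #include <stdio.h>

        int main(void)
        {
                /* region base 0x2f800000 => base_pfn 0x2f800 (4K pages);
                 * bit 0 of the bitmap corresponds to base_pfn, so aligning
                 * a bit index only aligns relative to the region base */
                unsigned long base_pfn = 0x2f800;
                int align_order = 12;   /* 16MB alignment in 4K pages */
                unsigned long mask = (1UL << align_order) - 1;

                /* pages past the base until the first properly aligned pfn;
                 * this is the extra offset the fix feeds into the bitmap
                 * search */
                unsigned long offset = (-base_pfn) & mask;

                printf("first aligned pfn = %#lx (offset %#lx pages)\n",
                       base_pfn + offset, offset);
                return 0;
        }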
  16. 11 Dec 2014, 1 commit
    • mm/CMA: fix boot regression due to physical address of high_memory · 6b101e2a
      Committed by Joonsoo Kim
      high_memory isn't direct mapped memory, so retrieving its physical
      address isn't appropriate.  But it is useful to check the physical
      address of the highmem boundary, so getting a physical address from
      it is justifiable.  On x86 there is a validation check when
      CONFIG_DEBUG_VIRTUAL is enabled, and it triggers the following boot
      failure, reported by Ingo:
      
        ...
        BUG: Int 6: CR2 00f06f53
        ...
        Call Trace:
          dump_stack+0x41/0x52
          early_idt_handler+0x6b/0x6b
          cma_declare_contiguous+0x33/0x212
          dma_contiguous_reserve_area+0x31/0x4e
          dma_contiguous_reserve+0x11d/0x125
          setup_arch+0x7b5/0xb63
          start_kernel+0xb8/0x3e6
          i386_start_kernel+0x79/0x7d
      
      To fix the boot regression, this patch implements a workaround that
      avoids the validation check on x86 when retrieving the physical
      address of high_memory.  __pa_nodebug(), used by this patch, is
      implemented only on x86, so there is no choice but to use a dirty
      #ifdef.
      
      [akpm@linux-foundation.org: tweak comment]
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Reported-by: Ingo Molnar <mingo@kernel.org>
      Tested-by: Ingo Molnar <mingo@kernel.org>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6b101e2a
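
      Annotation: a hedged reconstruction of the dirty #ifdef described above, as a fragment of cma_declare_contiguous() rather than standalone code:

        	phys_addr_t highmem_start;

        #ifdef CONFIG_X86
        	/*
        	 * high_memory isn't direct mapped, so __pa() on it would
        	 * trip the CONFIG_DEBUG_VIRTUAL sanity check; use the
        	 * unchecked x86-only variant instead.
        	 */
        	highmem_start = __pa_nodebug(high_memory);
        #else
        	highmem_start = __pa(high_memory);
        #endif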
  17. 27 Oct 2014, 4 commits
  18. 14 Oct 2014, 2 commits
  19. 10 Oct 2014, 1 commit
  20. 07 Aug 2014, 7 commits
  21. 24 Jun 2014, 1 commit
  22. 05 Jun 2014, 1 commit
    • cma: add placement specifier for "cma=" kernel parameter · 5ea3b1b2
      Committed by Akinobu Mita
      Currently, the "cma=" kernel parameter is used to specify the size of
      CMA, but we can't specify where it is located.  We want to locate CMA
      below 4GB for devices that only support 32-bit addressing on 64-bit
      systems without an IOMMU.

      This change makes it possible to specify the placement of CMA by
      extending the "cma=" kernel parameter.
      
      Examples:
       1. locate 64MB CMA below 4GB by "cma=64M@0-4G"
       2. locate 64MB CMA exactly at 512MB by "cma=64M@512M"
      
      Note that the DMA contiguous memory allocator on x86 assumes that
      page_address() works for the pages to allocate.  So this change
      limits the end address of the contiguous memory area to
      max_pfn_mapped, via the argument of dma_contiguous_reserve(), to
      prevent the area from being located in highmem.
      Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Don Dutile <ddutile@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5ea3b1b2
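
      Annotation: a standalone sketch of parsing the extended syntax "cma=<size>[@<base>[-<limit>]]". The memparse() below is a simplified stand-in for the kernel helper (which also accepts lowercase suffixes); the parsing structure is an assumption, not the verbatim patch:

        #include <stdio.h>
        #include <stdlib.h>

        /* simplified memparse(): number plus optional K/M/G suffix */
        static unsigned long long memparse(const char *p, char **ret)
        {
                unsigned long long v = strtoull(p, ret, 0);
                switch (**ret) {
                case 'G': v <<= 10; /* fall through */
                case 'M': v <<= 10; /* fall through */
                case 'K': v <<= 10; (*ret)++; break;
                }
                return v;
        }

        int main(void)
        {
                char *p = "64M@0-4G";   /* as in example 1 above */
                unsigned long long size, base = 0, limit = 0;

                size = memparse(p, &p);
                if (*p == '@') {
                        base = memparse(p + 1, &p);
                        /* "-<limit>" caps the end address; otherwise the
                         * base is taken as an exact placement */
                        limit = (*p == '-') ? memparse(p + 1, &p)
                                            : base + size;
                }
                printf("size=%#llx base=%#llx limit=%#llx\n",
                       size, base, limit);
                return 0;
        }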
  23. 29 May 2014, 1 commit
  24. 22 May 2014, 1 commit