1. 22 Jan, 2015 · 1 commit
  2. 08 Jan, 2015 · 12 commits
  3. 07 Jan, 2015 · 1 commit
    • s390/timex: fix get_tod_clock_ext() inline assembly · e38f9781
      Committed by Chen Gang
      In C, an array parameter is treated as a pointer, so sizeof on an array
      parameter equals sizeof on a pointer. This causes a compiler warning
      (with allmodconfig under gcc 5):
      
        ./arch/s390/include/asm/timex.h: In function 'get_tod_clock_ext':
        ./arch/s390/include/asm/timex.h:76:32: warning: 'sizeof' on array function parameter 'clk' will return size of 'char *' [-Wsizeof-array-argument]
          typedef struct { char _[sizeof(clk)]; } addrtype;
                                        ^
      Use the macro CLOCK_STORE_SIZE instead of the related hard-coded numbers,
      which also avoids this warning. Also add a tab to the CLOCK_TICK_RATE
      definition to match the coding style.
      
      [heiko.carstens@de.ibm.com]:
      Chen's patch actually fixes a bug within the get_tod_clock_ext() inline
      assembly, where we incorrectly tell the compiler that only 8 bytes of
      memory get changed instead of 16 bytes. This would allow gcc to generate
      incorrect code; right now that does not seem to happen.
      I also changed the patch slightly:
      - renamed CLOCK_STORE_SIZE to STORE_CLOCK_EXT_SIZE
      - changed get_tod_clock_ext() to receive a char pointer parameter
      Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
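      The shape of the fixed helper follows from the warning text and Heiko's
      notes above. A minimal sketch, reconstructed from the commit description
      rather than quoted from arch/s390/include/asm/timex.h:

        /* STCKE stores a 16-byte extended TOD clock value, so the inline
         * assembly's output constraint must cover all 16 bytes. */
        #define STORE_CLOCK_EXT_SIZE 16 /* stcke writes 16 bytes */

        static inline void get_tod_clock_ext(char *clk)
        {
            /* A struct of the exact store size lets "=Q" tell gcc that the
             * full 16 bytes behind clk are written. With the old parameter
             * 'char clk[16]', sizeof(clk) was sizeof(char *) == 8, which
             * understated the clobbered memory (and triggered the gcc 5
             * warning quoted above). */
            typedef struct { char _[STORE_CLOCK_EXT_SIZE]; } addrtype;

            asm volatile("stcke %0" : "=Q" (*(addrtype *) clk) : : "cc");
        }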
  4. 18 Dec, 2014 · 4 commits
  5. 17 Dec, 2014 · 1 commit
    • microblaze: Fix mmap for cache coherent memory · 3a8e3265
      Committed by Lars-Peter Clausen
      When running in a non-cache-coherent configuration, memory that was
      allocated with dma_alloc_coherent() has a custom mapping, so there is no
      1-to-1 relationship between the kernel virtual address and the PFN. This
      means that virt_to_pfn() will not work correctly for those addresses, and
      the default mmap implementation, dma_common_mmap(), will map some
      arbitrary memory area rather than the requested one.
      
      Fix this by providing a custom mmap implementation that looks up the PFN
      from the page table rather than using virt_to_pfn().
      Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
      Signed-off-by: Michal Simek <michal.simek@xilinx.com>
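      A minimal sketch of that approach, assuming the 2014-era kernel
      page-table walking APIs; coherent_virt_to_pfn() and
      dma_mmap_coherent_fixed() are illustrative names, not necessarily the
      helpers the commit adds:

        /* Walk the kernel page tables to find the PFN actually backing a
         * remapped coherent allocation. virt_to_pfn() assumes the linear
         * virt-to-phys relationship, which does not hold here. */
        static unsigned long coherent_virt_to_pfn(void *vaddr)
        {
            unsigned long addr = (unsigned long)vaddr;
            pgd_t *pgd = pgd_offset_k(addr);
            pud_t *pud = pud_offset(pgd, addr);
            pmd_t *pmd = pmd_offset(pud, addr);
            pte_t *pte = pte_offset_kernel(pmd, addr);

            return pte_pfn(*pte);
        }

        /* mmap using the looked-up PFN instead of virt_to_pfn(cpu_addr). */
        static int dma_mmap_coherent_fixed(struct vm_area_struct *vma,
                                           void *cpu_addr)
        {
            unsigned long pfn = coherent_virt_to_pfn(cpu_addr);

            return remap_pfn_range(vma, vma->vm_start, pfn + vma->vm_pgoff,
                                   vma->vm_end - vma->vm_start,
                                   vma->vm_page_prot);
        }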
  6. 16 Dec, 2014 · 5 commits
  7. 14 Dec, 2014 · 7 commits
  8. 13 Dec, 2014 · 1 commit
  9. 12 Dec, 2014 · 6 commits
  10. 11 Dec, 2014 · 2 commits
    • arm64: mm: dump: don't skip final region · fb59d007
      Committed by Mark Rutland
      If the final page table entry we walk is a valid mapping, the page table
      dumping code will not log the region this entry is part of, as the final
      note_page call in ptdump_show will trigger an early return. Luckily this
      isn't seen on contemporary systems as they typically don't have enough
      RAM to extend the linear mapping right to the end of the address space.
      
      In note_page, we log a region when we reach its end (i.e. we hit an
      entry immediately afterwards which has different prot bits or is
      invalid). The final entry has no subsequent entry, so we will not log
      this immediately. We try to cater for this with a subsequent call to
      note_page in ptdump_show, but this returns early as 0 < LOWEST_ADDR, and
      hence we will skip a valid mapping if it spans to the final entry we
      note.
      
      Unlike 32-bit ARM, the pgd with the kernel mapping is never shared with
      user mappings, so we do not need the check to ensure we don't log user
      page tables. Due to the way addr is constructed in the walk_* functions,
      it can never be less than LOWEST_ADDR when walking the page tables, so
      it is not necessary to avoid dereferencing invalid table addresses. The
      existing checks for st->current_prot and st->marker[1].start_address are
      sufficient to ensure we will not print and/or dereference garbage when
      trying to log information.
      
      This patch removes the unnecessary check against LOWEST_ADDR, ensuring
      we log all regions in the kernel page table, including those which span
      right to the end of the address space.
      
      Cc: Kees Cook <keescook@chromium.org>
      Acked-by: Laura Abbott <lauraa@codeaurora.org>
      Acked-by: Steve Capper <steve.capper@linaro.org>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
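      A self-contained user-space model of the flush pattern described above
      (the names mirror arch/arm64/mm/dump.c, but this is an illustration of
      the logic, not the kernel code; build with -DBUGGY to reproduce the
      skipped final region):

        #include <stdio.h>

        #define LOWEST_ADDR 0xffff000000000000UL /* assumed kernel VA start */

        struct pg_state {
            unsigned long start_address;
            unsigned long current_prot; /* 0 means "no region open yet" */
        };

        static void note_page(struct pg_state *st, unsigned long addr,
                              unsigned long prot)
        {
        #ifdef BUGGY
            /* The guard removed by the patch: the sentinel flush in main()
             * passes addr == 0, so we returned here and a region reaching
             * the final entry was never printed. */
            if (addr < LOWEST_ADDR)
                return;
        #endif
            if (st->current_prot && prot != st->current_prot)
                /* Region ended; addr - start wraps correctly even for a
                 * region spanning to the very end of the address space. */
                printf("0x%016lx (+0x%lx) prot=%lx\n", st->start_address,
                       addr - st->start_address, st->current_prot);
            if (prot != st->current_prot) {
                st->start_address = addr;
                st->current_prot = prot;
            }
        }

        int main(void)
        {
            struct pg_state st = { 0, 0 };

            /* Two pages with identical prot bits at the top of VA space. */
            note_page(&st, LOWEST_ADDR, 3);
            note_page(&st, LOWEST_ADDR + 0x1000, 3);
            /* Final flush, as ptdump_show does with note_page(&st, 0, 0, 0). */
            note_page(&st, 0, 0);
            return 0;
        }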
    • arm64: mm: dump: fix shift warning · 35545f0c
      Committed by Mark Rutland
      When building with 48-bit VAs, it's possible to get the following
      warning when building the arm64 page table dumping code:
      
      arch/arm64/mm/dump.c: In function ‘walk_pgd’:
      arch/arm64/mm/dump.c:266:2: warning: right shift count >= width of type
        pgd_t *pgd = pgd_offset(mm, 0);
        ^
      
      As pgd_offset is a macro and its second argument is not cast to any
      particular type, the compiler gives the zero the type int. pgd_offset
      then passes the argument to pgd_index, which shifts the 32-bit integer
      right by at least 39 bits (for 4k pages).
      
      Elsewhere pgd_offset is passed a second argument of unsigned long type,
      so let's do the same here by passing '0UL' rather than '0'.
      
      Cc: Kees Cook <keescook@chromium.org>
      Acked-by: Laura Abbott <lauraa@codeaurora.org>
      Acked-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
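      A minimal user-space demonstration of why the literal's type matters
      (the macro body is a simplification, not the arm64 definition): with 4K
      pages and 48-bit VAs, PGDIR_SHIFT is 39, and the macro shifts its
      argument without casting it:

        #include <stdio.h>

        #define PGDIR_SHIFT 39
        #define pgd_index(addr) ((addr) >> PGDIR_SHIFT) /* no cast of addr */

        int main(void)
        {
            /* pgd_index(0): the literal 0 has type int (32 bits); shifting
             * it right by 39 is undefined, and gcc warns
             * "right shift count >= width of type". */
            /* printf("%d\n", pgd_index(0)); */

            /* pgd_index(0UL): unsigned long is 64 bits on arm64 (LP64),
             * so the shift is well defined. */
            printf("%lu\n", pgd_index(0UL));
            return 0;
        }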