1. 25 Sep 2013, 1 commit
  2. 29 Aug 2013, 1 commit
    • s390/mm: implement software referenced bits · 0944fe3f
      Authored by Martin Schwidefsky
      The last remaining use for the storage key of the s390 architecture
      is reference counting. The alternative is to make page table entries
      invalid while they are old. On access the fault handler marks the
      pte/pmd as young which makes the pte/pmd valid if the access rights
      allow read access. The pte/pmd invalidations required for software
      managed reference bits cost a bit of performance, on the other hand
      the RRBE/RRBM instructions to read and reset the referenced bits are
      quite expensive as well.
      Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
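      A minimal sketch of the idea (bit names follow the s390 pte layout
      only loosely and are used for illustration): aging a pte clears the
      young bit and makes the entry invalid, so the next access faults and
      the handler can revalidate it as young.

      	static inline pte_t pte_mkold(pte_t pte)
      	{
      		/* illustrative sketch: an old pte is made invalid so the
      		 * next access traps; the fault handler marks it young and
      		 * valid again if the access rights allow read access. */
      		pte_val(pte) &= ~_PAGE_YOUNG;
      		pte_val(pte) |= _PAGE_INVALID;
      		return pte;
      	}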
  3. 27 Aug 2013, 1 commit
  4. 16 Aug 2013, 1 commit
    • Fix TLB gather virtual address range invalidation corner cases · 2b047252
      Authored by Linus Torvalds
      Ben Tebulin reported:
      
       "Since v3.7.2 on two independent machines a very specific Git
        repository fails in 9/10 cases on git-fsck due to SHA1/memory
        failures.  This only occurs on a very specific repository and can be
        reproduced stably on two independent laptops.  Git mailing list ran
        out of ideas and for me this looks like some very exotic kernel issue"
      
      and bisected the failure to the backport of commit 53a59fc6 ("mm:
      limit mmu_gather batching to fix soft lockups on !CONFIG_PREEMPT").
      
      That commit itself is not actually buggy, but what it does is to make it
      much more likely to hit the partial TLB invalidation case, since it
      introduces a new case in tlb_next_batch() that previously only ever
      happened when running out of memory.
      
      The real bug is that the TLB gather virtual memory range setup is subtly
      buggered.  It was introduced in commit 597e1c35 ("mm/mmu_gather:
      enable tlb flush range in generic mmu_gather"), and the range handling
      was already fixed at least once in commit e6c495a9 ("mm: fix the TLB
      range flushed when __tlb_remove_page() runs out of slots"), but that fix
      was not complete.
      
      The problem with the TLB gather virtual address range is that it isn't
      set up by the initial tlb_gather_mmu() initialization (which didn't get
      the TLB range information), but it is set up ad-hoc later by the
      functions that actually flush the TLB.  And so any such case that forgot
      to update the TLB range entries would potentially miss TLB invalidates.
      
      Rather than try to figure out exactly which particular ad-hoc range
      setup was missing (I personally suspect it's the hugetlb case in
      zap_huge_pmd(), which didn't have the same logic as zap_pte_range()
      did), this patch just gets rid of the problem at the source: make the
      TLB range information available to tlb_gather_mmu(), and initialize it
      when initializing all the other tlb gather fields.
      
      This makes the patch larger, but conceptually much simpler.  And the end
      result is much more understandable; even if you want to play games with
      partial ranges when invalidating the TLB contents in chunks, now the
      range information is always there, and anybody who doesn't want to
      bother with it won't introduce subtle bugs.
      
      Ben verified that this fixes his problem.
      Reported-bisected-and-tested-by: Ben Tebulin <tebulin@googlemail.com>
      Build-testing-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Build-testing-by: Richard Weinberger <richard.weinberger@gmail.com>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: stable@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
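      The shape of the fix, sketched from the description above: the range
      is passed to tlb_gather_mmu() and initialized together with the other
      fields, instead of being patched in ad hoc by the flush paths.

      	void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
      			    unsigned long start, unsigned long end)
      	{
      		tlb->mm = mm;
      		/* the (0, -1) range is shorthand for "the whole mm" */
      		tlb->fullmm = !(start | (end + 1));
      		tlb->start = start;	/* always valid from here on */
      		tlb->end = end;
      		/* ... remaining batching fields set up as before ... */
      	}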
  5. 14 Aug 2013, 3 commits
  6. 27 Jul 2013, 1 commit
    • tracing: Add __tracepoint_string() to export string pointers · 102c9323
      Authored by Steven Rostedt (Red Hat)
      There are several tracepoints (mostly in RCU) that reference a string
      pointer and use the "%s" print format to display a string that exists
      in the kernel, instead of copying the actual string into the ring
      buffer (which saves time and ring buffer space).
      
      But this is a problem for userspace tools that read the binary
      buffers: they have the address of the string but no access to the
      string itself. The end result is output that looks like:
      
       rcu_dyntick:          ffffffff818adeaa 1 0
       rcu_dyntick:          ffffffff818adeb5 0 140000000000000
       rcu_dyntick:          ffffffff818adeb5 0 140000000000000
       rcu_utilization:      ffffffff8184333b
       rcu_utilization:      ffffffff8184333b
      
      The above is pretty useless when read by the userspace tools. Ideally
      we would want something that looks like this:
      
       rcu_dyntick:          Start 1 0
       rcu_dyntick:          End 0 140000000000000
       rcu_dyntick:          Start 140000000000000 0
       rcu_callback:         rcu_preempt rhp=0xffff880037aff710 func=put_cred_rcu 0/4
       rcu_callback:         rcu_preempt rhp=0xffff880078961980 func=file_free_rcu 0/5
       rcu_dyntick:          End 0 1
      
      trace_printk(), which likewise stores only the address of the format
      string instead of recording the string into the buffer itself,
      exports the mapping of kernel addresses to format strings via the
      printk_format file in the debugfs tracing directory.
      
      The tracepoint strings can use this same method and output the format
      to the same file, and the userspace tools will be able to decipher
      the address without any modification.
      
      The tracepoint strings need their own section to hold them, because
      placing anything in the trace_printk section causes the
      trace_printk() buffers to be allocated. As trace_printk() is meant
      only for debugging and should never be left in production kernel
      code, we cannot reuse the trace_printk sections.
      
      Add a new tracepoint_str section that will also be examined by the output
      of the printk_format file.
      
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
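      A sketch of the mechanism (macro body and usage assumed for
      illustration): the string only has to land in the new section; the
      tracepoint then records the bare pointer, and userspace resolves it
      through printk_format.

      	/* place the literal in the __tracepoint_str section */
      	#define __tracepoint_string \
      		__used __attribute__((section("__tracepoint_str")))

      	static const char start_str[] __tracepoint_string = "Start";

      	/* the event stores only the pointer; tools decode it offline */
      	trace_rcu_dyntick(start_str, oldval, newval);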
  7. 04 Jul 2013, 3 commits
    • rapidio: convert switch drivers to modules · 2ec3ba69
      Authored by Alexandre Bounine
      Rework RapidIO switch drivers to add an option to build them as loadable
      kernel modules.
      
      This patch removes the RapidIO-specific vmlinux section and converts
      switch drivers to be compatible with the LDM driver registration
      method.  To simplify registration of device-specific callback
      routines, this patch introduces the rio_switch_ops data structure.
      The sw_sysfs() callback is removed from the list of device-specific
      operations because, under the new structure, its functions can be
      handled by a switch driver's probe() and remove() routines.
      
      If a specific switch device driver is not loaded, the RapidIO
      subsystem core will use default, standards-based operations to
      configure the switch.  Because the current implementation of the
      RapidIO enumeration/discovery method relies on the availability of
      device-specific operations for error management, switch device
      drivers must be loaded before RapidIO enumeration/discovery starts.
      
      This patch also moves several common routines from the
      enumeration/discovery module into the RapidIO core code to make
      switch-specific operations accessible to all components of the
      RapidIO subsystem.
      Signed-off-by: Alexandre Bounine <alexandre.bounine@idt.com>
      Cc: Matt Porter <mporter@kernel.crashing.org>
      Cc: Li Yang <leoli@freescale.com>
      Cc: Kumar Gala <galak@kernel.crashing.org>
      Cc: Andre van Herk <andre.van.herk@Prodrive.nl>
      Cc: Micha Nelissen <micha.nelissen@Prodrive.nl>
      Cc: Stef van Os <stef.van.os@Prodrive.nl>
      Cc: Jean Delvare <jdelvare@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • vmlinux.lds: add comments for global variables and clean up useless declarations · 1622d1ab
      Authored by Jiang Liu
      The original goal of this patchset is to fix the bug reported at
      https://bugzilla.kernel.org/show_bug.cgi?id=53501.  It has since been
      expanded to also reduce common code used by memory initialization.
      
      Patch 1-7:
      	1) add comments for global variables exported by vmlinux.lds
      	2) normalize global variables exported by vmlinux.lds
      Patch 8:
      	Introduce helper functions mem_init_print_info() and
      	get_num_physpages()
      Patch 9:
      	Avoid using global variable num_physpages at runtime
      Patch 10:
      	Don't update num_physpages in memory_hotplug.c
      Patch 11-40:
      	Modify arch mm initialization code to:
      	1) Simplify mem_init() by using mem_init_print_info()
      	2) Prepare for killing global variable num_physpages
      Patch 41:
      	Kill the global variable num_physpages
      
      With all patches applied, mem_init(), free_initmem(), free_initrd_mem()
      could be as simple as below.  This patch series has reduced about 1.2K
      lines of code in total.
      
      #ifndef CONFIG_DISCONTIGMEM
      void __init
      mem_init(void)
      {
      	max_mapnr = max_low_pfn;
      	free_all_bootmem();
      	high_memory = (void *) __va(max_low_pfn * PAGE_SIZE);
      
      	mem_init_print_info(NULL);
      }
      #endif /* CONFIG_DISCONTIGMEM */
      
      void
      free_initmem(void)
      {
      	free_initmem_default(-1);
      }
      
      #ifdef CONFIG_BLK_DEV_INITRD
      void
      free_initrd_mem(unsigned long start, unsigned long end)
      {
      	free_reserved_area(start, end, -1, "initrd");
      }
      #endif
      
      Due to hardware resource limitations, I have only tested this on x86_64.
      And the messages reported on an x86_64 system are:
      
      Log message before applying patches:
      Memory: 7745676k/8910848k available (6934k kernel code, 836024k absent, 329148k reserved, 6343k data, 1012k init)
      
      Log message after applying patches:
      Memory: 7744624K/8074824K available (6969K kernel code, 1011K data, 2828K rodata, 1016K init, 9640K bss, 330200K reserved)
      
      Great thanks to Vineet Gupta for testing on ARC.
      
      This patch:
      
      Document global variables exported from vmlinux.lds.
      
      1) Add comments about usage guidelines for global variables exported
         from vmlinux.lds.S.
      2) Remove unused __initdata_begin[] and __initdata_end[].
      Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: soft-dirty bits for user memory changes tracking · 0f8975ec
      Authored by Pavel Emelyanov
      Soft-dirty is a bit in a PTE that helps to track which pages a task
      writes to.  In order to do this tracking one should
      
        1. Clear the soft-dirty bits from the PTEs ("echo 4 > /proc/PID/clear_refs")
        2. Wait some time.
        3. Read the soft-dirty bits (bit 55 in /proc/PID/pagemap2 entries)
      
      To do this tracking, the writable bit is cleared from PTEs when the
      soft-dirty bit is.  Thus, after this, when the task tries to modify a
      page at some virtual address the #PF occurs and the kernel sets the
      soft-dirty bit on the respective PTE.
      
      Note that although all of the task's address space is marked as r/o
      after the soft-dirty bits are cleared, the #PF-s that occur after
      that are processed fast.  This is because the pages are still mapped
      to physical memory, so all the kernel does is note this fact and put
      the writable, dirty and soft-dirty bits back on the PTE.
      
      Another thing to note is that when mremap moves PTEs, they are marked
      soft-dirty as well, since from the user's perspective mremap modifies
      the virtual memory at mremap's new address.
      Signed-off-by: Pavel Emelyanov <xemul@parallels.com>
      Cc: Matt Mackall <mpm@selenic.com>
      Cc: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Cc: Glauber Costa <glommer@parallels.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@gmail.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
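      The three steps map onto a small monitoring helper; a sketch,
      assuming a 4 KiB page size and the bit-55 layout described above:

      	#include <stdint.h>
      	#include <unistd.h>

      	/* step 1: clear soft-dirty bits via /proc/PID/clear_refs */
      	static void clear_soft_dirty(int clear_refs_fd)
      	{
      		write(clear_refs_fd, "4", 1);
      	}

      	/* step 3: test bit 55 of the page's pagemap2 entry */
      	static int page_was_written(int pagemap_fd, unsigned long vaddr)
      	{
      		uint64_t ent;

      		pread(pagemap_fd, &ent, sizeof(ent),
      		      (vaddr / 4096) * sizeof(ent));
      		return (ent >> 55) & 1;
      	}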
  8. 29 Jun 2013, 1 commit
  9. 27 Jun 2013, 1 commit
  10. 26 Jun 2013, 1 commit
  11. 20 Jun 2013, 1 commit
  12. 06 Jun 2013, 1 commit
    • arch, mm: Remove tlb_fast_mode() · 29eb7782
      Authored by Peter Zijlstra
      Since the introduction of the preemptible mmu_gather, TLB fast mode
      has been broken.  TLB fast mode relies on there being absolutely no
      concurrency: it frees pages first and invalidates TLBs later.
      
      However, now we can get concurrency, and stuff goes *bang*.
      
      This patch removes all tlb_fast_mode() code; it was found to be the
      better option versus trying to patch the hole by entangling TLB
      invalidation with the scheduler.
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Tony Luck <tony.luck@intel.com>
      Reported-by: Max Filippov <jcmvbkbc@gmail.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  13. 05 Jun 2013, 1 commit
  14. 04 Jun 2013, 1 commit
  15. 28 May 2013, 1 commit
  16. 22 May 2013, 1 commit
    • kernel: Fix s390 absolute memory access for /dev/mem · 576ebd74
      Authored by Michael Holzheu
      On s390 the prefix page and absolute zero pages are not correctly
      returned when reading /dev/mem.  The reason is that the s390 asm/io.h
      file includes the asm-generic/io.h file, which then defines
      xlate_dev_mem_ptr() and therefore overrides the s390-specific version
      that does the correct swap operation for prefix and absolute zero
      pages.  The problem is a regression that was introduced with git
      commit cd248341 ("s390/pci: base support").
      
      To fix the problem add "#ifndef xlate_dev_mem_ptr" in asm-generic/io.h
      and "#define xlate_dev_mem_ptr" in asm/io.h. This ensures that the
      s390 version is used. For completeness also add the "#ifndef"
      construct for xlate_dev_kmem_ptr().
      Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
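      The fix pattern, sketched: the architecture marks its override by
      defining the symbol to itself, and the generic header only supplies a
      default when no such override exists.

      	/* arch/s390/include/asm/io.h: flag the s390 override */
      	#define xlate_dev_mem_ptr xlate_dev_mem_ptr
      	void *xlate_dev_mem_ptr(unsigned long addr);

      	/* include/asm-generic/io.h: generic fallback only */
      	#ifndef xlate_dev_mem_ptr
      	#define xlate_dev_mem_ptr(p)	__va(p)
      	#endif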
  17. 30 Apr 2013, 2 commits
    • mm: allow arch code to control the user page table ceiling · 6ee8630e
      Authored by Hugh Dickins
      On architectures where a pgd entry may be shared between user and kernel
      (e.g.  ARM+LPAE), freeing page tables needs a ceiling other than 0.
      This patch introduces a generic USER_PGTABLES_CEILING that arch code can
      override.  It is the responsibility of the arch code setting the ceiling
      to ensure the complete freeing of the page tables (usually in
      pgd_free()).
      
      [catalin.marinas@arm.com: commit log; shift_arg_pages(), asm-generic/pgtables.h changes]
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: <stable@vger.kernel.org>	[3.3+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
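      A sketch of the generic hook: the default ceiling stays 0, and an
      architecture that shares pgd entries overrides it; callers that free
      page tables then pass it as the upper bound, e.g. in exec's stack
      move (shift_arg_pages()):

      	/* include/asm-generic/pgtable.h: overridable default */
      	#ifndef USER_PGTABLES_CEILING
      	#define USER_PGTABLES_CEILING	0UL
      	#endif

      	/* caller side, sketched */
      	free_pgd_range(&tlb, old_start, old_end, new_end,
      		       vma->vm_next ? vma->vm_next->vm_start
      				    : USER_PGTABLES_CEILING);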
    • mm/hugetlb: add more arch-defined huge_pte functions · 106c992a
      Authored by Gerald Schaefer
      Commit abf09bed ("s390/mm: implement software dirty bits")
      introduced another difference in the pte layout vs. the pmd layout on
      s390, thoroughly breaking the s390 support for hugetlbfs.  This
      requires replacing some more pte_xxx functions in mm/hugetlb.c with
      huge_pte_xxx versions.
      
      This patch introduces those huge_pte_xxx functions and their generic
      implementation in asm-generic/hugetlb.h, which will now be included on
      all architectures supporting hugetlbfs apart from s390.  This change
      will be a no-op for those architectures.
      
      [akpm@linux-foundation.org: fix warning]
      Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Hillf Danton <dhillf@gmail.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>	[for !s390 parts]
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
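      On every architecture except s390 the new helpers simply forward to
      the existing pte functions, which is why the change is a no-op there;
      a representative pair from the generic header:

      	/* include/asm-generic/hugetlb.h */
      	static inline pte_t huge_pte_wrprotect(pte_t pte)
      	{
      		return pte_wrprotect(pte);
      	}

      	static inline pte_t huge_pte_mkdirty(pte_t pte)
      	{
      		return pte_mkdirty(pte);
      	}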
  18. 27 Apr 2013, 1 commit
    • cputime_nsecs: use math64.h for nsec resolution conversion helpers · 8c23b80e
      Authored by Kevin Hilman
      For the nsec resolution conversions to be usable on non-64-bit
      architectures, the helpers in <linux/math64.h> need to be used, so
      that the right arch-specific 64-bit math helpers (e.g. do_div()) are
      picked up.
      Signed-off-by: Kevin Hilman <khilman@linaro.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Hakan Akkan <hakanakkan@gmail.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Kevin Hilman <khilman@linaro.org>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
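      The point in one line: an open-coded 64-bit division pulls in a
      libgcc helper (__udivdi3) on 32-bit targets, while the math64.h
      helper resolves to the right arch primitive. A sketch of the
      conversion for one helper:

      	#include <linux/math64.h>

      	/* before: breaks 32-bit builds (emits a libgcc 64-bit divide) */
      	/* #define cputime_to_usecs(__ct)	((__ct) / NSEC_PER_USEC) */

      	/* after: arch-appropriate 64-by-32 division */
      	#define cputime_to_usecs(__ct)	div_u64(__ct, NSEC_PER_USEC)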
  19. 25 Apr 2013, 1 commit
  20. 13 Apr 2013, 1 commit
    • x86-32: Fix possible incomplete TLB invalidate with PAE pagetables · 1de14c3c
      Authored by Dave Hansen
      This patch attempts to fix:
      
      	https://bugzilla.kernel.org/show_bug.cgi?id=56461
      
      The symptom is a crash and messages like this:
      
      	chrome: Corrupted page table at address 34a03000
      	*pdpt = 0000000000000000 *pde = 0000000000000000
      	Bad pagetable: 000f [#1] PREEMPT SMP
      
      Ingo guesses this got introduced by commit 611ae8e3 ("x86/tlb:
      enable tlb flush range support for x86") since that code started to free
      unused pagetables.
      
      On x86-32 PAE kernels, that new code has the potential to free an
      entire PMD page and will clear one of the four
      page-directory-pointer-table (aka pgd_t) entries.
      
      The hardware aggressively "caches" these top-level entries and invlpg
      does not actually affect the CPU's copy.  If we clear one we *HAVE* to
      do a full TLB flush, otherwise we might continue using a freed pmd page.
      (note, we do this properly on the population side in pud_populate()).
      
      This patch tracks whenever we clear one of these entries in the 'struct
      mmu_gather', and ensures that we follow up with a full tlb flush.
      
      BTW, I disassembled and checked that:
      
      	if (tlb->fullmm == 0)
      and
      	if (!tlb->fullmm && !tlb->need_flush_all)
      
      generate essentially the same code, so there should be zero impact there
      to the !PAE case.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Peter Anvin <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Artem S Tashkinov <t.artem@mailcity.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
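      The shape of the fix, sketched: remember in the mmu_gather that a
      top-level entry was cleared, and escalate to a full flush when the
      gather is finished.

      	/* when freeing a pmd page on PAE clears a pgd_t entry: */
      	tlb->need_flush_all = 1;

      	/* in the x86 tlb_flush() path, sketched: */
      	if (!tlb->fullmm && !tlb->need_flush_all)
      		flush_tlb_mm_range(tlb->mm, tlb->start, tlb->end, 0UL);
      	else
      		flush_tlb_mm(tlb->mm);	/* full flush; invlpg not enough */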
  21. 15 Mar 2013, 1 commit
    • CONFIG_SYMBOL_PREFIX: cleanup. · b92021b0
      Authored by Rusty Russell
      We have CONFIG_SYMBOL_PREFIX, which three archs define to the string
      "_".  But Al Viro broke this in "consolidate cond_syscall and
      SYSCALL_ALIAS declarations" (in linux-next), and he's not the first to
      do so.
      
      Using CONFIG_SYMBOL_PREFIX is awkward, since we usually just want to
      prefix something with it.  So various places define helpers which are
      defined to nothing if CONFIG_SYMBOL_PREFIX isn't set:
      
      1) include/asm-generic/unistd.h defines __SYMBOL_PREFIX.
      2) include/asm-generic/vmlinux.lds.h defines VMLINUX_SYMBOL(sym)
      3) include/linux/export.h defines MODULE_SYMBOL_PREFIX.
      4) include/linux/kernel.h defines SYMBOL_PREFIX (which differs from #7)
      5) kernel/modsign_certificate.S defines ASM_SYMBOL(sym)
      6) scripts/modpost.c defines MODULE_SYMBOL_PREFIX
      7) scripts/Makefile.lib defines SYMBOL_PREFIX on the commandline if
         CONFIG_SYMBOL_PREFIX is set, so that we have a non-string version
         for pasting.
      
      (arch/h8300/include/asm/linkage.h defines SYMBOL_NAME(), too).
      
      Let's solve this properly:
      1) No more generic prefix, just CONFIG_HAVE_UNDERSCORE_SYMBOL_PREFIX.
      2) Make linux/export.h usable from asm.
      3) Define VMLINUX_SYMBOL() and VMLINUX_SYMBOL_STR().
      4) Make everyone use them.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Reviewed-by: James Hogan <james.hogan@imgtec.com>
      Tested-by: James Hogan <james.hogan@imgtec.com> (metag)
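      Item 3 above lands in include/linux/export.h, roughly:

      	#ifdef CONFIG_HAVE_UNDERSCORE_SYMBOL_PREFIX
      	#define __VMLINUX_SYMBOL(x)	_##x
      	#define __VMLINUX_SYMBOL_STR(x)	"_" #x
      	#else
      	#define __VMLINUX_SYMBOL(x)	x
      	#define __VMLINUX_SYMBOL_STR(x)	#x
      	#endif

      	/* indirection so the argument gets macro-expanded first */
      	#define VMLINUX_SYMBOL(x)	__VMLINUX_SYMBOL(x)
      	#define VMLINUX_SYMBOL_STR(x)	__VMLINUX_SYMBOL_STR(x)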
  22. 13 Mar 2013, 1 commit
  23. 04 Mar 2013, 1 commit
  24. 03 Mar 2013, 2 commits
  25. 24 Feb 2013, 1 commit
  26. 19 Feb 2013, 1 commit
  27. 14 Feb 2013, 3 commits
    • s390/mm: implement software dirty bits · abf09bed
      Authored by Martin Schwidefsky
      The s390 architecture is unique with respect to dirty page detection:
      it uses the change bit in the per-page storage key to track page
      modifications.  All other architectures track dirty bits by means of
      page table entries.  This property of s390 has caused numerous
      problems in the past, e.g. see git commit ef5d437f ("mm: fix XFS
      oops due to dirty pages without buffers on s390").
      
      To avoid future issues with per-page dirty bits, convert s390 to a
      fault-based software dirty bit detection mechanism.  All user page
      table entries which are marked as clean will be hardware read-only,
      even if the pte is supposed to be writable.  A write by the user
      process will trigger a protection fault, which will cause the user
      pte to be marked as dirty and the hardware read-only bit to be
      removed.
      
      With this change the dirty bit in the storage key is irrelevant
      for Linux as a host, but the storage key is still required for
      KVM guests. The effect is that page_test_and_clear_dirty and the
      related code can be removed. The referenced bit in the storage
      key is still used by the page_test_and_clear_young primitive to
      provide page age information.
      
      For page cache pages of mappings with mapping_cap_account_dirty
      there will not be any change in behavior as the dirty bit tracking
      already uses read-only ptes to control the amount of dirty pages.
      Only for swap cache pages and pages of mappings without
      mapping_cap_account_dirty there can be additional protection faults.
      To avoid an excessive number of additional faults the mk_pte
      primitive checks for PageDirty if the pgprot value allows for writes
      and pre-dirties the pte. That avoids all additional faults for
      tmpfs and shmem pages until these pages are added to the swap cache.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
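      The gist in pte terms (bit names here are illustrative, not the exact
      s390 layout): a clean pte is kept hardware write-protected even when
      the mapping is logically writable, and the protection fault path does
      the inverse.

      	static inline pte_t pte_mkclean(pte_t pte)
      	{
      		/* illustrative: clear the software dirty bit and set
      		 * hardware write protection, so the next write faults
      		 * and re-dirties the pte. */
      		pte_val(pte) &= ~_PAGE_SW_DIRTY;
      		pte_val(pte) |= _PAGE_HW_PROTECT;
      		return pte;
      	}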
    • asm-generic/io.h: convert readX defines to functions · 7292e7e0
      Authored by Heiko Carstens
      E.g. readl is defined like this:
      
       #define readl(addr) __le32_to_cpu(__raw_readl(addr))
      
      If there is a readl() call that doesn't use the return value, this
      will cause a compile warning on big endian machines due to the
      __le32_to_cpu macro magic.
      
      E.g. code like this:
      
      	readl(addr);
      
      will generate the following compile warning:
      
      warning: value computed is not used [-Wunused-value]
      
      With this patch we get rid of dozens of compile warnings on s390.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
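      The conversion itself is mechanical; a static inline swallows an
      unused result without complaint:

      	/* before: a bare macro, so an ignored result trips
      	 * -Wunused-value through the __le32_to_cpu expansion */
      	#define readl(addr) __le32_to_cpu(__raw_readl(addr))

      	/* after: */
      	static inline u32 readl(const volatile void __iomem *addr)
      	{
      		return __le32_to_cpu(__raw_readl(addr));
      	}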
    • burying unused conditionals · d64008a8
      Authored by Al Viro
      __ARCH_WANT_SYS_RT_SIGACTION,
      __ARCH_WANT_SYS_RT_SIGSUSPEND,
      __ARCH_WANT_COMPAT_SYS_RT_SIGSUSPEND and
      __ARCH_WANT_COMPAT_SYS_SCHED_RR_GET_INTERVAL are not used anymore.
      CONFIG_GENERIC_{SIGALTSTACK,COMPAT_RT_SIG{ACTION,QUEUEINFO,PENDING,PROCMASK}}
      can be assumed always set.
  28. 12 Feb 2013, 2 commits
  29. 11 Feb 2013, 3 commits