1. 26 April 2013, 1 commit
  2. 25 April 2013, 1 commit
  3. 23 April 2013, 1 commit
    • Revert "MIPS: page.h: Provide more readable definition for PAGE_MASK." · 3b5e50ed
      Ralf Baechle committed
      This reverts commit c17a6554.
      
      Manuel Lauss writes:
      
      lmo commit c17a6554 (MIPS: page.h: Provide more readable definition for
      PAGE_MASK) apparently breaks ioremap of 36-bit addresses on my Alchemy
      systems (PCI and PCMCIA). The reason is that in arch/mips/mm/ioremap.c
      line 157 (phys_addr &= PAGE_MASK) bits 32-35 are cut off. It seems the
      new PAGE_MASK is explicitly 32-bit; one could instead make it signed
      rather than unsigned long.
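
      The truncation is easy to reproduce in a standalone sketch (PAGE_SHIFT, the
      two masks and the sample address below are illustrative, not the MIPS
      headers): a 32-bit unsigned mask zero-extends when applied to a 64-bit
      physical address and drops bits 32-35, while a plain signed mask
      sign-extends and keeps them.

      	#include <stdint.h>
      	#include <stdio.h>

      	#define PAGE_SHIFT 12
      	#define PAGE_MASK_SIGNED (~((1 << PAGE_SHIFT) - 1))            /* int: sign-extends */
      	#define PAGE_MASK_U32    ((uint32_t)~((1u << PAGE_SHIFT) - 1)) /* models a 32-bit unsigned long */

      	int main(void)
      	{
      		uint64_t phys_addr = 0xf00012345ULL;	/* 36-bit address: bits 32-35 set */

      		/* sign-extended to 0xfffffffffffff000: the high bits survive */
      		printf("signed mask: %#llx\n",
      		       (unsigned long long)(phys_addr & PAGE_MASK_SIGNED));

      		/* zero-extended to 0x00000000fffff000: bits 32-35 are cut off */
      		printf("32-bit mask: %#llx\n",
      		       (unsigned long long)(phys_addr & PAGE_MASK_U32));
      		return 0;
      	}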
  4. 20 April 2013, 3 commits
    • x86, microcode: Verify the family before dispatching microcode patching · 74c3e3fc
      H. Peter Anvin committed
      For each CPU vendor that implements CPU microcode patching, there will
      be a minimum family for which this is implemented.  Verify this
      minimum level of support.
      
      This can be done in the dispatch function or early in the application
      functions.  Doing the latter turned out to be somewhat awkward because
      of the inevitable split between the BSP and the AP paths, so rather
      than pushing the check deep into the application functions, do it in
      the dispatch function.
      Reported-by: "Bryan O'Donoghue" <bryan.odonoghue.lkml@nexus-software.ie>
      Suggested-by: Borislav Petkov <bp@alien8.de>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Link: http://lkml.kernel.org/r/1366392183-4149-1-git-send-email-bryan.odonoghue.lkml@nexus-software.ie
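
      A standalone model of the shape being described, with the family gate in a
      single dispatch function instead of duplicated across the BSP and AP
      application paths (the vendor constants, family cut-offs and helper names
      are illustrative stand-ins, not the arch/x86 sources):

      	#include <stdio.h>

      	enum vendor { VENDOR_INTEL, VENDOR_AMD, VENDOR_OTHER };

      	static int apply_intel_patch(void) { puts("intel path"); return 0; }
      	static int apply_amd_patch(void)   { puts("amd path");   return 0; }

      	/* one place that knows each vendor's minimum supported family */
      	static int dispatch_microcode(enum vendor v, unsigned int family)
      	{
      		switch (v) {
      		case VENDOR_INTEL:
      			if (family < 6)
      				return -1;	/* too old: no loader support */
      			return apply_intel_patch();
      		case VENDOR_AMD:
      			if (family < 0x10)
      				return -1;
      			return apply_amd_patch();
      		default:
      			return -1;
      		}
      	}

      	int main(void)
      	{
      		printf("%d\n", dispatch_microcode(VENDOR_INTEL, 5));	/* -1: rejected in dispatch */
      		printf("%d\n", dispatch_microcode(VENDOR_AMD, 0x15));	/* runs the AMD path, prints 0 */
      		return 0;
      	}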
    • sparc64: Fix race in TLB batch processing. · f36391d2
      David S. Miller committed
      As reported by Dave Kleikamp, when we emit cross calls to do batched
      TLB flush processing we have a race because we do not synchronize on
      the sibling cpus completing the cross call.
      
      So meanwhile the TLB batch can be reset (tb->tlb_nr set to zero, etc.)
      and either flushes are missed or flushes will flush the wrong
      addresses.
      
      Fix this by using generic infrastructure to synchronize on the
      completion of the cross call.
      
      This first required getting the flush_tlb_pending() call out from
      switch_to() which operates with locks held and interrupts disabled.
      The problem is that smp_call_function_many() cannot be invoked with
      IRQs disabled and this is explicitly checked for with WARN_ON_ONCE().
      
      We get the batch processing outside of locked IRQ disabled sections by
      using some ideas from the powerpc port. Namely, we only batch inside
      of arch_{enter,leave}_lazy_mmu_mode() calls.  If we're not in such a
      region, we flush TLBs synchronously.
      
      1) Get rid of xcall_flush_tlb_pending and per-cpu type
         implementations.
      
      2) Do TLB batch cross calls instead via:
      
      	smp_call_function_many()
      		tlb_pending_func()
      			__flush_tlb_pending()
      
      3) Batch only in lazy mmu sequences:
      
      	a) Add 'active' member to struct tlb_batch
      	b) Define __HAVE_ARCH_ENTER_LAZY_MMU_MODE
      	c) Set 'active' in arch_enter_lazy_mmu_mode()
      	d) Run batch and clear 'active' in arch_leave_lazy_mmu_mode()
      	e) Check 'active' in tlb_batch_add_one() and do a synchronous
                 flush if it's clear.
      
      4) Add infrastructure for synchronous TLB page flushes.
      
      	a) Implement __flush_tlb_page and per-cpu variants, patch
      	   as needed.
      	b) Likewise for xcall_flush_tlb_page.
      	c) Implement smp_flush_tlb_page() to invoke the cross-call.
      	d) Wire up global_flush_tlb_page() to the right routine based
                 upon CONFIG_SMP
      
      5) It turns out that singleton batches are very common, 2 out of every
         3 batch flushes have only a single entry in them.
      
         The batch flush waiting is very expensive, both because of the poll
         on sibling cpu completion, as well as because passing the tlb batch
         pointer to the sibling cpus involves a shared memory dereference.

         Therefore, in flush_tlb_pending(), if there is only one entry in
         the batch, perform a completely asynchronous global_flush_tlb_page()
         instead.
      Reported-by: Dave Kleikamp <dave.kleikamp@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Dave Kleikamp <dave.kleikamp@oracle.com>
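
      A compact standalone model of the batching policy from steps 3) and 5)
      above (struct layout, flush helpers and the batch size are stand-ins, not
      the sparc64 sources): entries are only queued between lazy-MMU enter/leave,
      anything added outside that window is flushed synchronously, and a
      single-entry batch is flushed as one page rather than as a batch.

      	#include <stdbool.h>
      	#include <stdio.h>

      	#define TLB_BATCH_NR 8

      	struct tlb_batch {
      		bool active;			/* set between lazy-MMU enter/leave */
      		unsigned int tlb_nr;
      		unsigned long vaddrs[TLB_BATCH_NR];
      	};

      	static void flush_one_page(unsigned long va) { printf("flush page %#lx\n", va); }

      	static void flush_tlb_pending(struct tlb_batch *tb)
      	{
      		if (tb->tlb_nr == 1) {
      			/* singleton batch: cheaper fire-and-forget single-page flush */
      			flush_one_page(tb->vaddrs[0]);
      		} else {
      			/* real code: cross call with the batch, then wait for siblings */
      			for (unsigned int i = 0; i < tb->tlb_nr; i++)
      				flush_one_page(tb->vaddrs[i]);
      		}
      		tb->tlb_nr = 0;
      	}

      	static void arch_enter_lazy_mmu_mode(struct tlb_batch *tb) { tb->active = true; }

      	static void arch_leave_lazy_mmu_mode(struct tlb_batch *tb)
      	{
      		if (tb->tlb_nr)
      			flush_tlb_pending(tb);
      		tb->active = false;
      	}

      	static void tlb_batch_add_one(struct tlb_batch *tb, unsigned long vaddr)
      	{
      		if (!tb->active) {		/* outside a lazy-MMU region: no batching */
      			flush_one_page(vaddr);
      			return;
      		}
      		tb->vaddrs[tb->tlb_nr++] = vaddr;
      		if (tb->tlb_nr == TLB_BATCH_NR)
      			flush_tlb_pending(tb);
      	}

      	int main(void)
      	{
      		struct tlb_batch tb = { 0 };

      		tlb_batch_add_one(&tb, 0x1000);	/* not in lazy mode: flushed at once */
      		arch_enter_lazy_mmu_mode(&tb);
      		tlb_batch_add_one(&tb, 0x2000);	/* queued */
      		tlb_batch_add_one(&tb, 0x3000);	/* queued */
      		arch_leave_lazy_mmu_mode(&tb);	/* batch flushed here */
      		return 0;
      	}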
    • ARM: 7699/1: sched_clock: Add more notrace to prevent recursion · cea15092
      Stephen Boyd committed
      cyc_to_sched_clock() is called by sched_clock() and cyc_to_ns()
      is called by cyc_to_sched_clock(). I suspect that some compilers
      inline both of these functions into sched_clock() and so we've
      been getting away without having a notrace marking. It seems that
      my compiler isn't inlining cyc_to_sched_clock() though, so I'm
      hitting a recursion bug when I enable the function graph tracer,
      causing my system to crash. Marking these functions notrace fixes
      it. Technically cyc_to_ns() doesn't need the notrace because it's
      already marked inline, but let's just add it so that if we ever
      remove inline from that function it doesn't blow up.
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
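
      For illustration, a standalone sketch of the annotation (the kernel's
      notrace expands to roughly the attribute below; cyc_to_ns mirrors the
      cyc * mult >> shift scaling the ARM sched_clock code uses, and the second
      function is a simplified stand-in for cyc_to_sched_clock()):

      	#include <stdint.h>
      	#include <stdio.h>

      	/* keep the tracer's own helpers out of the instrumented set */
      	#define notrace __attribute__((no_instrument_function))

      	static inline uint64_t notrace cyc_to_ns(uint64_t cyc, uint32_t mult, uint32_t shift)
      	{
      		return (cyc * mult) >> shift;
      	}

      	static uint64_t notrace cyc_to_sched_clock(uint32_t cyc, uint32_t mask)
      	{
      		/* simplified: mask the counter value and scale it to nanoseconds */
      		return cyc_to_ns(cyc & mask, 1000u, 0u);
      	}

      	int main(void)
      	{
      		printf("%llu\n", (unsigned long long)cyc_to_sched_clock(12345u, 0xffffffffu));
      		return 0;
      	}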
  5. 19 April 2013, 1 commit
    • ARM: highbank: fix cache flush ordering for cpu hotplug · 73053d97
      Rob Herring committed
      The L1 data cache flush needs to come after the highbank_set_cpu_jump call,
      which pollutes the cache with the l2x0_lock. This causes other cores to
      deadlock waiting for the l2x0_lock. Moving the flush of the entire data
      cache after highbank_set_cpu_jump fixes the problem. Use flush_cache_louis
      instead of flush_cache_all, as that is sufficient to flush only the L1 data
      cache. flush_cache_louis did not exist when highbank_cpu_die was originally
      written.
      
      With PL310 errata 769419 enabled, a wmb is inserted into idle which takes
      the l2x0_lock. This makes the problem much more easily hit and causes
      reset to hang.
      Reported-by: Paolo Pisati <p.pisati@gmail.com>
      Signed-off-by: Rob Herring <rob.herring@calxeda.com>
      Signed-off-by: Olof Johansson <olof@lixom.net>
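
      A sketch of the resulting ordering in highbank_cpu_die() (the two helpers
      are empty stand-ins for the real Calxeda/ARM routines; only the relative
      order of the calls is the point):

      	/* stand-in: takes l2x0_lock for its outer-cache maintenance, so it
      	 * leaves the lock's cache line dirty in this CPU's L1 */
      	static void highbank_set_cpu_jump(unsigned int cpu, void *jump_addr) { (void)cpu; (void)jump_addr; }

      	/* stand-in: flush L1 data cache to the Level of Unification Inner Shareable */
      	static void flush_cache_louis(void) { }

      	static void highbank_cpu_die(unsigned int cpu)
      	{
      		highbank_set_cpu_jump(cpu, (void *)0);	/* 1. may dirty L1 with l2x0_lock */
      		flush_cache_louis();			/* 2. flush only afterwards, so sibling
      							 *    cores never spin on a stale lock */
      	}

      	int main(void) { highbank_cpu_die(1); return 0; }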
  6. 18 April 2013, 6 commits
  7. 17 April 2013, 11 commits
    • ARM: 7698/1: perf: fix group validation when using enable_on_exec · cb2d8b34
      Will Deacon committed
      Events may be created with attr->disabled == 1 and attr->enable_on_exec
      == 1, which confuses the group validation code because events in the
      PERF_EVENT_STATE_OFF state are not considered candidates for scheduling,
      and this may lead to failure at group scheduling time.
      
      This patch fixes the validation check for ARM, so that events in the
      OFF state are still considered when enable_on_exec is true.
      
      Cc: stable@vger.kernel.org
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Reported-by: Sudeep KarkadaNagesha <Sudeep.KarkadaNagesha@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
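
      A standalone model of the corrected rule (the state values and the struct
      are illustrative, not the perf internals): an event that is OFF purely
      because it is waiting for enable_on_exec must still be counted during
      group validation.

      	#include <stdbool.h>
      	#include <stdio.h>

      	enum state { STATE_ERROR = -2, STATE_EXIT = -1, STATE_OFF = 0, STATE_INACTIVE = 1 };

      	struct event { enum state state; bool enable_on_exec; };

      	/* true when the event must be accounted for while validating the group */
      	static bool counts_for_validation(const struct event *ev)
      	{
      		if (ev->state < STATE_OFF)
      			return false;	/* dead or errored: will never schedule */
      		if (ev->state == STATE_OFF && !ev->enable_on_exec)
      			return false;	/* plainly disabled: will never schedule */
      		return true;		/* OFF + enable_on_exec can still go on the PMU */
      	}

      	int main(void)
      	{
      		struct event ev = { STATE_OFF, true };	/* disabled=1, enable_on_exec=1 */
      		printf("%d\n", counts_for_validation(&ev));	/* prints 1 with the fix */
      		return 0;
      	}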
    • ARM: 7697/1: hw_breakpoint: do not use __cpuinitdata for dbg_cpu_pm_nb · 50acff3c
      Bastian Hecht committed
      We must not declare dbg_cpu_pm_nb as __cpuinitdata as we need it after
      system initialization for Suspend and CPUIdle.
      
      This was done in commit 9a6eb310 ("ARM: hw_breakpoint: Debug powerdown
      support for self-hosted debug").
      
      Cc: stable@vger.kernel.org
      Cc: Dietmar Eggemann <Dietmar.Eggemann@arm.com>
      Signed-off-by: Bastian Hecht <hechtb+renesas@gmail.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 7696/1: Fix kexec by setting outer_cache.inv_all for Feroceon · cd272d1e
      Illia Ragozin committed
      On Feroceon the L2 cache becomes non-coherent with the CPU
      when the L1 caches are disabled. Thus the L2 needs to be invalidated
      after both L1 caches are disabled.
      
      On kexec, before starting the code that relocates the kernel,
      the L1 caches are disabled in cpu_proc_fin (cpu_v7_proc_fin for Feroceon),
      but afterwards the L2 cache is never invalidated, because inv_all is not set
      in cache-feroceon-l2.c.
      So kernel relocation and decompression may have (and usually do have) errors.
      Setting the function enables L2 invalidation and fixes the issue.
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Illia Ragozin <illia.ragozin@grapecom.com>
      Acked-by: Jason Cooper <jason@lakedaemon.net>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
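
      A standalone model of why the missing hook matters (the struct and helpers
      mimic the shape of the outer-cache interface, not the mach-specific
      sources): while inv_all is left NULL, the generic "invalidate everything"
      helper is silently a no-op.

      	#include <stdio.h>

      	struct outer_cache_fns {
      		void (*inv_range)(unsigned long start, unsigned long end);
      		void (*inv_all)(void);
      	};

      	static struct outer_cache_fns outer_cache;	/* zero-initialised: hooks unset */

      	static void outer_inv_all(void)
      	{
      		if (outer_cache.inv_all)	/* NULL hook == silent no-op */
      			outer_cache.inv_all();
      	}

      	static void feroceon_l2_inv_all(void)
      	{
      		puts("L2 invalidated");		/* stands in for the real invalidation loop */
      	}

      	int main(void)
      	{
      		outer_inv_all();			/* prints nothing: L2 left stale */
      		outer_cache.inv_all = feroceon_l2_inv_all;	/* the fix: register the hook */
      		outer_inv_all();			/* now the L2 really gets invalidated */
      		return 0;
      	}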
    • ARM: 7694/1: ARM, TCM: initialize TCM in paging_init(), instead of setup_arch() · de40614e
      Joonsoo Kim committed
      tcm_init() calls iotable_init(), which uses early_alloc variants that
      do memblock allocation. Directly using memblock allocation after
      initializing bootmem should not be permitted, because bootmem cannot know
      what has additionally been reserved.
      So move tcm_init() to a safe place before initializing bootmem.
      
      (On the U300)
      Tested-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 7692/1: iop3xx: move IOP3XX_PERIPHERAL_VIRT_BASE · f5d6a144
      Aaro Koskinen committed
      Currently IOP3XX_PERIPHERAL_VIRT_BASE conflicts with PCI_IO_VIRT_BASE:
      
      					address         size
      	PCI_IO_VIRT_BASE                0xfee00000      0x200000
      	IOP3XX_PERIPHERAL_VIRT_BASE     0xfeffe000      0x2000
      
      Fix by moving IOP3XX_PERIPHERAL_VIRT_BASE below PCI_IO_VIRT_BASE.
      
      The patch fixes the following kernel panic with 3.9-rc1 on iop3xx boards:
      
      [    0.000000] Booting Linux on physical CPU 0x0
      [    0.000000] Initializing cgroup subsys cpu
      [    0.000000] Linux version 3.9.0-rc1-iop32x (aaro@blackmetal) (gcc version 4.7.2 (GCC) ) #20 PREEMPT Tue Mar 5 16:44:36 EET 2013
      [    0.000000] bootconsole [earlycon0] enabled
      [    0.000000] ------------[ cut here ]------------
      [    0.000000] kernel BUG at mm/vmalloc.c:1145!
      [    0.000000] Internal error: Oops - BUG: 0 [#1] PREEMPT ARM
      [    0.000000] Modules linked in:
      [    0.000000] CPU: 0    Not tainted  (3.9.0-rc1-iop32x #20)
      [    0.000000] PC is at vm_area_add_early+0x4c/0x88
      [    0.000000] LR is at add_static_vm_early+0x14/0x68
      [    0.000000] pc : [<c03e74a8>]    lr : [<c03e1c40>]    psr: 800000d3
      [    0.000000] sp : c03ffee4  ip : dfffdf88  fp : c03ffef4
      [    0.000000] r10: 00000002  r9 : 000000cf  r8 : 00000653
      [    0.000000] r7 : c040eca8  r6 : c03e2408  r5 : dfffdf60  r4 : 00200000
      [    0.000000] r3 : dfffdfd8  r2 : feffe000  r1 : ff000000  r0 : dfffdf60
      [    0.000000] Flags: Nzcv  IRQs off  FIQs off  Mode SVC_32  ISA ARM  Segment kernel
      [    0.000000] Control: 0000397f  Table: a0004000  DAC: 00000017
      [    0.000000] Process swapper (pid: 0, stack limit = 0xc03fe1b8)
      [    0.000000] Stack: (0xc03ffee4 to 0xc0400000)
      [    0.000000] fee0:          00200000 c03fff0c c03ffef8 c03e1c40 c03e7468 00200000 fee00000
      [    0.000000] ff00: c03fff2c c03fff10 c03e23e4 c03e1c38 feffe000 c0408ee4 ff000000 c0408f04
      [    0.000000] ff20: c03fff3c c03fff30 c03e2434 c03e23b4 c03fff84 c03fff40 c03e2c94 c03e2414
      [    0.000000] ff40: c03f8878 c03f6410 ffff0000 000bffff 00001000 00000008 c03fff84 c03f6410
      [    0.000000] ff60: c04227e8 c03fffd4 a0008000 c03f8878 69052e30 c02f96eb c03fffbc c03fff88
      [    0.000000] ff80: c03e044c c03e268c 00000000 0000397f c0385130 00000001 ffffffff c03f8874
      [    0.000000] ffa0: dfffffff a0004000 69052e30 a03f61a0 c03ffff4 c03fffc0 c03dd5cc c03e0184
      [    0.000000] ffc0: 00000000 00000000 00000000 00000000 00000000 c03f8878 0000397d c040601c
      [    0.000000] ffe0: c03f8874 c0408674 00000000 c03ffff8 a0008040 c03dd558 00000000 00000000
      [    0.000000] Backtrace:
      [    0.000000] [<c03e745c>] (vm_area_add_early+0x0/0x88) from [<c03e1c40>] (add_static_vm_early+0x14/0x68)
      Tested-by: Mikael Pettersson <mikpe@it.uu.se>
      Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • s390: move dummy io_remap_pfn_range() to asm/pgtable.h · 4f2e2903
      Linus Torvalds committed
      Commit b4cbb197 ("vm: add vm_iomap_memory() helper function") added
      a helper function wrapper around io_remap_pfn_range(), and every other
      architecture defined it in <asm/pgtable.h>.
      
      The s390 choice of <asm/io.h> may make sense, but is not very convenient
      for this case, and gratuitous differences like that cause unexpected errors like this:
      
         mm/memory.c: In function 'vm_iomap_memory':
         mm/memory.c:2439:2: error: implicit declaration of function 'io_remap_pfn_range' [-Werror=implicit-function-declaration]
      
      Glory be the kbuild test robot who noticed this, bisected it, and
      reported it to the guilty parties (ie me).
      
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
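
      For architectures with no special I/O mapping requirements the definition
      in question is just a pass-through; a sketch of the shape now expected to
      live in <asm/pgtable.h> (simplified, not the exact s390 header):

      	#define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \
      		remap_pfn_range(vma, vaddr, pfn, size, prot)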
    • x86,efi: Implement efi_no_storage_paranoia parameter · 8c58bf3e
      Richard Weinberger committed
      Using this parameter one can disable the storage_size/2 check if
      they are really sure that the UEFI implementation does sane garbage
      collection and fulfills the spec.
      
      This parameter is useful if a device uses more than 50% of the
      storage by default.
      The Intel DQSW67 desktop board is one such example.
      Signed-off-by: Richard Weinberger <richard@nod.at>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
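
      A sketch of how such a command-line switch is typically wired up (the real
      code lives in arch/x86/platform/efi/efi.c and may differ in detail); the
      flag is then consulted before applying the storage_size/2 check:

      	static bool efi_no_storage_paranoia;

      	static int __init setup_storage_paranoia(char *arg)
      	{
      		efi_no_storage_paranoia = true;
      		return 0;
      	}
      	early_param("efi_no_storage_paranoia", setup_storage_paranoia);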
    • ARM: KVM: fix L_PTE_S2_RDWR to actually be Read/Write · 865499ea
      Marc Zyngier committed
      Looks like our L_PTE_S2_RDWR definition is slightly wrong,
      and is actually write only (see ARM ARM Table B3-9, Stage 2 control
      of access permissions). Didn't make a difference for normal pages,
      as we OR the flags together, but I'm still wondering how it worked
      for Stage-2 mapped devices, such as the GIC.
      
      Brown paper bag time, again.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
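
      For reference, a sketch of the Stage-2 access-permission encoding involved
      (HAP[2:1] live in descriptor bits [7:6]; the macro spellings below follow
      the kernel's style but are illustrative):

      	/* HAP[2:1] in bits [7:6]: 01 = read-only, 10 = write-only, 11 = read/write */
      	#define L_PTE_S2_RDONLY	(_AT(pteval_t, 1) << 6)	/* read permission */
      	#define L_PTE_S2_RDWR	(_AT(pteval_t, 3) << 6)	/* read and write  */
      	/* the buggy definition was (_AT(pteval_t, 2) << 6), i.e. write-only */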
    • ARM: KVM: fix KVM_CAP_ARM_SET_DEVICE_ADDR reporting · ca46e10f
      Marc Zyngier committed
      Commit 3401d546 (KVM: ARM: Introduce KVM_ARM_SET_DEVICE_ADDR
      ioctl) added support for the KVM_CAP_ARM_SET_DEVICE_ADDR capability,
      but failed to add a break in the relevant case statement, returning
      the number of CPUs instead.
      
      Luckily enough, the CONFIG_NR_CPUS=0 patch hasn't been merged yet
      (https://lkml.org/lkml/diff/2012/3/31/131/1), so the bug wasn't
      noticed.
      
      Just give it a break!
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
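
      A standalone illustration of the fall-through (constants and the CPU count
      are made up): with the break missing, asking about the capability returns
      the number of CPUs instead of 1.

      	#include <stdio.h>

      	#define CAP_ARM_SET_DEVICE_ADDR 1
      	#define CAP_NR_VCPUS            2

      	static int check_extension(int ext, int nr_cpus)
      	{
      		int r = 0;

      		switch (ext) {
      		case CAP_ARM_SET_DEVICE_ADDR:
      			r = 1;
      			break;		/* the fix: without this we fall through below */
      		case CAP_NR_VCPUS:
      			r = nr_cpus;
      			break;
      		}
      		return r;
      	}

      	int main(void)
      	{
      		printf("%d\n", check_extension(CAP_ARM_SET_DEVICE_ADDR, 8));	/* 1 with the break in place */
      		return 0;
      	}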
    • efi: Export efi_query_variable_store() for efivars.ko · 3668011d
      Sergey Vlasov committed
      Fixes build with CONFIG_EFI_VARS=m which was broken after the commit
      "x86, efivars: firmware bug workarounds should be in platform code".
      Signed-off-by: Sergey Vlasov <vsu@altlinux.ru>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
    • x86/Kconfig: Make EFI select UCS2_STRING · f6ce5002
      Sergey Vlasov committed
      The commit "efi: Distinguish between "remaining space" and actually used
      space" added usage of ucs2_*() functions to arch/x86/platform/efi/efi.c,
      but the only thing which selected UCS2_STRING was EFI_VARS, which is
      technically optional and can be built as a module.
      Signed-off-by: Sergey Vlasov <vsu@altlinux.ru>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
  8. 16 April 2013, 3 commits
    • perf/x86: Fix offcore_rsp valid mask for SNB/IVB · f1923820
      Stephane Eranian committed
      The valid mask for both offcore_response_0 and
      offcore_response_1 was wrong for SNB/SNB-EP,
      IVB/IVB-EP. It was possible to write to a
      reserved bit and cause a GP fault, crashing
      the kernel.
      
      This patch fixes the problem by correctly marking the
      reserved bits in the valid mask for all the processors
      mentioned above.
      
      A distinction between desktop and server parts is introduced
      because bits 24-30 are only available on the server parts.
      
      This version of the patch is just a rebase to the perf/urgent tree
      and should apply to older kernels as well.
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Cc: peterz@infradead.org
      Cc: jolsa@redhat.com
      Cc: gregkh@linuxfoundation.org
      Cc: security@kernel.org
      Cc: ak@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
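
      A generic standalone model of the check (the valid mask here is synthetic;
      the real SNB/IVB masks, which differ between desktop and server parts in
      bits 24-30, are not reproduced): values touching bits outside the valid
      mask are refused before they can ever reach the MSR write.

      	#include <stdint.h>
      	#include <stdio.h>

      	/* refuse values that touch bits outside the documented valid mask,
      	 * instead of letting them reach wrmsr() and #GP-fault */
      	static int validate_extra_reg(uint64_t value, uint64_t valid_mask)
      	{
      		if (value & ~valid_mask)
      			return -1;
      		return 0;
      	}

      	int main(void)
      	{
      		const uint64_t valid_mask = (1ULL << 16) - 1;	/* synthetic mask, not the real SNB/IVB one */

      		printf("%d\n", validate_extra_reg(1ULL << 40, valid_mask));	/* -1: reserved bit set */
      		printf("%d\n", validate_extra_reg(0x00ff, valid_mask));		/*  0: allowed          */
      		return 0;
      	}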
    • efi: Distinguish between "remaining space" and actually used space · 31ff2f20
      Matthew Garrett committed
      EFI implementations distinguish between space that is actively used by a
      variable and space that merely hasn't been garbage collected yet. Space
      that hasn't yet been garbage collected isn't available for use and so isn't
      counted in the remaining_space field returned by QueryVariableInfo().
      
      Combined with commit 68d92986 this can cause problems. Some implementations
      don't garbage collect until the remaining space is smaller than the maximum
      variable size, and as a result check_var_size() will always fail once more
      than 50% of the variable store has been used even if most of that space is
      marked as available for garbage collection. The user is unable to create
      new variables, and deleting variables doesn't increase the remaining space.
      
      The problem that 68d92986 was attempting to avoid was one where certain
      platforms fail if the actively used space is greater than 50% of the
      available storage space. We should be able to calculate that by simply
      summing the size of each available variable and subtracting that from
      the total storage space. With luck this will fix the problem described in
      https://bugzilla.kernel.org/show_bug.cgi?id=55471 without permitting
      damage to occur to the machines 68d92986 was attempting to fix.
      Signed-off-by: Matthew Garrett <matthew.garrett@nebula.com>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
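
      A standalone model of the revised accounting (types and names are
      illustrative): only the space actively used by variables, plus the new
      write, is charged against the 50% limit, so space that is merely awaiting
      garbage collection no longer counts.

      	#include <stdbool.h>
      	#include <stdint.h>
      	#include <stdio.h>

      	struct var { uint64_t size; };

      	/* charge only actively used variable space (plus the new write) against
      	 * the 50% limit; space awaiting garbage collection is ignored */
      	static bool would_exceed_half(const struct var *vars, int nvars,
      				      uint64_t new_size, uint64_t storage_size)
      	{
      		uint64_t active = new_size;

      		for (int i = 0; i < nvars; i++)
      			active += vars[i].size;
      		return active > storage_size / 2;
      	}

      	int main(void)
      	{
      		struct var vars[] = { { 4096 }, { 8192 } };

      		printf("%d\n", would_exceed_half(vars, 2, 1024, 65536));	/* 0: plenty of room left */
      		return 0;
      	}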
    • efi: Pass boot services variable info to runtime code · cc5a080c
      Matthew Garrett committed
      EFI variables can be flagged as being accessible only within boot services.
      This makes it awkward for us to figure out how much space they use at
      runtime. In theory we could figure this out by simply comparing the results
      from QueryVariableInfo() to the space used by all of our variables, but
      that fails if the platform doesn't garbage collect on every boot. Thankfully,
      calling QueryVariableInfo() while still inside boot services gives a more
      reliable answer. This patch passes that information from the EFI boot stub
      up to the efi platform code.
      Signed-off-by: Matthew Garrett <matthew.garrett@nebula.com>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
  9. 15 April 2013, 2 commits
    • powerpc: add a missing label in resume_kernel · d8b92292
      Kevin Hao committed
      A label 0 was missed in the patch a9c4e541 (powerpc/kprobe: Complete
      kprobe and migrate exception frame). This will cause the kernel to
      branch to an undetermined address if there really is a conflict when
      updating the thread flags.
      Signed-off-by: Kevin Hao <haokexin@gmail.com>
      Cc: stable@vger.kernel.org
      Acked-by: Tiejun Chen <tiejun.chen@windriver.com>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
    • powerpc: Fix audit crash due to save/restore PPR changes · 05e38e5d
      Alistair Popple committed
      The current mainline crashes when hitting userspace with the following:
      
      kernel BUG at kernel/auditsc.c:1769!
      cpu 0x1: Vector: 700 (Program Check) at [c000000023883a60]
          pc: c0000000001047a8: .__audit_syscall_entry+0x38/0x130
          lr: c00000000000ed64: .do_syscall_trace_enter+0xc4/0x270
          sp: c000000023883ce0
         msr: 8000000000029032
        current = 0xc000000023800000
        paca    = 0xc00000000f080380   softe: 0        irq_happened: 0x01
          pid   = 1629, comm = start_udev
      kernel BUG at kernel/auditsc.c:1769!
      enter ? for help
      [c000000023883d80] c00000000000ed64 .do_syscall_trace_enter+0xc4/0x270
      [c000000023883e30] c000000000009b08 syscall_dotrace+0xc/0x38
       --- Exception: c00 (System Call) at 0000008010ec50dc
      
      Bisecting found that the following patch caused it:
      
      commit 44e9309f
      Author: Haren Myneni <haren@linux.vnet.ibm.com>
      powerpc: Implement PPR save/restore
      
      It was found that this patch corrupted r9 when calling
      SET_DEFAULT_THREAD_PPR().
      
      Using r10 as a scratch register instead of r9 solved the problem.
      Signed-off-by: Alistair Popple <alistair@popple.id.au>
      Acked-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  10. 13 April 2013, 1 commit
    • x86-32: Fix possible incomplete TLB invalidate with PAE pagetables · 1de14c3c
      Dave Hansen committed
      This patch attempts to fix:
      
      	https://bugzilla.kernel.org/show_bug.cgi?id=56461
      
      The symptom is a crash and messages like this:
      
      	chrome: Corrupted page table at address 34a03000
      	*pdpt = 0000000000000000 *pde = 0000000000000000
      	Bad pagetable: 000f [#1] PREEMPT SMP
      
      Ingo guesses this got introduced by commit 611ae8e3 ("x86/tlb:
      enable tlb flush range support for x86") since that code started to free
      unused pagetables.
      
      On x86-32 PAE kernels, that new code has the potential to free an entire
      PMD page and will clear one of the four page-directory-pointer-table
      (aka pgd_t) entries.
      
      The hardware aggressively "caches" these top-level entries and invlpg
      does not actually affect the CPU's copy.  If we clear one we *HAVE* to
      do a full TLB flush, otherwise we might continue using a freed pmd page.
      (note, we do this properly on the population side in pud_populate()).
      
      This patch tracks whenever we clear one of these entries in the 'struct
      mmu_gather', and ensures that we follow up with a full tlb flush.
      
      BTW, I disassembled and checked that:
      
      	if (tlb->fullmm == 0)
      and
      	if (!tlb->fullmm && !tlb->need_flush_all)
      
      generate essentially the same code, so there should be zero impact there
      to the !PAE case.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Peter Anvin <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Artem S Tashkinov <t.artem@mailcity.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
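
      A standalone model of the tracking described above (the struct and flush
      helpers are stand-ins for the real mmu_gather/x86 code): clearing a
      top-level entry sets a flag that forces the flush decision quoted above
      down the full-flush path.

      	#include <stdbool.h>
      	#include <stdio.h>

      	struct mmu_gather {
      		bool fullmm;
      		bool need_flush_all;	/* new: set when a pgd_t (PDPT) entry was cleared */
      		unsigned long start, end;
      	};

      	static void flush_tlb_range_model(unsigned long s, unsigned long e)
      	{
      		printf("ranged invlpg %#lx-%#lx\n", s, e);
      	}

      	static void flush_tlb_all_model(void)
      	{
      		printf("full TLB flush (a PDPT entry was cleared)\n");
      	}

      	static void tlb_flush_model(struct mmu_gather *tlb)
      	{
      		if (!tlb->fullmm && !tlb->need_flush_all)
      			flush_tlb_range_model(tlb->start, tlb->end);
      		else
      			flush_tlb_all_model();
      	}

      	static void pud_clear_model(struct mmu_gather *tlb)
      	{
      		/* freeing a PMD page clears a top-level entry that invlpg cannot
      		 * refresh, so remember to do a full flush later */
      		tlb->need_flush_all = true;
      	}

      	int main(void)
      	{
      		struct mmu_gather tlb = { .start = 0x34a00000, .end = 0x34a04000 };

      		pud_clear_model(&tlb);
      		tlb_flush_model(&tlb);	/* takes the full-flush path */
      		return 0;
      	}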
  11. 12 April 2013, 2 commits
  12. 11 April 2013, 8 commits