1. 26 Apr 2013, 1 commit
  2. 24 Apr 2013, 1 commit
  3. 23 Apr 2013, 12 commits
  4. 22 Apr 2013, 7 commits
  5. 20 Apr 2013, 3 commits
    • x86, microcode: Verify the family before dispatching microcode patching · 74c3e3fc
      Committed by H. Peter Anvin
      For each CPU vendor that implements CPU microcode patching, there will
      be a minimum family for which this is implemented.  Verify this
      minimum level of support.
      
      This can be done in the dispatch function or early in the application
      functions.  Doing the latter turned out to be somewhat awkward because
      of the inevitable split between the BSP and the AP paths, and rather
      than pushing deep into the application functions, do this in
      the dispatch function.
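      
      A minimal sketch of the kind of dispatch-time check described above; the
      helper names and the exact family cutoffs here are illustrative assumptions,
      not a copy of the patch:
      
      	void __init load_ucode_bsp(void)
      	{
      		int vendor = x86_vendor();		/* read via CPUID */
      		unsigned int family = x86_family();
      
      		if (vendor == X86_VENDOR_INTEL && family >= 6)
      			load_ucode_intel_bsp();
      		else if (vendor == X86_VENDOR_AMD && family >= 0x10)
      			load_ucode_amd_bsp();
      		/* otherwise: no microcode loading for this CPU */
      	}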
      Reported-by: "Bryan O'Donoghue" <bryan.odonoghue.lkml@nexus-software.ie>
      Suggested-by: Borislav Petkov <bp@alien8.de>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Link: http://lkml.kernel.org/r/1366392183-4149-1-git-send-email-bryan.odonoghue.lkml@nexus-software.ie
      74c3e3fc
    • sparc64: Fix race in TLB batch processing. · f36391d2
      Committed by David S. Miller
      As reported by Dave Kleikamp, when we emit cross calls to do batched
      TLB flush processing we have a race because we do not synchronize on
      the sibling cpus completing the cross call.
      
      So meanwhile the TLB batch can be reset (tb->tlb_nr set to zero, etc.)
      and either flushes are missed or flushes will flush the wrong
      addresses.
      
      Fix this by using generic infrastructure to synchronize on the
      completion of the cross call.
      
      This first required getting the flush_tlb_pending() call out from
      switch_to() which operates with locks held and interrupts disabled.
      The problem is that smp_call_function_many() cannot be invoked with
      IRQs disabled and this is explicitly checked for with WARN_ON_ONCE().
      
      We get the batch processing outside of locked IRQ disabled sections by
      using some ideas from the powerpc port. Namely, we only batch inside
      of arch_{enter,leave}_lazy_mmu_mode() calls.  If we're not in such a
      region, we flush TLBs synchronously.
      
      1) Get rid of xcall_flush_tlb_pending and per-cpu type
         implementations.
      
      2) Do TLB batch cross calls instead via:
      
      	smp_call_function_many()
      		tlb_pending_func()
      			__flush_tlb_pending()
      
      3) Batch only in lazy mmu sequences:
      
      	a) Add 'active' member to struct tlb_batch
      	b) Define __HAVE_ARCH_ENTER_LAZY_MMU_MODE
      	c) Set 'active' in arch_enter_lazy_mmu_mode()
      	d) Run batch and clear 'active' in arch_leave_lazy_mmu_mode()
      	e) Check 'active' in tlb_batch_add_one() and do a synchronous
                 flush if it's clear (see the sketch after this list).
      
      4) Add infrastructure for synchronous TLB page flushes.
      
      	a) Implement __flush_tlb_page and per-cpu variants, patch
      	   as needed.
      	b) Likewise for xcall_flush_tlb_page.
      	c) Implement smp_flush_tlb_page() to invoke the cross-call.
      	d) Wire up global_flush_tlb_page() to the right routine based
                 upon CONFIG_SMP
      
      5) It turns out that singleton batches are very common; 2 out of every
         3 batch flushes have only a single entry in them.
      
         The batch flush waiting is very expensive, both because of the poll
         on sibling cpu completion, as well as because passing the tlb batch
         pointer to the sibling cpus invokes a shared memory dereference.
      
         Therefore, in flush_tlb_pending(), if there is only one entry in
         the batch perform a completely asynchronous global_flush_tlb_page()
         instead.
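      
      A minimal sketch of the 'active'-gated batching from points 3 and 4 above;
      the per-cpu access and helper calls are simplified assumptions rather than
      the exact sparc64 code:
      
      	void arch_enter_lazy_mmu_mode(void)
      	{
      		struct tlb_batch *tb = this_cpu_ptr(&tlb_batch);
      
      		tb->active = 1;
      	}
      
      	void arch_leave_lazy_mmu_mode(void)
      	{
      		struct tlb_batch *tb = this_cpu_ptr(&tlb_batch);
      
      		if (tb->tlb_nr)
      			flush_tlb_pending();	/* run the batched cross call */
      		tb->active = 0;
      	}
      
      	static void tlb_batch_add_one(struct mm_struct *mm, unsigned long vaddr,
      				      bool exec)
      	{
      		struct tlb_batch *tb = this_cpu_ptr(&tlb_batch);
      
      		if (!tb->active) {
      			/* not in a lazy-mmu region: flush this page synchronously */
      			global_flush_tlb_page(mm, vaddr);
      			return;
      		}
      
      		/* ... otherwise append vaddr to the batch and flush when full ... */
      	}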
      Reported-by: Dave Kleikamp <dave.kleikamp@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Dave Kleikamp <dave.kleikamp@oracle.com>
      f36391d2
    • ARM: 7699/1: sched_clock: Add more notrace to prevent recursion · cea15092
      Committed by Stephen Boyd
      cyc_to_sched_clock() is called by sched_clock() and cyc_to_ns()
      is called by cyc_to_sched_clock(). I suspect that some compilers
      inline both of these functions into sched_clock() and so we've
      been getting away without having a notrace marking. It seems that
      my compiler isn't inlining cyc_to_sched_clock() though, so I'm
      hitting a recursion bug when I enable the function graph tracer,
      causing my system to crash. Marking these functions notrace fixes
      it. Technically cyc_to_ns() doesn't need the notrace because it's
      already marked inline, but let's just add it so that if we ever
      remove inline from that function it doesn't blow up.
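      
      Roughly, the annotations end up covering the whole sched_clock() call chain;
      in this sketch cd_mult, cd_shift, read_sched_clock() and sched_clock_mask are
      assumed names, not necessarily the exact ones in the ARM code:
      
      	/* every function reachable from sched_clock() must be notrace,
      	 * otherwise the function graph tracer can recurse into itself */
      	static inline u64 notrace cyc_to_ns(u64 cyc, u32 mult, u32 shift)
      	{
      		return (cyc * mult) >> shift;
      	}
      
      	static unsigned long long notrace cyc_to_sched_clock(u32 cyc, u32 mask)
      	{
      		return cyc_to_ns(cyc & mask, cd_mult, cd_shift);
      	}
      
      	unsigned long long notrace sched_clock(void)
      	{
      		return cyc_to_sched_clock(read_sched_clock(), sched_clock_mask);
      	}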
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      cea15092
  6. 19 Apr 2013, 1 commit
    • ARM: highbank: fix cache flush ordering for cpu hotplug · 73053d97
      Committed by Rob Herring
      The L1 data cache flush needs to happen after the highbank_set_cpu_jump call,
      which pollutes the cache with the l2x0_lock. This causes other cores to deadlock
      waiting for the l2x0_lock. Moving the flush of the entire data cache after
      highbank_set_cpu_jump fixes the problem. Use flush_cache_louis instead of
      flush_cache_all, as that is sufficient to flush only the L1 data cache.
      flush_cache_louis did not exist when highbank_cpu_die was originally
      written.
      
      With PL310 errata 769419 enabled, a wmb is inserted into idle which takes
      the l2x0_lock. This makes the problem much more easily hit and causes
      reset to hang.
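      
      A simplified sketch of the resulting ordering in highbank_cpu_die(); the
      power-down details are elided:
      
      	static void highbank_cpu_die(unsigned int cpu)
      	{
      		/* set the jump address first; this may take the l2x0_lock
      		 * and so pollutes the L1 data cache */
      		highbank_set_cpu_jump(cpu, phys_to_virt(0));
      
      		/* flush only the L1 data cache, after the lock is no longer held */
      		flush_cache_louis();
      
      		highbank_set_core_pwr();
      
      		while (1)
      			cpu_do_idle();
      	}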
      Reported-by: Paolo Pisati <p.pisati@gmail.com>
      Signed-off-by: Rob Herring <rob.herring@calxeda.com>
      Signed-off-by: Olof Johansson <olof@lixom.net>
      73053d97
  7. 18 Apr 2013, 6 commits
  8. 17 Apr 2013, 9 commits
    • ARM: 7698/1: perf: fix group validation when using enable_on_exec · cb2d8b34
      Committed by Will Deacon
      Events may be created with attr->disabled == 1 and attr->enable_on_exec
      == 1, which confuses the group validation code because events in the
      PERF_EVENT_STATE_OFF state are not considered candidates for scheduling.
      This may lead to failure at group scheduling time.
      
      This patch fixes the validation check for ARM, so that events in the
      OFF state are still considered when enable_on_exec is true.
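      
      The check ends up looking roughly like this in the ARM event validation path
      (a sketch of the logic described above, not a verbatim diff; the final helper
      call is an assumed placeholder):
      
      	static int validate_event(struct pmu_hw_events *hw_events,
      				  struct perf_event *event)
      	{
      		/* dead events never count against the group */
      		if (event->state < PERF_EVENT_STATE_OFF)
      			return 1;
      
      		/* OFF events only matter if they will be enabled on exec */
      		if (event->state == PERF_EVENT_STATE_OFF &&
      		    !event->attr.enable_on_exec)
      			return 1;
      
      		/* otherwise ask the PMU whether the event can get a counter */
      		return armpmu_event_fits(hw_events, event);
      	}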
      
      Cc: stable@vger.kernel.org
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Reported-by: Sudeep KarkadaNagesha <Sudeep.KarkadaNagesha@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      cb2d8b34
    • ARM: 7697/1: hw_breakpoint: do not use __cpuinitdata for dbg_cpu_pm_nb · 50acff3c
      Committed by Bastian Hecht
      We must not declare dbg_cpu_pm_nb as __cpuinitdata as we need it after
      system initialization for Suspend and CPUIdle.
      
      This was done in commit 9a6eb310 ("ARM: hw_breakpoint: Debug powerdown
      support for self-hosted debug").
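      
      The change itself is just dropping the annotation from the declaration,
      roughly as below (the callback name is assumed):
      
      	/* was: static struct notifier_block __cpuinitdata dbg_cpu_pm_nb = {
      	 * __cpuinitdata memory can be freed after boot, so later CPU PM
      	 * notifications would touch freed data */
      	static struct notifier_block dbg_cpu_pm_nb = {
      		.notifier_call = dbg_cpu_pm_notify,
      	};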
      
      Cc: stable@vger.kernel.org
      Cc: Dietmar Eggemann <Dietmar.Eggemann@arm.com>
      Signed-off-by: Bastian Hecht <hechtb+renesas@gmail.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      50acff3c
    • ARM: 7696/1: Fix kexec by setting outer_cache.inv_all for Feroceon · cd272d1e
      Committed by Illia Ragozin
      On Feroceon the L2 cache becomes non-coherent with the CPU
      when the L1 caches are disabled. Thus the L2 needs to be invalidated
      after both L1 caches are disabled.
      
      On kexec, before starting the code that relocates the kernel,
      the L1 caches are disabled in cpu_proc_fin (cpu_v7_proc_fin for Feroceon),
      but afterwards the L2 cache is never invalidated, because inv_all is not set
      in cache-feroceon-l2.c.
      So kernel relocation and decompression may have (and usually have) errors.
      Setting this function pointer enables L2 invalidation and fixes the issue.
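      
      Conceptually the fix is a one-liner in the cache-feroceon-l2.c init path,
      along these lines (the whole-cache invalidate helper name is illustrative):
      
      	void __init feroceon_l2_init(int l2_wt_override)
      	{
      		/* ... existing outer_cache range hooks ... */
      
      		/* new: provide a whole-cache invalidate so kexec can flush the
      		 * now non-coherent L2 after the L1 caches are disabled */
      		outer_cache.inv_all = feroceon_l2_inv_all;
      	}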
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Illia Ragozin <illia.ragozin@grapecom.com>
      Acked-by: Jason Cooper <jason@lakedaemon.net>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      cd272d1e
    • ARM: 7694/1: ARM, TCM: initialize TCM in paging_init(), instead of setup_arch() · de40614e
      Committed by Joonsoo Kim
      tcm_init() calls iotable_init(), which uses the early_alloc variants that
      do memblock allocation. Directly using memblock allocation after
      initializing bootmem should not be permitted, because bootmem can't know
      what has been additionally reserved.
      So move tcm_init() to a safe place, before bootmem is initialized.
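      
      In other words, the call moves out of setup_arch() and into paging_init(),
      ahead of the bootmem setup; roughly (surrounding code elided):
      
      	void __init paging_init(struct machine_desc *mdesc)
      	{
      		/* ... page table setup, map_lowmem(), devicemaps_init() ... */
      
      		tcm_init();	/* uses iotable_init()/memblock, so it must run
      				 * before bootmem takes over allocations */
      
      		/* ... */
      
      		bootmem_init();
      
      		/* ... */
      	}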
      
      (On the U300)
      Tested-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      de40614e
    • ARM: 7692/1: iop3xx: move IOP3XX_PERIPHERAL_VIRT_BASE · f5d6a144
      Committed by Aaro Koskinen
      Currently IOP3XX_PERIPHERAL_VIRT_BASE conflicts with PCI_IO_VIRT_BASE:
      
      					address         size
      	PCI_IO_VIRT_BASE                0xfee00000      0x200000
      	IOP3XX_PERIPHERAL_VIRT_BASE     0xfeffe000      0x2000
      
      Fix by moving IOP3XX_PERIPHERAL_VIRT_BASE below PCI_IO_VIRT_BASE.
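      
      The fix is just a relocation of the define so the peripheral window sits
      below the PCI I/O window; sketched below, with the new address picked as an
      illustrative value just under PCI_IO_VIRT_BASE:
      
      	/* PCI I/O window: 0xfee00000 .. 0xfeffffff (0x200000 bytes) */
      	#define PCI_IO_VIRT_BASE		0xfee00000
      
      	/* the old value 0xfeffe000 landed inside that window */
      	#define IOP3XX_PERIPHERAL_VIRT_BASE	0xfedfe000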
      
      The patch fixes the following kernel panic with 3.9-rc1 on iop3xx boards:
      
      [    0.000000] Booting Linux on physical CPU 0x0
      [    0.000000] Initializing cgroup subsys cpu
      [    0.000000] Linux version 3.9.0-rc1-iop32x (aaro@blackmetal) (gcc version 4.7.2 (GCC) ) #20 PREEMPT Tue Mar 5 16:44:36 EET 2013
      [    0.000000] bootconsole [earlycon0] enabled
      [    0.000000] ------------[ cut here ]------------
      [    0.000000] kernel BUG at mm/vmalloc.c:1145!
      [    0.000000] Internal error: Oops - BUG: 0 [#1] PREEMPT ARM
      [    0.000000] Modules linked in:
      [    0.000000] CPU: 0    Not tainted  (3.9.0-rc1-iop32x #20)
      [    0.000000] PC is at vm_area_add_early+0x4c/0x88
      [    0.000000] LR is at add_static_vm_early+0x14/0x68
      [    0.000000] pc : [<c03e74a8>]    lr : [<c03e1c40>]    psr: 800000d3
      [    0.000000] sp : c03ffee4  ip : dfffdf88  fp : c03ffef4
      [    0.000000] r10: 00000002  r9 : 000000cf  r8 : 00000653
      [    0.000000] r7 : c040eca8  r6 : c03e2408  r5 : dfffdf60  r4 : 00200000
      [    0.000000] r3 : dfffdfd8  r2 : feffe000  r1 : ff000000  r0 : dfffdf60
      [    0.000000] Flags: Nzcv  IRQs off  FIQs off  Mode SVC_32  ISA ARM  Segment kernel
      [    0.000000] Control: 0000397f  Table: a0004000  DAC: 00000017
      [    0.000000] Process swapper (pid: 0, stack limit = 0xc03fe1b8)
      [    0.000000] Stack: (0xc03ffee4 to 0xc0400000)
      [    0.000000] fee0:          00200000 c03fff0c c03ffef8 c03e1c40 c03e7468 00200000 fee00000
      [    0.000000] ff00: c03fff2c c03fff10 c03e23e4 c03e1c38 feffe000 c0408ee4 ff000000 c0408f04
      [    0.000000] ff20: c03fff3c c03fff30 c03e2434 c03e23b4 c03fff84 c03fff40 c03e2c94 c03e2414
      [    0.000000] ff40: c03f8878 c03f6410 ffff0000 000bffff 00001000 00000008 c03fff84 c03f6410
      [    0.000000] ff60: c04227e8 c03fffd4 a0008000 c03f8878 69052e30 c02f96eb c03fffbc c03fff88
      [    0.000000] ff80: c03e044c c03e268c 00000000 0000397f c0385130 00000001 ffffffff c03f8874
      [    0.000000] ffa0: dfffffff a0004000 69052e30 a03f61a0 c03ffff4 c03fffc0 c03dd5cc c03e0184
      [    0.000000] ffc0: 00000000 00000000 00000000 00000000 00000000 c03f8878 0000397d c040601c
      [    0.000000] ffe0: c03f8874 c0408674 00000000 c03ffff8 a0008040 c03dd558 00000000 00000000
      [    0.000000] Backtrace:
      [    0.000000] [<c03e745c>] (vm_area_add_early+0x0/0x88) from [<c03e1c40>] (add_static_vm_early+0x14/0x68)
      Tested-by: Mikael Pettersson <mikpe@it.uu.se>
      Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      f5d6a144
    • s390: move dummy io_remap_pfn_range() to asm/pgtable.h · 4f2e2903
      Committed by Linus Torvalds
      Commit b4cbb197 ("vm: add vm_iomap_memory() helper function") added
      a helper function wrapper around io_remap_pfn_range(), and every other
      architecture defined it in <asm/pgtable.h>.
      
      The s390 choice of <asm/io.h> may make sense, but is not very convenient
      for this case, and gratuitous differences like that cause unexpected errors like this:
      
         mm/memory.c: In function 'vm_iomap_memory':
         mm/memory.c:2439:2: error: implicit declaration of function 'io_remap_pfn_range' [-Werror=implicit-function-declaration]
      
      Glory be to the kbuild test robot who noticed this, bisected it, and
      reported it to the guilty parties (ie me).
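      
      The dummy definition itself is trivial; once it lives in <asm/pgtable.h> it
      looks roughly like every other architecture's fallback:
      
      	/* s390 needs no special I/O remapping, so just fall back */
      	#define io_remap_pfn_range(vma, vaddr, pfn, size, prot) \
      		remap_pfn_range(vma, vaddr, pfn, size, prot)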
      
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4f2e2903
    • x86,efi: Implement efi_no_storage_paranoia parameter · 8c58bf3e
      Committed by Richard Weinberger
      Using this parameter one can disable the storage_size/2 check, provided
      one is really sure that the UEFI firmware does sane garbage collection
      and fulfills the spec.
      
      This parameter is useful if a device uses more than 50% of the
      storage by default; the Intel DQSW67 desktop board is one such example.
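      
      Wiring up such a boot parameter typically amounts to a flag plus a __setup()
      handler, along these lines; the handler name and the check it guards are
      assumptions, not a copy of the patch:
      
      	static bool efi_no_storage_paranoia;
      
      	static int __init setup_storage_paranoia(char *arg)
      	{
      		efi_no_storage_paranoia = true;
      		return 1;
      	}
      	__setup("efi_no_storage_paranoia", setup_storage_paranoia);
      
      	/* later, before writing a variable:
      	 *	if (!efi_no_storage_paranoia && remaining_space_too_low())
      	 *		return EFI_OUT_OF_RESOURCES;
      	 */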
      Signed-off-by: Richard Weinberger <richard@nod.at>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      8c58bf3e
    • ARM: KVM: fix L_PTE_S2_RDWR to actually be Read/Write · 865499ea
      Committed by Marc Zyngier
      Looks like our L_PTE_S2_RDWR definition is slightly wrong,
      and is actually write only (see ARM ARM Table B3-9, Stage 2 control
      of access permissions). Didn't make a difference for normal pages,
      as we OR the flags together, but I'm still wondering how it worked
      for Stage-2 mapped devices, such as the GIC.
      
      Brown paper bag time, again.
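      
      For reference, the stage-2 HAP[2:1] field encodes 01 = read-only,
      10 = write-only and 11 = read/write, so the fix boils down to something
      like this sketch:
      
      	#define L_PTE_S2_RDONLY	(_AT(pteval_t, 1) << 6)	/* HAP[1]              */
      	#define L_PTE_S2_RDWR	(_AT(pteval_t, 3) << 6)	/* HAP[2:1], was 2 << 6
      							 * i.e. write-only      */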
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
      865499ea
    • ARM: KVM: fix KVM_CAP_ARM_SET_DEVICE_ADDR reporting · ca46e10f
      Committed by Marc Zyngier
      Commit 3401d546 (KVM: ARM: Introduce KVM_ARM_SET_DEVICE_ADDR
      ioctl) added support for the KVM_CAP_ARM_SET_DEVICE_ADDR capability,
      but failed to add a break in the relevant case statement, returning
      the number of CPUs instead.
      
      Luckily enough, the CONFIG_NR_CPUS=0 patch hasn't been merged yet
      (https://lkml.org/lkml/diff/2012/3/31/131/1), so the bug wasn't
      noticed.
      
      Just give it a break!
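      
      In sketch form (the neighbouring case is shown only to illustrate the
      fall-through; the exact ordering in the real switch may differ):
      
      	switch (ext) {
      	case KVM_CAP_ARM_SET_DEVICE_ADDR:
      		r = 1;
      		break;			/* the missing break: without it ... */
      	case KVM_CAP_NR_VCPUS:
      		r = num_online_cpus();	/* ... we reported the number of CPUs */
      		break;
      	default:
      		r = 0;
      		break;
      	}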
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
      ca46e10f