1. 04 Mar 2013, 1 commit
• ARM: 7659/1: mm: make mm->context.id an atomic64_t variable · 8a4e3a9e
  Authored by Will Deacon
      mm->context.id is updated under asid_lock when a new ASID is allocated
      to an mm_struct. However, it is also read without the lock when a task
      is being scheduled and checking whether or not the current ASID
      generation is up-to-date.
      
      If two threads of the same process are being scheduled in parallel and
      the bottom bits of the generation in their mm->context.id match the
      current generation (that is, the mm_struct has not been used for ~2^24
      rollovers) then the non-atomic, lockless access to mm->context.id may
      yield the incorrect ASID.
      
This patch fixes the issue by making mm->context.id an atomic64_t,
ensuring that the generation is always read consistently. For code that
only requires access to the ASID bits (e.g. TLB flushing by mm), the
value is accessed directly, which GCC converts to an ldrb.
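
A rough sketch of the scheduler-side fast path after this change
(asid_generation, ASID_BITS and new_context are names from
arch/arm/mm/context.c; the exact code differs):

        /* Read generation+ASID as one consistent 64-bit value. */
        u64 id = atomic64_read(&mm->context.id);

        /* Stale generation?  Allocate a new ASID under asid_lock. */
        if ((id ^ atomic64_read(&asid_generation)) >> ASID_BITS)
                new_context(mm, cpu);

        cpu_switch_mm(mm->pgd, mm);     /* uses only the low ASID bits */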
      
      Cc: <stable@vger.kernel.org> # 3.8
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      8a4e3a9e
2. 12 Feb 2013, 2 commits
3. 24 Jan 2013, 1 commit
• KVM: ARM: World-switch implementation · f7ed45be
  Authored by Christoffer Dall
Provides a complete world-switch implementation to switch to other guests
running in non-secure modes. Includes Hyp exception handlers that
capture the necessary exception information and store it on the VCPU
and KVM structures.
      
      The following Hyp-ABI is also documented in the code:
      
      Hyp-ABI: Calling HYP-mode functions from host (in SVC mode):
         Switching to Hyp mode is done through a simple HVC #0 instruction. The
         exception vector code will check that the HVC comes from VMID==0 and if
         so will push the necessary state (SPSR, lr_usr) on the Hyp stack.
         - r0 contains a pointer to a HYP function
         - r1, r2, and r3 contain arguments to the above function.
         - The HYP function will be called with its arguments in r0, r1 and r2.
         On HYP function return, we return directly to SVC.
      
      A call to a function executing in Hyp mode is performed like the following:
      
              <svc code>
              ldr     r0, =BSYM(my_hyp_fn)
              ldr     r1, =my_param
              hvc #0  ; Call my_hyp_fn(my_param) from HYP mode
              <svc code>
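
On the C side, such calls go through a small variadic trampoline; a
sketch of how a HYP function is invoked from the host (kvm_call_hyp and
__kvm_vcpu_run are the names used in arch/arm/kvm, shown illustratively):

        /* Declared in C, implemented in asm: moves r1-r3 into r0-r2
         * and issues hvc #0 as described above. */
        unsigned long kvm_call_hyp(void *hypfn, ...);

        /* e.g. enter the guest via the HYP-mode world switch */
        ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);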
      
Otherwise, the world-switch is pretty straightforward. All state that
can be modified by the guest is first backed up on the Hyp stack and the
VCPU values are loaded onto the hardware. State which is not loaded, but
theoretically modifiable by the guest, is protected through the
virtualization features so that it generates a trap and causes software
emulation. Upon guest return, all state is restored from the hardware
onto the VCPU struct, and the original state is restored from the Hyp
stack onto the hardware.
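
In pseudocode, the sequence above looks roughly like this (all helper
names are illustrative, not the real assembly):

        save_host_state_to_hyp_stack();         /* host regs, cp15, ... */
        load_guest_state_from_vcpu(vcpu);       /* VCPU values onto hw */

        run_guest_until_trap();                 /* eret into the guest */

        save_guest_state_to_vcpu(vcpu);         /* hw state back to VCPU */
        restore_host_state_from_hyp_stack();    /* original host state */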
      
SMP support uses a VMPIDR calculated on the basis of the host MPIDR,
overriding the low bits with the KVM vcpu_id; contributed by Marc
Zyngier.
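
That calculation amounts to something like the following sketch
(read_cpuid_mpidr() is the real helper; the write itself happens in the
world-switch code):

        /* virtual MPIDR = host MPIDR with the low byte replaced */
        vmpidr = (read_cpuid_mpidr() & ~0xff) | vcpu->vcpu_id;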
      
Reuse of VMIDs has been implemented by Antonios Motakis and adapted from
a separate patch into the appropriate patches introducing the
functionality. Note that the VMIDs are stored per VM, as required by the
ARM Architecture Reference Manual.
      
To support VFP/NEON we trap those instructions using the HCPTR. When
we trap, we switch the FPU.  After a guest exit, the VFP state is
returned to the host.  When disabling access to floating point
instructions, we also mask FPEXC_EN in order to avoid the guest
receiving Undefined instruction exceptions before we have a chance to
switch back the floating point state.  We reuse vfp_hard_struct, so we
depend on VFPv3 being enabled in the host kernel; if it is not, we still
trap cp10 and cp11 in order to inject an undefined instruction exception
whenever the guest tries to use VFP/NEON. VFP/NEON support was developed
by Antonios Motakis and Rusty Russell.
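
The disable path looks roughly like this sketch (HCPTR_TCP, fmrx/fmxr
and FPEXC_EN are real kernel names; set_hcptr is illustrative here):

        /* Trap guest accesses to cp10/cp11 (VFP/NEON) into Hyp mode. */
        set_hcptr(hcptr | HCPTR_TCP(10) | HCPTR_TCP(11));

        /* Mask FPEXC_EN so the guest traps to us instead of taking an
         * Undefined instruction exception itself. */
        fmxr(FPEXC, fmrx(FPEXC) & ~FPEXC_EN);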
      
Aborts that are permission faults, rather than stage-1 page table walk
faults, do not report the faulting address in the HPFAR.  We have to
resolve the IPA ourselves and store it on the VCPU struct, just like the
HPFAR register.  If the IPA cannot be resolved, it means another CPU is
playing with the page tables, and we simply restart the guest.  This
quirk was fixed by Marc Zyngier.
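
In pseudocode, the quirk handling amounts to the following (all names
here are illustrative):

        if (is_permission_fault(hsr) && !hpfar_is_valid()) {
                if (!resolve_ipa(hxfar, &ipa))    /* stage-1 walk raced */
                        return restart_guest();   /* just re-enter */
                vcpu_store_hpfar(vcpu, ipa);      /* as if hw gave it */
        }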
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Antonios Motakis <a.motakis@virtualopensystems.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
      f7ed45be
4. 29 Aug 2012, 1 commit
5. 17 Oct 2011, 1 commit
6. 10 Jul 2011, 1 commit
• ARM: vfp: fix a hole in VFP thread migration · f8f2a852
  Authored by Russell King
Fix a hole in VFP thread migration.  Let's define two threads.
      
Thread 1, which we'll call 'interesting_thread', is a thread running
on CPU0 and using VFP (so vfp_current_hw_state[0] =
&interesting_thread->vfpstate); it gets migrated off to CPU1, where it
continues executing VFP instructions.
      
Thread 2, which we'll call 'new_cpu0_thread', is the thread that takes
over on CPU0.  It has also been using VFP, and last used VFP on CPU0,
but doesn't use it again.
      
      The following code will be executed twice:
      
      		cpu = thread->cpu;
      
      		/*
      		 * On SMP, if VFP is enabled, save the old state in
      		 * case the thread migrates to a different CPU. The
      		 * restoring is done lazily.
      		 */
      		if ((fpexc & FPEXC_EN) && vfp_current_hw_state[cpu]) {
      			vfp_save_state(vfp_current_hw_state[cpu], fpexc);
      			vfp_current_hw_state[cpu]->hard.cpu = cpu;
      		}
      		/*
      		 * Thread migration, just force the reloading of the
      		 * state on the new CPU in case the VFP registers
      		 * contain stale data.
      		 */
      		if (thread->vfpstate.hard.cpu != cpu)
      			vfp_current_hw_state[cpu] = NULL;
      
      The first execution will be on CPU0 to switch away from 'interesting_thread'.
      interesting_thread->cpu will be 0.
      
      So, vfp_current_hw_state[0] points at interesting_thread->vfpstate.
      The hardware state will be saved, along with the CPU number (0) that
      it was executing on.
      
      'thread' will be 'new_cpu0_thread' with new_cpu0_thread->cpu = 0.
      Also, because it was executing on CPU0, new_cpu0_thread->vfpstate.hard.cpu = 0,
      and so the thread migration check is not triggered.
      
      This means that vfp_current_hw_state[0] remains pointing at interesting_thread.
      
      The second execution will be on CPU1 to switch _to_ 'interesting_thread'.
      So, 'thread' will be 'interesting_thread' and interesting_thread->cpu now
      will be 1.  The previous thread executing on CPU1 is not relevant to this
      so we shall ignore that.
      
      We get to the thread migration check.  Here, we discover that
      interesting_thread->vfpstate.hard.cpu = 0, yet interesting_thread->cpu is
      now 1, indicating thread migration.  We set vfp_current_hw_state[1] to
      NULL.
      
      So, at this point vfp_current_hw_state[] contains the following:
      
      [0] = &interesting_thread->vfpstate
      [1] = NULL
      
Our interesting thread now executes a VFP instruction and takes a
fault, which loads the state into the VFP hardware.  After the
fault-handling assembly runs, we have:
      
      [0] = &interesting_thread->vfpstate
      [1] = &interesting_thread->vfpstate
      
CPU1 stops due to ptrace (and so saves its VFP state using the thread
switch code above), and CPU0 calls vfp_sync_hwstate().
      
      	if (vfp_current_hw_state[cpu] == &thread->vfpstate) {
      		vfp_save_state(&thread->vfpstate, fpexc | FPEXC_EN);
      
      BANG, we corrupt interesting_thread's VFP state by overwriting the
      more up-to-date state saved by CPU1 with the old VFP state from CPU0.
      
Fix this by ensuring that we have sane semantics for the various
state-describing variables:

1. vfp_current_hw_state[] points to the current owner of the context
   information stored in each CPU's hardware, or NULL if that state
   information is invalid.
2. thread->vfpstate.hard.cpu always contains the most recent CPU number
   into which the state was loaded, or NR_CPUS if no CPU owns the state.
      
      So, for a particular CPU to be a valid owner of the VFP state for a
      particular thread t, two things must be true:
      
       vfp_current_hw_state[cpu] == &t->vfpstate && t->vfpstate.hard.cpu == cpu.
      
      and that is valid from the moment a CPU loads the saved VFP context
      into the hardware.  This gives clear and consistent semantics to
      interpreting these variables.
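
The patch captures this two-part ownership test in a small helper,
along these lines:

        /*
         * Is 'thread's most up to date state stored in this CPU's
         * hardware?  Must be called from non-preemptible context.
         */
        static bool vfp_state_in_hw(unsigned int cpu,
                                    struct thread_info *thread)
        {
        #ifdef CONFIG_SMP
                if (thread->vfpstate.hard.cpu != cpu)
                        return false;
        #endif
                return vfp_current_hw_state[cpu] == &thread->vfpstate;
        }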
      
This patch also fixes thread copying, ensuring that t->vfpstate.hard.cpu
is invalidated; otherwise CPU0 may believe it was the last owner (a
sketch of the copy fix follows the example below).  The hole can happen
thus:
      
- thread1 runs on CPU2 using VFP, migrates to CPU3, exits, and its
  thread_info is freed.
- A new thread is allocated from a previously running thread on CPU2,
  reusing the memory for thread1 and copying vfp.hard.cpu.
      
      At this point, the following are true:
      
      	new_thread1->vfpstate.hard.cpu == 2
      	&new_thread1->vfpstate == vfp_current_hw_state[2]
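
The copy fix, as a sketch (vfp_sync_hwstate and NR_CPUS are real names;
the shape follows the vfpmodule.c thread notifier):

        static void vfp_thread_copy(struct thread_info *thread)
        {
                struct thread_info *parent = current_thread_info();

                vfp_sync_hwstate(parent);
                thread->vfpstate = parent->vfpstate;
        #ifdef CONFIG_SMP
                /* no CPU owns the new thread's state yet */
                thread->vfpstate.hard.cpu = NR_CPUS;
        #endif
        }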
      
Lastly, this also addresses thread flushing in a similar way to thread
copying; the fix is sketched after the example below.  The hole is:

- thread runs on CPU0 using VFP, migrates to CPU1, but does not use VFP.
- thread calls execve(), so a thread flush happens, leaving
  vfp_current_hw_state[0] intact.  The vfpstate is memset to 0, causing
  thread->vfpstate.hard.cpu = 0.
      - thread migrates back to CPU0 before using VFP.
      
      At this point, the following are true:
      
      	thread->vfpstate.hard.cpu == 0
      	&thread->vfpstate == vfp_current_hw_state[0]
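
The flush fix mirrors this; roughly (union vfp_state and NR_CPUS are
the real names):

        memset(&thread->vfpstate, 0, sizeof(union vfp_state));
        #ifdef CONFIG_SMP
                /* don't let the zeroed field be mistaken for CPU0 */
                thread->vfpstate.hard.cpu = NR_CPUS;
        #endif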
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      f8f2a852
7. 23 Feb 2011, 1 commit
8. 12 Feb 2011, 1 commit
9. 20 Oct 2010, 1 commit
• arm: remove machine_desc.io_pg_offst and .phys_io · 6451d778
  Authored by Nicolas Pitre
      Since we're now using addruart to establish the debug mapping, we can
      remove the io_pg_offst and phys_io members of struct machine_desc.
      
      The various declarations were removed using the following script:
      
        grep -rl MACHINE_START arch/arm | xargs \
        sed -i '/MACHINE_START/,/MACHINE_END/ { /\.\(phys_io\|io_pg_offst\)/d }'
      
      [ Initial patch was from Jeremy Kerr, example script from Russell King ]
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Acked-by: Eric Miao <eric.miao at canonical.com>
      6451d778
10. 15 Jun 2010, 1 commit
  • ARM: stack protector: change the canary value per task · df0698be
    Authored by Nicolas Pitre
      A new random value for the canary is stored in the task struct whenever
      a new task is forked.  This is meant to allow for different canary values
      per task.  On ARM, GCC expects the canary value to be found in a global
      variable called __stack_chk_guard.  So this variable has to be updated
      with the value stored in the task struct whenever a task switch occurs.
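
A sketch of the pieces involved (the real switch-time update is done in
the ARM context-switch assembly; get_random_int() is the real kernel
call, the surrounding lines are illustrative):

        unsigned long __stack_chk_guard __read_mostly;  /* what GCC reads */

        /* at fork: give the new task its own random canary */
        p->stack_canary = get_random_int();

        /* at task switch (conceptually): publish next's canary */
        __stack_chk_guard = next->stack_canary;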
      
Because the variable GCC expects is a global, this unfortunately cannot
work on SMP.  So, on SMP, the same initial canary value is kept
throughout, making this feature a bit less effective, although it is
still useful.
      
One way to overcome this GCC limitation would be to locate the
__stack_chk_guard variable in a memory page of its own per CPU, and
then use TLB locking to have each CPU see its own page at the same
virtual address.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      df0698be
11. 15 Feb 2010, 1 commit
12. 29 Apr 2008, 1 commit
13. 19 Apr 2008, 2 commits
14. 17 May 2007, 1 commit
• [ARM] ARMv6: add CPU_HAS_ASID configuration · 516793c6
  Authored by Russell King
Presently, we check the minimum ARM architecture that we're building
for to determine whether we need ASID support.  This is wrong: if we're
going to support a range of CPUs which includes ARMv6 or higher, we
need the ASID.
      
      Convert the checks to use a new configuration symbol, and arrange
      for ARMv6 and higher CPU entries to select it.
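
The shape of the conversion, as a sketch (CONFIG_CPU_HAS_ASID and
__LINUX_ARM_ARCH__ are real symbols; the macro shown is illustrative):

        /* before: inferred from the architecture version */
        #if __LINUX_ARM_ARCH__ >= 6
        #define use_asid 1
        #endif

        /* after: explicit feature symbol, selected by ARMv6+ entries */
        #ifdef CONFIG_CPU_HAS_ASID
        #define use_asid 1
        #endif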
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      516793c6
15. 30 Nov 2006, 1 commit
16. 30 Jun 2006, 1 commit
• [ARM] Set bit 4 on section mappings correctly depending on CPU · 8799ee9f
  Authored by Russell King
On some CPUs, bit 4 of section mappings means "update the
cache when written to".  On others, this bit is required to
be one, and on others it's required to be zero.  Finally, on
ARMv6 and above, setting it turns on "no execute" and prevents
speculative prefetches.
      
      With all these combinations, no one value fits all CPUs, so we
      have to pick a value depending on the CPU type, and the area
      we're mapping.
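
Illustratively, the per-CPU decision looks something like this
(cpu_arch, CPU_ARCH_ARMv6 and PMD_BIT4 are real kernel names; the
surrounding logic is a sketch):

        if (cpu_arch < CPU_ARCH_ARMv6) {
                if (cpu_requires_bit4_set)      /* per-core requirement */
                        prot_sect |= PMD_BIT4;
        } else if (!region_executable) {
                /* v6+: bit 4 is XN; set it only where execution is banned */
                prot_sect |= PMD_BIT4;
        }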
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      8799ee9f
17. 29 Jun 2006, 1 commit
18. 16 May 2006, 1 commit
19. 05 May 2006, 1 commit
20. 13 Mar 2006, 1 commit
21. 09 Jan 2006, 1 commit
22. 30 Oct 2005, 1 commit
23. 26 Apr 2005, 1 commit
24. 17 Apr 2005, 1 commit
• Linux-2.6.12-rc2 · 1da177e4
  Authored by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!
      1da177e4