1. 28 Apr 2006 · 1 commit
  2. 21 Apr 2006 · 1 commit
  3. 05 Apr 2006 · 1 commit
  4. 29 Mar 2006 · 2 commits
  5. 28 Mar 2006 · 1 commit
  6. 27 Mar 2006 · 1 commit
  7. 25 Mar 2006 · 1 commit
  8. 24 Mar 2006 · 1 commit
  9. 23 Mar 2006 · 1 commit
  10. 08 Mar 2006 · 1 commit
  11. 28 Feb 2006 · 2 commits
  12. 16 Feb 2006 · 7 commits
  13. 10 Feb 2006 · 2 commits
  14. 09 Feb 2006 · 1 commit
  15. 08 Feb 2006 · 3 commits
  16. 07 Feb 2006 · 1 commit
    • [IA64-SGI] Shub2 BTE address fix · 913e4a75
      Authored by Russ Anderson
      After converting the cpu physical address to shub2 physical
      addressing, the address was run through TO_PHYS(), which
      clobbered a high node-offset bit and caused the BTE to fail
      on shub2 nodes with large memory.  This fix corrects
      that problem.
      
      Signed-off-by: Russ Anderson (rja@sgi.com)
      Signed-off-by: Tony Luck <tony.luck@intel.com>
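      For illustration only, a minimal self-contained C sketch of the failure mode this commit describes; the mask width, bit positions, and macro names below are hypothetical stand-ins, not the actual SN2/Shub2 definitions:

      #include <stdint.h>
      #include <stdio.h>

      /* Hypothetical mask: one bit narrower than the converted address format. */
      #define FAKE_PHYS_MASK    ((1ULL << 50) - 1)
      #define FAKE_TO_PHYS(a)   ((a) & FAKE_PHYS_MASK)

      int main(void)
      {
              /* Assume the node-offset field reaches bit 50 on large-memory nodes. */
              uint64_t converted = (1ULL << 50) | 0x1000;

              /* The high node-offset bit is clobbered by the narrower mask, so a
               * BTE-style transfer would be aimed at the wrong node.              */
              printf("before mask: %#llx\n", (unsigned long long)converted);
              printf("after mask:  %#llx\n", (unsigned long long)FAKE_TO_PHYS(converted));
              return 0;
      }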
  17. 03 Feb 2006 · 4 commits
  18. 27 Jan 2006 · 4 commits
    • [IA64] hooks to wait for mmio writes to drain when migrating processes · e08e6c52
      Authored by Brent Casavant
      On SN2, MMIO writes which are issued from separate processors are not
      guaranteed to arrive in any particular order at the IO hardware.  When
      performing such writes from the kernel this is not a problem, as a
      kernel thread will not migrate to another CPU during execution, and
      mmiowb() calls can guarantee write ordering when control of the IO
      resource is allowed to move between threads.
      
      However, when MMIO writes can be performed from user space (e.g. DRM)
      there are no such guarantees and mechanisms, as the process may
      context-switch at any time, and may migrate to a different CPU as part
      of the switch.  For such programs/hardware to operate correctly, it is
      required that the MMIO writes from the old CPU be accepted by the IO
      hardware before subsequent writes from the new CPU can be issued.
      
      The following patch implements this behavior on SN2 by waiting for a
      Shub register to indicate that these writes have been accepted.  This
      is placed in the context switch-in path, and only performs the wait
      when the newly scheduled task changes CPUs.
      Signed-off-by: Prarit Bhargava <prarit@sgi.com>
      Signed-off-by: Brent Casavant <bcasavan@sgi.com>
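      A minimal, self-contained C sketch of the switch-in logic described above; the drain check is simulated and every name is invented for illustration, not taken from the SN2 sources:

      #include <stdbool.h>
      #include <stdio.h>

      /* Stand-in for reading a Shub drain-status register for the old CPU. */
      static bool fake_mmio_writes_drained(int cpu)
      {
              (void)cpu;
              return true;
      }

      /* Wait only when the newly scheduled task has actually changed CPUs. */
      static void switch_in_mmio_sync(int prev_cpu, int new_cpu)
      {
              if (prev_cpu == new_cpu)
                      return;                         /* no migration, no wait */
              while (!fake_mmio_writes_drained(prev_cpu))
                      ;                               /* poll until writes accepted */
      }

      int main(void)
      {
              switch_in_mmio_sync(0, 1);              /* migrated: would wait      */
              switch_in_mmio_sync(2, 2);              /* same CPU: returns at once */
              puts("done");
              return 0;
      }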
    • [IA64-SGI] Update TLB flushing code for SN platform · 61a34a02
      Authored by Jack Steiner
      This patch finishes support for SHUB2 (the new chipset). Most of the
      changes are performance related. A few changes are workarounds for
      "interesting" chipset features.
      
      Some temporary debugging code has also been deleted.
      Signed-off-by: Jack Steiner <steiner@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
    • [IA64-SGI] Add PROM feature set for device flush list · 61d67f2e
      Authored by Prarit Bhargava
      Introduce PRF_DEVICE_FLUSH_LIST flag for older PROMs.
      Signed-off-by: Prarit Bhargava <prarit@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
    • [IA64-SGI] Recursive flags do not work for selective builds · 103ec091
      Authored by Keith Owens
      arch/ia64/sn/Makefile sets CPPFLAGS, expecting that setting to
      propagate to all the subdirectories.  For a normal build with its
      recursive descent this works, but a selective build such as
      'make arch/ia64/sn/kernel/io_init.i' does not do a recursive descent;
      it goes directly to arch/ia64/sn/kernel/Makefile, so the flags never
      get set.
      
      To support selective builds, set the flags in all the subordinate Makefiles.
      Signed-off-by: Keith Owens <kaos@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
  19. 25 Jan 2006 · 1 commit
  20. 18 Jan 2006 · 2 commits
  21. 17 Jan 2006 · 1 commit
  22. 14 Jan 2006 · 1 commit
    • [IA64-SGI] Fix sn_flush_device_kernel & spinlock initialization · 6d6e4200
      Authored by Prarit Bhargava
      This patch separates the sn_flush_device_list struct into kernel and
      common (both kernel and PROM accessible) structures.  As it was, if the
      size of a spinlock_t changed (due to additional CONFIG options, etc.), the
      SAL call which populated the sn_flush_device_list structs would erroneously
      write data (and cause memory corruption and/or a panic).
      
      This patch does the following:
      
      1.  Removes sn_flush_device_list and adds sn_flush_device_common and
      sn_flush_device_kernel.
      
      2.  Adds a new SAL call to populate a sn_flush_device_common struct per
      device, not per widget as previously done.
      
      3.  Correctly initializes each device's sn_flush_device_kernel spinlock_t
      struct (previously only the first device of each widget was initialized).
      Signed-off-by: Prarit Bhargava <prarit@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
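      As an illustration of items 1-3 above, a hypothetical, self-contained C sketch of the common/kernel split; the type and field names are invented here, not the actual SN2 definitions.  Only the common struct would be written by the SAL call, so its layout stays independent of kernel CONFIG options, while the variable-size lock lives in the kernel-only wrapper:

      #include <stdint.h>
      #include <stdio.h>

      /* Stand-in for the kernel's spinlock_t, whose size can vary with CONFIG. */
      typedef struct { volatile int locked; } fake_spinlock_t;

      /* PROM/SAL-visible part: fixed-size fields only, one per device. */
      struct fake_flush_device_common {
              uint64_t flush_addr;
              uint32_t widget;
              uint32_t device;
      };

      /* Kernel-only wrapper: holds the lock and points at the SAL-filled data. */
      struct fake_flush_device_kernel {
              fake_spinlock_t                   lock;
              struct fake_flush_device_common  *common;
      };

      int main(void)
      {
              /* Changing the lock's size now alters only the kernel wrapper,
               * never the layout that the SAL call writes into.              */
              printf("common: %zu bytes, kernel wrapper: %zu bytes\n",
                     sizeof(struct fake_flush_device_common),
                     sizeof(struct fake_flush_device_kernel));
              return 0;
      }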