1. 07 Feb 2009, 25 commits
  2. 31 Jan 2009, 8 commits
  3. 30 Jan 2009, 1 commit
  4. 25 Jan 2009, 1 commit
    • x86: use standard PIT frequency · e1b4d114
      Authored by Ingo Molnar
      The RDC and ELAN platforms use slightly different PIT clocks, resulting in
      a timex.h hack that changes PIT_TICK_RATE at build time. But if a tester
      enables either of these platform-support .config options, the PIT will be
      miscalibrated on standard PC platforms.
      
      So use one frequency; a subsequent patch will add a quirk allowing x86
      platforms to define different PIT frequencies. (A sketch of the build-time
      hack follows this entry.)
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
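      The hack being removed is a conditional PIT_TICK_RATE definition in
      timex.h. A minimal sketch of its shape, assuming the pre-patch
      arch/x86/include/asm/timex.h (the ELAN/RDC values are quoted from memory
      and should be treated as illustrative):
      
      #ifdef CONFIG_X86_ELAN
      #  define PIT_TICK_RATE 1189200   /* AMD Elan uses a different PIT clock */
      #elif defined(CONFIG_X86_RDC321X)
      #  define PIT_TICK_RATE 1041667   /* RDC boards use yet another clock */
      #else
      #  define PIT_TICK_RATE 1193182   /* standard PC PIT: 1.193182 MHz */
      #endif
      
      After the patch only the standard 1193182 Hz definition remains, so
      enabling ELAN or RDC support no longer skews calibration on ordinary PCs.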
  5. 24 Jan 2009, 1 commit
    • x86, mm: fix pte_free() · 42ef73fe
      Authored by Peter Zijlstra
      On -rt we were seeing spurious bad page states like:
      
      Bad page state in process 'firefox'
      page:c1bc2380 flags:0x40000000 mapping:c1bc2390 mapcount:0 count:0
      Trying to fix it up, but a reboot is needed
      Backtrace:
      Pid: 503, comm: firefox Not tainted 2.6.26.8-rt13 #3
      [<c043d0f3>] ? printk+0x14/0x19
      [<c0272d4e>] bad_page+0x4e/0x79
      [<c0273831>] free_hot_cold_page+0x5b/0x1d3
      [<c02739f6>] free_hot_page+0xf/0x11
      [<c0273a18>] __free_pages+0x20/0x2b
      [<c027d170>] __pte_alloc+0x87/0x91
      [<c027d25e>] handle_mm_fault+0xe4/0x733
      [<c043f680>] ? rt_mutex_down_read_trylock+0x57/0x63
      [<c043f680>] ? rt_mutex_down_read_trylock+0x57/0x63
      [<c0218875>] do_page_fault+0x36f/0x88a
      
      This is the case where a concurrent fault already installed the PTE and
      we get to free the newly allocated one.
      
      This is due to pgtable_page_ctor() doing spin_lock_init(&page->ptl), where
      ptl is overlaid with the {private, mapping} fields of struct page:
      
      /* from struct page (include/linux/mm_types.h): these members share storage */
      union {
          struct {
              unsigned long private;
              struct address_space *mapping;   /* must be NULL when the page is freed */
          };
          spinlock_t ptl;                      /* split pte lock: overlays private/mapping */
          struct kmem_cache *slab;
          struct page *first_page;
      };
      
      Normally the spinlock is small enough to not stomp on page->mapping, but
      PREEMPT_RT=y has huge 'spin'locks.
      
      But lockdep kernels should also be able to trigger this splat, as the
      lock tracking code grows the spinlock to cover page->mapping.
      
      The obvious fix is to call pgtable_page_dtor(), as the regular pte-free
      path __pte_free_tlb() already does (see the sketch after this entry).
      
      It seems all architectures except x86 and mn10300 already do this, and
      mn10300 doesn't appear to use pgtable_page_ctor() at all, which suggests
      it doesn't do SMP, or perhaps doesn't use an MMU.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: <stable@kernel.org>
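      A minimal sketch of the fix described above, assuming the x86 pte_free()
      in arch/x86/include/asm/pgalloc.h of that era (illustrative of the shape,
      not the exact diff):
      
      static inline void pte_free(struct mm_struct *mm, struct page *pte)
      {
              /* undo pgtable_page_ctor(): re-initialize the fields that
               * spin_lock_init(&page->ptl) overlaid, so page->mapping is
               * NULL again before the page reaches the allocator */
              pgtable_page_dtor(pte);
              __free_page(pte);
      }
      
      With the dtor in place, the lose-the-race path in __pte_alloc() that frees
      a spare pte page no longer hands the allocator a page whose ->mapping was
      clobbered, and the "Bad page state" splat above goes away.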
  6. 22 Jan 2009, 1 commit
  7. 21 Jan 2009, 1 commit
  8. 16 Jan 2009, 1 commit
    • x86: fix assumed to be contiguous leaf page tables for kmap_atomic region (take 2) · a3c6018e
      Authored by Jan Beulich
      Debugging and original patch from Nick Piggin <npiggin@suse.de>
      
      The early fixmap pmd entry inserted at the very top of the KVA causes the
      subsequent fixmap mapping code not to provide physically contiguous pte
      pages over the kmap_atomic portion of the fixmap, which relies on that
      property to calculate pte addresses (see the sketch after this entry).
      
      This has caused weird boot failures in kmap_atomic much later in the boot
      process (at the initial userspace faults) on 32-bit PAE systems with a
      larger number of CPUs; smaller CPU counts tend not to run over into the
      next page, so they don't expose the problem.
      
      Solve this by attempting to clear out the page table and copying any of
      its entries to the new one. Also, BUG() if a nonlinear condition is
      encountered that can't be resolved, which might save some hours of
      debugging if this fragile scheme ever breaks again...
      
      Once we have such logic, we can also use it to eliminate the early ioremap
      trickery around the page table setup for the fixmap area. This also fixes
      potential issues with FIX_* entries sharing the leaf page table with the early
      ioremap ones getting discarded by early_ioremap_clear() and not restored by
      early_ioremap_reset(). At the same time, it eliminates the temporary (and
      configuration-dependent, namely on NR_CPUS) unavailability of early fixed
      mappings while the fixmap area page tables are being constructed.
      
      Finally, replace the hard-coded calculation of the initial table space
      needed for the fixmap area with a proper one, allowing kernels configured
      for large CPU counts to actually boot.
      
      Based-on: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
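      The contiguity assumption being repaired shows up in the 32-bit
      kmap_atomic() path, which locates a slot's pte by pointer arithmetic from
      a single cached pte pointer. A rough sketch, assuming the
      arch/x86/mm/highmem_32.c shape of that era (illustrative, not the exact
      code):
      
      /* kmap_pte was resolved once at boot for FIX_KMAP_BEGIN; every other
       * kmap slot's pte is reached by pure pointer arithmetic, which is only
       * valid if the leaf page tables backing the kmap area are physically
       * contiguous (fixmap indices grow downward in virtual addresses): */
      idx = type + KM_TYPE_NR * smp_processor_id();
      vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
      set_pte(kmap_pte - idx, mk_pte(page, prot));
      
      With enough CPUs, FIX_KMAP_BEGIN + idx crosses into the next pte page; if
      that page is not physically adjacent, kmap_pte - idx points at the wrong
      pte, matching the late-boot failures described above.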
  9. 15 Jan 2009, 1 commit