1. 24 Mar, 2011 · 5 commits
  2. 23 Mar, 2011 · 3 commits
  3. 21 Mar, 2011 · 1 commit
    • introduce sys_syncfs to sync a single file system · b7ed78f5
      Sage Weil authored
      It is frequently useful to sync a single file system, instead of all
      mounted file systems via sync(2):
      
       - On machines with many mounts, it is not at all uncommon for some of
         them to hang (e.g. unresponsive NFS server).  sync(2) will get stuck on
         those and may never get to the one you do care about (e.g., /).
       - Some applications write lots of data to the file system and then
         want to make sure it is flushed to disk.  Calling fsync(2) on each
         file introduces unnecessary ordering constraints that result in a large
         amount of sub-optimal writeback/flush/commit behavior by the file
         system.
      
      There are currently two ways (that I know of) to sync a single super_block:
      
       - BLKFLSBUF ioctl on the block device: That also invalidates the bdev
         mapping, which isn't usually desirable, and doesn't work for non-block
         file systems.
       - 'mount -o remount,rw' will call sync_filesystem as an artifact of the
         current implementation.  Relying on this little-known side effect for
         something like data safety sounds foolish.
      
      Both of these approaches require root privileges, which some applications
      do not have (nor should they need?) given that sync(2) is an unprivileged
      operation.
      
      This patch introduces a new system call syncfs(2) that takes an fd and
      syncs only the file system it references.  Maybe someday we can
      
       $ sync /some/path
      
      and not get
      
       sync: ignoring all arguments
      
      The syscall is motivated by comments by Al and Christoph at the last LSF.
      syncfs(2) seems like an appropriate name given statfs(2).
      
      A similar ioctl was also proposed a while back, see
      	http://marc.info/?l=linux-fsdevel&m=127970513829285&w=2
      Signed-off-by: Sage Weil <sage@newdream.net>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      b7ed78f5
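
      As an aside, a minimal user-space sketch of how the new call can be
      exercised (assumes <sys/syscall.h> exposes SYS_syncfs on a kernel that
      carries this patch; illustrative, not part of the commit):

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/syscall.h>

        int main(void)
        {
                /* Sync only the file system that contains /some/path. */
                int fd = open("/some/path", O_RDONLY);
                if (fd < 0) {
                        perror("open");
                        return 1;
                }
                if (syscall(SYS_syncfs, fd) < 0) {
                        perror("syncfs");
                        return 1;
                }
                close(fd);
                return 0;
        }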
  4. 18 Mar, 2011 · 10 commits
    • x86: Flush TLB if PGD entry is changed in i386 PAE mode · 4981d01e
      Shaohua Li authored
      According to the Intel CPU manual, every time a PGD entry is changed in
      i386 PAE mode, we need to do a full TLB flush. The current code follows
      this, and there is a comment about it in the code as well.
      
      But the current code misses the multi-threaded case. A changed page table
      might be in use by several CPUs, and every such CPU should flush its TLB.
      Usually this isn't a problem, because we prepopulate all PGD entries at
      process fork. But when the process does munmap followed by a new mmap,
      this issue is triggered.
      
      When it happens, some CPUs keep doing page faults:
      
        http://marc.info/?l=linux-kernel&m=129915020508238&w=2
      
      Reported-by: Yasunori Goto <y-goto@jp.fujitsu.com>
      Tested-by: Yasunori Goto <y-goto@jp.fujitsu.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Cc: Mallick Asit K <asit.k.mallick@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: linux-mm <linux-mm@kvack.org>
      Cc: stable <stable@kernel.org>
      LKML-Reference: <1300246649.2337.95.camel@sli10-conroe>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4981d01e
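
      A hedged sketch of the rule the fix enforces (the helper name is made up;
      the real change lives in the i386 PAE page-table code):

        /* Sketch only: clearing a PGD/PUD entry in PAE mode must be
         * followed by a TLB flush on every CPU running this mm, not
         * just the local CPU, or sibling threads keep stale entries. */
        static inline void pae_clear_pud(struct mm_struct *mm, pud_t *pudp)
        {
                set_pud(pudp, __pud(0));
                flush_tlb_mm(mm);       /* IPIs all CPUs using this mm */
        }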
    • x86, dumpstack: Correct stack dump info when frame pointer is available · e8e999cf
      Namhyung Kim authored
      The current stack dump code scans the entire stack and checks whether
      each entry contains a pointer to kernel code. If CONFIG_FRAME_POINTER=y
      it can mark whether the pointer is valid or not based on the value of
      the frame pointer; invalid entries are preceded by a '?' sign.
      
      However, this was not happening, because the scan start point was
      always higher than the frame pointer, so the two could never meet.
      
      Commit 9c0729dc ("x86: Eliminate bp argument from the stack
      tracing routines") delayed the bp acquisition point, so bp was
      read in a lower frame, and thus all of the entries were marked
      invalid.
      
      This patch fixes this by reverting the above commit while retaining the
      stack_frame() helper, as suggested by Frederic Weisbecker.
      
      End result looks like below:
      
      before:
      
       [    3.508329] Call Trace:
       [    3.508551]  [<ffffffff814f35c9>] ? panic+0x91/0x199
       [    3.508662]  [<ffffffff814f3739>] ? printk+0x68/0x6a
       [    3.508770]  [<ffffffff81a981b2>] ? mount_block_root+0x257/0x26e
       [    3.508876]  [<ffffffff81a9821f>] ? mount_root+0x56/0x5a
       [    3.508975]  [<ffffffff81a98393>] ? prepare_namespace+0x170/0x1a9
       [    3.509216]  [<ffffffff81a9772b>] ? kernel_init+0x1d2/0x1e2
       [    3.509335]  [<ffffffff81003894>] ? kernel_thread_helper+0x4/0x10
       [    3.509442]  [<ffffffff814f6880>] ? restore_args+0x0/0x30
       [    3.509542]  [<ffffffff81a97559>] ? kernel_init+0x0/0x1e2
       [    3.509641]  [<ffffffff81003890>] ? kernel_thread_helper+0x0/0x10
      
      after:
      
       [    3.522991] Call Trace:
       [    3.523351]  [<ffffffff814f35b9>] panic+0x91/0x199
       [    3.523468]  [<ffffffff814f3729>] ? printk+0x68/0x6a
       [    3.523576]  [<ffffffff81a981b2>] mount_block_root+0x257/0x26e
       [    3.523681]  [<ffffffff81a9821f>] mount_root+0x56/0x5a
       [    3.523780]  [<ffffffff81a98393>] prepare_namespace+0x170/0x1a9
       [    3.523885]  [<ffffffff81a9772b>] kernel_init+0x1d2/0x1e2
       [    3.523987]  [<ffffffff81003894>] kernel_thread_helper+0x4/0x10
       [    3.524228]  [<ffffffff814f6880>] ? restore_args+0x0/0x30
       [    3.524345]  [<ffffffff81a97559>] ? kernel_init+0x0/0x1e2
       [    3.524445]  [<ffffffff81003890>] ? kernel_thread_helper+0x0/0x10
      
       -v5:
         * fix build breakage with oprofile
      
       -v4:
         * use 0 instead of regs->bp
         * separate out printk changes
      
       -v3:
         * apply comment from Frederic
         * add a couple of printk fixes
      Signed-off-by: Namhyung Kim <namhyung@gmail.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Soren Sandmann <ssp@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Robert Richter <robert.richter@amd.com>
      LKML-Reference: <1300416006-3163-1-git-send-email-namhyung@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e8e999cf
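
      For reference, a hedged sketch of the frame-pointer walk the dumper
      relies on (the struct mirrors the x86 stacktrace code; the walker is
      illustrative): addresses reached through the saved-bp chain are printed
      without the '?' prefix, while raw-scanned stack words keep it.

        struct stack_frame {
                struct stack_frame *next_frame;   /* saved bp */
                unsigned long return_address;
        };

        static void walk_reliable_frames(struct stack_frame *bp)
        {
                /* Illustrative: real code also checks bp stays on-stack. */
                while (bp) {
                        printk(" [<%016lx>] %pS\n", bp->return_address,
                               (void *)bp->return_address);
                        bp = bp->next_frame;
                }
        }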
    • x86: Fix common misspellings · 0d2eb44f
      Lucas De Marchi authored
      They were generated by 'codespell' and then manually reviewed.
      Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>
      Cc: trivial@kernel.org
      LKML-Reference: <1300389856-1099-3-git-send-email-lucas.demarchi@profusion.mobi>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0d2eb44f
    • KVM: MMU: cleanup pte write path · 0f53b5b1
      Xiao Guangrong authored
      This patch does:
      - call vcpu->arch.mmu.update_pte directly
      - use gfn_to_pfn_atomic in update_pte path
      
      The suggestion is from Avi.
      Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      0f53b5b1
    • KVM: emulator: Fix io permission checking for 64bit guest · 5601d05b
      Gleb Natapov authored
      The current implementation truncates the upper 32 bits of the TR base
      address during the I/O permission bitmap check. This patch fixes that.
      Reported-and-tested-by: Francis Moreau <francis.moro@gmail.com>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      5601d05b
    • KVM: MMU: do not record gfn in kvm_mmu_pte_write · 49b26e26
      Xiao Guangrong authored
      There is no need to record the gfn to verify that the pte has the same
      mode as the current vcpu, because we only speculatively update the pte
      if the pte and the vcpu have the same mode.
      Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      49b26e26
    • KVM: x86: Convert tsc_write_lock to raw_spinlock · 038f8c11
      Jan Kiszka authored
      Code under this lock requires non-preemptibility. Ensure this also holds
      on -rt by converting it to a raw spinlock.
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      038f8c11
    • KVM: Convert kvm_lock to raw_spinlock · e935b837
      Jan Kiszka authored
      Code under this lock requires non-preemptibility. Ensure this also holds
      on -rt by converting it to a raw spinlock.
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      e935b837
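
      Both conversions follow the same pattern; a hedged illustration (the lock
      and function names here are made up):

        static DEFINE_RAW_SPINLOCK(example_lock);    /* was DEFINE_SPINLOCK() */

        static void touch_shared_state(void)
        {
                raw_spin_lock(&example_lock);        /* was spin_lock() */
                /* ... section that must stay non-preemptible, even on -rt,
                 * because raw_spinlock_t never becomes a sleeping lock ... */
                raw_spin_unlock(&example_lock);      /* was spin_unlock() */
        }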
    • KVM: x86 emulator: vendor specific instructions · d867162c
      Avi Kivity authored
      Mark some instructions as vendor specific, and allow the caller to request
      emulation only of vendor specific instructions.  This is useful in some
      circumstances (responding to a #UD fault).
      Signed-off-by: Avi Kivity <avi@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      d867162c
    • KVM: x86: handle guest access to BBL_CR_CTL3 MSR · 91c9c3ed
      john cooper authored
      A correction to Intel cpu model CPUID data (patch queued)
      caused winxp to BSOD when booted with a Penryn model.
      This was traced to the CPUID "model" field correction from
      6 -> 23 (as is proper for a Penryn class of cpu).  Only in
      this case does the problem surface.
      
      The cause of this failure is winxp accessing the BBL_CR_CTL3
      MSR, which is unsupported by current kvm, appears to be a
      legacy MSR not fully characterized yet still present in current
      silicon, and is apparently carried forward in MSR space to
      accommodate vintage code such as this.  It is not yet conclusive
      whether this MSR implements any of its legacy functionality
      or is just an ornamental dud for compatibility.  While I
      found no silicon version specific documentation link to
      this MSR, a general description exists in Intel's developer's
      reference which agrees with the functional behavior of
      other bootloader/kernel code I've examined accessing
      BBL_CR_CTL3.  Regrettably winxp appears to be setting bit #19
      called out as "reserved" in the above document.
      
      So to minimally accommodate this MSR, kvm msr get will provide
      the equivalent mock data and kvm msr write will simply toss the
      guest passed data without interpretation.  While this treatment
      of BBL_CR_CTL3 addresses the immediate problem, the approach may
      be modified pending clarification from Intel.
      Signed-off-by: john cooper <john.cooper@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      91c9c3ed
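
      A hedged sketch of the minimal treatment described above (0x11e is the
      documented BBL_CR_CTL3 index; the helper names and mock value are
      illustrative, the real handling sits in kvm's MSR get/set paths):

        #define MSR_IA32_BBL_CR_CTL3  0x0000011e

        /* Reads hand back fixed mock data. */
        static int demo_get_bbl_cr_ctl3(u64 *data)
        {
                *data = 0;   /* illustrative; the patch returns a benign value */
                return 0;
        }

        /* Writes are accepted and dropped without interpretation. */
        static int demo_set_bbl_cr_ctl3(u64 data)
        {
                return 0;
        }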
  5. 15 Mar, 2011 · 2 commits
  6. 14 Mar, 2011 · 5 commits
    • xen/m2p: Check whether the MFN has IDENTITY_FRAME bit set.. · 706cc9d2
      Stefano Stabellini authored
      If there is no proper PFN value in the M2P for the MFN
      (so we get 0xFFFFF.. or 0x55555, or 0x0), we should
      consult the M2P override to see if there is an entry for this.
      [Note: we also consult the M2P override if the MFN
      is past our machine_to_phys size].
      
      We consult the P2M with the PFN. In case the returned
      MFN is one of the special values: 0xFFF.., 0x5555
      (which signify that the MFN can be either "missing" or it
      belongs to DOMID_IO) or the p2m(m2p(mfn)) != mfn, we check
      the M2P override. If we fail the M2P override check, we reset
      the PFN value to INVALID_P2M_ENTRY.
      
      Next we try to find the MFN in the P2M using the MFN
      value (not the PFN value) and if found, we know
      that this MFN is an identity value and return it as so.
      
      Otherwise we have exhausted all the possibilities and we
      return the PFN, which at this stage can either be a real
      PFN value found in the machine_to_phys.. array, or
      INVALID_P2M_ENTRY value.
      
      [v1: Added Review-by tag]
      Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      706cc9d2
    • xen/m2p: No need to catch exceptions when we know that there is no RAM · 146c4e51
      Konrad Rzeszutek Wilk authored
      .. beyond what we think is the end of memory. However, there might
      be more System RAM - but assigned to a guest. Hence jump to the
      M2P override check and consult it.
      
      [v1: Added Review-by tag]
      Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      146c4e51
    • xen/debugfs: Add 'p2m' file for printing out the P2M layout. · 2222e71b
      Konrad Rzeszutek Wilk authored
      We walk over the whole P2M tree and construct a simplified view of
      which PFN regions belong to what level and what type they are.
      
      Only enabled if CONFIG_XEN_DEBUG_FS is set.
      
      [v2: UNKN->UNKNOWN, use uninitialized_var]
      [v3: Rebased on top of mmu->p2m code split]
      [v4: Fixed the else if]
      Reviewed-by: Ian Campbell <Ian.Campbell@eu.citrix.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      2222e71b
    • xen/mmu: Add the notion of identity (1-1) mapping. · f4cec35b
      Konrad Rzeszutek Wilk authored
      Our P2M tree structure is three-level. On the leaf nodes
      we set the Machine Frame Number (MFN) of the PFN. What this means
      is that when one does pfn_to_mfn(pfn), which is used when creating
      PTE entries, you get the real MFN of the hardware. When Xen sets
      up a guest it initially populates an array with descending
      (or ascending) MFN values, like so:
      
       idx: 0,  1,       2
       [0x290F, 0x290E, 0x290D, ..]
      
      so pfn_to_mfn(2)==0x290D. If you start and restart many guests, that
      list starts looking quite random.
      
      We graft this structure onto our P2M tree structure and stick
      those MFNs in the leaves. But for all other leaf entries, or for the top
      root or middle ones, for which there is a void entry, we assume it is
      "missing". So
       pfn_to_mfn(0xc0000)=INVALID_P2M_ENTRY.
      
      We add the possibility of setting 1-1 mappings on certain regions, so
      that:
       pfn_to_mfn(0xc0000)=0xc0000
      
      The benefit of this is that for non-RAM regions (think
      PCI BARs or ACPI spaces) we can create mappings easily, because
      the PFN value matches the MFN.
      
      For this to work efficiently we introduce one new page p2m_identity and
      allocate (via reserve_brk) any other pages we need to cover the sides
      (1GB or 4MB boundary violations). All entries in p2m_identity are set to
      INVALID_P2M_ENTRY type (the Xen toolstack only recognizes that and MFNs,
      no other fancy values).
      
      On lookup we spot that the entry points to p2m_identity and return the identity
      value instead of dereferencing and returning INVALID_P2M_ENTRY. If the entry
      points to an allocated page, we just proceed as before and return the PFN.
      If the PFN has IDENTITY_FRAME_BIT set we unmask that in appropriate functions
      (pfn_to_mfn).
      
      The reason for having the IDENTITY_FRAME_BIT instead of just returning the
      PFN is that we could find ourselves where pfn_to_mfn(pfn)==pfn for a
      non-identity pfn. To protect ourselves against that, we elect to set (and
      get) the IDENTITY_FRAME_BIT on all identity-mapped PFNs.
      
      This simplistic diagram is used to explain the more subtle piece of code.
      There is also a diagram of the P2M at the end that can help.
      Imagine your E820 looking like so:
      
                         1GB                                           2GB
      /-------------------+---------\/----\         /----------\    /---+-----\
      | System RAM        | Sys RAM ||ACPI|         | reserved |    | Sys RAM |
      \-------------------+---------/\----/         \----------/    \---+-----/
                                    ^- 1029MB                       ^- 2001MB
      
      [1029MB = 263424 (0x40500), 2001MB = 512256 (0x7D100), 2048MB = 524288 (0x80000)]
      
      And dom0_mem=max:3GB,1GB is passed in to the guest, meaning memory past 1GB
      is actually not present (would have to kick the balloon driver to put it in).
      
      When we are told to set the PFNs for identity mapping (see patch: "xen/setup:
      Set identity mapping for non-RAM E820 and E820 gaps.") we pass in the start
      of the PFN and the end PFN (263424 and 512256 respectively). The first step is
      to reserve_brk a top leaf page if the p2m[1] is missing. The top leaf page
      covers 512^2 of page estate (1GB) and in case the start or end PFN is not
      aligned on 512^2*PAGE_SIZE (1GB) we loop on aligned 1GB PFNs from start pfn to
      end pfn.  We reserve_brk top leaf pages if they are missing (means they point
      to p2m_mid_missing).
      
      With the E820 example above, 263424 is not 1GB aligned so we allocate a
      reserve_brk page which will cover the PFNs estate from 0x40000 to 0x80000.
      Each entry in the allocated page is "missing" (points to p2m_missing).
      
      Next stage is to determine if we need to do a more granular boundary check
      on the 4MB (or 2MB, depending on architecture) boundary of the start and end PFNs.
      We check if the start pfn and end pfn violate that boundary check, and if
      so reserve_brk a middle (p2m[x][y]) leaf page. This way we have a much finer
      granularity of setting which PFNs are missing and which ones are identity.
      In our example 263424 and 512256 both fail the check so we reserve_brk two
      pages. Populate them with INVALID_P2M_ENTRY (so they both have "missing" values)
      and assign them to p2m[1][2] and p2m[1][488] respectively.
      
      At this point we would at minimum reserve_brk one page, but could be up to
      three. Each call to set_phys_range_identity has at maximum a three page
      cost. If we were to query the P2M at this stage, all those entries from
      start PFN through end PFN (so 1029MB -> 2001MB) would return INVALID_P2M_ENTRY
      ("missing").
      
      The next step is to walk from the start pfn to the end pfn setting
      the IDENTITY_FRAME_BIT on each PFN. This is done in 'set_phys_range_identity'.
      If we find that the middle leaf is pointing to p2m_missing we can swap it over
      to p2m_identity - this way covering 4MB (or 2MB) PFN space.  At this point we
      do not need to worry about boundary alignment (so no need to reserve_brk a middle
      page, figure out which PFNs are "missing" and which ones are identity), as that
      has been done earlier.  If we find that the middle leaf is not occupied by
      p2m_identity or p2m_missing, we dereference that page (which covers
      512 PFNs) and set the appropriate PFN with IDENTITY_FRAME_BIT. In our example
      263424 and 512256 end up there, and we set from p2m[1][2][256->511] and
      p2m[1][488][0->256] with IDENTITY_FRAME_BIT set.
      
      All other regions that are void (or not filled) either point to p2m_missing
      (considered missing) or have the default value of INVALID_P2M_ENTRY (also
      considered missing). In our case, p2m[1][2][0->255] and p2m[1][488][257->511]
      contain the INVALID_P2M_ENTRY value and are considered "missing."
      
      This is what the p2m ends up looking like (for the E820 above), with this
      fabulous drawing:
      
         p2m         /--------------\
       /-----\       | &mfn_list[0],|                           /-----------------\
       |  0  |------>| &mfn_list[1],|    /---------------\      | ~0, ~0, ..      |
       |-----|       |  ..., ~0, ~0 |    | ~0, ~0, [x]---+----->| IDENTITY [@256] |
       |  1  |---\   \--------------/    | [p2m_identity]+\     | IDENTITY [@257] |
       |-----|    \                      | [p2m_identity]+\\    | ....            |
       |  2  |--\  \-------------------->|  ...          | \\   \----------------/
       |-----|   \                       \---------------/  \\
       |  3  |\   \                                          \\  p2m_identity
       |-----| \   \-------------------->/---------------\   /-----------------\
       | ..  +->+                        | [p2m_identity]+-->| ~0, ~0, ~0, ... |
       \-----/ /                         | [p2m_identity]+-->| ..., ~0         |
              / /---------------\        | ....          |   \-----------------/
             /  | IDENTITY[@0]  |      /-+-[x], ~0, ~0.. |
            /   | IDENTITY[@256]|<----/  \---------------/
           /    | ~0, ~0, ....  |
          |     \---------------/
          |
          p2m_missing             p2m_missing
      /------------------\     /------------\
      | [p2m_mid_missing]+---->| ~0, ~0, ~0 |
      | [p2m_mid_missing]+---->| ..., ~0    |
      \------------------/     \------------/
      
      where ~0 is INVALID_P2M_ENTRY. IDENTITY is (PFN | IDENTITY_BIT)
      Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
      [v5: Changed code to use ranges, added ASCII art]
      [v6: Rebased on top of xen->p2m code split]
      [v4: Squished patches in just this one]
      [v7: Added RESERVE_BRK for potentially allocated pages]
      [v8: Fixed alignment problem]
      [v9: Changed 1<<3X to 1<<BITS_PER_LONG-X]
      [v10: Copied git commit description in the p2m code + Add Review tag]
      [v11: Title had '2-1' - should be '1-1' mapping]
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      f4cec35b
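
      A hedged sketch of the lookup behaviour described above (the function
      name is made up; IDENTITY_FRAME_BIT and INVALID_P2M_ENTRY are the
      identifiers used in the text):

        static unsigned long lookup_mfn(unsigned long pfn)
        {
                unsigned long mfn = get_phys_to_machine(pfn);

                if (mfn == INVALID_P2M_ENTRY)
                        return INVALID_P2M_ENTRY;         /* "missing" */

                if (mfn & IDENTITY_FRAME_BIT)
                        return mfn & ~IDENTITY_FRAME_BIT; /* 1-1 region: mfn == pfn */

                return mfn;                               /* real MFN from the tree */
        }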
    • x86: ce4100: Set pci ops via callback instead of module init · 03150171
      Sebastian Andrzej Siewior authored
      Setting the pci ops unconditionally from a subsys initcall will break
      multi-platform kernels on anything except ce4100.
      
      Use x86_init.pci.init ops to call this only on real ce4100 platforms.
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: sodaville@linutronix.de
      LKML-Reference: <20110314093340.GA21026@www.tglx.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      03150171
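
      In pattern form (x86_init.pci.init is the real platform hook; the setup
      and init function names here are placeholders):

        /* Instead of an unconditional subsys_initcall, the hook is only
         * filled in when the CE4100 platform is actually detected. */
        static void __init ce4100_platform_setup(void)
        {
                x86_init.pci.init = ce4100_pci_init;
        }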
  7. 11 Mar, 2011 · 3 commits
    • futex: Sanitize futex ops argument types · 8d7718aa
      Michel Lespinasse authored
      Change futex_atomic_op_inuser and futex_atomic_cmpxchg_inatomic
      prototypes to use u32 types for the futex as this is the data type the
      futex core code uses all over the place.
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Cc: Darren Hart <darren@dvhart.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <20110311025058.GD26122@google.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      8d7718aa
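
      The sanitized prototypes, as a hedged sketch (argument order follows the
      generic futex code of that era; per-arch signatures may differ slightly):

        /* Futex values are u32 throughout the core, so the per-arch helpers
         * take u32 pointers rather than int pointers. */
        int futex_atomic_op_inuser(int encoded_op, u32 __user *uaddr);
        int futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
                                          u32 oldval, u32 newval);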
    • futex: Sanitize cmpxchg_futex_value_locked API · 37a9d912
      Michel Lespinasse authored
      The cmpxchg_futex_value_locked API was funny in that it returned either
      the original, user-exposed futex value OR an error code such as -EFAULT.
      This was confusing at best, and could be a source of livelocks in places
      that retry the cmpxchg_futex_value_locked after trying to fix the issue
      by running fault_in_user_writeable().
          
      This change makes the cmpxchg_futex_value_locked API more similar to the
      get_futex_value_locked one, returning an error code and updating the
      original value through a reference argument.
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Acked-by: Chris Metcalf <cmetcalf@tilera.com>  [tile]
      Acked-by: Tony Luck <tony.luck@intel.com>  [ia64]
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Michal Simek <monstr@monstr.eu>  [microblaze]
      Acked-by: David Howells <dhowells@redhat.com> [frv]
      Cc: Darren Hart <darren@dvhart.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <20110311024851.GC26122@google.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      37a9d912
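
      A hedged before/after sketch of the calling convention (the variable and
      function names around the call are illustrative):

        /* Before: the return value mixed the observed futex value with error
         * codes such as -EFAULT, and callers had to tell them apart.
         *
         *     curval = cmpxchg_futex_value_locked(uaddr, uval, newval);
         *
         * After: an explicit error code, with the observed value passed back
         * through a reference argument, mirroring get_futex_value_locked(): */
        static int demo_update(u32 __user *uaddr, u32 uval, u32 newval)
        {
                u32 curval;
                int ret = cmpxchg_futex_value_locked(&curval, uaddr, uval, newval);

                if (ret)
                        return ret;     /* e.g. -EFAULT: fault the page in, retry */
                if (curval != uval)
                        return -EAGAIN; /* lost the race (illustrative handling) */
                return 0;
        }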
    • x86: Clean up apic.c and apic.h · 25874a29
      Henrik Kretzschmar authored
      This patch moves some functions and variables into init
      sections, makes a function static and removes some lines of
      cruft.
      Signed-off-by: Henrik Kretzschmar <henne@nachtwindheim.de>
      Acked-by: Cyrill Gorcunov <gorcunov@openvz.org>
      LKML-Reference: <1299826956-8607-2-git-send-email-henne@nachtwindheim.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      25874a29
  8. 09 Mar, 2011 · 1 commit
  9. 05 Mar, 2011 · 1 commit
  10. 04 Mar, 2011 · 3 commits
    • perf: Add support for supplementary event registers · a7e3ed1e
      Andi Kleen authored
      Change logs against Andi's original version:
      
      - Extends perf_event_attr:config to config{,1,2} (Peter Zijlstra)
      - Fixed a major event scheduling issue. There cannot be a ref++ on an
        event that has already done ref++ once and without calling
        put_constraint() in between. (Stephane Eranian)
      - Use thread_cpumask for percore allocation. (Lin Ming)
      - Use MSR names in the extra reg lists. (Lin Ming)
      - Remove redundant "c = NULL" in intel_percore_constraints
      - Fix comment of perf_event_attr::config1
      
      Intel Nehalem/Westmere have a special OFFCORE_RESPONSE event
      that can be used to monitor any offcore accesses from a core.
      This is a very useful event for various tunings, and it's
      also needed to implement the generic LLC-* events correctly.
      
      Unfortunately this event requires programming a mask in a separate
      register. And worse this separate register is per core, not per
      CPU thread.
      
      This patch:
      
      - Teaches perf_events that OFFCORE_RESPONSE needs extra parameters.
        The extra parameters are passed by user space in the
        perf_event_attr::config1 field.
      
      - Adds support to the Intel perf_event core to schedule per
        core resources. This adds fairly generic infrastructure that
        can be also used for other per core resources.
        The basic code is patterned after the similar AMD northbridge
        constraints code.
      
      Thanks to Stephane Eranian who pointed out some problems
      in the original version and suggested improvements.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Lin Ming <ming.m.lin@intel.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1299119690-13991-2-git-send-email-ming.m.lin@intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a7e3ed1e
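
      A hedged user-space sketch of programming OFFCORE_RESPONSE through the
      new config1 field (the raw event code 0x01b7 and the mask are
      illustrative values for Nehalem/Westmere; consult the SDM for real
      encodings):

        #include <string.h>
        #include <unistd.h>
        #include <sys/syscall.h>
        #include <linux/perf_event.h>

        static int open_offcore_counter(void)
        {
                struct perf_event_attr attr;

                memset(&attr, 0, sizeof(attr));
                attr.size     = sizeof(attr);
                attr.type     = PERF_TYPE_RAW;
                attr.config   = 0x01b7;  /* OFFCORE_RESPONSE_0 event (illustrative) */
                attr.config1  = 0xffff;  /* extra MSR mask, passed via config1 */
                attr.disabled = 1;

                /* pid = 0 (self), cpu = -1 (any), group_fd = -1, flags = 0 */
                return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        }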
    • x86-64, NUMA: Revert NUMA affine page table allocation · f8911250
      Tejun Heo authored
      This patch reverts NUMA affine page table allocation added by commit
      1411e0ec (x86-64, numa: Put pgtable to local node memory).
      
      The commit made an undocumented change where the kernel linear mapping
      strictly follows intersection of e820 memory map and NUMA
      configuration.  If the physical memory configuration has holes or NUMA
      nodes are not properly aligned, this leads to using unnecessarily
      smaller mapping size which leads to increased TLB pressure.  For
      details,
      
        http://thread.gmane.org/gmane.linux.kernel/1104672
      
      Patches to fix the problem have been proposed, but the underlying code
      needs more cleanup and the approach itself seems a bit heavy-handed,
      so it has been determined to revert the feature for now and come back
      to it in the next development cycle.
      
        http://thread.gmane.org/gmane.linux.kernel/1105959
      
      As init_memory_mapping_high() callsites have been consolidated since
      the commit, reverting is done manually.  Also, the RED-PEN comment in
      arch/x86/mm/init.c is not restored as the problem no longer exists
      with memblock based top-down early memory allocation.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      f8911250
    • xen: Mark all initial reserved pages for the balloon as INVALID_P2M_ENTRY. · 6eaa412f
      Konrad Rzeszutek Wilk authored
      With this patch, we diligently set regions that will be used by the
      balloon driver to be INVALID_P2M_ENTRY and under the ownership
      of the balloon driver. We are OK using __set_phys_to_machine
      as we do not expect to be allocating any P2M middle or entry pages.
      The set_phys_to_machine has the side-effect of potentially allocating
      new pages and we do not want that at this stage.
      
      We can do this because xen_build_mfn_list_list will have already
      allocated all such pages up to xen_max_p2m_pfn.
      
      We also move the check for auto translated physmap down the
      stack so it is present in __set_phys_to_machine.
      
      [v2: Rebased with mmu->p2m code split]
      Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      6eaa412f
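
      In sketch form (the loop and function name are illustrative; the helpers
      named are the ones from the text above):

        /* Mark the ballooned-out range invalid using the setter that never
         * allocates new P2M middle/leaf pages; those already exist up to
         * xen_max_p2m_pfn. */
        static void mark_balloon_range(unsigned long start_pfn,
                                       unsigned long end_pfn)
        {
                unsigned long pfn;

                for (pfn = start_pfn; pfn < end_pfn; pfn++)
                        __set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
        }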
  11. 03 Mar, 2011 · 2 commits
  12. 01 Mar, 2011 · 1 commit
  13. 28 Feb, 2011 · 2 commits
  14. 26 Feb, 2011 · 1 commit