1. 13 Feb 2013, 1 commit
  2. 12 Feb 2013, 1 commit
  3. 05 Feb 2013, 1 commit
    • cfg80211: expand per-station byte counters to 64bit · 42745e03
      Authored by Vladimir Kondratiev
      In per-station statistics, the present 32-bit counters are too small
      for practical purposes: at gigabit speeds they wrap around every
      few seconds.

      Expand the counters in struct station_info to be 64-bit.
      A driver can still fill only 32 bits and indicate in @filled
      only bits like STATION_INFO_[TR]X_BYTES; in case the driver provides
      full 64-bit counters, it should also set in @filled the
      STATION_INFO_[TR]X_BYTES64 bits.

      Netlink sends both the 32-bit and 64-bit counters, if present, so as
      not to break userspace.
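
      A hedged driver-side sketch of the reporting described above; the flag
      and field names follow cfg80211 of this era, while the surrounding
      driver context (the hw_rx/hw_tx counters) is hypothetical.

      #include <net/cfg80211.h>

      /* Hedged sketch, hypothetical driver context. */
      static void example_fill_sinfo(struct station_info *sinfo,
                                     u64 hw_rx, u64 hw_tx)
      {
              sinfo->rx_bytes = hw_rx;        /* now 64-bit fields */
              sinfo->tx_bytes = hw_tx;

              /* A 32-bit-only driver would set just STATION_INFO_[TR]X_BYTES ... */
              sinfo->filled |= STATION_INFO_RX_BYTES | STATION_INFO_TX_BYTES;

              /* ... while a full 64-bit counter also sets the 64-bit bits, so
               * nl80211 can emit both attribute sizes for old userspace. */
              sinfo->filled |= STATION_INFO_RX_BYTES64 | STATION_INFO_TX_BYTES64;
      }
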
      Signed-off-by: Vladimir Kondratiev <qca_vkondrat@qca.qualcomm.com>
      [change to also have 32-bit counters if driver advertises 64-bit]
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
  4. 31 Jan 2013, 1 commit
  5. 26 Jan 2013, 1 commit
  6. 17 Jan 2013, 3 commits
  7. 14 Jan 2013, 1 commit
  8. 10 Jan 2013, 1 commit
    • NFC: Initial Secure Element API · 390a1bd8
      Authored by Samuel Ortiz
      Each NFC adapter can have several links to different secure elements and
      that property needs to be exported by the drivers.
      A secure element link can be enabled and disabled, and card emulation
      will be handled by the currently active one.  Otherwise, card emulation
      is implemented by the host.
      Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
  9. 03 Jan 2013, 1 commit
    • nl80211/mac80211: support full station state in AP mode · d582cffb
      Authored by Johannes Berg
      Today, stations are added already associated. That is
      inefficient if, for example, the driver has no room
      for stations any more because then the station will
      go through the entire auth/assoc handshake, only to
      be kicked out afterwards.
      
      To address this a bit better, at least with drivers
      using the new station state callback, allow hostapd
      to add stations in unauthenticated mode, just after
      receiving the AUTH frame, before even replying. Thus
      if there's no more space at that point, it can send
      a negative auth frame back. It still needs to handle
      later state transition errors though, of course.
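
      A hedged sketch of a driver's station state callback rejecting the
      unauthenticated add when it is out of room; the callback signature and
      state enum follow mac80211 of this era, while example_priv and its
      num_sta/max_sta fields are made up for illustration.

      #include <net/mac80211.h>

      struct example_priv {
              int num_sta;
              int max_sta;
      };

      static int example_sta_state(struct ieee80211_hw *hw,
                                   struct ieee80211_vif *vif,
                                   struct ieee80211_sta *sta,
                                   enum ieee80211_sta_state old_state,
                                   enum ieee80211_sta_state new_state)
      {
              struct example_priv *priv = hw->priv;

              if (old_state == IEEE80211_STA_NOTEXIST &&
                  new_state == IEEE80211_STA_NONE) {
                      /* unauthenticated add: failing here lets hostapd send a
                       * negative auth frame instead of completing auth/assoc */
                      if (priv->num_sta >= priv->max_sta)
                              return -ENOSPC;
                      priv->num_sta++;
              } else if (old_state == IEEE80211_STA_NONE &&
                         new_state == IEEE80211_STA_NOTEXIST) {
                      priv->num_sta--;
              }

              return 0;
      }
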
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
  10. 22 Dec 2012, 1 commit
  11. 20 Dec 2012, 1 commit
  12. 18 Dec 2012, 2 commits
  13. 16 Dec 2012, 1 commit
  14. 15 Dec 2012, 1 commit
    • drm/exynos: add ipp subsystem · cb471f14
      Authored by Eunchul Kim
      This patch adds Image Post Processing (IPP) support to the exynos drm driver.

      IPP supports image scaler/rotator and input/output DMA operations,
      using the IPP subsystem framework to control the FIMC, Rotator and GSC
      hardware, and exposes user-space interfaces for it.

      Each IPP-based driver supports memory-to-memory operations with various
      format conversions.  In addition, the FIMC hardware supports writeback
      and display output operations through its local path.
      
      Features:
      - Memory-to-memory operation support.
      - Various pixel format support.
      - Image scaling support.
      - Color space conversion support.
      - Image crop operation support.
      - Rotation support for 90, 180 or 270 degrees.
      - Flip support: vertical, horizontal or both.
      - Writeback support to display the blended image of the FIMD fifo on screen.
      
      A summary of IPP subsystem operations:
      First, the user gets the property capabilities from the IPP subsystem
      and sets these properties, which are programmed into the hardware
      registers for the desired operations.  The properties cover pixel
      format, position, rotation degree and flip operation.

      Next, the user sets the source and destination buffer data using the
      DRM_EXYNOS_IPP_QUEUE_BUF ioctl command, passing gem handles for the
      source and destination buffers.

      Then, the user can control the chosen hardware with the desired
      operations, such as the play, stop, pause and resume controls.

      Finally, the user is notified of dma operation completion and, through
      the dequeue command, gets the destination buffer containing the
      desired result.
      
      IOCTL commands:
      - DRM_EXYNOS_IPP_GET_PROPERTY
        . get ipp driver capabilities and id.
      - DRM_EXYNOS_IPP_SET_PROPERTY
        . set format, position, rotation and flip for the source and destination buffers.
      - DRM_EXYNOS_IPP_QUEUE_BUF
        . enqueue/dequeue a buffer and build the event list.
      - DRM_EXYNOS_IPP_CMD_CTRL
        . play/stop/pause/resume control.

      Event:
      - DRM_EXYNOS_IPP_EVENT
        . an event that notifies user space of dma operation completion.
      
      Basic control flow:
      Open -> Get properties -> User chooses the desired IPP sub driver (FIMC,
      Rotator or GSCALER) -> Set property -> Create gem handle -> Enqueue the
      source and destination buffers -> Command control (Play) -> Event is
      notified to the user -> User gets the completed destination buffer ->
      (Enqueue the source and destination buffers -> Event is notified to the
      user) * N -> Queue/Dequeue the source and destination buffers -> Command
      control (Stop) -> Free gem handle -> Close
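
      A hedged user-space sketch of this flow; the ioctl names come from the
      list above, but the argument struct layouts are assumptions that must be
      checked against exynos_drm.h, so only the fields implied by this
      description are touched.

      #include <fcntl.h>
      #include <string.h>
      #include <unistd.h>
      #include <sys/ioctl.h>
      #include <drm/exynos_drm.h>             /* assumed UAPI header location */

      static void example_ipp_run(void)
      {
              struct drm_exynos_ipp_property prop;
              struct drm_exynos_ipp_queue_buf qbuf;
              struct drm_exynos_ipp_cmd_ctrl ctrl;
              int fd = open("/dev/dri/card0", O_RDWR);

              if (fd < 0)
                      return;

              memset(&prop, 0, sizeof(prop));
              /* ... fill pixel format, position, rotation and flip for the
               * source and destination configs (fields omitted here) ... */
              ioctl(fd, DRM_IOCTL_EXYNOS_IPP_SET_PROPERTY, &prop);

              memset(&qbuf, 0, sizeof(qbuf));
              qbuf.prop_id = prop.prop_id;    /* id returned by SET_PROPERTY */
              /* ... set the gem handles of the source/destination buffers ... */
              ioctl(fd, DRM_IOCTL_EXYNOS_IPP_QUEUE_BUF, &qbuf);

              memset(&ctrl, 0, sizeof(ctrl));
              ctrl.prop_id = prop.prop_id;
              /* ... request the play control; a DRM_EXYNOS_IPP_EVENT arrives on
               * completion, after which the destination buffer is dequeued ... */
              ioctl(fd, DRM_IOCTL_EXYNOS_IPP_CMD_CTRL, &ctrl);

              close(fd);
      }
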
      
      Changelog v1 ~ v5:
      - added comments, code fixups and cleanups.
      Signed-off-by: Eunchul Kim <chulspro.kim@samsung.com>
      Signed-off-by: Jinyoung Jeon <jy0.jeon@samsung.com>
      Signed-off-by: Inki Dae <inki.dae@samsung.com>
      Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
  15. 14 Dec 2012, 4 commits
  16. 13 Dec 2012, 4 commits
  17. 12 Dec 2012, 1 commit
    • mm: support more pagesizes for MAP_HUGETLB/SHM_HUGETLB · 42d7395f
      Authored by Andi Kleen
      There was some desire in large applications using MAP_HUGETLB or
      SHM_HUGETLB to use 1GB huge pages on some mappings, and stay with 2MB on
      others.  This is useful together with NUMA policy: use 2MB interleaving
      on some mappings, but 1GB on local mappings.
      
      This patch extends the IPC/SHM syscall interfaces slightly to allow
      specifying the page size.
      
      It borrows some upper bits in the existing flag arguments and allows
      encoding the log2 of the desired page size in addition to the *_HUGETLB
      flag.  When 0 is specified, the default size is used, which makes the
      change fully compatible.
      
      Extending the internal hugetlb code to handle this is straight forward.
      Instead of a single mount it just keeps an array of them and selects the
      right mount based on the specified page size.  When no page size is
      specified it uses the mount of the default page size.
      
      The change is not visible in /proc/mounts because internal mounts don't
      appear there.  It also has very little overhead: the additional mounts
      just consume a super block, but not more memory when not used.
      
      I also exported the new flags to the user headers (they were previously
      under __KERNEL__).  Right now, symbols for 1GB and 2MB are only defined
      for x86 and some other architectures.  The interface should already
      work for all other architectures though.  Only architectures that define
      multiple hugetlb sizes actually need it (that is currently x86, tile,
      powerpc).  However tile and powerpc have user-configurable hugetlb
      sizes, so it's not easy to add defines.  A program on those
      architectures would need to query sysfs and use the appropriate log2.
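
      A hedged example of the encoding (MAP_HUGE_SHIFT and MAP_HUGE_1GB are
      among the flags this patch exports; the fallback defines below are only
      for building against older headers):

      #define _GNU_SOURCE
      #include <stdio.h>
      #include <sys/mman.h>

      #ifndef MAP_HUGE_SHIFT
      #define MAP_HUGE_SHIFT 26
      #endif
      #ifndef MAP_HUGE_1GB
      #define MAP_HUGE_1GB (30 << MAP_HUGE_SHIFT)     /* log2(1GB) == 30 */
      #endif

      int main(void)
      {
              size_t len = 1UL << 30;
              /* Encoding log2 of the page size selects 1GB pages; leaving the
               * size bits 0 keeps the default huge page size. */
              void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_1GB,
                             -1, 0);

              if (p == MAP_FAILED) {
                      perror("mmap");         /* e.g. no 1GB pages reserved */
                      return 1;
              }
              munmap(p, len);
              return 0;
      }
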
      
      [akpm@linux-foundation.org: cleanups]
      [rientjes@google.com: fix build]
      [akpm@linux-foundation.org: checkpatch fixes]
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Hillf Danton <dhillf@gmail.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  18. 11 Dec 2012, 7 commits
    • mm: numa: Migrate on reference policy · 5606e387
      Authored by Mel Gorman
      This is the simplest possible policy that still does something of note.
      When a pte_numa fault occurs, the page is migrated immediately.  Any
      replacement policy must at least do better than this, and in all
      likelihood this policy regresses normal workloads.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Acked-by: Rik van Riel <riel@redhat.com>
    • mm: mempolicy: Hide MPOL_NOOP and MPOL_MF_LAZY from userspace for now · a720094d
      Authored by Mel Gorman
      The use of MPOL_NOOP and MPOL_MF_LAZY to allow an application to
      explicitly request lazy migration is a good idea, but the actual
      API has not been well reviewed, and once released we would have to
      support it.  For now this patch prevents an application from using
      these services.  This will need to be revisited.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
    • mm: mempolicy: Add MPOL_MF_LAZY · b24f53a0
      Authored by Lee Schermerhorn
      NOTE: Once again there is a lot of patch stealing and the end result
      	is sufficiently different that I had to drop the signed-offs.
      	Will re-add if the original authors are ok with that.
      
      This patch adds another mbind() flag to request "lazy migration".  The
      flag, MPOL_MF_LAZY, modifies MPOL_MF_MOVE* such that the selected
      pages are marked PROT_NONE. The pages will be migrated in the fault
      path on "first touch", if the policy dictates at that time.
      
      "Lazy Migration" will allow testing of migrate-on-fault via mbind().
      Also allows applications to specify that only subsequently touched
      pages be migrated to obey new policy, instead of all pages in range.
      This can be useful for multi-threaded applications working on a
      large shared data area that is initialized by an initial thread
      resulting in all pages on one [or a few, if overflowed] nodes.
      After PROT_NONE, the pages in regions assigned to the worker threads
      will be automatically migrated local to the threads on 1st touch.
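
      A hedged userspace sketch of the intended mbind() usage; MPOL_MF_LAZY's
      numeric value is an assumption here, and note the commit listed above
      that hides the flag from userspace again.

      #include <stddef.h>
      #include <numaif.h>             /* mbind(); link with -lnuma */

      #ifndef MPOL_MF_LAZY
      #define MPOL_MF_LAZY (1 << 3)   /* assumed value while the flag was exposed */
      #endif

      static void rebind_lazily(void *region, size_t len, unsigned long nodemask)
      {
              /* Mark the range PROT_NONE; each page migrates on first touch,
               * and only if it is then misplaced for the new MPOL_BIND policy. */
              if (mbind(region, len, MPOL_BIND, &nodemask,
                        sizeof(nodemask) * 8, MPOL_MF_MOVE | MPOL_MF_LAZY) != 0)
                      /* kernels without MPOL_MF_LAZY: fall back to eager move */
                      mbind(region, len, MPOL_BIND, &nodemask,
                            sizeof(nodemask) * 8, MPOL_MF_MOVE);
      }
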
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
    • mm: mempolicy: Check for misplaced page · 771fb4d8
      Authored by Lee Schermerhorn
      This patch provides a new function to test whether a page resides
      on a node that is appropriate for the mempolicy of the vma and
      address where the page is supposed to be mapped.  This involves
      looking up the node where the page belongs.  So, the function
      returns that node so that it may be used to allocate the page
      without consulting the policy again.
      
      A subsequent patch will call this function from the fault path.
      Because of this, I don't want to go ahead and allocate the page, e.g.,
      via alloc_page_vma() only to have to free it if it has the correct
      policy.  So, I just mimic the alloc_page_vma() node computation
      logic--sort of.
      
      Note:  we could use this function to implement a MPOL_MF_STRICT
      behavior when migrating pages to match mbind() mempolicy--e.g.,
      to ensure that pages in an interleaved range are reinterleaved
      rather than left where they are when they reside on any node in
      the interleave nodemask.
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      [ Added MPOL_F_LAZY to trigger migrate-on-fault;
        simplified code now that we don't have to bother
        with special crap for interleaved ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
    • mm: mempolicy: Add MPOL_NOOP · d3a71033
      Authored by Lee Schermerhorn
      This patch augments the MPOL_MF_LAZY feature by adding a "NOOP" policy
      to mbind().  When the NOOP policy is used with the MPOL_MF_MOVE and
      MPOL_MF_LAZY flags, mbind() will map the pages PROT_NONE so that they
      will be migrated on the next touch.
      
      This allows an application to prepare for a new phase of operation
      where different regions of shared storage will be assigned to
      worker threads, w/o changing policy.  Note that we could just use
      "default" policy in this case.  However, this also allows an
      application to request that pages be migrated, only if necessary,
      to follow any arbitrary policy that might currently apply to a
      range of pages, without knowing the policy, or without specifying
      multiple mbind()s for ranges with different policies.
      
      [ Bug in early version of mpol_parse_str() reported by Fengguang Wu. ]
      Reported-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
    • mm: mempolicy: Make MPOL_LOCAL a real policy · 479e2802
      Authored by Peter Zijlstra
      Make MPOL_LOCAL a real and exposed policy such that applications that
      relied on the previous default behaviour can explicitly request it.
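
      A hedged illustration of the now-explicit request; the MPOL_LOCAL value
      is an assumption for builds whose libc headers predate this change.

      #include <numaif.h>             /* set_mempolicy(); link with -lnuma */

      #ifndef MPOL_LOCAL
      #define MPOL_LOCAL 4            /* assumed value of the exposed policy */
      #endif

      /* Explicitly request the former default "allocate on the local node"
       * behaviour for the calling task. */
      static long use_local_policy(void)
      {
              return set_mempolicy(MPOL_LOCAL, NULL, 0);
      }
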
      Requested-by: Christoph Lameter <cl@linux.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
    • f2fs: add superblock and major in-memory structure · 39a53e0c
      Authored by Jaegeuk Kim
      This adds the following major in-memory structures in f2fs.
      
      - f2fs_sb_info:
        contains f2fs-specific information, two special inode pointers for node and
        meta address spaces, and orphan inode management.
      
      - f2fs_inode_info:
        contains vfs_inode and other fs-specific information.
      
      - f2fs_nm_info:
        contains node manager information such as NAT entry cache, free nid list,
        and NAT page management.
      
      - f2fs_node_info:
        represents a node as node id, inode number, block address, and its version.
      
      - f2fs_sm_info:
        contains segment manager information such as SIT entry cache, free segment
        map, current active logs, dirty segment management, and segment utilization.
        The specific structures are sit_info, free_segmap_info, dirty_seglist_info,
        curseg_info.
      
      In addition, add F2FS_SUPER_MAGIC in magic.h.
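
      A hedged sketch of how f2fs_inode_info relates to the embedded vfs_inode
      mentioned above; the container_of pattern is the standard one, but the
      struct layout and helper name here are illustrative, not quoted from the
      patch.

      #include <linux/kernel.h>
      #include <linux/fs.h>

      struct f2fs_inode_info_example {
              struct inode vfs_inode;         /* embedded vfs inode */
              /* ... other fs-specific fields (flags, extent cache, ...) ... */
      };

      /* Map a struct inode back to the fs-specific wrapper that embeds it. */
      static inline struct f2fs_inode_info_example *EXAMPLE_F2FS_I(struct inode *inode)
      {
              return container_of(inode, struct f2fs_inode_info_example, vfs_inode);
      }
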
      Signed-off-by: Chul Lee <chur.lee@samsung.com>
      Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
  19. 09 Dec 2012, 1 commit
    • virtio_net: multiqueue support · 986a4f4d
      Authored by Jason Wang
      This patch adds multiqueue (VIRTIO_NET_F_MQ) support to the virtio_net
      driver.  A VIRTIO_NET_F_MQ capable device allows the driver to do packet
      transmission and reception through multiple queue pairs and to do packet
      steering to get better performance.  By default, one queue pair is used;
      the user can change the number of queue pairs by ethtool in the next
      patch.

      When multiple queue pairs are used and the number of queue pairs is
      equal to the number of vcpus, the driver does the following optimizations
      to implement per-cpu virtqueue pairs:

      - select the txq based on the smp processor id.
      - set the smp affinity hint to the cpu that owns the queue pair.

      This can be used with the flow steering support of the device to
      guarantee that the packets of a single flow are handled by the same cpu.
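
      A hedged sketch of the txq selection idea ("select the txq based on the
      smp processor id"); the driver's actual helper may differ, so the names
      here are illustrative only.

      #include <linux/netdevice.h>
      #include <linux/smp.h>

      /* Pick the tx queue from the current cpu so that, when the number of
       * queue pairs equals the number of vcpus, each cpu keeps using its own
       * virtqueue pair. Not the driver's literal code. */
      static u16 example_select_txq(struct net_device *dev, struct sk_buff *skb)
      {
              unsigned int cpu = smp_processor_id();

              return (u16)(cpu % dev->real_num_tx_queues);
      }
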
      Signed-off-by: Krishna Kumar <krkumar2@in.ibm.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  20. 08 Dec 2012, 4 commits
  21. 06 Dec 2012, 2 commits
    • byteorder: allow arch to opt to use GCC intrinsics for byteswapping · cf66bb93
      Authored by David Woodhouse
      Since GCC 4.4, there have been __builtin_bswap32() and __builtin_bswap64()
      intrinsics.  A __builtin_bswap16() came a little later (4.6 for PowerPC,
      4.8 for other platforms).
      
      By using these instead of the inline assembler that most architectures
      have in their __arch_swabXX() macros, we let the compiler see what's
      actually happening. The resulting code should be at least as good, and
      much *better* in the cases where it can be combined with a nearby load
      or store, using a load-and-byteswap or store-and-byteswap instruction
      (e.g. lwbrx/stwbrx on PowerPC, movbe on Atom).
      
      When GCC is sufficiently recent *and* the architecture opts in to using
      the intrinsics by setting CONFIG_ARCH_USE_BUILTIN_BSWAP, they will be
      used in preference to the __arch_swabXX() macros. An architecture which
      does not set ARCH_USE_BUILTIN_BSWAP will continue to use its own
      hand-crafted macros.
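
      A small hedged userspace analogue of what the __arch_swabXX() macros gain
      from the intrinsics; this is illustration only, not the kernel's byteorder
      code.

      #include <stdint.h>

      static inline uint32_t swab32_example(uint32_t x)
      {
      #if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 4))
              /* intrinsic path: the compiler can fuse this with an adjacent
               * load/store (lwbrx/stwbrx on PowerPC, movbe on Atom) */
              return __builtin_bswap32(x);
      #else
              /* open-coded fallback, equivalent to a hand-crafted macro */
              return ((x & 0x000000ffu) << 24) |
                     ((x & 0x0000ff00u) <<  8) |
                     ((x & 0x00ff0000u) >>  8) |
                     ((x & 0xff000000u) >> 24);
      #endif
      }
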
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      Acked-by: H. Peter Anvin <hpa@linux.intel.com>
    • KVM: PPC: Book3S HV: Provide a method for userspace to read and write the HPT · a2932923
      Authored by Paul Mackerras
      A new ioctl, KVM_PPC_GET_HTAB_FD, returns a file descriptor.  Reads on
      this fd return the contents of the HPT (hashed page table), writes
      create and/or remove entries in the HPT.  There is a new capability,
      KVM_CAP_PPC_HTAB_FD, to indicate the presence of the ioctl.  The ioctl
      takes an argument structure with the index of the first HPT entry to
      read out and a set of flags.  The flags indicate whether the user is
      intending to read or write the HPT, and whether to return all entries
      or only the "bolted" entries (those with the bolted bit, 0x10, set in
      the first doubleword).
      
      This is intended for use in implementing qemu's savevm/loadvm and for
      live migration.  Therefore, on reads, the first pass returns information
      about all HPTEs (or all bolted HPTEs).  When the first pass reaches the
      end of the HPT, it returns from the read.  Subsequent reads only return
      information about HPTEs that have changed since they were last read.
      A read that finds no changed HPTEs in the HPT following where the last
      read finished will return 0 bytes.
      
      The format of the data provides a simple run-length compression of the
      invalid entries.  Each block of data starts with a header that indicates
      the index (position in the HPT, which is just an array), the number of
      valid entries starting at that index (may be zero), and the number of
      invalid entries following those valid entries.  The valid entries, 16
      bytes each, follow the header.  The invalid entries are not explicitly
      represented.
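
      A hedged userspace sketch of the read side on a powerpc host; the struct
      and flag names follow the UAPI this patch adds (kvm_get_htab_fd,
      kvm_get_htab_header, KVM_GET_HTAB_BOLTED_ONLY), with parsing of the
      16-byte HPTE payload left out.

      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>
      #include <sys/ioctl.h>
      #include <linux/kvm.h>

      static void dump_bolted_hptes(int vmfd)
      {
              struct kvm_get_htab_fd args;
              char buf[65536];
              ssize_t n;
              int fd;

              memset(&args, 0, sizeof(args));
              args.flags = KVM_GET_HTAB_BOLTED_ONLY;  /* bolted entries only */
              args.start_index = 0;                   /* first HPT slot */

              fd = ioctl(vmfd, KVM_PPC_GET_HTAB_FD, &args);
              if (fd < 0)
                      return;                 /* KVM_CAP_PPC_HTAB_FD missing */

              /* First pass walks the whole HPT; a later read() returns 0 once
               * no HPTEs have changed since they were last read. */
              while ((n = read(fd, buf, sizeof(buf))) > 0) {
                      size_t off = 0, len = (size_t)n;

                      while (off + sizeof(struct kvm_get_htab_header) <= len) {
                              struct kvm_get_htab_header *h =
                                      (struct kvm_get_htab_header *)(buf + off);

                              printf("index %u: %u valid, %u invalid\n",
                                     h->index, h->n_valid, h->n_invalid);
                              /* the valid entries, 16 bytes each, follow */
                              off += sizeof(*h) + (size_t)h->n_valid * 16;
                      }
              }
              close(fd);
      }
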
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      [agraf: fix documentation]
      Signed-off-by: Alexander Graf <agraf@suse.de>