1. 03 May, 2011 (2 commits)
    • arch/tile: support TIF_NOTIFY_RESUME · 313ce674
      Authored by Chris Metcalf
      This support is required for CONFIG_KEYS, NFSv4 kernel DNS, etc.
      The change is slightly larger than the minimal fix, since having to
      go into the assembly code anyway was a good opportunity to move a
      bunch of logic into C: specifically, the schedule(),
      do_async_page_fault(), do_signal(), and single_step_once() support,
      in addition to the TIF_NOTIFY_RESUME support.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
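      A rough, illustrative sketch of the C-side "pending work" handler that
      the return-to-user assembly typically calls once logic like this moves
      out of assembly; do_work_pending() and the do_signal() prototype shown
      here follow the common cross-architecture pattern and are not
      necessarily tile's exact symbols:

      #include <linux/sched.h>
      #include <linux/ptrace.h>
      #include <linux/tracehook.h>

      extern void do_signal(struct pt_regs *regs);   /* arch-provided */

      /* Called with interrupts disabled on the way back to user space;
       * handles rescheduling, signal delivery, and TIF_NOTIFY_RESUME. */
      void do_work_pending(struct pt_regs *regs, u32 flags)
      {
              if (flags & _TIF_NEED_RESCHED)
                      schedule();
              if (flags & _TIF_SIGPENDING)
                      do_signal(regs);
              if (flags & _TIF_NOTIFY_RESUME) {
                      clear_thread_flag(TIF_NOTIFY_RESUME);
                      tracehook_notify_resume(regs);
              }
      }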
    • arch/tile: refactor backtracing code · 93013a0f
      Authored by Chris Metcalf
      This change is the result of some work to make the backtrace code more
      shareable between kernel, libc, and gdb.
      
      For the kernel, the main benefits are eliminating the hacky
      "VirtualAddress" typedef in favor of "unsigned long", removing a
      bunch of spurious kerneldoc comments, dropping the dead
      "bt_read_memory" function, and using "__tilegx__" in #ifdefs instead
      of "TILE_CHIP".
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
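      An illustrative before/after of the kind of cleanup described above;
      the TILE_CHIP test is a stand-in for the original conditionals rather
      than a line-for-line quote of the old code:

      /* before: library-private type and chip-number test */
      typedef unsigned long VirtualAddress;
      #if TILE_CHIP >= 10
      /* TILE-Gx specific backtrace handling */
      #endif

      /* after: plain "unsigned long" everywhere, plus the compiler-defined
       * macro for the 64-bit chip */
      #ifdef __tilegx__
      /* TILE-Gx specific backtrace handling */
      #endif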
  2. 31 March, 2011 (1 commit)
  3. 26 March, 2011 (2 commits)
  4. 23 March, 2011 (1 commit)
  5. 18 March, 2011 (1 commit)
  6. 11 March, 2011 (2 commits)
    • arch/tile: support 4KB page size as well as 64KB · 76c567fb
      Authored by Chris Metcalf
      The Tilera architecture traditionally supports 64KB page sizes
      to improve TLB utilization and performance when the hardware is
      being used primarily to run a single application.
      
      For more generic server scenarios, it can be beneficial to run
      with 4KB page sizes, so this commit allows that to be specified
      (by modifying the arch/tile/include/hv/pagesize.h header).
      
      As part of this change, we also re-worked the PTE management
      slightly so that PTE writes all go through a __set_pte() function
      where we can do some additional validation.  The set_pte_order()
      function was eliminated since the "order" argument wasn't being used.
      
      One bug uncovered was in the PCI DMA code, which wasn't properly
      flushing the specified range.  This was benign with 64KB pages,
      but with 4KB pages we were getting some larger flushes wrong.
      
      The per-cpu memory reservation code also needed updating to
      conform with the newer percpu stuff; before it always chose 64KB,
      and that was always correct, but with 4KB granularity we now have
      to pay closer attention and reserve the amount of memory that will
      be requested when the percpu code starts allocating.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
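      A minimal sketch of the "single funnel" idea for PTE writes; the
      validation comment is a placeholder, not the checks the tile code
      actually performs:

      #include <linux/mm.h>          /* pte_t */

      /* Every PTE store goes through one helper, so extra sanity checks
       * (and any ordering requirements) live in a single place. */
      static inline void __set_pte(pte_t *ptep, pte_t pte)
      {
              /* e.g., validate properties of the new mapping here */
              *ptep = pte;
      }

      static inline void set_pte(pte_t *ptep, pte_t pte)
      {
              __set_pte(ptep, pte);
      }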
    • arch/tile: fix some comments and whitespace · 5fb682b0
      Authored by Chris Metcalf
      This is a grab bag of changes with no effect on the generated code:
      whitespace fixes and comment typo corrections, plus the removal of a
      couple of stale comments.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
  7. 02 March, 2011 (7 commits)
    • arch/tile: fix two bugs in the backtracer code · 3cebbafd
      Authored by Chris Metcalf
      The first is that we were using an incorrect hand-rolled variant
      of __kernel_text_address() which didn't handle module PCs.  We now
      just use the standard API.
      
      The second was that we weren't accounting for the three-level
      page table when we were trying to pre-verify the addresses on
      the 64-bit TILE-Gx processor; we now do that correctly.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
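      A sketch of the first fix: replace a hand-rolled text-range check,
      which knows nothing about module space, with the standard helper.
      The "before" function is illustrative, not the original code:

      #include <linux/kernel.h>      /* __kernel_text_address() */
      #include <asm/sections.h>      /* _stext, _etext */

      /* before: only covers the core kernel image */
      static bool pc_in_kernel_text(unsigned long pc)
      {
              return pc >= (unsigned long)_stext &&
                     pc <  (unsigned long)_etext;
      }

      /* after: the standard helper also accepts PCs in loaded modules */
      static bool pc_is_backtraceable(unsigned long pc)
      {
              return __kernel_text_address(pc);
      }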
    • arch/tile: use a cleaner technique to enable interrupt for cpu_idle() · 0b989cac
      Authored by Chris Metcalf
      Previously we used iret to atomically return to kernel PL with
      interrupts enabled.  However, it turns out that we are architecturally
      guaranteed that we can just set and clear the "interrupt critical
      section" and only interrupt on the following instruction, so we
      now do that instead, since it's cleaner.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
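      A very rough sketch of the idea, with an illustrative accessor
      standing in for the real tile mtspr intrinsic and SPR name:

      #include <linux/irqflags.h>

      static void set_interrupt_critical_section(int on);  /* stand-in */

      /* While the "interrupt critical section" bit is set, no interrupt
       * can be taken; a pending interrupt fires only on the instruction
       * following the one that clears the bit.  That lets the idle path
       * enable interrupts without needing an iret to do it atomically. */
      static void idle_enable_interrupts(void)
      {
              set_interrupt_critical_section(1);
              local_irq_enable();     /* unmask; ICS still defers delivery */
              set_interrupt_critical_section(0);
              /* a pending interrupt, if any, is taken right after here */
      }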
    • arch/tile: warn and retry if an IPI is not accepted by the target cpu · bbeee4b2
      Authored by Chris Metcalf
      Previously we assumed this was impossible, but in fact it can happen.
      Handle it gracefully by retrying after issuing a warning.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
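      A hedged sketch of the retry shape; hv_send_ipi() is an illustrative
      stand-in for the real hypervisor call and its return code:

      #include <linux/kernel.h>      /* WARN_ONCE() */
      #include <linux/sched.h>       /* cpu_relax() */

      extern int hv_send_ipi(int cpu);  /* illustrative, 0 on success */

      /* If the target tile does not accept the IPI, warn (once) and keep
       * retrying rather than silently dropping it. */
      static void send_ipi_with_retry(int cpu)
      {
              while (hv_send_ipi(cpu) != 0) {
                      WARN_ONCE(1, "IPI to cpu %d not accepted, retrying\n", cpu);
                      cpu_relax();
              }
      }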
    • arch/tile: stop disabling INTCTRL_1 interrupts during hypervisor downcalls · b2ce2bda
      Authored by Chris Metcalf
      The problem was that this could lead to IPIs being disabled during
      the softirq processing after a hypervisor downcall (e.g. for I/O),
      since both IPIs and device interrupts use the INTCTRL_1 downcall
      mechanism.
      When this happened at the wrong time, it could lead to deadlock.
      
      Luckily, we were already maintaining the per-interrupt state we need,
      and using it in the proper way in the hypervisor, so all we had to do
      was to change Linux to stop blocking downcall interrupts for the entire
      length of the downcall.  (Now they're blocked while we're executing the
      downcall routine itself, but not while we're executing any subsequent
      softirq routines.)  The hypervisor is doing a very small amount of
      work it no longer needs to do (masking INTCTRL_1 on entry to the client
      interrupt routine), but doing so means that older versions of Tile Linux
      will continue to work with a current hypervisor, so that seems reasonable.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
    • arch/tile: fix __ndelay etc to work better · 13371731
      Authored by Chris Metcalf
      The current implementations of __ndelay and __udelay call a hypervisor
      service to delay, but the hypervisor service isn't actually implemented
      very well, and the consensus is that Linux should handle figuring this
      out natively and not use a hypervisor service.
      
      By converting nanoseconds to cycles, and then spinning until the
      cycle counter reaches the desired cycle, we get several benefits:
      first, we are sensitive to the actual clock speed; second, we use
      less power by issuing a slow SPR read once every six cycles while
      we delay; and third, we properly handle the case of an interrupt by
      exiting at the target time rather than after some number of cycles.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
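      A minimal sketch of the cycle-counting approach, assuming a
      free-running counter readable via get_cycles(); ns2cycles() is an
      illustrative helper, not an existing kernel function:

      #include <linux/timex.h>       /* get_cycles(), cycles_t */

      extern cycles_t ns2cycles(unsigned long nsecs);  /* illustrative */

      void __ndelay(unsigned long nsecs)
      {
              cycles_t target = get_cycles() + ns2cycles(nsecs);

              /* Spin until the target cycle is reached.  The cycle-counter
               * SPR read is itself slow, which keeps the loop low-power,
               * and comparing against an absolute target means time spent
               * in an interrupt handler is accounted for automatically. */
              while ((long)(get_cycles() - target) < 0)
                      ;
      }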
    • arch/tile: bug fix: exec'ed task thought it was still single-stepping · 04f7a3f1
      Authored by Chris Metcalf
      To handle single-step, the tile code mmaps a page of memory into the
      process address space for each thread and uses it to construct a
      version of the instruction that we want to single-step.  If the
      process exec's, though, we lose that mapping, and the kernel needs to
      be aware that it will need to recreate it if the exec'ed process then
      tries to single-step as well.
      
      Also correct some int32_t to s32 for better kernel style.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
    • arch/tile: catch up with section naming convention in 2.6.35 · 2cb82400
      Authored by Chris Metcalf
      The convention changed to, e.g., ".data..page_aligned".  This commit
      fixes the places in the tile architecture that were still using the
      old convention.  One tile-specific section (.init.page) was dropped
      in favor of just using an "aligned" attribute.
      
      Sam Ravnborg <sam@ravnborg.org> pointed out __PAGE_ALIGNED_BSS, etc.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
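      An illustrative before/after for the data-placement side of this; the
      buffer names are made up for the example:

      #include <linux/linkage.h>     /* __page_aligned_data */
      #include <asm/page.h>          /* PAGE_SIZE */

      /* old style: hand-written single-dot section name */
      static char boot_buf_old[PAGE_SIZE]
              __attribute__((__section__(".data.page_aligned")));

      /* new style: helper macro that emits ".data..page_aligned" */
      static char boot_buf_new[PAGE_SIZE] __page_aligned_data;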
  8. 24 February, 2011 (2 commits)
  9. 25 January, 2011 (1 commit)
    • percpu: align percpu readmostly subsection to cacheline · 19df0c2f
      Authored by Tejun Heo
      Currently the percpu readmostly subsection may share cachelines with
      other percpu subsections, which may result in unnecessary cacheline
      bouncing and performance degradation.
      
      This patch adds a @cacheline parameter to the PERCPU() and
      PERCPU_VADDR() linker macros, and makes each arch's linker script
      specify its cacheline size and use it to align the percpu
      subsections.
      
      This is based on Shaohua's x86-only patch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Shaohua Li <shaohua.li@intel.com>
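      A sketch of the resulting usage in an architecture's vmlinux.lds.S
      linker script, assuming the two-argument PERCPU() form described
      above (cacheline size first, then the overall alignment); the exact
      constants vary per architecture:

      /* in arch/<arch>/kernel/vmlinux.lds.S */
      #include <asm-generic/vmlinux.lds.h>
      #include <asm/cache.h>         /* L1_CACHE_BYTES */
      #include <asm/page.h>          /* PAGE_SIZE */

      PERCPU(L1_CACHE_BYTES, PAGE_SIZE)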
  10. 18 December, 2010 (2 commits)
    • arch/tile: handle rt_sigreturn() more cleanly · 81711cee
      Authored by Chris Metcalf
      The current tile rt_sigreturn() syscall pattern uses the common idiom
      of loading up pt_regs with all the saved registers from the time of
      the signal, then anticipating the fact that we will clobber the ABI
      "return value" register (r0) as we return from the syscall by setting
      the rt_sigreturn return value to whatever random value was in the pt_regs
      for r0.
      
      However, this breaks in our 64-bit kernel when running "compat" tasks,
      since we always sign-extend the "return value" register to properly
      handle returned pointers that are in the upper 2GB of the 32-bit compat
      address space.  Doing this to the sigreturn path then causes occasional
      random corruption of the 64-bit r0 register.
      
      Instead, we stop doing the crazy "load the return-value register"
      hack in sigreturn.  We already have some sigreturn-specific assembly
      code that we use to pass the pt_regs pointer to C code.  We extend that
      code to also set the link register to point to a spot a few instructions
      after the usual syscall return address so we don't clobber the saved r0.
      Now it no longer matters what the rt_sigreturn syscall returns, and the
      pt_regs structure can be cleanly and completely reloaded.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
    • arch/tile: handle CLONE_SETTLS in copy_thread(), not user space · bc4cf2bb
      Authored by Chris Metcalf
      Previously we relied on libc to set up the "tp" register in the
      new task after clone() started it.  However, this is not
      quite right, since in principle a signal might be delivered to
      the new task before it had its TLS set up.  (Of course, this race
      window still exists for resetting the libc getpid() cached value
      in the new task, in principle.  But in any case, we are now doing
      this exactly the way all other architectures do it.)
      
      This change is important for 2.6.37 since the tile glibc we will
      be submitting upstream will not set TLS in user space any more,
      so it will only work on a kernel that has this fix.  It should
      also be taken for 2.6.36.x in the stable tree if possible.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
      Cc: stable <stable@kernel.org>
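      A hedged sketch of the copy_thread() piece; the "tp" field and the
      register used to pass the TLS value follow the commit description but
      are illustrative rather than the exact tile ABI details:

      #include <linux/sched.h>       /* CLONE_SETTLS */
      #include <linux/ptrace.h>

      /* Called from copy_thread(): install the child's TLS pointer in the
       * kernel, so it is already valid before the child runs any code
       * (including a signal handler). */
      static void set_child_tls(struct pt_regs *childregs,
                                struct pt_regs *regs,
                                unsigned long clone_flags)
      {
              if (clone_flags & CLONE_SETTLS)
                      childregs->tp = regs->regs[4];  /* TLS arg to clone() */
      }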
  11. 25 November, 2010 (2 commits)
    • arch/tile: make glibc's sysconf(_SC_NPROCESSORS_CONF) work correctly · 4d658d13
      Authored by Chris Metcalf
      glibc assumes that it can count /sys/devices/system/cpu/cpu* to get
      the number of configured cpus.  For this to be valid on tile, we need
      to generate a "cpu" entry for all cpus, including the ones that are
      not currently allocated for Linux's use.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
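      A hedged sketch of the usual way those sysfs entries get created; the
      cpu_devices array name and the choice of cpu mask are illustrative:

      #include <linux/cpu.h>
      #include <linux/init.h>
      #include <linux/percpu.h>

      static DEFINE_PER_CPU(struct cpu, cpu_devices);

      static int __init topology_init(void)
      {
              int i;

              /* Register a /sys/devices/system/cpu/cpuN node for every
               * configured cpu, not just the ones running Linux. */
              for_each_present_cpu(i)
                      register_cpu(&per_cpu(cpu_devices, i), i);
              return 0;
      }
      subsys_initcall(topology_init);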
    • pci root complex: support for tile architecture · f02cbbe6
      Authored by Chris Metcalf
      This change enables PCI root complex support for TILEPro.  Unlike
      TILE-Gx, TILEPro has no support for memory-mapped I/O, so the PCI
      support consists of hypervisor upcalls for PIO, DMA, etc.  However,
      the performance is fine for the devices we have tested with so far
      (1Gb Ethernet, SATA, etc.).
      
      The <asm/io.h> header was tweaked to be a little bit more aggressive
      about disabling attempts to map/unmap IO port space.  The hacky
      <asm/pci-bridge.h> header was rolled into the <asm/pci.h> header
      and the result was simplified.  Both of the latter two headers were
      preliminary versions not meant for release before now - oh well.
      
      There is one quirk for our TILEmpower platform, which accidentally
      negotiates up to 5 GT/s and needs to be kicked down to 2.5 GT/s.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
  12. 18 November, 2010 (1 commit)
  13. 02 November, 2010 (6 commits)
    • arch/tile: mark "hardwall" device as non-seekable · d02db4f8
      Authored by Chris Metcalf
      Arnd's recent patch series tagged this device with noop_llseek,
      conservatively.  In fact, it should be no_llseek, which we arrange
      for by opening the device with nonseekable_open().
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
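      A sketch of the pattern being described, with illustrative names
      standing in for the real hardwall driver hooks:

      #include <linux/fs.h>
      #include <linux/module.h>

      static int hardwall_open(struct inode *inode, struct file *file)
      {
              /* clears FMODE_LSEEK/FMODE_PREAD/FMODE_PWRITE on the file */
              return nonseekable_open(inode, file);
      }

      static const struct file_operations hardwall_fops = {
              .owner  = THIS_MODULE,
              .open   = hardwall_open,
              .llseek = no_llseek,   /* seeks now fail instead of no-op */
      };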
    • asm-generic/stat.h: support 64-bit file time_t for stat() · 2c7387ef
      Authored by Chris Metcalf
      The existing asm-generic/stat.h specifies st_mtime, etc., as a 32-bit
      value, and works well for 32-bit architectures (currently microblaze,
      score, and 32-bit tile).  However, for 64-bit architectures it isn't
      sufficient to return 32 bits of time_t; this isn't good insurance
      against the 2038 rollover.  (It also makes glibc support less
      convenient, since we can't use glibc's handy STAT_IS_KERNEL_STAT mode.)
      
      This change extends the two "timespec" fields for each of the three atime,
      mtime, and ctime fields from "int" to "long".  As a result, on 32-bit
      platforms nothing changes, and 64-bit platforms will now work as expected.
      
      The only wrinkle is 32-bit userspace under 64-bit kernels taking advantage
      of COMPAT mode.  For these, we leave the "struct stat64" definitions with
      the "int" versions of the time_t and nsec fields, so that architectures
      can implement compat_sys_stat64() and friends with sys_stat64(), etc.,
      and get the expected 32-bit structure layout.  This requires a
      field-by-field copy in the kernel, implemented by the code guarded
      under __ARCH_WANT_STAT64.
      
      This does mean that the shape of the "struct stat" and "struct stat64"
      structures is different on a 64-bit kernel, but only one of the two
      structures should ever be used by any given process: "struct stat"
      is meant for 64-bit userspace only, and "struct stat64" for 32-bit
      userspace only.  (On a 32-bit kernel the two structures continue to have
      the same shape, since "long" is 32 bits.)
      
      The alternative is keeping the two structures the same shape on 64-bit
      kernels, which means a 64-bit time_t in "struct stat64" for 32-bit
      processes.  This is a little unnatural since 32-bit userspace can't
      do anything with 64 bits of time_t information, since time_t is just
      "long", not "int64_t"; and in any case 32-bit userspace might expect
      to be running under a 32-bit kernel, which can't provide the high 32
      bits anyway.  In the case of a 32-bit kernel we'd then be extending the
      kernel's 32-bit time_t to 64 bits, then truncating it back to 32 bits
      again in userspace, for no particular reason.  And, as mentioned above,
      if we have 64-bit time_t for 32-bit processes we can't easily use glibc's
      STAT_IS_KERNEL_STAT, since glibc's stat structure requires an embedded
      "struct timespec", which is a pair of "long" (32-bit) values in a 32-bit
      userspace.  "Inventive" solutions are possible, but are pretty hacky.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
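      An illustrative fragment of the widened fields (not the complete
      asm-generic structure): "long" stays 32 bits on 32-bit kernels, so
      their layout is unchanged, while 64-bit kernels now get 64-bit time
      fields:

      struct stat {
              /* ... other fields unchanged ... */
              long            st_atime;       /* was: int */
              unsigned long   st_atime_nsec;
              long            st_mtime;       /* was: int */
              unsigned long   st_mtime_nsec;
              long            st_ctime;       /* was: int */
              unsigned long   st_ctime_nsec;
              /* ... */
      };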
    • arch/tile: don't allow user code to set the PL via ptrace or signal return · 1deb9c5d
      Authored by Chris Metcalf
      The kernel was allowing any component of the pt_regs to be updated either
      by signal handlers writing to the stack, or by processes writing via
      PTRACE_POKEUSR or PTRACE_SETREGS, which meant they could set their PL
      up from 0 to 1 and get access to kernel code and data (or, in practice,
      cause a kernel panic).  We now always reset the ex1 field, allowing the
      user to set their ICS bit only.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
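      A hedged sketch of the sanitizing step, using tile-style macro names
      as stand-ins for whatever packs the privilege level (PL) and
      interrupt-critical-section (ICS) bits into ex1:

      /* When restoring registers supplied by userspace (signal return,
       * PTRACE_SETREGS, ...), rebuild ex1 from scratch: force the PL back
       * to user level and keep only the caller's ICS bit. */
      static void sanitize_user_ex1(struct pt_regs *regs)
      {
              regs->ex1 = PL_ICS_EX1(USER_PL, EX1_ICS(regs->ex1));
      }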
    • arch/tile: correct double syscall restart for nested signals · 34a89d26
      Authored by Chris Metcalf
      This change is modelled on similar fixes for other architectures.
      The pt_regs "faultnum" member is set to the trap (fault) number that
      caused us to enter the kernel, and is INT_SWINT_1 for the syscall software
      interrupt.  We already supported a pseudo value, INT_SWINT_1_SIGRETURN,
      that we used for the rt_sigreturn syscall; it avoided the case where
      one signal was handled, then we "tail-called" to another handler.
      
      This change avoids the similar case where we start to call one handler,
      then are preempted into another handler when we start trying to run
      the first handler.  We clear ->faultnum after calling handle_signal(),
      and to be paranoid also in the case where there was no signal to deliver.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
    • arch/tile: bomb raw_local_irq_ to arch_local_irq_ · 5d966115
      Authored by Chris Metcalf
      This completes the tile migration to the new naming scheme for
      the architecture-specific irq management code.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
    • arch/tile: complete migration to new kmap_atomic scheme · 38a6f426
      Authored by Chris Metcalf
      This change makes KM_TYPE_NR independent of the actual deprecated
      list of km_type values, which are no longer used in tile code anywhere.
      For now we leave it set to 8, allowing that many nested mappings,
      and thus reserving 32MB of address space.
      
      A few remaining places using KM_* values were cleaned up as well.
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
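      A small usage sketch of the scheme being migrated to: the atomic
      mapping slot is tracked by a per-cpu stack inside kmap_atomic()
      itself, so callers simply drop the deprecated KM_* argument:

      #include <linux/highmem.h>
      #include <linux/string.h>

      static void copy_from_page(void *dst, struct page *page, size_t len)
      {
              void *src = kmap_atomic(page);   /* no KM_USER0 argument */

              memcpy(dst, src, len);
              kunmap_atomic(src);
      }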
  14. 28 October, 2010 (3 commits)
  15. 16 October, 2010 (6 commits)
  16. 15 October, 2010 (1 commit)
    • llseek: automatically add .llseek fop · 6038f373
      Authored by Arnd Bergmann
      All file_operations should get a .llseek operation so we can make
      nonseekable_open the default for future file operations without a
      .llseek pointer.
      
      The three cases that we can automatically detect are no_llseek, seq_lseek
      and default_llseek. For cases where we can automatically prove that
      the file offset is always ignored, we use noop_llseek, which maintains
      the current behavior of not returning an error from a seek.
      
      New drivers should normally not use noop_llseek but instead use no_llseek
      and call nonseekable_open at open time.  Existing drivers can be converted
      to do the same when the maintainer knows for certain that no user code
      relies on calling seek on the device file.
      
      The generated code is often incorrectly indented and right now contains
      comments that clarify for each added line why a specific variant was
      chosen. In the version that gets submitted upstream, the comments will
      be gone and I will manually fix the indentation, because there does not
      seem to be a way to do that using coccinelle.
      
      Some amount of new code is currently sitting in linux-next that should get
      the same modifications, which I will do at the end of the merge window.
      
      Many thanks to Julia Lawall for helping me learn to write a semantic
      patch that does all this.
      
      ===== begin semantic patch =====
      // This adds an llseek= method to all file operations,
      // as a preparation for making no_llseek the default.
      //
      // The rules are
      // - use no_llseek explicitly if we do nonseekable_open
      // - use seq_lseek for sequential files
      // - use default_llseek if we know we access f_pos
      // - use noop_llseek if we know we don't access f_pos,
      //   but we still want to allow users to call lseek
      //
      @ open1 exists @
      identifier nested_open;
      @@
      nested_open(...)
      {
      <+...
      nonseekable_open(...)
      ...+>
      }
      
      @ open exists@
      identifier open_f;
      identifier i, f;
      identifier open1.nested_open;
      @@
      int open_f(struct inode *i, struct file *f)
      {
      <+...
      (
      nonseekable_open(...)
      |
      nested_open(...)
      )
      ...+>
      }
      
      @ read disable optional_qualifier exists @
      identifier read_f;
      identifier f, p, s, off;
      type ssize_t, size_t, loff_t;
      expression E;
      identifier func;
      @@
      ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
      {
      <+...
      (
         *off = E
      |
         *off += E
      |
         func(..., off, ...)
      |
         E = *off
      )
      ...+>
      }
      
      @ read_no_fpos disable optional_qualifier exists @
      identifier read_f;
      identifier f, p, s, off;
      type ssize_t, size_t, loff_t;
      @@
      ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
      {
      ... when != off
      }
      
      @ write @
      identifier write_f;
      identifier f, p, s, off;
      type ssize_t, size_t, loff_t;
      expression E;
      identifier func;
      @@
      ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
      {
      <+...
      (
        *off = E
      |
        *off += E
      |
        func(..., off, ...)
      |
        E = *off
      )
      ...+>
      }
      
      @ write_no_fpos @
      identifier write_f;
      identifier f, p, s, off;
      type ssize_t, size_t, loff_t;
      @@
      ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
      {
      ... when != off
      }
      
      @ fops0 @
      identifier fops;
      @@
      struct file_operations fops = {
       ...
      };
      
      @ has_llseek depends on fops0 @
      identifier fops0.fops;
      identifier llseek_f;
      @@
      struct file_operations fops = {
      ...
       .llseek = llseek_f,
      ...
      };
      
      @ has_read depends on fops0 @
      identifier fops0.fops;
      identifier read_f;
      @@
      struct file_operations fops = {
      ...
       .read = read_f,
      ...
      };
      
      @ has_write depends on fops0 @
      identifier fops0.fops;
      identifier write_f;
      @@
      struct file_operations fops = {
      ...
       .write = write_f,
      ...
      };
      
      @ has_open depends on fops0 @
      identifier fops0.fops;
      identifier open_f;
      @@
      struct file_operations fops = {
      ...
       .open = open_f,
      ...
      };
      
      // use no_llseek if we call nonseekable_open
      ////////////////////////////////////////////
      @ nonseekable1 depends on !has_llseek && has_open @
      identifier fops0.fops;
      identifier nso ~= "nonseekable_open";
      @@
      struct file_operations fops = {
      ...  .open = nso, ...
      +.llseek = no_llseek, /* nonseekable */
      };
      
      @ nonseekable2 depends on !has_llseek @
      identifier fops0.fops;
      identifier open.open_f;
      @@
      struct file_operations fops = {
      ...  .open = open_f, ...
      +.llseek = no_llseek, /* open uses nonseekable */
      };
      
      // use seq_lseek for sequential files
      /////////////////////////////////////
      @ seq depends on !has_llseek @
      identifier fops0.fops;
      identifier sr ~= "seq_read";
      @@
      struct file_operations fops = {
      ...  .read = sr, ...
      +.llseek = seq_lseek, /* we have seq_read */
      };
      
      // use default_llseek if there is a readdir
      ///////////////////////////////////////////
      @ fops1 depends on !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier readdir_e;
      @@
      // any other fop is used that changes pos
      struct file_operations fops = {
      ... .readdir = readdir_e, ...
      +.llseek = default_llseek, /* readdir is present */
      };
      
      // use default_llseek if at least one of read/write touches f_pos
      /////////////////////////////////////////////////////////////////
      @ fops2 depends on !fops1 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier read.read_f;
      @@
      // read fops use offset
      struct file_operations fops = {
      ... .read = read_f, ...
      +.llseek = default_llseek, /* read accesses f_pos */
      };
      
      @ fops3 depends on !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier write.write_f;
      @@
      // write fops use offset
      struct file_operations fops = {
      ... .write = write_f, ...
      +	.llseek = default_llseek, /* write accesses f_pos */
      };
      
      // Use noop_llseek if neither read nor write accesses f_pos
      ///////////////////////////////////////////////////////////
      
      @ fops4 depends on !fops1 && !fops2 && !fops3 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier read_no_fpos.read_f;
      identifier write_no_fpos.write_f;
      @@
      // write fops use offset
      struct file_operations fops = {
      ...
       .write = write_f,
       .read = read_f,
      ...
      +.llseek = noop_llseek, /* read and write both use no f_pos */
      };
      
      @ depends on has_write && !has_read && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier write_no_fpos.write_f;
      @@
      struct file_operations fops = {
      ... .write = write_f, ...
      +.llseek = noop_llseek, /* write uses no f_pos */
      };
      
      @ depends on has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      identifier read_no_fpos.read_f;
      @@
      struct file_operations fops = {
      ... .read = read_f, ...
      +.llseek = noop_llseek, /* read uses no f_pos */
      };
      
      @ depends on !has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
      identifier fops0.fops;
      @@
      struct file_operations fops = {
      ...
      +.llseek = noop_llseek, /* no read or write fn */
      };
      ===== End semantic patch =====
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Julia Lawall <julia@diku.dk>
      Cc: Christoph Hellwig <hch@infradead.org>
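      A minimal illustration of the kind of change the semantic patch emits
      for a driver whose read handler never touches f_pos; all names here
      are made up for the example:

      #include <linux/fs.h>
      #include <linux/module.h>

      static ssize_t demo_read(struct file *f, char __user *buf,
                               size_t len, loff_t *off)
      {
              return 0;               /* offset deliberately unused */
      }

      static const struct file_operations demo_fops = {
              .owner  = THIS_MODULE,
              .read   = demo_read,
              .llseek = noop_llseek,  /* added: read uses no f_pos */
      };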